Sample records for cost optimized test

  1. Optimal periodic proof test based on cost-effective and reliability criteria

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1976-01-01

    An exploratory study for the optimization of periodic proof tests for fatigue-critical structures is presented. The optimal proof load level and the optimal number of periodic proof tests are determined by minimizing the total expected (statistical average) cost, while the constraint on the allowable level of structural reliability is satisfied. The total expected cost consists of the expected cost of proof tests, the expected cost of structures destroyed by proof tests, and the expected cost of structural failure in service. It is demonstrated by numerical examples that significant cost saving and reliability improvement for fatigue-critical structures can be achieved by the application of the optimal periodic proof test. The present study is relevant to the establishment of optimal maintenance procedures for fatigue-critical structures.
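    The trade-off this abstract describes can be made concrete with a tiny sketch: enumerate candidate proof-load levels and numbers of periodic tests, and keep the cheapest combination whose predicted reliability clears a floor. The cost figures and probability forms below are invented placeholders for illustration, not Yang's actual model.

    ```python
    # Illustrative sketch of the expected-cost trade-off above; the probability
    # models and relative costs are invented placeholders, not Yang's model.
    import math

    C_TEST, C_STRUCT, C_FAILURE = 1.0, 50.0, 1000.0  # relative costs (assumed)
    R_MIN = 0.995                                    # allowable reliability floor (assumed)

    def p_destroyed(load):
        """Chance one proof test at this load destroys a good structure (assumed form)."""
        return 0.002 * load ** 2

    def p_service_failure(load, n_tests):
        """In-service failure probability; higher loads and more tests screen out weak parts (assumed form)."""
        return 0.02 * math.exp(-0.8 * load * n_tests)

    best = None
    for n in range(1, 11):                           # number of periodic proof tests
        for load in [l / 10 for l in range(5, 21)]:  # proof load factor 0.5 .. 2.0
            p_fail = p_service_failure(load, n)
            if 1.0 - p_fail < R_MIN:
                continue                             # reliability constraint violated
            expected_cost = (n * C_TEST                          # cost of the tests themselves
                             + n * p_destroyed(load) * C_STRUCT  # structures lost to proof tests
                             + p_fail * C_FAILURE)               # structural failure in service
            if best is None or expected_cost < best[0]:
                best = (expected_cost, n, load, 1.0 - p_fail)

    print("min expected cost %.2f with n=%d tests at load factor %.1f (R=%.4f)" % best)
    ```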

  2. GMOtrack: generator of cost-effective GMO testing strategies.

    PubMed

    Novak, Petra Kralj; Gruden, Kristina; Morisset, Dany; Lavrac, Nada; Stebih, Dejan; Rotter, Ana; Zel, Jana

    2009-01-01

    Commercialization of numerous genetically modified organisms (GMOs) has already been approved worldwide, and several additional GMOs are in the approval process. Many countries have adopted legislation to deal with GMO-related issues such as food safety, environmental concerns, and consumers' right of choice, making GMO traceability a necessity. The growing extent of GMO testing makes it important to study optimal GMO detection and identification strategies. This paper formally defines the problem of routine laboratory-level GMO tracking as a cost optimization problem, thus proposing a shift from "the same strategy for all samples" to "sample-centered GMO testing strategies." An algorithm (GMOtrack) for finding optimal two-phase (screening-identification) testing strategies is proposed. The advantages of cost optimization with increasing GMO presence on the market are demonstrated, showing that optimization approaches to analytic GMO traceability can result in major cost reductions. The optimal testing strategies are laboratory-dependent, as the costs depend on prior probabilities of local GMO presence, which are exemplified on food and feed samples. The proposed GMOtrack approach, publicly available under the terms of the General Public License, can be extended to other domains where complex testing is involved, such as safety and quality assurance in the food supply chain.
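    As a minimal illustration of casting two-phase (screening-identification) testing as a cost problem, the sketch below picks the subset of screening assays that minimizes the expected per-sample cost. The priors, assay coverage and costs are invented stand-ins; GMOtrack's real data and objective are richer.

    ```python
    # Toy version of the two-phase (screening -> identification) cost problem
    # described above; GMO priors, test costs and coverage are invented.
    from itertools import combinations

    PRIOR = {"gmo1": 0.30, "gmo2": 0.10, "gmo3": 0.05}           # assumed local priors
    SCREEN = {"35S": {"gmo1", "gmo2"}, "NOS": {"gmo2", "gmo3"}}  # screen -> GMOs it can flag
    SCREEN_COST, ID_COST = 1.0, 4.0                              # assumed per-test costs

    def expected_cost(screens):
        cost = len(screens) * SCREEN_COST        # phase 1: run the chosen screening assays
        covered = set().union(*(SCREEN[s] for s in screens)) if screens else set()
        for gmo, p in PRIOR.items():
            if gmo in covered:
                cost += p * ID_COST              # identify only when screening flags it
            else:
                cost += ID_COST                  # uncovered GMOs must be identified every time
        return cost

    best = min((expected_cost(set(c)), set(c))
               for r in range(len(SCREEN) + 1)
               for c in combinations(SCREEN, r))
    print("optimal screening set:", best[1], "expected cost per sample: %.2f" % best[0])
    ```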

  3. Advanced in-duct sorbent injection for SO2 control. Topical report No. 2, Subtask 2.2: Design optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenhoover, W.A.; Stouffer, M.R.; Withum, J.A.

    1994-12-01

    The objective of this research project is to develop second-generation duct injection technology as a cost-effective SO2 control option for the 1990 Clean Air Act Amendments. Research is focused on the Advanced Coolside process, which has shown the potential for achieving the performance targets of 90% SO2 removal and 60% sorbent utilization. In Subtask 2.2, Design Optimization, process improvement was sought by optimizing sorbent recycle and by optimizing process equipment for reduced cost. The pilot plant recycle testing showed that 90% SO2 removal could be achieved at sorbent utilizations up to 75%. This testing also showed that the Advanced Coolside process has the potential to achieve very high removal efficiency (90 to greater than 99%). Two alternative contactor designs were developed, tested and optimized through pilot plant testing; the improved designs will reduce process costs significantly, while maintaining operability and performance essential to the process. Also, sorbent recycle handling equipment was optimized to reduce cost.

  4. Launch Vehicle Propulsion Parameter Design Multiple Selection Criteria

    NASA Technical Reports Server (NTRS)

    Shelton, Joey Dewayne

    2004-01-01

    The optimization tool described herein emphasizes the use of computer tools to model a system, focusing on a concept development approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system and, more particularly, on the development of the optimized system using new techniques. The methodology uses new and innovative tools to run Monte Carlo simulations, genetic algorithm solvers, and statistical models in order to optimize a design concept. The concept launch vehicle and propulsion system were modeled and optimized to determine the best design for weight and cost by varying design and technology parameters. Uncertainty levels were applied using Monte Carlo simulations, and the model output was compared to the National Aeronautics and Space Administration Space Shuttle Main Engine. Several key conclusions from the model results are summarized here. First, the Gross Liftoff Weight and Dry Weight were 67% higher for the case minimizing Design, Development, Test and Evaluation cost than for the case minimizing Gross Liftoff Weight. In turn, the Design, Development, Test and Evaluation cost was 53% higher for the optimized Gross Liftoff Weight case than for the case minimizing Design, Development, Test and Evaluation cost. Therefore, a 53% increase in Design, Development, Test and Evaluation cost buys a 67% reduction in Gross Liftoff Weight. Secondly, the tool outputs define the sensitivity of propulsion parameters, technology and cost factors, and show how these parameters differ when cost and weight are optimized separately. A key finding was that, for a Space Shuttle Main Engine thrust level, an oxidizer/fuel ratio of 6.6 resulted in the lowest Gross Liftoff Weight, rather than the ratio of 5.2 that maximizes specific impulse, demonstrating the relationships among specific impulse, engine weight, tank volume and tank weight. Lastly, the optimum chamber pressure was 2713 pounds per square inch for Gross Liftoff Weight minimization, compared to 3162 for the Design, Development, Test and Evaluation cost optimization case; both bracket the roughly 3000 pounds per square inch of the Space Shuttle Main Engine.

  5. Optimal cost design of water distribution networks using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon

    2016-12-01

    Water distribution network decomposition, an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. The comparison shows that the final design in this study is better than those obtained with other previously proposed optimization algorithms.

  6. Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster

    NASA Technical Reports Server (NTRS)

    Story, George

    2015-01-01

    Performance, reliability and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and solid rockets. Slowly the technology readiness level of hybrids has been increasing due to various large-scale testing and flight tests of hybrid rockets. One remaining issue is the cost of hybrids versus the existing launch propulsion systems. This paper will review the known state-of-the-art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated into the code based on the weights of the components. The design will be optimized to meet the performance requirements at the lowest cost.

  7. Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster

    NASA Technical Reports Server (NTRS)

    Story, George

    2014-01-01

    Performance, reliability and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and, later, on solid rockets. Slowly the technology readiness level of hybrids has been increasing due to various large-scale testing and flight tests of hybrid rockets. A remaining issue is the cost of hybrids versus the existing launch propulsion systems. This paper will review the known state-of-the-art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated into the code based on the weights of the components. The design will be optimized to meet the performance requirements at the lowest cost.

  8. Optimal use of colonoscopy and fecal immunochemical test for population-based colorectal cancer screening: a cost-effectiveness analysis using Japanese data.

    PubMed

    Sekiguchi, Masau; Igarashi, Ataru; Matsuda, Takahisa; Matsumoto, Minori; Sakamoto, Taku; Nakajima, Takeshi; Kakugawa, Yasuo; Yamamoto, Seiichiro; Saito, Hiroshi; Saito, Yutaka

    2016-02-01

    There have been few cost-effectiveness analyses of population-based colorectal cancer screening in Japan, and there is no consensus on the optimal use of total colonoscopy and the fecal immunochemical test for colorectal cancer screening with regard to cost-effectiveness and total colonoscopy workload. The present study aimed to examine the cost-effectiveness of colorectal cancer screening using Japanese data to identify the optimal use of total colonoscopy and the fecal immunochemical test. We developed a Markov model to assess the cost-effectiveness of colorectal cancer screening offered to an average-risk population aged 40 years or over. The cost, quality-adjusted life-years and number of total colonoscopy procedures required were evaluated for three screening strategies: (i) a fecal immunochemical test-based strategy; (ii) a total colonoscopy-based strategy; (iii) a strategy of adding population-wide total colonoscopy at 50 years to a fecal immunochemical test-based strategy. All three strategies dominated no screening. Among the three, Strategy 1 was dominated by Strategy 3, and the incremental costs per quality-adjusted life-year gained for Strategy 2 against Strategies 1 and 3 were JPY 293 616 and JPY 781 342, respectively. Within the Japanese threshold (JPY 5-6 million per QALY gained), Strategy 2 was the most cost-effective, followed by Strategy 3; however, Strategy 2 required more than double the number of total colonoscopy procedures required by the other strategies. The total colonoscopy-based strategy could be the most cost-effective for population-based colorectal cancer screening in Japan. However, it requires more total colonoscopy procedures than the other strategies. Depending on total colonoscopy capacity, the strategy of adding total colonoscopy for individuals at a specified age to fecal immunochemical test-based screening may be an optimal solution. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
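    The ranking logic quoted above (dominance first, then incremental cost per QALY) can be sketched in a few lines. The cost and QALY numbers here are invented stand-ins, not the study's Markov-model outputs; only the comparison structure mirrors the abstract.

    ```python
    # Sketch of the dominance / ICER comparison used above; costs and QALYs are
    # invented stand-ins, not the study's Markov-model outputs.
    def icer(a, b):
        """Incremental cost per QALY of strategy a versus strategy b."""
        return (a["cost"] - b["cost"]) / (a["qaly"] - b["qaly"])

    strategies = {
        "1: FIT":        {"cost": 130_000, "qaly": 20.00},  # assumed
        "2: TCS":        {"cost": 180_000, "qaly": 20.10},  # assumed
        "3: FIT+TCS@50": {"cost": 120_000, "qaly": 20.04},  # assumed
    }

    # A strategy is dominated if some alternative costs no more and yields more QALYs.
    for name, s in strategies.items():
        if any(o["cost"] <= s["cost"] and o["qaly"] > s["qaly"]
               for other, o in strategies.items() if other != name):
            print(name, "is dominated")

    print("ICER of 2 vs 3: JPY %.0f per QALY gained" %
          icer(strategies["2: TCS"], strategies["3: FIT+TCS@50"]))
    ```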

  9. Minimum cost to control bovine tuberculosis in cow-calf herds

    PubMed Central

    Smith, Rebecca L.; Tauer, Loren W.; Sanderson, Michael W.; Grohn, Yrjo T.

    2014-01-01

    Bovine tuberculosis (bTB) outbreaks in US cattle herds, while rare, are expensive to control. A stochastic model for bTB control in US cattle herds was adapted to more accurately represent cow-calf herd dynamics and was validated by comparison to 2 reported outbreaks. Control cost calculations were added to the model, which was then optimized to minimize costs for either the farm or the government. The results of the optimization showed that test-and-removal costs were minimized for both farms and the government if only 2 negative whole-herd tests were required to declare a herd free of infection, with a 2–3 month testing interval. However, the optimal testing interval for governments was increased to 2–4 months if the model was constrained to reject control programs leading to an infected herd being declared free of infection. Although farms always preferred test-and-removal to depopulation from a cost standpoint, government costs were lower with depopulation more than half the time in 2 of 8 regions. Global sensitivity analysis showed that indemnity costs were significantly associated with a rise in the cost to the government, and that low replacement rates were responsible for the long time to detection predicted by the model, but that improving the sensitivity of slaughterhouse screening and the probability that a slaughtered animal’s herd of origin can be identified would result in faster detection times. PMID:24703601

  10. Minimum cost to control bovine tuberculosis in cow-calf herds.

    PubMed

    Smith, Rebecca L; Tauer, Loren W; Sanderson, Michael W; Gröhn, Yrjo T

    2014-07-01

    Bovine tuberculosis (bTB) outbreaks in US cattle herds, while rare, are expensive to control. A stochastic model for bTB control in US cattle herds was adapted to more accurately represent cow-calf herd dynamics and was validated by comparison to 2 reported outbreaks. Control cost calculations were added to the model, which was then optimized to minimize costs for either the farm or the government. The results of the optimization showed that test-and-removal costs were minimized for both farms and the government if only 2 negative whole-herd tests were required to declare a herd free of infection, with a 2-3 month testing interval. However, the optimal testing interval for governments was increased to 2-4 months if the model was constrained to reject control programs leading to an infected herd being declared free of infection. Although farms always preferred test-and-removal to depopulation from a cost standpoint, government costs were lower with depopulation more than half the time in 2 of 8 regions. Global sensitivity analysis showed that indemnity costs were significantly associated with a rise in the cost to the government, and that low replacement rates were responsible for the long time to detection predicted by the model, but that improving the sensitivity of slaughterhouse screening and the probability that a slaughtered animal's herd of origin can be identified would result in faster detection times. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Voltage stability index based optimal placement of static VAR compensator and sizing using Cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee

    2017-07-01

    This paper presents a new metaheuristic, the Cuckoo Search Algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be a highly efficient algorithm for solving single-objective optimal power flow problems. Its performance is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family; it is capable of controlling bus voltage magnitudes by injecting reactive power into the system. In this paper, an SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost and to improve the voltage profile of the system. The CSA gives better results than the genetic algorithm (GA) both without and with the SVC.
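    A bare-bones cuckoo search, shown below, conveys the mechanics the abstract relies on: Lévy-flight steps around the current best plus abandonment of the worst nests. The toy quadratic cost curve, bounds and parameters are assumptions; the paper's IEEE 57-bus OPF with an SVC is far larger.

    ```python
    # Bare-bones cuckoo search minimizing a toy quadratic generation-cost function.
    # Parameters, bounds and the cost curve are illustrative, not the 57-bus case.
    import math, random

    def cost(p):  # sum of quadratic generator cost curves (assumed coefficients)
        return sum(0.01 * x ** 2 + 2.0 * x + 10.0 for x in p)

    def levy_step(beta=1.5):
        """Mantegna's algorithm for a Levy-distributed step length."""
        sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                 / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u, v = random.gauss(0, sigma), random.gauss(0, 1)
        return u / abs(v) ** (1 / beta)

    LO, HI, DIM, NESTS, PA = 10.0, 100.0, 3, 15, 0.25
    nests = [[random.uniform(LO, HI) for _ in range(DIM)] for _ in range(NESTS)]
    best = min(nests, key=cost)

    for _ in range(200):
        for i, nest in enumerate(nests):
            # Propose a new solution via a Levy flight biased toward the best nest.
            cand = [min(HI, max(LO, x + 0.01 * levy_step() * (x - b)))
                    for x, b in zip(nest, best)]
            if cost(cand) < cost(nest):
                nests[i] = cand
        # Abandon a fraction PA of the worst nests (replace them randomly).
        nests.sort(key=cost)
        for i in range(int(PA * NESTS)):
            nests[-1 - i] = [random.uniform(LO, HI) for _ in range(DIM)]
        best = min(nests + [best], key=cost)

    print("best dispatch:", [round(x, 1) for x in best], "cost: %.1f" % cost(best))
    ```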

  12. The extension of the thermal-vacuum test optimization program to multiple flights

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Byrd, J.

    1981-01-01

    The thermal vacuum test optimization model, originally developed to optimize a test program based on predicted flight performance with a single flight option in mind, is extended to consider reflight, as in space shuttle missions. The concept of 'utility', developed under the name of 'availability', is used to follow performance through the various options encountered when the reflight and retrievability capabilities of the space shuttle are available. Also, a 'lost value' model is modified to produce a measure of the probability of a mission achieving a desired utility using a minimal-cost test strategy. The resulting matrix of probabilities and their associated costs provides a means for project management to evaluate various test and reflight strategies.

  13. A probabilistic method for the estimation of residual risk in donated blood.

    PubMed

    Bish, Ebru K; Ragavan, Prasanna K; Bish, Douglas R; Slonim, Anthony D; Stramer, Susan L

    2014-10-01

    The residual risk (RR) of transfusion-transmitted infections, including the human immunodeficiency virus and hepatitis B and C viruses, is typically estimated by the incidence/window-period model, which relies on the following restrictive assumptions: each screening test, with probability 1, (1) detects an infected unit outside of the test's window period; (2) fails to detect an infected unit within the window period; and (3) correctly identifies an infection-free unit. These assumptions need not hold in practice due to random or systemic errors and individual variations in the window period. We develop a probability model that accurately estimates the RR by relaxing these assumptions, and quantify their impact using a published cost-effectiveness study and also within an optimization model. These assumptions lead to inaccurate estimates in cost-effectiveness studies and to sub-optimal solutions in the optimization model. The testing solution generated by the optimization model translates into fewer expected infections without an increase in the testing cost. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
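    The contrast the abstract draws can be caricatured in a few lines: the incidence/window-period model assumes perfect detection outside the window, while a probabilistic model keeps test sensitivity explicit. All numbers below are invented for illustration.

    ```python
    # Caricature of the point made above: the incidence/window-period model assumes
    # perfect detection outside the window; a probabilistic model keeps imperfect
    # sensitivity explicit. All numbers are invented.
    incidence = 5e-5        # new infections per person-year among donors (assumed)
    window_days = 10.0      # test window period in days (assumed)
    sensitivity = 0.995     # detection probability outside the window (assumed)
    prevalence = 1e-4       # infected donations entering screening (assumed)

    rr_window_model = incidence * (window_days / 365.0)
    # Probabilistic model: window-period misses PLUS misses from imperfect sensitivity.
    rr_probabilistic = rr_window_model + prevalence * (1 - sensitivity)

    print("RR, incidence/window-period model: %.2e" % rr_window_model)
    print("RR, probabilistic model:           %.2e" % rr_probabilistic)
    ```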

  14. Microhard MHX 2420 Orbital Performance Evaluation Using RT Logic T400CS

    NASA Technical Reports Server (NTRS)

    Kearney, Stuart; Lombardi, Mark; Attai, Watson; Oyadomari, Ken; Al Rumhi, Ahmed Saleh Nasser; Rakotonarivo, Sebastien; Chardon, Loic; Gazulla, Oriol Tintore; Wolfe, Jasper; Salas, Alberto Guillen

    2012-01-01

    A major upfront cost of building low cost Nanosatellites is the communications sub-system. Most radios built for space missions cost over $4,000 per unit. This exceeds many budgets. One possible cost effective solution is the Microhard MHX2420, a commercial off-the-shelf transceiver with a unit cost under $1000. This paper aims to support the Nanosatellite community seeking an inexpensive radio by characterizing Microhard's performance envelope. Though not intended for space operations, the ability to test edge cases and increase average data transfer speeds through optimization positions this radio as a solution for Nanosatellite communications by expanding usage to include more missions. The second objective of this paper is to test and verify the optimal radio settings for the most common cases to improve downlinking. All tests were conducted with the aid of the RT Logic T400CS, a hardware-in-the-loop channel simulator designed to emulate real-world radio frequency (RF) link effects. This study provides recommended settings to optimize the downlink speed as well as the environmental parameters that cause the link to fail.

  15. Optimal shielding design for minimum materials cost or mass

    DOE PAGES

    Woolley, Robert D.

    2015-12-02

    The mathematical underpinnings of cost optimal radiation shielding designs based on an extension of optimal control theory are presented, a heuristic algorithm to iteratively solve the resulting optimal design equations is suggested, and computational results for a simple test case are discussed. A typical radiation shielding design problem can have infinitely many solutions, all satisfying the problem's specified set of radiation attenuation requirements. Each such design has its own total materials cost. For a design to be optimal, no admissible change in its deployment of shielding materials can result in a lower cost. This applies in particular to very small changes, which can be restated using the calculus of variations as the Euler-Lagrange equations. Furthermore, the associated Hamiltonian function and application of Pontryagin's theorem lead to conditions for a shield to be optimal.
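    In outline, the variational structure the abstract invokes looks as follows; the notation (one-dimensional geometry $x$, shield composition $m(x)$, radiation field $\phi(x)$, attenuation dynamics $f$) is assumed here for illustration and is much simpler than the report's formulation:

    $$\min_{m(\cdot)} \int_0^L c\bigl(m(x)\bigr)\,dx \quad\text{subject to}\quad \frac{d\phi}{dx} = f\bigl(\phi(x), m(x)\bigr), \qquad \phi(L) \le \phi_{\max}.$$

    With the Hamiltonian $H(\phi, m, \lambda) = c(m) + \lambda\, f(\phi, m)$, Pontryagin's theorem requires the optimal $m^*(x)$ to minimize $H$ pointwise while the costate satisfies $d\lambda/dx = -\partial H/\partial\phi$; these are optimality conditions of the kind the report's heuristic algorithm iterates to solve.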

  16. Renewable Energy Resources Portfolio Optimization in the Presence of Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behboodi, Sahand; Chassin, David P.; Crawford, Curran

    In this paper we introduce a simple cost model of renewable integration and demand response that can be used to determine the optimal mix of generation and demand response resources. The model includes production cost, demand elasticity, uncertainty costs, capacity expansion costs, retirement and mothballing costs, and wind variability impacts to determine the hourly cost and revenue of electricity delivery. The model is tested on the 2024 planning case for British Columbia, and we find that cost is minimized with about 31% renewable generation. We also find that demand response does not have a significant impact on cost at the hourly level. The results suggest that the optimal level of renewable resource is not sensitive to a carbon tax or demand elasticity, but it is highly sensitive to the renewable resource installation cost.

  17. Integrated testing strategies can be optimal for chemical risk classification.

    PubMed

    Raseta, Marko; Pitchford, Jon; Cussens, James; Doe, John

    2017-08-01

    There is an urgent need to refine strategies for testing the safety of chemical compounds. This need arises both from the financial and ethical costs of animal tests, but also from the opportunities presented by new in-vitro and in-silico alternatives. Here we explore the mathematical theory underpinning the formulation of optimal testing strategies in toxicology. We show how the costs and imprecisions of the various tests, and the variability in exposures and responses of individuals, can be assembled rationally to form a Markov Decision Problem. We compute the corresponding optimal policies using well developed theory based on Dynamic Programming, thereby identifying and overcoming some methodological and logical inconsistencies which may exist in the current toxicological testing. By illustrating our methods for two simple but readily generalisable examples we show how so-called integrated testing strategies, where information of different precisions from different sources is combined and where different initial test outcomes lead to different sets of future tests, can arise naturally as optimal policies. Copyright © 2017 Elsevier Inc. All rights reserved.
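    The Markov Decision Problem formulation described above can be miniaturized to two imperfect tests and a stop/classify decision; the Bellman recursion below is standard dynamic programming. All priors, sensitivities, false-positive rates and penalty costs are invented for illustration.

    ```python
    # Tiny dynamic-programming sketch of an integrated testing strategy: at each
    # belief state, either classify now or buy another (imperfect) test. Costs,
    # error rates and the prior are invented.
    TESTS = {                     # name: (cost, P(pos|toxic), P(pos|safe)) -- assumed
        "in_silico": (1.0, 0.80, 0.30),
        "in_vitro":  (5.0, 0.95, 0.10),
    }
    C_FALSE_NEG, C_FALSE_POS = 100.0, 20.0   # misclassification penalties (assumed)

    def classify_cost(p_toxic):
        """Expected cost of the best immediate classification."""
        return min(p_toxic * C_FALSE_NEG,        # call it safe  -> pay for false negatives
                   (1 - p_toxic) * C_FALSE_POS)  # call it toxic -> pay for false positives

    def value(p_toxic, remaining):
        """Bellman recursion: min over 'stop now' and every remaining test."""
        best = classify_cost(p_toxic)
        for t in remaining:
            c, se, fp = TESTS[t]
            p_pos = p_toxic * se + (1 - p_toxic) * fp          # marginal P(positive)
            post_pos = p_toxic * se / p_pos                    # Bayes update on positive
            post_neg = p_toxic * (1 - se) / (1 - p_pos)        # Bayes update on negative
            rest = remaining - {t}
            best = min(best, c + p_pos * value(post_pos, rest)
                              + (1 - p_pos) * value(post_neg, rest))
        return best

    print("expected cost of optimal policy: %.2f" % value(0.2, set(TESTS)))
    print("classify-immediately cost:       %.2f" % classify_cost(0.2))
    ```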

  18. An approach for Ewing test selection to support the clinical assessment of cardiac autonomic neuropathy.

    PubMed

    Stranieri, Andrew; Abawajy, Jemal; Kelarev, Andrei; Huda, Shamsul; Chowdhury, Morshed; Jelinek, Herbert F

    2013-07-01

    This article addresses the problem of determining optimal sequences of tests for the clinical assessment of cardiac autonomic neuropathy (CAN). We investigate the accuracy of using only one of the recommended Ewing tests to classify CAN and the additional accuracy obtained by adding the remaining tests of the Ewing battery. This is important as not all five Ewing tests can always be applied in each situation in practice. We used a new and unique database from the diabetes screening research initiative project, which is more than ten times larger than the data set used by Ewing in his original investigation of CAN. We utilized decision trees and the optimal decision path finder (ODPF) procedure for identifying optimal sequences of tests. We present experimental results on the accuracy of using each one of the recommended Ewing tests to classify CAN and the additional accuracy that can be achieved by adding the remaining tests of the Ewing battery. We found the best sequences of tests for a cost-function equal to the number of tests. The accuracies achieved by the initial segments of the optimal sequences are 80.80, 91.33, 93.97 and 94.14 for 2 categories of CAN; 79.86, 89.29, 91.16 and 91.76 for 3 categories; and 78.90, 86.21, 88.15 and 88.93 for 4 categories. They show significant improvement compared to the sequence considered previously in the literature and to the mathematical expectations of the accuracies of a random sequence of tests. The complete outcomes obtained for all subsets of the Ewing features are required for determining optimal sequences of tests for any cost-function with the use of the ODPF procedure. We also found the two most significant additional features that can increase the accuracy when some of the Ewing attributes cannot be obtained. The outcomes obtained can be used to determine the optimal sequence of tests for each individual cost-function by following the ODPF procedure. The results show that the best single Ewing test for diagnosing CAN is the deep breathing heart rate variation test. Optimal sequences found for the cost-function equal to the number of tests guarantee that the best accuracy is achieved after any number of tests and provide an improvement in comparison with the previous ordering of tests or a random sequence. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Placental alpha-microglobulin-1 and combined traditional diagnostic test: a cost-benefit analysis.

    PubMed

    Echebiri, Nelson C; McDoom, M Maya; Pullen, Jessica A; Aalto, Meaghan M; Patel, Natasha N; Doyle, Nora M

    2015-01-01

    We sought to evaluate if the placental alpha-microglobulin (PAMG)-1 test vs the combined traditional diagnostic test (CTDT) of pooling, nitrazine, and ferning would be a cost-beneficial screening strategy in the setting of potential preterm premature rupture of membranes. A decision analysis model was used to estimate the economic impact of PAMG-1 test vs the CTDT on preterm delivery costs from a societal perspective. Our primary outcome was the annual net cost-benefit per person tested. Baseline probabilities and costs assumptions were derived from published literature. We conducted sensitivity analyses using both deterministic and probabilistic models. Cost estimates reflect 2013 US dollars. Annual net benefit from PAMG-1 was $20,014 per person tested, while CTDT had a net benefit of $15,757 per person tested. If the probability of rupture is <38%, PAMG-1 will be cost-beneficial with an annual net benefit of $16,000-37,000 per person tested, while CTDT will have an annual net benefit of $16,000-19,500 per person tested. If the probability of rupture is >38%, CTDT is more cost-beneficial. Monte Carlo simulations of 1 million trials selected PAMG-1 as the optimal strategy with a frequency of 89%, while CTDT was only selected as the optimal strategy with a frequency of 11%. Sensitivity analyses were robust. Our cost-benefit analysis provides the economic evidence for the adoption of PAMG-1 in diagnosing preterm premature rupture of membranes in uncertain presentations and when CTDT is equivocal at 34 to <37 weeks' gestation. Copyright © 2015 Elsevier Inc. All rights reserved.
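    The 38% crossover reported above is just a comparison of expected net benefits as the prior probability of rupture varies; the sketch below shows the mechanics. The two linear benefit curves are invented so that they cross near 38%; only that break-even point and the headline net-benefit ranges echo the abstract.

    ```python
    # Sketch of the break-even logic quoted above: pick the test with the higher
    # expected net benefit as the prior probability of rupture varies. The linear
    # curves are invented; only the ~38% crossover mirrors the abstract.
    def net_benefit_pamg1(p_rupture):   # assumed linear fit, steeper in p
        return 37_000 - 55_000 * p_rupture

    def net_benefit_ctdt(p_rupture):    # assumed flatter curve, better at high priors
        return 19_500 - 9_200 * p_rupture

    for p in [0.10, 0.25, 0.38, 0.50, 0.75]:
        choice = "PAMG-1" if net_benefit_pamg1(p) > net_benefit_ctdt(p) else "CTDT"
        print("P(rupture)=%.2f -> %s" % (p, choice))
    ```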

  20. Cost optimization of reinforced concrete cantilever retaining walls under seismic loading using a biogeography-based optimization algorithm with Levy flights

    NASA Astrophysics Data System (ADS)

    Aydogdu, Ibrahim

    2017-03-01

    In this article, a new version of a biogeography-based optimization algorithm with Levy flight distribution (LFBBO) is introduced and used for the optimum design of reinforced concrete cantilever retaining walls under seismic loading. The cost of the wall is taken as an objective function, which is minimized under the constraints implemented by the American Concrete Institute (ACI 318-05) design code and geometric limitations. The influence of peak ground acceleration (PGA) on optimal cost is also investigated. The solution of the problem is attained by the LFBBO algorithm, which is developed by adding Levy flight distribution to the mutation part of the biogeography-based optimization (BBO) algorithm. Five design examples, of which two are used in literature studies, are optimized in the study. The results are compared to test the performance of the LFBBO and BBO algorithms, to determine the influence of the seismic load and PGA on the optimal cost of the wall.
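    A skeleton of where the Lévy flight enters BBO, per the abstract: rank-based migration shares features between habitats, and mutation perturbs variables with a heavy-tailed Lévy step instead of a uniform one. The objective, bounds and rates below are placeholders, not the retaining-wall cost model or the ACI 318-05 constraints.

    ```python
    # Skeleton of LFBBO: BBO migration plus Levy-flight mutation. The objective,
    # bounds and rates are placeholders, not the retaining-wall design problem.
    import math, random

    def objective(x):                  # stand-in for the wall-cost function
        return sum((xi - 3.0) ** 2 for xi in x)

    def levy(beta=1.5):                # Mantegna's heavy-tailed step
        sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                 (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

    POP, DIM, GENS, M_RATE = 20, 4, 300, 0.1
    pop = [[random.uniform(0, 10) for _ in range(DIM)] for _ in range(POP)]

    for _ in range(GENS):
        pop.sort(key=objective)        # rank habitats by suitability (cost)
        for i in range(1, POP):        # keep the elite habitat unchanged
            for d in range(DIM):
                if random.random() < i / POP:                 # poor habitats immigrate more
                    pop[i][d] = pop[random.randrange(i)][d]   # copy from a better habitat
                if random.random() < M_RATE:                  # LFBBO twist: Levy mutation
                    pop[i][d] = min(10, max(0, pop[i][d] + 0.1 * levy()))

    best = min(pop, key=objective)
    print("best cost: %.4f" % objective(best), "at", [round(v, 2) for v in best])
    ```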

  1. Optimization of structures on the basis of fracture mechanics and reliability criteria

    NASA Technical Reports Server (NTRS)

    Heer, E.; Yang, J. N.

    1973-01-01

    A systematic summary of the factors involved in the optimization of a given structural configuration is presented as part of a report resulting from a study of the analysis of the objective function. The predicted reliability of the finished structure depends sharply on the results of coupon tests. The optimization analysis developed by the study also accounts for the expected cost of proof testing.

  2. Assessment of regional management strategies for controlling seawater intrusion

    USGS Publications Warehouse

    Reichard, E.G.; Johnson, T.A.

    2005-01-01

    Simulation-optimization methods, applied with adequate sensitivity tests, can provide useful quantitative guidance for controlling seawater intrusion. This is demonstrated in an application to the West Coast Basin of coastal Los Angeles that considers two management options for improving hydraulic control of seawater intrusion: increased injection into barrier wells and in lieu delivery of surface water to replace current pumpage. For the base-case optimization analysis, assuming constant groundwater demand, in lieu delivery was determined to be most cost effective. Reduced-cost information from the optimization provided guidance for prioritizing locations for in lieu delivery. Model sensitivity to a suite of hydrologic, economic, and policy factors was tested. Raising the imposed average water-level constraint at the hydraulic-control locations resulted in nonlinear increases in cost. Systematic varying of the relative costs of injection and in lieu water yielded a trade-off curve between relative costs and injection/in lieu amounts. Changing the assumed future scenario to one of increasing pumpage in the adjacent Central Basin caused a small increase in the computed costs of seawater intrusion control. Changing the assumed boundary condition representing interaction with an adjacent basin did not affect the optimization results. Reducing the assumed hydraulic conductivity of the main productive aquifer resulted in a large increase in the model-computed cost. Journal of Water Resources Planning and Management © ASCE.

  3. Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch

    NASA Astrophysics Data System (ADS)

    Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.

    2014-10-01

    The economic dispatch (ED) problem is an essential optimization task in power generation systems. It is defined as the process of allocating the real power output of generation units to meet the required load demand so that their total operating cost is minimized while satisfying all physical and operational constraints. This paper introduces a novel optimization technique named Swarm-based Mean-Variance Mapping Optimization (MVMOS), an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, comprising 3, 13 and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be implemented efficiently for solving economic dispatch.
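    For context, the classical ED problem the abstract defines has a closed-form flavor when cost curves are quadratic: all unconstrained units run at equal incremental cost. The sketch below solves a three-unit toy case by bisecting on that marginal cost; it is a baseline, not the MVMOS metaheuristic, and all coefficients, limits and the demand are invented.

    ```python
    # Equal-incremental-cost (lambda iteration) baseline for the ED problem above;
    # coefficients, limits and demand are invented. MVMOS is not reproduced here.
    UNITS = [  # (a, b, c) in cost = a*P^2 + b*P + c, plus Pmin, Pmax (assumed)
        (0.008, 7.0, 200.0, 50.0, 300.0),
        (0.009, 6.3, 180.0, 50.0, 400.0),
        (0.007, 6.8, 140.0, 50.0, 250.0),
    ]
    DEMAND = 600.0  # MW (assumed)

    def dispatch(lmbda):
        """Each unit runs where marginal cost 2aP + b equals lambda, clipped to limits."""
        return [min(pmax, max(pmin, (lmbda - b) / (2 * a)))
                for a, b, c, pmin, pmax in UNITS]

    lo, hi = 0.0, 50.0
    for _ in range(60):                  # bisect on the system marginal cost
        lmbda = (lo + hi) / 2
        if sum(dispatch(lmbda)) < DEMAND:
            lo = lmbda
        else:
            hi = lmbda
    P = dispatch(lmbda)
    total = sum(a * p * p + b * p + c for (a, b, c, *_), p in zip(UNITS, P))
    print("lambda=%.3f  P=%s  cost=%.1f" % (lmbda, [round(p, 1) for p in P], total))
    ```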

  4. Staff Study on Cost and Training Effectiveness of Proposed Training Systems. TAEG Report 1.

    ERIC Educational Resources Information Center

    Naval Training Equipment Center, Orlando, FL. Training Analysis and Evaluation Group.

    A study began the development and initial testing of a method for predicting cost and training effectiveness of proposed training programs. A prototype Training Effectiveness and Cost Effectiveness Prediction (TECEP) model was developed and tested. The model was a method for optimization of training media allocation on the basis of fixed training…

  5. Optimizing urine drug testing for monitoring medication compliance in pain management.

    PubMed

    Melanson, Stacy E F; Ptolemy, Adam S; Wasan, Ajay D

    2013-12-01

    It can be challenging to successfully monitor medication compliance in pain management. Clinicians and laboratorians need to collaborate to optimize patient care and maximize operational efficiency. The test menu, assay cutoffs, and testing algorithms utilized in urine drug testing panels should be periodically reviewed and tailored to the patient population to effectively assess compliance and avoid unnecessary testing and cost to the patient. Pain management and pathology collaborated on an important quality improvement initiative to optimize urine drug testing for monitoring medication compliance in pain management. We retrospectively reviewed 18 months of data from our pain management center. We gathered data on test volumes, positivity rates, and the frequency of false positive results. We also reviewed the clinical utility of our testing algorithms, assay cutoffs, and adulterant panel. In addition, the cost of each component was calculated. The positivity rates for ethanol and 3,4-methylenedioxymethamphetamine were <1%, so we eliminated these tests from our panel. We also lowered the screening cutoff for cocaine to meet the clinical needs of the pain management center. In addition, we changed our testing algorithms for 6-acetylmorphine, benzodiazepines, and methadone. For example, due to the high rate of false negative results with our immunoassay-based benzodiazepine screen, we removed the screening portion of the algorithm and now perform benzodiazepine confirmation up front in all specimens by liquid chromatography-tandem mass spectrometry. Conducting an interdisciplinary quality improvement project allowed us to optimize our testing panel for monitoring medication compliance in pain management and to reduce cost. Wiley Periodicals, Inc.

  6. Application of a territorial-based filtering algorithm in turbomachinery blade design optimization

    NASA Astrophysics Data System (ADS)

    Bahrami, Salman; Khelghatibana, Maryam; Tribes, Christophe; Yi Lo, Suk; von Fellenberg, Sven; Trépanier, Jean-Yves; Guibault, François

    2017-02-01

    A territorial-based filtering algorithm (TBFA) is proposed as an integration tool in a multi-level design optimization methodology. The design evaluation burden is split between low- and high-cost levels in order to properly balance the cost and required accuracy in different design stages, based on the characteristics and requirements of the case at hand. TBFA is in charge of connecting those levels by selecting a given number of geometrically different promising solutions from the low-cost level to be evaluated in the high-cost level. Two test case studies, a Francis runner and a transonic fan rotor, have demonstrated the robustness and functionality of TBFA in real industrial optimization problems.

  7. A systematic approach to designing statistically powerful heteroscedastic 2 × 2 factorial studies while minimizing financial costs.

    PubMed

    Jan, Show-Li; Shieh, Gwowen

    2016-08-31

    The 2 × 2 factorial design is widely used for assessing the existence of interaction and the extent of generalizability of two factors where each factor had only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with the selected linear contrasts. To correct for the potential heterogeneity of variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize the total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using optimization technique and screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. Alternatively, the suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the usefulness and implementation of the recommended technique in planning 2 × 2 factorial designs.
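    The existing allocation rule cited above, n1/n2 = (σ1/σ2)/√(c1/c2), is easy to check against brute force on a toy case: minimize total sampling cost subject to a target variance of the mean difference. The standard deviations, unit costs and variance target below are invented.

    ```python
    # Check the classical allocation rule n1/n2 = (s1/s2)/sqrt(c1/c2) against a
    # brute-force cost minimization; all inputs are invented.
    import math

    s1, s2 = 12.0, 6.0      # group standard deviations (assumed)
    c1, c2 = 4.0, 1.0       # per-subject sampling costs (assumed)
    VAR_TARGET = 1.0        # required Var(xbar1 - xbar2) (assumed)

    ratio = (s1 / s2) / math.sqrt(c1 / c2)
    print("classical optimal ratio n1/n2 = %.2f" % ratio)

    best = None
    for n1 in range(2, 500):
        for n2 in range(2, 500):
            if s1 ** 2 / n1 + s2 ** 2 / n2 <= VAR_TARGET:   # precision constraint met
                cost = c1 * n1 + c2 * n2
                if best is None or cost < best[0]:
                    best = (cost, n1, n2)
                break       # larger n2 only adds cost once the constraint holds
    print("brute force: cost=%.0f at n1=%d, n2=%d (ratio %.2f)" %
          (best[0], best[1], best[2], best[1] / best[2]))
    ```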

  8. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.

  9. Development of coordination system model on single-supplier multi-buyer for multi-item supply chain with probabilistic demand

    NASA Astrophysics Data System (ADS)

    Olivia, G.; Santoso, A.; Prayogo, D. N.

    2017-11-01

    Nowadays, competition between supply chains is getting tighter, and a good coordination system among supply chain members is crucial. This paper focuses on the development of a coordination model between a single supplier and multiple buyers in a supply chain. The proposed optimization model determines the optimal number of deliveries from the supplier to the buyers in order to minimize the total cost over a planning horizon. The components of the total supply chain cost are transportation costs, handling costs of the supplier and buyers, and stock-out costs. In the proposed model, the supplier can supply various types of items to buyers whose item demand patterns are probabilistic. A sensitivity analysis of the proposed model was conducted to test the effects of changes in transportation costs, handling costs, and the production capacity of the supplier. The results showed a significant influence of changes in transportation costs, handling costs, and production capacity on the decisions about the optimal number of deliveries of each item to the buyers.

  10. Advances in Optimizing Weather Driven Electric Power Systems.

    NASA Astrophysics Data System (ADS)

    Clack, C.; MacDonald, A. E.; Alexander, A.; Dunbar, A. D.; Xie, Y.; Wilczak, J. M.

    2014-12-01

    The importance of weather-driven renewable energies for the United States (and global) energy portfolio is growing. The main perceived problems with weather-driven renewable energies are their intermittent nature, low power density, and high costs. The National Energy with Weather System Simulator (NEWS) is a mathematical optimization tool that allows the construction of weather-driven energy sources that will work in harmony with the needs of the system; for example, it will match the electric load, reduce variability, decrease costs, and abate carbon emissions. One important test run included existing US carbon-free power sources, natural gas power when needed, and a High Voltage Direct Current power transmission network. This study shows that the costs and carbon emissions from an optimally designed national system decrease with geographic size. It shows that, with achievable estimates of wind and solar generation costs, the US could decrease its carbon emissions by up to 80% by the early 2030s without an increase in electric costs. The key requirement would be a 48-state network of HVDC transmission, creating a national market for electricity not possible in the current AC grid. These results were found without the need for storage. Further, we tested the effect of changing natural gas fuel prices on the optimal configuration of the national electric power system. Another test extended the analysis to global regions; it shows that the same properties found in the US study extend to the most populous regions of the planet. This extension is a simplified version of the US study and is where much more research can be carried out. We compare our results to other model results.

  11. Estimated Costs for Delivery of HIV Antiretroviral Therapy to Individuals with CD4+ T-Cell Counts >350 cells/µL in Rural Uganda.

    PubMed

    Jain, Vivek; Chang, Wei; Byonanebye, Dathan M; Owaraganise, Asiphas; Twinomuhwezi, Ellon; Amanyire, Gideon; Black, Douglas; Marseille, Elliot; Kamya, Moses R; Havlir, Diane V; Kahn, James G

    2015-01-01

    Evidence favoring earlier HIV ART initiation at high CD4+ T-cell counts (CD4 >350 cells/µL) has grown, and guidelines now recommend earlier HIV treatment. However, the cost of providing ART to individuals with CD4 >350 in Sub-Saharan Africa has not been well estimated. This remains a major barrier to optimal global cost projections for accelerating the scale-up of ART. Our objective was to compute costs of ART delivery to high CD4+ count individuals in a typical rural Ugandan health center-based HIV clinic, and use these data to construct scenarios of efficient ART scale-up. Within a clinical study evaluating streamlined ART delivery to 197 individuals with CD4+ cell counts >350 cells/µL (EARLI Study: NCT01479634) in Mbarara, Uganda, we performed a micro-costing analysis of administrative records and ART prices, and a time-and-motion analysis of staff work patterns. We computed observed per-person-per-year (ppy) costs, and constructed models estimating costs under several increasingly efficient ART scale-up scenarios using local salaries, lowest drug prices, optimized patient loads, and inclusion of viral load (VL) testing. Among the 197 individuals enrolled in the EARLI Study, the median pre-ART CD4+ cell count was 569/µL (IQR 451-716). The observed ART delivery cost was $628 ppy at steady state. Models using local salaries and only core laboratory tests estimated costs of $529/$445 ppy (with/without VL testing, respectively). Models with lower salaries, lowest ART prices, and optimized healthcare worker schedules reduced costs by $100-200 ppy. Costs in a maximally efficient scale-up model were $320/$236 ppy (with/without VL testing). This included $39 for personnel, $106 for ART, $130/$46 for laboratory tests, and $46 for administrative/other costs. A key limitation of this study is its derivation and extrapolation of costs from one large rural treatment program of high CD4+ count individuals. In a Ugandan HIV clinic, ART delivery costs--including VL testing--for individuals with CD4 >350 were similar to estimates from high-efficiency programs. In higher efficiency scale-up models, costs were substantially lower. These favorable costs may be achieved because high CD4+ count patients are often asymptomatic, facilitating more efficient streamlined ART delivery. Our work provides a framework for calculating costs of efficient ART scale-up models using accessible data from specific programs and regions.

  12. Optimal Congestion Management in Electricity Market Using Particle Swarm Optimization with Time Varying Acceleration Coefficients

    NASA Astrophysics Data System (ADS)

    Boonyaritdachochai, Panida; Boonchuay, Chanwit; Ongsakul, Weerakorn

    2010-06-01

    This paper proposes an optimal power redispatching approach for congestion management in deregulated electricity market. Generator sensitivity is considered to indicate the redispatched generators. It can reduce the number of participating generators. The power adjustment cost and total redispatched power are minimized by particle swarm optimization with time varying acceleration coefficients (PSO-TVAC). The IEEE 30-bus and IEEE 118-bus systems are used to illustrate the proposed approach. Test results show that the proposed optimization scheme provides the lowest adjustment cost and redispatched power compared to the other schemes. The proposed approach is useful for the system operator to manage the transmission congestion.
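    The defining detail of PSO-TVAC is the schedule on the acceleration coefficients: the cognitive weight decays while the social weight grows, shifting the swarm from exploration to convergence. The sketch below uses commonly cited linear schedules (2.5 to 0.5, and 0.5 to 2.5) on a toy objective; the paper's redispatch formulation, generator-sensitivity screening and IEEE test data are not reproduced.

    ```python
    # Minimal PSO with time-varying acceleration coefficients (TVAC): c1 decays
    # while c2 grows. Schedules, bounds and the objective are assumptions.
    import random

    def objective(x):                      # stand-in for the adjustment-cost function
        return sum((xi - 1.0) ** 2 for xi in x)

    N, DIM, ITERS = 20, 5, 300
    C1_I, C1_F, C2_I, C2_F, W = 2.5, 0.5, 0.5, 2.5, 0.7

    pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
    vel = [[0.0] * DIM for _ in range(N)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)

    for t in range(ITERS):
        frac = t / ITERS
        c1 = C1_I + (C1_F - C1_I) * frac   # cognitive weight: explore early
        c2 = C2_I + (C2_F - C2_I) * frac   # social weight: converge late
        for i in range(N):
            for d in range(DIM):
                vel[i][d] = (W * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)

    print("best cost: %.6f" % objective(gbest))
    ```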

  13. A system-level cost-of-energy wind farm layout optimization with landowner modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Le; MacDonald, Erin

    This work applies an enhanced levelized wind farm cost model, including landowner remittance fees, to determine optimal turbine placements under three landowner participation scenarios and two land-plot shapes. Instead of assuming a continuous piece of land is available for the wind farm construction, as in most layout optimizations, the problem formulation represents landowner participation scenarios as a binary string variable, along with the number of turbines. The cost parameters and model are a combination of models from the National Renewable Energy Laboratory (NREL), Lawrence Berkeley National Laboratory, and Windustry. The system-level cost-of-energy (COE) optimization model is also tested under two land-plot shapes: equally-sized square land plots and unequal rectangular land plots. The optimal COE results are compared to actual COE data and found to be realistic. The results show that landowner remittances account for approximately 10% of farm operating costs across all cases. Irregular land-plot shapes are easily handled by the model. We find that larger land plots do not necessarily receive higher remittance fees. The model can help site developers identify the most crucial land plots for project success and the optimal positions of turbines, with realistic estimates of costs and profitability. © 2013 Elsevier Ltd. All rights reserved.

  14. The optimal imaging strategy for patients with stable chest pain: a cost-effectiveness analysis.

    PubMed

    Genders, Tessa S S; Petersen, Steffen E; Pugliese, Francesca; Dastidar, Amardeep G; Fleischmann, Kirsten E; Nieman, Koen; Hunink, M G Myriam

    2015-04-07

    Background: The optimal imaging strategy for patients with stable chest pain is uncertain. Objective: To determine the cost-effectiveness of different imaging strategies for patients with stable chest pain. Design: Microsimulation state-transition model. Data Sources: Published literature. Target Population: 60-year-old patients with a low to intermediate probability of coronary artery disease (CAD). Time Horizon: Lifetime. Setting: The United States, the United Kingdom, and the Netherlands. Interventions: Coronary computed tomography (CT) angiography, cardiac stress magnetic resonance imaging, stress single-photon emission CT, and stress echocardiography. Outcome Measures: Lifetime costs, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios. Results: The strategy that maximized QALYs and was cost-effective in the United States and the Netherlands began with coronary CT angiography, continued with cardiac stress imaging if angiography found at least 50% stenosis in at least 1 coronary artery, and ended with catheter-based coronary angiography if stress imaging induced ischemia of any severity. For U.K. men, the preferred strategy was optimal medical therapy without catheter-based coronary angiography if coronary CT angiography found only moderate CAD or stress imaging induced only mild ischemia. In these strategies, stress echocardiography was consistently more effective and less expensive than other stress imaging tests. For U.K. women, the optimal strategy was stress echocardiography followed by catheter-based coronary angiography if echocardiography induced mild or moderate ischemia. Results were sensitive to changes in the probability of CAD and assumptions about false-positive results. Limitations: All cardiac stress imaging tests were assumed to be available. Exercise electrocardiography was included only in a sensitivity analysis. Differences in QALYs among strategies were small. Conclusion: Coronary CT angiography is a cost-effective triage test for 60-year-old patients who have nonacute chest pain and a low to intermediate probability of CAD. Primary Funding Source: Erasmus University Medical Center.

  15. Technology forecasting for space communication. Task one report: Cost and weight tradeoff studies for EOS and TDRS

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Weight and cost optimized EOS communication links are determined for 2.25, 7.25, 14.5, 21, and 60 GHz systems and for a 10.6 micron homodyne detection laser system. EOS to ground links are examined for 556, 834, and 1112 km EOS orbits, with ground terminals at the Network Test and Tracking Facility and at Goldstone. Optimized 21 GHz and 10.6 micron links are also examined. For the EOS to Tracking and Data Relay Satellite to ground link, signal-to-noise ratios of the uplink and downlink are also optimized for minimum overall cost or spaceborne weight. Finally, the optimized 21 GHz EOS to ground link is determined for various precipitation rates. All system performance parameters and mission dependent constraints are presented, as are the system cost and weight functional dependencies. The features and capabilities of the computer program to perform the foregoing analyses are described.

  16. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    PubMed

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    This paper presents a fast and quasi-optimal method for muscle force estimation: the MusIC method. It consists in interpolating a first estimate from a database generated offline by solving a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than a classical optimization approach, with a relative mean error of 4% on cost-function evaluation.

  17. Design and Field Test of a WSN Platform Prototype for Long-Term Environmental Monitoring

    PubMed Central

    Lazarescu, Mihai T.

    2015-01-01

    Long-term wildfire monitoring using distributed in situ temperature sensors is an accurate, yet demanding environmental monitoring application, which requires long-life, low-maintenance, low-cost sensors and a simple, fast, error-proof deployment procedure. We present in this paper the most important design considerations and optimizations of all elements of a low-cost WSN platform prototype for long-term, low-maintenance pervasive wildfire monitoring, its preparation for a nearly three-month field test, the analysis of the causes of failure during the test and the lessons learned for platform improvement. The main components of the total cost of the platform (nodes, deployment and maintenance) are carefully analyzed and optimized for this application. The gateways are designed to operate with resources that are generally used for sensor nodes, while the requirements and cost of the sensor nodes are significantly lower. We define and test in simulation and in the field experiment a simple, but effective communication protocol for this application. It helps to lower the cost of the nodes and field deployment procedure, while extending the theoretical lifetime of the sensor nodes to over 16 years on a single 1 Ah lithium battery. PMID:25912349

  18. Distributed Wind Competitiveness Improvement Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The Competitiveness Improvement Project (CIP) is a periodic solicitation through the U.S. Department of Energy and its National Renewable Energy Laboratory. Manufacturers of small and medium wind turbines are awarded cost-shared grants via a competitive process to optimize their designs, develop advanced manufacturing processes, and perform turbine testing. The goals of the CIP are to make wind energy cost competitive with other distributed generation technology and increase the number of wind turbine designs certified to national testing standards. This fact sheet describes the CIP and funding awarded as part of the project.

  19. Cost-effectiveness of cervical cancer screening with primary human papillomavirus testing in Norway.

    PubMed

    Burger, E A; Ortendahl, J D; Sy, S; Kristiansen, I S; Kim, J J

    2012-04-24

    New screening technologies and vaccination against human papillomavirus (HPV), the necessary cause of cervical cancer, may impact optimal approaches to prevent cervical cancer. We evaluated the cost-effectiveness of alternative screening strategies to inform cervical cancer prevention guidelines in Norway. We leveraged the primary epidemiologic and economic data from Norway to contextualise a simulation model of HPV-induced cervical cancer. The current cytology-only screening was compared with strategies involving cytology at younger ages and primary HPV-based screening at older ages (31/34+ years), an option being actively deliberated by the Norwegian government. We varied the switch-age, screening interval, and triage strategies for women with HPV-positive results. Uncertainty was evaluated in sensitivity analysis. Current cytology-only screening was less effective and more costly than strategies that involve switching to primary HPV testing in older ages. For unvaccinated women, switching at age 34 years to primary HPV testing every 4 years was optimal given the Norwegian cost-effectiveness threshold ($83,000 per year of life saved). For vaccinated women, a 6-year screening interval was cost-effective. When we considered a wider range of strategies, we found that an earlier switch to HPV testing (at age 31 years) may be preferred. Strategies involving a switch to HPV testing for primary screening in older women is expected to be cost-effective compared with current recommendations in Norway.

  20. Fairness in optimizing bus-crew scheduling process.

    PubMed

    Ma, Jihui; Song, Cuiying; Ceder, Avishai Avi; Liu, Tao; Guan, Wei

    2017-01-01

    This work proposes a model for the crew scheduling problem for bus drivers (CSP-BD) that considers fairness, together with a hybrid ant-colony optimization (HACO) algorithm to solve it. The main contributions are: (a) a valid approach for cases with a special cost structure and constraints considering the fairness of working time and idle time; (b) an improved algorithm incorporating a Gamma heuristic function and selection rules. The relationships among the cost components are examined using ten bus lines collected from the Beijing Public Transport Holdings (Group) Co., Ltd., one of the largest bus transit companies in the world. The results show that the unfair cost is indirectly related to the common cost, fixed cost and extra cost, and that it approaches the common and fixed costs when its coefficient is twice the common cost coefficient. Furthermore, the longest computation time for the tested bus line, with 1,108 pieces and 74 blocks, is less than 30 minutes. The results indicate that the HACO-based algorithm is a feasible and efficient optimization technique for the CSP-BD, especially for large-scale problems.

  1. Optimal Sensor Allocation for Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Azam, Mohammad; Pattipati, Krishna; Patterson-Hine, Ann

    2004-01-01

    Automatic fault diagnostic schemes rely on various types of sensors (e.g., temperature, pressure, vibration) to measure system parameters. The efficacy of a diagnostic scheme is largely dependent on the amount and quality of information available from these sensors. The reliability of sensors, as well as weight, volume, power, and cost constraints, often makes it impractical to monitor a large number of system parameters. An optimized sensor allocation that maximizes fault diagnosability, subject to specified weight, volume, power, and cost constraints, is therefore required. Use of optimal sensor allocation strategies during the design phase can ensure better diagnostics at a reduced cost for a system incorporating a high degree of built-in testing. In this paper, we propose an approach that employs multiple fault diagnosis (MFD) and optimization techniques for optimal sensor placement for fault detection and isolation (FDI) in complex systems. Keywords: sensor allocation, multiple fault diagnosis, Lagrangian relaxation, approximate belief revision, multidimensional knapsack problem.
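    Since the keywords frame the allocation as a knapsack problem, a toy single-constraint version can make the structure concrete. This is a hedged sketch: the sensor data are invented, diagnosability gains are assumed additive (a simplification the paper's MFD machinery avoids), and exhaustive search stands in for Lagrangian relaxation:

```python
from itertools import combinations

# Toy sensor-allocation sketch: pick the sensor subset that maximizes
# (assumed additive) diagnosability gain under a single cost cap.
sensors = {  # name: (diagnosability gain, cost) -- invented values
    "temp": (0.30, 2.0), "press": (0.25, 3.0),
    "vib": (0.35, 4.0), "flow": (0.20, 2.5),
}
BUDGET = 7.0

best_set, best_gain = (), 0.0
for r in range(1, len(sensors) + 1):
    for combo in combinations(sensors, r):
        cost = sum(sensors[s][1] for s in combo)
        gain = sum(sensors[s][0] for s in combo)  # additive approximation
        if cost <= BUDGET and gain > best_gain:
            best_set, best_gain = combo, gain

print(best_set, round(best_gain, 2))   # -> ('temp', 'vib') 0.65
```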

  2. Hybrid maize breeding with doubled haploids: I. One-stage versus two-stage selection for testcross performance.

    PubMed

    Longin, C Friedrich H; Utz, H Friedrich; Reif, Jochen C; Schipprack, Wolfgang; Melchinger, Albrecht E

    2006-03-01

    Optimum allocation of resources is of fundamental importance for the efficiency of breeding programs. The objectives of our study were to (1) determine the optimum allocation for the number of lines and test locations in hybrid maize breeding with doubled haploids (DHs) regarding two optimization criteria, the selection gain ΔG(k) and the probability P(k) of identifying superior genotypes, (2) compare both optimization criteria including their standard deviations (SDs), and (3) investigate the influence of production costs of DHs on the optimum allocation. For different budgets, numbers of finally selected lines, ratios of variance components, and production costs of DHs, the optimum allocation of test resources under one- and two-stage selection for testcross performance with a given tester was determined by using Monte Carlo simulations. In one-stage selection, lines are tested in field trials in a single year. In two-stage selection, optimum allocation of resources involves evaluation of (1) a large number of lines in a small number of test locations in the first year and (2) a small number of the selected superior lines in a large number of test locations in the second year, thereby maximizing both optimization criteria. Furthermore, to have a realistic chance of identifying a superior genotype, the probability P(k) of identifying superior genotypes should be greater than 75%. For budgets between 200 and 5,000 field plot equivalents, P(k) > 75% was reached only for genotypes belonging to the best 5% of the population. As the optimum allocation for P(k)(5%) was similar to that for ΔG(k), the choice of the optimization criterion was not crucial. The production costs of DHs had only a minor effect on the optimum number of locations and on the values of the optimization criteria.

  3. Estimated Costs for Delivery of HIV Antiretroviral Therapy to Individuals with CD4+ T-Cell Counts >350 cells/uL in Rural Uganda

    PubMed Central

    Jain, Vivek; Chang, Wei; Byonanebye, Dathan M.; Owaraganise, Asiphas; Twinomuhwezi, Ellon; Amanyire, Gideon; Black, Douglas; Marseille, Elliot; Kamya, Moses R.; Havlir, Diane V.; Kahn, James G.

    2015-01-01

    Background Evidence favoring earlier HIV ART initiation at high CD4+ T-cell counts (CD4>350/uL) has grown, and guidelines now recommend earlier HIV treatment. However, the cost of providing ART to individuals with CD4>350 in Sub-Saharan Africa has not been well estimated. This remains a major barrier to optimal global cost projections for accelerating the scale-up of ART. Our objective was to compute costs of ART delivery to high CD4+ count individuals in a typical rural Ugandan health center-based HIV clinic, and use these data to construct scenarios of efficient ART scale-up. Methods Within a clinical study evaluating streamlined ART delivery to 197 individuals with CD4+ cell counts >350 cells/uL (EARLI Study: NCT01479634) in Mbarara, Uganda, we performed a micro-costing analysis of administrative records, ART prices, and a time-and-motion analysis of staff work patterns. We computed observed per-person-per-year (ppy) costs, and constructed models estimating costs under several increasingly efficient ART scale-up scenarios using local salaries, lowest drug prices, optimized patient loads, and inclusion of viral load (VL) testing. Findings Among 197 individuals enrolled in the EARLI Study, median pre-ART CD4+ cell count was 569/uL (IQR 451-716). Observed ART delivery cost was $628 ppy at steady state. Models using local salaries and only core laboratory tests estimated costs of $529/$445 ppy (+/- VL testing, respectively). Models with lower salaries, lowest ART prices, and optimized healthcare worker schedules reduced costs by $100-200 ppy. Costs in a maximally efficient scale-up model were $320/$236 ppy (+/- VL testing). This included $39 for personnel, $106 for ART, $130/$46 for laboratory tests, and $46 for administrative/other costs. A key limitation of this study is its derivation and extrapolation of costs from one large rural treatment program of high CD4+ count individuals. Conclusions In a Ugandan HIV clinic, ART delivery costs—including VL testing—for individuals with CD4>350 were similar to estimates from high-efficiency programs. In higher efficiency scale-up models, costs were substantially lower. These favorable costs may be achieved because high CD4+ count patients are often asymptomatic, facilitating more efficient streamlined ART delivery. Our work provides a framework for calculating costs of efficient ART scale-up models using accessible data from specific programs and regions. PMID:26632823

  4. Optimization of storage tank locations in an urban stormwater drainage system using a two-stage approach.

    PubMed

    Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris

    2017-12-15

    Storage is important for flood mitigation and non-point source pollution control. However, finding a cost-effective design scheme for storage tanks is complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load, and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains a preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios were considered to test its robustness. The results demonstrate that the optimization framework is feasible, and that the optimization converges quickly from the preliminary scheme. The optimized scheme is better than the preliminary scheme at reducing runoff and pollutant loads for a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.
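    To make the AHP ranking step concrete, here is a minimal sketch assuming a single pairwise-comparison matrix over the two indicators; the judgment value (depth weighted three times duration) and the node scores are invented:

```python
import numpy as np

# Minimal AHP sketch for ranking flooding nodes by two indicators
# (flood depth vs. flood duration); all input values are invented.
A = np.array([[1.0, 3.0],
              [1 / 3.0, 1.0]])    # depth judged 3x as important

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                   # normalized priority weights
print(w)                          # ~[0.75, 0.25]

# Composite score per node = w . [depth_score, duration_score]
nodes = {"J1": (0.9, 0.4), "J2": (0.5, 0.8)}
scores = {n: w @ np.array(v) for n, v in nodes.items()}
print(sorted(scores, key=scores.get, reverse=True))   # -> ['J1', 'J2']
```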

  5. Active and Reactive Power Optimal Dispatch Associated with Load and DG Uncertainties in Active Distribution Network

    NASA Astrophysics Data System (ADS)

    Gao, F.; Song, X. H.; Zhang, Y.; Li, J. F.; Zhao, S. S.; Ma, W. Q.; Jia, Z. Y.

    2017-05-01

    In order to reduce the adverse effects of uncertainty on optimal dispatch in an active distribution network, an optimal dispatch model based on chance-constrained programming is proposed in this paper. In this model, the active and reactive power of DG can be dispatched with the aim of reducing the operating cost. The effect of the operation strategy on cost is reflected in the objective, which contains the costs of network loss, DG curtailment, DG reactive power ancillary service, and power quality compensation. At the same time, the probabilistic constraints reflect the degree of operational risk. The optimal dispatch model is then simplified into a series of single-stage models, which avoids a large variable dimension and improves convergence speed. Each single-stage model is solved using a combination of particle swarm optimization (PSO) and the point estimate method (PEM). Finally, the proposed optimal dispatch model and method are verified on the IEEE 33-bus test system.

  6. Development of the IBSAL-SimMOpt Method for the Optimization of Quality in a Corn Stover Supply Chain

    DOE PAGES

    Chavez, Hernan; Castillo-Villar, Krystel; Webb, Erin

    2017-08-01

    Variability in the physical characteristics of feedstock has a significant effect on the reactor's reliability and operating cost. Most of the models developed to optimize biomass supply chains have failed to quantify the effect of biomass quality, and of the preprocessing operations required to meet biomass specifications, on overall cost and performance. The Integrated Biomass Supply Analysis and Logistics (IBSAL) model estimates harvesting, collection, transportation, and storage costs while considering the stochastic behavior of the field-to-biorefinery supply chain. This paper proposes an IBSAL-SimMOpt (Simulation-based Multi-Objective Optimization) method for optimizing the biomass quality and the costs associated with the efforts needed to meet conversion technology specifications. The method is developed in two phases. In the first phase, a SimMOpt tool that interacts with the extended IBSAL is developed. In the second phase, the baseline IBSAL model is extended so that the cost of meeting specifications and/or the penalty for failing to meet them are considered. The IBSAL-SimMOpt method is designed to optimize the quality characteristics of biomass, the cost of activities intended to improve feedstock quality, and the penalty cost. A case study based on 1916 farms in Ontario, Canada is used for testing the proposed method. Analysis of the results demonstrates that the method is able to find a high-quality set of non-dominated solutions.

  8. Multi-level optimization of a beam-like space truss utilizing a continuum model

    NASA Technical Reports Server (NTRS)

    Yates, K.; Gurdal, Z.; Thangjitham, S.

    1992-01-01

    A continuous beam model is developed for approximate analysis of a large, slender, beam-like truss. The model is incorporated in a multi-level optimization scheme for the weight minimization of such trusses. This scheme is tested against traditional optimization procedures for savings in computational cost. Results from both optimization methods are presented for comparison.

  9. Minimization of bovine tuberculosis control costs in US dairy herds

    PubMed Central

    Smith, Rebecca L.; Tauer, Loren W.; Schukken, Ynte H.; Lu, Zhao; Grohn, Yrjo T.

    2013-01-01

    The objective of this study was to minimize the cost of controlling an isolated bovine tuberculosis (bTB) outbreak in a US dairy herd, using a stochastic simulation model of bTB with economic and biological layers. A model optimizer produced a control program that required 2-month testing intervals (TI) with 2 negative whole-herd tests to leave quarantine. This control program minimized both farm and government costs. In all cases, test-and-removal costs were lower than depopulation costs, although the variability in costs increased for farms with high holding costs or small herd sizes. Increasing herd size significantly increased costs for both the farm and the government, while increasing indemnity payments significantly decreased farm costs and increasing testing costs significantly increased government costs. Based on the results of this model, we recommend 2-month testing intervals for herds after an outbreak of bovine tuberculosis, with 2 negative whole-herd tests being sufficient to lift quarantine. A prolonged test-and-cull program may cause a state to lose its bTB-free status during the testing period. When the cost of losing bTB-free status is greater than $1.4 million, depopulation of farms could be preferred over a test-and-cull program. PMID:23953679

  10. Launch Vehicle Propulsion Design with Multiple Selection Criteria

    NASA Technical Reports Server (NTRS)

    Shelton, Joey D.; Frederick, Robert A.; Wilhite, Alan W.

    2005-01-01

    The approach and techniques described herein define an optimization and evaluation approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system. The method uses Monte Carlo simulations, genetic algorithm solvers, a propulsion thermo-chemical code, power series regression curves for historical data, and statistical models in order to optimize a vehicle system. The system, including parameters for engine chamber pressure, area ratio, and oxidizer/fuel ratio, was modeled and optimized to determine the best design for seven separate design weight and cost cases by varying design and technology parameters. Significant model results show that a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Other key findings show the sensitivity of propulsion parameters, technology factors, and cost factors, and how these parameters differ when cost and weight are optimized separately. Each of the three key propulsion parameters (chamber pressure, area ratio, and oxidizer/fuel ratio) is optimized in the seven design cases, and the results are plotted to show the impacts on engine mass and overall vehicle mass.

  11. Decision Models for Determining the Optimal Life Test Sampling Plans

    NASA Astrophysics Data System (ADS)

    Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.

    2010-11-01

    A life test sampling plan is a technique that consists of sampling, inspection, and decision making in determining the acceptance or rejection of a batch of products, by experiments examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only helps producers save testing time and reduce testing cost, but can also positively affect the image of the product and thus attract more consumers to buy it. This paper develops frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans, with an aim of cost minimization, by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions, with both parameters unknown, are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.
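    A hedged sketch of the underlying cost trade-off: choose the failure-count threshold c that minimizes expected loss over a prior on batch quality. The binomial failure summary, the prior, and the costs below are invented stand-ins for the paper's exponential/Weibull lifetime models:

```python
from scipy.stats import binom

# Choosing the failure threshold c in a life test: a batch is good or
# bad a priori; reject when observed failures exceed c.
# All probabilities and costs are illustrative, not from the paper.
n = 20                                 # units put on test
P_GOOD, p_g, p_b = 0.8, 0.02, 0.15     # prior and per-unit failure probs
C_REJECT_GOOD = 300.0                  # cost of scrapping a good batch
C_ACCEPT_BAD = 1000.0                  # warranty cost of a shipped bad batch

def expected_loss(c):
    rej_good = 1 - binom.cdf(c, n, p_g)    # P(reject | good batch)
    acc_bad = binom.cdf(c, n, p_b)         # P(accept | bad batch)
    return (P_GOOD * rej_good * C_REJECT_GOOD
            + (1 - P_GOOD) * acc_bad * C_ACCEPT_BAD)

best_c = min(range(n + 1), key=expected_loss)
print(best_c, round(expected_loss(best_c), 2))   # -> 1, ~49.5
```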

  12. Economic Analysis of Screening Strategies for Rupture of Silicone Gel Breast Implants

    PubMed Central

    Chung, Kevin C.; Malay, Sunitha; Shauver, Melissa J.; Kim, H. Myra

    2012-01-01

    Background In 2006, the U.S. Food and Drug Administration (FDA) recommended screening of all women with silicone gel breast implants with magnetic resonance imaging (MRI) three years after implantation and every two years thereafter to assess implant integrity. The cost of these serial examinations over the lifetime of the breast implants is an added burden to insurance payers and to women. We performed an economic analysis to determine the optimal screening strategies by considering the diagnostic accuracy of the screening tests, the costs of the tests, and the cost of subsequent implant removal. Methods We determined aggregate/pooled values for the sensitivity and specificity of the screening tests, ultrasound (US) and MRI, in detecting silicone breast implant ruptures from data obtained from the published literature. We compiled costs, based on Medicare reimbursements for 2011, for the following elements: imaging modalities, anesthesia, and 3 surgical treatment options for detected ruptures. We used a decision tree to compare three alternative screening strategies, US only, MRI only, and US followed by MRI, in asymptomatic and symptomatic women. Results The cost per rupture of screening and management of rupture with US in asymptomatic women was $1,090, whereas in symptomatic women it was $1,622. The corresponding cost for MRI in asymptomatic women was $2,067, whereas in symptomatic women it was $2,143. The corresponding cost for US followed by MRI in asymptomatic women was $637, whereas in symptomatic women it was $2,908. Conclusion Screening with US followed by MRI was optimal for asymptomatic women, and screening with US alone was optimal for symptomatic women. PMID:22743887
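    The decision-tree comparison reduces to expected-cost arithmetic. The sketch below illustrates it for two of the strategies, with invented prevalence, accuracies, and test prices rather than the paper's Medicare-based inputs:

```python
# Decision-tree sketch comparing screening strategies (illustrative
# sensitivities, specificities, prevalence, and costs; not the paper's).
PREV = 0.08                          # rupture prevalence
US = {"sens": 0.70, "spec": 0.90, "cost": 250.0}
MRI = {"sens": 0.90, "spec": 0.95, "cost": 2000.0}

def us_then_mri(n=10000):
    """US on everyone; MRI only to confirm US-positive cases."""
    ruptured = n * PREV
    tp_us = ruptured * US["sens"]
    fp_us = (n - ruptured) * (1 - US["spec"])
    cost = n * US["cost"] + (tp_us + fp_us) * MRI["cost"]
    detected = tp_us * MRI["sens"]   # both tests must be positive
    return cost / detected           # cost per detected rupture

def mri_only(n=10000):
    return n * MRI["cost"] / (n * PREV * MRI["sens"])

print(f"US->MRI: {us_then_mri():.0f}  MRI only: {mri_only():.0f}")
```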

  13. Fault Tree Based Diagnosis with Optimal Test Sequencing for Field Service Engineers

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; George, Laurence L.; Patterson-Hine, F. A.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    When field service engineers go to customer sites to service equipment, they want to diagnose and repair failures quickly and cost-effectively. Symptoms exhibited by failed equipment frequently suggest several possible causes which require different approaches to diagnosis. This can lead the engineer to follow several fruitless paths in the diagnostic process before finding the actual failure. To assist in this situation, we have developed the Fault Tree Diagnosis and Optimal Test Sequence (FTDOTS) software system that performs automated diagnosis and ranks diagnostic hypotheses based on failure probability and the time or cost required to isolate and repair each failure. FTDOTS first finds a set of possible failures that explain the exhibited symptoms by using a fault tree reliability model as a diagnostic knowledge base, and then ranks the hypothesized failures based on how likely they are and how long it would take or how much it would cost to isolate and repair them. This ordering suggests an optimal sequence for the field service engineer to investigate the hypothesized failures in order to minimize the time or cost required to accomplish the repair task. Previously, field service personnel would arrive at the customer site and choose which components to investigate based on past experience and service manuals. Using FTDOTS running on a portable computer, they can now enter a set of symptoms and get a list of possible failures ordered in an optimal test sequence to help them in their decisions. If facilities are available, the field engineer can connect the portable computer to the malfunctioning device for automated data gathering. FTDOTS is currently being applied to field service of medical test equipment. The techniques are flexible enough to use for many different types of devices. If a fault tree model of the equipment and information about component failure probabilities and isolation times or costs are available, a diagnostic knowledge base for that device can be developed easily.
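    The ranking FTDOTS performs can be illustrated with the classic probability-to-cost ordering, which minimizes expected isolation time under a single-failure independence assumption; the component list below is invented:

```python
# FTDOTS-style ordering sketch: investigate hypothesized failures in
# decreasing probability-to-cost ratio (component data are invented).
candidates = [
    # (name, failure probability given symptoms, isolation time in min)
    ("power supply", 0.40, 30.0),
    ("sensor board", 0.35, 10.0),
    ("cable harness", 0.15, 5.0),
    ("controller",   0.10, 60.0),
]

ordered = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)

expected, elapsed = 0.0, 0.0
for name, prob, minutes in ordered:
    elapsed += minutes
    expected += prob * elapsed       # time accumulated until found

print([c[0] for c in ordered])       # sensor board first, controller last
print(f"expected isolation time: {expected:.1f} min")   # -> 34.2 min
```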

  14. Management decision of optimal recharge water in groundwater artificial recharge conditions- A case study in an artificial recharge test site

    NASA Astrophysics Data System (ADS)

    He, H. Y.; Shi, X. F.; Zhu, W.; Wang, C. Q.; Ma, H. W.; Zhang, W. J.

    2017-11-01

    The city conducted a groundwater artificial recharge test at a typical site, with the purpose of preventing and controlling land subsidence and increasing groundwater resources. To protect groundwater quality and safety, the city chose tap water as the recharge water; however, its high cost is not conducive to the optimal allocation of water resources and is not suitable for wide application. To address this, the city selected two major surface waters, Rivers A and B, as candidate recharge waters and explored their feasibility. Based on a comprehensive analysis of the recharge cost, the water transport distance, the recharge water quality and other factors, the entropy-weight fuzzy comprehensive evaluation method was used to compare tap water with the water of Rivers A and B. The evaluation results show that the water of River B is the optimal recharge water; if it is used, the recharge cost will fall from 0.4724/m3 to 0.3696/m3. Using the entropy-weight fuzzy comprehensive evaluation method to confirm the water of River B as the optimal recharge water is scientific and reasonable. The optimal water management decision can provide technical support for the city to carry out a full-scale groundwater artificial recharge project in the deep aquifer.
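    The entropy-weight step can be sketched in a few lines. The decision matrix below (rows are candidate waters, columns are benefit-oriented criteria) is invented, and the final weighted sum is a simple stand-in for the full fuzzy comprehensive evaluation:

```python
import numpy as np

# Entropy-weight sketch for the recharge-water comparison; the matrix
# is invented (columns: cost score, transport score, quality score,
# all scaled so that higher is better).
X = np.array([[0.6, 0.9, 0.8],    # tap water
              [0.8, 0.5, 0.6],    # River A
              [0.9, 0.6, 0.7]])   # River B

P = X / X.sum(axis=0)                    # normalize each criterion
k = 1.0 / np.log(X.shape[0])
E = -k * (P * np.log(P)).sum(axis=0)     # entropy per criterion
w = (1 - E) / (1 - E).sum()              # entropy weights

scores = X @ w                           # weighted-sum stand-in
print(w.round(3), scores.round(3))       # River B ranks highest here
```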

  15. Benefits and costs of HIV testing.

    PubMed

    Bloom, D E; Glied, S

    1991-06-28

    The benefits and costs of human immunodeficiency virus (HIV) testing in employment settings are examined from two points of view: that of private employers whose profitability may be affected by their testing policies and that of public policy-makers who may affect social welfare through their design of regulations related to HIV testing. The results reveal that HIV testing is clearly not cost-beneficial for most firms, although the benefits of HIV testing may outweigh the costs for some large firms that offer generous fringe-benefit packages and that recruit workers from populations in which the prevalence of HIV infection is high. The analysis also indicates that the testing decisions of unregulated employers are not likely to yield socially optimal economic outcomes and that existing state and federal legislation related to HIV testing in employment settings has been motivated primarily by concerns over social equity.

  16. Strategy of arm movement control is determined by minimization of neural effort for joint coordination.

    PubMed

    Dounskaia, Natalia; Shimansky, Yury

    2016-06-01

    Optimality criteria underlying organization of arm movements are often validated by testing their ability to adequately predict hand trajectories. However, kinematic redundancy of the arm allows production of the same hand trajectory through different joint coordination patterns. We therefore consider movement optimality at the level of joint coordination patterns. A review of studies of multi-joint movement control suggests that a 'trailing' pattern of joint control is consistently observed during which a single ('leading') joint is rotated actively and interaction torque produced by this joint is the primary contributor to the motion of the other ('trailing') joints. A tendency to use the trailing pattern whenever the kinematic redundancy is sufficient and increased utilization of this pattern during skillful movements suggests optimality of the trailing pattern. The goal of this study is to determine the cost function minimization of which predicts the trailing pattern. We show that extensive experimental testing of many known cost functions cannot successfully explain optimality of the trailing pattern. We therefore propose a novel cost function that represents neural effort for joint coordination. That effort is quantified as the cost of neural information processing required for joint coordination. We show that a tendency to reduce this 'neurocomputational' cost predicts the trailing pattern and that the theoretically developed predictions fully agree with the experimental findings on control of multi-joint movements. Implications for future research of the suggested interpretation of the trailing joint control pattern and the theory of joint coordination underlying it are discussed.

  17. Analyzing the Effect of Multi-fuel and Practical Constraints on Realistic Economic Load Dispatch using Novel Two-stage PSO

    NASA Astrophysics Data System (ADS)

    Chintalapudi, V. S.; Sirigiri, Sivanagaraju

    2017-04-01

    In power system restructuring, pricing electrical power plays a vital role in cost allocation between suppliers and consumers. In the optimal power dispatch problem, not only the cost of active power generation but also the cost of reactive power generated by the generators should be considered to increase the effectiveness of the solution. As the characteristics of the reactive power cost curve are similar to those of the active power cost curve, a nonconvex reactive power cost function is formulated. In this paper, a more realistic multi-fuel total cost objective is formulated by considering both the active and reactive power costs of the generators. The formulated cost function is optimized subject to equality, inequality and practical constraints using the proposed uniformly distributed two-stage particle swarm optimization. The proposed algorithm combines a uniform distribution of control variables (to start the iterative process with a good initial value) with a two-stage initialization process (to obtain the best final value in fewer iterations), which enhances the convergence characteristics. Results for the standard test functions and electrical systems considered indicate the effectiveness of the proposed algorithm, which obtains efficient solutions compared with existing methods. Hence, the proposed method is promising and can easily be applied to optimize power system objectives.

  18. Cost Minimization for Joint Energy Management and Production Scheduling Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Shah, Rahul H.

    Production costs account for the largest share of the overall cost of manufacturing facilities. With the U.S. industrial sector becoming more and more competitive, manufacturers are looking for more cost- and resource-efficient working practices. Operations management and production planning have shown their capability to dramatically reduce manufacturing costs and increase system robustness. When implementing operations-related decision making and planning, two fields that have shown to be most effective are maintenance and energy. Unfortunately, the current research that integrates both is limited. Additionally, these studies fail to consider parameter domains and optimization on joint energy- and maintenance-driven production planning. Accordingly, a production planning methodology that considers maintenance and energy is investigated. Two models are presented to achieve a well-rounded operating strategy. The first is a joint energy and maintenance production scheduling model. The second is a cost-per-part model considering maintenance, energy, and production. The proposed methodology involves a Time-of-Use electricity demand response program, buffer and holding capacity, station reliability, production rate, station rated power, and more. In practice, the scheduling problem can be used to determine a joint energy, maintenance, and production schedule. Meanwhile, the cost-per-part model can be used to: (1) test the sensitivity of the obtained optimal production schedule and its corresponding savings by varying key production system parameters; and (2) determine optimal system parameter combinations when using the joint energy, maintenance, and production planning model. Additionally, a factor analysis on the system parameters is conducted, and the corresponding performance of the production schedule under variable parameter conditions is evaluated. Also, parameter optimization guidelines that incorporate maintenance and energy parameter decision making in the production planning framework are discussed. A modified Particle Swarm Optimization solution technique is adopted to solve the proposed scheduling problem. The algorithm is described in detail and compared to Genetic Algorithm. Case studies are presented to illustrate the benefits of using the proposed model and the effectiveness of the Particle Swarm Optimization approach. Numerical experiments are implemented and analyzed to test the effectiveness of the proposed model. The proposed scheduling strategy can achieve savings of around 19 to 27% in cost per part when compared to the baseline scheduling scenarios. By optimizing key production system parameters from the cost-per-part model, the baseline scenarios can obtain around 20 to 35% in savings for the cost per part. These savings further increase by 42 to 55% when system parameter optimization is integrated with the proposed scheduling problem. Using this method, the most influential parameters on the cost per part are the rated power from production, the production rate, and the initial machine reliabilities. The modified Particle Swarm Optimization algorithm allows greater diversity and exploration compared to Genetic Algorithm for the proposed joint model, which makes it more computationally efficient in determining the optimal schedule: while Genetic Algorithm achieved a solution quality of 2,279.63 at an expense of 2,300 seconds of computational effort, the proposed Particle Swarm Optimization algorithm achieved a solution quality of 2,167.26 in less than half that computational effort.
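    For readers unfamiliar with the optimizer, a minimal generic particle swarm loop is sketched below; it is not the author's modified variant, and the cost surface, swarm size, and coefficients are illustrative:

```python
import numpy as np

# Minimal particle swarm optimizer (generic sketch) minimizing a
# smooth stand-in for a cost-per-part surface.
rng = np.random.default_rng(0)

def cost(x):                       # toy surrogate; minimum 2.0 at x = 3
    return ((x - 3.0) ** 2).sum(axis=-1) + 2.0

N, DIM, ITERS = 30, 4, 200
W, C1, C2 = 0.72, 1.49, 1.49       # inertia and acceleration constants
x = rng.uniform(-10, 10, (N, DIM))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), cost(x)
g = pbest[pbest_f.argmin()].copy() # global best position

for _ in range(ITERS):
    r1, r2 = rng.random((N, DIM)), rng.random((N, DIM))
    v = W * v + C1 * r1 * (pbest - x) + C2 * r2 * (g - x)
    x = x + v
    f = cost(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    g = pbest[pbest_f.argmin()].copy()

print(g.round(3), round(float(cost(g)), 3))   # -> near [3 3 3 3], 2.0
```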

  19. Cost-effectiveness of cervical cancer screening with primary human papillomavirus testing in Norway

    PubMed Central

    Burger, E A; Ortendahl, J D; Sy, S; Kristiansen, I S; Kim, J J

    2012-01-01

    Background: New screening technologies and vaccination against human papillomavirus (HPV), the necessary cause of cervical cancer, may impact optimal approaches to prevent cervical cancer. We evaluated the cost-effectiveness of alternative screening strategies to inform cervical cancer prevention guidelines in Norway. Methods: We leveraged the primary epidemiologic and economic data from Norway to contextualise a simulation model of HPV-induced cervical cancer. The current cytology-only screening was compared with strategies involving cytology at younger ages and primary HPV-based screening at older ages (31/34+ years), an option being actively deliberated by the Norwegian government. We varied the switch-age, screening interval, and triage strategies for women with HPV-positive results. Uncertainty was evaluated in sensitivity analysis. Results: Current cytology-only screening was less effective and more costly than strategies that involve switching to primary HPV testing in older ages. For unvaccinated women, switching at age 34 years to primary HPV testing every 4 years was optimal given the Norwegian cost-effectiveness threshold ($83,000 per year of life saved). For vaccinated women, a 6-year screening interval was cost-effective. When we considered a wider range of strategies, we found that an earlier switch to HPV testing (at age 31 years) may be preferred. Conclusions: Strategies involving a switch to HPV testing for primary screening in older women are expected to be cost-effective compared with current recommendations in Norway. PMID:22441643

  20. The application of the Luus-Jaakola direct search method to the optimization of a hybrid renewable energy system

    NASA Astrophysics Data System (ADS)

    Jatzeck, Bernhard Michael

    2000-10-01

    The application of the Luus-Jaakola direct search method to the optimization of stand-alone hybrid energy systems consisting of wind turbine generators (WTGs), photovoltaic (PV) modules, batteries, and an auxiliary generator was examined. The loads for these systems were for agricultural applications, with the optimization conducted on the basis of minimum capital, operating, and maintenance costs. Five systems were considered: two near Edmonton, Alberta, and one each near Lethbridge, Alberta, Victoria, British Columbia, and Delta, British Columbia. The optimization algorithm used hourly data for the load demand, WTG output power per unit area, and PV module output power. These hourly data were in two sets: seasonal (summer and winter values separated) and total (summer and winter values combined). The costs for the WTGs, PV modules, batteries, and auxiliary generator fuel were full market values. To examine the effects of price discounts or tax incentives, these values were lowered to 25% of the full costs for the energy sources and two-thirds of the full cost for agricultural fuel. Annual costs for a renewable energy system depended upon the load, location, component costs, and which data set (seasonal or total) was used. For one Edmonton load, the cost for a renewable energy system consisting of 27.01 m2 of WTG area, 14 PV modules, and 18 batteries (full price, total data set) was $6,873/year. For Lethbridge, a system with 22.85 m2 of WTG area, 47 PV modules, and 5 batteries (reduced prices, seasonal data set) cost $2,913/year. The performance of renewable energy systems based on the obtained results was tested in a simulation using load and weather data for selected days. Test results for one Edmonton load showed that the simulations for most of the systems examined ran for at least 17 hours per day before failing due to either an excessive load on the auxiliary generator or a violated battery constraint. Additional testing indicated that increasing the generator capacity and reducing the maximum allowed battery charge current during the time of day at which these failures occurred allowed the simulation to operate successfully.

  1. An enhancement of ROC curves made them clinically relevant for diagnostic-test comparison and optimal-threshold determination.

    PubMed

    Subtil, Fabien; Rabilloud, Muriel

    2015-07-01

    The receiver operating characteristic (ROC) curves are often used to compare continuous diagnostic tests or to determine the optimal threshold of a test; however, they do not consider the costs of misclassifications or the disease prevalence. The ROC graph was extended to allow for these aspects. Two new lines are added to the ROC graph: a sensitivity line and a specificity line. Their slopes depend on the disease prevalence and on the ratio of the net benefit of treating a diseased subject to the net cost of treating a nondiseased one. First, these lines help researchers determine the range of specificities within which comparison of tests by partial areas under the curves is clinically relevant. Second, the ROC curve point farthest from the specificity line is shown to be the optimal threshold in terms of expected utility. This method was applied: (1) to determine the optimal threshold of the ratio of specific immunoglobulin G (IgG) to total IgG for the diagnosis of congenital toxoplasmosis and (2) to select, between two markers, the more accurate for the diagnosis of left ventricular hypertrophy in hypertensive subjects. The two additional lines transform the statistically valid ROC graph into a clinically relevant tool for test selection and threshold determination. Copyright © 2015 Elsevier Inc. All rights reserved.
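    The threshold rule can be restated as maximizing sensitivity minus a slope times (1 - specificity), where the slope combines prevalence with the benefit/cost ratio. The sketch below uses simulated marker values and invented clinical inputs:

```python
import numpy as np

# Expected-utility optimal ROC threshold: maximize
#   sensitivity - m * (1 - specificity),
# with m combining prevalence and the net benefit/cost ratio.
# Marker distributions and clinical inputs are illustrative.
rng = np.random.default_rng(1)
diseased = rng.normal(2.0, 1.0, 500)     # marker values, diseased
healthy = rng.normal(0.0, 1.0, 500)      # marker values, non-diseased

prev = 0.10
benefit_cost_ratio = 0.25                # net benefit(TP) / net cost(FP)
m = (1 - prev) / prev * benefit_cost_ratio

best_t, best_u = None, -np.inf
for t in np.linspace(-3, 5, 400):
    sens = (diseased >= t).mean()
    spec = (healthy < t).mean()
    u = sens - m * (1 - spec)            # utility up to affine terms
    if u > best_u:
        best_t, best_u = t, u

print(f"optimal threshold ~ {best_t:.2f}")
```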

  2. A thermal vacuum test optimization procedure

    NASA Technical Reports Server (NTRS)

    Kruger, R.; Norris, H. P.

    1979-01-01

    An analytical model was developed that can be used to establish certain parameters of a thermal vacuum environmental test program based on an optimization of program costs. This model is in the form of a computer program that interacts with the user for the input of certain parameters. The program provides the user a list of pertinent information regarding an optimized test program and graphs of some of the parameters. The model is a first attempt in this area and includes numerous simplifications. It appears useful as a general guide and provides a way of extrapolating past performance to future missions.

  3. Testing Optimal Foraging Theory Using Bird Predation on Goldenrod Galls

    ERIC Educational Resources Information Center

    Yahnke, Christopher J.

    2006-01-01

    All animals must make choices regarding what foods to eat, where to eat, and how much time to spend feeding. Optimal foraging theory explains these behaviors in terms of costs and benefits. This laboratory exercise focuses on optimal foraging theory by investigating the winter feeding behavior of birds on the goldenrod gall fly by comparing…

  4. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.

  5. Reagent and labor cost optimization through automation of fluorescence in situ hybridization (FISH) with the VP 2000: an Italian case study.

    PubMed

    Zanatta, Lucia; Valori, Laura; Cappelletto, Eleonora; Pozzebon, Maria Elena; Pavan, Elisabetta; Dei Tos, Angelo Paolo; Merkle, Dennis

    2015-02-01

    In the modern molecular diagnostic laboratory, cost considerations are of paramount importance. Automation of complex molecular assays not only allows a laboratory to accommodate higher test volumes and throughput but also has a considerable impact on the cost of testing, from the perspective of reagent costs as well as hands-on time for skilled laboratory personnel. The following study tracked the cost of labor (hands-on time) and reagents for fluorescence in situ hybridization (FISH) testing in a routine, high-volume pathology and cytogenetics laboratory in Treviso, Italy, over a 2-y period (2011-2013). The laboratory automated FISH testing with the VP 2000 Processor, a deparaffinization, pretreatment, and special staining instrument produced by Abbott Molecular, and compared hands-on time and reagent costs to manual FISH testing. The results indicated significant cost and time savings when automating FISH with the VP 2000 when more than six FISH tests were run per week. At 12 FISH assays per week, an approximate total cost reduction of 55% was observed. When running 46 FISH specimens per week, the cost saving increased to 89% versus manual testing. The results demonstrate that the VP 2000 Processor can significantly reduce the cost of FISH testing in diagnostic laboratories. © 2014 Society for Laboratory Automation and Screening.

  6. Cost-effectiveness of cervical cancer screening in women living with HIV in South Africa: A mathematical modeling study.

    PubMed

    Campos, Nicole G; Lince-Deroche, Naomi; Chibwesha, Carla J; Firnhaber, Cynthia; Smith, Jennifer S; Michelow, Pam; Meyer-Rath, Gesine; Jamieson, Lise; Jordaan, Suzette; Sharma, Monisha; Regan, Catherine; Sy, Stephen; Liu, Gui; Tsu, Vivien; Jeronimo, Jose; Kim, Jane J

    2018-06-15

    Women with HIV face an increased risk of human papillomavirus (HPV) acquisition and persistence, cervical intraepithelial neoplasia, and invasive cervical cancer. Our objective was to determine the cost-effectiveness of different cervical cancer screening strategies among women with HIV in South Africa. We modified a mathematical model of HPV infection and cervical disease to reflect co-infection with HIV. The model was calibrated to epidemiologic data from HIV-infected women in South Africa. Clinical and economic data were drawn from in-country data sources. The model was used to project reductions in the lifetime risk of cervical cancer and incremental cost-effectiveness ratios (ICERs) of Pap and HPV DNA screening and management algorithms beginning at HIV diagnosis, at one-, two-, or three-year intervals. Strategies with an ICER below South Africa's 2016 per capita GDP (US$5,270) were considered 'cost-effective.' HPV testing followed by treatment (test-and-treat) at two-year intervals was the most effective strategy that was also cost-effective, reducing lifetime cancer risk by 56.6% with an ICER of US$3,010 per year of life saved (YLS). Other cost-effective strategies included Pap (referral threshold: HSIL+) at one-, two-, and three-year intervals, and HPV test-and-treat at three-year intervals. Pap (ASCUS+), HPV testing with 16/18 genotyping, and HPV testing with Pap or visual triage of HPV-positive women were less effective and more costly than the alternatives. Considering per capita GDP as the benchmark for cost-effectiveness, HPV test-and-treat is optimal in South Africa. At lower cost-effectiveness benchmarks, Pap (HSIL+) would be optimal.
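    The ICER computation itself is simple arithmetic; here is a hedged sketch with hypothetical per-woman costs and life expectancies (only the US$5,270 threshold is taken from the abstract):

```python
# ICER sketch: incremental cost-effectiveness of a strategy versus the
# next-best alternative, compared against a willingness-to-pay threshold.
GDP_PER_CAPITA = 5270.0     # 2016 South Africa, per the abstract

def icer(cost_new, eff_new, cost_ref, eff_ref):
    """Incremental cost per year of life saved (YLS)."""
    return (cost_new - cost_ref) / (eff_new - eff_ref)

# Hypothetical per-woman lifetime costs and discounted life-years:
ratio = icer(cost_new=750.0, eff_new=24.10, cost_ref=600.0, eff_ref=24.05)
verdict = "cost-effective" if ratio < GDP_PER_CAPITA else "not cost-effective"
print(f"ICER = ${ratio:,.0f}/YLS -> {verdict}")   # -> $3,000/YLS
```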

  7. High Level Rule Modeling Language for Airline Crew Pairing

    NASA Astrophysics Data System (ADS)

    Mutlu, Erdal; Birbil, Ş. Ilker; Bülbül, Kerem; Yenigün, Hüsnü

    2011-09-01

    The crew pairing problem is an airline optimization problem in which a set of least costly pairings (consecutive flights to be flown by a single crew) that covers every flight in a given flight network is sought. A pairing is defined by a very complex set of feasibility rules imposed by international and national regulatory agencies, and also by the airline itself. The cost of a pairing is also defined by complicated rules. When an optimization engine generates a sequence of flights from a given flight network, it has to check all these feasibility rules to determine whether the sequence forms a valid pairing. Likewise, the engine needs to calculate the cost of the pairing by using certain rules. However, the rules used for checking feasibility and calculating costs are usually not static. Furthermore, airline companies carry out what-if-type analyses by testing several alternative scenarios in each planning period. Therefore, embedding the implementation of the feasibility checking and cost calculation rules into the source code of the optimization engine is not a practical approach. In this work, a high-level language called ARUS is introduced for describing the feasibility and cost calculation rules. A compiler for ARUS is also implemented in this work to generate a dynamic link library to be used by crew pairing optimization engines.

  8. Contribution of blood oxygen and carbon dioxide sensing to the energetic optimization of human walking.

    PubMed

    Wong, Jeremy D; O'Connor, Shawn M; Selinger, Jessica C; Donelan, J Maxwell

    2017-08-01

    People can adapt their gait to minimize energetic cost, indicating that walking's neural control has access to ongoing measurements of the body's energy use. In this study we tested the hypothesis that an important source of energetic cost measurements arises from blood gas receptors that are sensitive to O2 and CO2 concentrations. These receptors are known to play a role in regulating other physiological processes related to energy consumption, such as ventilation rate. Given the role of O2 and CO2 in oxidative metabolism, sensing their levels can provide an accurate estimate of the body's total energy use. To test our hypothesis, we simulated an added energetic cost for blood gas receptors that depended on a subject's step frequency and determined if subjects changed their behavior in response to this simulated cost. These energetic costs were simulated by controlling inspired gas concentrations to decrease the circulating levels of O2 and increase CO2. We found this blood gas control to be effective at shifting the step frequency that minimized the ventilation rate and perceived exertion away from the normally preferred frequency, indicating that these receptors provide the nervous system with strong physiological and psychological signals. However, rather than adapt their preferred step frequency toward these lower simulated costs, subjects persevered at their normally preferred frequency even after extensive experience with the new simulated costs. These results suggest that blood gas receptors play a negligible role in sensing energetic cost for the purpose of optimizing gait. NEW & NOTEWORTHY Human gait adaptation implies that the nervous system senses energetic cost, yet this signal is unknown. We tested the hypothesis that the blood gas receptors sense cost for gait optimization by controlling blood O2 and CO2 with step frequency as people walked. At the simulated energetic minimum, ventilation and perceived exertion were lowest, yet subjects preferred walking at their original frequency. This suggests that blood gas receptors are not critical for sensing cost during gait. Copyright © 2017 the American Physiological Society.

  9. Universal Versus Targeted Screening for Lynch Syndrome: Comparing Ascertainment and Costs Based on Clinical Experience.

    PubMed

    Erten, Mujde Z; Fernandez, Luca P; Ng, Hank K; McKinnon, Wendy C; Heald, Brandie; Koliba, Christopher J; Greenblatt, Marc S

    2016-10-01

    Strategies to screen colorectal cancers (CRCs) for Lynch syndrome are evolving rapidly; the optimal strategy remains uncertain. We compared targeted versus universal screening of CRCs for Lynch syndrome. In 2010-2011, we employed targeted screening (age < 60 and/or Bethesda criteria). From 2012 to 2014, we screened all CRCs. Immunohistochemistry for the four mismatch repair proteins was done in all cases, followed by other diagnostic studies as indicated. We modeled the diagnostic costs of detecting Lynch syndrome and estimated the 5-year costs of preventing CRC by colonoscopy screening, using a system dynamics model. Using targeted screening, 51/175 (29%) cancers fit the criteria and were tested by immunohistochemistry; 15/51 (29%, or 8.6% of all CRCs) showed suspicious loss of ≥1 mismatch repair protein. Germline mismatch repair gene mutations were found in 4/4 cases sequenced (11 suspected cases did not have germline testing). Using universal screening, 17/292 (5.8%) screened cancers had abnormal immunohistochemistry suspicious for Lynch syndrome. Germline mismatch repair mutations were found in only 3/10 cases sequenced (7 suspected cases did not have germline testing). The mean cost to identify Lynch syndrome probands was ~$23,333/case for targeted screening and ~$175,916/case for universal screening at our institution. Estimated costs to identify and screen probands and relatives were: targeted, $9,798/case and universal, $38,452/case. In real-world Lynch syndrome management, incomplete clinical follow-up was the major barrier to genetic testing. Targeted screening costs 2- to 7.5-fold less than universal screening and rarely misses Lynch syndrome cases. Future changes in testing costs will likely change the optimal algorithm.

  10. Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Watkins, Edward Francis

    1995-01-01

    A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified on a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found to be feasible and leads to a very substantial improvement in the complexity of optimization problems that can be handled efficiently.
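    The scipy SLSQP routine is a readily available sequential-quadratic-programming solver in the same family as NPSOL; the sketch below poses an invented dose-versus-cost toy problem in its constraint format:

```python
import numpy as np
from scipy.optimize import minimize

# SQP sketch in the spirit of SWAN/NPSOL: minimize a dose proxy
# subject to a cost cap and bounds. The objective and constraint are
# invented stand-ins for the shield model, not taken from the paper.
def dose(x):          # transmitted dose falls with each layer thickness
    return np.exp(-0.8 * x[0]) + 0.5 * np.exp(-1.2 * x[1])

COST_CAP = 10.0
cons = [{"type": "ineq",           # requires fun(x) >= 0, i.e. cost <= cap
         "fun": lambda x: COST_CAP - (3.0 * x[0] + 2.0 * x[1])}]
bounds = [(0.0, 5.0), (0.0, 5.0)]  # layer thickness limits

res = minimize(dose, x0=[1.0, 1.0], method="SLSQP",
               bounds=bounds, constraints=cons)
print(res.x.round(3), round(float(res.fun), 4))
```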

  11. Robust optimization-based DC optimal power flow for managing wind generation uncertainty

    NASA Astrophysics Data System (ADS)

    Boonchuay, Chanwit; Tomsovic, Kevin; Li, Fangxing; Ongsakul, Weerakorn

    2012-11-01

    Integrating wind generation into the wider grid causes a number of challenges to traditional power system operation. Given the relatively large wind forecast errors, congestion management tools based on optimal power flow (OPF) need to be improved. In this paper, a robust optimization (RO)-based DCOPF is proposed to determine the optimal generation dispatch and locational marginal prices (LMPs) for a day-ahead competitive electricity market considering the risk of dispatch cost variation. The basic concept is to use the dispatch to hedge against the possibility of reduced or increased wind generation. The proposed RO-based DCOPF is compared with a stochastic non-linear programming (SNP) approach on a modified PJM 5-bus system. Primary test results show that the proposed DCOPF model can provide lower dispatch cost than the SNP approach.

  12. Application of the stepwise focusing method to optimize the cost-effectiveness of genome-wide association studies with limited research budgets for genotyping and phenotyping.

    PubMed

    Ohashi, J; Clark, A G

    2005-05-01

    The recent cataloguing of a large number of SNPs enables us to perform genome-wide association studies for detecting common genetic variants associated with disease. Such studies, however, generally have limited research budgets for genotyping and phenotyping. It is therefore necessary to optimize the study design by determining the most cost-effective numbers of SNPs and individuals to analyze. In this report we applied the stepwise focusing method, with a two-stage design, developed by Satagopan et al. (2002) and Saito & Kamatani (2002), to optimize the cost-effectiveness of a genome-wide direct association study using a transmission/disequilibrium test (TDT). The stepwise focusing method consists of two steps: a large number of SNPs are examined in the first focusing step, and then all the SNPs showing a significant P-value are tested again using a larger set of individuals in the second focusing step. In the optimization framework, the numbers of SNPs and families and the significance levels in the first and second steps were regarded as the decision variables. Our results showed that the stepwise focusing method achieves a distinct gain in power compared with a conventional method on the same research budget.
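    The budget arithmetic behind the two-stage design can be sketched directly; all counts, the per-genotype price, and the stage-1 significance level below are illustrative:

```python
# Two-stage (screening -> focusing) cost sketch: genotype all SNPs in a
# first sample, carry forward only those significant at alpha1, and
# re-genotype them in a larger second sample. All inputs are invented.
COST_PER_GENOTYPE = 0.10   # currency units per SNP per individual

def two_stage_cost(n_snps, n1, n2, alpha1):
    stage1 = n_snps * n1 * COST_PER_GENOTYPE
    # Under the null, roughly alpha1 of the SNPs pass to stage two.
    stage2 = n_snps * alpha1 * n2 * COST_PER_GENOTYPE
    return stage1 + stage2

one_stage = 100_000 * 1000 * COST_PER_GENOTYPE
print(f"one-stage:  {one_stage:,.0f}")                              # 10,000,000
print(f"two-stage:  {two_stage_cost(100_000, 400, 1000, 0.01):,.0f}")  # 4,100,000
```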

  13. Solving fuzzy shortest path problem by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Syarif, A.; Muludi, K.; Adrian, R.; Gen, M.

    2018-03-01

    The Shortest Path Problem (SPP) is one of the well-studied fields in the areas of Operations Research and Mathematical Optimization. It has been applied to many engineering and management designs. The objective is usually to determine path(s) in the network with minimum total cost or traveling time. In the past, the cost value for each arc was usually assigned or estimated as a deterministic value. For some specific real-world applications, however, it is often difficult to determine the cost value properly. One way of handling such uncertainty in decision making is by introducing a fuzzy approach. In this situation, it becomes difficult to solve the problem optimally. This paper presents investigations on the application of a Genetic Algorithm (GA) to a new SPP model in which the cost values are represented as Triangular Fuzzy Numbers (TFN). We adopt the concept of ranking fuzzy numbers to determine how good the solutions are. Here, by giving his/her degree value, the decision maker can determine the range of the objective value. This would be very valuable for decision support systems in real-world applications. Simulation experiments were carried out by modifying several test problems with 10-25 nodes. It is noted that the proposed approach is capable of attaining a good solution with different degrees of optimism for the tested problems.
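
    The abstract does not spell out the ranking operator, so as one standard choice the sketch below ranks a triangular fuzzy number (a, b, c) by the Liou-Wang total integral value, in which a degree of optimism lam weights the upper side of the number. The fuzzy arc costs are invented.

```python
# Ranking a triangular fuzzy number (a, b, c) by the total integral
# value with an index of optimism lam in [0, 1] (Liou & Wang style);
# the paper's exact ranking operator is not given in the abstract, so
# this is one standard stand-in.
def total_integral_value(tfn, lam=0.5):
    a, b, c = tfn
    return lam * (b + c) / 2.0 + (1.0 - lam) * (a + b) / 2.0

# Compare two fuzzy arc costs under different attitudes:
cheap_risky  = (2.0, 5.0, 12.0)   # low lower bound, wide spread
mid_reliable = (5.0, 6.0, 7.0)    # narrow, predictable
for lam in (0.0, 0.5, 1.0):
    print(lam, total_integral_value(cheap_risky, lam),
               total_integral_value(mid_reliable, lam))
```

    Varying lam flips which arc is preferred, which is exactly the decision-maker degree-of-optimism effect the abstract describes.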

  14. COTSAT Small Spacecraft Cost Optimization for Government and Commercial Use

    NASA Technical Reports Server (NTRS)

    Swank, Aaron J.; Bui, David; Dallara, Christopher; Ghassemieh, Shakib; Hanratty, James; Jackson, Evan; Klupar, Pete; Lindsay, Michael; Ling, Kuok; Mattei, Nicholas

    2009-01-01

    Cost Optimized Test of Spacecraft Avionics and Technologies (COTSAT-1) is an ongoing spacecraft research and development project at NASA Ames Research Center (ARC). The prototype spacecraft, also known as CheapSat, is the first of what could potentially be a series of rapidly produced low-cost spacecraft. The COTSAT-1 team is committed to realizing the challenging goal of building a fully functional spacecraft for $500K parts and $2.0M labor. The project's efforts have resulted in significant accomplishments within the scope of a limited budget and schedule. Completion and delivery of the flight hardware to the Engineering Directorate at NASA Ames occurred in February 2009 and a cost effective qualification program is currently under study. The COTSAT-1 spacecraft is now located at NASA Ames Research Center and is awaiting a cost effective launch opportunity. This paper highlights the advancements of the COTSAT-1 spacecraft cost reduction techniques.

  15. Reliability based design including future tests and multiagent approaches

    NASA Astrophysics Data System (ADS)

    Villanueva, Diane

    The initial stages of reliability-based design optimization involve the formulation of objective functions and constraints, and building a model to estimate the reliability of the design with quantified uncertainties. However, even experienced hands often overlook important objective functions and constraints that affect the design. In addition, uncertainty reduction measures, such as tests and redesign, are often not considered in reliability calculations during the initial stages. This research considers two areas that concern the design of engineering systems: 1) the trade-off of the effect of a test and post-test redesign on reliability and cost and 2) the search for multiple candidate designs as insurance against unforeseen faults in some designs. In this research, a methodology was developed to estimate the effect of a single future test and post-test redesign on reliability and cost. The methodology uses assumed distributions of computational and experimental errors with redesign rules to simulate alternative future test and redesign outcomes to form a probabilistic estimate of the reliability and cost for a given design. Further, it was explored how modeling a future test and redesign provides a company an opportunity to balance development costs versus performance by simultaneously choosing the design and the post-test redesign rules during the initial design stage. The second area of this research considers the use of dynamic local surrogates, or surrogate-based agents, to locate multiple candidate designs. Surrogate-based global optimization algorithms often require search in multiple candidate regions of design space, expending most of the computation needed to define multiple alternate designs. Thus, focusing solely on locating the best design may be wasteful. We extended adaptive sampling surrogate techniques to locate multiple optima by building local surrogates in sub-regions of the design space to identify optima. The efficiency of this method was studied, and the method was compared to other surrogate-based optimization methods that aim to locate the global optimum using two two-dimensional test functions, a six-dimensional test function, and a five-dimensional engineering example.
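
    The first methodology (one future test plus post-test redesign) lends itself to a compact Monte Carlo sketch. Everything below, the error distributions, the redesign trigger, and the redesign rule, is an invented stand-in for the assumed distributions and rules described above.

```python
# Monte Carlo sketch of "one future test + post-test redesign":
# the computed safety factor carries an (assumed) computational error,
# the test measures it with an (assumed) experimental error, and the
# design is revised whenever the measured safety factor falls below a
# redesign trigger. All distributions and rules are invented.
import random

def simulate(n=100_000, s_design=1.30, trigger=1.10,
             sigma_calc=0.10, sigma_test=0.03, redesign_cost=1.0):
    failures, redesigns = 0, 0
    for _ in range(n):
        true_s = s_design + random.gauss(0.0, sigma_calc)   # model error
        measured = true_s + random.gauss(0.0, sigma_test)   # test error
        if measured < trigger:                # post-test redesign rule
            redesigns += 1
            true_s = trigger + abs(random.gauss(0.05, 0.02))
        if true_s < 1.0:                      # in-service failure
            failures += 1
    prob_fail = failures / n
    expected_redesign_cost = redesigns / n * redesign_cost
    return prob_fail, expected_redesign_cost

print(simulate())   # reliability/cost estimate for this design + rule
```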

  16. A novel non-uniform control vector parameterization approach with time grid refinement for flight level tracking optimal control problems.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua

    2018-02-01

    A high-quality control method is essential for the implementation of an aircraft autopilot system. An optimal control problem model considering the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is obtained automatically. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality than the uniform refinement CVP method, at lower computational cost. Two well-known flight level altitude tracking problems and one minimum time cost problem are tested as illustrations, with the uniform refinement control vector parameterization method adopted as the comparative base. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computational cost; meanwhile, the control quality is efficiently improved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
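
    CVP in miniature: the control is piecewise constant on N intervals and the N values become the decision vector of an NLP. The toy plant below is a double integrator tracking a target altitude; the paper's aircraft model, safe-envelope constraints, and HHT-driven non-uniform grid are far richer, and the uniform grid here is only the baseline being refined.

```python
# Control vector parameterization on a uniform grid: u(t) is piecewise
# constant on N intervals and the N values are handed to an NLP solver.
# The "aircraft" is a toy double integrator; all data are invented.
import numpy as np
from scipy.optimize import minimize

N, T, target = 10, 10.0, 100.0
dt = T / N

def objective(u):
    h, v, err = 0.0, 0.0, 0.0
    for uk in u:                        # integrate h'' = u per interval
        v += uk * dt
        h += v * dt
        err += (h - target) ** 2 * dt   # tracking error
    return err + 1e-3 * np.sum(u ** 2)  # small control-effort penalty

res = minimize(objective, x0=np.zeros(N), method="SLSQP",
               bounds=[(-5.0, 5.0)] * N)
print(np.round(res.x, 2))               # optimized piecewise control
```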

  17. The cost of illness attributable to diabetic foot and cost-effectiveness of secondary prevention in Peru.

    PubMed

    Cárdenas, María Kathia; Mirelman, Andrew J; Galvin, Cooper J; Lazo-Porras, María; Pinto, Miguel; Miranda, J Jaime; Gilman, Robert H

    2015-10-26

    Diabetes mellitus is a public health challenge worldwide, and roughly 25% of patients with diabetes in developing countries will develop at least one foot ulcer during their lifetime. The gravest outcome of an ulcerated foot is amputation, leading to premature death and larger economic costs. This study aimed to estimate the economic costs of diabetic foot in high-risk patients in Peru in 2012 and to model the cost-effectiveness of a year-long preventive strategy for foot ulceration including: sub-optimal care (baseline), standard care as recommended by the International Diabetes Federation, and standard care plus daily self-monitoring of foot temperature. A decision tree model using a population prevalence-based approach was used to calculate the costs and the incremental cost-effectiveness ratio (ICER). Outcome measures were deaths and major amputations; uncertainty was tested with a one-way sensitivity analysis. The direct costs for prevention and management with sub-optimal care for high-risk diabetics are around US$74.5 million in a single year, which decreases to US$71.8 million for standard care and increases to US$96.8 million for standard care plus temperature monitoring. The implementation of a standard care strategy would avert 791 deaths and is cost-saving in comparison to sub-optimal care. Standard care plus temperature monitoring, compared to sub-optimal care, averts 1,385 deaths at an ICER of US$16,124 per death averted. Diabetic foot complications are highly costly and largely preventable in Peru. The implementation of a standard care strategy would lead to net savings and avert deaths over a one-year period. More intensive prevention strategies such as incorporating temperature monitoring may also be cost-effective.
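
    The reported ICER follows, up to rounding of the quoted costs, directly from the figures in the abstract:

```python
# ICER of "standard care + temperature monitoring" vs sub-optimal care,
# from the figures quoted above (rounding of the two cost totals
# explains the small gap to the reported US$16,124).
extra_cost = 96.8e6 - 74.5e6         # incremental cost, US$
deaths_averted = 1385
print(extra_cost / deaths_averted)   # ~ US$16,101 per death averted
```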

  18. Real Time Optimal Control of Supercapacitor Operation for Frequency Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Yusheng; Panwar, Mayank; Mohanpurkar, Manish

    2016-07-01

    Supercapacitors are gaining wider application in power systems due to their fast dynamic response. Utilizing supercapacitors through power electronics interfaces for power compensation is a proven, effective technique. For applications such as frequency restoration, however, the cost of supercapacitor maintenance and the energy losses in the power electronics interfaces must be addressed, and it is infeasible to use traditional optimization control methods to mitigate the impacts of frequent cycling. This paper proposes a Front End Controller (FEC) using Generalized Predictive Control featuring real-time receding-horizon optimization. The optimization constraints are based on cost and thermal management to enhance the utilization efficiency of supercapacitors. A rigorous mathematical derivation is conducted, and test results acquired from a Digital Real Time Simulator are provided to demonstrate effectiveness.
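
    As a toy illustration of the receding-horizon optimization at the heart of GPC, the sketch below regulates a scalar plant by re-solving a short-horizon quadratic cost at every step and applying only the first control move. The plant, weights, and horizon are invented; the paper's controller adds the cost- and thermal-management constraints described above.

```python
# Receding-horizon control in one dimension: at each step, optimize a
# short control sequence for a scalar plant, apply only the first move,
# then re-optimize. Plant, weights, and horizon are invented.
import numpy as np
from scipy.optimize import minimize

a_p, b_p = 0.95, 0.5          # x[k+1] = a_p * x[k] + b_p * u[k]
H, r = 8, 0.1                 # horizon length and control weight

def horizon_cost(u, x0):
    x, J = x0, 0.0
    for uk in u:
        x = a_p * x + b_p * uk
        J += x**2 + r * uk**2
    return J

x = 5.0                       # initial deviation (arbitrary units)
for k in range(20):
    res = minimize(horizon_cost, np.zeros(H), args=(x,))
    u0 = res.x[0]             # receding horizon: first move only
    x = a_p * x + b_p * u0
print(round(x, 4))            # deviation driven toward zero
```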

  19. Component Prioritization Schema for Achieving Maximum Time and Cost Benefits from Software Testing

    NASA Astrophysics Data System (ADS)

    Srivastava, Praveen Ranjan; Pareek, Deepak

    Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Deciding when to end software testing is a crucial aspect of any software development project. A premature release will involve risks like undetected bugs, the cost of fixing faults later, and discontented customers. Any software organization would want to achieve the maximum possible benefit from software testing with minimum resources. Testing time and cost need to be optimized for achieving a competitive edge in the market. In this paper, we propose a schema, called the Component Prioritization Schema (CPS), to achieve an effective and uniform prioritization of the software components. This schema serves as an extension to the Non-Homogeneous Poisson Process based Cumulative Priority Model. We also introduce an approach for handling time-intensive versus cost-intensive projects.

  20. Impact of target organ damage assessment in the evaluation of global risk in patients with essential hypertension.

    PubMed

    Viazzi, Francesca; Leoncini, Giovanna; Parodi, Denise; Ratto, Elena; Vettoretti, Simone; Vaccaro, Valentina; Parodi, Angelica; Falqui, Valeria; Tomolillo, Cinzia; Deferrari, Giacomo; Pontremoli, Roberto

    2005-03-01

    Accurate assessment of cardiovascular risk is a key step toward optimizing the treatment of hypertensive patients. We analyzed the impact and cost-effectiveness of routine, thorough assessment of target organ damage (TOD) in evaluating risk profile in hypertension. A total of 380 never-treated patients with essential hypertension underwent routine work-up plus evaluation of albuminuria and ultrasonography of cardiac and vascular structures. The impact of these tests on risk stratification, as indicated by European Society of Hypertension-European Society of Cardiology guidelines, was assessed in light of their cost and sensitivity. The combined use of all of these tests greatly improved the detection of TOD, therefore leading to the identification of a higher percentage of patients who were at high/very high risk, as compared with those who were detected by routine clinical work-up (73% instead of 42%; P < 0.0001). Different signs of TOD only partly cluster within the same subgroup of patients; thus, all three tests should be performed to maximize the sensitivity of the evaluation process. The diagnostic algorithm yielding the lowest cost per detected case of TOD is the search for microalbuminuria, followed by echocardiography and then carotid ultrasonography. Adopting lower cut-off values to define microalbuminuria allows us to optimize further the cost-effectiveness of diagnostic algorithms. In conclusion, because of its low cost and widespread availability, measuring albuminuria is an attractive and cost-effective screening test that is especially suitable as the first step in the large-scale diagnostic work-up of hypertensive patients.

  1. Optimization of PSA screening policies: a comparison of the patient and societal perspectives.

    PubMed

    Zhang, Jingyu; Denton, Brian T; Balasubramanian, Hari; Shah, Nilay D; Inman, Brant A

    2012-01-01

    To estimate the benefit of PSA-based screening for prostate cancer from the patient and societal perspectives. A partially observable Markov decision process model was used to optimize PSA screening decisions. Age-specific prostate cancer incidence rates and the mortality rates from prostate cancer and competing causes were considered. The model trades off the potential benefit of early detection with the cost of screening and loss of patient quality of life due to screening and treatment. PSA testing and biopsy decisions are made based on the patient's probability of having prostate cancer. Probabilities are inferred based on the patient's complete PSA history using Bayesian updating. The results of all PSA tests and biopsies done in Olmsted County, Minnesota, from 1993 to 2005 (11,872 men and 50,589 PSA test results). Patients' perspective: to maximize expected quality-adjusted life years (QALYs); societal perspective: to maximize the expected monetary value based on societal willingness to pay for QALYs and the cost of PSA testing, prostate biopsies, and treatment. From the patient perspective, the optimal policy recommends stopping PSA testing and biopsy at age 76. From the societal perspective, the stopping age is 71. The expected incremental benefit of optimal screening over the traditional guideline of annual PSA screening with threshold 4.0 ng/mL for biopsy is estimated to be 0.165 QALYs per person from the patient perspective and 0.161 QALYs per person from the societal perspective. PSA screening based on traditional guidelines is found to be worse than no screening at all. PSA testing done with traditional guidelines underperforms and therefore underestimates the potential benefit of screening. Optimal screening guidelines differ significantly depending on the perspective of the decision maker.
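
    The Bayesian updating step can be illustrated for a single PSA test; the sensitivity and specificity below are invented placeholders, and the paper updates over the patient's complete PSA history.

```python
# One step of Bayesian updating of the cancer probability after a
# positive PSA test. Sensitivity/specificity values are invented.
def posterior_after_positive(prior, sens, spec):
    p_pos = prior * sens + (1.0 - prior) * (1.0 - spec)
    return prior * sens / p_pos

print(posterior_after_positive(prior=0.05, sens=0.85, spec=0.75))
# ~ 0.152: a positive test roughly triples this prior
```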

  2. Cost-effectiveness analysis of the optimal threshold of an automated immunochemical test for colorectal cancer screening: performances of immunochemical colorectal cancer screening.

    PubMed

    Berchi, Célia; Guittet, Lydia; Bouvier, Véronique; Launoy, Guy

    2010-01-01

    Most industrialized countries, including France, have undertaken to generalize colorectal cancer screening using guaiac fecal occult blood tests (G-FOBT). However, recent researches demonstrate that immunochemical fecal occult blood tests (I-FOBT) are more effective than G-FOBT. Moreover, new generation I-FOBT benefits from a quantitative reading technique allowing the positivity threshold to be chosen, hence offering the best balance between effectiveness and cost. We aimed at comparing the cost and the clinical performance of one round of screening using I-FOBT at different positivity thresholds to those obtained with G-FOBT to determine the optimal cut-off for I-FOBT. Data were derived from an experiment conducted from June 2004 to December 2005 in Calvados (France) where 20,322 inhabitants aged 50-74 years performed both I-FOBT and G-FOBT. Clinical performance was assessed by the number of advanced tumors screened, including large adenomas and cancers. Costs were assessed by the French Social Security Board and included only direct costs. Screening using I-FOBT resulted in better health outcomes and lower costs than screening using G-FOBT for thresholds comprised between 75 and 93 ng/ml. I-FOBT at 55 ng/ml also offers a satisfactory alternative to G-FOBT, because it is 1.8-fold more effective than G-FOBT, without increasing the number of unnecessary colonoscopies, and at an extra cost of 2,519 euros per advanced tumor screened. The use of an automated I-FOBT at 75 ng/ml would guarantee more efficient screening than currently used G-FOBT. Health authorities in industrialized countries should consider the replacement of G-FOBT by an automated I-FOBT test in the near future.

  3. Constellation labeling optimization for bit-interleaved coded APSK

    NASA Astrophysics Data System (ADS)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    This paper investigates constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.
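
    A binary switching algorithm in skeleton form: repeatedly try swapping the bit labels of two constellation points and keep any swap that lowers the cost. The surrogate cost below (Hamming distance weighted by inverse squared Euclidean distance over all symbol pairs) is a common BICM labeling heuristic standing in for the paper's combined Euclidean-distance/mutual-information cost, and a 4-point constellation replaces 32-APSK to keep the sketch small.

```python
# Skeleton binary switching: accept a trial label swap only if it
# lowers the labeling cost. Cost function and constellation are
# simplified stand-ins for the paper's setup.
import itertools, random

def labeling_cost(points, labels):
    c = 0.0
    for i, j in itertools.combinations(range(len(points)), 2):
        d2 = abs(points[i] - points[j]) ** 2       # squared distance
        ham = bin(labels[i] ^ labels[j]).count("1")  # Hamming distance
        c += ham / d2
    return c

def binary_switching(points, labels, iters=5000):
    best = labeling_cost(points, labels)
    for _ in range(iters):
        i, j = random.sample(range(len(points)), 2)
        labels[i], labels[j] = labels[j], labels[i]      # trial swap
        trial = labeling_cost(points, labels)
        if trial < best:
            best = trial
        else:
            labels[i], labels[j] = labels[j], labels[i]  # undo swap
    return labels, best

pts = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]   # toy QPSK-like constellation
print(binary_switching(pts, [0, 1, 2, 3]))
```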

  4. Aero/structural tailoring of engine blades (AERO/STAEBL)

    NASA Technical Reports Server (NTRS)

    Brown, K. W.

    1988-01-01

    This report describes the Aero/Structural Tailoring of Engine Blades (AERO/STAEBL) program, which is a computer code used to perform engine fan and compressor blade aero/structural numerical optimizations. These optimizations seek a blade design of minimum operating cost that satisfies realistic blade design constraints. This report documents the overall program (i.e., input, optimization procedures, approximate analyses) and also provides a detailed description of the validation test cases.

  5. Optimal sequence of tests for the mediastinal staging of non-small cell lung cancer.

    PubMed

    Luque, Manuel; Díez, Francisco Javier; Disdier, Carlos

    2016-01-26

    Non-small cell lung cancer (NSCLC) is the most prevalent type of lung cancer and the most difficult to predict. When there are no distant metastases, the optimal therapy depends mainly on whether there are malignant lymph nodes in the mediastinum. Given the vigorous debate among specialists about which tests should be used, our goal was to determine the optimal sequence of tests for each patient. We have built an influence diagram (ID) that represents the possible tests, their costs, and their outcomes. This model is equivalent to a decision tree containing millions of branches. In the first evaluation, we only took into account the clinical outcomes (effectiveness). In the second, we used a willingness-to-pay of € 30,000 per quality adjusted life year (QALY) to convert economic costs into effectiveness. We assigned a second-order probability distribution to each parameter in order to conduct several types of sensitivity analysis. Two strategies were obtained using two different criteria. When considering only effectiveness, a positive computed tomography (CT) scan must be followed by a transbronchial needle aspiration (TBNA), an endobronchial ultrasound (EBUS), and an endoscopic ultrasound (EUS). When the CT scan is negative, a positron emission tomography (PET), EBUS, and EUS are performed. If the TBNA or the PET is positive, then a mediastinoscopy is performed only if the EBUS and EUS are negative. If the TBNA or the PET is negative, then a mediastinoscopy is performed only if the EBUS and the EUS give contradictory results. When taking into account economic costs, a positive CT scan is followed by a TBNA; an EBUS is done only when the CT scan or the TBNA is negative. This recommendation of performing a TBNA in certain cases should be discussed by the pneumology community because TBNA is a cheap technique that could avoid an EBUS, an expensive test, for many patients. We have determined the optimal sequence of tests for the mediastinal staging of NSCLC by considering sensitivity, specificity, and the economic cost of each test. The main novelty of our study is the recommendation of performing TBNA whenever the CT scan is positive. Our model is publicly available so that different experts can populate it with their own parameters and re-examine its conclusions. It is therefore proposed as an evidence-based instrument for reaching a consensus.
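
    The willingness-to-pay conversion used in the second evaluation is the standard net-monetary-benefit transformation; the notation below is assumed, not taken from the paper.

```latex
% Net monetary benefit: economic cost folded into effectiveness with a
% willingness-to-pay of 30,000 EUR per QALY; the evaluation then picks
% the test sequence s maximizing NMB(s). Notation assumed.
\[
  \mathrm{NMB}(s) \;=\; \lambda\, E(s) \;-\; C(s),
  \qquad \lambda = 30{,}000\ \text{EUR}/\text{QALY},
\]
% where E(s) is the expected QALYs and C(s) the expected economic cost
% of test sequence s.
```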

  6. Development and optimization of a stove-powered thermoelectric generator

    NASA Astrophysics Data System (ADS)

    Mastbergen, Dan

    Almost a third of the world's population still lacks access to electricity. Most of these people use biomass stoves for cooking which produce significant amounts of wasted thermal energy, but no electricity. Less than 1% of this energy in the form of electricity would be adequate for basic tasks such as lighting and communications. However, an affordable and reliable means of accomplishing this is currently nonexistent. The goal of this work is to develop a thermoelectric generator to convert a small amount of wasted heat into electricity. Although this concept has been around for decades, previous attempts have failed due to insufficient analysis of the system as a whole, leading to ineffective and costly designs. In this work, a complete design process is undertaken including concept generation, prototype testing, field testing, and redesign/optimization. Detailed component models are constructed and integrated to create a full system model. The model encompasses the stove operation, thermoelectric module, heat sinks, charging system and battery. A 3000 cycle endurance test was also conducted to evaluate the effects of operating temperature, module quality, and thermal interface quality on the generator's reliability, lifetime and cost effectiveness. The results from this testing are integrated into the system model to determine the lowest system cost in $/Watt over a five year period. Through this work the concept of a stove-based thermoelectric generator is shown to be technologically and economically feasible. In addition, a methodology is developed for optimizing the system for specific regional stove usage habits.

  7. Development of optimized, graded-permeability axial groove heat pipes

    NASA Technical Reports Server (NTRS)

    Kapolnek, Michael R.; Holmes, H. Rolland

    1988-01-01

    Heat pipe performance can usually be improved by uniformly varying or grading wick permeability from end to end. A unique and cost effective method for grading the permeability of an axial groove heat pipe is described - selective chemical etching of the pipe casing. This method was developed and demonstrated on a proof-of-concept test article. The process improved the test article's performance by 50 percent. Further improvement is possible through the use of optimally etched grooves.

  8. Optimal public rationing and price response.

    PubMed

    Grassi, Simona; Ma, Ching-To Albert

    2011-12-01

    We study optimal public health care rationing and private sector price responses. Consumers differ in their wealth and illness severity (defined as treatment cost). Due to a limited budget, some consumers must be rationed. Rationed consumers may purchase from a monopolistic private market. We consider two information regimes. In the first, the public supplier rations consumers according to their wealth information (means testing). In equilibrium, the public supplier must ration both rich and poor consumers. Rationing some poor consumers implements price reduction in the private market. In the second information regime, the public supplier rations consumers according to consumers' wealth and cost information. In equilibrium, consumers are allocated the good if and only if their costs are below a threshold (cost effectiveness). Rationing based on cost results in higher equilibrium consumer surplus than rationing based on wealth. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Portuguese Family Physicians’ Awareness of Diagnostic and Laboratory Test Costs: A Cross-Sectional Study

    PubMed Central

    Sá, Luísa; Costa-Santos, Cristina; Teixeira, Andreia; Couto, Luciana; Costa-Pereira, Altamiro; Hespanhol, Alberto; Santos, Paulo; Martins, Carlos

    2015-01-01

    Background Physicians’ ability to make cost-effective decisions has been shown to be affected by their knowledge of health care costs. This study assessed whether Portuguese family physicians are aware of the costs of the most frequently prescribed diagnostic and laboratory tests. Methods A cross-sectional study was conducted in a representative sample of Portuguese family physicians, using computer-assisted telephone interviews for data collection. A Likert scale was used to assess physicians’ level of agreement with four statements about health care costs. Family physicians were also asked to estimate the costs of diagnostic and laboratory tests. Each physician’s cost estimate was compared with the true cost and the absolute error was calculated. Results One-quarter (24%; 95% confidence interval: 23%–25%) of all cost estimates were accurate to within 25% of the true cost, with 55% (95% CI: 53–56) overestimating and 21% (95% CI: 20–22) underestimating the true cost. The majority (76%) of family physicians thought they did not have or were uncertain as to whether they had adequate knowledge of diagnostic and laboratory test costs, and only 7% reported receiving adequate education. The majority of the family physicians (82%) said that they had adequate access to information about the diagnostic and laboratory test costs. Thirty-three percent thought that costs did not influence their decision to order tests, while 27% were uncertain. Conclusions Portuguese family physicians have limited awareness of diagnostic and laboratory test costs, and our results demonstrate a need for improved education in this area. Further research should focus on identifying whether interventions in cost knowledge actually change ordering behavior, on identifying optimal methods to disseminate cost information, and on improving the cost-effectiveness of care. PMID:26356625

  10. A simulation-optimization model for water-resources management, Santa Barbara, California

    USGS Publications Warehouse

    Nishikawa, Tracy

    1998-01-01

    In times of drought, the local water supplies of the city of Santa Barbara, California, are insufficient to satisfy water demand. In response, the city has built a seawater desalination plant and gained access to imported water in 1997. Of primary concern to the city is delivering water from the various sources at a minimum cost while satisfying water demand and controlling seawater intrusion that might result from the overpumping of ground water. A simulation-optimization model has been developed for the optimal management of Santa Barbara's water resources. The objective is to minimize the cost of water supply while satisfying various physical and institutional constraints such as meeting water demand, maintaining minimum hydraulic heads at selected sites, and not exceeding water-delivery or pumping capacities. The model is formulated as a linear programming problem with monthly management periods and a total planning horizon of 5 years. The decision variables are water deliveries from surface water (Gibraltar Reservoir, Cachuma Reservoir, Cachuma Reservoir cumulative annual carryover, Mission Tunnel, State Water Project, and desalinated seawater) and ground water (13 production wells). The state variables are hydraulic heads. Basic assumptions for all simulations are that (1) the cost of water varies with source but is fixed over time, and (2) only existing or planned city wells are considered; that is, the construction of new wells is not allowed. The drought of 1947-51 is Santa Barbara's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. Assumptions that were made for this base case include a head constraint equal to sea level at the coastal nodes; Cachuma Reservoir carryover of 3,000 acre-feet per year, with a maximum carryover of 8,277 acre-feet; a maximum annual demand of 15,000 acre-feet; and average monthly capacities for the Cachuma and the Gibraltar Reservoirs. The base-case results indicate that water demands can be met, with little water required from the most expensive water source (desalinated seawater), at a total cost of $5.56 million over the 5-year planning horizon. The simulation model has drains, which operate as nonlinear functions of heads and could affect the model solutions. However, numerical tests show that the drains have little effect on the optimal solution. Sensitivity analyses on the base case yield the following results: If allowable Cachuma Reservoir carryover is decreased by about 50 percent, then costs increase by about 14 percent; if the peak demand is decreased by 7 percent, then costs will decrease by about 14 percent; if the head constraints are loosened to -30 feet, then the costs decrease by about 18 percent; if the heads are constrained such that a zero hydraulic gradient condition occurs at the ocean boundary, then the optimization problem does not have a solution; if the capacity of the desalination plant is constrained to zero acre-feet, then the cost increases by about 2 percent; and if the carryover of State Water Project water is implemented, then the cost decreases by about 0.5 percent.
Four additional monthly diversion distribution scenarios for the reservoirs were tested: average monthly Cachuma Reservoir deliveries with the actual (scenario 1) and proposed (scenario 2) monthly distributions of Gibraltar Reservoir water, and variable monthly Cachuma Reservoir deliveries with the actual (scenario 3) and proposed (scenario 4) monthly distributions of Gibraltar Reservoir water. Scenario 1 resulted in a total cost of about $7.55 million, scenario 2 resulted in a total cost of about $5.07 million, and scenarios 3 and 4 resulted in a total cost of about $4.53 million. Sensitivities of scenarios 1 and 2 to desalination-plant capacity and State Water Project water carryover were tested. The scenario 1 sensitivity analysis indicated that incorpo
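
    A one-month, three-source toy version of the delivery LP shows the structure. All costs and capacities below are invented; the real model has six surface sources, 13 wells, 60 monthly periods, and head constraints coupled to a ground-water simulation.

```python
# Toy one-month delivery LP: meet demand from three sources at minimum
# cost subject to per-source capacities. Data are invented.
from scipy.optimize import linprog

c = [50.0, 300.0, 1500.0]      # $/acre-ft: reservoir, wells, desalination
demand = 1250.0                # acre-ft for the month
A_eq, b_eq = [[1.0, 1.0, 1.0]], [demand]
bounds = [(0.0, 900.0), (0.0, 500.0), (0.0, 600.0)]   # source capacities
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, res.fun)          # cheapest mix: reservoir first, desal last
```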

  11. Toward an Integration of Deep Learning and Neuroscience

    PubMed Central

    Marblestone, Adam H.; Wayne, Greg; Kording, Konrad P.

    2016-01-01

    Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses. PMID:27683554

  12. Cost-effectiveness of interventions to prevent alcohol-related disease and injury in Australia.

    PubMed

    Cobiac, Linda; Vos, Theo; Doran, Christopher; Wallace, Angela

    2009-10-01

    To evaluate cost-effectiveness of eight interventions for reducing alcohol-attributable harm and determine the optimal intervention mix. Interventions include volumetric taxation, advertising bans, an increase in minimum legal drinking age, licensing controls on operating hours, brief intervention (with and without general practitioner telemarketing and support), drink driving campaigns, random breath testing and residential treatment for alcohol dependence (with and without naltrexone). Cost-effectiveness is modelled over the life-time of the Australian population in 2003, with all costs and health outcomes evaluated from an Australian health sector perspective. Each intervention is compared with current practice, and the most cost-effective options are then combined to determine the optimal intervention mix. Cost-effectiveness is measured in 2003 Australian dollars per disability adjusted life year averted. Although current alcohol intervention in Australia (random breath testing) is cost-effective, if the current spending of $71 million could be invested in a more cost-effective combination of interventions, more than 10 times the amount of health gain could be achieved. Taken as a package of interventions, all seven preventive interventions would be a cost-effective investment that could lead to substantial improvement in population health; only residential treatment is not cost-effective. Based on current evidence, interventions to reduce harm from alcohol are highly recommended. The potential reduction in costs of treating alcohol-related diseases and injuries mean that substantial improvements in population health can be achieved at a relatively low cost to the health sector. © 2009 The Authors. Journal compilation © 2009 Society for the Study of Addiction.

  13. Rocket Design for the Future

    NASA Technical Reports Server (NTRS)

    Follett, William W.; Rajagopal, Raj

    2001-01-01

    The focus of the AA MDO team is to reduce product development cost through the capture and automation of best design and analysis practices and through increasing the availability of low-cost, high-fidelity analysis. Implementation of robust designs reduces costs associated with the Test-Fail-Fix cycle. RD is currently focusing on several technologies to improve the design process, including optimization and robust design, expert and rule-based systems, and collaborative technologies.

  14. Costs of unstructured investigation of unexplained syncope: insights from a micro-costing analysis of the observational PICTURE registry.

    PubMed

    Edvardsson, Nils; Wolff, Claudia; Tsintzos, Stelios; Rieger, Guido; Linker, Nicholas J

    2015-07-01

    The observational PICTURE (Place of Reveal In the Care pathway and Treatment of patients with Unexplained Recurrent Syncope) registry enrolled 570 patients with unexplained syncope, documented their care pathway and the various tests they underwent before the insertion of an implantable loop recorder (ILR). The aims were to describe the extent and cost of diagnostic tests performed before the implant. Actual costs of 17 predefined diagnostic tests were characterized based on a combination of data from PICTURE and a micro-costing study performed at a medium-sized UK university hospital in the UK. The median cost of diagnostic tests per patient was £1114 (95% CI £995-£1233). As many patients received more than the median number of tests, the mean expenditure per patient was higher with £1613 (95% CI £1494-£1732), and for 10% of the patients the cost exceeded £3539. Tests were frequently repeated, and early use of specific and expensive tests was common. In the 12% of patients with types of tests entirely within the recommendations for an initial evaluation before ILR implant, the mean cost was £710. Important opportunities to reduce test-related costs before an ILR implant were identified, e.g. by more appropriate use of tests recommended in the initial evaluation, by decreasing repetition of tests, and by avoiding early use of specialized and expensive tests. A structured multidisciplinary approach would be the best model to achieve an optimal outcome. © The Author 2015. Published by Oxford University Press on behalf of the European Society of Cardiology.

  16. Development and optimization of a new culture media using extruded bean as nitrogen source.

    PubMed

    Batista, Karla A; Fernandes, Kátia F

    2015-01-01

    The composition of a culture medium is one of the most important parameters to be analyzed in biotechnological processes with industrial purposes, because around 30-40% of production costs are estimated to be accounted for by the cost of the growth medium [1]. Since medium optimization using a one-factor-at-a-time approach is time-consuming, expensive, and often leads to misinterpretation of results, statistical experimental design has been applied to medium optimization for growth and metabolite production [2-5]. In this scenario, the use of mixture design to develop a culture medium containing a cheaper nitrogen source seems more appropriate and simple. The focus of this work is therefore to present a detailed description of the steps involved in the development of an optimized culture medium containing extruded bean as nitrogen source.•In previous work we tested the development of new culture media based on the composition of YPD medium, aiming to reduce bioprocess costs as well as to improve biomass production and heterologous expression.•The developed medium was tested for growth of Saccharomyces cerevisiae and Pichia pastoris (GS 115).•Culture media containing extruded bean as the sole nitrogen source showed better biomass production and protein expression than the standard YPD medium.

  17. Optimization applications in aircraft engine design and test

    NASA Technical Reports Server (NTRS)

    Pratt, T. K.

    1984-01-01

    Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable, since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.

  18. Costs of novel tuberculosis diagnostics--will countries be able to afford it?

    PubMed

    Pantoja, Andrea; Kik, Sandra V; Denkinger, Claudia M

    2015-04-01

    Four priority target product profiles for the development of diagnostic tests for tuberculosis were identified: 1) rapid sputum-based test (RSP), 2) non-sputum biomarker-based test (BMT), 3) triage test followed by confirmatory test (TT), and 4) drug-susceptibility testing (DST). We assessed the cost of the new tests in suitable strategies and of the conventional diagnosis of tuberculosis as per World Health Organization guidelines, in 36 high tuberculosis and MDR burden countries. Costs were then compared to the available funding for tuberculosis at country level. The cost of diagnosing tuberculosis using RSP ranged from US$93 to 187 million/year; at an RSP unit cost of US$2-4, this would be lower than or similar to the conventional strategy with sputum smear microscopy (US$119 million/year). Using BMT (with a unit cost of US$2-4) would cost US$70-121 million/year, lower than or comparable to conventional diagnostics. Using a TT with the TPP characteristics (unit cost of US$1-2) followed by Xpert would reduce diagnostic costs by up to US$36 million/year. Costs of using the different novel DST strategies for the diagnosis of drug resistance would be higher compared with conventional diagnosis. Introducing a TT or a biomarker test with optimal characteristics would be affordable from a cost and affordability perspective at the currently available funding for tuberculosis. Additional domestic or donor funding would be needed in most countries to achieve affordability for the other new diagnostic tests. © The Author 2014. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. Good Manufacturing Practices (GMP) manufacturing of advanced therapy medicinal products: a novel tailored model for optimizing performance and estimating costs.

    PubMed

    Abou-El-Enein, Mohamed; Römhild, Andy; Kaiser, Daniel; Beier, Carola; Bauer, Gerhard; Volk, Hans-Dieter; Reinke, Petra

    2013-03-01

    Advanced therapy medicinal products (ATMP) have gained considerable attention in academia due to their therapeutic potential. Good Manufacturing Practice (GMP) principles ensure the quality and sterility of manufacturing these products. We developed a model for estimating the manufacturing costs of cell therapy products and optimizing the performance of academic GMP facilities. The "Clean-Room Technology Assessment Technique" (CTAT) was tested prospectively in the GMP facility of BCRT, Berlin, Germany, then retrospectively in the GMP facility of the University of California-Davis, California, USA. CTAT is a two-level model: level one identifies operational (core) processes and measures their fixed costs; level two identifies production (supporting) processes and measures their variable costs. The model comprises several tools to measure and optimize the performance of these processes. Manufacturing costs were itemized using an adjusted micro-costing system. CTAT identified GMP activities with strong correlation to the manufacturing process of cell-based products. Building best practice standards allowed for performance improvement and elimination of human errors. The model also demonstrated the unidirectional dependencies that may exist among the core GMP activities. When compared to traditional business models, the CTAT assessment resulted in a more accurate allocation of annual expenses. The estimated expenses were used to set a fee structure for both GMP facilities. A mathematical equation was also developed to provide the final product cost. CTAT can be a useful tool in estimating accurate costs for ATMPs manufactured in an optimized GMP process. These estimates are useful when analyzing the cost-effectiveness of these novel interventions. Copyright © 2013 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
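
    The abstract does not reproduce the final-cost equation, but CTAT's two-level split implies the familiar fixed-plus-variable shape sketched below; the notation is assumed, not taken from the paper.

```latex
% Generic two-level cost shape implied by CTAT's core/supporting split
% (the paper's exact equation is not given in the abstract): annual
% fixed costs spread over yearly batch capacity, plus per-batch
% variable costs.
\[
  C_{\text{product}} \;=\; \frac{\sum_{k} F_{k}}{N_{\text{batches}}}
  \;+\; \sum_{i} V_{i} ,
\]
% with F_k the fixed costs of the core processes, N_batches the number
% of manufacturing runs per year, and V_i the variable costs of one run.
```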

  20. The Inactivation Principle: Mathematical Solutions Minimizing the Absolute Work and Biological Implications for the Planning of Arm Movements

    PubMed Central

    Berret, Bastien; Darlot, Christian; Jean, Frédéric; Pozzo, Thierry; Papaxanthis, Charalambos; Gauthier, Jean Paul

    2008-01-01

    An important question in the literature focusing on motor control is to determine which laws drive biological limb movements. This question has prompted numerous investigations analyzing arm movements in both humans and monkeys. Many theories assume that among all possible movements the one actually performed satisfies an optimality criterion. In the framework of optimal control theory, a first approach is to choose a cost function and test whether the proposed model fits with experimental data. A second approach (generally considered as the more difficult) is to infer the cost function from behavioral data. The cost proposed here includes a term called the absolute work of forces, reflecting the mechanical energy expenditure. Contrary to most investigations studying optimality principles of arm movements, this model has the particularity of using a cost function that is not smooth. First, a mathematical theory related to both direct and inverse optimal control approaches is presented. The first theoretical result is the Inactivation Principle, according to which minimizing a term similar to the absolute work implies simultaneous inactivation of agonistic and antagonistic muscles acting on a single joint, near the time of peak velocity. The second theoretical result is that, conversely, the presence of non-smoothness in the cost function is a necessary condition for the existence of such inactivation. Second, during an experimental study, participants were asked to perform fast vertical arm movements with one, two, and three degrees of freedom. Observed trajectories, velocity profiles, and final postures were accurately simulated by the model. In accordance, electromyographic signals showed brief simultaneous inactivation of opposing muscles during movements. Thus, assuming that human movements are optimal with respect to a certain integral cost, the minimization of an absolute-work-like cost is supported by experimental observations. Such types of optimality criteria may be applied to a large range of biological movements. PMID:18949023
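
    The absolute-work term that makes the cost non-smooth can be written as follows; the notation is assumed from the description above, not copied from the paper.

```latex
% Absolute-work term: mechanical energy expenditure with the absolute
% value supplying the non-smoothness that the Inactivation Principle
% requires. Notation assumed.
\[
  A(u) \;=\; \int_{0}^{T} \sum_{i}
  \left|\, \tau_i(t)\, \dot{q}_i(t) \,\right| \, \mathrm{d}t ,
\]
% with tau_i the net muscular torque at joint i and \dot{q}_i its
% angular velocity; the integrand vanishes during the simultaneous
% agonist-antagonist inactivation near peak velocity.
```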

  1. Microhole Test Data

    DOE Data Explorer

    Su, Jiann

    2016-05-23

    Drilling results from the microhole project at the Sandia High Operating Temperature test facility. The project is seeking to help reduce the cost of exploration and monitoring of geothermal wells and formations by drilling smaller holes. The tests were part of a control algorithm development to optimize the weight-on-bit (WOB) used during drilling with a percussive hammer.

  2. Costs, equity, efficiency and feasibility of identifying the poor in Ghana's National Health Insurance Scheme: empirical analysis of various strategies.

    PubMed

    Aryeetey, Genevieve Cecilia; Jehu-Appiah, Caroline; Spaan, Ernst; Agyepong, Irene; Baltussen, Rob

    2012-01-01

    To analyse the costs and evaluate the equity, efficiency and feasibility of four strategies to identify poor households for premium exemptions in Ghana's National Health Insurance Scheme (NHIS): means testing (MT), proxy means testing (PMT), participatory wealth ranking (PWR) and geographic targeting (GT) in urban, rural and semi-urban settings in Ghana. We conducted the study in 145-147 households per setting with MT as our gold standard strategy. We estimated total costs that included costs of household surveys and cost of premiums paid to the poor, efficiency (cost per poor person identified), equity (number of true poor excluded) and the administrative feasibility of implementation. The cost of exempting one poor individual ranged from US$15.87 to US$95.44; exclusion of the poor ranged between 0% and 73%. MT was most efficient and equitable in rural and urban settings with low-poverty incidence; GT was efficient and equitable in the semi-urban setting with high-poverty incidence. PMT and PWR were less equitable and inefficient although feasible in some settings. We recommend MT as optimal strategy in low-poverty urban and rural settings and GT as optimal strategy in high-poverty semi-urban setting. The study is relevant to other social and developmental programmes that require identification and exemptions of the poor in low-income countries. © 2011 Blackwell Publishing Ltd.

  3. Test plan for the soils facility demonstration: A petroleum contaminated soil bioremediation facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lombard, K.H.

    1994-08-01

    The objectives of this test plan are to show the value added by using bioremediation as an effective and environmentally sound method to remediate petroleum contaminated soils (PCS) by: demonstrating bioremediation as a permanent method for remediating soils contaminated with petroleum products; establishing the best operating conditions for maximizing bioremediation and minimizing volatilization for SRS PCS during different seasons; determining the minimum set of analyses and sampling frequency to allow efficient and cost-effective operation; determining best use of existing site equipment and personnel to optimize facility operations and conserve SRS resources; and as an ancillary objective, demonstrating and optimizing newmore » and innovative analytical techniques that will lower cost, decrease time, and decrease secondary waste streams for required PCS assays.« less

  4. Decision Modeling in Sleep Apnea: The Critical Roles of Pretest Probability, Cost of Untreated Obstructive Sleep Apnea, and Time Horizon.

    PubMed

    Moro, Marilyn; Westover, M Brandon; Kelly, Jessica; Bianchi, Matt T

    2016-03-01

    Obstructive sleep apnea (OSA) is associated with increased morbidity and mortality, and treatment with positive airway pressure (PAP) is cost-effective. However, the optimal diagnostic strategy remains a subject of debate. Prior modeling studies have not consistently supported the widely held assumption that home sleep testing (HST) is cost-effective. We modeled four strategies: (1) treat no one; (2) treat everyone empirically; (3) treat those testing positive during in-laboratory polysomnography (PSG) via in-laboratory titration; and (4) treat those testing positive during HST with auto-PAP. The population was assumed to lack independent reasons for in-laboratory PSG (such as insomnia, periodic limb movements in sleep, complex apnea). We considered the third-party payer perspective, via both standard (quality-adjusted) and pure cost methods. The preferred strategy depended on three key factors: pretest probability of OSA, cost of untreated OSA, and time horizon. At low prevalence and low cost of untreated OSA, the treat no one strategy was favored, whereas empiric treatment was favored for high prevalence and high cost of untreated OSA. In-laboratory backup for failures in the at-home strategy increased the preference for the at-home strategy. Without laboratory backup in the at-home arm, the in-laboratory strategy was increasingly preferred at longer time horizons. Using a model framework that captures a broad range of clinical possibilities, the optimal diagnostic approach to uncomplicated OSA depends on pretest probability, cost of untreated OSA, and time horizon. Estimating each of these critical factors remains a challenge warranting further investigation. © 2016 American Academy of Sleep Medicine.
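
    A pure-cost toy version of the four-strategy comparison makes the three drivers visible. Every unit cost and the HST sensitivity below are invented placeholders for the model's inputs; the real model also scores quality-adjusted life years.

```python
# Toy pure-cost comparison of the four strategies as a function of
# pretest probability p, per-year cost of untreated OSA, and horizon.
# All unit costs and the HST sensitivity are invented placeholders.
def expected_costs(p, untreated_per_yr, years,
                   c_psg=1000.0, c_hst=300.0, c_pap_per_yr=800.0,
                   sens_hst=0.90):
    burden = untreated_per_yr * years          # cost if OSA goes untreated
    pap = c_pap_per_yr * years                 # cost of staying on PAP
    return {
        "treat_none": p * burden,
        "treat_all":  pap,
        "in_lab_psg": c_psg + p * pap,         # PSG assumed fully accurate
        "home_hst":   (c_hst + p * sens_hst * pap
                       + p * (1.0 - sens_hst) * burden),  # missed cases
    }

for p in (0.1, 0.5, 0.9):
    for u in (200.0, 3000.0):                  # cheap vs costly untreated OSA
        costs = expected_costs(p, untreated_per_yr=u, years=5)
        print(p, u, min(costs, key=costs.get))
```

    With these placeholder numbers, "treat no one" wins whenever untreated disease is cheap, while raising the untreated cost or the pretest probability shifts the optimum toward home testing or empiric treatment, mirroring the dependence reported above.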

  5. Do Men and Women Need to Be Screened Differently with Fecal Immunochemical Testing? A Cost-Effectiveness Analysis.

    PubMed

    Meulen, Miriam P van der; Kapidzic, Atija; Leerdam, Monique E van; van der Steen, Alex; Kuipers, Ernst J; Spaander, Manon C W; de Koning, Harry J; Hol, Lieke; Lansdorp-Vogelaar, Iris

    2017-08-01

    Background: Several studies suggest that test characteristics for the fecal immunochemical test (FIT) differ by gender, triggering a debate on whether men and women should be screened differently. We used the microsimulation model MISCAN-Colon to evaluate whether screening stratified by gender is cost-effective. Methods: We estimated gender-specific FIT characteristics based on first-round positivity and detection rates observed in a FIT screening pilot (CORERO-1). Subsequently, we used the model to estimate harms, benefits, and costs of 480 gender-specific FIT screening strategies and compared them with uniform screening. Results: Biennial FIT screening from ages 50 to 75 was less effective in women than men [35.7 vs. 49.0 quality-adjusted life years (QALY) gained, respectively] at higher costs (€42,161 vs. -€5,471, respectively). However, the incremental QALYs gained and costs of annual screening compared with biennial screening were more similar for both genders (8.7 QALYs gained and €26,394 for women vs. 6.7 QALYs gained and €20,863 for men). Considering all evaluated screening strategies, optimal gender-based screening yielded at most 7% more QALYs gained than optimal uniform screening and even resulted in equal costs and QALYs gained from a willingness-to-pay threshold of €1,300. Conclusions: FIT screening is less effective in women, but the incremental cost-effectiveness is similar in men and women. Consequently, screening stratified by gender is not more cost-effective than uniform FIT screening. Impact: Our conclusions support the current policy of uniform FIT screening. Cancer Epidemiol Biomarkers Prev; 26(8); 1328-36. ©2017 American Association for Cancer Research.

  6. Comparing genetic algorithm and particle swarm optimization for solving capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Iswari, T.; Asih, A. M. S.

    2018-04-01

    In the logistics system, transportation plays an important role in connecting every element in the supply chain, but it can also produce the greatest cost. Therefore, it is important to make the transportation cost as low as possible. Reducing the transportation cost can be done in several ways. One of the ways to minimize the transportation cost is by optimizing the routing of its vehicles, which refers to the Vehicle Routing Problem (VRP). The most common type of VRP is the Capacitated Vehicle Routing Problem (CVRP). In CVRP, the vehicles have their own capacity and the total demands from the customers should not exceed the capacity of the vehicle. CVRP belongs to the class of NP-hard problems, which are complex to solve, such that exact algorithms become highly time-consuming as problem sizes increase. Thus, for large-scale problem instances, as typically found in industrial applications, finding an optimal solution is not practicable. Therefore, this paper uses two metaheuristic approaches, Genetic Algorithm and Particle Swarm Optimization, to solve CVRP. This paper compares the results of both algorithms and examines the performance of each. The results show that both algorithms perform well in solving CVRP but still need to be improved. From algorithm testing and a numerical example, Genetic Algorithm yields a better solution than Particle Swarm Optimization in total distance travelled.
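
    A minimal GA for a CVRP toy instance, sketched under assumptions: a chromosome is a customer permutation decoded greedily into capacity-feasible routes, and the operators are textbook tournament selection, order crossover, and swap mutation. The instance data are invented and none of this reproduces the paper's exact configuration.

```python
# Toy GA for CVRP: permutation chromosome, greedy capacity-based route
# split, tournament selection, order crossover (OX), swap mutation.
import math, random

depot = (0.0, 0.0)
customers = [(2, 3), (5, 1), (6, 6), (1, 7), (8, 3), (3, 8), (7, 7), (4, 4)]
demand = [3, 2, 4, 2, 3, 3, 2, 1]
capacity = 8

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def decode_and_cost(perm):
    total, load, prev = 0.0, 0, depot
    for c in perm:
        if load + demand[c] > capacity:       # start a new route
            total += dist(prev, depot)
            load, prev = 0, depot
        total += dist(prev, customers[c])
        load += demand[c]
        prev = customers[c]
    return total + dist(prev, depot)

def order_crossover(p1, p2):
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child]  # preserve p2's order
    for k in range(len(p1)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

pop = [random.sample(range(len(customers)), len(customers)) for _ in range(60)]
for _ in range(300):
    nxt = []
    for _ in range(len(pop)):
        a, b = (min(random.sample(pop, 3), key=decode_and_cost)
                for _ in "ab")                # tournament selection
        child = order_crossover(a, b)
        if random.random() < 0.2:             # swap mutation
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]
        nxt.append(child)
    pop = nxt
best = min(pop, key=decode_and_cost)
print(best, round(decode_and_cost(best), 2))
```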

  7. Economic environmental dispatch using BSA algorithm

    NASA Astrophysics Data System (ADS)

    Jihane, Kartite; Mohamed, Cherkaoui

    2018-05-01

The economic environmental dispatch problem (EED) is an important issue, especially in the field of fossil-fuel power plant systems. It allows the network manager to choose, among the available units, the dispatch that is most optimized in terms of fuel cost and emission level. The objective of this paper is to minimize the fuel cost subject to an emissions constraint; the test is conducted for two cases, a six-generator unit system and a ten-generator unit system, at the same power demand of 1200 MW. The simulation was computed in MATLAB, and the results show the robustness of the Backtracking Search Optimization Algorithm (BSA) and the impact of the load demand on the emissions.
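
    The dispatch objective itself can be stated compactly. The following sketch minimizes a quadratic fuel-cost function under a power-balance equality and an emission cap, using SciPy's SLSQP in place of the paper's BSA metaheuristic; all coefficients are invented, three units are used instead of six or ten, and the real study was run in MATLAB.

    ```python
    # Minimal sketch of the economic-emission dispatch objective (invented
    # quadratic coefficients), solved with SciPy's SLSQP rather than BSA.
    import numpy as np
    from scipy.optimize import minimize

    # Fuel cost: a + b*P + c*P^2 ($/h); emission: al + be*P + ga*P^2 (kg/h)
    a, b, c = np.array([100, 120, 90]), np.array([2.0, 1.8, 2.2]), np.array([0.01, 0.012, 0.008])
    al, be, ga = np.array([10, 12, 9]), np.array([0.05, 0.04, 0.06]), np.array([1e-3, 2e-3, 1.5e-3])
    DEMAND = 400.0            # MW
    E_MAX = 130.0             # kg/h emission cap (assumed)
    BOUNDS = [(50, 200)] * 3  # unit operating limits, MW

    fuel = lambda P: float(np.sum(a + b * P + c * P**2))
    emis = lambda P: float(np.sum(al + be * P + ga * P**2))

    res = minimize(
        fuel,
        x0=np.array([130.0, 130.0, 140.0]),
        bounds=BOUNDS,
        constraints=[
            {"type": "eq", "fun": lambda P: np.sum(P) - DEMAND},   # power balance
            {"type": "ineq", "fun": lambda P: E_MAX - emis(P)},    # emission cap
        ],
        method="SLSQP",
    )
    print(res.x, fuel(res.x), emis(res.x))
    ```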

  8. Optimizing bulk milk dioxin monitoring based on costs and effectiveness.

    PubMed

    Lascano-Alcoser, V H; Velthuis, A G J; van der Fels-Klerx, H J; Hoogenboom, L A P; Oude Lansink, A G J M

    2013-07-01

Dioxins are environmental pollutants, potentially present in milk products, which have negative consequences for human health and for the firms and farms involved in the dairy chain. Dioxin monitoring in feed and food has been implemented to detect their presence and estimate their levels in food chains. However, the costs and effectiveness of such programs have not been evaluated. In this study, the costs and effectiveness of bulk-milk dioxin monitoring in milk trucks were estimated to optimize the sampling and pooling monitoring strategies aimed at detecting at least 1 contaminated dairy farm out of 20,000 at a target dioxin concentration level. Incidents of different proportions, in terms of the number of contaminated farms, and of different concentrations were simulated. A combined testing strategy, consisting of screening and confirmatory methods, was assumed, as well as testing of pooled samples. Two optimization models were built using linear programming. The first model aimed to minimize monitoring costs subject to a minimum required effectiveness of finding an incident, whereas the second model aimed to maximize effectiveness for a given monitoring budget. Our results show that a high level of effectiveness is possible, but at high costs. Given specific assumptions, monitoring with 95% effectiveness to detect an incident of 1 contaminated farm at a dioxin concentration of 2 pg of toxic equivalents/g of fat [the European Commission's (EC) action level] costs €2.6 million per month. At the same level of effectiveness, a 73% cost reduction is possible when aiming to detect an incident in which 2 farms are contaminated at a dioxin concentration of 3 pg of toxic equivalents/g of fat (the EC maximum level). With a fixed budget of €40,000 per month, the probability of detecting an incident with a single contaminated farm at a dioxin concentration equal to the EC action level is 4.4%. This probability almost doubled (8.0%) when aiming to detect the same incident but with a dioxin concentration equal to the EC maximum level. This study shows that the effectiveness of finding an incident depends not only on the ratio at which collected truck samples are mixed into a pooled sample for testing (aimed at detecting a certain concentration), but also on the number of truck samples collected. In conclusion, the optimal cost-effective monitoring depends on the number of contaminated farms and the concentration targeted for detection. The models and study results offer quantitative support to risk managers of food industries and food safety authorities. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
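
    A stripped-down version of the first model's logic (minimize cost subject to a required effectiveness) can be illustrated by brute-force enumeration rather than the paper's linear programs. Every cost figure and the simple detection model below are assumptions for illustration only.

    ```python
    # Minimal sketch (not the paper's LP models): brute-force search over
    # pooling strategies to minimize monitoring cost subject to a required
    # probability of detecting one contaminated farm out of 20,000.
    N_FARMS = 20_000
    COST_SAMPLE = 15.0      # euros per truck sample collected (assumed)
    COST_TEST = 400.0       # euros per screening test on a pooled sample (assumed)
    REQUIRED_EFFECTIVENESS = 0.95

    def detection_probability(n_samples, pool_size, max_pool):
        """P(detect): the contaminated farm's milk must be sampled AND the
        pooled dilution must stay detectable (pool_size <= max_pool)."""
        if pool_size > max_pool:
            return 0.0
        return min(1.0, n_samples / N_FARMS)

    best = None
    for n_samples in range(1000, N_FARMS + 1, 1000):
        for pool_size in (5, 10, 20, 40):
            n_tests = -(-n_samples // pool_size)   # ceiling division
            cost = n_samples * COST_SAMPLE + n_tests * COST_TEST
            eff = detection_probability(n_samples, pool_size, max_pool=20)
            if eff >= REQUIRED_EFFECTIVENESS and (best is None or cost < best[0]):
                best = (cost, n_samples, pool_size, eff)

    print(best)  # cheapest strategy meeting the effectiveness target
    ```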

  9. A methodology to guide the selection of composite materials in a wind turbine rotor blade design process

    NASA Astrophysics Data System (ADS)

    Bortolotti, P.; Adolphs, G.; Bottasso, C. L.

    2016-09-01

This work is concerned with the development of an optimization methodology for the composite materials used in wind turbine blades. The goal of the approach is to guide designers in selecting the different materials of the blade, while providing composite manufacturers with indications of optimal trade-offs between mechanical properties and material costs. The method works by using a parametric material model and including its free parameters amongst the design variables of a multi-disciplinary wind turbine optimization procedure. The proposed method is tested on the structural redesign of a conceptual 10 MW wind turbine blade, with its spar cap and shell skin laminates subjected to optimization. The procedure identifies a blade optimum with a new spar cap laminate characterized by a higher longitudinal Young's modulus and higher cost than the initial one, which nevertheless induces both cost and mass savings in the blade. For the shell skin, the adoption of a laminate with properties intermediate between a bi-axial and a tri-axial laminate also leads to slight structural improvements.

  10. Underestimation of Project Costs

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2015-01-01

Large projects almost always exceed their budgets. Estimating cost is difficult, and estimated costs are usually too low. Three different reasons are suggested: bad luck, over-optimism, and deliberate underestimation. Project management can usually point to project difficulty and complexity, technical uncertainty, stakeholder conflicts, scope changes, unforeseen events, and other not-really-unpredictable bad luck. Project planning is usually over-optimistic, so the likelihood and impact of bad luck are systematically underestimated. Project plans reflect optimism and hope for success in a supposedly unique new effort rather than rational expectations based on historical data. Past project problems are claimed to be irrelevant because "This time it's different." Some bad luck is inevitable and reasonable optimism is understandable, but deliberate deception must be condemned. In a competitive environment, project planners and advocates often deliberately underestimate costs to help gain project approval and funding. Project benefits, cost savings, and probability of success are exaggerated, and key risks are ignored. Project advocates have incentives to distort information and conceal difficulties from project approvers. One naively suggested cure is more openness, honesty, and group adherence to shared overall goals. A more realistic alternative is threatening overrun projects with cancellation. Neither approach seems to solve the problem. A better method to avoid the delusions of over-optimism and the deceptions of biased advocacy is to base the project cost estimate on the actual costs of a large group of similar projects. Over-optimism and deception can continue beyond the planning phase and into project execution. Hard milestones based on verified tests and demonstrations can provide a reality check.

  11. Variable fidelity robust optimization of pulsed laser orbital debris removal under epistemic uncertainty

    NASA Astrophysics Data System (ADS)

    Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan

    2016-04-01

A variable-fidelity robust optimization method for pulsed laser orbital debris removal (LODR) under uncertainty is proposed. The Dempster-Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization. The method optimizes the performance while at the same time maximizing its belief value. A population-based multi-objective optimization (MOO) algorithm, based on a steepest-descent-like strategy with proper orthogonal decomposition (POD), is used to search for robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust-region-based fidelity management is designed to reduce the computational cost caused by the expensive model: when solutions fall inside the trust region, the cheaper analytical model is used. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It is shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.

  12. Cost analysis of oxygen recovery systems

    NASA Technical Reports Server (NTRS)

    Yakut, M. M.

    1973-01-01

The design and development of equipment for flight use in earth-orbital programs, when approached in a cost-effective manner, proceed through the following logical progression: (1) bench testing of breadboard designs, (2) fabrication and evaluation of prototype equipment, (3) redesign to meet flight-imposed requirements, and (4) qualification and testing of a flight-ready system. Each of these steps is intended to produce the basic design information necessary to progress to the next step, and the cost of each step is normally substantially less than that of the following step. An evaluation of the cost elements involved in each of the steps and their impact on total program cost is presented. Cost analyses of four leading oxygen recovery subsystems are described: two carbon dioxide reduction subsystems, Sabatier and Bosch, and two water electrolysis subsystems, the solid polymer electrolyte and the circulating KOH electrolyte.

  13. Composite panel development at JPL

    NASA Technical Reports Server (NTRS)

    Mcelroy, Paul; Helms, Rich

    1988-01-01

Parametric computer studies can be used in a cost-effective manner to determine optimized composite mirror panel designs. An InterDisciplinary computer Model (IDM) was created to aid in the development of high-precision reflector panels for LDR. The material properties, thermal responses, structural geometries, and radio/optical precision are synergistically analyzed for specific panel designs. Promising panel designs are fabricated and tested so that comparison with panel test results can be used to verify performance-prediction models and accommodate design refinement. The iterative approach of computer design and model refinement, with performance testing and materials optimization, has shown good results for LDR panels.

  14. Economic optimization of natural hazard protection - conceptual study of existing approaches

    NASA Astrophysics Data System (ADS)

    Spackova, Olga; Straub, Daniel

    2013-04-01

Risk-based planning of protection measures against natural hazards has become common practice in many countries. The selection procedure aims at identifying an economically efficient strategy with regard to the estimated costs and risk (i.e., expected damage). A correct setting of the evaluation methodology and decision criteria should ensure an optimal selection of the portfolio of risk protection measures under a limited state budget. To demonstrate the efficiency of investments, indicators such as the Benefit-Cost Ratio (BCR), Marginal Costs (MC), or Net Present Value (NPV) are commonly used. However, the methodologies for efficiency evaluation differ amongst countries and hazard types (floods, earthquakes, etc.), and several inconsistencies can be found in how the indicators are applied in practice. This is likely to lead to a suboptimal selection of protection strategies. This study provides a general formulation for optimization of natural hazard protection measures from a socio-economic perspective, assuming that all costs and risks can be expressed in monetary values. The study regards the problem as a discrete hierarchical optimization, where the state level sets the criteria and constraints, while the actual optimization is made at the regional level (towns, catchments) when designing particular protection measures and selecting the optimal protection level. The study shows that in the case of an unlimited budget the task is quite trivial, as it is sufficient to optimize the protection measures in individual regions independently (by minimizing the sum of risk and cost). However, if the budget is limited, the need arises for an optimal allocation of resources amongst the regions. To ensure this, the state can require minimum values of BCR or MC, which must be achieved in each region. The study investigates the meaning of these indicators in the optimization task at the conceptual level and compares their suitability. To illustrate the theoretical findings, the indicators are tested on a hypothetical example of five regions with different risk levels. Last but not least, political and societal aspects and limitations in the use of the risk-based optimization framework are discussed.
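
    The role of the indicators can be made concrete with a toy calculation. The sketch below, with invented numbers, chooses a protection level by minimizing cost plus residual risk and then shows how a state-imposed minimum BCR can change the regional choice.

    ```python
    # Minimal sketch (invented numbers): choosing a protection level per
    # region by minimizing expected cost + residual risk, then screening the
    # candidates with a Benefit-Cost Ratio (BCR) threshold as a state level
    # might under a limited budget.
    candidate_levels = [
        # (annualized cost of measure, annual expected damage with measure)
        (0.0, 10.0),   # do nothing: no cost, full risk (all in M EUR/yr)
        (2.0, 5.5),
        (4.0, 2.5),
        (7.0, 1.5),
    ]

    def total_cost(cost, risk):
        return cost + risk

    def bcr(cost, risk, baseline_risk):
        """Benefit = risk reduction relative to doing nothing."""
        return float("inf") if cost == 0 else (baseline_risk - risk) / cost

    baseline = candidate_levels[0][1]
    # Unlimited budget: each region simply minimizes cost + risk.
    best = min(candidate_levels, key=lambda cr: total_cost(*cr))
    print("optimal level:", best, "total:", total_cost(*best))

    # Limited budget: the state may require BCR >= threshold in every region.
    for threshold in (1.0, 1.5, 2.0):
        admissible = [cr for cr in candidate_levels if bcr(*cr, baseline) >= threshold]
        chosen = min(admissible, key=lambda cr: total_cost(*cr))
        print(f"BCR >= {threshold}: choose {chosen}")
    ```

    With these numbers the unconstrained optimum is the third level, but a BCR threshold of 2.0 forces the region down to the second level, illustrating how a state-level criterion can distort the regional optimum.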

  15. Stochastic injection-strategy optimization for the preliminary assessment of candidate geological storage sites

    NASA Astrophysics Data System (ADS)

    Cody, Brent M.; Baù, Domenico; González-Nicolás, Ana

    2015-09-01

Geological carbon sequestration (GCS) has been identified as having the potential to reduce increasing atmospheric concentrations of carbon dioxide (CO2). However, a global impact will only be achieved if GCS is cost-effectively and safely implemented on a massive scale. This work presents a computationally efficient methodology for identifying optimal injection strategies at candidate GCS sites having uncertainty associated with caprock permeability, effective compressibility, and aquifer permeability. A multi-objective evolutionary optimization algorithm is used to heuristically determine non-dominated solutions between the following two competing objectives: (1) maximize mass of CO2 sequestered and (2) minimize project cost. A semi-analytical algorithm is used to estimate CO2 leakage mass rather than a numerical model, enabling the study of GCS sites having vastly different domain characteristics. The stochastic optimization framework presented herein is applied to a feasibility study of GCS in a brine aquifer in the Michigan Basin (MB), USA. Eight optimization test cases are performed to investigate the impact of decision-maker (DM) preferences on Pareto-optimal objective-function values and carbon-injection strategies. This analysis shows that the feasibility of GCS at the MB test site is highly dependent upon the DM's risk-aversion preference and the degree of uncertainty associated with caprock integrity. Finally, large gains in computational efficiency achieved using parallel processing and archiving are discussed.

  16. Thermal-Aware Test Access Mechanism and Wrapper Design Optimization for System-on-Chips

    NASA Astrophysics Data System (ADS)

    Yu, Thomas Edison; Yoneda, Tomokazu; Chakrabarty, Krishnendu; Fujiwara, Hideo

    Rapid advances in semiconductor manufacturing technology have led to higher chip power densities, which places greater emphasis on packaging and temperature control during testing. For system-on-chips, peak power-based scheduling algorithms have been used to optimize tests under specified power constraints. However, imposing power constraints does not always solve the problem of overheating due to the non-uniform distribution of power across the chip. This paper presents a TAM/Wrapper co-design methodology for system-on-chips that ensures thermal safety while still optimizing the test schedule. The method combines a simplified thermal-cost model with a traditional bin-packing algorithm to minimize test time while satisfying temperature constraints. Furthermore, for temperature checking, thermal simulation is done using cycle-accurate power profiles for more realistic results. Experiments show that even a minimal sacrifice in test time can yield a considerable decrease in test temperature as well as the possibility of further lowering temperatures beyond those achieved using traditional power-based test scheduling.

  17. Determination of the optimal mesh parameters for Iguassu centrifuge flow and separation calculations

    NASA Astrophysics Data System (ADS)

    Romanihin, S. M.; Tronin, I. V.

    2016-09-01

We present the method and the results of the determination of optimal computational mesh parameters for axisymmetric modeling of flow and separation in the Iguassu gas centrifuge. The aim of this work was to determine mesh parameters that provide relatively low computational cost without loss of accuracy. We use a direct-search optimization algorithm to calculate the optimal mesh parameters. The obtained parameters were tested by calculating the optimal working regime of the Iguassu GC. The separative power calculated using the optimal mesh parameters differs by less than 0.5% from the result obtained on the detailed mesh. The presented method can be used to determine optimal mesh parameters for the Iguassu GC at different rotor speeds.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lissner, Daniel N.; Edward, Lovelace C.

    The purpose of the Free Flow Power (FFP) Water-to-Wire Project (Project) was to evaluate and optimize the performance, environmental compatibility, and cost factors of FFP hydrokinetic turbines through design analyses and deployments in test flumes and riverine locations.

  19. Cost-Effectiveness of Screening Individuals With Cystic Fibrosis for Colorectal Cancer.

    PubMed

    Gini, Andrea; Zauber, Ann G; Cenin, Dayna R; Omidvari, Amir-Houshang; Hempstead, Sarah E; Fink, Aliza K; Lowenfels, Albert B; Lansdorp-Vogelaar, Iris

    2017-12-27

Individuals with cystic fibrosis are at increased risk of colorectal cancer (CRC) compared to the general population, and risk is higher among those who received an organ transplant. We performed a cost-effectiveness analysis to determine optimal CRC screening strategies for patients with cystic fibrosis. We adjusted the existing Microsimulation Screening Analysis-Colon microsimulation model to reflect the increased CRC risk and lower life expectancy in patients with cystic fibrosis. Modeling was performed separately for individuals who never received an organ transplant and patients who had received an organ transplant. We modeled 76 colonoscopy screening strategies that varied the age range and screening interval. The optimal screening strategy was determined based on a willingness-to-pay threshold of $100,000 per life-year gained. Sensitivity and supplementary analyses were performed, including fecal immunochemical test (FIT) as an alternative test, earlier ages of transplantation, and increased rates of colonoscopy complications, to assess whether optimal screening strategies would change. Colonoscopy every 5 years, starting at age 40 years, was the optimal colonoscopy strategy for patients with cystic fibrosis who never received an organ transplant; this strategy prevented 79% of deaths from CRC. Among patients with cystic fibrosis who had received an organ transplant, optimal colonoscopy screening should start at an age of 30 or 35 years, depending on the patient's age at time of transplantation. Annual FIT screening was predicted to be cost-effective for patients with cystic fibrosis. However, the level of accuracy of the FIT in this population is not clear. Using a Microsimulation Screening Analysis-Colon microsimulation model, we found screening of patients with cystic fibrosis for CRC to be cost-effective. Due to the higher risk of CRC in these patients, screening should start at an earlier age with a shorter screening interval. The findings of this study (especially those on FIT screening) may be limited by the restricted evidence available for patients with cystic fibrosis. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.

  20. Demand side management in recycling and electricity retail pricing

    NASA Astrophysics Data System (ADS)

    Kazan, Osman

This dissertation addresses several problems from the recycling industry and the electricity retail market. The first paper addresses a real-life scheduling problem faced by a national industrial recycling company. Based on the company's practices, a scheduling problem is defined, modeled, and analyzed, and a solution is approximated efficiently. The recommended application is tested on real-life data and randomly generated data, and the scheduling improvements and financial benefits are presented. The second problem is from the electricity retail market. There are well-known patterns in daily usage by hour; these patterns change in shape and magnitude with the seasons and the days of the week. Generation costs are several times higher during the peak hours of the day, yet most consumers purchase electricity at flat rates. This work explores analytic pricing tools that help retailers reduce peak electricity demand. For that purpose, a nonlinear model that determines optimal hourly prices is established based on two major components, unit generation costs and consumers' utility; both are analyzed and estimated empirically in the third paper. A pricing model is introduced to maximize the electric retailer's profit, and a closed-form expression for the optimal price vector is obtained. Possible scenarios are evaluated for the consumers' utility distribution, and for the general case we provide a numerical solution methodology to obtain the optimal pricing scheme. The recommended models are tested under various scenarios that consider consumer segmentation and multiple pricing policies, and they reduce the peak load significantly in most cases. Several utility companies offer hourly pricing to their customers, determining prices using historical data of unit electricity cost over time. In this dissertation we develop a nonlinear model that determines optimal hourly prices with parameter estimation. The last paper includes a regression analysis of the unit generation cost function obtained from Independent Service Operators. A consumer experiment is established to replicate the peak-load behavior; as a result, the consumers' utility function is estimated and optimal retail electricity prices are computed.

  1. Cost Effectiveness of Screening Individuals With Cystic Fibrosis for Colorectal Cancer.

    PubMed

    Gini, Andrea; Zauber, Ann G; Cenin, Dayna R; Omidvari, Amir-Houshang; Hempstead, Sarah E; Fink, Aliza K; Lowenfels, Albert B; Lansdorp-Vogelaar, Iris

    2018-02-01

    Individuals with cystic fibrosis are at increased risk of colorectal cancer (CRC) compared with the general population, and risk is higher among those who received an organ transplant. We performed a cost-effectiveness analysis to determine optimal CRC screening strategies for patients with cystic fibrosis. We adjusted the existing Microsimulation Screening Analysis-Colon model to reflect increased CRC risk and lower life expectancy in patients with cystic fibrosis. Modeling was performed separately for individuals who never received an organ transplant and patients who had received an organ transplant. We modeled 76 colonoscopy screening strategies that varied the age range and screening interval. The optimal screening strategy was determined based on a willingness to pay threshold of $100,000 per life-year gained. Sensitivity and supplementary analyses were performed, including fecal immunochemical test (FIT) as an alternative test, earlier ages of transplantation, and increased rates of colonoscopy complications, to assess if optimal screening strategies would change. Colonoscopy every 5 years, starting at an age of 40 years, was the optimal colonoscopy strategy for patients with cystic fibrosis who never received an organ transplant; this strategy prevented 79% of deaths from CRC. Among patients with cystic fibrosis who had received an organ transplant, optimal colonoscopy screening should start at an age of 30 or 35 years, depending on the patient's age at time of transplantation. Annual FIT screening was predicted to be cost-effective for patients with cystic fibrosis. However, the level of accuracy of the FIT in this population is not clear. Using a Microsimulation Screening Analysis-Colon model, we found screening of patients with cystic fibrosis for CRC to be cost effective. Because of the higher risk of CRC in these patients, screening should start at an earlier age with a shorter screening interval. The findings of this study (especially those on FIT screening) may be limited by restricted evidence available for patients with cystic fibrosis. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.

  2. Limitations and mechanisms influencing the migratory performance of soaring birds

    Treesearch

    Tricia A. Miller; Brooks Robert P.; Michael J. Lanzone; David Brandes; Jeff Cooper; Junior A. Tremblay; Jay Wilhelm; Adam Duerr; Todd E. Katzner

    2016-01-01

    Migration is costly in terms of time, energy and safety. Optimal migration theory suggests that individual migratory birds will choose between these three costs depending on their motivation and available resources. To test hypotheses about use of migratory strategies by large soaring birds, we used GPS telemetry to track 18 adult, 13 sub-adult and 15 juvenile Golden...

  3. Culture Moderates Biases in Search Decisions.

    PubMed

    Pattaratanakun, Jake A; Mak, Vincent

    2015-08-01

Prior studies suggest that people often search insufficiently in sequential-search tasks compared with the predictions of benchmark optimal strategies that maximize expected payoff. However, those studies were mostly conducted in individualist Western cultures; Easterners from collectivist cultures, with their higher susceptibility to escalation of commitment induced by sunk search costs, could exhibit a reversal of this undersearch bias by searching more than optimally, but only when search costs are high. We tested our theory in four experiments. In our pilot experiment, participants generally undersearched when the search cost was low, but only Eastern participants oversearched when the search cost was high. In Experiments 1 and 2, we obtained evidence for our hypothesized effects via a cultural-priming manipulation of bicultural participants in which we manipulated the language used in the program interface. We obtained further process evidence for our theory in Experiment 3, in which we made sunk costs nonsalient in the search task; as expected, the cross-cultural effects were largely mitigated. © The Author(s) 2015.

  4. Parameter Optimization for Turbulent Reacting Flows Using Adjoints

    NASA Astrophysics Data System (ADS)

    Lapointe, Caelan; Hamlington, Peter E.

    2017-11-01

    The formulation of a new adjoint solver for topology optimization of turbulent reacting flows is presented. This solver provides novel configurations (e.g., geometries and operating conditions) based on desired system outcomes (i.e., objective functions) for complex reacting flow problems of practical interest. For many such problems, it would be desirable to know optimal values of design parameters (e.g., physical dimensions, fuel-oxidizer ratios, and inflow-outflow conditions) prior to real-world manufacture and testing, which can be expensive, time-consuming, and dangerous. However, computational optimization of these problems is made difficult by the complexity of most reacting flows, necessitating the use of gradient-based optimization techniques in order to explore a wide design space at manageable computational cost. The adjoint method is an attractive way to obtain the required gradients, because the cost of the method is determined by the dimension of the objective function rather than the size of the design space. Here, the formulation of a novel solver is outlined that enables gradient-based parameter optimization of turbulent reacting flows using the discrete adjoint method. Initial results and an outlook for future research directions are provided.
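
    The point about the cost of the adjoint method being set by the dimension of the objective function rather than the size of the design space can be shown on a linear toy problem. The sketch below is a generic numpy illustration with invented matrices, not the authors' solver.

    ```python
    # Minimal sketch of why adjoints are attractive: for a steady model
    # A u = b(p) with scalar objective J = c^T u, one adjoint solve
    # A^T lam = c yields dJ/dp for *all* parameters at once, instead of one
    # linearized solve per parameter.
    import numpy as np

    rng = np.random.default_rng(0)
    n, n_params = 50, 200

    A = np.eye(n) * 4.0 + rng.normal(scale=0.1, size=(n, n))  # state operator
    B = rng.normal(size=(n, n_params))   # db/dp (linear source-parameter map)
    p = rng.normal(size=n_params)
    c = rng.normal(size=n)

    u = np.linalg.solve(A, B @ p)        # state solve
    J = c @ u                            # scalar objective

    # Adjoint: one extra linear solve, independent of n_params.
    lam = np.linalg.solve(A.T, c)
    grad = B.T @ lam                     # dJ/dp_i = lam^T db/dp_i

    # Finite-difference check on one component (verification only).
    eps = 1e-6
    dp = np.zeros(n_params); dp[3] = eps
    J_pert = c @ np.linalg.solve(A, B @ (p + dp))
    print(grad[3], (J_pert - J) / eps)   # should agree closely
    ```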

  5. On the Optimization of Aerospace Plane Ascent Trajectory

    NASA Astrophysics Data System (ADS)

    Al-Garni, Ahmed; Kassem, Ayman Hamdy

A hybrid heuristic optimization technique based on genetic algorithms and particle swarm optimization has been developed and tested on trajectory optimization problems with multiple constraints and a multi-objective cost function. The technique is used to calculate control settings for two types of ascent trajectory (constant dynamic pressure and minimum-fuel-minimum-heat) for a two-dimensional model of an aerospace plane. A thorough statistical analysis compares the hybrid technique with both basic genetic algorithms and particle swarm optimization with respect to convergence and execution time. Genetic-algorithm optimization showed better execution-time performance, while particle swarm optimization showed better convergence performance. The hybrid technique, benefiting from both, showed robust performance that balances convergence and execution time.

  6. Predicting Cost and Schedule Growth for Military and Civil Space Systems

    DTIC Science & Technology

    2008-03-01

the Shapiro-Wilk test, and testing the residuals for constant variance using the Breusch-Pagan test. For logistic models, diagnostics include... the Breusch-Pagan test. With this test, a p-value below 0.05 rejects the null hypothesis that the residuals have constant variance. Thus, similar... to the Shapiro-Wilk test, because the optimal model will have constant variance of its residuals, this requires Breusch-Pagan p-values over 0.05

  7. Introducing a Model for Optimal Design of Sequential Objective Structured Clinical Examinations

    ERIC Educational Resources Information Center

    Mortaz Hejri, Sara; Yazdani, Kamran; Labaf, Ali; Norcini, John J.; Jalili, Mohammad

    2016-01-01

In a sequential OSCE, which has been suggested as a way to reduce testing costs, candidates take a short screening test, and those who fail it are asked to take the full OSCE. In order to introduce an effective and accurate sequential design, we developed a model for designing and evaluating screening OSCEs. Based on two datasets from a 10-station…

  8. Decision Modeling in Sleep Apnea: The Critical Roles of Pretest Probability, Cost of Untreated Obstructive Sleep Apnea, and Time Horizon

    PubMed Central

    Moro, Marilyn; Westover, M. Brandon; Kelly, Jessica; Bianchi, Matt T.

    2016-01-01

    Study Objectives: Obstructive sleep apnea (OSA) is associated with increased morbidity and mortality, and treatment with positive airway pressure (PAP) is cost-effective. However, the optimal diagnostic strategy remains a subject of debate. Prior modeling studies have not consistently supported the widely held assumption that home sleep testing (HST) is cost-effective. Methods: We modeled four strategies: (1) treat no one; (2) treat everyone empirically; (3) treat those testing positive during in-laboratory polysomnography (PSG) via in-laboratory titration; and (4) treat those testing positive during HST with auto-PAP. The population was assumed to lack independent reasons for in-laboratory PSG (such as insomnia, periodic limb movements in sleep, complex apnea). We considered the third-party payer perspective, via both standard (quality-adjusted) and pure cost methods. Results: The preferred strategy depended on three key factors: pretest probability of OSA, cost of untreated OSA, and time horizon. At low prevalence and low cost of untreated OSA, the treat no one strategy was favored, whereas empiric treatment was favored for high prevalence and high cost of untreated OSA. In-laboratory backup for failures in the at-home strategy increased the preference for the at-home strategy. Without laboratory backup in the at-home arm, the in-laboratory strategy was increasingly preferred at longer time horizons. Conclusion: Using a model framework that captures a broad range of clinical possibilities, the optimal diagnostic approach to uncomplicated OSA depends on pretest probability, cost of untreated OSA, and time horizon. Estimating each of these critical factors remains a challenge warranting further investigation. Citation: Moro M, Westover MB, Kelly J, Bianchi MT. Decision modeling in sleep apnea: the critical roles of pretest probability, cost of untreated obstructive sleep apnea, and time horizon. J Clin Sleep Med 2016;12(3):409–418. PMID:26518699
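
    The dependence of the preferred strategy on pretest probability can be reproduced with a back-of-the-envelope expected-cost comparison. All costs and test characteristics in the sketch below are invented and far simpler than the published model.

    ```python
    # Minimal sketch (invented costs/probabilities) of the four-strategy
    # comparison: expected per-patient cost of each diagnostic strategy as a
    # function of pretest probability of OSA, from the payer perspective.
    COST_PSG, COST_HST, COST_TREAT = 1000.0, 300.0, 2000.0
    COST_UNTREATED = 4000.0          # cost of untreated OSA over the horizon (assumed)
    SENS_PSG, SPEC_PSG = 0.95, 0.95  # assumed test characteristics
    SENS_HST, SPEC_HST = 0.85, 0.90

    def expected_cost(p, test_cost=0.0, sens=None, spec=None,
                      treat_all=False, treat_none=False):
        if treat_none:
            return p * COST_UNTREATED            # all true cases go untreated
        if treat_all:
            return COST_TREAT                    # everyone treated, none missed
        tp, fn = p * sens, p * (1 - sens)        # test-and-treat strategies
        fp = (1 - p) * (1 - spec)
        return test_cost + (tp + fp) * COST_TREAT + fn * COST_UNTREATED

    for p in (0.1, 0.3, 0.5, 0.7, 0.9):
        costs = {
            "treat none": expected_cost(p, treat_none=True),
            "treat all": expected_cost(p, treat_all=True),
            "in-lab PSG": expected_cost(p, COST_PSG, SENS_PSG, SPEC_PSG),
            "home HST": expected_cost(p, COST_HST, SENS_HST, SPEC_HST),
        }
        print(f"p={p}:", min(costs, key=costs.get),
              {k: round(v) for k, v in costs.items()})
    ```

    Even with these toy numbers, "treat none" wins at low pretest probability and "treat all" at high pretest probability, with the testing strategies preferred in between, mirroring the qualitative structure the abstract describes.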

  9. Cost-Effectiveness of One-Time Hepatitis C Screening Strategies Among Adolescents and Young Adults in Primary Care Settings.

    PubMed

    Assoumou, Sabrina A; Tasillo, Abriana; Leff, Jared A; Schackman, Bruce R; Drainoni, Mari-Lynn; Horsburgh, C Robert; Barry, M Anita; Regis, Craig; Kim, Arthur Y; Marshall, Alison; Saxena, Sheel; Smith, Peter C; Linas, Benjamin P

    2018-01-18

High hepatitis C virus (HCV) rates have been reported in young people who inject drugs (PWID). We evaluated the clinical benefit and cost-effectiveness of testing among youth seen in communities with a high overall number of reported HCV cases. We developed a decision analytic model to project quality-adjusted life years (QALYs), costs (2016 US$), and incremental cost-effectiveness ratios (ICERs) of 9 strategies for 1-time testing among 15- to 30-year-olds seen at urban community health centers. Strategies differed in 3 ways: targeted vs routine testing, rapid finger stick vs standard venipuncture, and ordered by physician vs by counselor/tester using standing orders. We performed deterministic and probabilistic sensitivity analyses (PSA) to evaluate uncertainty. Compared to targeted risk-based testing (current standard of care), routine testing increased the lifetime medical cost by $80 and discounted QALYs by 0.0013 per person. Across all strategies, rapid testing provided higher QALYs at a lower cost per QALY gained and was always preferred. Counselor-initiated routine rapid testing was associated with an ICER of $71,000/QALY gained. Results were sensitive to offer and result receipt rates. Counselor-initiated routine rapid testing was cost-effective (ICER <$100,000/QALY) unless the prevalence of PWID was <0.59%, HCV prevalence among PWID was <16%, the reinfection rate was >26 cases per 100 person-years, or reflex confirmatory testing followed all reactive venipuncture diagnostics. In PSA, routine rapid testing was the optimal strategy in 90% of simulations. Routine rapid HCV testing among 15- to 30-year-olds may be cost-effective when the prevalence of PWID is >0.59%. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.

  10. Optimizing Sensor and Actuator Arrays for ASAC Noise Control

    NASA Technical Reports Server (NTRS)

    Palumbo, Dan; Cabell, Ran

    2000-01-01

    This paper summarizes the development of an approach to optimizing the locations for arrays of sensors and actuators in active noise control systems. A type of directed combinatorial search, called Tabu Search, is used to select an optimal configuration from a much larger set of candidate locations. The benefit of using an optimized set is demonstrated. The importance of limiting actuator forces to realistic levels when evaluating the cost function is discussed. Results of flight testing an optimized system are presented. Although the technique has been applied primarily to Active Structural Acoustic Control systems, it can be adapted for use in other active noise control implementations.
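
    A minimal tabu search over candidate locations, with a synthetic stand-in for the acoustic cost function, is sketched below; it mirrors the select-k-of-n structure of the problem but none of the paper's physics.

    ```python
    # Minimal sketch of a tabu search over actuator locations (not the flight
    # system in the paper): pick k of n candidate locations to minimize a
    # synthetic cost; the tabu list blocks recently reversed swaps.
    import random

    random.seed(0)
    n, k, TABU_LEN, ITERS = 30, 6, 8, 200
    # Synthetic quadratic interaction between locations (assumed stand-in for
    # the noise-control cost function evaluated by simulation).
    Q = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

    def cost(sel):
        return sum(Q[i][j] for i in sel for j in sel)

    current = set(random.sample(range(n), k))
    best, best_cost = set(current), cost(current)
    tabu = []

    for _ in range(ITERS):
        moves = []
        for out in current:
            for inn in set(range(n)) - current:
                if (out, inn) in tabu:
                    continue
                cand = (current - {out}) | {inn}
                moves.append((cost(cand), out, inn, cand))
        c, out, inn, cand = min(moves, key=lambda m: m[0])
        current = cand                   # best admissible neighbor (may be uphill)
        tabu.append((inn, out))          # forbid immediately reversing the swap
        tabu = tabu[-TABU_LEN:]
        if c < best_cost:
            best, best_cost = set(cand), c

    print(sorted(best), round(best_cost, 3))
    ```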

  11. Distributed Wind Competitiveness Improvement Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-05-01

    The Competitiveness Improvement Project (CIP) is a periodic solicitation through the U.S. Department of Energy and its National Renewable Energy Laboratory. Manufacturers of small and medium wind turbines are awarded cost-shared grants via a competitive process to optimize their designs, develop advanced manufacturing processes, and perform turbine testing. The goals of the CIP are to make wind energy cost competitive with other distributed generation technology and increase the number of wind turbine designs certified to national testing standards. This fact sheet describes the CIP and funding awarded as part of the project.

  12. Developing and Validating a Rapid Small-Scale Column Test Procedure for GAC Selection using Reconstituted Lyophilized NOM

    EPA Science Inventory

    Cost effective design and operation of Granular Activated Carbon (GAC) facilities requires the selection of GAC that is optimal for a specific site. Rapid small-scale column tests (RSSCTs) are widely used for GAC assessment due to several advantages, including the ability to simu...

  13. Developing and Validating a Rapid Small-Scale Column Test Procedure for GAC Selection using Reconstituted Lyophilized NOM - Portland, OR

    EPA Science Inventory

    Cost effective design and operation of Granular Activated Carbon (GAC) facilities requires the selection of GAC that is optimal for a specific site. Rapid small-scale column tests (RSSCTs) are widely used for GAC assessment due to several advantages, including the ability to simu...

  14. Water-resources optimization model for Santa Barbara, California

    USGS Publications Warehouse

    Nishikawa, Tracy

    1998-01-01

A simulation-optimization model has been developed for the optimal management of the city of Santa Barbara's water resources during a drought. The model, which links groundwater simulation with linear programming, has a planning horizon of 5 years. The objective is to minimize the cost of water supply subject to water-demand constraints, hydraulic-head constraints to control seawater intrusion, and water-capacity constraints. The decision variables are monthly water deliveries from surface water and groundwater; the state variables are hydraulic heads. The drought of 1947-51 is the city's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. The simulation-optimization model was applied using three reservoir operation rules. In addition, the model's sensitivity to demand, carry-over [the storage of water in one year for use in a later year], head constraints, and capacity constraints was tested.
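
    The linear-programming core of such a model can be sketched in a few lines. The numbers below are invented, and the groundwater simulation's head constraints are reduced to a static pumping cap for illustration.

    ```python
    # Minimal sketch (invented numbers): choose monthly surface-water and
    # groundwater deliveries minimizing supply cost, subject to demand and
    # source-capacity constraints. Head constraints from the groundwater
    # simulation are stylized here as a fixed pumping cap.
    import numpy as np
    from scipy.optimize import linprog

    months = 6
    c_surface, c_ground = 120.0, 80.0        # $ per acre-foot (assumed)
    demand = np.array([900, 950, 1100, 1200, 1150, 1000], float)   # AF/month
    surface_cap = np.array([800, 700, 500, 300, 250, 400], float)  # drought supply
    ground_cap = np.full(months, 1000.0)     # proxy for seawater-intrusion limit

    # Variables: [s_1..s_6, g_1..g_6]
    c = np.concatenate([np.full(months, c_surface), np.full(months, c_ground)])

    # Equality: s_m + g_m = demand_m for each month.
    A_eq = np.hstack([np.eye(months), np.eye(months)])
    b_eq = demand

    bounds = [(0, cap) for cap in surface_cap] + [(0, cap) for cap in ground_cap]

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    s, g = res.x[:months], res.x[months:]
    print(res.status, s.round(), g.round(), res.fun)
    ```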

  15. Cost analysis of centralized viral load testing for antiretroviral therapy monitoring in Nicaragua, a low-HIV prevalence, low-resource setting.

    PubMed

    Gerlach, Jay; Sequeira, Magda; Alvarado, Vivian; Cerpas, Christian; Balmaseda, Angel; Gonzalez, Alcides; de Los Santos, Tala; Levin, Carol E; Amador, Juan Jose; Domingo, Gonzalo J

    2010-11-05

    HIV viral load testing as a component of antiretroviral therapy monitoring is costly. Understanding the full costs and the major sources of inefficiency associated with viral load testing is critical for optimizing the systems and technologies that support the testing process. The objective of our study was to estimate the costs associated with viral load testing performed for antiretroviral therapy monitoring to both patients and the public healthcare system in a low-HIV prevalence, low-resource country. A detailed cost analysis was performed to understand the costs involved in each step of performing a viral load test in Nicaragua, from initial specimen collection to communication of the test results to each patient's healthcare provider. Data were compiled and cross referenced from multiple information sources: laboratory records, regional surveillance centre records, and scheduled interviews with the key healthcare providers responsible for HIV patient care in five regions of the country. The total average cost of performing a viral load test in Nicaragua varied by region, ranging from US$99.01 to US$124.58, the majority of which was at the laboratory level: $88.73 to $97.15 per specimen, depending on batch size. The average cost to clinics at which specimens were collected ranged from $3.31 to $20.92, depending on the region. The average cost per patient for transportation, food, lodging and lost income ranged from $3.70 to $14.93. The quantitative viral load test remains the single most expensive component of the process. For the patient, the distance of his or her residence from the specimen collection site is a large determinant of cost. Importantly, the efficiency of results reporting has a large impact on the cost per result delivered to the clinician and utility of the result for patient monitoring. Detailed cost analysis can identify opportunities for removing barriers to effective antiretroviral therapy monitoring programmes in limited-resource countries with low HIV prevalence.

  16. Hitting the Optimal Vaccination Percentage and the Risks of Error: Why to Miss Right.

    PubMed

    Harvey, Michael J; Prosser, Lisa A; Messonnier, Mark L; Hutton, David W

    2016-01-01

To determine the optimal level of vaccination coverage, defined as the level that minimizes total costs, and to explore how economic results change with marginal changes to this level of coverage. A susceptible-infected-recovered-vaccinated model designed to represent theoretical infectious diseases was created to simulate disease spread. Parameter inputs were defined to include ranges that could represent a variety of possible vaccine-preventable conditions. Costs included vaccine costs and disease costs; health benefits were quantified as monetized quality-adjusted life-years lost to disease. Primary outcomes were the number of infected people and the total costs of vaccination. Optimization methods were used to determine the population vaccination coverage that achieved a minimum cost given disease and vaccine characteristics. Sensitivity analyses explored the effects of changes in reproductive rates, costs, and vaccine efficacies on the primary outcomes; further analysis examined the additional cost incurred if the optimal coverage levels were not achieved. The results indicate that the relationship between vaccine and disease cost is the main driver of the optimal vaccination level. Under a wide range of assumptions, vaccination beyond the optimal level is less expensive than vaccination below the optimal level. This observation did not hold when the cost of the vaccine became approximately equal to the cost of disease. These results suggest that vaccination below the optimal level of coverage is more costly than vaccinating beyond the optimal level. This work helps provide information for assessing the impact of changes in vaccination coverage at a societal level.
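
    A toy version of this experiment is easy to reproduce: sweep coverage in a simple SIR model with pre-emptive vaccination and compare total costs on either side of the optimum. All parameters below are invented.

    ```python
    # Minimal sketch (invented parameters) of the cost-vs-coverage experiment:
    # a discrete-time SIR model with pre-emptive vaccination; total cost =
    # vaccine cost + monetized disease burden, swept over coverage levels.
    N, BETA, GAMMA, DAYS = 100_000, 0.3, 0.1, 500
    COST_VACCINE, COST_CASE = 50.0, 400.0   # per dose / per infection (assumed)

    def total_infected(coverage, efficacy=0.9):
        s = N * (1 - coverage * efficacy)   # effectively immune removed up front
        i, cum = 10.0, 10.0
        for _ in range(DAYS):
            new = BETA * s * i / N
            s, i = s - new, i + new - GAMMA * i
            cum += new
        return cum

    def total_cost(coverage):
        return COST_VACCINE * coverage * N + COST_CASE * total_infected(coverage)

    coverages = [x / 20 for x in range(21)]
    costs = [total_cost(cov) for cov in coverages]
    k = min(range(21), key=lambda j: costs[j])
    print("optimal coverage:", coverages[k])
    # Asymmetry around the optimum: overshooting is cheaper than undershooting.
    print("cost at optimum - 10%:", round(costs[max(k - 2, 0)]))
    print("cost at optimum      :", round(costs[k]))
    print("cost at optimum + 10%:", round(costs[min(k + 2, 20)]))
    ```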

  17. Automation of On-Board Flightpath Management

    NASA Technical Reports Server (NTRS)

    Erzberger, H.

    1981-01-01

    The status of concepts and techniques for the design of onboard flight path management systems is reviewed. Such systems are designed to increase flight efficiency and safety by automating the optimization of flight procedures onboard aircraft. After a brief review of the origins and functions of such systems, two complementary methods are described for attacking the key design problem, namely, the synthesis of efficient trajectories. One method optimizes en route, the other optimizes terminal area flight; both methods are rooted in optimal control theory. Simulation and flight test results are reviewed to illustrate the potential of these systems for fuel and cost savings.

  18. Exact and Approximate Stability of Solutions to Traveling Salesman Problems.

    PubMed

    Niendorf, Moritz; Girard, Anouck R

    2018-02-01

This paper presents the stability analysis of an optimal tour for the symmetric traveling salesman problem (TSP) by obtaining stability regions. The stability region of an optimal tour is the set of all cost changes for which that solution remains optimal, and can be understood as the margin of optimality of a solution with respect to perturbations in the problem data. It is known that it is not possible to test in polynomial time whether an optimal tour remains optimal after the costs of an arbitrary set of edges change. Therefore, this paper develops tractable methods to obtain under- and over-approximations of stability regions based on neighborhoods and relaxations. The application of the results to the two-neighborhood and the minimum 1-tree (M1T) relaxation is discussed in detail. For Euclidean TSPs, stability regions with respect to vertex-location perturbations and the notions of safe radii and location criticalities are introduced. Benefits of this paper include insight into the robustness properties of tours, minimum spanning trees, and M1Ts, and fast methods to evaluate optimality after perturbations occur. Numerical examples are given to demonstrate the methods and the achievable approximation quality.
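
    Restricted to the two-neighborhood, margins of this kind can be computed directly by enumerating 2-opt moves, as in the sketch below (random instance, brute-force optimal tour). This is an illustrative over-approximation in the spirit of the paper's neighborhood-based bounds, not the paper's method.

    ```python
    # Minimal sketch (random instance): margins of a tour with respect to its
    # 2-opt neighborhood. If every 2-opt neighbor is costlier by at least m,
    # any perturbation that changes the tour-vs-neighbor cost gaps by less
    # than m leaves the tour best within that neighborhood, an
    # over-approximation of the true stability region.
    import itertools
    import math
    import random

    random.seed(2)
    n = 9
    pts = [(random.random(), random.random()) for _ in range(n)]
    d = [[math.dist(a, b) for b in pts] for a in pts]

    def tour_cost(t):
        return sum(d[t[i]][t[(i + 1) % len(t)]] for i in range(len(t)))

    # Brute-force optimal tour (small n only), fixing city 0 first.
    best = min((list(p) for p in itertools.permutations(range(1, n))),
               key=lambda p: tour_cost([0] + p))
    tour = [0] + best
    c0 = tour_cost(tour)

    # 2-opt neighbors: reverse each segment; margin = neighbor cost - tour cost.
    margins = []
    for i in range(n - 1):
        for j in range(i + 2, n):
            neigh = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
            margins.append(tour_cost(neigh) - c0)

    # Ignore zero margins from reversals that reproduce the same cycle.
    print("minimum 2-opt margin:", round(min(m for m in margins if m > 1e-12), 4))
    ```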

  19. Optimal triage test characteristics to improve the cost-effectiveness of the Xpert MTB/RIF assay for TB diagnosis: a decision analysis.

    PubMed

    van't Hoog, Anna H; Cobelens, Frank; Vassall, Anna; van Kampen, Sanne; Dorman, Susan E; Alland, David; Ellner, Jerrold

    2013-01-01

High costs are a limitation to scaling up the Xpert MTB/RIF assay (Xpert) for the diagnosis of tuberculosis in resource-constrained settings. A triaging strategy, in which a sensitive but not necessarily highly specific rapid test is used to select patients for Xpert, may result in a more affordable diagnostic algorithm. To inform the selection and development of particular diagnostics as a triage test, we explored the combinations of sensitivity, specificity, and cost at which a hypothetical triage test would improve the affordability of the Xpert assay. In a decision-analytical model parameterized for Uganda, India, and South Africa, we compared a diagnostic algorithm in which a cohort of patients with presumptive TB received Xpert to a triage algorithm whereby only those with a positive triage test were tested by Xpert. A triage test with sensitivity equal to Xpert, 75% specificity, and a cost of US$5 per patient tested reduced total diagnostic costs by 42% in the Uganda setting, and by 34% and 39%, respectively, in the India and South Africa settings. When exploring triage algorithms with lower sensitivity, an example triage test with 95% sensitivity relative to Xpert, 75% specificity, and a test cost of $5 resulted in a similar cost reduction, and was cost-effective by the WHO willingness-to-pay threshold compared with Xpert for all in Uganda, but not in India and South Africa. The gain in affordability of the examined triage algorithms increased with decreasing prevalence of tuberculosis in the cohort. A triage test strategy could potentially improve the affordability of Xpert for TB diagnosis, particularly in low-income countries and with enhanced case-finding. Tests and markers with lower accuracy than desired of a diagnostic test may fall within the ranges of sensitivity, specificity, and cost required for triage tests, and could be developed as such.
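
    The affordability arithmetic can be reproduced directly: with 75% triage specificity and a $5 triage cost, the per-patient saving depends on how many patients are filtered out of Xpert testing. The Xpert unit cost in the sketch below is an assumption, not a figure from the paper.

    ```python
    # Minimal sketch (stylized numbers from the abstract where given,
    # assumptions elsewhere): per-patient diagnostic cost of Xpert-for-all
    # versus triage-then-Xpert, as a function of TB prevalence in the cohort.
    COST_XPERT = 15.0       # assumed per-test cost, US$
    COST_TRIAGE = 5.0       # from the abstract
    SENS_TRIAGE = 1.0       # sensitivity relative to Xpert (base case)
    SPEC_TRIAGE = 0.75      # from the abstract

    def cost_per_patient(prev):
        everyone_xpert = COST_XPERT
        # Triage algorithm: everyone gets the triage test; only triage
        # positives (true positives + false positives) then get Xpert.
        frac_xpert = prev * SENS_TRIAGE + (1 - prev) * (1 - SPEC_TRIAGE)
        triage_then_xpert = COST_TRIAGE + frac_xpert * COST_XPERT
        return everyone_xpert, triage_then_xpert

    for prev in (0.05, 0.10, 0.20, 0.30):
        full, triaged = cost_per_patient(prev)
        saving = 100 * (1 - triaged / full)
        print(f"prevalence {prev:.0%}: Xpert-for-all ${full:.2f}, "
              f"triage ${triaged:.2f} ({saving:.0f}% saving)")
    ```

    The saving shrinks as prevalence rises, which is the same direction of effect as the abstract's observation that affordability gains grow with decreasing TB prevalence in the cohort.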

  20. Reliability enhancement through optimal burn-in

    NASA Astrophysics Data System (ADS)

    Kuo, W.

    1984-06-01

A numerical reliability and cost model is defined for production-line burn-in tests of electronic components. The necessity of burn-in is governed by upper and lower bounds: burn-in is mandatory for operation-critical or nonrepairable components; no burn-in is needed when failure effects are insignificant or easily repairable. The model considers electronic systems in terms of a series of components connected by a single black box. The infant-mortality rate is described with a Weibull distribution. Performance reaches a steady state after burn-in, and the cost of burn-in is a linear function for each component. A minimum cost is calculated over the costs and total times of burn-in, shop repair, and field repair, with attention given to possible losses in future sales from inadequate burn-in testing.
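
    A minimal numeric version of the trade-off, with an invented Weibull infant-mortality model and a linear burn-in cost, is sketched below.

    ```python
    # Minimal sketch (invented parameters): expected per-unit cost versus
    # burn-in time for a Weibull infant-mortality model (shape < 1), with a
    # linear burn-in cost and cheaper shop repair than field repair.
    import math

    BETA, ETA = 0.5, 2000.0        # Weibull shape/scale, hours (assumed)
    MISSION = 5000.0               # field operating time, hours
    C_BURNIN = 0.02                # $ per unit per hour of burn-in
    C_SHOP, C_FIELD = 20.0, 500.0  # repair cost in shop vs in the field

    def F(t):
        """Weibull failure CDF."""
        return 1.0 - math.exp(-((t / ETA) ** BETA))

    def expected_cost(b):
        fail_in_burnin = F(b)                  # caught cheaply in the shop
        fail_in_field = F(b + MISSION) - F(b)  # escapes burn-in, fails in service
        return C_BURNIN * b + C_SHOP * fail_in_burnin + C_FIELD * fail_in_field

    # Coarse numerical minimization over candidate burn-in durations.
    candidates = list(range(0, 2001, 25))
    b_opt = min(candidates, key=expected_cost)
    print(f"optimal burn-in: {b_opt} h, expected cost ${expected_cost(b_opt):.2f}, "
          f"no burn-in ${expected_cost(0):.2f}")
    ```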

  1. Optimizing conceptual aircraft designs for minimum life cycle cost

    NASA Technical Reports Server (NTRS)

    Johnson, Vicki S.

    1989-01-01

A life cycle cost (LCC) module has been added to the FLight Optimization System (FLOPS), allowing the additional optimization variables of life cycle cost, direct operating cost, and acquisition cost. Extensive use of the methodology on short-, medium-, and medium-to-long-range aircraft has demonstrated that the system works well. Results from the study show that the optimization parameter has a definite effect on the aircraft, and that optimizing an aircraft for minimum LCC results in a different airplane than optimizing for minimum take-off gross weight (TOGW), fuel burned, direct operating cost (DOC), or acquisition cost. Additionally, the economic assumptions can have a strong impact on the configurations optimized for minimum LCC or DOC. The results also show that advanced technology can be worthwhile, even if it results in higher manufacturing and operating costs. Examining the number of engines a configuration should have demonstrated a real payoff of including life cycle cost in the conceptual design process: the minimum-TOGW or minimum-fuel aircraft did not always have the lowest life cycle cost when the number of engines was considered.

  2. Drilling and Production Testing the Methane Hydrate Resource Potential Associated with the Barrow Gas Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steve McRae; Thomas Walsh; Michael Dunn

    2010-02-22

In November of 2008, the Department of Energy (DOE) and the North Slope Borough (NSB) committed funding to develop a drilling plan to test the presence of hydrates in the producing formation of at least one of the Barrow Gas Fields, and to develop a production surveillance plan to monitor the behavior of hydrates as dissociation occurs. This drilling and surveillance plan was supported by earlier studies in Phase 1 of the project, including hydrate stability zone modeling, material balance modeling, and full-field history-matched reservoir simulation, all of which support the presence of methane hydrate in association with the Barrow Gas Fields. Phase 2 of the project, conducted over the past twelve months, focused on selecting an optimal location for a hydrate test well; designing a logistics, drilling, completion, and testing plan; and estimating costs for these activities. As originally proposed, the project was anticipated to benefit from industry activity in northwest Alaska, with opportunities to share equipment, personnel, services, and mobilization and demobilization costs with one of the then-active exploration operators. Activity levels dropped off and this benefit evaporated, although plans for drilling development wells in the BGFs matured, offering significant synergies and cost savings over a remote stand-alone drilling project. An optimal well location was chosen at the East Barrow No. 18 well pad, and a vertical pilot/monitoring well and a horizontal production test/surveillance well were engineered for drilling from this location. Both wells were designed with Distributed Temperature Survey (DTS) apparatus for monitoring of the hydrate-free gas interface. Once the project scope was developed, a procurement process was implemented to engage the necessary service and equipment providers and finalize project cost estimates. Based on cost proposals from vendors, the total estimated project cost is $17.88 million, inclusive of design work, permitting, barging, ice road/pad construction, drilling, completion, tie-in, long-term production testing and surveillance, data analysis, and technology transfer. The PRA project team and the North Slope Borough have recommended moving forward to the execution phase of this project.

  3. The cost-effectiveness of screening for colorectal cancer.

    PubMed

    Telford, Jennifer J; Levy, Adrian R; Sambrook, Jennifer C; Zou, Denise; Enns, Robert A

    2010-09-07

    Published decision analyses show that screening for colorectal cancer is cost-effective. However, because of the number of tests available, the optimal screening strategy in Canada is unknown. We estimated the incremental cost-effectiveness of 10 strategies for colorectal cancer screening, as well as no screening, incorporating quality of life, noncompliance and data on the costs and benefits of chemotherapy. We used a probabilistic Markov model to estimate the costs and quality-adjusted life expectancy of 50-year-old average-risk Canadians without screening and with screening by each test. We populated the model with data from the published literature. We calculated costs from the perspective of a third-party payer, with inflation to 2007 Canadian dollars. Of the 10 strategies considered, we focused on three tests currently being used for population screening in some Canadian provinces: low-sensitivity guaiac fecal occult blood test, performed annually; fecal immunochemical test, performed annually; and colonoscopy, performed every 10 years. These strategies reduced the incidence of colorectal cancer by 44%, 65% and 81%, and mortality by 55%, 74% and 83%, respectively, compared with no screening. These strategies generated incremental cost-effectiveness ratios of $9159, $611 and $6133 per quality-adjusted life year, respectively. The findings were robust to probabilistic sensitivity analysis. Colonoscopy every 10 years yielded the greatest net health benefit. Screening for colorectal cancer is cost-effective over conventional levels of willingness to pay. Annual high-sensitivity fecal occult blood testing, such as a fecal immunochemical test, or colonoscopy every 10 years offer the best value for the money in Canada.

  4. Effect of land tenure and stakeholders attitudes on optimization of conservation practices in agricultural watersheds

    NASA Astrophysics Data System (ADS)

    Piemonti, A. D.; Babbar-Sebens, M.; Luzar, E. J.

    2012-12-01

Modeled watershed management plans have become valuable tools for evaluating the effectiveness and impacts of conservation practices on hydrologic processes in watersheds. In multi-objective optimization approaches, several studies have focused on maximizing the physical, ecological, or economic benefits of practices in a specific location, without considering the relationship between social systems and social attitudes and the overall optimality of the practice at that location. For example, the objectives commonly used in spatial optimization of practices are economic costs, sediment loads, nutrient loads, and pesticide loads. Though the benefits derived from these objectives are generally oriented towards community preferences, they do not represent the attitudes of landowners who might operate their land differently than their neighbors (e.g., farm their own land or rent the land to someone else) and might have different social or personal drivers that motivate them to adopt the practices. In addition, a distribution of such landowners can exist in the watershed, leading to spatially varying preferences for practices. In this study we evaluated the effect of three different land-tenure types on the spatial optimization of conservation practices, performing the optimization with both a uniform distribution of land-tenure type and a spatially varying distribution of land-tenure type. Our results show that for a typical Midwestern agricultural watershed, the most optimal solutions (i.e., highest benefits for minimum economic costs) were found for a uniform distribution of landowners who operate their own land. When a different land tenure was used for the watershed, the optimized alternatives did not change significantly in nitrate-reduction and sediment-reduction benefits, but these benefits were attained at economic costs much higher than those of landowners who operate their own land. For example, landowners who rent to cash-renters would have to spend ~120% more than landowners who operate their own land to attain the same benefits. We also tested the effect of different social attitudes on the final preferences among the optimized alternatives and its consequences for the total effectiveness of the standard optimization approaches. The results suggest that, for example, when practices were removed from the system due to landowners' attitudes driven by economic profits, the modified alternatives experienced a decrease in nitrate-reduction benefits of 2-50%, in peak-flow-reduction benefits of 11-98%, and in sediment-reduction benefits of 20-77%.

  5. Using 3D printed models for planning and guidance during endovascular intervention: a technical advance.

    PubMed

    Itagaki, Michael W

    2015-01-01

    Three-dimensional (3D) printing applications in medicine have been limited due to high cost and technical difficulty of creating 3D printed objects. It is not known whether patient-specific, hollow, small-caliber vascular models can be manufactured with 3D printing, and used for small vessel endoluminal testing of devices. Manufacture of anatomically accurate, patient-specific, small-caliber arterial models was attempted using data from a patient's CT scan, free open-source software, and low-cost Internet 3D printing services. Prior to endovascular treatment of a patient with multiple splenic artery aneurysms, a 3D printed model was used preoperatively to test catheter equipment and practice the procedure. A second model was used intraoperatively as a reference. Full-scale plastic models were successfully produced. Testing determined the optimal puncture site for catheter positioning. A guide catheter, base catheter, and microcatheter combination selected during testing was used intraoperatively with success, and the need for repeat angiograms to optimize image orientation was minimized. A difficult and unconventional procedure was successful in treating the aneurysms while preserving splenic function. We conclude that creation of small-caliber vascular models with 3D printing is possible. Free software and low-cost printing services make creation of these models affordable and practical. Models are useful in preoperative planning and intraoperative guidance.

  6. On the design of innovative heterogeneous tests using a shape optimization approach

    NASA Astrophysics Data System (ADS)

    Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.

    2018-05-01

    The development of full-field measurement methods enabled a new trend of mechanical tests. By providing the inhomogeneous strain field from the tests, these techniques are being widely used in sheet metal identification strategies, through heterogeneous mechanical tests. In this work, a heterogeneous mechanical test with an innovative tool/specimen shape, capable of producing rich heterogeneous strain paths that provide extensive information on material behavior, is sought. The specimen is found using a shape optimization process in which a dedicated indicator that evaluates the richness of strain information is used. The methodology and results presented here are extended to remove dependence on the specimen geometry and on its parametrization through the use of the Ritz method for boundary value problems. Different curve models, such as Splines, B-Splines and NURBS, are used, and C1 continuity throughout the specimen is guaranteed. Moreover, various optimization methods, both deterministic and stochastic, are used in order to find the method, or combination of methods, able to effectively minimize the cost function.
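
    As a rough illustration of the stochastic optimizers mentioned above, the following is a minimal particle swarm optimizer in Python. The swarm parameters and the quadratic toy objective are assumptions; in the actual workflow the objective would be the (negated) strain-richness indicator evaluated by a finite element analysis of the parametrized specimen shape.

      import random

      def pso(cost, dim, n_particles=30, iters=200, lo=-1.0, hi=1.0,
              w=0.7, c1=1.5, c2=1.5):
          """Plain particle swarm optimizer; `cost` maps a parameter vector
          (e.g., spline control-point coordinates) to a scalar to minimize."""
          pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
          vel = [[0.0] * dim for _ in range(n_particles)]
          pbest = [p[:] for p in pos]
          pbest_val = [cost(p) for p in pos]
          g = min(range(n_particles), key=lambda i: pbest_val[i])
          gbest, gbest_val = pbest[g][:], pbest_val[g]
          for _ in range(iters):
              for i in range(n_particles):
                  for d in range(dim):
                      r1, r2 = random.random(), random.random()
                      vel[i][d] = (w * vel[i][d]
                                   + c1 * r1 * (pbest[i][d] - pos[i][d])
                                   + c2 * r2 * (gbest[d] - pos[i][d]))
                      pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                  v = cost(pos[i])
                  if v < pbest_val[i]:
                      pbest[i], pbest_val[i] = pos[i][:], v
                      if v < gbest_val:
                          gbest, gbest_val = pos[i][:], v
          return gbest, gbest_val

      # Toy stand-in for the strain-richness indicator (an FE run in practice):
      print(pso(lambda x: sum((xi - 0.3) ** 2 for xi in x), dim=4))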

  7. Improved approach for electric vehicle rapid charging station placement and sizing using Google maps and binary lightning search algorithm

    PubMed Central

    Shareef, Hussain; Mohamed, Azah

    2017-01-01

    The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method. PMID:29220396

  8. Improved approach for electric vehicle rapid charging station placement and sizing using Google maps and binary lightning search algorithm.

    PubMed

    Islam, Md Mainul; Shareef, Hussain; Mohamed, Azah

    2017-01-01

    The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method.
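
    The BLSA itself is specified in the paper; as a generic stand-in for a binary-coded placement search, the sketch below flips candidate-bus bits and keeps improving moves. The cost function is a toy placeholder for the paper's transportation-loss, build-up, substation-loss and harmonic-loss terms, which in the study come from power-flow analysis of the IEEE 34-bus system.

      import random

      def total_cost(placement):
          """Hypothetical stand-in for the paper's objective; the real study
          computes transportation loss, build-up cost, substation energy loss
          and harmonic power loss from a power-flow run."""
          build = 50.0 * sum(placement)                 # more stations, more cost
          unserved = 400.0 / (1 + sum(placement))       # toy: more stations, less loss
          return build + unserved

      def bit_flip_search(n_buses=34, iters=500, seed=1):
          random.seed(seed)
          x = [random.randint(0, 1) for _ in range(n_buses)]
          best = total_cost(x)
          for _ in range(iters):
              i = random.randrange(n_buses)
              x[i] ^= 1                  # flip one candidate bus
              c = total_cost(x)
              if c <= best:
                  best = c               # keep improving (or equal-cost) moves
              else:
                  x[i] ^= 1              # revert worsening moves
          return x, best

      print(bit_flip_search())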

  9. Optimization of a vacuum chamber for vibration measurements.

    PubMed

    Danyluk, Mike; Dhingra, Anoop

    2011-10-01

    A 200 °C high vacuum chamber has been built to improve vibration measurement sensitivity. The optimized design addresses two significant issues: (i) vibration measurements under high vacuum conditions and (ii) use of design optimization tools to reduce operating costs. A test rig consisting of a cylindrical vessel with one access port has been constructed with a welded-bellows assembly used to seal the vessel and enable vibration measurements in high vacuum that are comparable with measurements in air. The welded-bellows assembly provides a force transmissibility of 0.1 or better at 15 Hz excitation under high vacuum conditions. Numerical results based on design optimization of a larger diameter chamber are presented. The general constraints on the new design include material yield stress, chamber first natural frequency, vibration isolation performance, and forced convection heat transfer capabilities over the exterior of the vessel access ports. Operating costs of the new chamber are reduced by 50% compared to a preexisting chamber of similar size and function.
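
    For orientation, the quoted force transmissibility of 0.1 at 15 Hz is consistent with the classical single-degree-of-freedom isolator relation; the natural frequency and damping ratio used below are assumed values chosen to reproduce that order of magnitude, not measured properties of the welded-bellows assembly.

      import math

      def transmissibility(freq_hz, fn_hz, zeta):
          # Classical single-degree-of-freedom force transmissibility for a
          # spring-damper isolator (a standard textbook relation, not the
          # paper's bellows model).
          r = freq_hz / fn_hz
          num = 1 + (2 * zeta * r) ** 2
          den = (1 - r * r) ** 2 + (2 * zeta * r) ** 2
          return math.sqrt(num / den)

      # With an assumed isolator natural frequency of ~4.5 Hz and 5% damping,
      # a 15 Hz excitation is attenuated to roughly the 0.1 level quoted above:
      print(round(transmissibility(15.0, 4.5, 0.05), 3))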

  10. Shuttle payload vibroacoustic test plan evaluation. Free flyer payload applications and sortie payload parametric variations

    NASA Technical Reports Server (NTRS)

    Stahle, C. V.; Gongloff, H. R.

    1977-01-01

    A preliminary assessment of vibroacoustic test plan optimization for free flyer STS payloads is presented, and the effects of the number of missions on alternate test plans for Spacelab sortie payloads are also examined. The component vibration failure probability and the number of components in the housekeeping subassemblies are provided. Decision models are used to evaluate the cost effectiveness of seven alternate test plans using protoflight hardware.

  11. Cost analysis of new and retrofit hot-air type solar assisted heating systems

    NASA Technical Reports Server (NTRS)

    Stewart, R. D.; Hawkins, B. J.

    1978-01-01

    A detailed cost analysis/cost improvement study was performed on two Department of Energy/National Aeronautics and Space Administration operational test sites to determine actual costs and potential cost improvements of new and retrofit hot air type, solar assisted heating and hot water systems for single family sized structures. This analysis concentrated on the first cost of a system which included procurement, installation, and integration of a solar assisted heating and hot water system on a new or retrofit basis; it also provided several cost projections which can be used as inputs to payback analyses, depending upon the degree of optimism or future improvements assumed. Cost definitions were developed for five categories of cost, and preliminary estimates were developed for each. The costing methodology, approach, and results together with several candidate low cost designs are described.

  12. Cost Analysis of Tuberculosis Diagnosis in Cambodia with and without Xpert® MTB/RIF for People Living with HIV/AIDS and People with Presumptive Multidrug-resistant Tuberculosis.

    PubMed

    Pallas, Sarah Wood; Courey, Marissa; Hy, Chhaily; Killam, Wm Perry; Warren, Dora; Moore, Brittany

    2018-06-04

    The Xpert® MTB/RIF (Xpert) test has been shown to be effective and cost-effective for diagnosing tuberculosis (TB) under conditions with high HIV prevalence and HIV-TB co-infection, but less is known about Xpert's cost in low HIV prevalence settings. Cambodia, a country with low HIV prevalence (0.7%), high TB burden, and low multidrug-resistant (MDR) TB burden (1.4% of new TB cases, 11% of retreatment cases), introduced Xpert into its TB diagnostic algorithms for people living with HIV (PLHIV) and people with presumptive MDR TB in 2012. The study objective was to estimate these algorithms' costs pre- and post-Xpert introduction in four provinces of Cambodia. Using a retrospective, ingredients-based microcosting approach, primary cost data on personnel, equipment, maintenance, supplies, and specimen transport were collected at four sites through observation, records review, and key informant consultations. Across the sample facilities, the cost per Xpert test was US$33.88-US$37.11, clinical exam cost US$1.22-US$1.84, chest X-ray cost US$2.02-US$2.14, fluorescent microscopy (FM) smear cost US$1.56-US$1.93, Ziehl-Neelsen (ZN) smear cost US$1.26, liquid culture test cost US$11.63-US$22.83, follow-on work-up for positive culture results and Mycobacterium tuberculosis complex (MTB) identification cost US$11.50-US$14.72, and drug susceptibility testing (DST) cost US$44.26. Specimen transport added US$1.39-US$5.21 per sample. Assuming clinician adherence to the algorithms and perfect test accuracy, the normative cost per patient correctly diagnosed under the post-Xpert algorithms would be US$25-US$29 more per PLHIV and US$34-US$37 more per person with presumptive MDR TB (US$41 more per PLHIV when accounting for variable test sensitivity and specificity). Xpert test unit costs could be reduced through lower cartridge prices, longer usable life of GeneXpert® (Cepheid, USA) instruments, and increased test volumes; however, epidemiological and test eligibility conditions in Cambodia limit the number of specimens received at laboratories, leading to sub-optimal utilization of current instruments. Improvements to patient referral and specimen transport could increase test volumes and reduce Xpert test unit costs in this setting.

  13. OPTIMAL NETWORK TOPOLOGY DESIGN

    NASA Technical Reports Server (NTRS)

    Yuen, J. H.

    1994-01-01

    This program was developed as part of a research study on the topology design and performance analysis for the Space Station Information System (SSIS) network. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. It is intended that this new design technique consider all important performance measures explicitly and take into account the constraints due to various technical feasibilities. In the current program, technical constraints are taken care of by the user properly forming the starting set of candidate components (e.g. nonfeasible links are not included). As subsets are generated, they are tested to see if they form an acceptable network by checking that all requirements are satisfied. Thus the first acceptable subset encountered gives the cost-optimal topology satisfying all given constraints. The user must sort the set of "feasible" link elements in increasing order of their costs. The program prompts the user for the following information for each link: 1) cost, 2) connectivity (number of stations connected by the link), and 3) the stations connected by that link. Unless instructed to stop, the program generates all possible acceptable networks in increasing order of their total costs. The program is written only to generate topologies that are simply connected. Tests on reliability, delay, and other performance measures are discussed in the documentation, but have not been incorporated into the program. This program is written in PASCAL for interactive execution and has been implemented on an IBM PC series computer operating under PC DOS. The disk contains source code only. This program was developed in 1985.
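
    The core idea, generating candidate designs in increasing order of total cost and stopping at the first acceptable one, can be sketched with a priority queue. The original program is PASCAL; Python is used here for brevity. The four-station link list is hypothetical, and a simple spanning-connectivity test stands in for the program's full acceptability checks.

      import heapq
      from itertools import count

      def subsets_by_cost(costs):
          """Yield index subsets in nondecreasing total-cost order, assuming
          costs are sorted ascending (as the program requires of its user)."""
          tie = count()                   # tie-breaker so the heap never compares lists
          heap = [(0.0, next(tie), [])]   # start from the empty design
          while heap:
              total, _, subset = heapq.heappop(heap)
              yield total, subset
              last = subset[-1] if subset else -1
              for j in range(last + 1, len(costs)):
                  heapq.heappush(heap, (total + costs[j], next(tie), subset + [j]))

      def connected(subset, links, n_stations):
          """Acceptability check used here: the chosen links span all stations."""
          parent = list(range(n_stations))
          def find(a):
              while parent[a] != a:
                  parent[a] = parent[parent[a]]
                  a = parent[a]
              return a
          for i in subset:
              a, b = links[i]
              parent[find(a)] = find(b)
          return len({find(s) for s in range(n_stations)}) == 1

      # Hypothetical 4-station example: (cost, (station_a, station_b)), sorted by cost.
      links = [(1.0, (0, 1)), (2.0, (1, 2)), (2.5, (0, 2)), (4.0, (2, 3))]
      costs = [c for c, _ in links]
      ends = [e for _, e in links]
      for total, subset in subsets_by_cost(costs):
          if connected(subset, ends, 4):
              print("cost-optimal topology:", subset, "cost:", total)
              break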

  14. Advanced Energy Storage Management in Distribution Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Guodong; Ceylan, Oguzhan; Xiao, Bailu

    2016-01-01

    With increasing penetration of distributed generation (DG) in the distribution networks (DN), the secure and optimal operation of DN has become an important concern. In this paper, an iterative mixed integer quadratic constrained quadratic programming model to optimize the operation of a three phase unbalanced distribution system with high penetration of Photovoltaic (PV) panels, DG and energy storage (ES) is developed. The proposed model minimizes not only the operating cost, including fuel cost and purchasing cost, but also voltage deviations and power loss. The optimization model is based on the linearized sensitivity coefficients between state variables (e.g., node voltages) and control variables (e.g., real and reactive power injections of DG and ES). To avoid slow convergence when close to the optimum, a golden search method is introduced to control the step size and accelerate the convergence. The proposed algorithm is demonstrated on modified IEEE 13 nodes test feeders with multiple PV panels, DG and ES. Numerical simulation results validate the proposed algorithm. Various scenarios of system configuration are studied and some critical findings are concluded.

  15. An Insoluble Titanium-Lead Anode for Sulfate Electrolytes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferdman, Alla

    2005-05-11

    The project is devoted to the development of novel insoluble anodes for copper electrowinning and electrolytic manganese dioxide (EMD) production. The anodes are made of titanium-lead composite material produced by techniques of powder metallurgy, compaction of titanium powder, sintering and subsequent lead infiltration. The titanium-lead anode combines beneficial electrochemical behavior of a lead anode with high mechanical properties and corrosion resistance of a titanium anode. In the titanium-lead anode, the titanium stabilizes the lead, preventing it from spalling, and the lead sheathes the titanium, protecting it from passivation. Interconnections between manufacturing process, structure, composition and properties of the titanium-lead composite material were investigated. The material containing 20-30 vol.% of lead had the optimal combination of mechanical and electrochemical properties. Optimal process parameters to manufacture the anodes were identified. Prototypes having optimized composition and structure were produced for testing in operating conditions of copper electrowinning and EMD production. Bench-scale, mini-pilot scale and pilot scale tests were performed. The test anodes were of both a plate design and a flow-through cylindrical design. The cylindrical anodes were composed of cylinders containing titanium inner rods and fitting over titanium-lead bushings. The cylindrical design allows the electrolyte to flow through the anode, which enhances diffusion of the electrolyte reactants. The cylindrical anodes demonstrate higher mass transport capabilities and increased electrical efficiency compared to the plate anodes. Copper electrowinning represents the primary target market for the titanium-lead anode. A full-size cylindrical anode's performance in copper electrowinning conditions was monitored over a year. The test anode to cathode voltage was stable in the 1.8 to 2.0 volt range. Copper cathode morphology was very smooth and uniform. There was no measurable anode weight loss during this time period. Quantitative chemical analysis of the anode surface showed that the lead content after testing remained at its initial level. No lead dissolution or transfer from the anode to the product occurred. A key benefit of the titanium-lead anode design is that cobalt additions to copper electrolyte should be eliminated. Cobalt is added to the electrolyte to help stabilize the lead oxide surface of conventional lead anodes. The presence of the titanium intimately mixed with the lead should eliminate the need for cobalt stabilization of the lead surface. The anode should last twice as long as the conventional lead anode. Energy savings should be achieved due to minimizing and stabilizing the anode-cathode distance in the electrowinning cells. The anode is easily substitutable into existing tankhouses without a rectifier change. The copper electrowinning test data indicate that the titanium-lead anode is a good candidate for further testing as a possible replacement for a conventional lead anode. A key consideration is the cost. Titanium costs have increased. One of the ways to get the anode cost down is manufacturing the anodes with fewer cylinders. Additional prototypes having different numbers of cylinders were constructed for long-term commercial testing in a circuit without cobalt. The objective of the testing is to evaluate the need for cobalt, to investigate the effect of decreasing the number of cylinders on the anode performance, and to optimize the anode design further in order to meet the operating requirements, minimize the voltage, maximize the life of the anode, and balance this against a reasonable cost for the anode. It is anticipated that after testing of the additional prototypes, a whole-cell commercial test will be conducted to complete evaluation of the titanium-lead anode's costs/benefits.

  16. The cost-effectiveness of using chronic kidney disease risk scores to screen for early-stage chronic kidney disease.

    PubMed

    Yarnoff, Benjamin O; Hoerger, Thomas J; Simpson, Siobhan K; Leib, Alyssa; Burrows, Nilka R; Shrestha, Sundar S; Pavkov, Meda E

    2017-03-13

    Better treatment during early stages of chronic kidney disease (CKD) may slow progression to end-stage renal disease and decrease associated complications and medical costs. Achieving early treatment of CKD is challenging, however, because a large fraction of persons with CKD are unaware of having this disease. Screening for CKD is one important method for increasing awareness. We examined the cost-effectiveness of identifying persons for early-stage CKD screening (i.e., screening for moderate albuminuria) using published CKD risk scores. We used the CKD Health Policy Model, a micro-simulation model, to simulate the cost-effectiveness of using two published CKD risk scores, by Bang et al. and Kshirsagar et al., to identify persons in the US for CKD screening with testing for albuminuria. Alternative risk score thresholds were tested (0.20, 0.15, 0.10, 0.05, and 0.02) above which persons were assigned to receive screening at alternative intervals (1-, 2-, and 5-year) for follow-up screening if the first screening was negative. We examined incremental cost-effectiveness ratios (ICERs), incremental lifetime costs divided by incremental lifetime QALYs, relative to the next higher screening threshold to assess cost-effectiveness. Cost-effective scenarios were determined as those with ICERs less than $50,000 per QALY. Among the cost-effective scenarios, the optimal scenario was determined as the one that resulted in the highest lifetime QALYs. ICERs ranged from $8,823 per QALY to $124,626 per QALY for the Bang et al. risk score and $6,342 per QALY to $405,861 per QALY for the Kshirsagar et al. risk score. The Bang et al. risk score with a threshold of 0.02 and 2-year follow-up screening was found to be optimal because it had an ICER less than $50,000 per QALY and resulted in the highest lifetime QALYs. This study indicates that using these CKD risk scores may allow clinicians to cost-effectively identify a broader population for CKD screening with testing for albuminuria and potentially detect people with CKD at earlier stages of the disease than current approaches of screening only persons with diabetes or hypertension.
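
    The incremental comparison logic is easy to sketch. In the fragment below, the per-scenario lifetime costs and QALYs are invented placeholders (not outputs of the CKD Health Policy Model); only the $50,000/QALY willingness-to-pay threshold and the rule of comparing each threshold against the next higher one are taken from the abstract.

      # Hypothetical scenarios: (risk-score threshold, lifetime cost, lifetime QALYs)
      scenarios = [
          (0.20, 1000.0, 10.00),
          (0.15, 1150.0, 10.01),
          (0.10, 1400.0, 10.02),
          (0.05, 1800.0, 10.03),
          (0.02, 2200.0, 10.04),
      ]

      WTP = 50_000.0  # willingness-to-pay threshold, $/QALY, as in the study
      best = scenarios[0]
      # Simplified: each scenario is compared with the adjacent higher threshold;
      # a full analysis would compare against the last non-dominated scenario.
      for prev, cur in zip(scenarios, scenarios[1:]):
          icer = (cur[1] - prev[1]) / (cur[2] - prev[2])  # delta cost / delta QALY
          print(f"threshold {cur[0]:.2f}: ICER = ${icer:,.0f}/QALY")
          if icer < WTP and cur[2] > best[2]:
              best = cur  # cost-effective and more QALYs -> preferred
      # With these toy numbers the lowest threshold wins, echoing the abstract.
      print("optimal threshold:", best[0])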

  17. MEMS resonant load cells for micro-mechanical test frames: feasibility study and optimal design

    NASA Astrophysics Data System (ADS)

    Torrents, A.; Azgin, K.; Godfrey, S. W.; Topalli, E. S.; Akin, T.; Valdevit, L.

    2010-12-01

    This paper presents the design, optimization and manufacturing of a novel micro-fabricated load cell based on a double-ended tuning fork. The device geometry and operating voltages are optimized for maximum force resolution and range, subject to a number of manufacturing and electromechanical constraints. All optimizations are enabled by analytical modeling (verified by selected finite elements analyses) coupled with an efficient C++ code based on the particle swarm optimization algorithm. This assessment indicates that force resolutions of ~0.5-10 nN are feasible in vacuum (~1-50 mTorr), with force ranges as large as 1 N. Importantly, the optimal design for vacuum operation is independent of the desired range, ensuring versatility. Experimental verifications on a sub-optimal device fabricated using silicon-on-glass technology demonstrate a resolution of ~23 nN at a vacuum level of ~50 mTorr. The device demonstrated in this article will be integrated in a hybrid micro-mechanical test frame for unprecedented combinations of force resolution and range, displacement resolution and range, optical (or SEM) access to the sample, versatility and cost.

  18. Array automated assembly task, phase 2. Low cost silicon solar array project

    NASA Technical Reports Server (NTRS)

    Rhee, S. S.; Jones, G. T.; Allison, K. T.

    1978-01-01

    Several modifications instituted in the wafer surface preparation process served to significantly reduce the process cost, to 1.55 cents per peak watt in 1975 cents. Performance verification tests of a laser scanning system showed a limited capability to detect hidden cracks or defects, but with potential equipment modifications this cost-effective system could be rendered suitable for applications. Installation of an electroless nickel plating system was completed, along with optimization of the wafer plating process. The solder coating and flux removal process verification test was completed. An optimum temperature range of 500-550 C was found to produce uniform solder coating, with the restriction that a modified dipping procedure is utilized. Finally, the construction of the spray-on dopant equipment was completed.

  19. Design optimization of aircraft landing gear assembly under dynamic loading

    NASA Astrophysics Data System (ADS)

    Wong, Jonathan Y. B.

    As development cycles and prototyping iterations begin to decrease in the aerospace industry, it is important to develop and improve practical methodologies to meet all design metrics. This research presents an efficient methodology that applies high-fidelity multi-disciplinary design optimization techniques to commercial landing gear assemblies, for weight reduction, cost savings, and structural performance under dynamic loading. Specifically, a slave link subassembly was selected as the candidate to explore the feasibility of this methodology. The design optimization process utilized in this research was sectioned into three main stages: setup, optimization, and redesign. The first stage involved the creation and characterization of the models used throughout this research. The slave link assembly was modelled within a simplified landing gear test setup, replicating the behavior of the physical system. Through extensive review of the literature and collaboration with Safran Landing Systems, dynamic and structural behavior for the system were characterized and defined mathematically. Once defined, the characterized behaviors for the slave link assembly were then used to conduct a Multi-Body Dynamic (MBD) analysis to determine the dynamic and structural response of the system. These responses were then utilized in a topology optimization through the use of the Equivalent Static Load Method (ESLM). The results of the optimization were interpreted and later used to generate improved designs in terms of weight, cost, and structural performance under dynamic loading in stage three. The optimized designs were then validated using the model created for the MBD analysis of the baseline design. The design generation process employed two different approaches for post-processing the topology results produced. The first approach implemented a close replication of the topology results, resulting in a design with an overall peak stress increase of 74%, weight savings of 67%, and no apparent cost savings due to complex features present in the design. The second design approach focused on realizing complementary cost and weight savings. As a result, this design was able to achieve an overall peak stress increase of 6%, and weight and cost savings of 36% and 60%, respectively.

  20. Design and operation of interconnectors for solid oxide fuel cell stacks

    NASA Astrophysics Data System (ADS)

    Winkler, W.; Koeppen, J.

    Highly efficient combined cycles with solid oxide fuel cells (SOFC) need an integrated heat exchanger in the stack to reach efficiencies of about 80%. The stack costs must be lower than 1000 DM/kW. A newly developed welded metallic (Haynes HA 230) interconnector with a free-stretching planar SOFC and an integrated heat exchanger was tested in thermal cycling operation. The design allowed cycling of the SOFC without mechanical damage to the electrolyte in several tests. However, more tests and a further design optimization will be necessary. These results could indicate that commercial high-temperature alloys can be used as interconnector material in order to fulfil the cost requirements.

  1. Pareto-optimal phylogenetic tree reconciliation

    PubMed Central

    Libeskind-Hadas, Ran; Wu, Yi-Chieh; Bansal, Mukul S.; Kellis, Manolis

    2014-01-01

    Motivation: Phylogenetic tree reconciliation is a widely used method for reconstructing the evolutionary histories of gene families and species, hosts and parasites and other dependent pairs of entities. Reconciliation is typically performed using maximum parsimony, in which each evolutionary event type is assigned a cost and the objective is to find a reconciliation of minimum total cost. It is generally understood that reconciliations are sensitive to event costs, but little is understood about the relationship between event costs and solutions. Moreover, choosing appropriate event costs is a notoriously difficult problem. Results: We address this problem by giving an efficient algorithm for computing Pareto-optimal sets of reconciliations, thus providing the first systematic method for understanding the relationship between event costs and reconciliations. This, in turn, results in new techniques for computing event support values and, for cophylogenetic analyses, performing robust statistical tests. We provide new software tools and demonstrate their use on a number of datasets from evolutionary genomic and cophylogenetic studies. Availability and implementation: Our Python tools are freely available at www.cs.hmc.edu/~hadas/xscape. Contact: mukul@engr.uconn.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24932009

  2. Cost-effectiveness analysis of microscopic observation drug susceptibility test versus Xpert MTB/Rif test for diagnosis of pulmonary tuberculosis in HIV patients in Uganda.

    PubMed

    Walusimbi, Simon; Kwesiga, Brendan; Rodrigues, Rashmi; Haile, Melles; de Costa, Ayesha; Bogg, Lennart; Katamba, Achilles

    2016-10-10

    Microscopic Observation Drug Susceptibility (MODS) and Xpert MTB/Rif (Xpert) are highly sensitive tests for diagnosis of pulmonary tuberculosis (PTB). This study evaluated the cost effectiveness of utilizing MODS versus Xpert for diagnosis of active pulmonary TB in HIV infected patients in Uganda. A decision analysis model comparing MODS versus Xpert for TB diagnosis was used. Costs were estimated by measuring and valuing relevant resources required to perform the MODS and Xpert tests. Diagnostic accuracy data for the tests were obtained from systematic reviews involving HIV infected patients. We calculated base values for unit costs and varied several assumptions to obtain the range estimates. Cost effectiveness was expressed as cost per TB patient diagnosed for each of the two diagnostic strategies. Base case analysis was performed using the base estimates for unit cost and diagnostic accuracy of the tests. Sensitivity analysis was performed using a range of value estimates for resources, prevalence, number of tests and diagnostic accuracy. The unit cost of MODS was US$ 6.53 versus US$ 12.41 for Xpert. Consumables accounted for 59 % (US$ 3.84 of 6.53) of the unit cost for MODS and 84 % (US$ 10.37 of 12.41) of the unit cost for Xpert. The cost effectiveness ratio of the algorithm using MODS was US$ 34 per TB patient diagnosed compared to US$ 71 for the algorithm using Xpert. The algorithm using MODS was more cost-effective than the algorithm using Xpert for a wide range of different values of accuracy, cost and TB prevalence. The threshold cost at which the algorithm using Xpert would become optimal over the algorithm using MODS was US$ 5.92. MODS was more cost-effective than Xpert for the diagnosis of PTB among HIV patients in our setting. Efforts to scale-up MODS therefore need to be explored. However, since other non-economic factors may still favour the use of Xpert, the current cost of the Xpert cartridge still needs to be reduced by more than half in order to make it economically competitive with MODS.
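
    The headline ratio, cost per TB patient diagnosed, follows directly from unit cost, test sensitivity and prevalence. The sketch below uses the unit costs quoted above but assumed round numbers for sensitivity and prevalence, so it only approximately reproduces the US$ 34 versus US$ 71 figures.

      def cost_per_tb_diagnosed(unit_cost, sensitivity, prevalence, n_tested=1000):
          """Cost-effectiveness ratio as used above: total testing cost divided
          by the number of true TB cases detected. Illustrative only."""
          total_cost = unit_cost * n_tested
          detected = n_tested * prevalence * sensitivity
          return total_cost / detected

      # Unit costs from the abstract; sensitivity and prevalence are assumed
      # round numbers, not the study's systematic-review estimates.
      print("MODS :", round(cost_per_tb_diagnosed(6.53, 0.9, 0.2), 2))
      print("Xpert:", round(cost_per_tb_diagnosed(12.41, 0.9, 0.2), 2))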

  3. Algorithms for optimization of branching gravity-driven water networks

    NASA Astrophysics Data System (ADS)

    Dardani, Ian; Jones, Gerard F.

    2018-05-01

    The design of a water network involves the selection of pipe diameters that satisfy pressure and flow requirements while considering cost. A variety of design approaches can be used to optimize for hydraulic performance or reduce costs. To help designers select an appropriate approach in the context of gravity-driven water networks (GDWNs), this work assesses three cost-minimization algorithms on six moderate-scale GDWN test cases. Two algorithms, a backtracking algorithm and a genetic algorithm, use a set of discrete pipe diameters, while a new calculus-based algorithm produces a continuous-diameter solution which is mapped onto a discrete-diameter set. The backtracking algorithm finds the global optimum for all but the largest of cases tested, for which its long runtime makes it an infeasible option. The calculus-based algorithm's discrete-diameter solution produced slightly higher-cost results but was more scalable to larger network cases. Furthermore, the new calculus-based algorithm's continuous-diameter and mapped solutions provided lower and upper bounds, respectively, on the discrete-diameter global optimum cost, where the mapped solutions were typically within one diameter size of the global optimum. The genetic algorithm produced solutions even closer to the global optimum with consistently short run times, although slightly higher solution costs were seen for the larger network cases tested. The results of this study highlight the advantages and weaknesses of each GDWN design method including closeness to the global optimum, the ability to prune the solution space of infeasible and suboptimal candidates without missing the global optimum, and algorithm run time. We also extend an existing closed-form model of Jones (2011) to include minor losses and a more comprehensive two-part cost model, which realistically applies to pipe sizes that span a broad range typical of GDWNs of interest in this work, and for smooth and commercial steel roughness values.
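
    A minimal version of the backtracking approach can be shown on a single gravity main, a simplification of the branching networks treated in the paper. The diameters, unit costs, design flow and head budget below are invented; the head-loss model is the standard SI Hazen-Williams formula, and pruning on accumulated cost keeps the search from exploring designs that cannot beat the incumbent.

      import math

      DIAMS = [0.05, 0.075, 0.10, 0.15]                        # m, sorted ascending
      COST = {0.05: 4.0, 0.075: 7.0, 0.10: 11.0, 0.15: 20.0}   # $/m, hypothetical

      def head_loss(d, length, q, c_hw=150.0):
          """Hazen-Williams head loss (SI form) for flow q [m^3/s] in a pipe
          of diameter d [m] and the given length [m]."""
          return 10.67 * length * q ** 1.852 / (c_hw ** 1.852 * d ** 4.87)

      def size_pipes(lengths, q, head_budget):
          best = {"cost": math.inf, "choice": None}
          def recurse(i, used_head, cost_so_far, choice):
              if cost_so_far >= best["cost"]:
                  return                       # prune: already too expensive
              if i == len(lengths):
                  best["cost"], best["choice"] = cost_so_far, choice[:]
                  return
              for d in DIAMS:
                  hl = head_loss(d, lengths[i], q)
                  if used_head + hl <= head_budget:   # feasible so far
                      choice.append(d)
                      recurse(i + 1, used_head + hl,
                              cost_so_far + COST[d] * lengths[i], choice)
                      choice.pop()
          recurse(0, 0.0, 0.0, [])
          return best

      print(size_pipes(lengths=[500.0, 300.0, 200.0], q=0.005, head_budget=30.0))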

  4. Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered

    PubMed Central

    2011-01-01

    Background Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. Conclusions The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023

  5. Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.

    PubMed

    Mathiassen, Svend Erik; Bolin, Kristian

    2011-05-21

    Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.
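
    The linear-cost special case discussed above can be sketched directly, collapsing the three-stage model to its two main decision variables: the number of subjects n and the number of measurements per subject k. The budget, unit costs and variance components below are assumed illustrative values.

      def optimal_allocation(budget, c_subj, c_meas, var_bs, var_ws, k_max=50):
          """Choose n subjects and k measurements per subject to minimize the
          variance of the estimated exposure mean under a fixed budget.
          var_bs/var_ws are between- and within-subject variance components;
          c_subj/c_meas are unit costs of recruiting and measuring."""
          best = None
          for k in range(1, k_max + 1):
              n = int(budget // (c_subj + c_meas * k))  # subjects affordable at this k
              if n < 1:
                  continue
              var_mean = var_bs / n + var_ws / (n * k)  # precision of the mean
              if best is None or var_mean < best[0]:
                  best = (var_mean, n, k)
          return best

      # Cheap measurements relative to recruitment favour k > 1, illustrating
      # the deviations from the "one occasion per subject" rule noted above.
      print(optimal_allocation(budget=10_000, c_subj=200, c_meas=5,
                               var_bs=1.0, var_ws=4.0))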

  6. SATCOM simulator speeds MSS deployment and lowers costs

    NASA Technical Reports Server (NTRS)

    Carey, Tim; Hassun, Roland; Koberstein, Dave

    1993-01-01

    Mobile satellite systems (MSS) are being proposed and licensed at an accelerating rate. How can the design, manufacture, and performance of these systems be optimized at costs that allow a reasonable return on investment? The answer is the use of system simulation techniques beginning early in the system design and continuing through integration, pre- and post-launch monitoring, and in-orbit monitoring. This paper focuses on using commercially available, validated simulation instruments to deliver accurate, repeatable, and cost effective measurements throughout the life of a typical mobile satellite system. A satellite communications test set is discussed that provides complete parametric test capability with a significant improvement in measurement speed for manufacturing, integration, and pre-launch and in-orbit testing. The test set can simulate actual up and down link traffic conditions to evaluate the effects of system impairments, propagation and multipath on bit error rate (BER), channel capacity and transponder and system load balancing. Using a standard set of commercial instruments to deliver accurate, verifiable measurements anywhere in the world speeds deployment, generates measurement confidence, and lowers total system cost.

  7. How to design the cost-effectiveness appraisal process of new healthcare technologies to maximise population health: A conceptual framework.

    PubMed

    Johannesen, Kasper M; Claxton, Karl; Sculpher, Mark J; Wailoo, Allan J

    2018-02-01

    This paper presents a conceptual framework to analyse the design of the cost-effectiveness appraisal process of new healthcare technologies. The framework characterises the appraisal processes as a diagnostic test aimed at identifying cost-effective (true positive) and non-cost-effective (true negative) technologies. Using the framework, factors that influence the value of operating an appraisal process, in terms of net gain to population health, are identified. The framework is used to gain insight into current policy questions including (a) how rigorous the process should be, (b) who should have the burden of proof, and (c) how optimal design changes when allowing for appeals, price reductions, resubmissions, and re-evaluations. The paper demonstrates that there is no one optimal appraisal process and the process should be adapted over time and to the specific technology under assessment. Optimal design depends on country-specific features of (future) technologies, for example, effect, price, and size of the patient population, which might explain the difference in appraisal processes across countries. It is shown that burden of proof should be placed on the producers and that the impact of price reductions and patient access schemes on the producer's price setting should be considered when designing the appraisal process. Copyright © 2017 John Wiley & Sons, Ltd.
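
    The framework's core analogy, an appraisal process acting as a diagnostic test on technologies, can be rendered as a toy expected-value calculation. All numbers and the simple two-outcome structure below are assumptions; the point is only that a more rigorous (more accurate, more costly) process is not automatically better, echoing the paper's conclusion that there is no one optimal process.

      def expected_net_health(p_cost_effective, sensitivity, specificity,
                              gain_true_pos, loss_false_pos, appraisal_cost_qaly):
          """Expected net population health gain from operating an appraisal
          process, in QALYs per technology appraised (toy model)."""
          tp = p_cost_effective * sensitivity * gain_true_pos        # approve good tech
          fp = (1 - p_cost_effective) * (1 - specificity) * loss_false_pos
          return tp - fp - appraisal_cost_qaly

      # A rigorous process (high accuracy, high cost) vs a lighter-touch one;
      # which wins depends on the assumed technology mix and event values.
      print(expected_net_health(0.5, 0.95, 0.95, 100.0, 80.0, 5.0))
      print(expected_net_health(0.5, 0.80, 0.80, 100.0, 80.0, 1.0))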

  8. The costs of avoiding environmental impacts from shale-gas surface infrastructure.

    PubMed

    Milt, Austin W; Gagnolet, Tamara D; Armsworth, Paul R

    2016-12-01

    Growing energy demand has increased the need to manage conflicts between energy production and the environment. As an example, shale-gas extraction requires substantial surface infrastructure, which fragments habitats, erodes soils, degrades freshwater systems, and displaces rare species. Strategic planning of shale-gas infrastructure can reduce trade-offs between economic and environmental objectives, but the specific nature of these trade-offs is not known. We estimated the cost of avoiding impacts from land-use change on forests, wetlands, rare species, and streams from shale-energy development within leaseholds. We created software for optimally siting shale-gas surface infrastructure to minimize its environmental impacts at reasonable construction cost. We visually assessed sites before infrastructure optimization to test whether such inspection could be used to predict whether impacts could be avoided at the site. On average, up to 38% of aggregate environmental impacts of infrastructure could be avoided for 20% greater development costs by spatially optimizing infrastructure. However, we found trade-offs between environmental impacts and costs among sites. In visual inspections, we often distinguished between sites that could be developed to avoid impacts at relatively low cost (29%) and those that could not (20%). Reductions in a metric of aggregate environmental impact could be largely attributed to potential displacement of rare species, sedimentation, and forest fragmentation. Planners and regulators can estimate and use heterogeneous trade-offs among development sites to create industry-wide improvements in environmental performance and do so at reasonable costs by, for example, leveraging low-cost avoidance of impacts at some sites to offset others. This could require substantial effort, but the results and software we provide can facilitate the process. © 2016 Society for Conservation Biology.

  9. Family System of Advanced Charring Ablators for Planetary Exploration Missions

    NASA Technical Reports Server (NTRS)

    Congdon, William M.; Curry, Donald M.

    2005-01-01

    Advanced Ablators Program Objectives: 1) Flight-ready (TRL-6) ablative heat shields for deep-space missions; 2) Diversity of selection from family-system approach; 3) Minimum weight systems with high reliability; 4) Optimized formulations and processing; 5) Fully characterized properties; and 6) Low-cost manufacturing. Definition and integration of candidate lightweight structures. Test and analysis database to support flight-vehicle engineering. Results from production scale-up studies and production-cost analyses.

  10. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, and some others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.

  11. [Cost analysis of rapid methods for diagnosis of multidrug resistant tuberculosis in different epidemiologic groups in Perú].

    PubMed

    Solari, Lely; Gutiérrez, Alfonso; Suárez, Carmen; Jave, Oswaldo; Castillo, Edith; Yale, Gloria; Ascencios, Luis; Quispe, Neyda; Valencia, Eddy; Suárez, Víctor

    2011-01-01

    To evaluate the costs of three methods for the diagnosis of drug susceptibility in tuberculosis, and to compare the cost per case of multidrug-resistant tuberculosis (MDR TB) diagnosed with these (MODS, GRIESS and Genotype MTBDR plus®) in 4 epidemiologic groups in Peru. On the basis of programmatic figures, we divided the population into 4 groups: new cases from Lima/Callao, new cases from other provinces, previously treated patients from Lima/Callao and previously treated patients from other provinces. We calculated the costs of each test with the standard methodology of the Ministry of Health, from the perspective of the health system. Finally, we calculated the cost per patient diagnosed with MDR TB for each epidemiologic group. The estimated costs per test for MODS, GRIESS, and Genotype MTBDR plus® were 14.83, 15.51 and 176.41 nuevos soles, respectively (the local currency; 1 nuevo sol = 0.36 US dollars as of August 2011). The cost per patient diagnosed with GRIESS and MODS was lower than 200 nuevos soles in 3 out of the 4 groups. The costs per diagnosed MDR TB case were higher than 2,000 nuevos soles with Genotype MTBDR plus® in the two groups of new patients, and lower than 1,000 nuevos soles in the group of previously treated patients. In high-prevalence groups, like the previously treated patients, the costs per diagnosis of MDR TB with the 3 evaluated tests were low; nevertheless, the costs with the molecular test in the low-prevalence groups were high. The use of the molecular tests must be optimized in high prevalence areas.

  12. LSSA large area silicon sheet task continuous Czochralski process development

    NASA Technical Reports Server (NTRS)

    Rea, S. N.

    1978-01-01

    A Czochralski crystal growing furnace was converted to a continuous growth facility by installation of a premelter to provide molten silicon flow into the primary crucible. The basic furnace is operational and several trial crystals were grown in the batch mode. Numerous premelter configurations were tested, both in laboratory-scale equipment and in the actual furnace. The best arrangement tested to date is a vertical, cylindrical graphite heater containing a small fused-silica test-tube liner in which the incoming silicon is melted and flows into the primary crucible. Economic modeling of the continuous Czochralski process indicates that for 10 cm diameter crystal, 100 kg furnace runs of four or five crystals each are near-optimal. Costs tend to asymptote at the 100 kg level, so little additional cost improvement occurs for larger runs. For these conditions, a crystal cost in equivalent wafer area of around $20/sq m, exclusive of polysilicon and slicing, was obtained.

  13. Workstation-Based Simulation for Rapid Prototyping and Piloted Evaluation of Control System Designs

    NASA Technical Reports Server (NTRS)

    Mansur, M. Hossein; Colbourne, Jason D.; Chang, Yu-Kuang; Aiken, Edwin W. (Technical Monitor)

    1998-01-01

    The development and optimization of flight control systems for modern fixed- and rotary-wing aircraft consume a significant portion of the overall time and cost of aircraft development. Substantial savings can be achieved if the time and cost required to develop and flight test the control system are reduced. To bring about such reductions, software tools such as Matlab/Simulink are being used to readily implement block diagrams and rapidly evaluate the expected responses of the completed system. Moreover, tools such as CONDUIT (CONtrol Designer's Unified InTerface) have been developed that enable controls engineers to optimize their control laws and ensure that all the relevant quantitative criteria are satisfied, all within a fully interactive, user-friendly, unified software environment.

  14. Investigation of Cost and Energy Optimization of Drinking Water Distribution Systems.

    PubMed

    Cherchi, Carla; Badruzzaman, Mohammad; Gordon, Matthew; Bunn, Simon; Jacangelo, Joseph G

    2015-11-17

    Holistic management of water and energy resources through energy and water quality management systems (EWQMSs) has traditionally aimed at energy cost reduction, with limited or no emphasis on energy efficiency or greenhouse gas minimization. This study expanded the existing EWQMS framework and determined the impact of different management strategies for energy cost and energy consumption (e.g., carbon footprint) reduction on system performance at two drinking water utilities in California (United States). The results showed that optimizing for cost led to cost reductions of 4% (Utility B, summer) to 48% (Utility A, winter). The energy optimization strategy was successfully able to find the lowest energy use operation and achieved energy usage reductions of 3% (Utility B, summer) to 10% (Utility A, winter). The findings of this study revealed that there may be a trade-off between cost optimization (dollars) and energy use (kilowatt-hours), particularly in the summer, when optimizing the system to minimize energy use incurred cost increases of 64% and 184% compared with the cost optimization scenario. Water age simulations through hydraulic modeling did not reveal any adverse effects on the water quality in the distribution system or in tanks from pump schedule optimization targeting either cost or energy minimization.

  15. Optimizing the design of a reproduction toxicity test with the pond snail Lymnaea stagnalis.

    PubMed

    Charles, Sandrine; Ducrot, Virginie; Azam, Didier; Benstead, Rachel; Brettschneider, Denise; De Schamphelaere, Karel; Filipe Goncalves, Sandra; Green, John W; Holbech, Henrik; Hutchinson, Thomas H; Faber, Daniel; Laranjeiro, Filipe; Matthiessen, Peter; Norrgren, Leif; Oehlmann, Jörg; Reategui-Zirena, Evelyn; Seeland-Fremer, Anne; Teigeler, Matthias; Thome, Jean-Pierre; Tobor Kaplon, Marysia; Weltje, Lennart; Lagadic, Laurent

    2016-11-01

    This paper presents the results from two ring-tests addressing the feasibility, robustness and reproducibility of a reproduction toxicity test with the freshwater gastropod Lymnaea stagnalis (RENILYS strain). Sixteen laboratories (from inexperienced to expert laboratories in mollusc testing) from nine countries participated in these ring-tests. Survival and reproduction were evaluated in L. stagnalis exposed to cadmium, tributyltin, prochloraz and trenbolone according to an OECD draft Test Guideline. In total, 49 datasets were analysed to assess the practicability of the proposed experimental protocol, and to estimate the between-laboratory reproducibility of toxicity endpoint values. The statistical analysis of count data (number of clutches or eggs per individual-day) leading to ECx estimation was specifically developed and automated through a free web-interface. Based on a complementary statistical analysis, the optimal test duration was established and the most sensitive and cost-effective reproduction toxicity endpoint was identified, to be used as the core endpoint. This validation process and the resulting optimized protocol were used to consolidate the OECD Test Guideline for the evaluation of reproductive effects of chemicals in L. stagnalis. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Thermal/Structural Tailoring of Engine Blades (T/STAEBL) User's manual

    NASA Technical Reports Server (NTRS)

    Brown, K. W.

    1994-01-01

    The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a computer code that is able to perform numerical optimizations of cooled jet engine turbine blades and vanes. These optimizations seek an airfoil design of minimum operating cost that satisfies realistic design constraints. This report documents the organization of the T/STAEBL computer program, its design and analysis procedure, its optimization procedure, and provides an overview of the input required to run the program, as well as the computer resources required for its effective use. Additionally, usage of the program is demonstrated through a validation test case.

  17. Thermal/Structural Tailoring of Engine Blades (T/STAEBL): User's manual

    NASA Astrophysics Data System (ADS)

    Brown, K. W.

    1994-03-01

    The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a computer code that is able to perform numerical optimizations of cooled jet engine turbine blades and vanes. These optimizations seek an airfoil design of minimum operating cost that satisfies realistic design constraints. This report documents the organization of the T/STAEBL computer program, its design and analysis procedure, its optimization procedure, and provides an overview of the input required to run the program, as well as the computer resources required for its effective use. Additionally, usage of the program is demonstrated through a validation test case.

  18. Structural Tailoring of Advanced Turboprops (STAT)

    NASA Technical Reports Server (NTRS)

    Brown, Kenneth W.

    1988-01-01

    This interim report describes the progress achieved in the Structural Tailoring of Advanced Turboprops (STAT) program, which was developed to perform numerical optimizations on highly swept propfan blades. The optimization procedure seeks to minimize an objective function, defined as either direct operating cost or aeroelastic differences between a blade and its scaled model, by tuning internal and external geometry variables that must satisfy realistic blade design constraints. This report provides a detailed description of the input, optimization procedures, approximate analyses and refined analyses, as well as validation test cases for the STAT program. In addition, conclusions and recommendations are summarized.

  19. Affordable Design: A Methodology to Implement Process-Based Manufacturing Cost into the Traditional Performance-Focused Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Bao, Han P.; Samareh, J. A.

    2000-01-01

    The primary objective of this paper is to demonstrate the use of process-based manufacturing and assembly cost models in a traditional performance-focused multidisciplinary design and optimization process. The use of automated cost-performance analysis is an enabling technology that could bring realistic process-based manufacturing and assembly cost into multidisciplinary design and optimization. In this paper, we present a new methodology for incorporating process costing into a standard multidisciplinary design optimization process. Material, manufacturing process, and assembly process costs could then be used as the objective function for the optimization method. A case study involving forty-six different configurations of a simple wing is presented, indicating that a design based on performance criteria alone may not necessarily be the most affordable as far as manufacturing and assembly cost is concerned.

  20. Construction Performance Optimization toward Green Building Premium Cost Based on Greenship Rating Tools Assessment with Value Engineering Method

    NASA Astrophysics Data System (ADS)

    Latief, Yusuf; Berawi, Mohammed Ali; Basten, Van; Riswanto; Budiman, Rachmat

    2017-07-01

    The green building concept has become important in the current building life cycle to mitigate environmental issues. The purpose of this paper is to optimize building construction performance with respect to the green building premium cost, achieving the targeted green building rating while optimizing life cycle cost. This study therefore helps building stakeholders determine building fixtures to achieve green building certification targets. Empirically, the paper collects data on green buildings in the Indonesian construction industry, such as green building fixtures, initial cost, operational and maintenance cost, and certification score achievement. Green building fixtures were then optimized with the value engineering method, based on building function and cost aspects. Findings indicate that construction performance optimization improved green building certification achievement by increasing energy and water efficiency factors while managing life cycle cost effectively, especially for the chosen green building fixtures.

  1. Removing Barriers for Effective Deployment of Intermittent Renewable Generation

    NASA Astrophysics Data System (ADS)

    Arabali, Amirsaman

    The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with a hybrid renewable generation (including wind and PV) and energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system including the cost of renewable generation and storage, subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint, which shows to what degree the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and an energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated through different scenarios. Using a compromise-solution method, the best compromise between the reliability and cost will be realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine the storage application and its optimal placement for the social cost and transmission congestion relief of wind integration. Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. In order to mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment and reliability of the system as the objective functions. A Non-dominated Sorting Genetic Algorithm (NSGA II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker's preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Test System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.

  2. Reconstruction of a piecewise constant conductivity on a polygonal partition via shape optimization in EIT

    NASA Astrophysics Data System (ADS)

    Beretta, Elena; Micheletti, Stefano; Perotto, Simona; Santacesaria, Matteo

    2018-01-01

    In this paper, we develop a shape optimization-based algorithm for the electrical impedance tomography (EIT) problem of determining a piecewise constant conductivity on a polygonal partition from boundary measurements. The key tool is to use a distributed shape derivative of a suitable cost functional with respect to movements of the partition. Numerical simulations showing the robustness and accuracy of the method are presented for simulated test cases in two dimensions.

  3. OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE - A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnis Judzis

    2002-10-01

    This document details the progress to date on the OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE -- A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING contract for the quarter starting July 2002 through September 2002. Even though we are awaiting the optimization portion of the testing program, accomplishments include the following: (1) Smith International agreed to participate in the DOE Mud Hammer program. (2) Smith International chromed collars for upcoming benchmark tests at TerraTek, now scheduled for 4Q 2002. (3) ConocoPhillips had a field trial of the Smith fluid hammer offshore Vietnam. The hammer functioned properly, though the well encountered hole conditions and reaming problems. ConocoPhillips plans another field trial as a result. (4) DOE/NETL extended the contract for the fluid hammer program to allow Novatek to ''optimize'' their much delayed tool to 2003 and to allow Smith International to add ''benchmarking'' tests in light of SDS Digger Tools' current financial inability to participate. (5) ConocoPhillips joined the Industry Advisors for the mud hammer program. (6) TerraTek acknowledges Smith International, BP America, PDVSA, and ConocoPhillips for cost-sharing the Smith benchmarking tests, allowing extension of the contract to complete the optimizations.

  4. Issue a Boil-Water Advisory or Wait for Definitive Information? A Decision Analysis

    PubMed Central

    Wagner, Michael M.; Wallstrom, Garrick L.; Onisko, Agnieszka

    2005-01-01

    Objective: Study the decision to issue a boil-water advisory in response to a spike in sales of diarrhea remedies or wait 72 hours for the results of definitive testing of water and people. Methods: Decision analysis. Results: In the base-case analysis, the optimal decision is test-and-wait. If the cost of issuing a boil-water advisory is less than 13.92 cents per person per day, the optimal decision is to issue the boil-water advisory immediately. Conclusions: Decisions based on surveillance data that are suggestive but not conclusive about the existence of a disease outbreak can be modeled. PMID:16779145
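
    The break-even logic of such a decision analysis can be reproduced with a two-branch expected-cost comparison. In the sketch below only the role of the 13.92-cents figure is echoed; the prior probability, illness cost, and advisory cost are hypothetical stand-ins, chosen so the toy model's break-even lands near 14 cents per person per day.

    ```python
    # Toy expected-cost comparison for "advise now" vs. "test and wait".
    # All probabilities and costs are hypothetical, not the paper's inputs.

    P_OUTBREAK = 0.05      # assumed prior that the sales spike is a real outbreak
    ADVISORY_COST = 0.20   # assumed $/person/day to sustain a boil-water advisory
    ILLNESS_COST = 2.78    # assumed $/person/day of illness during an outbreak
    WAIT_DAYS = 3          # 72 hours for definitive water and clinical testing

    # Issue immediately: pay the advisory cost for the wait period regardless.
    cost_issue = ADVISORY_COST * WAIT_DAYS

    # Wait: risk illness costs during the wait if there really is an outbreak.
    cost_wait = P_OUTBREAK * ILLNESS_COST * WAIT_DAYS

    print("issue now:", cost_issue, " test and wait:", cost_wait)
    print("optimal:", "issue advisory" if cost_issue < cost_wait else "test and wait")
    # Break-even advisory cost in this toy model: P_OUTBREAK * ILLNESS_COST
    print("break-even advisory cost ($/person/day):", P_OUTBREAK * ILLNESS_COST)
    ```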

  5. Simulation and Optimization of Large Scale Subsurface Environmental Impacts; Investigations, Remedial Design and Long Term Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deschaine, L.M.; Chalmers Univ. of Technology, Dept. of Physical Resources, Complex Systems Group, Goteborg

    2008-07-01

    The global impact to human health and the environment from large scale chemical / radionuclide releases is well documented. Examples are the widespread release of radionuclides from the Chernobyl nuclear reactors, the mobilization of arsenic in Bangladesh, the formation of Environmental Protection Agencies in the United States, Canada and Europe, and the like. The fiscal costs of addressing and remediating these issues on a global scale are astronomical, but then so are the fiscal and human health costs of ignoring them. An integrated methodology for optimizing the response(s) to these issues is needed. This work addresses development of optimal policy design for large scale, complex, environmental issues. It discusses the development, capabilities, and application of a hybrid system of algorithms that optimizes the environmental response. It is important to note that 'optimization' does not singularly refer to cost minimization, but to the effective and efficient balance of cost, performance, risk, management, and societal priorities along with uncertainty analysis. This tool integrates all of these elements into a single decision framework. It provides a consistent approach to designing optimal solutions that are tractable, traceable, and defensible. The system is modular and scalable. It can be applied either as individual components or in total. By developing the approach in a complex systems framework, the solution methodology represents a significant improvement over the non-optimal 'trial and error' approach to environmental response(s). Subsurface environmental processes are represented by linear and non-linear, elliptic and parabolic equations. The state equations solved using numerical methods include multi-phase flow (water, soil gas, NAPL) and multicomponent transport (radionuclides, heavy metals, volatile organics, explosives, etc.). Genetic programming is used to generate the simulators either when simulation models do not exist, or to extend the accuracy of them. The uncertainty and sparse nature of information in earth science simulations necessitate stochastic representations. For discussion purposes, the solution to these site-wide challenges is divided into three sub-components: plume finding, long term monitoring, and site-wide remediation. Plume finding is the optimal estimation of the plume fringe(s) at a specified time. It is optimized by fusing geo-stochastic flow and transport simulations with the information content of data using a Kalman filter. The result is an optimal monitoring sensor network; the decision variable is the location(s) of sensors in three dimensions. Long term monitoring extends this approach, and integrates the spatial-time correlations to optimize the decision variables of where to sample and when to sample over the project life cycle. Optimization of location and timing of samples to meet the desired accuracy of temporal plume movement is accomplished using enumeration or genetic algorithms. The remediation optimization solves the multi-component, multiphase system of equations and incorporates constraints on life-cycle costs, maximum annual costs, maximum allowable annual discharge (for assessing the monitored natural attenuation solution), and constraints on where remedial system component(s) can be located; management overrides to force certain solutions to be chosen are also incorporated in the solution design. It uses a suite of optimization techniques, including the outer approximation method, Lipschitz global optimization, genetic algorithms, and the like. The automated optimal remedial design algorithm requires that a stable simulator be available for the simulated process. This is commonly the case for all above specifications sans true three-dimensional multiphase flow. Much work is currently being conducted in the industry to develop stable 3D, three-phase simulators. If needed, an interim heuristic algorithm is available to get close to optimal for these conditions. This system process provides the full capability to optimize multi-source, multiphase, and multicomponent sites. The results of applying just components of these algorithms have produced predicted savings of as much as $90,000,000 (US) when compared to alternative solutions. Investment in a pilot program to test the model saved 100% of the $20,000,000 predicted for the smaller test implementation. This was done without loss of effectiveness, and received an award from the Vice President - and now Nobel peace prize winner - Al Gore of the United States. (authors)
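
    The plume-finding step fuses transport-model forecasts with sparse well data via a Kalman filter. The linear-Gaussian sketch below is a generic stand-in for the geo-stochastic simulators described, using a toy 1-D advection/dispersion operator and a single monitoring well; every matrix and measurement is invented.

    ```python
    import numpy as np

    # State x: contaminant concentration at n grid cells.
    # A: stand-in linear transport operator; one well samples cell 7.
    n = 20
    A = 0.9 * np.eye(n) + 0.05 * np.eye(n, k=1) + 0.05 * np.eye(n, k=-1)
    Q = 0.01 * np.eye(n)    # process (model) noise covariance
    R = np.array([[0.1]])   # measurement noise at the sampled well
    H = np.zeros((1, n)); H[0, 7] = 1.0  # observe cell 7 only

    x = np.zeros(n); x[0] = 1.0          # initial estimate: source at cell 0
    P = np.eye(n)                        # estimate covariance

    for z in [0.3, 0.4, 0.45]:           # synthetic well measurements over time
        # Predict: push the plume estimate through the transport model.
        x = A @ x
        P = A @ P @ A.T + Q
        # Update: fuse the well sample; K weights model vs. data uncertainty.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (np.array([z]) - H @ x)).ravel()
        P = (np.eye(n) - K @ H) @ P

    print("posterior concentration estimate:", np.round(x, 3))
    ```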

  6. Topography-based Flood Planning and Optimization Capability Development Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judi, David R.; Tasseff, Byron A.; Bent, Russell W.

    2014-02-26

    Globally, water-related disasters are among the most frequent and costly natural hazards. Flooding inflicts catastrophic damage on critical infrastructure and population, resulting in substantial economic and social costs. NISAC is developing LeveeSim, a suite of nonlinear and network optimization models, to predict optimal barrier placement to protect critical regions and infrastructure during flood events. LeveeSim currently includes a high-performance flood model to simulate overland flow, as well as a network optimization model to predict optimal barrier placement during a flood event. The LeveeSim suite models the effects of flooding in predefined regions. By manipulating a domain’s underlying topography, developers altered flood propagation to reduce detrimental effects in areas of interest. This numerical altering of a domain’s topography is analogous to building levees, placing sandbags, etc. To induce optimal changes in topography, NISAC used a novel application of an optimization algorithm to minimize flooding effects in regions of interest. To develop LeveeSim, NISAC constructed and coupled hydrodynamic and optimization algorithms. NISAC first implemented its existing flood modeling software to use massively parallel graphics processing units (GPUs), which allowed for the simulation of larger domains and longer timescales. NISAC then implemented a network optimization model to predict optimal barrier placement based on output from flood simulations. As proof of concept, NISAC developed five simple test scenarios, and optimized topographic solutions were compared with intuitive solutions. Finally, as an early validation example, barrier placement was optimized to protect an arbitrary region in a simulation of the historic Taum Sauk dam breach.

  7. OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE - A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnis Judzis

    2003-07-01

    This document details the progress to date on the ''OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE--A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING'' contract for the quarter starting April 2003 through June 2003. The DOE and TerraTek continue to wait for Novatek on the optimization portion of the testing program (they are completely rebuilding their fluid hammer). Accomplishments included the following: (1) Hughes Christensen has recently expressed interest in the possibility of a program to examine cutter impact testing, which would be useful for a better understanding of the physics of rock impact. Their interest, however, is not necessarily fluid hammers, but to use the information for drilling bit development. (2) Novatek (cost-sharing supplier of tools) has informed the DOE project manager that their tool may not be ready for ''optimization'' testing late summer 2003 (August-September timeframe) as originally anticipated. During 3Q Novatek plans to meet with TerraTek to discuss progress with their tool for 4Q 2003 testing. (3) A task for an addendum to the hammer project related to cutter impact studies was written during 2Q 2003. (4) Smith International internally is upgrading their hammer for the optimization testing phase. One currently known area of improvement is their development program to significantly increase the hammer blow energy.

  8. Optimal numbers of matings: the conditional balance between benefits and costs of mating for females of a nuptial gift-giving spider.

    PubMed

    Toft, S; Albo, M J

    2015-02-01

    In species where females gain a nutritious nuptial gift during mating, the balance between benefits and costs of mating may depend on access to food. This means that there is not one optimal number of matings for the female but a range of optimal mating numbers. With increasing food availability, the optimal number of matings for a female should vary from the number necessary only for fertilization of her eggs to the number needed also for producing these eggs. In three experimental series, the average number of matings for females of the nuptial gift-giving spider Pisaura mirabilis before egg sac construction varied from 2 to 16 with food-limited females generally accepting more matings than well-fed females. Minimal level of optimal mating number for females at satiation feeding conditions was predicted to be 2-3; in an experimental test, the median number was 2 (range 0-4). Multiple mating gave benefits in terms of increased fecundity and increased egg hatching success up to the third mating, and it had costs in terms of reduced fecundity, reduced egg hatching success after the third mating, and lower offspring size. The level of polyandry seems to vary with the female optimum, regulated by a satiation-dependent resistance to mating, potentially leaving satiated females in lifelong virginity. © 2015 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2015 European Society For Evolutionary Biology.

  9. A reliable algorithm for optimal control synthesis

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1992-01-01

    In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H(sup 2)-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective degenerate systems and is based on the well-known Pade series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes illustrate positively the reliability of this method when compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.

  10. Optimal screening and donor management in a public stool bank.

    PubMed

    Kazerouni, Abbas; Burgess, James; Burns, Laura J; Wein, Lawrence M

    2015-12-17

    Fecal microbiota transplantation is an effective treatment for recurrent Clostridium difficile infection and is being investigated as a treatment for other microbiota-associated diseases. To facilitate these activities, an international public stool bank has been created, which screens donors and processes stools in a standardized manner. The goal of this research is to use mathematical modeling and analysis to optimize screening and donor management at the stool bank. Compared to the current policy of screening active donors every 60 days before releasing their quarantined stools for sale, costs can be reduced by 10.3 % by increasing the screening frequency to every 36 days. In addition, the stool production rate varies widely across donors, and using donor-specific screening, where higher producers are screened more frequently, also reduces costs, as does introducing an interim (i.e., between consecutive regular tests) stool test for just rotavirus and C. difficile. We also derive a donor release (i.e., into the system) policy that allows the supply to approximately match an exponentially increasing deterministic demand. More frequent screening, interim screening for rotavirus and C. difficile, and donor-specific screening, where higher stool producers are screened more frequently, are all cost-reducing measures. If screening costs decrease in the future (e.g., as a result of bringing screening in house), a bottleneck for implementing some of these recommendations may be the reluctance of donors to undergo serum screening more frequently than monthly.
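
    The trade-off behind the 60-day versus 36-day result can be captured by a one-variable cost model: more frequent screening raises testing cost per day but shrinks the quarantined inventory lost if a donor fails a screen. The sketch below is not the paper's model; its numbers are chosen only so the toy optimum lands near the 36-day figure and are otherwise arbitrary.

    ```python
    import numpy as np

    C_SCREEN = 650.0        # assumed cost of one full donor screening, $
    STOOL_VALUE = 100.0     # assumed value of one processed stool, $
    STOOLS_PER_DAY = 1.0    # assumed donor production rate
    P_FAIL_PER_DAY = 0.01   # assumed daily hazard that a donor becomes ineligible

    def cost_per_day(T):
        """Expected daily cost as a function of screening interval T (days)."""
        screening = C_SCREEN / T
        # If the donor fails, the ~T/2 stools quarantined on average are lost.
        discard_risk = P_FAIL_PER_DAY * STOOL_VALUE * STOOLS_PER_DAY * T / 2
        return screening + discard_risk

    intervals = np.arange(10, 91)
    best = intervals[np.argmin([cost_per_day(t) for t in intervals])]
    print("cost-minimizing screening interval (days):", best)  # ~36 here
    ```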

  11. Cost Optimization and Technology Enablement COTSAT-1

    NASA Technical Reports Server (NTRS)

    Spremo, Stevan; Lindsay, Michael C.; Klupar, Peter Damian; Swank, Aaron J.

    2010-01-01

    Cost Optimized Test of Spacecraft Avionics and Technologies (COTSAT-1) is an ongoing spacecraft research and development project at NASA Ames Research Center (ARC). The space industry was a hotbed of innovation and development at its birth. Many new technologies were developed for and first demonstrated in space. In the recent past this trend has reversed, with most new technology funding and research being driven by private industry. Most of the recent advances in spaceflight hardware have come from the cell phone industry, with a lag of about 10 to 15 years from lab demonstration to in-space usage. NASA has started a project designed to address this problem. The prototype spacecraft known as Cost Optimized Test of Spacecraft Avionics and Technologies (COTSAT-1) and CheapSat work to reduce these issues. This paper highlights the approach taken by NASA Ames Research Center to achieve significant subsystem cost reductions. The COTSAT-1 research system design incorporates use of COTS (Commercial Off The Shelf), MOTS (Modified Off The Shelf), and GOTS (Government Off The Shelf) hardware for a remote sensing spacecraft. The COTSAT-1 team demonstrated building a fully functional spacecraft for $500K in parts and $2.0M in labor. The COTSAT-1 system, including a selected science payload, is described within this paper. Many of the advancements identified in the process of cost reduction can be attributed to the use of a one-atmosphere pressurized structure to house the spacecraft components. By using COTS hardware, the spacecraft program can utilize investments already made by commercial vendors. This ambitious project development philosophy/cycle has yielded the COTSAT-1 flight hardware. This paper highlights the advancements of the COTSAT-1 spacecraft leading to the delivery of the current flight hardware that is now located at NASA Ames Research Center. This paper also addresses the plans for COTSAT-2.

  12. Wing-section optimization for supersonic viscous flow

    NASA Technical Reports Server (NTRS)

    Item, Cem C.; Baysal, Oktay (Editor)

    1995-01-01

    To improve the shape of a supersonic wing, an automated method that also includes higher fidelity to the flow physics is desirable. With this impetus, an aerodynamic optimization methodology incorporating thin-layer Navier-Stokes equations and sensitivity analysis had been previously developed. Prior to embarking upon the wing design task, the present investigation concentrated on testing the feasibility of the methodology, and on the identification of adequate problem formulations, by defining two-dimensional, cost-effective test cases. Starting with two distinctly different initial airfoils, two independent shape optimizations resulted in shapes with similar features: slightly cambered, parabolic profiles with sharp leading and trailing edges. Secondly, the normal section to the subsonic portion of the leading edge, which had a high normal angle-of-attack, was considered. The optimization resulted in a shape with twist and camber which eliminated the adverse pressure gradient, hence exploiting the leading-edge thrust. The wing section shapes obtained in all the test cases had the features predicted by previous studies. Therefore, it was concluded that the flowfield analyses and sensitivity coefficients were computed and fed to the present gradient-based optimizer correctly. Also, as a result of the present two-dimensional study, suggestions were made for the problem formulations which should contribute to an effective wing shape optimization.

  13. Audit of Use and Overuse of Serum Protein Immunofixation Electrophoresis and Serum Free Light Chain Assay in Tertiary Health Care: A Case for Algorithmic Testing to Optimize Laboratory Utilization.

    PubMed

    Heaton, Christopher; Vyas, Shikhar G; Singh, Gurmukh

    2016-04-01

    Overuse of laboratory tests is a persistent issue. We examined the use and overuse of serum immunofixation electrophoresis and serum free light chain assays to develop an algorithm for optimizing utilization. A retrospective review of all tests, for investigation of monoclonal gammopathies, for all patients who had any of these tests done from April 24, 2014, through July 25, 2014, was carried out. The test orders were categorized as warranted or not warranted according to criteria presented in the article. A total of 237 patients were tested, and their historical records included 1,503 episodes of testing for one or more of serum protein electrophoresis, serum immunofixation electrophoresis, and serum free light chain assays. Only 46% of the serum immunofixation and 42% serum free light chain assays were warranted. Proper utilization, at our institution alone, would have obviated $64,182.95/year in health care costs, reduced laboratory cost of reagent alone by $26,436.04/year, and put $21,904.92/year of part B reimbursement at risk. Fewer than half of the serum immunofixation and serum free light chain assays added value. The proposed algorithm for testing should improve utilization. Risk to part B billing may be a disincentive to reducing test utilization. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. Wavelength routing beyond the standard graph coloring approach

    NASA Astrophysics Data System (ADS)

    Blankenhorn, Thomas

    2004-04-01

    When lightpaths are routed in the planning stage of transparent optical networks, the textbook approach is to use algorithms that try to minimize the overall number of wavelengths used in the network. We demonstrate that this method cannot be expected to minimize actual costs when the marginal cost of installing more wavelengths is a declining function of the number of wavelengths already installed, as is frequently the case. We further demonstrate how cost optimization can theoretically be improved with algorithms based on Prim's algorithm. Finally, we test this theory with simulations on a series of actual network topologies, which confirm the theoretical analysis.
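
    The abstract builds its routing heuristic on Prim's algorithm. For reference, the sketch below is a standard heap-based Prim minimum-spanning-tree construction on a toy topology; the paper's actual cost model, with declining marginal wavelength costs, would be layered on top of such a tree and is not reproduced here.

    ```python
    import heapq

    def prim_mst(n, edges):
        """Prim's algorithm: n nodes labeled 0..n-1, edges as (u, v, weight)."""
        adj = {u: [] for u in range(n)}
        for u, v, w in edges:
            adj[u].append((w, v))
            adj[v].append((w, u))
        visited = [False] * n
        heap = [(0.0, 0, -1)]      # (edge weight, node, parent)
        tree, total = [], 0.0
        while heap:
            w, u, parent = heapq.heappop(heap)
            if visited[u]:
                continue
            visited[u] = True
            if parent >= 0:
                tree.append((parent, u, w))
                total += w
            for nw, v in adj[u]:
                if not visited[v]:
                    heapq.heappush(heap, (nw, v, u))
        return tree, total

    # Tiny example topology; fiber-length weights are made up.
    edges = [(0, 1, 4), (0, 2, 1), (1, 2, 2), (1, 3, 5), (2, 3, 8)]
    tree, cost = prim_mst(4, edges)
    print("spanning tree:", tree, "total weight:", cost)
    ```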

  15. Data on cost-optimal Nearly Zero Energy Buildings (NZEBs) across Europe.

    PubMed

    D'Agostino, Delia; Parker, Danny

    2018-04-01

    This data article refers to the research paper "A model for the cost-optimal design of Nearly Zero Energy Buildings (NZEBs) in representative climates across Europe" [1]. The reported data deal with the design optimization of a residential building prototype located in representative European locations. The study focuses on cost-optimal choices and efficiency measures in new buildings depending on the climate. The data linked within this article relate to the modelled building energy consumption, renewable production, potential energy savings, and costs. The data allow visualization of energy consumption before and after the optimization, the selected efficiency measures, costs, and renewable production. The reduction of electricity and natural gas consumption towards the NZEB target can be visualized together with incremental and cumulative costs in each location. Further data are available about building geometry, costs, CO2 emissions, envelope, materials, lighting, appliances and systems.

  16. Cost effectiveness and projected national impact of colorectal cancer screening in France.

    PubMed

    Hassan, C; Benamouzig, R; Spada, C; Ponchon, T; Zullo, A; Saurin, J C; Costamagna, G

    2011-09-01

    Colorectal cancer (CRC) is a major cause of morbidity and mortality in France. Only scanty data on cost-effectiveness of CRC screening in Europe are available, generating uncertainty over its efficiency. Although immunochemical fecal tests (FIT) and guaiac-based fecal occult blood tests (g-FOBT) have been shown to be cost-effective in France, cost-effectiveness of endoscopic screening has not yet been addressed. Cost-effectiveness of screening strategies using colonoscopy, flexible sigmoidoscopy, second-generation colon capsule endoscopy (CCE), FIT and g-FOBT were compared using a Markov model. A 40 % adherence rate was assumed for all strategies. Colonoscopy costs included anesthesiologist assistance. Incremental cost-effectiveness ratios (ICERs) were calculated. Probabilistic and value-of-information analyses were used to estimate the expected benefit of future research. A third-payer perspective was adopted. In the reference case analysis, FIT repeated every year was the most cost-effective strategy, with an ICER of €48165 per life-year gained vs. FIT every 2 years, which was the next most cost-effective strategy. Although CCE every 5 years was as effective as FIT 1-year, it was not a cost-effective alternative. Colonoscopy repeated every 10 years was substantially more costly, and slightly less effective than FIT 1-year. When projecting the model outputs onto the French population, the least (g-FOBT 2-years) and most (FIT 1-year) effective strategies reduced the absolute number of annual CRC deaths from 16037 to 12916 and 11217, respectively, resulting in an annual additional cost of €26 million and €347 million, respectively. Probabilistic sensitivity analysis demonstrated that FIT 1-year was the optimal choice in 20% of the simulated scenarios, whereas sigmoidoscopy 5-years, colonoscopy, and FIT 2-years were the optimal choices in 40%, 26%, and 14%, respectively. A screening program based on FIT 1-year appeared to be the most cost-effective approach for CRC screening in France. However, a substantial uncertainty over this choice is still present. © Georg Thieme Verlag KG Stuttgart · New York.
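
    The ICERs reported here follow the usual incremental convention: each strategy on the efficiency frontier is compared with the next cheaper one. A minimal sketch with invented numbers (not the paper's French data; dominance pruning omitted for brevity):

    ```python
    # Strategies as (name, cost per person, life-years per person); values made up.
    strategies = [
        ("no screening",   0.0, 20.000),
        ("g-FOBT 2-year", 150.0, 20.010),
        ("FIT 2-year",    320.0, 20.018),
        ("FIT 1-year",    600.0, 20.022),
    ]

    strategies.sort(key=lambda s: s[1])            # order by increasing cost
    prev_name, prev_cost, prev_eff = strategies[0]
    for name, cost, eff in strategies[1:]:
        # ICER: extra cost per extra life-year vs. the next cheaper strategy.
        icer = (cost - prev_cost) / (eff - prev_eff)
        print(f"{name} vs {prev_name}: ICER = {icer:,.0f} per life-year gained")
        prev_name, prev_cost, prev_eff = name, cost, eff
    ```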

  17. Cost-effectiveness of oral antiplatelet agents--current and future perspectives.

    PubMed

    Arnold, Suzanne V; Cohen, David J; Magnuson, Elizabeth A

    2011-08-09

    Cardiovascular disease is both highly prevalent and exceedingly costly to treat. Several novel antiplatelet agents have been found to be effective in reducing the morbidity and mortality associated with cardiovascular disease. Understanding both the economic and the clinical implications of these novel therapies is particularly important. In this article, the results of published evaluations of the cost-effectiveness of oral antiplatelet strategies for use across a range of clinical conditions and treatment settings are reviewed. The results of these studies support the use of aspirin for primary prevention in high-risk patients and for secondary prevention in all patients with previous cardiovascular events. Although the optimal duration of dual antiplatelet therapy after an event remains uncertain, favorable cost-effectiveness estimates have been demonstrated for aspirin plus clopidogrel versus aspirin alone after a myocardial infarction or percutaneous coronary intervention. Moreover, prasugrel has been shown to be more cost-effective than clopidogrel for patients with an acute coronary syndrome and planned percutaneous coronary intervention. As novel antiplatelet agents emerge and existing agents are tested in different patient populations, the evaluation of the relative economic efficiency of these oral antiplatelet treatment strategies will continue to be instrumental to optimally inform clinical and health-policy decision-making.

  18. Colorectal Cancer: Cost-effectiveness of Colonoscopy versus CT Colonography Screening with Participation Rates and Costs.

    PubMed

    van der Meulen, Miriam P; Lansdorp-Vogelaar, Iris; Goede, S Lucas; Kuipers, Ernst J; Dekker, Evelien; Stoker, Jaap; van Ballegooijen, Marjolein

    2018-06-01

    Purpose To compare the cost-effectiveness of computed tomographic (CT) colonography and colonoscopy screening by using data on unit costs and participation rates from a randomized controlled screening trial in a dedicated screening setting. Materials and Methods Observed participation rates and screening costs from the Colonoscopy or Colonography for Screening, or COCOS, trial were used in a microsimulation model to estimate costs and quality-adjusted life-years (QALYs) gained with colonoscopy and CT colonography screening. For both tests, the authors determined optimal age range and screening interval combinations assuming a 100% participation rate. Assuming observed participation for these combinations, the cost-effectiveness of both tests was compared. Extracolonic findings were not included because long-term follow-up data are lacking. Results The participation rates for colonoscopy and CT colonography were 21.5% (1276 of 5924 invitees) and 33.6% (982 of 2920 invitees), respectively. Colonoscopy was more cost-effective in the screening strategies with one or two lifetime screenings, whereas CT colonography was more cost-effective in strategies with more lifetime screenings. CT colonography was the preferred test for willingness-to-pay-thresholds of €3200 per QALY gained and higher, which is lower than the Dutch willingness-to-pay threshold of €20 000. With equal participation, colonoscopy was the preferred test independent of willingness-to-pay thresholds. The findings were robust for most of the sensitivity analyses, except with regard to relative screening costs and subsequent participation. Conclusion Because of the higher participation rates, CT colonography screening for colorectal cancer is more cost-effective than colonoscopy screening. The implementation of CT colonography screening requires previous satisfactory resolution to the question as to how best to deal with extracolonic findings. © RSNA, 2018 Online supplemental material is available for this article.

  19. GMOseek: a user friendly tool for optimized GMO testing.

    PubMed

    Morisset, Dany; Novak, Petra Kralj; Zupanič, Darko; Gruden, Kristina; Lavrač, Nada; Žel, Jana

    2014-08-01

    With the increasing pace of new Genetically Modified Organisms (GMOs) authorized or in pipeline for commercialization worldwide, the task of the laboratories in charge to test the compliance of food, feed or seed samples with their relevant regulations became difficult and costly. Many of them have already adopted the so called "matrix approach" to rationalize the resources and efforts used to increase their efficiency within a limited budget. Most of the time, the "matrix approach" is implemented using limited information and some proprietary (if any) computational tool to efficiently use the available data. The developed GMOseek software is designed to support decision making in all the phases of routine GMO laboratory testing, including the interpretation of wet-lab results. The tool makes use of a tabulated matrix of GM events and their genetic elements, of the laboratory analysis history and the available information about the sample at hand. The tool uses an optimization approach to suggest the most suited screening assays for the given sample. The practical GMOseek user interface allows the user to customize the search for a cost-efficient combination of screening assays to be employed on a given sample. It further guides the user to select appropriate analyses to determine the presence of individual GM events in the analyzed sample, and it helps taking a final decision regarding the GMO composition in the sample. GMOseek can also be used to evaluate new, previously unused GMO screening targets and to estimate the profitability of developing new GMO screening methods. The presented freely available software tool offers the GMO testing laboratories the possibility to select combinations of assays (e.g. quantitative real-time PCR tests) needed for their task, by allowing the expert to express his/her preferences in terms of multiplexing and cost. The utility of GMOseek is exemplified by analyzing selected food, feed and seed samples from a national reference laboratory for GMO testing and by comparing its performance to existing tools which use the matrix approach. GMOseek proves superior when tested on real samples in terms of GMO coverage and cost efficiency of its screening strategies, including its capacity of simple interpretation of the testing results.
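
    The abstract does not spell out GMOseek's optimization internals; a common stand-in for matrix-approach assay selection is greedy weighted set cover, sketched below: repeatedly pick the screening assay with the lowest cost per newly covered GM event until every event of interest is detectable. The event list, event-assay matrix, and costs here are hypothetical, chosen only to make the mechanics visible.

    ```python
    # Greedy weighted set cover as a stand-in for screening-assay selection.
    events = {"GT73", "MON810", "NK603", "TC1507", "GTS-40-3-2"}

    assays = {            # screening target -> (cost, events it detects); made up
        "P-35S":    (10.0, {"MON810", "NK603", "TC1507", "GTS-40-3-2"}),
        "T-NOS":    (10.0, {"NK603", "GTS-40-3-2", "GT73"}),
        "CTP2-CP4": (15.0, {"GT73", "NK603", "GTS-40-3-2"}),
        "pat":      (12.0, {"TC1507"}),
    }

    uncovered, chosen, total = set(events), [], 0.0
    while uncovered:
        # Pick the assay with the lowest cost per newly covered event.
        name, (cost, hits) = min(
            ((n, a) for n, a in assays.items() if a[1] & uncovered),
            key=lambda item: item[1][0] / len(item[1][1] & uncovered),
        )
        chosen.append(name)
        total += cost
        uncovered -= hits

    print("chosen screening assays:", chosen, "total cost:", total)
    ```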

  20. Incentive-compatible demand-side management for smart grids based on review strategies

    NASA Astrophysics Data System (ADS)

    Xu, Jie; van der Schaar, Mihaela

    2015-12-01

    Demand-side load management is able to significantly improve the energy efficiency of smart grids. Since the electricity production cost depends on the aggregate energy usage of multiple consumers, an important incentive problem emerges: self-interested consumers want to increase their own utilities by consuming more than the socially optimal amount of energy during peak hours since the increased cost is shared among the entire set of consumers. To incentivize self-interested consumers to take the socially optimal scheduling actions, we design a new class of protocols based on review strategies. These strategies work as follows: first, a review stage takes place in which a statistical test is performed based on the daily prices of the previous billing cycle to determine whether or not the other consumers schedule their electricity loads in a socially optimal way. If the test fails, the consumers trigger a punishment phase in which, for a certain time, they adjust their energy scheduling in such a way that everybody in the consumer set is punished due to an increased price. Using a carefully designed protocol based on such review strategies, consumers then have incentives to take the socially optimal load scheduling to avoid entering this punishment phase. We rigorously characterize the impact of deploying protocols based on review strategies on the system's as well as the users' performance and determine the optimal design (optimal billing cycle, punishment length, etc.) for various smart grid deployment scenarios. Even though this paper considers a simplified smart grid model, our analysis provides important and useful insights for designing incentive-compatible demand-side management schemes based on aggregate energy usage information in a variety of practical scenarios.
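
    The review-stage trigger can be illustrated with a one-sided test on cycle-average prices. The sketch below is a simplified rendering of the protocol's logic, not the paper's design: the benchmark price, noise level, deviation size, threshold, and phase lengths are all assumptions.

    ```python
    import numpy as np

    P_STAR = 0.20        # assumed daily price under socially optimal scheduling
    SIGMA = 0.02         # assumed day-to-day price noise used by the test
    CYCLE = 30           # billing-cycle length in days
    PUNISH_DAYS = 60     # assumed punishment-phase length
    Z_CRIT = 2.0         # one-sided trigger threshold

    def review(prices):
        """True if cycle-average prices are significantly above the benchmark."""
        z = (np.mean(prices) - P_STAR) / (SIGMA / np.sqrt(len(prices)))
        return z > Z_CRIT

    compliant = np.full(CYCLE, P_STAR)          # optimal scheduling: on target
    deviating = np.full(CYCLE, P_STAR + 0.02)   # over-consumption lifts prices

    for name, prices in [("compliant cycle", compliant),
                         ("deviating cycle", deviating)]:
        action = f"punish for {PUNISH_DAYS} days" if review(prices) \
                 else "keep cooperating"
        print(name, "->", action)
    ```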

  1. Improving spacecraft design using a multidisciplinary design optimization methodology

    NASA Astrophysics Data System (ADS)

    Mosher, Todd Jon

    2000-10-01

    Spacecraft design has gone from maximizing performance under technology constraints to minimizing cost under performance constraints. This is characteristic of the "faster, better, cheaper" movement that has emerged within NASA. Currently spacecraft are "optimized" manually through a tool-assisted evaluation of a limited set of design alternatives. With this approach there is no guarantee that a systems-level focus will be taken and "feasibility" rather than "optimality" is commonly all that is achieved. To improve spacecraft design in the "faster, better, cheaper" era, a new approach using multidisciplinary design optimization (MDO) is proposed. Using MDO methods brings structure to conceptual spacecraft design by casting a spacecraft design problem into an optimization framework. Then, through the construction of a model that captures design and cost, this approach facilitates a quicker and more straightforward option synthesis. The final step is to automatically search the design space. As computer processor speed continues to increase, enumeration of all combinations, while not elegant, is one method that is straightforward to perform. As an alternative to enumeration, genetic algorithms are used and find solutions by reviewing fewer possible solutions with some limitations. Both methods increase the likelihood of finding an optimal design, or at least the most promising area of the design space. This spacecraft design methodology using MDO is demonstrated on three examples. A retrospective test for validation is performed using the Near Earth Asteroid Rendezvous (NEAR) spacecraft design. For the second example, the premise that aerobraking was needed to minimize mission cost and was mission enabling for the Mars Global Surveyor (MGS) mission is challenged. While one might expect no feasible design space for an MGS without aerobraking mission, a counterintuitive result is discovered. Several design options that don't use aerobraking are feasible and cost effective. The third example is an original commercial lunar mission entitled Eagle-eye. This example shows how an MDO approach is applied to an original mission with a larger feasible design space. It also incorporates a simplified business case analysis.

  2. A Module Experimental Process System Development Unit (MEPSDU)

    NASA Technical Reports Server (NTRS)

    1981-01-01

    A cost-effective process sequence and machinery for the production of flat-plate photovoltaic modules are described. Cells were fabricated using the process sequence, which was optimized, as was a lamination procedure. Insulator tapes and edge-seal material were identified and tested. Encapsulation materials were evaluated.

  3. Impact of Airspace Charges on Transatlantic Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Ng, Hok K.; Linke, Florian; Chen, Neil Y.

    2015-01-01

    Aircraft flying over the airspace of different countries are subject to over-flight charges. These charges vary from country to country. Airspace charges, while necessary to support the communication, navigation and surveillance services, may lead to aircraft flying routes longer than wind-optimal routes and produce additional carbon dioxide and other gaseous emissions. This paper develops an optimal route between city pairs by modifying the cost function to include an airspace cost whenever an aircraft flies through a controlled airspace without landing or departing from that airspace. It is assumed that the aircraft will fly the trajectory at a constant cruise altitude and constant speed. The computationally efficient optimal trajectory is derived by solving a non-linear optimal control problem. The operational strategies investigated in this study for minimizing aircraft fuel burn and emissions include flying fuel-optimal routes and flying cost-optimal routes that may completely or partially reduce airspace charges en route. The results in this paper use traffic data for transatlantic flights during July 2012. The mean daily savings in over-flight charges, fuel cost and total operation cost during the period are 17.6 percent, 1.6 percent, and 2.4 percent respectively, along the cost-optimal trajectories. The transatlantic flights can potentially save $600,000 in fuel cost plus $360,000 in over-flight charges daily by flying the cost-optimal trajectories. In addition, the aircraft emissions can be potentially reduced by 2,070 metric tons each day. The airport pairs and airspace regions that have the highest potential impacts due to airspace charges are identified for possible reduction of fuel burn and aircraft emissions for the transatlantic flights. The results in the paper show that the impact of the variation in fuel price on the optimal routes is to reduce the difference between wind-optimal and cost-optimal routes as the fuel price increases. The additional fuel consumption is quantified using the 30 percent variation in fuel prices during March 2014 to March 2015.

  4. Electric Propulsion System Selection Process for Interplanetary Missions

    NASA Technical Reports Server (NTRS)

    Landau, Damon; Chase, James; Kowalkowski, Theresa; Oh, David; Randolph, Thomas; Sims, Jon; Timmerman, Paul

    2008-01-01

    The disparate design problems of selecting an electric propulsion system, launch vehicle, and flight time all have a significant impact on the cost and robustness of a mission. The effects of these system choices combine into a single optimization of the total mission cost, where the design constraint is a required spacecraft neutral (non-electric propulsion) mass. Cost-optimal systems are designed for a range of mass margins to examine how the optimal design varies with mass growth. The resulting cost-optimal designs are compared with results generated via mass optimization methods. Additional optimizations with continuous system parameters address the impact on mission cost due to discrete sets of launch vehicle, power, and specific impulse. The examined mission set comprises a near-Earth asteroid sample return, multiple main belt asteroid rendezvous, comet rendezvous, comet sample return, and a mission to Saturn.

  5. Solution for a bipartite Euclidean traveling-salesman problem in one dimension

    NASA Astrophysics Data System (ADS)

    Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M.

    2018-05-01

    The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.
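
    Two facts from the abstract are easy to check numerically for the squared Euclidean cost: the optimal assignment in one dimension matches the two point sets in sorted order, and any feasible cycle costs at least twice the optimal assignment, with the gap closing as the number of points grows. A sketch (the interleaved cycle below is one feasible cycle, not necessarily the optimal one):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 1000
    red = np.sort(rng.uniform(0, 1, n))    # one color class on [0, 1]
    blue = np.sort(rng.uniform(0, 1, n))   # the other color class

    def c(x, y):
        return (x - y) ** 2  # convex, increasing cost in the distance

    # Optimal assignment for convex increasing 1-D costs: match in sorted order.
    assignment_cost = np.sum(c(red, blue))

    # Feasible alternating cycle: r0 b0 r1 b1 ... r(n-1) b(n-1) back to r0.
    cycle_cost = (np.sum(c(red, blue))
                  + np.sum(c(blue[:-1], red[1:]))
                  + c(blue[-1], red[0]))

    print("assignment cost:", assignment_cost)
    print("alternating-cycle cost:", cycle_cost)
    print("cycle >= 2 x assignment:", cycle_cost >= 2 * assignment_cost)
    print("ratio (approaches 2 for large n):", cycle_cost / assignment_cost)
    ```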

  6. Solution for a bipartite Euclidean traveling-salesman problem in one dimension.

    PubMed

    Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M

    2018-05-01

    The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.

  7. Joint optimization of regional water-power systems

    NASA Astrophysics Data System (ADS)

    Pereira-Cardenal, Silvio J.; Mo, Birger; Gjelsvik, Anders; Riegels, Niels D.; Arnbjerg-Nielsen, Karsten; Bauer-Gottwein, Peter

    2016-06-01

    Energy and water resources systems are tightly coupled; energy is needed to deliver water and water is needed to extract or produce energy. Growing pressure on these resources has raised concerns about their long-term management and highlights the need to develop integrated solutions. A method for joint optimization of water and electric power systems was developed in order to identify methodologies to assess the broader interactions between water and energy systems. The proposed method is to include water users and power producers into an economic optimization problem that minimizes the cost of power production and maximizes the benefits of water allocation, subject to constraints from the power and hydrological systems. The method was tested on the Iberian Peninsula using simplified models of the seven major river basins and the power market. The optimization problem was successfully solved using stochastic dual dynamic programming. The results showed that current water allocation to hydropower producers in basins with high irrigation productivity, and to irrigation users in basins with high hydropower productivity was sub-optimal. Optimal allocation was achieved by managing reservoirs in very distinct ways, according to the local inflow, storage capacity, hydropower productivity, and irrigation demand and productivity. This highlights the importance of appropriately representing the water users' spatial distribution and marginal benefits and costs when allocating water resources optimally. The method can handle further spatial disaggregation and can be extended to include other aspects of the water-energy nexus.

  8. Optimal Path Determination for Flying Vehicle to Search an Object

    NASA Astrophysics Data System (ADS)

    Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.

    2018-01-01

    In this paper, a method to determine an optimal path for a flying vehicle searching for an object is proposed. The background of the paper is the control of an air vehicle searching for an object. Optimal path determination is one of the most popular problems in optimization. The paper describes a control design model for a flying vehicle searching for an object, and focuses on the optimal path used in the search. An optimal control model is used to make the vehicle move along an optimal path; if the vehicle moves along an optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements of optimal control design; in this paper, the cost functional makes the air vehicle reach the object as quickly as possible. The axis reference of the flying vehicle uses the N-E-D (North-East-Down) coordinate system. The results of this paper are theorems, proved analytically, stating that the cost functional makes the control optimal and makes the vehicle move along an optimal path. The paper also shows that the cost functional used is convex; convexity of the cost functional guarantees the existence of an optimal control. Simulations illustrating an optimal path for a flying vehicle searching for an object are also presented. The optimization method used to find the optimal control and the optimal vehicle path is the Pontryagin Minimum Principle.

  9. Low-cost solar array structure development

    NASA Astrophysics Data System (ADS)

    Wilson, A. H.

    1981-06-01

    Early studies of flat-plate arrays have projected costs on the order of $50/square meter for installed array support structures. This report describes an optimized low-cost frame-truss structure that is estimated to cost below $25/square meter, including all markups, shipping and installation. The structure utilizes a planar frame made of members formed from light-gauge galvanized steel sheet and is supported in the field by treated-wood trusses that are partially buried in trenches. The buried trusses use the overburden soil to carry uplift wind loads and thus to obviate reinforced-concrete foundations. Details of the concept, including design rationale, fabrication and assembly experience, structural testing and fabrication drawings are included.

  10. Low-cost solar array structure development

    NASA Technical Reports Server (NTRS)

    Wilson, A. H.

    1981-01-01

    Early studies of flat-plate arrays have projected costs on the order of $50/square meter for installed array support structures. This report describes an optimized low-cost frame-truss structure that is estimated to cost below $25/square meter, including all markups, shipping and installation. The structure utilizes a planar frame made of members formed from light-gauge galvanized steel sheet and is supported in the field by treated-wood trusses that are partially buried in trenches. The buried trusses use the overburden soil to carry uplift wind loads and thus to obviate reinforced-concrete foundations. Details of the concept, including design rationale, fabrication and assembly experience, structural testing and fabrication drawings are included.

  11. Reducing the metabolic cost of walking with an ankle exoskeleton: interaction between actuation timing and power.

    PubMed

    Galle, Samuel; Malcolm, Philippe; Collins, Steven Hartley; De Clercq, Dirk

    2017-04-27

    Powered ankle-foot exoskeletons can reduce the metabolic cost of human walking to below normal levels, but optimal assistance properties remain unclear. The purpose of this study was to test the effects of different assistance timing and power characteristics in an experiment with a tethered ankle-foot exoskeleton. Ten healthy female subjects walked on a treadmill with bilateral ankle-foot exoskeletons in 10 different assistance conditions. Artificial pneumatic muscles assisted plantarflexion during ankle push-off using one of four actuation onset timings (36, 42, 48 and 54% of the stride) and three power levels (average positive exoskeleton power over a stride, summed for both legs, of 0.2, 0.4 and 0.5 W·kg⁻¹). We compared metabolic rate, kinematics and electromyography (EMG) between conditions. Optimal assistance was achieved with an onset of 42% stride and average power of 0.4 W·kg⁻¹, leading to 21% reduction in metabolic cost compared to walking with the exoskeleton deactivated and 12% reduction compared to normal walking without the exoskeleton. With suboptimal timing or power, the exoskeleton still reduced metabolic cost, but substantially less so. The relationship between timing, power and metabolic rate was well-characterized by a two-dimensional quadratic function. The assistive mechanisms leading to these improvements included reducing muscular activity in the ankle plantarflexors and assisting leg swing initiation. These results emphasize the importance of optimizing exoskeleton actuation properties when assisting or augmenting human locomotion. Our optimal assistance onset timing and average power levels could be used for other exoskeletons to improve assistance and resulting benefits.
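
    The two-dimensional quadratic relationship reported here is straightforward to fit by least squares. The sketch below generates synthetic condition means from an assumed quadratic with its optimum at 42% stride and 0.4 W·kg⁻¹ (stand-ins, not the study's data) and recovers the optimum by setting both partial derivatives of the fitted surface to zero.

    ```python
    import numpy as np

    # Condition grid matching the study's onset timings and power levels.
    t_grid, p_grid = np.meshgrid([36, 42, 48, 54], [0.2, 0.4, 0.5])
    t, p = t_grid.ravel(), p_grid.ravel()

    # Synthetic metabolic rates from an assumed quadratic (illustrative only).
    met = 2.5 + 0.002 * (t - 42) ** 2 + 2.0 * (p - 0.4) ** 2

    # Design matrix for f(t, p) = a + b t + c p + d t^2 + e p^2 + f t p.
    X = np.column_stack([np.ones_like(t), t, p, t**2, p**2, t * p])
    coef, *_ = np.linalg.lstsq(X, met, rcond=None)
    a, b, c, d, e, f = coef

    # The surface's stationary point (its minimum, if the fit is convex)
    # solves the 2x2 linear system from zeroing both partial derivatives.
    t_opt, p_opt = np.linalg.solve([[2 * d, f], [f, 2 * e]], [-b, -c])
    print("fitted optimum: onset %.1f%% stride, power %.2f W/kg" % (t_opt, p_opt))
    ```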

  12. Lockheed L-1011 Test Station on-board in support of the Adaptive Performance Optimization flight res

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This console and its complement of computers, monitors and communications equipment make up the Research Engineering Test Station, the nerve center for a new aerodynamics experiment being conducted by NASA's Dryden Flight Research Center, Edwards, California. The equipment is installed on a modified Lockheed L-1011 Tristar jetliner operated by Orbital Sciences Corp., of Dulles, Va., for Dryden's Adaptive Performance Optimization project. The experiment seeks to improve the efficiency of long-range jetliners by using small movements of the ailerons to improve the aerodynamics of the wing at cruise conditions. About a dozen research flights in the Adaptive Performance Optimization project are planned over the next two to three years. Improving the aerodynamic efficiency should result in equivalent reductions in fuel usage and costs for airlines operating large, wide-bodied jetliners.

  13. An optimized 13C-urea breath test for the diagnosis of H pylori infection

    PubMed Central

    Campuzano-Maya, Germán

    2007-01-01

    AIM: To validate an optimized 13C-urea breath test (13C-UBT) protocol for the diagnosis of H pylori infection that is cost-efficient and maintains excellent diagnostic accuracy. METHODS: 70 healthy volunteers were tested with two simplified 13C-UBT protocols, with test meal (Protocol 2) and without test meal (Protocol 1). Breath samples were collected at 10, 20 and 30 min after ingestion of 50 mg 13C-urea dissolved in 10 mL of water, taken as a single swallow, followed by 200 mL of water (pH 6.0) and a circular motion around the waistline to homogenize the urea solution. Performance of both protocols was analyzed at various cut-off values. Results were validated against the European protocol. RESULTS: According to the reference protocol, 65.7% individuals were positive for H pylori infection and 34.3% were negative. There were no significant differences in the ability of both protocols to correctly identify positive and negative H pylori individuals. However, only Protocol 1 with no test meal achieved accuracy, sensitivity, specificity, positive and negative predictive values of 100%. The highest values achieved by Protocol 2 were 98.57%, 97.83%, 100%, 100% and 100%, respectively. CONCLUSION: A 10 min, 50 mg 13C-UBT with no test meal using a cut-off value of 2-2.5 is a highly accurate test for the diagnosis of H pylori infection at a reduced cost. PMID:17907288

  14. Present and future of cervical cancer prevention in Spain: a cost-effectiveness analysis.

    PubMed

    Georgalis, Leonidas; de Sanjosé, Silvia; Esnaola, Mikel; Bosch, F Xavier; Diaz, Mireia

    2016-09-01

    Human papillomavirus (HPV) vaccination within a nonorganized setting creates a poor cost-effectiveness scenario. However, framed within an organized screening including primary HPV DNA testing with lengthening intervals may provide the best health value for invested money. To compare the effectiveness and cost-effectiveness of different cervical cancer (CC) prevention strategies, including current status and new proposed screening practices, to inform health decision-makers in Spain, a Markov model was developed to simulate the natural history of HPV and CC. Outcomes included cases averted, life expectancy, reduction in the lifetime risk of CC, life years saved, quality-adjusted life years (QALYs), net health benefits, lifetime costs, and incremental cost-effectiveness ratios. The willingness-to-pay threshold is defined at 20 000 €/QALY. Both costs and health outcomes were discounted at an annual rate of 3%. A strategy of 5-year organized HPV testing has similar effectiveness, but higher efficiency than 3-year cytology. Screening alone and vaccination combined with cytology are dominated by vaccination followed by 5-year HPV testing with cytology triage (12 214 €/QALY). The optimal age for both ending screening and switching age from cytology to HPV testing in older women is 5 years later for unvaccinated than for vaccinated women. Net health benefits decrease faster with diminishing vaccination coverage than screening coverage. Primary HPV DNA testing is more effective and cost-effective than current cytological screening. Vaccination uptake improvements and a gradual change toward an organized screening practice are critical components for achieving higher effectiveness and efficiency in the prevention of CC in Spain.

  15. Timing of prophylactic surgery in prevention of diverticulitis recurrence: a cost-effectiveness analysis.

    PubMed

    Richards, Robert J; Hammitt, James K

    2002-09-01

    Although surgery is recommended after two or more attacks of uncomplicated diverticulitis, the optimal timing for surgery in terms of cost-effectiveness is unknown. A Markov model was used to compare the costs and outcomes of performing surgery after one, two, or three uncomplicated attacks in 60-year-old hypothetical cohorts. Transition state probabilities were assigned values using published data and expert opinion. Costs were estimated from Medicare reimbursement rates. Surgery after the third attack is cost saving, yielding more years of life and quality adjusted life years at a lower cost than the other two strategies. The results were not sensitive to many of the variables tested in the model or to changes made in the discount rate (0-5%). In conclusion, performing prophylactic resection after the third attack of diverticulitis is cost saving in comparison to resection performed after the first or second attacks and remains cost-effective during sensitivity analysis.
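
    A Markov cost-effectiveness model of the kind used here advances a cohort through health states year by year while accruing discounted costs. The sketch below is a generic cohort model; its states, transition probabilities, costs, and discount rate are invented, not the paper's calibrated values.

    ```python
    import numpy as np

    states = ["well", "diverticulitis attack", "post-surgery", "dead"]
    P = np.array([                 # yearly transition probabilities (rows sum to 1)
        [0.93, 0.05, 0.00, 0.02],
        [0.60, 0.10, 0.27, 0.03],
        [0.00, 0.00, 0.97, 0.03],
        [0.00, 0.00, 0.00, 1.00],
    ])
    yearly_cost = np.array([100.0, 8000.0, 300.0, 0.0])  # $ per person-year
    discount = 0.03

    cohort = np.array([1.0, 0.0, 0.0, 0.0])   # everyone starts well at age 60
    total_cost, life_years = 0.0, 0.0
    for year in range(30):
        total_cost += (cohort @ yearly_cost) / (1 + discount) ** year
        life_years += cohort[:3].sum()        # fraction alive this year
        cohort = cohort @ P                   # advance the cohort one year

    print(f"discounted 30-year cost per person: ${total_cost:,.0f}")
    print(f"undiscounted life-years per person: {life_years:.1f}")
    ```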

  16. Costs and Cost-Effectiveness of Plasmodium vivax Control.

    PubMed

    White, Michael T; Yeung, Shunmay; Patouillard, Edith; Cibulskis, Richard

    2016-12-28

    The continued success of efforts to reduce the global malaria burden will require sustained funding for interventions specifically targeting Plasmodium vivax. The optimal use of limited financial resources necessitates cost and cost-effectiveness analyses of strategies for diagnosing and treating P. vivax and vector control tools. Herein, we review the existing published evidence on the costs and cost-effectiveness of interventions for controlling P. vivax, identifying nine studies focused on diagnosis and treatment and seven studies focused on vector control. Although many of the results from the much more extensive P. falciparum literature can be applied to P. vivax, it is not always possible to extrapolate results from P. falciparum-specific cost-effectiveness analyses. Notably, there is a need for additional studies to evaluate the potential cost-effectiveness of radical cure with primaquine for the prevention of P. vivax relapses with glucose-6-phosphate dehydrogenase testing. © The American Society of Tropical Medicine and Hygiene.

  17. Costs and Cost-Effectiveness of Plasmodium vivax Control

    PubMed Central

    White, Michael T.; Yeung, Shunmay; Patouillard, Edith; Cibulskis, Richard

    2016-01-01

    The continued success of efforts to reduce the global malaria burden will require sustained funding for interventions specifically targeting Plasmodium vivax. The optimal use of limited financial resources necessitates cost and cost-effectiveness analyses of strategies for diagnosing and treating P. vivax and vector control tools. Herein, we review the existing published evidence on the costs and cost-effectiveness of interventions for controlling P. vivax, identifying nine studies focused on diagnosis and treatment and seven studies focused on vector control. Although many of the results from the much more extensive P. falciparum literature can be applied to P. vivax, it is not always possible to extrapolate results from P. falciparum–specific cost-effectiveness analyses. Notably, there is a need for additional studies to evaluate the potential cost-effectiveness of radical cure with primaquine for the prevention of P. vivax relapses with glucose-6-phosphate dehydrogenase testing. PMID:28025283

  18. The incremental costs of recommended therapy versus real world therapy in type 2 diabetes patients

    PubMed Central

    Crivera, C.; Suh, D. C.; Huang, E. S.; Cagliero, E.; Grant, R. W.; Vo, L.; Shin, H. C.; Meigs, J. B.

    2008-01-01

    Background The goals of diabetes management have evolved over the past decade to become the attainment of near-normal glucose and cardiovascular risk factor levels. Improved metabolic control is achieved through optimized medication regimens, but the costs specifically associated with such optimization have not been examined. Objective To estimate the incremental medication cost of providing optimal therapy to reach recommended goals versus actual therapy in patients with type 2 diabetes. Methods We randomly selected the charts of 601 type 2 diabetes patients receiving care from the outpatient clinics of Massachusetts General Hospital between March 1, 1996 and August 31, 1997, and abstracted clinical and medication data. We applied treatment algorithms based on 2004 clinical practice guidelines for hyperglycemia, hyperlipidemia, and hypertension to patients' current medication therapy to determine how current medication regimens could be improved to attain recommended treatment goals. Four clinicians and three pharmacists independently applied the algorithms and reached consensus on recommended therapies. Mean incremental medication costs, the cost differences between current and recommended therapies, per patient (expressed in 2004 dollars) were calculated with 95% bootstrap confidence intervals (CIs). Results Mean patient age was 65 years, mean duration of diabetes was 7.7 years, 32% had ideal glucose control, 25% had ideal systolic blood pressure, and 24% had ideal low-density lipoprotein cholesterol. Care for these diabetes patients was similar to that observed in recent national studies. If the treatment algorithm recommendations were applied, the average annual medication cost per patient would increase from $1525 to $2164. Annual incremental costs per patient increased by $168 (95% CI $133-$206) for antihyperglycemic medications, $75 ($57-$93) for antihypertensive medications, $392 ($354-$434) for antihyperlipidemic medications, and $3 ($3-$4) for aspirin prophylaxis. The yearly incremental cost of recommended laboratory testing ranged from $77 to $189 per patient. Limitations Although the baseline data come from the clinics of a single academic institution and were collected in 1997, the care of these diabetes patients was remarkably similar to care recently observed nationally. In addition, the data are dependent on the medical record and may not accurately reflect patients' actual experiences. Conclusion The average yearly incremental cost of optimizing drug regimens to achieve recommended treatment goals for type 2 diabetes was approximately $600 per patient. These results provide valuable input for assessing the cost-effectiveness of improving comprehensive diabetes care. PMID:17076990
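    The 95% bootstrap confidence intervals reported for the mean incremental costs can be reproduced with a standard percentile bootstrap. A minimal sketch on synthetic data (the cost distribution below is invented; only the method mirrors the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci_mean(x, n_boot=10_000, alpha=0.05):
    """Percentile-bootstrap confidence interval for a sample mean."""
    means = np.array([rng.choice(x, size=len(x), replace=True).mean()
                      for _ in range(n_boot)])
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Synthetic per-patient incremental antihyperglycemic costs (mean ~ $168)
costs = rng.gamma(shape=2.0, scale=84.0, size=601)
print(bootstrap_ci_mean(costs))
```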

  19. Investigation of a L1-optimized choke ring ground plane for a low-cost GPS receiver-system

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Schwieger, Volker

    2018-01-01

    Besides geodetic dual-frequency GNSS receiver-systems (receiver and antenna), there are also low-cost single-frequency GPS receiver-systems. The multipath effect is a limiting factor on accuracy for both geodetic dual-frequency and low-cost single-frequency GPS receivers, and for short baselines (typical of monitoring applications in engineering geodesy) it is the dominating error. The accuracy and reliability of GPS measurements for monitoring can therefore be improved by reducing the multipath signal. In this paper, a self-constructed L1-optimized choke ring ground plane (CR-GP) is applied to reduce the multipath signal; its design is described and its performance investigated. The results show that the introduced low-cost single-frequency GPS receiver-system, which combines a Ublox LEA-6T single-frequency GPS receiver and a Trimble Bullet III antenna with the self-constructed L1-optimized CR-GP, can reach standard deviations of 3 mm in east, 5 mm in north and 9 mm in height in a test field containing many reflectors. This accuracy is comparable with that of a geodetic dual-frequency GNSS receiver-system. The improvement in the standard deviation of the measurements using the CR-GP is about 50% compared with the same antenna without shielding, and about 35% compared with a flat ground plane.

  20. Cost analysis of an electricity supply chain using modification of price based dynamic economic dispatch in wheeling transaction scheme

    NASA Astrophysics Data System (ADS)

    Wahyuda; Santosa, Budi; Rusdiansyah, Ahmad

    2018-04-01

    Deregulation of the electricity market requires coordination between parties to synchronize optimization on the production side (power stations) and the transport side (transmission). The electricity supply chain presented in this article is designed to facilitate this coordination. Generally, the production side is optimized with a price based dynamic economic dispatch (PBDED) model, while the transmission side is optimized with a multi-echelon distribution model, and the two optimizations are done separately. This article proposes a joint model of PBDED and multi-echelon distribution for the combined optimization of production and transmission. This combined optimization is important because changes in electricity demand on the customer side cause changes on the production side that in turn alter the transmission path. Transmission gives rise to two cost components: first, the cost of losses; second, the cost of using the transmission network (wheeling transaction). Costs due to losses are calculated from ohmic losses, while the cost of using transmission lines is calculated with the MW-mile method. As a result, the method is able to provide best-allocation analysis for electricity transactions, as well as emission levels in power generation and cost analysis. For the calculation of transmission costs, the reverse MW-mile method produces a lower cost than the absolute MW-mile method.
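    The MW-mile family of methods charges a transaction in proportion to the megawatt flow it imposes on each line times that line's length (or cost per MW-mile); in the reverse variant, counter-flows that relieve a line earn a credit rather than a charge, which is why it comes out cheaper. A hedged sketch of the two variants as commonly described in the transmission-pricing literature (the paper's exact formulation may differ):

```python
# Each line: (flow due to the transaction in MW, length in miles, $/MW-mile)
lines = [(+30.0, 100.0, 2.0),   # transaction adds 30 MW of flow
         (-10.0,  50.0, 2.0),   # counter-flow: the transaction relieves this line
         (+5.0,  200.0, 2.0)]

def absolute_mw_mile(lines):
    """Charge on the magnitude of each flow; counter-flows pay too."""
    return sum(abs(f) * length * c for f, length, c in lines)

def reverse_mw_mile(lines):
    """Counter-flows earn credit (negative charge) for relieving lines."""
    return sum(f * length * c for f, length, c in lines)

print(absolute_mw_mile(lines), reverse_mw_mile(lines))  # 9000.0 vs 7000.0
```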

  1. Quality, efficiency, and cost of a physician-assistant-protocol system for management of diabetes and hypertension.

    PubMed

    Komaroff, A L; Flatley, M; Browne, C; Sherman, H; Fineberg, S E; Knopp, R H

    1976-04-01

    Briefly trained physician assistants using protocols (clinical algorithms) for diabetes, hypertension, and related chronic arteriosclerotic and hypertensive heart disease abstracted information from the medical record and obtained history and physical examination data on every patient-visit to a city hospital chronic disease clinic over an 18-month period. The care rendered by the protocol system was compared with care rendered by a "traditional" system in the same clinic in which physicians delegated few clinical tasks. Increased thoroughness in collecting clinical data in the protocol system led to an increase in the recognition of new pathology. Outcome criteria reflected equivalent quality of care in both groups. Efficiency time-motion studies demonstrated a 20 per cent saving in physician time with the protocol system. Cost estimates, based on the time spent with patients by various providers and on laboratory-test-ordering patterns, demonstrated equivalent costs of the two systems, given optimal staffing patterns. Laboratory tests were a major element of the cost of patient care, and the clinical yield per unit cost of different tests varied widely.

  2. DEMONSTRATION AND TESTING OF AN EER OPTIMIZER SYSTEM FOR DX AIR-CONDITIONERS

    DTIC Science & Technology

    2017-10-07

    [Abstract not indexed; only fragments survive in the record: an abbreviation list (e.g., PCS Power Current Sensor, PLC Programmable Logic Controller, PSIG Pounds per Square Inch Gauge), the demonstration sites (Patrick AFB, Cape Canaveral AFS, Jonathan Dickinson Military Tracking Annex, Malabar Annex, Ramey Solar Observatory), and a BLCC life-cycle cost summary reporting annual O&M costs and net energy/O&M savings.]

  3. Information prioritization for control and automation of space operations

    NASA Technical Reports Server (NTRS)

    Ray, Asock; Joshi, Suresh M.; Whitney, Cynthia K.; Jow, Hong N.

    1987-01-01

    The applicability of a real-time information prioritization technique to the development of a decision support system for control and automation of Space Station operations is considered. The steps involved in the technique are described, including the definition of abnormal scenarios and of attributes, measures of individual attributes, formulation and optimization of a cost function, simulation of test cases on the basis of the cost function, and examination of the simulation scenarios. A list is given comparing the intrinsic importances of various Space Station information data.

  4. The price of performance: a cost and performance analysis of the implementation of cell-free fetal DNA testing for Down syndrome in Ontario, Canada.

    PubMed

    Okun, N; Teitelbaum, M; Huang, T; Dewa, C S; Hoch, J S

    2014-04-01

    To examine the cost and performance implications of introducing cell-free fetal DNA (cffDNA) testing within modeled scenarios in a publicly funded Canadian provincial Down syndrome (DS) prenatal screening program. Two clinical algorithms were created: the first to represent the current screening program and the second to represent one that incorporates cffDNA testing. From these algorithms, eight distinct scenarios were modeled to examine: (1) the current program (no cffDNA), (2) the current program with first trimester screening (FTS) as the nuchal translucency-based primary screen (no cffDNA), (3) a program substituting current screening with primary cffDNA, (4) contingent cffDNA with current FTS performance, (5) contingent cffDNA at a fixed price to result in overall cost neutrality, (6) contingent cffDNA with an improved detection rate (DR) of FTS, (7) contingent cffDNA with higher uptake of FTS, and (8) contingent cffDNA with optimized FTS (higher uptake and improved DR). This modeling study demonstrates that introducing contingent cffDNA testing improves performance by increasing the number of cases of DS detected prenatally, and reducing the number of amniocenteses performed and the concomitant iatrogenic pregnancy loss of pregnancies not affected by DS. Costs are modestly increased, although the cost per case of DS detected is decreased with contingent cffDNA testing. Contingent models of cffDNA testing can improve overall screening performance while maintaining the provision of an 11- to 13-week scan. Costs are modestly increased, but cost per prenatally detected case of DS is decreased. © 2013 John Wiley & Sons, Ltd.

  5. Optimizations of geothermal cycle shell and tube exchangers of various configurations with variable fluid properties and site specific fouling. [SIZEHX]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, W.L.; Pines, H.S.; Silvester, L.F.

    1978-03-01

    A new heat exchanger program, SIZEHX, is described. This program allows single step multiparameter cost optimizations on single phase or supercritical exchanger arrays with variable properties and arbitrary fouling for a multitude of matrix configurations and fluids. SIZEHX uses a simplified form of Tinker's method for characterization of shell side performance; the Starling modified BWR equation for thermodynamic properties of hydrocarbons; and transport properties developed by NBS. Results of four parameter cost optimizations on exchangers for specific geothermal applications are included. The relative mix of capital cost, pumping cost, and brine cost ($/Btu) is determined for geothermal exchangers, illustrating the invariant nature of the optimal cost distribution for fixed unit costs.

  6. Analytic energy gradient of projected Hartree-Fock within projection after variation

    NASA Astrophysics Data System (ADS)

    Uejima, Motoyuki; Ten-no, Seiichiro

    2017-03-01

    We develop a geometrical optimization technique for the projection-after-variation (PAV) scheme of the recently refined projected Hartree-Fock (PHF) as a fast alternative to the variation-after-projection (VAP) approach for optimizing the structures of molecules/clusters in symmetry-adapted electronic states at mean-field computational cost. PHF handles the nondynamic correlation effects by restoring the symmetry of a broken-symmetry single-reference wavefunction and moreover enables a black-box treatment of orbital selections. Using HF orbitals instead of PHF orbitals, our approach saves the computational cost of the orbital optimization, avoiding the convergence problem that sometimes emerges in the VAP scheme. We show that PAV-PHF provides geometries comparable to those of the complete active space self-consistent field and VAP-PHF for the tested systems, namely, CH2, O3, and the [Cu2O2]2+ core, where nondynamic correlation is abundant. The proposed approach is useful for large systems mainly dominated by nondynamic correlation to find stable structures in many symmetry-adapted states.

  7. High-throughput single nucleotide polymorphism genotyping for breeding applications in rice using the BeadXpress platform

    USDA-ARS?s Scientific Manuscript database

    Multiplexed single nucleotide polymorphism (SNP) markers have the potential to increase the speed and cost-effectiveness of genotyping, provided that an optimal SNP density is used for each application. To test the efficiency of multiplexed SNP genotyping for diversity, mapping and breeding applicat...

  8. High-throughput SNP genotyping for breeding applications in rice using the BeadXpress platform

    USDA-ARS?s Scientific Manuscript database

    Multiplexed single nucleotide polymorphism (SNP) markers have the potential to increase the speed and cost-effectiveness of genotyping, provided that an optimal SNP density is used for each application. To test the efficiency of multiplexed SNP genotyping for diversity, mapping and breeding applicat...

  9. Optically Based Rapid Screening Method for Proven Optimal Treatment Strategies Before Treatment Begins

    DTIC Science & Technology

    [Excerpt fragments only:] ...to rapidly test/screen breast cancer therapeutics as a strategy to streamline drug development and provide individualized treatment. ... The system can therefore be used to streamline pre-clinical drug development by reducing the number of animals, cost, and time required to screen new drugs.

  10. USING WASTE TO CLEAN UP THE ENVIRONMENT: CELLULOSIC ETHANOL, THE FUTURE OF FUELS

    EPA Science Inventory

    In the process of converting municipal solid waste (MSW) into ethanol we optimized the first two major steps of pretreatment and enzymatic hydrolysis stages to enhance the sugar yield and to reduce the cost. For the pretreatment process, we tested different parameters of react...

  11. A Multi-Year Plan for Research, Development, and Prototype Testing of Standard Modular Hydropower Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Brennan T.; Welch, Tim; Witt, Adam M.

    The Multi-Year Plan for Research, Development, and Prototype Testing of Standard Modular Hydropower Technology (MYRP) presents a strategy for specifying, designing, testing, and demonstrating the efficacy of standard modular hydropower (SMH) as an environmentally compatible and cost-optimized renewable electricity generation technology. The MYRP provides the context, background, and vision for testing the SMH hypothesis: if standardization, modularity, and preservation of stream functionality become essential and fully realized features of hydropower technology, project design, and regulatory processes, they will enable previously unrealized levels of new project development with increased acceptance, reduced costs, increased predictability of outcomes, and increased value to stakeholders. To achieve success in this effort, the MYRP outlines a framework of stakeholder-validated criteria, models, design tools, testing facilities, and assessment protocols that will facilitate the development of next-generation hydropower technologies.

  12. Development of a multiobjective optimization tool for the selection and placement of best management practices for nonpoint source pollution control

    NASA Astrophysics Data System (ADS)

    Maringanti, Chetan; Chaubey, Indrajeet; Popp, Jennie

    2009-06-01

    Best management practices (BMPs) are effective in reducing the transport of agricultural nonpoint source pollutants to receiving water bodies. However, selection of BMPs for placement in a watershed requires optimization of the available resources to obtain the maximum possible pollution reduction. In this study, an optimization methodology is developed to select and place BMPs in a watershed to provide solutions that are both economically and ecologically effective. This novel approach develops and utilizes a BMP tool, a database that stores the pollution reduction and cost information of the different BMPs under consideration. The BMP tool replaces the dynamic linkage of the distributed-parameter watershed model during optimization and therefore reduces the computation time considerably. Total pollutant load from the watershed and net cost increase from the baseline were the two objective functions minimized during the optimization process. The optimization model, consisting of a multiobjective genetic algorithm (NSGA-II) in combination with a watershed simulation tool (the Soil and Water Assessment Tool, SWAT), was developed and tested for nonpoint source pollution control in the L'Anguille River watershed located in eastern Arkansas. The optimized solutions provided a trade-off between the two objective functions for sediment, phosphorus, and nitrogen reduction. The results indicated that buffer strips were very effective in controlling the nonpoint source pollutants from leaving the croplands. The optimized BMP plans resulted in potential reductions of 33%, 32%, and 13% in sediment, phosphorus, and nitrogen loads, respectively, from the watershed.
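    The trade-off curves reported here are Pareto fronts over the two minimized objectives; extracting the non-dominated set from a pool of candidate BMP plans is the core filtering step inside NSGA-II. A minimal sketch with invented (load, cost) pairs:

```python
def pareto_front(solutions):
    """Return the non-dominated subset for two minimized objectives.

    Each solution is a (pollutant_load, net_cost) tuple; s is dominated if
    some other solution is no worse in both objectives and differs from s.
    """
    return [s for s in solutions
            if not any(t[0] <= s[0] and t[1] <= s[1] and t != s
                       for t in solutions)]

plans = [(120.0, 5000.0), (100.0, 7000.0), (130.0, 6500.0), (90.0, 12000.0)]
print(pareto_front(plans))  # (130.0, 6500.0) is dominated and drops out
```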

  13. Optimal Combinations of Diagnostic Tests Based on AUC.

    PubMed

    Huang, Xin; Qin, Gengsheng; Fang, Yixin

    2011-06-01

    When several diagnostic tests are available, one can combine them to achieve better diagnostic accuracy. This article considers the optimal linear combination that maximizes the area under the receiver operating characteristic curve (AUC); the estimates of the combination's coefficients can be obtained via a nonparametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimation by re-substitution is too optimistic. To adjust for the upward bias, several methods are proposed. Among them the cross-validation approach is especially advocated, and an approximated cross-validation is developed to reduce the computational cost. Furthermore, these proposed methods can be applied for variable selection to select important diagnostic tests. The proposed methods are examined through simulation studies and applications to three real examples. © 2010, The International Biometric Society.
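    The AUC of a combined score can be computed nonparametrically as the Mann-Whitney statistic, and the combination coefficients searched to maximize it. A small sketch in which a coarse grid search stands in for the paper's estimation procedure (data and coefficients are illustrative; the scale is fixed by setting the first coefficient to 1):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Nonparametric AUC: P(diseased score > healthy score), ties count half."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

rng = np.random.default_rng(1)
# Two hypothetical diagnostic tests; diseased subjects score higher on average.
x_pos = rng.normal([1.0, 0.5], 1.0, size=(50, 2))
x_neg = rng.normal([0.0, 0.0], 1.0, size=(50, 2))

best = max((auc(x_pos @ np.array([1.0, b]), x_neg @ np.array([1.0, b])), b)
           for b in np.linspace(-3.0, 3.0, 121))
print(f"max apparent AUC {best[0]:.3f} at coefficient ratio {best[1]:.2f}")
```

    As the abstract stresses, this maximized re-substitution AUC is optimistically biased and should be re-estimated, for example by cross-validation.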

  14. A method for the dynamic management of genetic variability in dairy cattle

    PubMed Central

    Colleau, Jean-Jacques; Moureaux, Sophie; Briend, Michèle; Bechu, Jérôme

    2004-01-01

    According to the general approach developed in this paper, dynamic management of genetic variability in selected populations of dairy cattle is carried out for three simultaneous purposes: procreation of young bulls to be further progeny-tested, use of service bulls already selected and approval of recently progeny-tested bulls for use. At each step, the objective is to minimize the average pairwise relationship coefficient in the future population born from programmed matings and the existing population. As a common constraint, the average estimated breeding value of the new population, for a selection goal including many important traits, is set to a desired value. For the procreation of young bulls, breeding costs are additionally constrained. Optimization is fully analytical and directly considers matings. Corresponding algorithms are presented in detail. The efficiency of these procedures was tested on the current Norman population. Comparisons between optimized and real matings, clearly showed that optimization would have saved substantial genetic variability without reducing short-term genetic gains. PMID:15231230

  15. Predicting the Effects of Powder Feeding Rates on Particle Impact Conditions and Cold Spray Deposited Coatings

    NASA Astrophysics Data System (ADS)

    Ozdemir, Ozan C.; Widener, Christian A.; Carter, Michael J.; Johnson, Kyle W.

    2017-10-01

    As the industrial application of the cold spray technology grows, the need to optimize both the cost and the quality of the process grows with it. Parameter selection techniques available today require the use of a coupled system of equations to be solved to involve the losses due to particle loading in the gas stream. Such analyses cause a significant increase in the computational time in comparison with calculations with isentropic flow assumptions. In cold spray operations, engineers and operators may, therefore, neglect the effects of particle loading to simplify the multiparameter optimization process. In this study, two-way coupled (particle-fluid) quasi-one-dimensional fluid dynamics simulations are used to test the particle loading effects under many potential cold spray scenarios. Output of the simulations is statistically analyzed to build regression models that estimate the changes in particle impact velocity and temperature due to particle loading. This approach eases particle loading optimization for more complete analysis on deposition cost and time. The model was validated both numerically and experimentally. Further numerical analyses were completed to test the particle loading capacity and limitations of a nozzle with a commonly used throat size. Additional experimentation helped document the physical limitations to high-rate deposition.

  16. OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE - A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnis Judzis

    2003-01-01

    This document details the progress to date on the ''OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE -- A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING'' contract for the quarter starting October 2002 through December 2002. Even though we are awaiting the optimization portion of the testing program, accomplishments included the following: (1) Smith International participated in the DOE Mud Hammer program through full scale benchmarking testing during the week of 4 November 2002. (2) TerraTek acknowledges Smith International, BP America, PDVSA, and ConocoPhillips for cost-sharing the Smith benchmarking tests, allowing extension of the contract to add to the benchmarking testing program. (3) Following the benchmark testing of the Smith International hammer, representatives from DOE/NETL, TerraTek, Smith International and PDVSA met at TerraTek in Salt Lake City to review observations, performance and views on the optimization step for 2003. (4) The December 2002 issue of the Journal of Petroleum Technology (Society of Petroleum Engineers) highlighted the DOE fluid hammer testing program and reviewed last year's paper on the benchmark performance of the SDS Digger and Novatek hammers. (5) TerraTek's Sid Green presented a technical review for DOE/NETL personnel in Morgantown on ''Impact Rock Breakage'' and its importance for improving fluid hammer performance. Much discussion has taken place on the issues surrounding mud hammer performance at depth conditions.

  17. Flight plan optimization

    NASA Astrophysics Data System (ADS)

    Dharmaseelan, Anoop; Adistambha, Keyne D.

    2015-05-01

    Fuel accounts for 40 percent of the operating cost of an airline. Fuel cost can be minimized by planning flights on optimized routes, and routes can be optimized by searching for the best connections based on the cost function defined by the airline. The most common algorithm used for optimized route search is Dijkstra's. Dijkstra's algorithm produces a static result, and the time taken for the search is relatively long. This paper experiments with a new algorithm for optimized route search that combines the principles of simulated annealing and genetic algorithms. The experimental route search results presented are shown to be computationally fast and accurate compared with timings from the generic algorithm. The new algorithm is well suited to the random routing feature that is highly sought by many regional operators.
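    Dijkstra's algorithm, the baseline the proposed hybrid is compared against, finds the minimum-cost connection in a waypoint graph. A compact sketch (the graph and edge weights are invented stand-ins for an airline-defined cost function):

```python
import heapq

def dijkstra(graph, src, dst):
    """Minimum-cost path in a graph given as {node: [(neighbor, cost), ...]}."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical waypoint graph; weights combine fuel and time costs.
airways = {"DEP": [("A", 4.0), ("B", 2.5)],
           "B":   [("A", 1.0), ("ARR", 5.0)],
           "A":   [("ARR", 3.0)]}
print(dijkstra(airways, "DEP", "ARR"))  # (6.5, ['DEP', 'B', 'A', 'ARR'])
```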

  18. Optimal Sizing of Energy Storage for Community Microgrids Considering Building Thermal Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Guodong; Li, Zhi; Starke, Michael R.

    This paper proposes an optimization model for the optimal sizing of energy storage in community microgrids considering the building thermal dynamics and customer comfort preference. The proposed model minimizes the annualized cost of the community microgrid, including energy storage investment, purchased energy cost, demand charge, energy storage degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation. The decision variables are the power and energy capacity of the invested energy storage. In particular, we assume the heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently by the microgrid central controller while maintaining the indoor temperature in the comfort range set by customers. For this purpose, the detailed thermal dynamic characteristics of buildings have been integrated into the optimization model. Numerical simulation shows significant cost reduction by the proposed model. The impacts of various costs on the optimal solution are investigated by sensitivity analysis.

  19. Low-Level Space Optimization of an AES Implementation for a Bit-Serial Fully Pipelined Architecture

    NASA Astrophysics Data System (ADS)

    Weber, Raphael; Rettberg, Achim

    A previously developed AES (Advanced Encryption Standard) implementation is optimized and described in this paper. The special architecture for which this implementation is targeted comprises synchronous and systematic bit-serial processing without a central controlling instance. In order to shrink the design in terms of logic utilization, we deeply analyzed the architecture and the AES implementation to identify the most costly logic elements. We propose merging certain parts of the logic to achieve better area efficiency. The approach was integrated into an existing synthesis tool, which we used to produce synthesizable VHDL code. For testing purposes, we simulated the generated VHDL code and ran tests on an FPGA board.

  20. Vortex generator design for aircraft inlet distortion as a numerical optimization problem

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Levy, Ralph

    1991-01-01

    Aerodynamic compatibility of aircraft/inlet/engine systems is a difficult design problem for aircraft that must operate in many different flight regimes. Takeoff, subsonic cruise, supersonic cruise, transonic maneuvering, and high altitude loiter each place different constraints on inlet design. Vortex generators, small wing-like sections mounted on the inside surfaces of the inlet duct, are used to control flow separation and engine face distortion. The design of vortex generator installations in an inlet is defined as a problem addressable by numerical optimization techniques. A performance parameter is suggested to account for both inlet distortion and total pressure loss at a series of design flight conditions. The resulting optimization problem is difficult since some of the design parameters take on integer values. If numerical procedures could be used to reduce multimillion dollar development test programs to a small set of verification tests, numerical optimization could have a significant impact on both the cost and the elapsed time needed to design new aircraft.

  1. Many-to-Many Multicast Routing Schemes under a Fixed Topology

    PubMed Central

    Ding, Wei; Wang, Hongfa; Wei, Xuerui

    2013-01-01

    Many-to-many multicast routing can be extensively applied in computer or communication networks supporting various continuous multimedia applications. This paper focuses on the case where all users share a common communication channel while each user is both a sender and a receiver of messages in multicasting as well as an end user. In this case, the multicast tree appears as a terminal Steiner tree (TeST). The problem of finding a TeST with a quality-of-service (QoS) optimization is frequently NP-hard. However, we discover that it is a good idea to find a many-to-many multicast tree with QoS optimization under a fixed topology. In this paper, we are concerned with three kinds of QoS optimization objectives for the multicast tree, namely, minimum cost, minimum diameter, and maximum reliability. Each of the three optimization problems comes in two versions, centralized and decentralized. This paper uses the dynamic programming method to devise an exact algorithm for the centralized and decentralized versions of each optimization problem. PMID:23589706

  2. Discrete-time Markovian-jump linear quadratic optimal control

    NASA Technical Reports Server (NTRS)

    Chizeck, H. J.; Willsky, A. S.; Castanon, D.

    1986-01-01

    This paper is concerned with the optimal control of discrete-time linear systems that possess randomly jumping parameters described by finite-state Markov processes. For problems having quadratic costs and perfect observations, the optimal control laws and expected costs-to-go can be precomputed from a set of coupled Riccati-like matrix difference equations. Necessary and sufficient conditions are derived for the existence of optimal constant control laws which stabilize the controlled system as the time horizon becomes infinite, with finite optimal expected cost.
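    For a finite horizon, the coupled Riccati-like matrix difference equations can be iterated backward in time, one cost-to-go matrix per Markov mode. A small numerical sketch for two scalar modes (the system data are hypothetical; the recursion follows the standard Markovian-jump LQ form):

```python
import numpy as np

# Two modes with dynamics x+ = A[i] x + B[i] u and stage cost x'Q[i]x + u'R[i]u
A = [np.array([[1.2]]), np.array([[0.8]])]
B = [np.array([[1.0]]), np.array([[0.5]])]
Q = [np.eye(1), np.eye(1)]
R = [np.array([[1.0]]), np.array([[1.0]])]
Pr = np.array([[0.9, 0.1],    # mode transition probabilities p_ij
               [0.3, 0.7]])

def mjls_riccati(A, B, Q, R, Pr, horizon=50):
    """Backward recursion of the coupled Riccati difference equations."""
    m = len(A)
    P = [Q[i].copy() for i in range(m)]                    # terminal cost-to-go
    for _ in range(horizon):
        # E_i(P) = sum_j p_ij P_j: expected next-step cost-to-go in mode i
        E = [sum(Pr[i, j] * P[j] for j in range(m)) for i in range(m)]
        K = [np.linalg.solve(R[i] + B[i].T @ E[i] @ B[i],
                             B[i].T @ E[i] @ A[i]) for i in range(m)]
        P = [Q[i] + A[i].T @ E[i] @ (A[i] - B[i] @ K[i]) for i in range(m)]
    return P, K   # per-mode cost matrices and gains for u = -K[i] x

print(mjls_riccati(A, B, Q, R, Pr))
```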

  3. Optimality and stability of intentional and unintentional actions: I. Origins of drifts in performance.

    PubMed

    Parsa, Behnoosh; Terekhov, Alexander; Zatsiorsky, Vladimir M; Latash, Mark L

    2017-02-01

    We address the nature of unintentional changes in performance in two papers. This first paper tested a hypothesis that unintentional changes in performance variables during continuous tasks without visual feedback are due to two processes. First, there is a drift of the referent coordinate for the salient performance variable toward the actual coordinate of the effector. Second, there is a drift toward minimum of a cost function. We tested this hypothesis in four-finger isometric pressing tasks that required the accurate production of a combination of total moment and total force with natural and modified finger involvement. Subjects performed accurate force-moment production tasks under visual feedback, and then visual feedback was removed for some or all of the salient variables. Analytical inverse optimization was used to compute a cost function. Without visual feedback, both force and moment drifted slowly toward lower absolute magnitudes. Over 15 s, the force drop could reach 20% of its initial magnitude while moment drop could reach 30% of its initial magnitude. Individual finger forces could show drifts toward both higher and lower forces. The cost function estimated using the analytical inverse optimization reduced its value as a consequence of the drift. We interpret the results within the framework of hierarchical control with referent spatial coordinates for salient variables at each level of the hierarchy combined with synergic control of salient variables. The force drift is discussed as a natural relaxation process toward states with lower potential energy in the physical (physiological) system involved in the task.

  4. Optimality and stability of intentional and unintentional actions: I. Origins of drifts in performance

    PubMed Central

    Parsa, Behnoosh; Terekhov, Alexander; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2016-01-01

    We address the nature of unintentional changes in performance in two papers. This first paper tested a hypothesis that unintentional changes in performance variables during continuous tasks without visual feedback are due to two processes. First, there is a drift of the referent coordinate for the salient performance variable toward the actual coordinate of the effector. Second, there is a drift toward minimum of a cost function. We tested this hypothesis in four-finger isometric pressing tasks that required the accurate production of a combination of total moment and total force with natural and modified finger involvement. Subjects performed accurate force/moment production tasks under visual feedback, and then visual feedback was removed for some or all of the salient variables. Analytical inverse optimization was used to compute a cost function. Without visual feedback, both force and moment drifted slowly toward lower absolute magnitudes. Over 15 s, the force drop could reach 20% of its initial magnitude while moment drop could reach 30% of its initial magnitude. Individual finger forces could show drifts toward both higher and lower forces. The cost function estimated using the analytical inverse optimization reduced its value as a consequence of the drift. We interpret the results within the framework of hierarchical control with referent spatial coordinates for salient variables at each level of the hierarchy combined with synergic control of salient variables. The force drift is discussed as a natural relaxation process toward states with lower potential energy in the physical (physiological) system involved in the task. PMID:27785549

  5. AIMING FOR THE BULL'S EYE: The Cost-Utility of Screening for Hydroxychloroquine Retinopathy.

    PubMed

    McClellan, Andrew J; Chang, Jonathan S; Smiddy, William E

    2016-10-01

    Throughout medicine, the cost of various treatments has been increasingly studied, with the result that certain management guidelines might be reevaluated in their light. Cost-utility refers to the expense of preventing the loss of quality of life, quantified in dollars per quality-adjusted life year. In 2002, the American Academy of Ophthalmology published hydroxychloroquine screening recommendations, which were revised in 2011. The purpose of this report is to estimate the cost-utility of these recommendations. A hypothetical care model of screening for hydroxychloroquine retinopathy was formulated. The costs of screening components were calculated using 2016 Medicare fee schedules from the Centers for Medicare and Medicaid Services. The cost-utility of screening for hydroxychloroquine retinopathy with the 2011 American Academy of Ophthalmology guidelines was found to vary from $33,155 to $344,172 per quality-adjusted life year, depending on the type and number of objective screening tests chosen, the practice setting, and the duration of hydroxychloroquine use. Screening had a more favorable cost-utility when the more sensitive and specific diagnostics were used, and for patients with an increased risk of toxicity. The American Academy of Ophthalmology guidelines have a wide-ranging cost-utility. Prudent clinical judgment in risk stratification and choice of tests is necessary to optimize cost-utility without compromising the efficacy of screening.

  6. Economic evaluation of genomic selection in small ruminants: a sheep meat breeding program.

    PubMed

    Shumbusho, F; Raoul, J; Astruc, J M; Palhiere, I; Lemarié, S; Fugeray-Scarbel, A; Elsen, J M

    2016-06-01

    Recent genomic evaluation studies using real data and predicting genetic gain by modeling breeding programs have reported moderate expected benefits from the replacement of classic selection schemes by genomic selection (GS) in small ruminants. The objectives of this study were to compare the cost, monetary genetic gain and economic efficiency of classic selection and GS schemes in the meat sheep industry. Deterministic methods were used to model selection based on multi-trait indices from a sheep meat breeding program. Decisional variables related to male selection candidates and progeny testing were optimized to maximize the annual monetary genetic gain (AMGG), that is, a weighted sum of the annual genetic gains in meat and maternal traits. For GS, a reference population of 2000 individuals was assumed and genomic information was available for the evaluation of male candidates only. In the classic selection scheme, males' breeding values were estimated from their own and offspring phenotypes. In GS, different scenarios were considered, differing in the information used to select males (genomic only, genomic + own performance, genomic + offspring phenotypes). The results showed that all GS scenarios were associated with higher total variable costs than classic selection (if the cost of genotyping was 123 euros/animal). In terms of AMGG and economic returns, GS scenarios were found to be superior to classic selection only if genomic information was combined with males' own meat phenotypes (GS-Pheno) or with their progeny test information. The predicted economic efficiency, defined as returns (proportional to the number of expressions of AMGG in the nucleus and commercial flocks) minus total variable costs, showed that the best GS scenario (GS-Pheno) was up to 15% more efficient than classic selection. For all selection scenarios, optimization increased the overall AMGG, returns and economic efficiency. In conclusion, our study shows that some forms of GS strategies are more advantageous than classic selection, provided that GS is already initiated (i.e. the initial reference population is available). Optimizing the decisional variables of the classic selection scheme could be of greater benefit than including genomic information in optimized designs.

  7. Evaluating Diagnostic Point-of-Care Tests in Resource-Limited Settings

    PubMed Central

    Drain, Paul K; Hyle, Emily P; Noubary, Farzad; Freedberg, Kenneth A; Wilson, Douglas; Bishai, William; Rodriguez, William; Bassett, Ingrid V

    2014-01-01

    Diagnostic point-of-care (POC) testing is intended to minimize the time to obtain a test result, thereby allowing clinicians and patients to make an expeditious clinical decision. As POC tests expand into resource-limited settings (RLS), the benefits must outweigh the costs. To optimize POC testing in RLS, diagnostic POC tests need rigorous evaluations focused on relevant clinical outcomes and operational costs, which differ from evaluations of conventional diagnostic tests. Here, we reviewed published studies on POC testing in RLS, and found no clearly defined metric for the clinical utility of POC testing. Therefore, we propose a framework for evaluating POC tests, and suggest and define the term “test efficacy” to describe a diagnostic test’s capacity to support a clinical decision within its operational context. We also proposed revised criteria for an ideal diagnostic POC test in resource-limited settings. Through systematic evaluations, comparisons between centralized diagnostic testing and novel POC technologies can be more formalized, and health officials can better determine which POC technologies represent valuable additions to their clinical programs. PMID:24332389

  8. HIV prevention costs and their predictors: evidence from the ORPHEA Project in Kenya

    PubMed Central

    Galárraga, Omar; Wamai, Richard G; Sosa-Rubí, Sandra G; Mugo, Mercy G; Contreras-Loya, David; Bautista-Arredondo, Sergio; Nyakundi, Helen; Wang’ombe, Joseph K

    2017-01-01

    We estimate costs and their predictors for three HIV prevention interventions in Kenya: HIV testing and counselling (HTC), prevention of mother-to-child transmission (PMTCT) and voluntary medical male circumcision (VMMC). As part of the 'Optimizing the Response of Prevention: HIV Efficiency in Africa' (ORPHEA) project, we collected retrospective data from government and non-governmental health facilities for 2011-12. We used multi-stage sampling to determine a sample of health facilities by type, ownership, size and interventions offered, totalling 144 sites in 78 health facilities in 33 districts across Kenya. Data sources included key informants, registers and time-motion observation methods. Total costs of production were computed using both the quantity and the unit price of each input. Average cost was estimated by dividing the total cost per intervention by the number of clients accessing the intervention. Multivariate regression methods were used to analyse predictors of log-transformed average costs. Average costs were $7 per HTC client and $79 per PMTCT client tested, and $66 per VMMC procedure. Results show evidence of economies of scale for PMTCT and VMMC: increasing the number of clients per year by 100% was associated with cost reductions of 50% for PMTCT and 45% for VMMC. Task shifting was associated with reduced costs for both PMTCT (59%) and VMMC (54%). Costs in hospitals were higher for PMTCT (56%) in comparison with non-hospitals. Facilities that performed testing based on risk factors, as opposed to universal screening, had higher HTC average costs (79%). Lower VMMC costs were associated with the availability of male reproductive health services (59%) and the presence of a community advisory board (52%). Aside from increasing production scale, HIV prevention costs may be contained by using task shifting, non-hospital sites, service integration and community supervision. PMID:29029086

  9. Ensemble of surrogates-based optimization for identifying an optimal surfactant-enhanced aquifer remediation strategy at heterogeneous DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin

    2015-11-01

    The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL), based on an ensemble of surrogates optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was compared with the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, which indicates that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. GA was used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization at DNAPL-contaminated aquifers.
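    An ensemble surrogate of the kind described can be as simple as a (weighted) average of the individual fitted models' predictions. A hedged sketch using two scikit-learn regressors as stand-ins for the paper's KELM and KRG surrogates (training data and weights are invented):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(2)
# Hypothetical training data: remediation design variables -> removal rate
X = rng.uniform(0.0, 1.0, size=(40, 3))
y = 0.6 + 0.3 * X[:, 0] - 0.2 * X[:, 1] ** 2 + 0.05 * rng.normal(size=40)

models = [GaussianProcessRegressor().fit(X, y),   # kriging-like surrogate
          SVR().fit(X, y)]                        # second surrogate

def ensemble_predict(models, X_new, weights=(0.5, 0.5)):
    """Weighted average of the individual surrogate predictions."""
    preds = np.array([m.predict(X_new) for m in models])
    return np.average(preds, axis=0, weights=weights)

print(ensemble_predict(models, rng.uniform(0.0, 1.0, size=(3, 3))))
```

    In the paper's setting, a prediction of this kind replaces the expensive SEAR simulation inside the GA's objective evaluation.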

  10. Ensemble of Surrogates-based Optimization for Identifying an Optimal Surfactant-enhanced Aquifer Remediation Strategy at Heterogeneous DNAPL-contaminated Sites

    NASA Astrophysics Data System (ADS)

    Lu, W., Sr.; Xin, X.; Luo, J.; Jiang, X.; Zhang, Y.; Zhao, Y.; Chen, M.; Hou, Z.; Ouyang, Q.

    2015-12-01

    The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL), based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was compared with the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, which indicates that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. GA was used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization at DNAPL-contaminated aquifers.

  11. Motor planning under temporal uncertainty is suboptimal when the gain function is asymmetric

    PubMed Central

    Ota, Keiji; Shinya, Masahiro; Kudo, Kazutoshi

    2015-01-01

    For optimal action planning, the gain/loss associated with actions and the variability in motor output should both be considered. A number of studies make conflicting claims about the optimality of human action planning but cannot be reconciled due to their use of different movements and gain/loss functions. The disagreement is possibly because of differences in the experimental design and differences in the energetic cost of participant motor effort. We used a coincident timing task, which requires decision making with constant energetic cost, to test the optimality of participant's timing strategies under four configurations of the gain function. We compared participant strategies to an optimal timing strategy calculated from a Bayesian model that maximizes the expected gain. We found suboptimal timing strategies under two configurations of the gain function characterized by asymmetry, in which higher gain is associated with higher risk of zero gain. Participants showed a risk-seeking strategy by responding closer than optimal to the time of onset/offset of zero gain. Meanwhile, there was good agreement of the model with actual performance under two configurations of the gain function characterized by symmetry. Our findings show that human ability to make decisions that must reflect uncertainty in one's own motor output has limits that depend on the configuration of the gain function. PMID:26236227
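    The optimal-observer benchmark in this task picks the aim time that maximizes expected gain given the subject's own timing variability; under an asymmetric gain function the optimum shifts away from the high-gain edge. A minimal Monte Carlo sketch with a Gaussian timing-noise model and illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

def expected_gain(aim, gain_fn, sigma=0.05, n=200_000):
    """Monte Carlo expected gain when responses scatter as N(aim, sigma^2)."""
    t = rng.normal(aim, sigma, size=n)
    return gain_fn(t).mean()

# Asymmetric gain: rises linearly up to t = 1.0 s, zero gain afterwards,
# so aiming exactly at the peak risks frequent zero-gain responses.
gain = lambda t: np.where(t <= 1.0, np.clip(t, 0.0, None) * 100.0, 0.0)

aims = np.linspace(0.7, 1.0, 61)
best = aims[np.argmax([expected_gain(a, gain) for a in aims])]
print(f"optimal aim point ~ {best:.3f} s (earlier than the 1.0 s gain peak)")
```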

  12. Economic Model Predictive Control of Bihormonal Artificial Pancreas System Based on Switching Control and Dynamic R-parameter.

    PubMed

    Tang, Fengna; Wang, Youqing

    2017-11-01

    Blood glucose (BG) regulation is a long-term task for people with diabetes. In recent years, more and more researchers have attempted to achieve automated regulation of BG using automatic control algorithms, in what is called the artificial pancreas (AP) system. In clinical practice, it is equally important to guarantee the treatment effect and to reduce treatment costs, and the main motivation of this study is to reduce the treatment burden. Dynamic R-parameter economic model predictive control (R-EMPC) is chosen to regulate the delivery rates of the exogenous hormones (insulin and glucagon). It uses particle swarm optimization (PSO) to optimize the economic cost function, and the switching logic between insulin delivery and glucagon delivery is designed based on switching control theory. The proposed method is first tested on the standard subject, and the result is compared with switching PID and switching MPC. The effect of the dynamic R-parameter on improving the control performance is illustrated by comparing the results of EMPC and R-EMPC. Finally, robustness tests on meal changes (size and timing), hormone sensitivity (insulin and glucagon), and subject variability are performed. All results show that the proposed method can improve the control performance and reduce the economic costs. The simulation results verify the effectiveness of the proposed algorithm in improving tracking performance, enhancing robustness, and reducing economic costs, and the method shows great promise for practical application.

  13. Optimization of batteries for plug-in hybrid electric vehicles

    NASA Astrophysics Data System (ADS)

    English, Jeffrey Robb

    This thesis presents a method to quickly determine the optimal battery for an electric vehicle given a set of vehicle characteristics and desired performance metrics. The model is based on four independent design variables: cell count, cell capacity, state-of-charge window, and battery chemistry. Performance is measured in seven categories: cost, all-electric range, maximum speed, acceleration, battery lifetime, lifetime greenhouse gas emissions, and charging time. The performance of each battery is weighted according to a user-defined objective function to determine its overall fitness. The model is informed by a series of battery tests performed on scaled-down battery samples. Seven battery chemistries were tested for capacity at different discharge rates, maximum output power at different charge levels, and performance in a real-world automotive duty cycle. The results of these tests enable a prediction of the performance of the battery in an automobile. Testing was performed at both room temperature and low temperature to investigate the effects of battery temperature on operation. The testing highlighted differences in behavior between lithium, nickel, and lead based batteries. Battery performance decreased with temperature across all samples, with the largest effect on nickel-based chemistries. Output power also decreased with temperature, with lead acid batteries being the least affected. Lithium-ion batteries were found to be highly efficient (>95%) under a vehicular duty cycle; nickel and lead batteries have greater losses. Low temperatures hindered battery performance and resulted in accelerated failure in several samples. Lead acid, lead tin, and lithium nickel alloy batteries were unable to complete the low-temperature testing regime without losing significant capacity and power capability. This is a concern for their applicability in electric vehicles intended for cold climates, which have to maintain battery temperature during long periods of inactivity. Three sample optimizations were performed: a compact car, a truck, and a sports car. The compact car benefits from increased battery capacity despite the associated higher cost. The truck returned the smallest possible battery of each chemistry, indicating that electrification is not advisable. The sports car optimization resulted in the largest possible battery, indicating large performance gains from increased electrification. These results mirror the current state of the electric vehicle market.

  14. Cost-Effectiveness of Antibiotic Prophylaxis Strategies for Transrectal Prostate Biopsy in an Era of Increasing Antimicrobial Resistance.

    PubMed

    Lee, Kyueun; Drekonja, Dimitri M; Enns, Eva A

    2018-03-01

    To determine the optimal antibiotic prophylaxis strategy for transrectal prostate biopsy (TRPB) as a function of the local antibiotic resistance profile. We developed a decision-analytic model to assess the cost-effectiveness of four antibiotic prophylaxis strategies: ciprofloxacin alone, ceftriaxone alone, ciprofloxacin and ceftriaxone in combination, and directed prophylaxis selection based on susceptibility testing. We used a payer's perspective and estimated the health care costs and quality-adjusted life-years (QALYs) associated with each strategy for a cohort of 66-year-old men undergoing TRPB. Costs and benefits were discounted at 3% annually. Base-case resistance prevalence was 29% to ciprofloxacin and 7% to ceftriaxone, reflecting susceptibility patterns observed at the Minneapolis Veterans Affairs Health Care System. Resistance levels were varied in sensitivity analysis. In the base case, single-agent prophylaxis strategies were dominated. Directed prophylaxis strategy was the optimal strategy at a willingness-to-pay threshold of $50,000/QALY gained. Relative to the directed prophylaxis strategy, the incremental cost-effectiveness ratio of the combination strategy was $123,333/QALY gained over the lifetime time horizon. In sensitivity analysis, single-agent prophylaxis strategies were preferred only at extreme levels of resistance. Directed or combination prophylaxis strategies were optimal for a wide range of resistance levels. Facilities using single-agent antibiotic prophylaxis strategies before TRPB should re-evaluate their strategies unless extremely low levels of antimicrobial resistance are documented. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
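    At its core, the decision-analytic comparison weighs each strategy's upfront prophylaxis cost against the expected downstream cost of infection, which depends on local resistance prevalence. A much-simplified expected-value sketch (all costs and infection probabilities are invented; only the base-case resistance figures come from the abstract, and independence of the two resistances is assumed for the combination arm):

```python
def expected_cost(p_resist, p_inf_covered, p_inf_resist, c_proph, c_infection):
    """Expected cost of one strategy in a two-branch decision tree."""
    p_inf = p_resist * p_inf_resist + (1 - p_resist) * p_inf_covered
    return c_proph + p_inf * c_infection, p_inf

# name, probability the organism resists the regimen, drug cost (illustrative)
strategies = [("ciprofloxacin", 0.29, 5.0),
              ("ceftriaxone",   0.07, 12.0),
              ("combination",   0.29 * 0.07, 17.0)]  # independence assumed
for name, p_r, c_p in strategies:
    cost, p_inf = expected_cost(p_r, 0.005, 0.05, c_p, 8000.0)
    print(f"{name:14s} expected cost ${cost:7.2f}, infection risk {p_inf:.4f}")
```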

  15. Cost and fuel consumption per nautical mile for two engine jet transports using OPTIM and TRAGEN

    NASA Technical Reports Server (NTRS)

    Wiggs, J. F.

    1982-01-01

    The cost and fuel consumption per nautical mile for two engine jet transports are computed using OPTIM and TRAGEN. The savings in fuel and direct operating costs per nautical mile for each of the different types of optimal trajectories over a standard profile are shown.

  16. Neural-network-based online HJB solution for optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems.

    PubMed

    Liu, Derong; Wang, Ding; Wang, Fei-Yue; Li, Hongliang; Yang, Xiong

    2014-12-01

    In this paper, the infinite horizon optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems is investigated using a neural-network-based online solution of the Hamilton-Jacobi-Bellman (HJB) equation. By establishing an appropriate bounded function and defining a modified cost function, the optimal robust guaranteed cost control problem is transformed into an optimal control problem. It can be observed that the optimal cost function of the nominal system is nothing but the optimal guaranteed cost of the original uncertain system. A critic neural network is constructed to facilitate the solution of the modified HJB equation corresponding to the nominal system. More importantly, an additional stabilizing term is introduced to help verify stability; it reinforces the updating process of the weight vector and reduces the requirement of an initial stabilizing control. The uniform ultimate boundedness of the closed-loop system is analyzed by using the Lyapunov approach as well. Two simulation examples are provided to verify the effectiveness of the present control approach.
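
    For orientation, the generic infinite-horizon HJB equation underlying this class of methods is shown below for dynamics dx/dt = f(x) + g(x)u and a running cost Q(x) + uᵀRu, where Φ denotes the optimal cost-to-go; the paper's modified cost additionally bounds the uncertainty, which this generic form omits.

    ```latex
    0 = \min_{u}\Big[\, Q(x) + u^{\top}R\,u
          + \big(\nabla V^{*}(x)\big)^{\top}\big(f(x) + g(x)\,u\big) \Big],
    \qquad
    u^{*}(x) = -\tfrac{1}{2}\,R^{-1}g(x)^{\top}\nabla V^{*}(x).
    ```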

  17. Multi objective optimization model for minimizing production cost and environmental impact in CNC turning process

    NASA Astrophysics Data System (ADS)

    Widhiarso, Wahyu; Rosyidi, Cucuk Nur

    2018-02-01

    Minimizing production cost in a manufacturing company will increase the profit of the company. The cutting parameters affect total processing time, which in turn affects the production cost of the machining process. Besides affecting the production cost and processing time, the cutting parameters also affect the environment. An optimization model is therefore needed to determine the optimum cutting parameters. In this paper, we develop an optimization model to minimize the production cost and the environmental impact in the CNC turning process. The model uses a multi-objective optimization approach, with cutting speed and feed rate serving as the decision variables. Constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden by using eco-indicator 99. A numerical example is given to show the implementation of the model, solved using OptQuest of Oracle Crystal Ball software. The results of the optimization indicate that the model can be used to optimize the cutting parameters to minimize the production cost and the environmental impact.
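
    A minimal sketch of this kind of weighted-sum, bounded multi-objective formulation is given below, using SciPy in place of OptQuest/Crystal Ball. The cost and impact models, all coefficients, the weights, and the bounds are placeholder assumptions, not the paper's model.

    ```python
    # Weighted-sum bi-objective turning-parameter sketch; every number here
    # is an illustrative assumption.
    from scipy.optimize import minimize

    def production_cost(x):
        v, f = x            # cutting speed (m/min), feed rate (mm/rev)
        return 500.0 / (v * f) + 0.02 * v   # machining-time cost + wear term

    def environmental_impact(x):
        v, f = x
        return 80.0 / (v * f) + 0.01 * v    # energy-driven eco-indicator points

    w_cost, w_env = 0.7, 0.3
    objective = lambda x: (w_cost * production_cost(x)
                           + w_env * environmental_impact(x))

    # Simple bounds stand in for the speed/feed/force/power/roughness limits.
    res = minimize(objective, x0=[150.0, 0.2],
                   bounds=[(50.0, 300.0), (0.05, 0.5)])
    print(res.x, res.fun)
    ```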

  18. Implications of optimization cost for balancing exploration and exploitation in global search and for experimental optimization

    NASA Astrophysics Data System (ADS)

    Chaudhuri, Anirban

    Global optimization based on expensive and time-consuming simulations or experiments usually cannot be carried out to convergence, but must be stopped because of time constraints, or because the cost of additional function evaluations exceeds the benefits of improving the objective(s). This dissertation sets out to explore the implications of such budget and time constraints on the balance between exploration and exploitation and on the decision of when to stop. Three different aspects are considered in terms of their effects on the balance between exploration and exploitation: 1) the history of the optimization, 2) a fixed evaluation budget, and 3) cost as a part of the objective function. To this end, this research develops modifications to the surrogate-based Efficient Global Optimization algorithm that better control the balance between exploration and exploitation, together with stopping criteria facilitated by these modifications. The focus then shifts to experimental optimization, which shares the issues of cost and time constraints. Through a study on optimization of thrust and power for a small flapping wing for micro air vehicles, important differences and similarities between experimental and simulation-based optimization are identified. The most important difference is that reduction of noise in experiments becomes a major time and cost issue; a second difference is that parallelism as a way to cut cost is more challenging. The experimental optimization reveals the tendency of the surrogate to display optimistic bias near the surrogate optimum, a tendency then verified to also occur in simulation-based optimization.
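
    For context, the standard acquisition function that Efficient Global Optimization maximizes is the expected improvement, where μ(x) and σ(x) are the surrogate's predictive mean and standard deviation, f_min is the best value observed so far, and Φ and φ are the standard normal CDF and PDF. Modifications of the kind described above typically reweight its exploration term.

    ```latex
    \mathrm{EI}(x) \;=\; \big(f_{\min}-\mu(x)\big)\,\Phi(z) \;+\; \sigma(x)\,\phi(z),
    \qquad
    z \;=\; \frac{f_{\min}-\mu(x)}{\sigma(x)}.
    ```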

  19. An Economic Evaluation of Colorectal Cancer Screening in Primary Care Practice

    PubMed Central

    Meenan, Richard T.; Anderson, Melissa L.; Chubak, Jessica; Vernon, Sally W.; Fuller, Sharon; Wang, Ching-Yun; Green, Beverly B.

    2015-01-01

    Introduction Recent colorectal cancer screening studies focus on optimizing adherence. This study evaluated the cost effectiveness of interventions using electronic health records (EHRs), automated mailings, and stepped support increases to improve 2-year colorectal cancer screening adherence. Methods Analyses were based on a parallel-design, randomized trial in which three stepped interventions (EHR-linked mailings [“automated”], automated plus telephone assistance [“assisted”], or automated and assisted plus nurse navigation to testing completion or refusal [“navigated”]) were compared to usual care. Data were from August 2008–November 2011 with analyses performed during 2012–2013. Implementation resources were micro-costed; research and registry development costs were excluded. Incremental cost-effectiveness ratios (ICERs) were based on number of participants current for screening per guidelines over 2 years. Bootstrapping examined robustness of results. Results Intervention delivery cost per participant current for screening ranged from $21 (automated) to $27 (navigated). Inclusion of induced testing costs (e.g., screening colonoscopy) lowered expenditures for automated (ICER=−$159) and assisted (ICER=−$36) relative to usual care over 2 years. Savings arose from increased fecal occult blood testing, substituting for more expensive colonoscopies in usual care. Results were broadly consistent across demographic subgroups. More intensive interventions were consistently likely to be cost effective relative to less intensive interventions, with willingness to pay values of $600–$1,200 for an additional person current for screening yielding ≥80% probability of cost effectiveness. Conclusions Two-year cost effectiveness of a stepped approach to colorectal cancer screening promotion based on EHR data is indicated, but longer-term cost effectiveness requires further study. PMID:25998922

  20. Rational risk-based decision support for drinking water well managers by optimized monitoring designs

    NASA Astrophysics Data System (ADS)

    Enzenhöfer, R.; Geiges, A.; Nowak, W.

    2011-12-01

    Advection-based well-head protection zones are commonly used to manage the contamination risk of drinking water wells. Considering the insufficient knowledge about hazards and transport properties within the catchment, current Water Safety Plans recommend that catchment managers and stakeholders know, control and monitor all possible hazards within the catchments and make rational risk-based decisions. Our goal is to supply catchment managers with the required probabilistic risk information, and to generate tools that allow for optimal and rational allocation of resources between improved monitoring versus extended safety margins and risk mitigation measures. To support risk managers with the indispensable information, we address the epistemic uncertainty of advective-dispersive solute transport and well vulnerability (Enzenhoefer et al., 2011) within a stochastic simulation framework. Our framework can separate the uncertainty of contaminant location from the actual dilution of peak concentrations by resolving heterogeneity with high-resolution Monte-Carlo simulation. To keep computational costs low, we solve the reverse temporal moment transport equation. Only in post-processing do we recover the time-dependent solute breakthrough curves and the deduced well vulnerability criteria from temporal moments by non-linear optimization. Our first step towards optimal risk management is optimal positioning of sampling locations and optimal choice of data types to best reduce the epistemic prediction uncertainty for well-head delineation, using the cross-bred Likelihood Uncertainty Estimator (CLUE, Leube et al., 2011) for optimal sampling design. Better monitoring leads to more reliable and realistic protection zones and thus helps catchment managers to better justify smaller, yet conservative safety margins. In order to allow an optimal choice of sampling strategies, we compare the trade-off between monitoring and delineation costs by accounting for ill-delineated fractions of protection zones. We demonstrate our concept within an illustrative, simplified 2D synthetic test case, using synthetic transmissivity and head measurements for conditioning. We demonstrate the worth of optimally collected data in the context of protection zone delineation by assessing the reduction in delineated area at a user-specified risk acceptance level. Results indicate that, thanks to optimally collected data, risk-aware delineation can be made at low to moderate additional costs compared to conventional delineation strategies.

  1. Optimal tracking and testing of U.S. and Canadian herds for BSE: a value-of-information (VOI) approach.

    PubMed

    Cox, Louis Anthony; Popken, Douglas A; VanSickle, John J; Sahu, Ranajit

    2005-08-01

    The U.S. Department of Agriculture (USDA) tests a subset of cattle slaughtered in the United States for bovine spongiform encephalopathy (BSE). Knowing the origin of cattle (U.S. vs. Canadian) at testing could enable new testing or surveillance policies based on the origin of cattle testing positive. For example, if a Canadian cow tests positive for BSE while no U.S.-origin cattle do, the United States could subject Canadian cattle to more stringent testing. This article illustrates the application of a value-of-information (VOI) framework to quantify and compare the potential economic costs to the United States of implementing tracking of cattle origins with the costs of not doing so. The potential economic value of information from a tracking program is estimated to exceed its costs by more than five-fold if such information can reduce future losses in export and domestic markets and reduce future testing costs required to reassure or win back customers. Sensitivity analyses indicate that this conclusion is somewhat robust to many technical, scientific, and market uncertainties, including the current prevalence of BSE in the United States and/or Canada and the likely reactions of consumers to possible future discoveries of BSE in the United States and/or Canada. Indeed, the potential value of tracking information is great enough to justify locating and tracking Canadian cattle already in the United States when this can be done for a reasonable cost. If aggressive tracking and testing can win back lost exports, then the VOI of a tracking program may increase to over half a billion dollars per year.
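
    The VOI logic can be sketched as the difference between expected losses without and with the tracking information. The probabilities and dollar figures below are invented placeholders chosen only to illustrate the arithmetic, not the article's estimates.

    ```python
    def expected_loss(p_event, loss_if_event, fixed_cost=0.0):
        """Expected annual loss: P(event) x loss given event, plus program cost."""
        return p_event * loss_if_event + fixed_cost

    # Without tracking, a positive test risks market-wide losses (placeholders).
    loss_no_tracking = expected_loss(0.02, 3.0e9)             # $60M expected
    # With tracking, a Canadian-origin positive triggers targeted measures
    # only, at an annual program cost (placeholders).
    loss_with_tracking = expected_loss(0.02, 0.5e9, 20.0e6)   # $30M expected

    voi = loss_no_tracking - loss_with_tracking
    print(f"VOI of tracking ~ ${voi/1e6:.0f}M per year")      # ~$30M here
    ```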

  2. A predictive control framework for optimal energy extraction of wind farms

    NASA Astrophysics Data System (ADS)

    Vali, M.; van Wingerden, J. W.; Boersma, S.; Petrović, V.; Kühn, M.

    2016-09-01

    This paper proposes an adjoint-based model predictive control for optimal energy extraction of wind farms. It employs the axial induction factor of wind turbines to influence their aerodynamic interactions through the wake. The performance index is defined here as the total power production of the wind farm over a finite prediction horizon. A medium-fidelity wind farm model is utilized to predict the inflow propagation in advance. The adjoint method is employed to solve the formulated optimization problem in a cost-effective way, and the first part of the optimal solution is implemented over the control horizon. This procedure is repeated at the next controller sample time, providing feedback into the optimization. The effectiveness and some key features of the proposed approach are studied for a two-turbine test case through simulations.

  3. Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants

    NASA Astrophysics Data System (ADS)

    Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo

    2017-10-01

    Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selection of the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm seeks the best spectral simulation: minimum fitness error toward the target spectrum, a correlated color temperature (CCT) matching that of the target spectrum, a high color rendering index (CRI), and the luminous flux required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed for use in complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative analysis of the M-GEO evolutionary algorithm against the conventional deterministic Levenberg-Marquardt algorithm is also presented.

  4. Assessing the shelf life of cost-efficient conservation plans for species at risk across gradients of agricultural land use.

    PubMed

    Robillard, Cassandra M; Kerr, Jeremy T

    2017-08-01

    High costs of land in agricultural regions warrant spatial prioritization approaches to conservation that explicitly consider land prices to produce protected-area networks that accomplish targets efficiently. However, land-use changes in such regions and delays between plan design and implementation may render optimized plans obsolete before implementation occurs. To measure the shelf life of cost-efficient conservation plans, we simulated a land-acquisition and restoration initiative aimed at conserving species at risk in Canada's farmlands. We accounted for observed changes in land-acquisition costs and in agricultural intensity based on censuses of agriculture taken from 1986 to 2011. For each year of data, we mapped costs and areas of conservation priority designated using Marxan. We compared plans to test for changes through time in the arrangement of high-priority sites and in the total cost of each plan. For acquisition costs, we measured the savings from accounting for prices during site selection. Land-acquisition costs and land-use intensity generally rose over time independent of inflation (24-78%), although rates of change were heterogeneous through space and decreased in some areas. Accounting for spatial variation in land price lowered the cost of conservation plans by 1.73-13.9%, decreased the range of costs by 19-82%, and created unique solutions from which to choose. Despite the rise in plan costs over time, the high conservation priority of particular areas remained consistent. Delaying conservation in these critical areas may compromise what optimized conservation plans can achieve. In the case of Canadian farmland, rapid conservation action is cost-effective, even with moderate levels of uncertainty in how to implement restoration goals. © 2016 Society for Conservation Biology.

  5. Portable oxygen subsystem. [design analysis and performance tests

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The concept and design of a portable oxygen device for use in the space shuttle orbiter are presented. Hardware fabrication and acceptance tests (i.e., breadboard models) are outlined and discussed. Optimization of the system (for weight, volume, safety, and cost) is discussed. The device is of the rebreather type and provides a revitalized breathing gas supply to a crewman for denitrogenization and emergency activities. Engineering drawings and photographs of the device are shown.

  6. Glucose monitoring as a guide to diabetes management. Critical subject review.

    PubMed

    Koch, B

    1996-06-01

    To encourage a balanced approach to blood glucose monitoring in diabetes by a critical review of the history, power, and cost of glucose testing. The Cambridge Data Base was searched and was supplemented by a random review of other relevant sources, including textbooks, company pamphlets, and laboratory manuals. Keywords used were "glucosuria diagnosis," "blood glucose self-monitoring," "glycosylated hemoglobin," and "fructosamine" for the 10-year period ending 1992, restricted to English language and human. About 200 titles were retrieved and reviewed according to the author's judgment of relevance. "Snapshot tests" (venous and capillary blood glucose) and "memory tests" (urine glucose, glycated hemoglobin fractions, and fructosamine) must be employed according to individual patients' treatment goals. Day-to-day metabolic guidance is facilitated by capillary blood glucose testing for patients receiving insulin and by urine glucose testing for others. Capillary blood glucose testing is mandatory in cases of hypoglycemia unawareness (inability to sense hypoglycemia because of neuropathy) but is not a substitute for a knowledge of clinical hypoglycemia self-care. Criteria by reason (clinical judgement and cost effectiveness) must be separated from criteria by emotion (preoccupation with technology and marketing). No randomized studies show that any of these tests consistently improve clinical outcome. Optimal metabolic control and cost savings can be expected from a rational selection of tests.

  7. A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.

    2003-01-01

    In this paper we present a comparison of trajectory optimization approaches for the minimum-fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), quasi-Newton, and Nelder-Mead simplex methods. Several cost function parameterizations are considered for the direct approach. We choose the one direct approach that appears to be the most flexible. Both the direct and indirect methods are applied to a variety of test cases chosen to demonstrate the performance of each method in different flight regimes. The first test case is a simple circular-to-circular coplanar rendezvous. The second test case is an elliptic-to-elliptic line of apsides rotation. The final test case is an orbit phasing maneuver sequence in a highly elliptic orbit. For each test case we present a comparison of the performance of all methods considered in this paper.

  8. A reliability as an independent variable (RAIV) methodology for optimizing test planning for liquid rocket engines

    NASA Astrophysics Data System (ADS)

    Strunz, Richard; Herrmann, Jeffrey W.

    2011-12-01

    The hot fire test strategy for liquid rocket engines has always been a concern of the space industry and agencies alike because no recognized standard exists. Previous hot fire test plans focused on the verification of performance requirements but did not explicitly include reliability as a dimensioning variable. The stakeholders are, however, concerned about a hot fire test strategy that balances reliability, schedule, and affordability. A multiple-criteria test planning model is presented that provides a framework to optimize the hot fire test strategy with respect to stakeholder concerns. The Staged Combustion Rocket Engine Demonstrator, a program of the European Space Agency, is used as an example to provide a quantitative answer to the claim that a reduced-thrust-scale demonstrator is cost beneficial for a subsequent flight engine development. Scalability aspects of major subsystems are considered in the definition of the prior information inside the Bayesian framework. The model is also applied to assess the impact of an increase of the demonstrated reliability level on schedule and affordability.
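
    A minimal sketch of the Bayesian mechanics behind this kind of test planning follows: with a Beta prior on per-test success probability and n hot-fire tests yielding k failures, the posterior is again a Beta distribution, and a lower percentile gives the reliability demonstrated at a given confidence. The prior parameters, test counts, and confidence level are assumptions, not the paper's numbers.

    ```python
    # Beta-binomial reliability demonstration sketch; all numbers are
    # illustrative assumptions.
    from scipy.stats import beta

    a, b = 8.0, 1.0          # prior encoding (assumed) demonstrator heritage
    n, k = 20, 0             # planned hot-fire tests, observed failures
    posterior = beta(a + n - k, b + k)

    # Reliability demonstrated at 90% confidence: the 10th percentile.
    print(posterior.ppf(0.10))
    ```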

  9. Advanced Structural Optimization Under Consideration of Cost Tracking

    NASA Astrophysics Data System (ADS)

    Zell, D.; Link, T.; Bickelmaier, S.; Albinger, J.; Weikert, S.; Cremaschi, F.; Wiegand, A.

    2014-06-01

    In order to improve the design process of launcher configurations in the early development phase, the Multidisciplinary Optimization (MDO) software was developed. The tool combines efficient software tools such as Optimal Design Investigations (ODIN) for structural optimization and Aerospace Trajectory Optimization Software (ASTOS) for trajectory and vehicle design optimization for a defined payload and mission. The present paper focuses on the integration and validation of ODIN. ODIN enables the user to optimize typical axisymmetric structures by sizing the stiffening designs for strength and stability while minimizing the structural mass. In addition, a fully automatic finite element model (FEM) generator module creates ready-to-run FEM models of a complete stage or launcher assembly. Cost tracking and future improvements concerning cost optimization are indicated.

  10. Optimization of dynamic envelope measurement system for high speed train based on monocular vision

    NASA Astrophysics Data System (ADS)

    Wu, Bin; Liu, Changjie; Fu, Luhua; Wang, Zhong

    2018-01-01

    The dynamic envelope curve is defined as the maximum limit outline caused by various adverse effects during the running of the train; it is an important basis for setting railway boundaries. At present, measurement of the dynamic envelope curve of high-speed vehicles is mainly achieved by binocular vision. The present measuring system suffers from poor portability, a complicated process, and high cost. A new measurement system based on monocular vision measurement theory and an analysis of the test environment is designed in this paper, and the measurement system parameters, the calibration of the wide-field-of-view camera, and the calibration of the laser plane are designed and optimized. The accuracy has been verified to be within 2 mm by repeated tests and analysis of the experimental data, validating the feasibility and adaptability of the measurement system. The system offers lower cost, a simpler measurement and data processing process, and more reliable data, and it requires no matching algorithm.

  11. A Decision-making Model for a Two-stage Production-delivery System in SCM Environment

    NASA Astrophysics Data System (ADS)

    Feng, Ding-Zhong; Yamashiro, Mitsuo

    A decision-making model is developed for an optimal production policy in a two-stage production-delivery system that incorporates a fixed-quantity supply of finished goods to a buyer at a fixed interval of time. First, a general cost model is formulated considering both the supplier (of raw materials) and buyer (of finished products) sides. Then an optimal solution to the problem is derived on the basis of the cost model. Using the proposed model and its optimal solution, one can determine the optimal production lot size for each stage, the optimal number of transports of semi-finished goods, and the optimal quantity of semi-finished goods transported each time to meet the lumpy demand of consumers. We also examine the sensitivity of raw-material ordering and production lot size to changes in ordering cost, transportation cost, and manufacturing setup cost. A pragmatic computation approach for operational situations is proposed to obtain integer approximate solutions. Finally, some numerical examples are given.
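
    The flavor of such a model can be sketched as a search over a production lot Q and a number m of equal transports per cycle against a generic ordering + setup + transport + holding cost. The cost structure and all parameters below are placeholder assumptions, not the paper's formulation.

    ```python
    # Generic two-stage lot-size/transport-count search; every number is an
    # illustrative assumption.
    D = 12000.0              # annual demand (units)
    A1, A2 = 400.0, 250.0    # raw-material ordering / manufacturing setup costs
    F = 60.0                 # cost per transport of semi-finished goods
    h1, h2 = 2.0, 3.5        # holding cost rates at the two stages

    def total_cost(Q, m):
        cycles = D / Q
        return (cycles * (A1 + A2 + m * F)   # fixed costs per production cycle
                + h1 * Q / (2 * m)           # stage-1 stock, m equal shipments
                + h2 * Q / 2)                # stage-2 finished-goods stock

    best = min(((total_cost(Q, m), Q, m)
                for Q in range(100, 3001, 50)
                for m in range(1, 11)), key=lambda t: t[0])
    print(best)   # (minimum cost, lot size Q, transports per cycle m)
    ```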

  12. Active distribution network planning considering linearized system loss

    NASA Astrophysics Data System (ADS)

    Li, Xiao; Wang, Mingqiang; Xu, Hao

    2018-02-01

    In this paper, various distribution network planning techniques with DGs are reviewed, and a new distribution network planning method is proposed. It assumes that the locations of DGs and the topology of the network are fixed. The proposed model optimizes the DG capacities and the distribution line capacities simultaneously through a cost/benefit analysis, where the benefit is quantified by the reduction of the expected interruption cost. In addition, the network loss is explicitly analyzed. For simplicity, the network loss is approximated as a quadratic function of the voltage phase-angle difference and then piecewise linearized; a piecewise linearization technique with different segment lengths is proposed. To validate its effectiveness and superiority, the proposed distribution network planning model with this linearization technique is tested on the IEEE 33-bus distribution network system.
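
    A small sketch of the idea follows: approximate a quadratic loss term by linear interpolation between non-uniformly spaced breakpoints, finer near the normal operating point where accuracy matters most. The loss coefficient and breakpoints are illustrative assumptions, not values from the paper.

    ```python
    # Piecewise-linear approximation of a quadratic loss p(d) = k*d^2 with
    # unequal segment lengths; k and the breakpoints are illustrative.
    import numpy as np

    k = 0.8
    breakpoints = np.array([0.0, 0.02, 0.05, 0.10, 0.20])  # rad, non-uniform
    values = k * breakpoints**2

    def loss_pwl(delta):
        """Piecewise-linear interpolation of the quadratic loss."""
        return np.interp(abs(delta), breakpoints, values)

    for d in (0.01, 0.08, 0.15):
        print(d, loss_pwl(d), k * d**2)   # PWL approximation vs exact value
    ```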

  13. Cast iron-base alloy for cylinder/regenerator housing

    NASA Technical Reports Server (NTRS)

    Witter, Stewart L.; Simmons, Harold E.; Woulds, Michael J.

    1985-01-01

    NASACC-1 is a castable iron-base alloy designed to replace the costly and strategic cobalt-base X-40 alloy used in the automotive Stirling engine cylinder/regenerator housing. Over 40 alloy compositions were evaluated using investment cast test bars for stress-rupture testing. Also, hydrogen compatibility and oxygen corrosion resistance tests were used to determine the optimal alloy. NASACC-1 alloy was characterized using elevated and room temperature tensile, creep-rupture, low cycle fatigue, heat capacity, specific heat, and thermal expansion testing. Furthermore, phase analysis was performed on samples in several heat-treated conditions. The properties are very encouraging. NASACC-1 alloy shows stress-rupture and low cycle fatigue properties equivalent to X-40. The oxidation resistance surpassed the program goal while maintaining acceptable resistance to hydrogen exposure. The welding, brazing, and casting characteristics are excellent. Finally, the cost of NASACC-1 is significantly lower than that of X-40.

  14. Propulsion and Power Rapid Response Research and Development (R&D) Support. Task Order 0004: Advanced Propulsion Fuels R&D, Subtask: Optimization of Lipid Production and Processing of Microalgae for the Development of Biofuels

    DTIC Science & Technology

    2013-02-01

    Purified cultures are tested for optimized production under heterotrophic conditions with several organic carbon sources like beet and sorghum juice using ... Moreover, AFRL support sponsored the Master’s in Chemical Engineering project titled “Cost Analysis Of Local Bio-Products Processing Plant Using ...” ... unlimited. ... 2.5 Screening for High Lipid Production Mutants. Procedure: A selection of 84 single colony cultures was analyzed in this phase using the ...

  15. An optimization-based approach for facility energy management with uncertainties, and, Power portfolio optimization in deregulated electricity markets with risk management

    NASA Astrophysics Data System (ADS)

    Xu, Jun

    Topic 1. An Optimization-Based Approach for Facility Energy Management with Uncertainties. Effective energy management for facilities is becoming increasingly important in view of rising energy costs, the government mandate on the reduction of energy consumption, and human comfort requirements. This part of the dissertation presents a daily energy management formulation and the corresponding solution methodology for HVAC systems. The problem is to minimize the energy and demand costs through the control of HVAC units while satisfying human comfort, system dynamics, load limit constraints, and other requirements. The problem is difficult in view of the fact that the system is nonlinear, time-varying, building-dependent, and uncertain, and that the direct control of a large number of HVAC components is difficult. In this work, HVAC setpoints are the control variables, developed on top of a Direct Digital Control (DDC) system. A method that combines Lagrangian relaxation, neural networks, stochastic dynamic programming, and heuristics is developed to predict the system dynamics and uncontrollable load, and to optimize the setpoints. Numerical testing and prototype implementation results show that our method can effectively reduce total costs, manage uncertainties, and shed load, and that it is computationally efficient. Furthermore, it is significantly better than existing methods. Topic 2. Power Portfolio Optimization in Deregulated Electricity Markets with Risk Management. In a deregulated electric power system, multiple markets of different time scales exist with various power supply instruments. A load serving entity (LSE) has multiple choices from these instruments to meet its load obligations. In view of the large amount of power involved, the complex market structure, risks in such volatile markets, stringent constraints to be satisfied, and the long time horizon, a power portfolio optimization problem is of critical importance, but difficult, for an LSE seeking to serve the load, maximize its profit, and manage risks. In this topic, a mid-term power portfolio optimization problem with risk management is presented. Key instruments are considered, risk terms based on semi-variances of spot market transactions are introduced, and penalties on load obligation violations are added to the objective function to improve algorithm convergence and constraint satisfaction. To overcome the inseparability of the resulting problem, a surrogate optimization framework is developed, enabling a decomposition and coordination approach. Numerical testing results show that our method effectively provides decisions for various instruments to maximize profit and manage risks, and that it is computationally efficient.

  16. Cost effectiveness of meniscal allograft for torn discoid lateral meniscus in young women.

    PubMed

    Ramme, Austin J; Strauss, Eric J; Jazrawi, Laith; Gold, Heather T

    2016-09-01

    A discoid meniscus is more prone to tears than a normal meniscus. Patients with a torn discoid lateral meniscus are at increased risk for early-onset osteoarthritis requiring total knee arthroplasty (TKA). Optimal management for this condition is controversial given the up-front cost difference between the two treatment options: the more expensive meniscal allograft transplantation compared with standard partial meniscectomy. We hypothesize that meniscal allograft transplantation following excision of a torn discoid lateral meniscus is more cost-effective compared with partial meniscectomy alone because allografts will extend the time to TKA. A decision-analytic Markov model was created to compare the cost effectiveness of the two treatments for a symptomatic, torn discoid lateral meniscus: meniscal allograft and partial meniscectomy. Probability estimates and event rates were derived from the scientific literature, and costs and benefits were discounted by 3%. One-way sensitivity analyses were performed to test model robustness. Over 25 years, the partial meniscectomy strategy cost $10,430, whereas meniscal allograft cost on average $4040 more, at $14,470. Partial meniscectomy postponed TKA an average of 12.5 years, compared with 17.3 years for meniscal allograft, an increase of 4.8 years. Allograft cost $842 per year gained in time to TKA. Meniscal allografts have been shown to reduce pain and improve function in patients with discoid lateral meniscus tears. Though more costly, meniscal allografts may be more effective than partial meniscectomy in delaying TKA in this model. Additional long-term clinical studies will provide more insight into optimal surgical options.

  17. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is between one and two orders of magnitude faster than the HFS solver.
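
    The figure of merit can be sketched directly: stopping after N calls costs the best objective value seen so far plus N times the cost per call, and the optimal N minimizes the expectation of that total. The run distribution and cost per call below are synthetic placeholders.

    ```python
    # Monte-Carlo estimate of the expected total cost of stopping after
    # n_stop solver calls; the solver-outcome distribution is synthetic.
    import random

    random.seed(0)
    runs = [random.gauss(10.0, 2.0) for _ in range(10_000)]  # solver outcomes

    def expected_total_cost(runs, c, n_stop, trials=2000):
        """E[min objective over n_stop calls + c * n_stop], estimated by MC."""
        total = 0.0
        for _ in range(trials):
            sample = [random.choice(runs) for _ in range(n_stop)]
            total += min(sample) + c * n_stop
        return total / trials

    for n in (1, 3, 10, 30, 100):   # the minimum over n reveals the optimal stop
        print(n, round(expected_total_cost(runs, c=0.05, n_stop=n), 3))
    ```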

  18. Community Microgrid Scheduling Considering Network Operational Constraints and Building Thermal Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Guodong; Ollis, Thomas B.; Xiao, Bailu

    Here, this paper proposes a Mixed Integer Conic Programming (MICP) model for community microgrids considering the network operational constraints and building thermal dynamics. The proposed optimization model optimizes not only the operating cost, including fuel cost, purchasing cost, battery degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation from the set point, but also several performance indices, including voltage deviation, network power loss and power factor at the Point of Common Coupling (PCC). In particular, the detailed thermal dynamic model of buildings is integrated into the distribution optimal power flow (D-OPF) model for the optimal operation of community microgrids. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model and significant saving in electricity cost could be achieved with network operational constraints satisfied.

  20. Suboptimal LQR-based spacecraft full motion control: Theory and experimentation

    NASA Astrophysics Data System (ADS)

    Guarnaccia, Leone; Bevilacqua, Riccardo; Pastorelli, Stefano P.

    2016-05-01

    This work introduces a real time suboptimal control algorithm for six-degree-of-freedom spacecraft maneuvering based on a State-Dependent-Algebraic-Riccati-Equation (SDARE) approach and real-time linearization of the equations of motion. The control strategy is sub-optimal since the gains of the linear quadratic regulator (LQR) are re-computed at each sample time. The cost function of the proposed controller has been compared with the one obtained via a general purpose optimal control software, showing, on average, an increase in control effort of approximately 15%, compensated by real-time implementability. Lastly, the paper presents experimental tests on a hardware-in-the-loop six-degree-of-freedom spacecraft simulator, designed for testing new guidance, navigation, and control algorithms for nano-satellites in a one-g laboratory environment. The tests show the real-time feasibility of the proposed approach.
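
    The per-sample gain recomputation can be sketched with SciPy's Riccati solver as below. The double-integrator dynamics, weights, and step size are placeholders, and a true SDRE controller would re-evaluate a state-dependent A(x) at each step; here A is constant for brevity.

    ```python
    # Recompute-the-LQR-gain-each-sample sketch (SDRE-style); dynamics and
    # weights are illustrative placeholders.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    Q = np.diag([10.0, 1.0])
    R = np.array([[0.1]])
    x = np.array([1.0, 0.0])
    dt = 0.01

    for _ in range(1000):                     # control loop, one gain per sample
        A = np.array([[0.0, 1.0],             # constant here; would be A(x)
                      [0.0, 0.0]])            # in a true SDRE controller
        B = np.array([[0.0], [1.0]])
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)       # feedback law u = -K x
        u = -K @ x
        x = x + dt * (A @ x + (B @ u).ravel())  # explicit-Euler plant step

    print(x)   # state driven toward the origin
    ```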

  1. Using predictive uncertainty analysis to optimise tracer test design and data acquisition

    NASA Astrophysics Data System (ADS)

    Wallis, Ilka; Moore, Catherine; Post, Vincent; Wolf, Leif; Martens, Evelien; Prommer, Henning

    2014-07-01

    Tracer injection tests are regularly used tools to identify and characterise flow and transport mechanisms in aquifers. Examples of practical applications are manifold and include, among others, managed aquifer recharge schemes, aquifer thermal energy storage systems and, increasingly important, the disposal of produced water from oil and shale gas wells. The hydrogeological and geochemical data collected during the injection tests are often employed to assess the potential impacts of injection on receptors such as drinking water wells and regularly serve as a basis for the development of conceptual and numerical models that underpin the prediction of potential impacts. Because all field tracer injection tests involve substantial logistical and financial effort, it is crucial to develop a solid a-priori understanding of the value of the various monitoring data in order to select monitoring strategies which provide the greatest return on investment. In this study, we demonstrate the ability of linear predictive uncertainty analysis (i.e. “data worth analysis”) to quantify the usefulness of different tracer types (bromide, temperature, methane and chloride as examples) and head measurements in the context of a field-scale aquifer injection trial of coal seam gas (CSG) co-produced water. Data worth was evaluated in terms of tracer type, tracer test design (e.g., injection rate, test duration, and measurement frequency) and monitoring disposition to increase the reliability of injection impact assessments. This was followed by an uncertainty-targeted Pareto analysis, which allowed the interdependencies of cost and predictive reliability for alternative monitoring campaigns to be compared directly. For the evaluated injection test, the data worth analysis assessed bromide as superior to head data and all other tracers during early sampling times. However, with time, chloride became a more suitable tracer to constrain simulations of physical transport processes, followed by methane. Temperature data was assessed as the least informative of the solute tracers. However, taking costs of data acquisition into account, temperature, when used in conjunction with other tracers, proved a valuable and cost-effective marker species due to its low cost-to-worth ratio. In contrast, the high cost of acquiring methane data compared to its muted worth highlighted methane's unfavourable return on investment. Optimal monitoring bore positions as well as optimal numbers of bores for the investigated injection site were also established. The proposed tracer test optimisation applies commonly used groundwater flow and transport models in conjunction with publicly available tools for predictive uncertainty analysis, providing modelers and practitioners with a powerful, efficient and cost-effective approach that is generally applicable and easily transferable from the present study to many applications beyond the case study of injection of treated CSG produced water.

  2. A simple and cost effective liquid culture system for the micropropagation of two commercially important apple rootstocks.

    PubMed

    Mehta, Mohina; Ram, Raja; Bhattacharya, Amita

    2014-07-01

    The two commercially important apple rootstocks, MM106 and B9, were micropropagated using a liquid culture system. Three different strengths of 0.8% agar-solidified, PGR-free basal MS medium were first tested to optimize the culture media for both rootstocks. Full-strength medium (MS0) supported maximum in vitro growth, multiplication, rooting, and survival under field conditions, as opposed to quarter- and half-strength media. When three different volumes of liquid MS0 were tested, the highest in vitro growth, multiplication, rooting, and survival under field conditions were achieved in 20 mL of liquid MS0. The cost of one litre of liquid medium, at Rs. 6.29, was also eight times lower than that of solid medium. The cost of 20 mL of medium was further reduced to Rs. 0.125.

  3. Development of Fully Automated Low-Cost Immunoassay System for Research Applications.

    PubMed

    Wang, Guochun; Das, Champak; Ledden, Bradley; Sun, Qian; Nguyen, Chien

    2017-10-01

    Enzyme-linked immunosorbent assay (ELISA) automation for routine operation in a small research environment would be very attractive. A portable, fully automated, low-cost immunoassay system was designed, developed, and evaluated with several protein analytes. It features disposable capillary columns as the reaction sites and uses real-time calibration for improved accuracy. It reduces the overall assay time to less than 75 min and can easily be adapted to new testing targets. The running cost is extremely low due to the nature of automation, as well as reduced material requirements. Details about system configuration, component selection, disposable fabrication, system assembly, and operation are reported. The performance of the system was initially established with a rabbit immunoglobulin G (IgG) assay, and an example of assay adaptation with an interleukin 6 (IL6) assay is shown. This system is ideal for research use, but could work for broader testing applications with further optimization.

  4. An interprovincial cooperative game model for air pollution control in China.

    PubMed

    Xue, Jian; Zhao, Laijun; Fan, Longzhen; Qian, Ying

    2015-07-01

    The noncooperative air pollution reduction model (NCRM) currently adopted in China, under which each province manages its own pollution reduction, has inherent drawbacks. In this paper, we propose a cooperative air pollution reduction game model (CRM) that consists of two parts: (1) an optimization model that calculates the optimal pollution reduction quantity for each participating province to meet the joint pollution reduction goal; and (2) a model that distributes the economic benefit of the cooperation (i.e., pollution reduction cost savings) among the provinces in the cooperation based on the Shapley value method. We applied the CRM to the case of SO2 reduction in the Beijing-Tianjin-Hebei region in China. The results, based on data from 2003-2009, show that cooperation lowers the overall SO2 pollution reduction cost by 4.58% to 11.29%. Distributed across the participating provinces, such a cost saving from interprovincial cooperation brings significant benefits to each local government and stimulates further cooperation in pollution reduction. Finally, sensitivity analysis is performed using the year 2009 data to test the parameters' effects on the pollution reduction cost savings. China is increasingly facing unprecedented pressure for immediate air pollution control, and the current policy, which does not allow cooperation, is less efficient; the empirical case shows that the proposed model can help improve efficiency, and its results can serve as a reference for Chinese government pollution reduction policy design.
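
    The Shapley allocation can be sketched for three provinces as below: each player receives its average marginal contribution to the coalition saving over all join orders. The coalition savings v(S) are invented placeholders, not the paper's data.

    ```python
    # Shapley-value cost-saving allocation for a three-player cooperative
    # game; the characteristic function v is an illustrative placeholder.
    from itertools import permutations

    players = ("Beijing", "Tianjin", "Hebei")
    v = {frozenset(): 0.0,
         frozenset({"Beijing"}): 0.0, frozenset({"Tianjin"}): 0.0,
         frozenset({"Hebei"}): 0.0,
         frozenset({"Beijing", "Tianjin"}): 30.0,
         frozenset({"Beijing", "Hebei"}): 50.0,
         frozenset({"Tianjin", "Hebei"}): 40.0,
         frozenset(players): 100.0}      # grand-coalition total saving

    shapley = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = v[frozenset(coalition)]
            coalition.add(p)
            shapley[p] += (v[frozenset(coalition)] - before) / 6  # 3! orders

    print(shapley)   # each province's share of the cooperative saving
    ```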

  5. Optimal subhourly electricity resource dispatch under multiple price signals with high renewable generation availability

    DOE PAGES

    Chassin, David P.; Behboodi, Sahand; Djilali, Ned

    2018-01-28

    This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach, to a strategy using simultaneous price signals for energy, power and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system when the price of energy and ramping are both significant. Optimal control functions are obtained in both time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to North America Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.

  6. Synthesizing epidemiological and economic optima for control of immunizing infections.

    PubMed

    Klepac, Petra; Laxminarayan, Ramanan; Grenfell, Bryan T

    2011-08-23

    Epidemic theory predicts that the vaccination threshold required to interrupt local transmission of an immunizing infection like measles depends only on the basic reproductive number and hence transmission rates. When the search for optimal strategies is expanded to incorporate economic constraints, the optimum for disease control in a single population is determined by relative costs of infection and control, rather than transmission rates. Adding a spatial dimension, which precludes local elimination unless it can be achieved globally, can reduce or increase optimal vaccination levels depending on the balance of costs and benefits. For weakly coupled populations, local optimal strategies agree with the global cost-effective strategy; however, asymmetries in costs can lead to divergent control optima in more strongly coupled systems--in particular, strong regional differences in costs of vaccination can preclude local elimination even when elimination is locally optimal. Under certain conditions, it is locally optimal to share vaccination resources with other populations.
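
    The epidemic-theory threshold referred to above is the standard critical vaccination coverage; for example, with R0 around 15 (measles-like), pc is roughly 93%.

    ```latex
    p_{c} \;=\; 1 - \frac{1}{R_{0}}
    ```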

  8. Application of improved Vogel’s approximation method in minimization of rice distribution costs of Perum BULOG

    NASA Astrophysics Data System (ADS)

    Nahar, J.; Rusyaman, E.; Putri, S. D. V. E.

    2018-03-01

    This research was conducted at Perum BULOG Sub-Divre Medan, the implementing institution of the Raskin program for several regencies and cities in North Sumatera. Raskin is a program for distributing rice to the poor. In order to minimize rice distribution costs, rice should be allocated optimally. The method used in this study consists of the Improved Vogel's Approximation Method (IVAM) to construct the initial feasible solution and the Modified Distribution (MODI) method to test the optimality of the solution. This study aims to determine whether IVAM can provide savings, or cost efficiency, in rice distribution. The calculation with IVAM yields an optimum cost of Rp945.241.715,5, lower than the company's calculation of Rp958.073.750,40. Thus, the use of IVAM can save Rp12.832.034,9 in rice distribution costs.
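
    For illustration, below is a compact sketch of the classical Vogel approximation (the starting point that IVAM refines): repeatedly allocate to the cheapest cell in the row or column with the largest penalty, i.e., the largest difference between its two smallest costs. The cost matrix, supplies, and demands are illustrative, not BULOG's data.

    ```python
    # Classical Vogel's approximation for an initial feasible transportation
    # plan; inputs are illustrative placeholders.
    import numpy as np

    def vam(cost, supply, demand):
        cost = cost.astype(float)
        supply, demand = supply.copy(), demand.copy()
        alloc = np.zeros_like(cost)
        rows, cols = set(range(cost.shape[0])), set(range(cost.shape[1]))

        def penalty(values):
            v = sorted(values)
            return v[1] - v[0] if len(v) > 1 else v[0]

        while rows and cols:
            row_pen = {i: penalty([cost[i, j] for j in cols]) for i in rows}
            col_pen = {j: penalty([cost[i, j] for i in rows]) for j in cols}
            i_star = max(row_pen, key=row_pen.get)
            j_star = max(col_pen, key=col_pen.get)
            if row_pen[i_star] >= col_pen[j_star]:
                i = i_star; j = min(cols, key=lambda j: cost[i_star, j])
            else:
                j = j_star; i = min(rows, key=lambda i: cost[i, j_star])
            q = min(supply[i], demand[j])        # allocate as much as possible
            alloc[i, j] = q
            supply[i] -= q; demand[j] -= q
            if supply[i] == 0: rows.discard(i)
            if demand[j] == 0: cols.discard(j)
        return alloc

    cost = np.array([[4, 8, 8], [16, 24, 16], [8, 16, 24]])
    supply = np.array([76, 82, 77])
    demand = np.array([72, 102, 61])
    plan = vam(cost, supply, demand)
    print(plan, (plan * cost).sum())   # initial plan and its total cost
    ```

    A MODI (u-v) step would then test this initial plan for optimality and reallocate along closed loops until no improving cell remains.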

  9. A novel heuristic for optimization aggregate production problem: Evidence from flat panel display in Malaysia

    NASA Astrophysics Data System (ADS)

    Al-Kuhali, K.; Hussain M., I.; Zain Z., M.; Mullenix, P.

    2015-05-01

    Aim: This paper contributes to the flat panel display industry in terms of aggregate production planning. Methodology: To minimize the total production cost of LCD manufacturing, a linear programming model was applied. The decision variables are general production costs, additional costs incurred for overtime production, additional costs incurred for subcontracting, inventory carrying costs, backorder costs, and adjustments for changes in labour levels. The model has been developed for a manufacturer with up to N product types over a total time period of T. Results: An industrial case study based in Malaysia is presented to test and validate the developed linear programming model for aggregate production planning. Conclusion: The model is suitable under stable environmental conditions. Overall, the proven linear programming model can be recommended for production planning in the Malaysian flat panel display industry.
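
    A minimal single-product, three-period illustration of this LP structure follows, using scipy.optimize.linprog. All costs, demands, and capacities are placeholder assumptions; the paper's model additionally covers multiple product types, subcontracting, and backorders.

    ```python
    # Aggregate production planning LP sketch with regular/overtime production
    # and inventory balance constraints; all numbers are illustrative.
    from scipy.optimize import linprog

    T = 3
    demand = [900, 1200, 800]
    c_reg, c_ot, c_inv = 10.0, 15.0, 2.0   # regular, overtime, carrying costs
    cap_reg, cap_ot = 1000, 200

    # Variables per period t: [regular R_t, overtime O_t, end inventory I_t].
    c = [c_reg, c_ot, c_inv] * T
    A_eq, b_eq = [], []
    for t in range(T):
        row = [0.0] * (3 * T)
        row[3*t], row[3*t + 1], row[3*t + 2] = 1.0, 1.0, -1.0  # R + O - I
        if t > 0:
            row[3*(t-1) + 2] = 1.0                             # + I from t-1
        A_eq.append(row); b_eq.append(demand[t])               # ... = demand

    bounds = [(0, cap_reg), (0, cap_ot), (0, None)] * T
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(res.x.reshape(T, 3), res.fun)    # per-period plan and total cost
    ```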

  10. Development of Miniaturized Optimized Smart Sensors (MOSS) for space plasmas

    NASA Technical Reports Server (NTRS)

    Young, D. T.

    1993-01-01

    The cost of space plasma sensors is high for several reasons: (1) Most are one-of-a-kind and state-of-the-art, (2) the cost of launch to orbit is high, (3) ruggedness and reliability requirements lead to costly development and test programs, and (4) overhead is added by overly elaborate or generalized spacecraft interface requirements. Possible approaches to reducing costs include development of small 'sensors' (defined as including all necessary optics, detectors, and related electronics) that will ultimately lead to cheaper missions by reducing (2), improving (3), and, through work with spacecraft designers, reducing (4). Despite this logical approach, there is no guarantee that smaller sensors are necessarily either better or cheaper. We have previously advocated applying analytical 'quality factors' to plasma sensors (and spacecraft) and have begun to develop miniaturized particle optical systems by applying quantitative optimization criteria. We are currently designing a Miniaturized Optimized Smart Sensor (MOSS) in which miniaturized electronics (e.g., employing new power supply topology and extensive use of gate arrays and hybrid circuits) are fully integrated with newly developed particle optics to give significant savings in volume and mass. The goal of the SwRI MOSS program is development of a fully self-contained and functional plasma sensor weighing 1 lb and requiring 1 W. MOSS will require only a typical spacecraft DC power source (e.g., 30 V) and command/data interfaces in order to be fully functional, and will provide measurement capabilities comparable in most ways to current sensors.

  11. CMOST: an open-source framework for the microsimulation of colorectal cancer screening strategies.

    PubMed

    Prakash, Meher K; Lang, Brian; Heinrich, Henriette; Valli, Piero V; Bauerfeind, Peter; Sonnenberg, Amnon; Beerenwinkel, Niko; Misselwitz, Benjamin

    2017-06-05

    Colorectal cancer (CRC) is a leading cause of cancer-related mortality. CRC incidence and mortality can be reduced by several screening strategies, including colonoscopy, but randomized CRC prevention trials face significant obstacles such as the need for large study populations with long follow-up. Therefore, CRC screening strategies will likely be designed and optimized based on computer simulations. Several computational microsimulation tools have been reported for estimating efficiency and cost-effectiveness of CRC prevention. However, none of these tools is publicly available. There is a need for an open-source framework to answer practical questions, including testing of new screening interventions and adapting findings to local conditions. We developed and implemented a new microsimulation model, Colon Modeling Open Source Tool (CMOST), for modeling the natural history of CRC, simulating the effects of CRC screening interventions, and calculating the resulting costs. CMOST facilitates automated parameter calibration against epidemiological adenoma prevalence and CRC incidence data. Predictions of CMOST were highly similar to the results of a large endoscopic CRC prevention study as well as to the predictions of existing microsimulation models. We applied CMOST to calculate the optimal timing of a screening colonoscopy. CRC incidence and mortality are reduced most efficiently by a colonoscopy between the ages of 56 and 59, while discounted life-years gained (LYG) are maximal at 49-50 years. With a dwell time of 13 years, the most cost-effective screening is at 59 years, at $17,211 discounted USD per LYG. While cost-efficiency varied according to dwell time, it did not influence the optimal time point of screening interventions within the tested range. Predictions of CMOST are highly similar to those of a randomized CRC prevention trial as well as those of other microsimulation tools. This open-source tool will enable health-economics analyses for various countries, health-care scenarios, and CRC prevention strategies. CMOST is freely available under the GNU General Public License at https://gitlab.com/misselwb/CMOST.

  12. Value of Information Analysis of Multiparameter Tests for Chemotherapy in Early Breast Cancer: The OPTIMA Prelim Trial.

    PubMed

    Hall, Peter S; Smith, Alison; Hulme, Claire; Vargas-Palacios, Armando; Makris, Andreas; Hughes-Davies, Luke; Dunn, Janet A; Bartlett, John M S; Cameron, David A; Marshall, Andrea; Campbell, Amy; Macpherson, Iain R; Dan Rea; Francis, Adele; Earl, Helena; Morgan, Adrienne; Stein, Robert C; McCabe, Christopher

    2017-12-01

    Precision medicine is heralded as offering more effective treatments to smaller targeted patient populations. In breast cancer, adjuvant chemotherapy is standard for patients considered as high-risk after surgery. Molecular tests may identify patients who can safely avoid chemotherapy. To use economic analysis before a large-scale clinical trial of molecular testing to confirm the value of the trial and help prioritize between candidate tests as randomized comparators. Women with surgically treated breast cancer (estrogen receptor-positive and lymph node-positive or tumor size ≥30 mm) were randomized to standard care (chemotherapy for all) or test-directed care using Oncotype DX™. Additional testing was undertaken using alternative tests: MammaPrint™, PAM-50 (Prosigna™), MammaTyper™, IHC4, and IHC4-AQUA™ (NexCourse Breast™). A probabilistic decision model assessed the cost-effectiveness of all tests from a UK perspective. Value-of-information analysis determined the most efficient publicly funded ongoing trial design in the United Kingdom. There was an 86% probability of molecular testing being cost-effective, with most tests producing cost savings (range -£1892 to £195) and quality-adjusted life-year gains (range 0.17-0.20). There were only small differences in costs and quality-adjusted life-years between tests. Uncertainty was driven by long-term outcomes. Value-of-information analysis demonstrated the value of further research into all tests, with Prosigna currently the highest priority for further research. Molecular tests are likely to be cost-effective, but an optimal test is yet to be identified. Health-economics modeling to inform the design of a randomized controlled trial looking at diagnostic technology has been demonstrated to be feasible as a method for improving research efficiency. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  13. Modeling and Simulation of Reliable Spacecraft On-Board Computing

    NASA Technical Reports Server (NTRS)

    Park, Nohpill

    1999-01-01

    The proposed project will investigate modeling- and simulation-driven testing and fault tolerance schemes for spacecraft on-board computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize these capabilities, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast, and cost-effective on-board computing system, which is known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay), and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before a fault tolerance scheme is employed. Testing and fault tolerance strategies should be driven by accurate performance models (throughput, delay, reliability, and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module, and a module for fault tolerance, all of which interact through a central graphical user interface.

  14. Novel Structured Metal Bipolar Plates for Low Cost Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Conghua

    2013-08-15

    Bipolar plates are an important component of fuel cell stacks, accounting for more than 75% of stack weight and volume and 20% of stack cost. Metal bipolar plate technology can reduce fuel cell stack weight and volume by more than 50%. The challenge is to protect the metal plate from corrosion at low cost for broad commercial applications. While most of today’s PEM fuel cell metallic bipolar plate technologies use some precious metal, the focus of this SBIR project is to develop a low-cost, novel nano-structured metal bipolar plate technology without using any precious metal. The technology will meet the performance and cost requirements for automobile applications. Through the Phase I project, TreadStone identified a corrosion-resistant and electrically conductive titanium oxide for metal bipolar plate surface protection in automotive PEM fuel cell applications. TreadStone overcame the manufacturing issues of applying the coating to the metal substrate surface, and demonstrated the feasibility of the coated stainless steel plates through ex-situ evaluation tests and an in-situ fuel cell long-term durability test. The test results show the feasibility of the proposed nano-structured coating for low-cost metal bipolar plates in PEM fuel cells. The plan for further technology optimization in the Phase II project is also outlined.

  15. Optimization of Automobile Crush Characteristics: Technical Report

    DOT National Transportation Integrated Search

    1975-10-01

    A methodology is developed for the evaluation and optimization of societal costs of two-vehicle automobile collisions. Costs considered in a Figure of Merit include costs of injury/mortality, occupant compartment penetration, collision damage repairs...

  16. Design Authority in the Test Programme Definition: The Alenia Spazio Experience

    NASA Astrophysics Data System (ADS)

    Messidoro, P.; Sacchi, E.; Beruto, E.; Fleming, P.; Marucchi Chierro, P.-P.

    2004-08-01

    Since the Verification and Test Programme represents a significant part of the spacecraft development life cycle in terms of cost and time, discussions within project teams often aim to optimize the verification campaign by deleting or limiting some testing activities. The increasing market pressure to reduce project schedule and cost gives rise to a dialectic process inside project teams, involving programme management and design authorities, whose purpose is to optimize the verification and testing programme. The paper introduces the Alenia Spazio experience in this context, drawn from real project life across different products and missions (science, TLC, EO, manned, transportation, military, commercial, recurrent and one-of-a-kind). Usually the applicable verification and testing standards (e.g. ECSS-E-10 part 2 "Verification" and ECSS-E-10 part 3 "Testing" [1]) are tailored to the specific project on the basis of its peculiar mission constraints. The Model Philosophy and the associated verification and test programme are defined following an iterative process which suitably combines several aspects (including, for example, test requirements and facilities), as shown in Fig. 1 ("Model philosophy, Verification and Test Programme definition", from ECSS-E-10). The considered cases are mainly oriented to thermal and mechanical verification, where the benefits of possible test programme optimizations are most significant. For thermal qualification and acceptance testing (i.e. Thermal Balance and Thermal Vacuum), the lessons learned from the development of several satellites are presented together with the corresponding recommended approaches. In particular, cases are indicated in which a proper Thermal Balance Test is mandatory, and others, in the presence of a more recurrent design, where qualification by analysis could be envisaged. The importance of a proper Thermal Vacuum exposure for workmanship verification is also highlighted. Similar considerations are summarized for mechanical testing, with particular emphasis on the importance of Modal Survey, Static and Sine Vibration Tests in the qualification stage, in combination with the effectiveness of the Vibro-Acoustic Test in acceptance. The apparent relative importance of the Sine Vibration Test for workmanship verification in specific circumstances is also highlighted. The verification of the project requirements is planned through a combination of suitable verification methods (in particular Analysis and Test) at the different verification levels (from System down to Equipment), in the proper verification stages (e.g. Qualification and Acceptance).

  17. Malaria diagnosis and treatment under the strategy of the integrated management of childhood illness (IMCI): relevance of laboratory support from the rapid immunochromatographic tests of ICT Malaria P.f/P.v and OptiMal.

    PubMed

    Tarimo, D S; Minjas, J N; Bygbjerg, I C

    2001-07-01

    The algorithm developed for the integrated management of childhood illness (IMCI) provides guidelines for the treatment of paediatric malaria. In areas where malaria is endemic, for example, the IMCI strategy may indicate that children who present with fever, a recent history of fever and/or pallor should receive antimalarial chemotherapy. In many holo-endemic areas, it is unclear whether laboratory tests to confirm that such signs are the result of malaria would be relevant or useful. Children from a holo-endemic region of Tanzania were therefore checked for malarial parasites by microscopy and by using two rapid immunochromatographic tests (RIT) for the diagnosis of malaria (ICT Malaria P.f/P.v and OptiMal). At the time they were tested, each of these children had been targeted for antimalarial treatment (following the IMCI strategy) because of fever and/or pallor. Only 70% of the 395 children classified to receive antimalarial drugs by the IMCI algorithm had malarial parasitaemias (68.4% had Plasmodium falciparum trophozoites, 1.3% only P. falciparum gametocytes, 0.3% P. ovale and 0.3% P. malariae). As indicators of P. falciparum trophozoites in the peripheral blood, fever had a sensitivity of 93.0% and a specificity of 15.5%, whereas pallor had a sensitivity of 72.2% and a specificity of 50.8%. Both RIT had very high corresponding sensitivities (100.0% for the ICT and 94.0% for OptiMal), but the specificity of the ICT (74.0%) was significantly lower than that of OptiMal (100.0%). Fever and pallor were significantly associated with P. falciparum asexual parasitaemias that equalled or exceeded the threshold intensity (2000/microl) that has the optimum sensitivity and specificity for the definition of a malarial episode. Diagnostic likelihood ratios (DLR) showed that a positive result in the OptiMal test (DLR = infinity) was a better indication of malaria than a positive result in the ICT (DLR = 3.85). In fact, OptiMal had a diagnostic reliability (0.93) that approached that of an ideal test and, since it only detects live parasites, OptiMal is superior to the ICT in monitoring therapeutic responses. Although the RIT may seem attractive for use in primary health facilities because relatively inexperienced staff can perform them, the high cost of these tests is prohibitive. In holo-endemic areas, use of RIT or microscopical examination of bloodsmears may only be relevant when malaria needs to be excluded as a cause of illness (e.g. prior to treatment with toxic or expensive drugs, or during malaria epidemics). Wherever the effective drugs for the first-line treatment of malaria are cheap (e.g. chloroquine and Fansidar), treatment based on clinical diagnosis alone should prove cost-saving in health facilities without microscopy.
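    The positive diagnostic likelihood ratio quoted above is simply sensitivity divided by the false-positive rate. A minimal sketch that reproduces the reported figures from the sensitivities and specificities in the abstract (ICT: 1.00/0.74; OptiMal: 0.94/1.00):

```python
# Positive diagnostic likelihood ratio: DLR+ = sensitivity / (1 - specificity).
# Inputs are the sensitivities/specificities reported in the abstract.
import math

def dlr_positive(sensitivity, specificity):
    fpr = 1.0 - specificity
    return math.inf if fpr == 0 else sensitivity / fpr

print(f"ICT:     DLR+ = {dlr_positive(1.00, 0.74):.2f}")  # -> 3.85
print(f"OptiMal: DLR+ = {dlr_positive(0.94, 1.00)}")      # -> inf
```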

  18. Two-step optimization of pressure and recovery of reverse osmosis desalination process.

    PubMed

    Liang, Shuang; Liu, Cui; Song, Lianfa

    2009-05-01

    Driving pressure and recovery are two primary design variables of a reverse osmosis process that largely determine the total cost of seawater and brackish water desalination. A two-step optimization procedure was developed in this paper to determine the values of driving pressure and recovery that minimize the total cost of RO desalination. It was demonstrated that the optimal net driving pressure is solely determined by the electricity price and the membrane price index, a lumped parameter that collectively reflects membrane price, resistance, and service time. The optimal recovery, on the other hand, is determined by the electricity price, the initial osmotic pressure, and the costs of raw water pretreatment and retentate handling. Concise equations were derived for the optimal net driving pressure and recovery, and the dependence of both on the electricity price, membrane price, and the costs of raw water pretreatment and retentate handling is discussed.
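    The paper derives closed-form optima; absent those equations, the same trade-off can be illustrated numerically by searching a grid of pressure-recovery pairs against a total cost function. The sketch below uses a toy cost model whose functional forms and coefficients are invented for illustration, not the paper's derived equations.

```python
# Toy two-variable search for the (net driving pressure, recovery) pair that
# minimizes a hypothetical RO total cost. The cost model is an illustrative
# stand-in, not the paper's equations.
import numpy as np

OSMOTIC0 = 25.0  # initial osmotic pressure, bar (assumed)

def total_cost(p_net, recovery):
    energy = 0.012 * (p_net + OSMOTIC0 / (1.0 - recovery))  # pumping energy
    membrane = 8.0 / p_net                                  # less area at high flux
    pretreat = 0.9 / recovery                               # raw water per m^3 permeate
    retentate = 0.4 * (1.0 - recovery) / recovery           # brine disposal
    return energy + membrane + pretreat + retentate         # notional $/m^3

pressures = np.linspace(5.0, 60.0, 200)
recoveries = np.linspace(0.2, 0.9, 200)
costs = np.array([[total_cost(p, r) for r in recoveries] for p in pressures])
i, j = np.unravel_index(costs.argmin(), costs.shape)
print(f"optimal p_net ~ {pressures[i]:.1f} bar, recovery ~ {recoveries[j]:.2f}, "
      f"cost ~ {costs[i, j]:.3f} (arbitrary units)")
```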

  19. Optimal Guaranteed Cost Sliding Mode Control for Constrained-Input Nonlinear Systems With Matched and Unmatched Disturbances.

    PubMed

    Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang

    2018-06-01

    Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of the sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. An ADP algorithm based on a single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to keep the sliding mode dynamics stable in the sense of uniform ultimate boundedness. Simulation results are presented to verify the feasibility of the proposed control scheme.

  20. Considerations in STS payload environmental verification

    NASA Technical Reports Server (NTRS)

    Keegan, W. B.

    1978-01-01

    The current philosophy of the GSFC regarding environmental verification of Shuttle payloads is reviewed. In the structures area, increased emphasis will be placed on the use of analysis for design verification, with selective testing performed as necessary. Furthermore, as a result of recent cost-optimization analysis, the multitier test program will presumably give way to a comprehensive test program at the major payload subassembly level, once adequate workmanship at the component level has been verified. In the thermal vacuum area, thought is being given to modifying the approaches used for conventional spacecraft.

  1. Cost-Based Optimization of a Papermaking Wastewater Regeneration Recycling System

    NASA Astrophysics Data System (ADS)

    Huang, Long; Feng, Xiao; Chu, Khim H.

    2010-11-01

    Wastewater can be regenerated for recycling in an industrial process to reduce freshwater consumption and wastewater discharge. Such an environmentally friendly approach also leads to cost savings through reduced freshwater usage and wastewater discharge. However, the resulting savings are offset to varying degrees by the costs incurred in regenerating the wastewater for recycling. Therefore, systematic procedures should be used to determine the true economic benefits of any water-using system involving wastewater regeneration recycling. In this paper, a total cost accounting procedure is employed to construct a comprehensive cost model for a paper mill. The resulting cost model is optimized by means of mathematical programming to determine the optimal regeneration flowrate and regeneration efficiency that yield the minimum total cost.

  2. Optimum coagulant forecasting by modeling jar test experiments using ANNs

    NASA Astrophysics Data System (ADS)

    Haghiri, Sadaf; Daghighi, Amin; Moharramzadeh, Sina

    2018-01-01

    The proper utilization of water treatment plants and the optimization of their operation are of particular importance. Coagulation and flocculation are common water treatment steps in which coagulants destabilize particles and promote the formation of larger and heavier flocs, improving the subsequent sedimentation and filtration processes. Determining the optimum dose of such a coagulant is of particular significance: an excessive dose, in addition to adding cost, can leave residues in the filtrate, a dangerous condition according to the standards, while an inadequate dose can reduce the quality and performance of the coagulation process below acceptable levels. Although jar tests are used for evaluating coagulants, such experiments face many constraints in assessing the effects of sudden changes in input water, because of their significant costs, long time requirements, and the complex relationships among the many factors (turbidity, temperature, pH, alkalinity, etc.) that influence coagulant efficiency and test results. Modeling can be used to overcome these limitations. In this research study, an artificial neural network (ANN) multi-layer perceptron (MLP) with one hidden layer has been used to model the jar test and determine the coagulant dosage in water treatment processes. The data were obtained from the drinking water treatment plant located in Ardabil province in Iran. To evaluate the performance of the model, the mean squared error (MSE) and correlation coefficient (R2) were used. The obtained values are within an acceptable range, demonstrating the accuracy of the models in estimating water-quality characteristics and optimal coagulant dosages; using these models will allow operators not only to reduce the cost and time taken to perform experimental jar tests, but also to predict proper coagulant dosages and project the quality of the output water under real conditions.
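    As a rough illustration of the modeling approach, the sketch below fits a one-hidden-layer MLP to predict coagulant dose from raw-water quality. The feature set mirrors the factors named in the abstract, but the synthetic data, layer size, and hyperparameters are assumptions, not the study's configuration.

```python
# One-hidden-layer MLP mapping raw-water quality to coagulant dose, in the
# spirit of the jar-test model described above. Data here are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(5, 500, n),    # turbidity (NTU)
    rng.uniform(5, 30, n),     # temperature (deg C)
    rng.uniform(6.0, 8.5, n),  # pH
    rng.uniform(50, 300, n),   # alkalinity (mg/L)
])
# Invented dose relationship, just to give the model something to learn.
y = 0.08 * X[:, 0] + 2.0 * (8.0 - X[:, 2]) + 0.01 * X[:, 3] + rng.normal(0, 1.0, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                   random_state=0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"MSE: {mean_squared_error(y_te, pred):.2f}  R2: {r2_score(y_te, pred):.2f}")
```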

  3. Designing, optimization and validation of tetra-primer ARMS PCR protocol for genotyping mutations in caprine Fec genes

    PubMed Central

    Ahlawat, Sonika; Sharma, Rekha; Maitra, A.; Roy, Manoranjan; Tantia, M.S.

    2014-01-01

    New, quick, and inexpensive methods for genotyping novel caprine Fec gene polymorphisms through tetra-primer ARMS PCR were developed in the present investigation. Single nucleotide polymorphism (SNP) genotyping is needed to establish associations between identified mutations and traits of economic importance. In the current study, we successfully genotyped three new SNPs identified in caprine fecundity genes, viz. T(-242)C (BMPR1B), G1189A (GDF9) and G735A (BMP15). A tetra-primer ARMS PCR protocol was optimized and validated for these SNPs, with short turn-around times and low costs. The optimized techniques were tested on 158 random samples of the Black Bengal goat breed. Samples with known genotypes for the described genes, previously tested in duplicate using sequencing methods, were employed for validation of the assay. Upon validation, complete concordance was observed between the tetra-primer ARMS PCR assays and the sequencing results. These results highlight the suitability of tetra-primer ARMS PCR for genotyping mutations in Fec genes. Any associated SNP could be used to accelerate the improvement of goat reproductive traits by identifying highly prolific animals at an early stage of life. Our results provide direct evidence that tetra-primer ARMS-PCR is a rapid, reliable, and cost-effective method for SNP genotyping of mutations in caprine Fec genes. PMID:25606428

  4. Trajectory optimization for dynamic couch rotation during volumetric modulated arc radiotherapy

    NASA Astrophysics Data System (ADS)

    Smyth, Gregory; Bamber, Jeffrey C.; Evans, Philip M.; Bedford, James L.

    2013-11-01

    Non-coplanar radiation beams are often used in three-dimensional conformal and intensity modulated radiotherapy to reduce dose to organs at risk (OAR) by geometric avoidance. In volumetric modulated arc radiotherapy (VMAT) non-coplanar geometries are generally achieved by applying patient couch rotations to single or multiple full or partial arcs. This paper presents a trajectory optimization method for a non-coplanar technique, dynamic couch rotation during VMAT (DCR-VMAT), which combines ray tracing with a graph search algorithm. Four clinical test cases (partial breast, brain, prostate only, and prostate and pelvic nodes) were used to evaluate the potential OAR sparing for trajectory-optimized DCR-VMAT plans, compared with standard coplanar VMAT. In each case, ray tracing was performed and a cost map reflecting the number of OAR voxels intersected for each potential source position was generated. The least-cost path through the cost map, corresponding to an optimal DCR-VMAT trajectory, was determined using Dijkstra’s algorithm. Results show that trajectory optimization can reduce dose to specified OARs for plans otherwise comparable to conventional coplanar VMAT techniques. For the partial breast case, the mean heart dose was reduced by 53%. In the brain case, the maximum lens doses were reduced by 61% (left) and 77% (right) and the globes by 37% (left) and 40% (right). Bowel mean dose was reduced by 15% in the prostate only case. For the prostate and pelvic nodes case, the bowel V50 Gy and V60 Gy were reduced by 9% and 45% respectively. Future work will involve further development of the algorithm and assessment of its performance over a larger number of cases in site-specific cohorts.
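    To make the graph-search step concrete, here is a minimal sketch of finding a least-cost path through a 2-D cost map with Dijkstra's algorithm, where each column is one control point along the trajectory and each cell's cost counts OAR voxels intersected. The grid, costs, and adjacency rule are illustrative assumptions, not the authors' implementation.

```python
# Dijkstra least-cost path across a cost map, one column per control point:
# start anywhere in column 0, end anywhere in the last column, moving to an
# adjacent-or-same row in the next column (a smooth-trajectory constraint).
import heapq

def best_trajectory(cost):
    rows, cols = len(cost), len(cost[0])
    pq = [(cost[r][0], r, 0) for r in range(rows)]  # (accumulated cost, row, col)
    heapq.heapify(pq)
    seen = set()
    while pq:
        c, r, k = heapq.heappop(pq)
        if (r, k) in seen:
            continue
        seen.add((r, k))
        if k == cols - 1:
            return c  # cheapest trajectory cost
        for dr in (-1, 0, 1):  # limit row change between adjacent columns
            nr = r + dr
            if 0 <= nr < rows and (nr, k + 1) not in seen:
                heapq.heappush(pq, (c + cost[nr][k + 1], nr, k + 1))

# Toy cost map: entries count OAR voxels hit from each source position.
cmap = [[3, 1, 4, 1],
        [5, 9, 2, 6],
        [2, 6, 5, 3]]
print(best_trajectory(cmap))  # -> 7
```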

  5. Optimal combinations of control strategies and cost-effective analysis for visceral leishmaniasis disease transmission.

    PubMed

    Biswas, Santanu; Subramanian, Abhishek; ELMojtaba, Ibrahim M; Chattopadhyay, Joydev; Sarkar, Ram Rup

    2017-01-01

    Visceral leishmaniasis (VL) is a deadly neglected tropical disease that poses a serious problem in various countries all over the world. Various intervention strategies fail to control the spread of this disease due to parasite drug resistance and the resistance of sandfly vectors to insecticide sprays. Policy makers therefore need to develop novel strategies or resort to combinations of multiple intervention strategies to control the spread of the disease. To address this issue, we propose an extensive SIR-type model for anthroponotic visceral leishmaniasis transmission with seasonal fluctuations modeled in the form of a periodic sandfly biting rate. Fitting the model to real data reported from South Sudan, we estimate the model parameters and compare the model predictions with known VL cases. Using optimal control theory, we study the effects of popular control strategies, namely drug-based treatment of symptomatic and PKDL-infected individuals, insecticide-treated bednets, and spraying of insecticides, on the dynamics of infected human and vector populations. We show that these strategies remain ineffective in curbing the disease individually, as opposed to optimal combinations of the mentioned strategies. Testing the model for different optimal combinations while considering periodic seasonal fluctuations, we find that the optimal combination of treatment of individuals and insecticide sprays performs well in controlling the disease for the intervention period considered. A cost-effectiveness analysis identifies the same strategy as both efficacious and cost-effective. Finally, we suggest that our model would be helpful for policy makers to predict the best intervention strategies for specific time periods and their appropriate implementation for the elimination of visceral leishmaniasis.

  6. Bayesian estimation of sensitivity and specificity of a milk pregnancy-associated glycoprotein-based ELISA and of transrectal ultrasonographic exam for diagnosis of pregnancy at 28-45 days following breeding in dairy cows.

    PubMed

    Dufour, Simon; Durocher, Jean; Dubuc, Jocelyn; Dendukuri, Nandini; Hassan, Shereen; Buczinski, Sébastien

    2017-05-01

    Using a milk sample for pregnancy diagnosis in dairy cattle is extremely convenient due to the low technical inputs required for collection of biological materials. Determining the accuracy of a novel pregnancy diagnostic test that relies on a milk sample is, however, difficult since no gold-standard test is available for comparison. The objective of the current study was to estimate the diagnostic accuracy of the milk PAG-based ELISA and of transrectal ultrasonographic (TUS) exams for determining the pregnancy status of individual dairy cows, using a methodology suited for test validation in the absence of a gold standard. Secondary objectives were to evaluate whether test accuracy varies with cow characteristics and to identify the optimal ELISA optical density threshold for PAG test interpretation. Cows (n=519) from 18 commercial dairies tested with both TUS and PAG between 28 and 45 days following breeding were included in the study. Other covariates (number of days since breeding, parity, and daily milk production) hypothesized to affect TUS or PAG test accuracy were measured. A Bayesian hierarchical latent class model (LCM) methodology assuming conditional independence between tests was used to obtain estimates of the tests' sensitivities (Se) and specificities (Sp), to evaluate the impact of covariates on these, and to compute misclassification costs across a range of ELISA thresholds. Very little disagreement was observed between tests, with only 23 cows yielding discordant results. Using the LCM model with non-informative priors for test accuracy parameters, median (95% credibility interval [CI]) TUS Se and Sp estimates of 0.96 (0.91, 1.00) and 0.99 (0.97, 1.0) were obtained. For the PAG test, a median (95% CI) Se of 0.99 (0.98, 1.00) and Sp of 0.95 (0.89, 1.0) were observed. The impact of adjusting for conditional dependence between tests was negligible. Accuracy of the PAG test varied slightly by parity number. When assuming a false-negative to false-positive cost ratio ≥3:1, the optimal ELISA optical density threshold minimizing misclassification costs was 0.25. In conclusion, both TUS and PAG showed excellent accuracy for pregnancy diagnosis in dairy cows. When using the PAG test, a threshold of 0.25 could be used for test interpretation. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Heliostat cost optimization study

    NASA Astrophysics Data System (ADS)

    von Reeken, Finn; Weinrebe, Gerhard; Keck, Thomas; Balz, Markus

    2016-05-01

    This paper presents a methodology for a heliostat cost optimization study. First, different variants of small, medium-sized, and large heliostats are designed. Then the respective costs, tracking, and optical quality are determined. For the calculation of optical quality, a structural model of the heliostat is programmed and analyzed using finite-element software. The costs are determined from inquiries and from experience with similar structures. Eventually the levelised electricity costs for a reference power tower plant are calculated. Before each annual simulation run, the heliostat field is optimized. Calculated LCOEs are then used to identify the most suitable option(s). Finally, the conclusions and findings of this extensive cost study are used to define the concept of a new cost-efficient heliostat called "Stellio".

  8. Cost-sensitive case-based reasoning using a genetic algorithm: application to medical diagnosis.

    PubMed

    Park, Yoon-Joo; Chun, Se-Hak; Kim, Byung-Chun

    2011-02-01

    This paper studies a new learning technique called cost-sensitive case-based reasoning (CSCBR), which incorporates unequal misclassification costs into the CBR model. Conventional CBR is now considered a suitable technique for diagnosis, prognosis and prescription in medicine. However, it lacks the ability to reflect asymmetric misclassification costs, often assuming that the cost of misclassifying a positive case (an illness) as negative (no illness) is the same as that of the opposite error. The objective of this research is therefore to overcome this limitation of conventional CBR and encourage the application of CBR to real-world medical cases involving asymmetric misclassification costs. The main idea involves adjusting the optimal cut-off classification point for classifying the absence or presence of disease, and the cut-off distance point for selecting optimal neighbors within search spaces based on the similarity distribution. These steps are dynamically adapted to new target cases using a genetic algorithm. We apply the proposed method to five real medical datasets and compare the results with two other cost-sensitive learning methods, C5.0 and CART. Our findings show that the total misclassification cost of CSCBR is lower than that of the other cost-sensitive methods in many cases. Even though the genetic algorithm has limitations in terms of unstable results and over-fitting of training data, CSCBR results with the GA are better overall than those of the other methods. Paired t-test results also indicate that the total misclassification cost of CSCBR is significantly less than that of C5.0 and CART for several datasets. We have proposed a new CBR method, CSCBR, that can incorporate unequal misclassification costs into CBR and optimize the number of neighbors dynamically using a genetic algorithm. It is meaningful not only for introducing the concept of cost-sensitive learning to CBR, but also for encouraging the use of CBR in the medical area. The results show that the total misclassification cost of CSCBR does not increase in arithmetic progression as the cost of a false absence increases arithmetically; thus it is genuinely cost-sensitive. We also show that the total misclassification costs of CSCBR are the lowest among all methods for four of the five datasets, and the result is statistically significant in many cases. A limitation of the proposed CSCBR is that it is designed to classify binary cases for minimizing misclassification cost; our future work will extend the method to multi-class problems with more than two groups. Copyright © 2010 Elsevier B.V. All rights reserved.

  9. Determination of the optimal cutoff value for a serological assay: an example using the Johne's Absorbed EIA.

    PubMed Central

    Ridge, S E; Vizard, A L

    1993-01-01

    Traditionally, in order to improve diagnostic accuracy, existing tests have been replaced with newly developed tests of superior sensitivity and specificity. However, it is also possible to improve existing tests by altering the cutoff value chosen to distinguish infected from uninfected individuals. This paper uses data from an investigation of the operating characteristics of the Johne's Absorbed EIA to demonstrate a method of determining a preferred cutoff value from among several potentially useful settings. Methods are demonstrated for determining the financial gain from using the preferred rather than the current cutoff value, and for using decision analysis to choose the optimal cutoff when critical population parameters are not known with certainty. The results of this study indicate that the currently recommended cutoff value for the Johne's Absorbed EIA is close to optimal only when disease prevalence is very low and false-positive test results are deemed very costly. In other situations, there were considerable financial advantages to using cutoff values calculated to maximize the benefit of testing. It is probable that the current cutoff values for other diagnostic tests are likewise not the most appropriate for every testing situation. This paper offers methods for identifying the cutoff value that maximizes the benefit of medical and veterinary diagnostic tests. PMID:8501227
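    The cutoff-selection logic can be illustrated with a small expected-cost calculation: for each candidate cutoff, weight the false-positive and false-negative rates by prevalence and unit costs, then pick the minimizer. The sensitivity/specificity pairs, prevalence, and costs below are invented for illustration, not the Johne's EIA data.

```python
# Choose the assay cutoff minimizing expected misclassification cost per animal.
# (se, sp) pairs per candidate cutoff, prevalence, and unit costs are invented.
CUTOFFS = {0.10: (0.95, 0.80), 0.15: (0.90, 0.90),
           0.20: (0.80, 0.96), 0.25: (0.65, 0.99)}
PREV, COST_FN, COST_FP = 0.05, 200.0, 40.0  # assumed prevalence and costs

def expected_cost(se, sp):
    fn = PREV * (1.0 - se)          # infected animals missed
    fp = (1.0 - PREV) * (1.0 - sp)  # healthy animals flagged
    return COST_FN * fn + COST_FP * fp

best = min(CUTOFFS, key=lambda c: expected_cost(*CUTOFFS[c]))
for c, (se, sp) in sorted(CUTOFFS.items()):
    print(f"cutoff {c:.2f}: expected cost {expected_cost(se, sp):6.2f}")
print("preferred cutoff:", best)
```

    Rerunning the same comparison with a different prevalence or cost ratio shifts the preferred cutoff, which is the paper's central point about decision analysis under uncertain population parameters.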

  10. Pavement maintenance optimization model using Markov Decision Processes

    NASA Astrophysics Data System (ADS)

    Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.

    2017-09-01

    This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). Several characteristics of the MDP developed in this paper distinguish it from similar studies and optimization models intended for pavement maintenance policy development. These include the direct inclusion of constraints in the MDP formulation, the use of an average-cost MDP method, and a policy development process based on the dual linear programming solution. The limited information available on these matters for stochastic optimization models in road network management motivates this study. The paper uses a data set acquired from the road authority of the state of Victoria, Australia, to test the model, and recommends steps in the computation of an MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
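    To show the shape of such a model, here is a tiny pavement-maintenance MDP. The paper uses an average-cost formulation solved via its dual linear program; for brevity this sketch uses discounted value iteration instead, and every state, action, cost, and transition probability is an invented illustration.

```python
# Tiny pavement-maintenance MDP solved by value iteration (discounted cost).
# The paper uses an average-cost MDP with a dual LP; this simplified sketch
# and all of its numbers are illustrative assumptions.
import numpy as np

STATES = ["good", "fair", "poor"]
ACTIONS = ["nothing", "maintain", "rehabilitate"]
# COST[s, a]: agency + user cost per period for action a in state s.
COST = np.array([[0.0, 5.0, 20.0],
                 [2.0, 7.0, 22.0],
                 [10.0, 15.0, 30.0]])
# P[a][s, s']: deterioration if untreated, (partial) restoration if treated.
P = [np.array([[0.7, 0.3, 0.0], [0.0, 0.6, 0.4], [0.0, 0.0, 1.0]]),
     np.array([[0.9, 0.1, 0.0], [0.5, 0.5, 0.0], [0.0, 0.6, 0.4]]),
     np.array([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.8, 0.2, 0.0]])]

gamma = 0.95
V = np.zeros(len(STATES))
for _ in range(1000):  # value iteration to (near) convergence
    Q = np.stack([COST[:, a] + gamma * P[a] @ V for a in range(len(ACTIONS))],
                 axis=1)
    V = Q.min(axis=1)
policy = Q.argmin(axis=1)
print({s: ACTIONS[a] for s, a in zip(STATES, policy)})
```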

  11. Photovoltaic design optimization for terrestrial applications

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1978-01-01

    As part of the Jet Propulsion Laboratory's Low-Cost Solar Array Project, a comprehensive program of module cost-optimization has been carried out. The objective of these studies has been to define means of reducing the cost and improving the utility and reliability of photovoltaic modules for the broad spectrum of terrestrial applications. This paper describes one of the methods being used for module optimization, including the derivation of specific equations which allow the optimization of various module design features. The method is based on minimizing the life-cycle cost of energy for the complete system. Comparison of the life-cycle energy cost with the marginal cost of energy each year allows the logical plant lifetime to be determined. The equations derived allow the explicit inclusion of design parameters such as tracking, site variability, and module degradation with time. An example problem involving the selection of an optimum module glass substrate is presented.
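    The life-cycle criterion described here amounts to a levelized cost of energy: discounted lifetime costs divided by discounted lifetime energy, with module degradation shrinking output each year. A minimal sketch; all input numbers are illustrative assumptions, not JPL's figures.

```python
# Levelized life-cycle cost of energy with annual module degradation.
# All input numbers are illustrative assumptions.
def lcoe(capital, om_per_year, energy_year1, degradation, rate, years):
    disc_cost = capital + sum(om_per_year / (1 + rate) ** t
                              for t in range(1, years + 1))
    disc_energy = sum(energy_year1 * (1 - degradation) ** (t - 1) / (1 + rate) ** t
                      for t in range(1, years + 1))
    return disc_cost / disc_energy  # $ per kWh

print(f"{lcoe(10_000, 150, 8_000, 0.01, 0.06, 20):.3f} $/kWh")
```

    Comparing this levelized figure against each year's marginal cost of energy, as the abstract describes, then indicates the logical plant lifetime: operation stops paying for itself once the marginal cost exceeds the levelized value.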

  12. Optimal synthesis and design of the number of cycles in the leaching process for surimi production.

    PubMed

    Reinheimer, M Agustina; Scenna, Nicolás J; Mussati, Sergio F

    2016-12-01

    Water consumption during the leaching stage of the surimi manufacturing process depends strongly on the design, number, and size of the stages connected in series for the soluble protein extraction target, and is considered the main contributor to operating costs. Therefore, optimal synthesis and design of the leaching stage is essential to minimize the total annual cost. In this study, a mathematical optimization model for the optimal design of the leaching operation is presented. Specifically, a detailed Mixed Integer Nonlinear Programming (MINLP) model including operating and geometric constraints was developed based on our previous optimization model (an NLP model). Quality, water consumption, and the main operating parameters were considered. Minimization of the total annual cost, which trades off investment and operating costs, led to an optimal solution with fewer stages (2 instead of 3) and larger leaching tanks compared with previous results. An analysis was performed to investigate how the optimal solution is influenced by variations in the unit costs of fresh water, waste treatment, and capital investment.

  13. An innovative time-cost-quality tradeoff modeling of building construction project based on resource allocation.

    PubMed

    Hu, Wenfa; He, Xinhua

    2014-01-01

    Time, quality, and cost are three important but conflicting objectives in a building construction project, and optimizing them jointly is a tough challenge for project managers because they are measured in different units. This paper presents a time-cost-quality optimization model that enables managers to optimize multiple objectives. The model builds on the project breakdown structure method, in which task resources are divided into a series of activities and further into construction labor, materials, equipment, and administration. The resources utilized in a construction activity ultimately determine its construction time, cost, and quality, and a complex time-cost-quality trade-off model is generated based on correlations between construction activities. A genetic algorithm is applied in the model to solve the comprehensive nonlinear time-cost-quality problem. The construction of a three-storey house illustrates the implementation of the model, demonstrates its advantages in optimizing the trade-off among construction time, cost, and quality, and helps support winning decisions in construction practice. The computed time-cost-quality curves from the case study support traditional cost-time assumptions and demonstrate the soundness of the trade-off model.

  14. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.

  15. The cost of hybrid waste water systems: A systematic framework for specifying minimum cost-connection rates.

    PubMed

    Eggimann, Sven; Truffer, Bernhard; Maurer, Max

    2016-10-15

    Determining the optimal connection rate (CR) for regional waste water treatment is a challenge that has recently gained the attention of academia and professional circles throughout the world. We contribute to this debate by proposing a framework for a total cost assessment of sanitation infrastructures in a given region over the whole range of possible CRs. The total costs comprise the treatment and transportation costs of centralised and on-site waste water management systems at each CR. We can then identify optimal CRs that either deliver waste water services at the lowest overall regional cost or, alternatively, CRs that result from households freely choosing whether to connect. We apply the framework to a Swiss region, derive a typology of regional cost curves, and discuss whether and by how much the empirically observed CRs differ from the two optimal ones. Both optimal CRs may be reached by introducing specific regulatory incentive structures. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. An economic evaluation of colorectal cancer screening in primary care practice.

    PubMed

    Meenan, Richard T; Anderson, Melissa L; Chubak, Jessica; Vernon, Sally W; Fuller, Sharon; Wang, Ching-Yun; Green, Beverly B

    2015-06-01

    Recent colorectal cancer screening studies focus on optimizing adherence. This study evaluated the cost-effectiveness of interventions using electronic health records (EHRs), automated mailings, and stepped support increases to improve 2-year colorectal cancer screening adherence. Analyses were based on a parallel-design, randomized trial in which three stepped interventions (EHR-linked mailings ["automated"]; automated plus telephone assistance ["assisted"]; or automated and assisted plus nurse navigation to testing completion or refusal ["navigated"]) were compared to usual care. Data were from August 2008 to November 2011, with analyses performed during 2012-2013. Implementation resources were micro-costed; research and registry development costs were excluded. Incremental cost-effectiveness ratios (ICERs) were based on the number of participants current for screening per guidelines over 2 years. Bootstrapping examined the robustness of results. Intervention delivery cost per participant current for screening ranged from $21 (automated) to $27 (navigated). Inclusion of induced testing costs (e.g., screening colonoscopy) lowered expenditures for automated (ICER=-$159) and assisted (ICER=-$36) relative to usual care over 2 years. Savings arose from increased fecal occult blood testing, substituting for more expensive colonoscopies in usual care. Results were broadly consistent across demographic subgroups. More intensive interventions were consistently likely to be cost-effective relative to less intensive interventions, with willingness-to-pay values of $600-$1,200 for an additional person current for screening yielding ≥80% probability of cost-effectiveness. Two-year cost-effectiveness of a stepped approach to colorectal cancer screening promotion based on EHR data is indicated, but longer-term cost-effectiveness requires further study. Copyright © 2015 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  17. Cost-effectiveness simulation and analysis of colorectal cancer screening in Hong Kong Chinese population: comparison amongst colonoscopy, guaiac and immunologic fecal occult blood testing.

    PubMed

    Wong, Carlos K H; Lam, Cindy L K; Wan, Y F; Fong, Daniel Y T

    2015-10-15

    The aim of this study was to evaluate the cost-effectiveness of CRC screening strategies from the healthcare service provider perspective for the Chinese population. A Markov model was constructed to compare the cost-effectiveness of recommended screening strategies, including annual/biennial guaiac fecal occult blood testing (G-FOBT), annual/biennial immunologic FOBT (I-FOBT), and colonoscopy every 10 years, in Chinese aged 50 years over a 25-year period. External validity of the model was tested against data retrieved from published randomized controlled trials of G-FOBT. Resource use data collected from Chinese subjects across stages of colorectal neoplasm were combined with published unit cost data ($USD in 2009 price values) to estimate a stage-specific cost per patient. Quality-adjusted life-years (QALYs) were quantified based on stage duration and the SF-6D preference-based value of each stage. The cost-effectiveness outcome was the incremental cost-effectiveness ratio (ICER), expressed as cost per life-year (LY) and cost per QALY gained. In the base-case scenario, the non-dominated strategies were annual and biennial I-FOBT. Compared with no screening, the ICER was $20,542/LY and $3,155/QALY gained for annual I-FOBT, and $19,838/LY and $2,976/QALY gained for biennial I-FOBT. The optimal screening strategy was annual I-FOBT, which attained the highest cost-effectiveness at the threshold of $50,000 per LY or QALY gained. The Markov model informed health policymakers that annual I-FOBT may be the most effective and cost-effective CRC screening strategy among the recommended strategies, depending on the willingness-to-pay for mass screening in the Chinese population. ClinicalTrials.gov Identifier NCT02038283.
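    For reference, an ICER is simply the incremental cost divided by the incremental effectiveness against a comparator. A minimal sketch with invented per-person discounted totals (not the study's cohort outputs):

```python
# Incremental cost-effectiveness ratio (ICER) versus a no-screening baseline.
# Strategy totals below are invented per-person discounted values.
strategies = {
    "no screening":    {"cost": 1_000.0, "qalys": 14.200},
    "biennial I-FOBT": {"cost": 1_250.0, "qalys": 14.284},
    "annual I-FOBT":   {"cost": 1_420.0, "qalys": 14.333},
}

base = strategies["no screening"]
for name, s in strategies.items():
    if name == "no screening":
        continue
    icer = (s["cost"] - base["cost"]) / (s["qalys"] - base["qalys"])
    print(f"{name}: ICER = ${icer:,.0f} per QALY gained")
```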

  18. Optimization of active distribution networks: Design and analysis of significative case studies for enabling control actions of real infrastructure

    NASA Astrophysics Data System (ADS)

    Moneta, Diana; Mora, Paolo; Viganò, Giacomo; Alimonti, Gianluca

    2014-12-01

    The diffusion of Distributed Generation (DG) based on Renewable Energy Sources (RES) requires new strategies to ensure reliable and economic operation of distribution networks and to support the diffusion of DG itself. An advanced algorithm (DISCoVER - DIStribution Company VoltagE Regulator) is being developed to optimize the operation of active networks by means of advanced voltage control based on several regulations. Starting from forecasted load and generation, real on-field measurements, technical constraints, and costs for each resource, the algorithm generates, for each time period, a set of commands for controllable resources that achieves the technical goals while minimizing the overall cost. Before the controller is integrated into the telecontrol system of the real networks, a complete simulation phase has been started in order to validate the behaviour of the algorithm and to identify possible critical conditions. The first step concerns the definition of a wide range of "case studies": combinations of network topology, technical constraints and targets, load and generation profiles, and resource "costs" that define a valid context in which to test the algorithm, with particular focus on battery and RES management. First results from the simulation activity on test networks (based on real MV grids) and actual battery characteristics are given, together with the prospective performance in real applications.

  19. Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches

    NASA Astrophysics Data System (ADS)

    Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo

    This paper presents an optimal production and distribution management approach for the structural and operational optimization of an integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network, and heat storage facilities in the covered region. The optimal management system accounts for production of heat and electric power, regional heat demand, electric power bidding and sales, and transport and storage of heat at each regional DHS. It is formulated as a mixed integer linear programming (MILP) problem in which the objective is to minimize the overall cost of the integrated DHS while satisfying the operating constraints of heat units and networks and fulfilling consumer heating demands. A piecewise linear formulation of the production cost function and a stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the total overall cost is based on weekly operations at each district heating branch. Numerical simulations show an increase in energy efficiency due to the introduction of the proposed optimal management system.
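    To illustrate the piecewise linear cost formulation, the sketch below models one heat unit for one period with a convex piecewise linear production cost, using per-segment variables in PuLP. The breakpoints, slopes, and demand are invented; the paper's full MILP additionally covers start-up costs, storage, networks, and multiple branches.

```python
# Minimal sketch: one heat unit, one period, convex piecewise-linear
# production cost modeled with per-segment variables (PuLP).
# Segment capacities/slopes and the demand are illustrative assumptions.
import pulp

demand = 70.0  # MW of heat to supply (assumed)
# Segments: (capacity of segment in MW, marginal cost $/MWh), rising slopes.
segments = [(30.0, 20.0), (30.0, 28.0), (40.0, 40.0)]

prob = pulp.LpProblem("piecewise_heat_cost", pulp.LpMinimize)
q = [pulp.LpVariable(f"q{i}", 0, cap) for i, (cap, _) in enumerate(segments)]

prob += pulp.lpSum(slope * qi for qi, (_, slope) in zip(q, segments))  # cost
prob += pulp.lpSum(q) == demand                                        # balance

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("segment loads:", [qi.value() for qi in q],
      "total cost:", pulp.value(prob.objective))
```

    Because the slopes increase, the LP fills the cheapest segments first, so no binary variables are needed for this convex piece; the stairwise start-up cost in the paper is where integer variables come in.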

  20. Integrated beam orientation and scanning-spot optimization in intensity-modulated proton therapy for brain and unilateral head and neck tumors.

    PubMed

    Gu, Wenbo; O'Connor, Daniel; Nguyen, Dan; Yu, Victoria Y; Ruan, Dan; Dong, Lei; Sheng, Ke

    2018-04-01

    Intensity-Modulated Proton Therapy (IMPT) is the state-of-the-art method of delivering proton radiotherapy. Previous research has mainly focused on optimization of scanning spots with manually selected beam angles. Due to the computational complexity, the potential benefit of simultaneously optimizing beam orientations and spot patterns could not be realized. In this study, we developed a novel integrated beam orientation optimization (BOO) and scanning-spot optimization algorithm for IMPT. A brain chordoma and three unilateral head-and-neck patients with a maximal target size of 112.49 cm³ were included in this study. A total of 1162 noncoplanar candidate beams evenly distributed across 4π steradians were included in the optimization. For each candidate beam, the pencil-beam doses of all scanning spots covering the PTV and a margin were calculated. The beam angle selection and spot intensity optimization problem was formulated with three terms: a dose fidelity term penalizing the deviation of PTV and OAR doses from the ideal dose distribution; an L1-norm sparsity term reducing the number of active spots and improving delivery efficiency; and a group sparsity term controlling the number of active beams to between 2 and 4. For the group sparsity term, the convex L2,1-norm and the nonconvex L2,1/2-norm were tested. For the dose fidelity term, both a quadratic function and a linearized equivalent uniform dose (LEUD) cost function were implemented. The optimization problem was solved using the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The IMPT BOO method was tested on three head-and-neck patients and one skull-base chordoma patient. The results were compared with IMPT plans created using column generation selected beams or manually selected beams. The L2,1-norm plan selected spatially aggregated beams, indicating potential degeneracy with this norm. The L2,1/2-norm was able to select spatially separated beams and achieve smaller deviation from the ideal dose. In the L2,1/2-norm plans, the [mean dose, maximum dose] of OARs were reduced by an average of [2.38%, 4.24%] and [2.32%, 3.76%] of the prescription dose for the quadratic and LEUD cost functions, respectively, compared with the IMPT plan using manual beam selection, while maintaining the same PTV coverage. The L2,1/2 group sparsity plans were dosimetrically superior to the column generation plans as well. Besides beam orientation selection, spot sparsification was observed. Generally, with the quadratic cost function, 30%-60% of the spots in the selected beams remained active; with the LEUD cost function, the percentage of active spots was in the range of 35%-85%. The BOO-IMPT run time was approximately 20 min. This work shows the first IMPT approach integrating noncoplanar BOO and scanning-spot optimization in a single mathematical framework. This method is computationally efficient, dosimetrically superior and produces delivery-friendly IMPT plans. © 2018 American Association of Physicists in Medicine.
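    The group-sparsity term is typically handled inside FISTA by a proximal step that shrinks each beam's block of spot weights as a unit; blocks driven to zero correspond to deselected beams. Below is a minimal sketch of the L2,1 proximal operator (block soft-thresholding); the groups and threshold are toy values, and the paper's nonconvex L2,1/2 variant would replace this shrinkage rule.

```python
# Block soft-thresholding: prox of t * sum_g ||x_g||_2 (the L2,1 penalty).
# Each group is one beam's vector of spot weights; a group shrunk to zero
# means that beam is dropped from the plan. Groups/threshold are toy values.
import numpy as np

def prox_l21(groups, t):
    out = []
    for g in groups:
        norm = np.linalg.norm(g)
        scale = max(0.0, 1.0 - t / norm) if norm > 0 else 0.0
        out.append(scale * g)  # shrink whole block; zero it if norm <= t
    return out

beams = [np.array([0.3, 0.1, 0.2]),    # weak beam -> eliminated below
         np.array([2.0, 1.5, 0.8]),    # strong beam -> kept, mildly shrunk
         np.array([0.05, 0.02, 0.0])]  # negligible beam -> eliminated
for g in prox_l21(beams, t=0.5):
    print(g)
```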

  1. Optimization and large scale computation of an entropy-based moment closure

    NASA Astrophysics Data System (ADS)

    Kristopher Garrett, C.; Hauck, Cory; Hill, Judith

    2015-12-01

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  2. Multicompare tests of the performance of different metaheuristics in EEG dipole source localization.

    PubMed

    Escalona-Vargas, Diana Irazú; Lopez-Arevalo, Ivan; Gutiérrez, David

    2014-01-01

    We study the use of nonparametric multicompare statistical tests of the performance of simulated annealing (SA), genetic algorithms (GA), particle swarm optimization (PSO), and differential evolution (DE) when used for electroencephalographic (EEG) source localization. This task can be posed as an optimization problem for which these metaheuristic methods are well suited. We evaluate localization performance in terms of the metaheuristics' operational parameters and for a fixed number of evaluations of the objective function; in this way, we are able to link the efficiency of the metaheuristics to a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single-source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst, and thus should not be considered in large-scale EEG source localization problems. Overall, the multicompare tests demonstrated the small effect that the selection of a particular metaheuristic and variations in its operational parameters have on this optimization problem.

  3. Fuel efficient traffic signal operation and evaluation: Garden Grove Demonstration Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1983-02-01

    The procedures and results of a case study of fuel-efficient traffic signal operation and evaluation in the City of Garden Grove, California are documented. Improved traffic signal timing was developed for a 70-intersection test network in Garden Grove using an optimization tool, the TRANSYT Version 8 computer program. Full-scale field testing of five alternative timing plans was conducted using two instrumented vehicles equipped to measure traffic performance characteristics and fuel consumption. The field tests indicated that significant improvements in traffic flow and fuel consumption result from the use of timing plans generated by the TRANSYT optimization model. Changing from the pre-existing to an optimized timing plan yields a networkwide 5 percent reduction in total travel time, more than 10 percent reductions in both the number of stops and stopped delay time, and a 6 percent reduction in fuel consumption. Projections are made of the benefits and costs of implementing such a program at the 20,000 traffic signals in networks throughout the State of California.

  4. Optimization and large scale computation of an entropy-based moment closure

    DOE PAGES

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, M N, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as P N, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the M N algorithm that do not appear for the P N algorithm. We also observe that in weak scaling tests, the ratio in time to solution of M N to P N decreases.

  5. Economic analysis of pilot-scale production of B-phycoerythrin.

    PubMed

    Torres-Acosta, Mario A; Ruiz-Ruiz, Federico; Aguilar-Yáñez, José M; Benavides, Jorge; Rito-Palomares, Marco

    2016-11-01

    β-Phycoerythrin is a colored protein with several applications, from food coloring to molecular labeling. Depending on the application, different purities are required, affecting production cost and price. Different production and purification strategies for B-phycoerythrin have been developed; the most studied are based on production using Porphyridium cruentum and purification using chromatographic techniques or aqueous two-phase systems. The use of the latter can result in a less expensive and more intensive recovery of the protein, but a proper economic analysis of the effect of using aqueous two-phase systems in a scaled-up process has been lacking. This study analyzed the production of B-phycoerythrin using real data obtained during the scale-up of a bioprocess, modeled with specialized software (BioSolve, Biopharm Services, UK). First, a sensitivity analysis was performed to identify the parameters critical to production cost; then a Monte Carlo analysis emulated real processes by adding uncertainty to those parameters. Next, the bioprocess was analyzed to determine its financial attractiveness, and possible optimization strategies were tested and discussed. Results show that aqueous two-phase systems retain their advantages of low cost and intensive recovery (54.56%); the calculated production costs per gram (US$15,709 before titer optimization and US$2,374 after) would allow a potential company adopting this production method to obtain profits in the range of US$ millions over a 10-year period, based on a comparison of production cost against commercial prices. The bioprocess analyzed is a promising and profitable method for the generation of highly purified B-phycoerythrin. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1472-1479, 2016. © 2016 American Institute of Chemical Engineers.
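    The sensitivity-plus-Monte-Carlo workflow described here amounts to perturbing the critical cost inputs and re-evaluating the cost model many times. A minimal sketch with an invented cost function and parameter distributions (the study itself used BioSolve with its own model):

```python
# Monte Carlo on a toy production-cost model: perturb critical parameters
# and observe the spread of cost per gram. Model and numbers are invented.
import random

random.seed(42)

def cost_per_gram(titer_g_per_l, recovery, media_cost_index):
    batch_cost = 5_000.0 + 800.0 * media_cost_index  # fixed + consumables, $
    grams_out = 1_000.0 * titer_g_per_l * recovery   # 1000 L working volume
    return batch_cost / grams_out

samples = sorted(
    cost_per_gram(random.gauss(0.08, 0.01),    # titer, g/L
                  random.uniform(0.45, 0.60),  # ATPS recovery fraction
                  random.gauss(3.0, 0.5))      # media cost index
    for _ in range(10_000))
print(f"median ${samples[len(samples) // 2]:,.0f}/g, "
      f"90% interval ${samples[500]:,.0f}-${samples[9500]:,.0f}/g")
```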

  6. CNV detection method optimized for high-resolution arrayCGH by normality test.

    PubMed

    Ahn, Jaegyoon; Yoon, Youngmi; Park, Chihyun; Park, Sanghyun

    2012-04-01

    The high-resolution arrayCGH platform makes it possible to detect small gains and losses that previously could not be measured. However, current CNV detection tools, fitted to earlier low-resolution data, are not applicable to the much larger high-resolution data sets: applied to high-resolution data, they suffer from high false-positive rates, which increase validation costs. Existing CNV detection tools also require optimal parameter values, which in most cases are difficult to obtain. This study developed a CNV detection algorithm optimized for high-resolution arrayCGH data. The tool operates up to 1500 times faster than existing tools on high-resolution arrayCGH data covering whole human chromosomes (42 million probes with an average length of 50 bases), while preserving false-positive/negative rates. The algorithm also uses a normality test, thereby removing the need for optimal parameters. To our knowledge, this is the first formulation of the CNV detection problem that results in near-linear empirical overall complexity for real high-resolution data. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Game Theory and Risk-Based Levee System Design

    NASA Astrophysics Data System (ADS)

    Hui, R.; Lund, J. R.; Madani, K.

    2014-12-01

    Risk-based analysis has been developed for optimal levee design for economic efficiency. Along many rivers, two levees on opposite riverbanks act as a simple levee system. Being rational and self-interested, landowners on each riverbank would tend to independently optimize their levees with risk-based analysis, resulting in a levee system design that is Pareto-inefficient from the social planner's perspective. Game theory is applied in this study to analyze the decision-making process in a simple levee system in which the landowners on each riverbank develop their design strategies using risk-based economic optimization. For each landowner, the annual expected total cost includes the expected annual damage cost and the annualized construction cost. The non-cooperative Nash equilibrium is identified and compared to the social planner's optimal distribution of flood risk and damage cost throughout the system, which results in the minimum total flood cost for the system. The social planner's optimal solution is not feasible without an appropriate level of compensation for the transferred flood risk that guarantees improved conditions for all parties. Therefore, cooperative game theory is then employed to develop an economically optimal design that can be implemented in practice. By examining the game in the reversible and irreversible decision-making modes, the cost of decision-making myopia is calculated to underline the significance of considering the externalities and evolution path of dynamic water resource problems for optimal decision making.
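
    The gap between the Nash equilibrium and the planner's solution can be reproduced with a stylized two-owner cost model. All numbers below (construction cost coefficient, flood-probability form, damage scale) are invented; only the structure matters: best-response iteration for the non-cooperative outcome versus joint minimization for the social planner.

        import numpy as np
        from itertools import product

        heights = np.linspace(0.0, 5.0, 51)       # candidate levee heights (m)

        def annual_cost(h_own, h_other):
            build = 40.0 * h_own                   # annualized construction cost
            # Raising the opposite levee transfers flood risk back to this bank.
            p_flood = 0.3 * np.exp(-0.8 * h_own + 0.3 * h_other)
            return build + p_flood * 1000.0        # expected annual damage

        # Best-response iteration to a pure-strategy Nash equilibrium.
        h1 = h2 = 0.0
        for _ in range(100):
            h1 = heights[np.argmin([annual_cost(h, h2) for h in heights])]
            h2 = heights[np.argmin([annual_cost(h, h1) for h in heights])]

        # Social planner: minimize total system cost over both heights jointly.
        pairs = list(product(heights, heights))
        total = [annual_cost(a, b) + annual_cost(b, a) for a, b in pairs]
        s1, s2 = pairs[int(np.argmin(total))]
        print(f"Nash: ({h1:.1f}, {h2:.1f})  social optimum: ({s1:.1f}, {s2:.1f})")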

  8. An effective and optimal quality control approach for green energy manufacturing using design of experiments framework and evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Saavedra, Juan Alejandro

    Quality Control (QC) and Quality Assurance (QA) strategies vary significantly across industries in the manufacturing sector depending on the product being built. Such strategies range from simple statistical analysis and process controls to decision-making processes for reworking, repairing, or scrapping defective product. This study proposes an optimal QC methodology for including rework stations in the manufacturing process by identifying the number and location of these workstations. The factors considered in optimizing these stations are cost, cycle time, reworkability and rework benefit. The goal is to minimize the cost and cycle time of the process while increasing the reworkability and rework benefit. The specific objectives of this study are: (1) to propose a cost estimation model that includes energy consumption, and (2) to propose an optimal QC methodology to identify the quantity and location of rework workstations. The cost estimation model includes energy consumption as part of the product direct cost and allows the user to calculate product direct cost as the quality sigma level of the process changes. This provides a benefit because a complete cost estimation calculation does not need to be performed every time the process yield changes. The cost estimation model is then used in the QC strategy optimization process. In order to propose a methodology that provides an optimal QC strategy, the possible factors that affect QC were evaluated. A screening Design of Experiments (DOE) was performed on seven initial factors and identified three significant factors; it also showed that one response variable was not required for the optimization process. A full factorial DOE was then performed to verify the significant factors obtained previously. The QC strategy optimization is performed through a Genetic Algorithm (GA), which evaluates several candidate solutions based on cost, cycle time, reworkability and rework benefit. Because this is a multi-objective optimization problem, it provides several possible solutions, presented as chromosomes that clearly state the number and location of the rework stations. The user analyzes these solutions and selects one by deciding which of the four factors considered is most important for the product being manufactured or the company's objective. The major contribution of this study is a methodology for identifying an effective and optimal QC strategy that incorporates the number and location of rework substations in order to minimize direct product cost and cycle time, and maximize reworkability and rework benefit.

  9. An adequacy-constrained integrated planning method for effective accommodation of DG and electric vehicles in smart distribution systems

    NASA Astrophysics Data System (ADS)

    Tan, Zhukui; Xie, Baiming; Zhao, Yuanliang; Dou, Jinyue; Yan, Tong; Liu, Bin; Zeng, Ming

    2018-06-01

    This paper presents a new integrated planning framework for effectively accommodating distributed generation (DG) and electric vehicles (EVs) in smart distribution systems (SDS). The proposed method incorporates the various investment options available to the utility collectively, including DG, capacitors and network reinforcement. Using a back-propagation algorithm combined with cost-benefit analysis, the optimal network upgrade plan and the allocation and sizing of the selected components are determined, with the purpose of minimizing the total system capital and operating costs of DG and EV accommodation. Furthermore, a new iterative reliability test method is proposed. It checks the optimization results by subsequently simulating the reliability level of the planning scheme, and modifies the generation reserve margin to guarantee acceptable adequacy levels for each year of the planning horizon. Numerical results based on a 32-bus distribution system verify the effectiveness of the proposed method.

  10. Simultaneous personnel and vehicle shift scheduling in the waste management sector.

    PubMed

    Ghiani, Gianpaolo; Guerriero, Emanuela; Manni, Andrea; Manni, Emanuele; Potenza, Agostino

    2013-07-01

    Urban waste management is becoming an increasingly complex task, absorbing a huge amount of resources and having a major environmental impact. The design of a waste management system consists of various activities, one of which is the definition of shift schedules for both personnel and vehicles. This activity has a great impact on the tactical and operational costs of companies. In this paper, we propose an integer programming model to find an optimal solution to the integrated problem. The aim is to determine optimal schedules at minimum cost. Moreover, we design a fast and effective heuristic for large-size problems. Both approaches are tested on data from a real-world case in Southern Italy and compared to the current practice of the company managing the service, showing that simultaneously solving these problems can lead to significant monetary savings. Copyright © 2013 Elsevier Ltd. All rights reserved.
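
    A toy integer program in the same spirit - binary crew-day assignment variables, daily coverage constraints, and a weekly workload cap - can be sketched with a generic solver such as PuLP (an assumption for illustration; the paper's model is richer and also covers vehicles).

        import pulp

        days, crews = range(7), range(4)
        demand = [2, 2, 3, 2, 2, 1, 1]           # crews needed per day (made up)
        cost = 100                                # cost per crew-shift

        prob = pulp.LpProblem("shift_scheduling", pulp.LpMinimize)
        x = {(c, d): pulp.LpVariable(f"x_{c}_{d}", cat="Binary")
             for c in crews for d in days}
        prob += pulp.lpSum(cost * x[c, d] for c in crews for d in days)
        for d in days:                            # cover the daily demand
            prob += pulp.lpSum(x[c, d] for c in crews) >= demand[d]
        for c in crews:                           # at most 5 working days/crew
            prob += pulp.lpSum(x[c, d] for d in days) <= 5

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print({c: [d for d in days if x[c, d].value() == 1] for c in crews})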

  11. Planning Image-Based Measurements in Wind Tunnels by Virtual Imaging

    NASA Technical Reports Server (NTRS)

    Kushner, Laura Kathryn; Schairer, Edward T.

    2011-01-01

    Virtual imaging is routinely used at NASA Ames Research Center to plan the placement of cameras and light sources for image-based measurements in production wind tunnel tests. Virtual imaging allows users to quickly and comprehensively model a given test situation, well before the test occurs, in order to verify that all optical testing requirements will be met. It allows optimization of the placement of cameras and light sources and leads to faster set-up times, thereby decreasing tunnel occupancy costs. This paper describes how virtual imaging was used to plan optical measurements for three tests in production wind tunnels at NASA Ames.

  12. Conception et analyse d'un systeme d'optimisation de plans de vol pour les avions [Design and analysis of a flight plan optimization system for aircraft]

    NASA Astrophysics Data System (ADS)

    Maazoun, Wissem

    The main objective of this thesis is to develop an optimization method for the preparation of flight plans for aircraft. The flight plan minimizes all costs associated with the flight. We determine an optimal path for an airplane from a departure airport to a destination airport. The optimal path minimizes the sum of all costs, i.e. the cost of fuel added to the cost of time (wages, rental of the aircraft, arrival delays, etc.). The optimal trajectory is obtained by considering all possible trajectories on a 3D graph (longitude, latitude and altitude) where the altitude levels are separated by 2,000 feet, and by applying a shortest path algorithm. The main task was to accurately compute fuel consumption on each edge of the graph, making sure that each arc has a minimal cost and is covered in a realistic way from the point of view of control, i.e. in accordance with the rules of navigation. To compute the cost of an arc, we take into account weather conditions (temperature, pressure, wind components, etc.). The optimization of each arc is done via the evaluation of an optimum speed that takes all costs into account. Each arc of the graph typically includes several sub-phases of the flight, e.g. altitude change, speed change, and constant speed and altitude. In the initial climb and final descent phases, the costs are determined by considering altitude changes at constant CAS (Calibrated Air Speed) or constant Mach number; CAS and Mach number are adjusted to minimize cost. The aerodynamic model used is the one proposed by Eurocontrol, which uses the BADA (Base of Aircraft Data) tables. This model is based on the total energy equation that determines the instantaneous fuel consumption. Calculations on each arc are done by solving a system of differential equations that systematically takes all costs into account. To compute the cost of an arc, we must know the time to traverse it, which is generally unknown. To have well-posed boundary conditions, we use the horizontal displacement as the independent variable of the system of differential equations. We consider the velocity components of the wind in a 3D system of coordinates to compute the instantaneous ground speed of the aircraft. To account for the cost of time, we use the cost index. The cost of an arc depends on the aircraft mass at the beginning of that arc, and this mass depends on the path. As we consider all possible paths, the cost of an arc must be computed for each trajectory to which it belongs. For a long-distance flight, the number of arcs to be considered in the graph is large, and therefore the cost of an arc is typically computed many times. Our algorithm computes the costs of one million arcs in seconds while maintaining high accuracy. The determination of the optimal trajectory can therefore be done in a short time. To get the optimal path, the mass of the aircraft at the departure point must also be optimal. It is therefore necessary to know the optimal amount of fuel for the journey. The aircraft mass is known only at the arrival point; it is the mass of the aircraft including passengers, cargo and reserve fuel. The optimal path is therefore determined by calculating backwards, i.e. from the arrival point to the departure point. For the determination of the optimal trajectory, we use an elliptical grid whose focal points are the departure and arrival points. The use of this grid is essential for the construction of a directed acyclic graph.
We use the Bellman-Ford algorithm on a DAG to determine the shortest path. This algorithm is easy to implement and results in short computation times. Our algorithm computes an optimal trajectory with an optimal cost for each arc. Altitude changes are done optimally with respect to the mass of the aircraft and the cost of time. Our algorithm gives the mass, speed, altitude and total cost at any point of the trajectory as well as the optimal profiles of climb and descent. A prototype has been implemented in C. We made simulations of all types of possible arcs and of several complete trajectories to illustrate the behaviour of the algorithm.
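
    The graph step itself is standard: on a directed acyclic graph, relaxing arcs in topological order yields shortest paths in linear time. A minimal sketch with toy node names and costs (in the thesis, each arc cost comes from integrating the fuel/time differential equations):

        from collections import defaultdict

        def dag_shortest_path(nodes_topo, arcs, source):
            """Single-source shortest paths on a DAG by relaxing arcs in
            topological order."""
            dist = defaultdict(lambda: float("inf"))
            pred = {}
            dist[source] = 0.0
            for u in nodes_topo:                 # nodes in topological order
                for v, cost in arcs.get(u, []):
                    if dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        pred[v] = u
            return dist, pred

        # Toy grid: departure D, two waypoints, arrival A (illustrative costs).
        arcs = {"D": [("W1", 5.0), ("W2", 6.5)],
                "W1": [("A", 7.0)], "W2": [("A", 4.0)]}
        dist, pred = dag_shortest_path(["D", "W1", "W2", "A"], arcs, "D")
        print(dist["A"], pred)                   # 10.5, path D -> W2 -> A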

  13. Economic analysis of transmission line engineering based on industrial engineering

    NASA Astrophysics Data System (ADS)

    Li, Yixuan

    2017-05-01

    Modern industrial engineering is applied to the technical and cost analysis of power transmission and transformation engineering, where it can effectively reduce investment cost. First, the power transmission project is analyzed economically: based on a feasibility study of power transmission and transformation project investment, proposals for company-level cost management are put forward and the cost management system is optimized. Then, through cost analysis of the power transmission and transformation project, new issues arising from construction costs are identified, which provides guidance for further improving project cost management. Finally, given the present state of power transmission project cost management, concrete measures to reduce the cost of power transmission projects are given from the two aspects of system optimization and technology optimization.

  14. Optimal investment strategies and hedging of derivatives in the presence of transaction costs (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Muratore-Ginanneschi, Paolo

    2005-05-01

    Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth-optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contribution were previously derived in collaboration with E. Aurell.

  15. Operation costs and pollutant emissions reduction by definition of new collection scheduling and optimization of MSW collection routes using GIS. The case study of Barreiro, Portugal.

    PubMed

    Zsigraiova, Zdena; Semiao, Viriato; Beijoco, Filipa

    2013-04-01

    This work proposes an innovative methodology for the reduction of the operation costs and pollutant emissions involved in waste collection and transportation. Its innovative feature lies in combining vehicle route optimization with that of waste collection scheduling. The latter uses historical data on the filling rate of each container individually to establish the daily circuits of collection points to be visited, which is more realistic than the usual assumption of a single average fill-up rate common to all the system containers. Moreover, this allows collection scheduling to be planned ahead, which permits better system management. The optimization of the routes to be travelled makes recourse to Geographical Information Systems (GIS) and uses interchangeably two optimization criteria: total spent time and travelled distance. Furthermore, rather than using average values, the relevant parameters influencing fuel consumption and pollutant emissions, such as vehicle speed on different roads and loading weight, are taken into consideration. The established methodology is applied to the glass-waste collection and transportation system of Amarsul S.A., in Barreiro. Moreover, to isolate the influence of the dynamic load on fuel consumption and pollutant emissions, a sensitivity analysis of the vehicle loading process is performed. For that, two hypothetical scenarios are tested: one with the collected volume increasing exponentially along the collection path, the other assuming that the collected volume decreases exponentially along the same path. The results evidence unquestionable beneficial impacts of the optimization on both the operation costs (labor, vehicle maintenance and fuel consumption) and pollutant emissions, regardless of the optimization criterion used. Nonetheless, the impact is particularly relevant when optimizing for time, yielding substantial improvements to the existing system: potential reductions of 62% in total spent time, 43% in fuel consumption and 40% in emitted pollutants. This results in total cost savings of 57%, labor being the greatest contributor, representing over €11,000 per year for the two vehicles collecting glass-waste. Moreover, it is shown that the dynamic loading process of the collection vehicle impacts both fuel consumption and pollutant emissions. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Estimation of optimal educational cost per medical student.

    PubMed

    Yang, Eunbae B; Lee, Seunghee

    2009-09-01

    This study aims to estimate the optimal educational cost per medical student. A private medical college in Seoul was targeted by the study, and its 2006 learning environment and data from the 2003~2006 budget and settlement were carefully analyzed. Through interviews with 3 medical professors and 2 experts in the economics of education, the study attempted to establish an educational cost estimation model, which yields an empirically computed estimate of the optimal cost per student in medical college. The estimation model was based primarily upon the educational cost, which consisted of direct educational costs (47.25%), support costs (36.44%), fixed asset purchases (11.18%) and costs for student affairs (5.14%). These results indicate that the optimal cost per student is approximately 20,367,000 won each semester; thus, training a doctor costs 162,936,000 won over 4 years. Consequently, we inferred that the tuition levels of a local medical college or professional medical graduate school cover one quarter to one half of the per-student cost. The findings of this study do not necessarily imply an increase in medical college tuition; the estimation of the per-student cost of training a doctor is one matter, and the issue of who should bear this burden is another. For further study, we should consider the college type and its location for general application of the estimation method, in addition to living expenses and opportunity costs.

  17. Immunohistochemistry for predictive biomarkers in non-small cell lung cancer.

    PubMed

    Mino-Kenudson, Mari

    2017-10-01

    In the era of targeted therapy, predictive biomarker testing has become increasingly important for non-small cell lung cancer. Of multiple predictive biomarker testing methods, immunohistochemistry (IHC) is widely available and technically less challenging, can provide clinically meaningful results with a rapid turn-around-time and is more cost efficient than molecular platforms. In fact, several IHC assays for predictive biomarkers have already been implemented in routine pathology practice. In this review, we will discuss: (I) the details of anaplastic lymphoma kinase (ALK) and proto-oncogene tyrosine-protein kinase ROS (ROS1) IHC assays including the performance of multiple antibody clones, pros and cons of IHC platforms and various scoring systems to design an optimal algorithm for predictive biomarker testing; (II) issues associated with programmed death-ligand 1 (PD-L1) IHC assays; (III) appropriate pre-analytical tissue handling and selection of optimal tissue samples for predictive biomarker IHC.

  19. Algorithm For Optimal Control Of Large Structures

    NASA Technical Reports Server (NTRS)

    Salama, Moktar A.; Garba, John A.; Utku, Senol

    1989-01-01

    Cost of computation appears competitive with other methods. Problem to compute optimal control of forced response of structure with n degrees of freedom identified in terms of smaller number, r, of vibrational modes. Article begins with Hamilton-Jacobi formulation of mechanics and use of quadratic cost functional. Complexity reduced by alternative approach in which quadratic cost functional expressed in terms of control variables only. Leads to iterative solution of second-order time-integral matrix Volterra equation of second kind containing optimal control vector. Cost of algorithm, measured in terms of number of computations required, is of order of, or less than, cost of prior algorithms applied to similar problems.
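
    The scalar analogue of the central step - solving a Volterra integral equation of the second kind, u(t) = f(t) + \int_0^t K(t,s) u(s) ds, by marching forward with the trapezoidal rule - is easy to sketch. The actual problem is matrix-valued and tied to the control formulation; this shows only the generic equation type.

        import numpy as np

        def solve_volterra2(f, K, t):
            """March u(t) = f(t) + int_0^t K(t,s) u(s) ds on a uniform grid t
            with the trapezoidal rule, solving for u[i] at each step."""
            n, h = len(t), t[1] - t[0]
            u = np.empty(n)
            u[0] = f(t[0])
            for i in range(1, n):
                s = 0.5 * K(t[i], t[0]) * u[0]
                s += sum(K(t[i], t[j]) * u[j] for j in range(1, i))
                u[i] = (f(t[i]) + h * s) / (1.0 - 0.5 * h * K(t[i], t[i]))
            return u

        # Check against u' = u, u(0) = 1  (f = 1, K = 1, exact solution e^t).
        t = np.linspace(0.0, 1.0, 201)
        u = solve_volterra2(lambda x: 1.0, lambda x, y: 1.0, t)
        print(abs(u[-1] - np.exp(1.0)))          # small discretization error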

  20. A Method of Dynamic Extended Reactive Power Optimization in Distribution Network Containing Photovoltaic-Storage System

    NASA Astrophysics Data System (ADS)

    Wang, Wu; Huang, Wei; Zhang, Yongjun

    2018-03-01

    The grid integration of a Photovoltaic-Storage System brings some uncertain factors to the network. In order to make full use of the adjusting ability of the Photovoltaic-Storage System (PSS), this paper puts forward a reactive power optimization model whose objective function is based on power loss and device adjustment cost, including the energy storage adjustment cost. A Cataclysmic Genetic Algorithm is used to solve the optimization problem. Comparison with other optimization methods shows that the proposed dynamic extended reactive power optimization enhances the effect of reactive power optimization, reducing both power loss and device adjustment cost, while also taking voltage safety into consideration.

  1. Optimal design of green and grey stormwater infrastructure for small urban catchment based on life-cycle cost-effectiveness analysis

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Chui, T. F. M.

    2016-12-01

    Green infrastructure (GI) is identified as a sustainable and environmentally friendly alternative to conventional grey stormwater infrastructure. Commonly used GI (e.g. green roofs, bioretention, porous pavement) can provide multifunctional benefits, e.g. mitigation of urban heat island effects and improvements in air quality. Therefore, to optimize the design of GI and grey drainage infrastructure, it is essential to account for their benefits together with the costs. In this study, a comprehensive simulation-optimization modelling framework is developed that considers the economic and hydro-environmental aspects of GI and grey infrastructure for small urban catchment applications. Several modelling tools (i.e., the EPA SWMM model and the WERF BMP and LID Whole Life Cycle Cost Modelling Tools) and optimization solvers are coupled together to assess the life-cycle cost-effectiveness of GI and grey infrastructure, and to further develop optimal stormwater drainage solutions. A typical residential lot in New York City is examined as a case study. The life-cycle cost-effectiveness of various GI and grey infrastructure is first examined at different investment levels. The results, together with the catchment parameters, are then provided to the optimization solvers to derive the optimal investment and contributing area of each type of stormwater control. The relationship between the investment and the optimized environmental benefit is found to be nonlinear. The optimized drainage solutions demonstrate that grey infrastructure is preferred at low total investments, while more GI should be adopted at high investments. The sensitivity of the optimized solutions to the prices of the stormwater controls is evaluated and is found to be highly associated with their utilization in the base optimization case. The overall simulation-optimization framework can easily be applied to other sites worldwide and further developed into powerful decision support systems.

  2. "RCL-Pooling Assay": A Simplified Method for the Detection of Replication-Competent Lentiviruses in Vector Batches Using Sequential Pooling.

    PubMed

    Corre, Guillaume; Dessainte, Michel; Marteau, Jean-Brice; Dalle, Bruno; Fenard, David; Galy, Anne

    2016-02-01

    Nonreplicative recombinant HIV-1-derived lentiviral vectors (LV) are increasingly used in gene therapy of various genetic diseases, infectious diseases, and cancer. Before they are used in humans, preparations of LV must undergo extensive quality control testing. In particular, testing of LV must demonstrate the absence of replication-competent lentiviruses (RCL) with suitable methods, on representative fractions of vector batches. Current methods based on cell culture are challenging because high titers of vector batches translate into high volumes of cell culture to be tested in RCL assays. As vector batch sizes and titers are continuously increasing with the improvement of production and purification methods, it became necessary for us to modify the current RCL assay based on the detection of p24 in cultures of indicator cells. Here, we propose a practical optimization of this method using a pairwise pooling strategy enabling easier testing of higher vector inoculum volumes. These modifications significantly decrease material handling and operator time, leading to a cost-effective method, while maintaining optimal sensitivity of the RCL testing. This optimized "RCL-pooling assay" improves the feasibility of the quality control of large-scale batches of clinical-grade LV while maintaining the same sensitivity.

  3. Periodic Application of Stochastic Cost Optimization Methodology to Achieve Remediation Objectives with Minimized Life Cycle Cost

    NASA Astrophysics Data System (ADS)

    Kim, U.; Parker, J.

    2016-12-01

    Many dense non-aqueous phase liquid (DNAPL) contaminated sites in the U.S. are reported as "remediation in progress" (RIP). However, the cost to complete (CTC) remediation at these sites is highly uncertain, and in many cases the current remediation plan may need to be modified or replaced to achieve remediation objectives. This study evaluates the effectiveness of iterative stochastic cost optimization that incorporates new field data for periodic parameter recalibration, to incrementally reduce prediction uncertainty and implement remediation design modifications as needed to minimize the life cycle cost (i.e., CTC). This systematic approach, using the Stochastic Cost Optimization Toolkit (SCOToolkit), enables early identification and correction of problems to stay on track for completion while minimizing the expected (i.e., probability-weighted average) CTC. This study considers a hypothetical site involving multiple DNAPL sources in an unconfined aquifer, using thermal treatment for source reduction and electron donor injection for dissolved plume control. The initial design is based on stochastic optimization using model parameters and their joint uncertainty based on calibration to site characterization data. The model is periodically recalibrated using new monitoring data and performance data for the operating remediation systems. Projected future performance under the current remediation plan is assessed and, depending on the results, operational variables of the current system are reoptimized or alternative designs considered. We compare remediation duration and cost for the stepwise re-optimization approach with single-stage optimization as well as with a non-optimized design based on typical engineering practice.

  4. Antitumor Efficacy Testing in Rodents

    PubMed Central

    2008-01-01

    The preclinical research and human clinical trials necessary for developing anticancer therapeutics are costly. One contributor to these costs is preclinical rodent efficacy studies, which, in addition to the costs associated with conducting them, often guide the selection of agents for clinical development. If inappropriate or inaccurate recommendations are made on the basis of these preclinical studies, then additional costs are incurred. In this commentary, I discuss the issues associated with preclinical rodent efficacy studies. These include the identification of proper preclinical efficacy models, the selection of appropriate experimental endpoints, and the correct statistical evaluation of the resulting data. I also describe important experimental design considerations, such as selecting the drug vehicle, optimizing the therapeutic treatment plan, properly powering the experiment by defining appropriate numbers of replicates in each treatment arm, and proper randomization. Improved preclinical selection criteria can aid in reducing unnecessary human studies, thus reducing the overall costs of anticancer drug development. PMID:18957675

  5. Communications systems technology assessment study. Volume 2: Results

    NASA Technical Reports Server (NTRS)

    Kelley, R. L.; Khatri, R. K.; Kiesling, J. D.; Weiss, J. A.

    1977-01-01

    The cost and technology characteristics are examined for providing special satellite services at UHF, 2.5 GHz, and 14/12 GHz. Primarily health, educational, informational and emergency disaster type services are considered. The total cost of each configuration, including space segment, earth station, installation, operation and maintenance, was optimized to reduce the user's total annual cost and establish preferred equipment performance parameters. Technology expected to be available between now and 1985 is identified and comparisons made between selected alternatives. A key element of the study is a survey of earth station equipment updating past work in the field, providing new insight into technology, and evaluating production and test methods that can reduce costs in large production runs. Various satellite configurations were examined. The cost impact of rain attenuation at Ku-band was evaluated. The factors affecting the ultimate capacity achievable with the available orbital arc and available bandwidth were analyzed.

  6. Optimal linear reconstruction of dark matter from halo catalogues

    DOE PAGES

    Cai, Yan-Chuan; Bernstein, Gary; Sheth, Ravi K.

    2011-04-01

    The dark matter lumps (or "halos") that contain galaxies have locations in the Universe that are to some extent random with respect to the overall matter distribution. We investigate how best to estimate the total matter distribution from the locations of the halos. We derive the weight function w(M) to apply to dark-matter haloes that minimizes the stochasticity between the weighted halo distribution and its underlying mass density field. The optimal w(M) depends on the range of masses of the halos being used. While the standard biased-Poisson model of the halo distribution predicts that bias weighting is optimal, the simple fact that the mass is comprised of haloes implies that the optimal w(M) will be a mixture of mass-weighting and bias-weighting. In N-body simulations, the Poisson estimator is up to 15 times noisier than the optimal one. Optimal weighting could make cosmological tests based on the matter power spectrum or cross-correlations much more powerful and/or cost effective.
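
    In configuration space, minimizing the stochasticity between a weighted sum of halo-mass-bin fields and the matter field is a linear least-squares problem, which suggests the simplified sketch below. The synthetic fields and bias values are invented; a real measurement would use simulation snapshots and typically work with Fourier modes.

        import numpy as np

        def optimal_bin_weights(halo_fields, matter_field):
            """Least-squares weights per mass bin minimizing the variance of
            (sum_i w_i * delta_i - delta_m), a discrete analogue of w(M)."""
            X = np.stack([f.ravel() for f in halo_fields], axis=1)
            y = matter_field.ravel()
            w, *_ = np.linalg.lstsq(X, y, rcond=None)
            return w

        # Mock data: three biased, noisy tracer fields of one "matter" field.
        rng = np.random.default_rng(0)
        delta_m = rng.normal(size=(32, 32, 32))
        halo_fields = [b * delta_m + rng.normal(scale=0.5, size=delta_m.shape)
                       for b in (0.9, 1.3, 2.0)]
        print(optimal_bin_weights(halo_fields, delta_m))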

  7. Optimal operation management of fuel cell/wind/photovoltaic power sources connected to distribution networks

    NASA Astrophysics Data System (ADS)

    Niknam, Taher; Kavousifard, Abdollah; Tabatabaei, Sajad; Aghaei, Jamshid

    2011-10-01

    In this paper a new multiobjective modified honey bee mating optimization (MHBMO) algorithm is presented to investigate the distribution feeder reconfiguration (DFR) problem considering renewable energy sources (RESs) (photovoltaics, fuel cells and wind energy) connected to the distribution network. The objective functions to be minimized are the electrical active power losses, the voltage deviations, the total electrical energy costs and the total emissions of RESs and substations. During the optimization process, the proposed algorithm finds a set of non-dominated (Pareto) optimal solutions which are stored in an external memory called the repository. Since the objective functions investigated are not of the same nature, a fuzzy clustering algorithm is utilized to keep the size of the repository within specified limits. Moreover, a fuzzy-based decision maker is adopted to select the 'best' compromise solution among the non-dominated optimal solutions of the multiobjective optimization problem. In order to demonstrate the feasibility and effectiveness of the proposed algorithm, two standard distribution test systems are used as case studies.
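
    The repository at the heart of such methods is simply the set of non-dominated solutions. A minimal Pareto filter for a minimization problem, with illustrative objective tuples:

        def pareto_filter(solutions):
            """Keep the non-dominated objective tuples of a minimization
            problem (e.g. (losses, deviation, cost, emissions) vectors)."""
            front = []
            for s in solutions:
                dominated = any(all(o <= v for o, v in zip(other, s))
                                and other != s for other in solutions)
                if not dominated:
                    front.append(s)
            return front

        print(pareto_filter([(3.0, 2.0), (2.0, 3.0), (4.0, 4.0), (3.5, 2.5)]))
        # -> [(3.0, 2.0), (2.0, 3.0)]; the last two points are dominated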

  8. Invasive urodynamic testing prior to surgical treatment for stress urinary incontinence in women: cost-effectiveness and value of information analyses in the context of a mixed methods feasibility study.

    PubMed

    Homer, Tara; Shen, Jing; Vale, Luke; McColl, Elaine; Tincello, Douglas G; Hilton, Paul

    2018-01-01

    INVESTIGATE-I (INVasive Evaluation before Surgical Treatment of Incontinence Gives Added Therapeutic Effect?) was a mixed methods study to assess the feasibility of a future randomised controlled trial of invasive urodynamic testing (IUT) prior to surgery for stress urinary incontinence (SUI) in women. Here we report one of the study's five components, with the specific objectives of (i) exploring the cost-effectiveness of IUT compared with clinical assessment plus non-invasive tests (henceforth described as 'IUT' and 'no IUT' respectively) in women with SUI or stress-predominant mixed urinary incontinence (MUI) prior to surgery, and (ii) determining the expected net gain (ENG) from additional research. Study participants were women with SUI or stress-predominant MUI who had failed to respond to conservative treatments, recruited from seven UK urogynaecology and female urology units. They were randomised to receive either 'IUT' or 'no IUT' before undergoing further treatment. Data from 218 women were used in the economic analysis. Cost utility, net benefit and value of information (VoI) analyses were performed within a randomised controlled pilot trial. Costs and quality-adjusted life years (QALYs) were estimated over 6 months to determine the incremental cost per QALY of 'IUT' compared to 'no IUT'. Net monetary benefit informed the VoI analysis. The VoI estimated the ENG and optimal sample size for a future definitive trial. At 6 months, the mean difference in total average cost was £138 (p = 0.071) in favour of 'IUT'; there was no difference in QALYs estimated from the SF-12 (difference 0.004; p = 0.425) and EQ-5D-3L (difference -0.004; p = 0.725); therefore, the probability of IUT being cost-effective remains uncertain. The estimated ENG was positive for further research to address this uncertainty, with an optimal sample size of 404 women. This is the largest economic evaluation of IUT. On average, up to 6 months after treatment, 'IUT' may be cost-saving compared to 'no IUT' because of the reduction in surgery following invasive investigation. However, uncertainty remains over the probability of 'IUT' being considered cost-effective, especially in the longer term. The VoI analysis indicated that further research would be of value. ISRCTN: ISRCTN71327395. Registered 7 June 2010.

  9. Improved mine blast algorithm for optimal cost design of water distribution systems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon

    2015-12-01

    The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.

  10. The added clinical and economic value of diagnostic testing for epilepsy surgery.

    PubMed

    Hinde, Sebastian; Soares, Marta; Burch, Jane; Marson, Anthony; Woolacott, Nerys; Palmer, Stephen

    2014-05-01

    The costs, benefits and risks associated with diagnostic imaging investigations for epilepsy surgery necessitate the identification of an optimal pathway in the pre-surgical workup. In order to assess the added value of additional investigations a full cost-effectiveness evaluation should be conducted, taking into account all of the life-time costs and benefits associated with undertaking additional investigations. This paper considers and applies the appropriate framework against which a full evaluation should be assessed. We conducted a systematic review to evaluate the progression of the literature through this framework, finding that only isolated elements of added value have been appropriately evaluated. The results from applying the full added value framework are also presented, identifying an optimal strategy for pre-surgical evaluation for temporal lobe epilepsy surgery. Our results suggest that additional FDG-PET and invasive EEG investigations after an initially discordant MRI and video-EEG appears cost-effective, and that the value of subsequent invasive-EEGs is closely linked to the maintenance of longer-term benefits after surgery. It is integral to the evaluation of imaging technologies in the work-up for epilepsy surgery that the impact of the use of these technologies on clinical decision-making, and on further treatment decisions, is considered fully when informing cost-effectiveness. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  11. Fast Quaternion Attitude Estimation from Two Vector Measurements

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    Many spacecraft attitude determination methods use exactly two vector measurements. The two vectors are typically the unit vector to the Sun and the Earth's magnetic field vector for coarse "sun-mag" attitude determination or unit vectors to two stars tracked by two star trackers for fine attitude determination. Existing closed-form attitude estimates based on Wahba's optimality criterion for two arbitrarily weighted observations are somewhat slow to evaluate. This paper presents two new fast quaternion attitude estimation algorithms using two vector observations, one optimal and one suboptimal. The suboptimal method gives the same estimate as the TRIAD algorithm, at reduced computational cost. Simulations show that the TRIAD estimate is almost as accurate as the optimal estimate in representative test scenarios.
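
    The TRIAD construction mentioned above is short enough to show in full: build an orthonormal triad from each vector pair and compose the two frames. This is the standard textbook algorithm (which trusts the first vector exactly), not the paper's new optimal estimator.

        import numpy as np

        def triad(b1, b2, r1, r2):
            """Rotation matrix A with b ~= A @ r from two vector observations
            (b1, b2 in the body frame; r1, r2 in the reference frame)."""
            def frame(v1, v2):
                t1 = v1 / np.linalg.norm(v1)
                t2 = np.cross(v1, v2)
                t2 = t2 / np.linalg.norm(t2)
                return np.column_stack((t1, t2, np.cross(t1, t2)))
            return frame(b1, b2) @ frame(r1, r2).T

        # Self-check with a known attitude: 90-degree rotation about z.
        A_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
        r1, r2 = np.array([1., 0., 0.]), np.array([0., 1., 1.])
        print(np.allclose(triad(A_true @ r1, A_true @ r2, r1, r2), A_true))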

  12. Cloud computing task scheduling strategy based on improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ge, Junwei; He, Qian; Fang, Yiqiu

    2017-04-01

    In order to optimize cloud computing task scheduling, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is derived from it; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to ensure both global and local search ability. A performance test experiment was carried out on the CloudSim simulation platform. The experimental results show that the improved differential evolution algorithm can reduce task execution time and save user cost, achieving good optimal scheduling of cloud computing tasks.
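
    For reference, the baseline scheme such improved variants start from is DE/rand/1/bin. The sketch below is the plain textbook algorithm with a toy fitness function, not the paper's improved variant (which adds generation-dependent selection and mutation strategies).

        import numpy as np

        def de_minimize(fitness, bounds, pop_size=30, F=0.6, CR=0.9, gens=200):
            """Plain DE/rand/1/bin; `fitness` maps a real vector (e.g. relaxed
            task-to-VM assignment weights) to a scalar cost."""
            rng = np.random.default_rng(1)
            lo, hi = np.array(bounds, dtype=float).T
            pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
            fit = np.array([fitness(x) for x in pop])
            for _ in range(gens):
                for i in range(pop_size):
                    a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)
                    cross = rng.random(len(bounds)) < CR
                    cross[rng.integers(len(bounds))] = True  # keep >= 1 gene
                    trial = np.where(cross, mutant, pop[i])
                    f = fitness(trial)
                    if f < fit[i]:                           # greedy selection
                        pop[i], fit[i] = trial, f
            return pop[np.argmin(fit)], fit.min()

        best, val = de_minimize(lambda x: float(np.sum(x ** 2)), [(-5, 5)] * 4)
        print(best, val)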

  13. Determination of an Optimal Commercial Data Bus Architecture for a Flight Data System

    NASA Technical Reports Server (NTRS)

    Crawford, Kevin; Johnson, Martin; Humphries, Rick (Technical Monitor)

    2001-01-01

    NASA/Marshall Space Flight Center (MSFC) is continually looking for methods to reduce cost and schedule while keeping the quality of work high. MSFC is NASA's lead center for space transportation and microgravity research. When supporting NASA's programs, several decisions concerning the avionics system must be made, and many trade studies are usually conducted to determine the best ways to meet the customer's requirements. When deciding on the flight data system, one of the first trade studies normally conducted is the determination of the data bus architecture. The schedule, cost, reliability, and environments are some of the factors reviewed in determining the data bus architecture. Based on the studies, the result could be a proprietary data bus or a commercial data bus; the cost factor usually removes the proprietary data bus from consideration. Commercial data buses range from Versa Module Eurocard (VME) to Compact PCI to STD 32 to PC104. If cost, schedule and size are prime factors, VME is usually not considered, leaving Compact PCI, STD 32 and PC104 as the choices for the data bus architecture. MSFC's center director has funded a study from his discretionary fund to determine an optimal low-cost commercial data bus architecture. The goal of the study is to functionally and environmentally test the Compact PCI, STD 32 and PC104 data bus architectures. This paper summarizes the results of the data bus architecture study.

  14. Large-area triple-junction a-Si alloy production scaleup. Annual subcontract report, 17 March 1993--18 March 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oswald, R.; Morris, J.

    1994-11-01

    The objective of this subcontract over its three-year duration is to advance Solarex's photovoltaic manufacturing technologies, reduce its a-Si:H module production costs, increase module performance and expand the Solarex commercial production capacity. Solarex shall meet these objectives by improving the deposition and quality of the transparent front contact, optimizing the laser patterning process, scaling up the semiconductor deposition process, improving the back contact deposition, and scaling up and improving the encapsulation and testing of its a-Si:H modules. In the Phase 2 portion of this subcontract, Solarex focused on improving deposition of the front contact, investigating alternate feedstocks for the front contact, maximizing throughput and area utilization for all laser scribes, optimizing a-Si:H deposition equipment to achieve uniform deposition over large areas, optimizing the triple-junction module fabrication process, evaluating the materials to deposit the rear contact, and optimizing the combination of isolation scribe and encapsulant to pass the wet high-potential test. Progress is reported on the following: front contact development; laser scribe process development; amorphous silicon based semiconductor deposition; rear contact deposition process; frit/bus/wire/frame; materials handling; and environmental test, yield and performance analysis.

  15. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two-dimensional thin-layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy based on the finite element method and an elastic membrane representation of the computational domain is successfully tested; it circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems: (1) internal flow through a double-throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having significantly improved performance in the aerodynamic response of interest.

  16. HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.

    PubMed

    Juusola, Jessie L; Brandeau, Margaret L

    2016-04-01

    To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
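
    The skeleton of such a model is a small linear program: maximize total QALYs subject to a budget and per-program spending caps. The coefficients below are invented placeholders (the published model also handles diseconomies of scale and subadditive benefits), but the qualitative output reproduces the abstract's ordering.

        import numpy as np
        from scipy.optimize import linprog

        qaly_per_m = np.array([120.0, 80.0, 15.0])  # CBE, ART, PrEP (made up)
        caps_m = np.array([20.0, 200.0, 300.0])     # max useful spend, $M
        budget_m = 150.0

        # linprog minimizes, so negate the QALY coefficients to maximize.
        res = linprog(c=-qaly_per_m,
                      A_ub=np.ones((1, 3)), b_ub=[budget_m],
                      bounds=list(zip([0.0, 0.0, 0.0], caps_m)),
                      method="highs")
        print(dict(zip(["CBE", "ART", "PrEP"], res.x)))
        # -> CBE funded to its cap, the remainder to ART, nothing to PrEP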

  17. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    NASA Astrophysics Data System (ADS)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

    Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various information-gradient, intensity distributions, and regional-property terms-are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.

  18. Fuzzy multi-objective optimization case study based on an anaerobic co-digestion process of food waste leachate and piggery wastewater.

    PubMed

    Choi, Angelo Earvin Sy; Park, Hung Suck

    2018-06-20

    This paper presents the development and evaluation of fuzzy multi-objective optimization for decision-making, applied to the process optimization of the anaerobic digestion (AD) process. An operating cost criterion, a fundamental gap in previous AD analyses, was integrated into the case study. The mixing ratio of food waste leachate (FWL) and piggery wastewater (PWW) and the calcium carbonate (CaCO3) and sodium chloride (NaCl) concentrations were optimized to enhance methane production while minimizing operating cost. The results indicated a maximum of 63.3% satisfaction for both methane production and operating cost under the following optimal conditions: mixing ratio (FWL:PWW) of 1.4, CaCO3 of 2970.5 mg/L and NaCl of 2.7 g/L. In the multi-objective optimization, the specific methane yield (SMY) was 239.0 mL CH4/g VS added, while 41.2% volatile solids reduction (VSR) was obtained at an operating cost of 56.9 US$/ton. In comparison, a previous optimization study using response surface methodology obtained an SMY, VSR and operating cost of 310 mL/g, 54% and 83.2 US$/ton, respectively. The results of the multi-objective fuzzy optimization demonstrate the potential of this technique for practical decision-making in the process optimization of AD. Copyright © 2018 Elsevier Ltd. All rights reserved.
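
    The max-min aggregation behind such fuzzy optimization is compact: map each objective onto a [0, 1] membership and maximize the smaller value. The membership ranges and the yield-cost trade-off curve below are invented for illustration; they are not the paper's process model.

        import numpy as np

        def membership(value, worst, best):
            """Linear fuzzy membership: 0 at the worst acceptable value,
            1 at the best."""
            return float(np.clip((value - worst) / (best - worst), 0.0, 1.0))

        def satisfaction(methane, cost):
            mu_ch4 = membership(methane, worst=150.0, best=350.0)  # mL CH4/g VS
            mu_cost = membership(-cost, worst=-100.0, best=-40.0)  # US$/ton
            return min(mu_ch4, mu_cost)             # max-min aggregation

        # Hypothetical trade-off: pushing yield raises operating cost.
        best_m = max(np.linspace(150.0, 350.0, 201),
                     key=lambda m: satisfaction(m, 20.0 + 0.16 * m))
        print(best_m, satisfaction(best_m, 20.0 + 0.16 * best_m))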

  19. Search-based optimization

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  20. OPTIMAL AIRCRAFT TRAJECTORIES FOR SPECIFIED RANGE

    NASA Technical Reports Server (NTRS)

    Lee, H.

    1994-01-01

    For an aircraft operating over a fixed range, the operating costs are basically a sum of fuel cost and time cost. While minimum fuel and minimum time trajectories are relatively easy to calculate, the determination of a minimum cost trajectory can be a complex undertaking. This computer program was developed to optimize trajectories with respect to a cost function based on a weighted sum of fuel cost and time cost. As a research tool, the program could be used to study various characteristics of optimum trajectories and their comparison to standard trajectories. It might also be used to generate a model for the development of an airborne trajectory optimization system. The program could be incorporated into an airline flight planning system, with optimum flight plans determined at takeoff time for the prevailing flight conditions. The use of trajectory optimization could significantly reduce the cost for a given aircraft mission. The algorithm incorporated in the program assumes that a trajectory consists of climb, cruise, and descent segments. The optimization of each segment is not done independently, as in classical procedures, but is performed in a manner which accounts for interaction between the segments. This is accomplished by the application of optimal control theory. The climb and descent profiles are generated by integrating a set of kinematic and dynamic equations, where the total energy of the aircraft is the independent variable. At each energy level of the climb and descent profiles, the air speed and power setting necessary for an optimal trajectory are determined. The variational Hamiltonian of the problem consists of the rate of change of cost with respect to total energy and a term dependent on the adjoint variable, which is identical to the optimum cruise cost at a specified altitude. This variable uniquely specifies the optimal cruise energy, cruise altitude, cruise Mach number, and, indirectly, the climb and descent profiles. If the optimum cruise cost is specified, an optimum trajectory can easily be generated; however, the range obtained for a particular optimum cruise cost is not known a priori. For short range flights, the program iteratively varies the optimum cruise cost until the computed range converges to the specified range. For long-range flights, iteration is unnecessary since the specified range can be divided into a cruise segment distance and full climb and descent distances. The user must supply the program with engine fuel flow rate coefficients and an aircraft aerodynamic model. The program currently includes coefficients for the Pratt-Whitney JT8D-7 engine and an aerodynamic model for the Boeing 727. Input to the program consists of the flight range to be covered and the prevailing flight conditions including pressure, temperature, and wind profiles. Information output by the program includes: optimum cruise tables at selected weights, optimal cruise quantities as a function of cruise weight and cruise distance, climb and descent profiles, and a summary of the complete synthesized optimal trajectory. This program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 100K (octal) of 60 bit words. This aircraft trajectory optimization program was developed in 1979.
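
    At cruise, the fuel/time trade-off the program resolves reduces to minimizing cost per unit distance, (fuel flow x fuel price + time cost) / ground speed, over speed. A sketch with an invented convex fuel-flow model (none of the numbers come from the program's JT8D-7 or Boeing 727 data):

        import numpy as np

        fuel_price = 0.9       # $/kg
        time_cost = 2200.0     # $/h of crew, maintenance, depreciation
        a_sound = 573.0        # kt, speed of sound at cruise altitude
        wind = -30.0           # kt (headwind)

        def cost_per_nmi(mach):
            fuel_flow = 2600.0 + 40000.0 * (mach - 0.70) ** 2  # kg/h, made up
            ground_speed = mach * a_sound + wind                # kt
            return (fuel_flow * fuel_price + time_cost) / ground_speed

        machs = np.linspace(0.70, 0.86, 161)
        best = machs[np.argmin([cost_per_nmi(m) for m in machs])]
        print(f"economical cruise Mach ~ {best:.3f}, "
              f"{cost_per_nmi(best):.2f} $/nmi")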

  1. An Innovative Time-Cost-Quality Tradeoff Modeling of Building Construction Project Based on Resource Allocation

    PubMed Central

    2014-01-01

    The time, quality, and cost are three important but contradictory objectives in a building construction project. It is a tough challenge for project managers to optimize them, since they are parameters of different kinds. This paper presents a time-cost-quality optimization model that enables managers to optimize multiple objectives. The model derives from the project breakdown structure method, in which the task resources of a construction project are divided into a series of activities and further into construction labor, materials, equipment, and administration. The resources utilized in a construction activity ultimately determine its construction time, cost, and quality, and a complex time-cost-quality trade-off model is finally generated based on the correlations between construction activities. A genetic algorithm tool is applied in the model to solve the comprehensive nonlinear time-cost-quality problems. The building of a three-storey house is used as an example to illustrate the implementation of the model, demonstrate its advantages in optimizing the trade-off of construction time, cost, and quality, and help make winning decisions in construction practice. The computational time-cost-quality curves, shown in visual graphics from the case study, confirm that traditional cost-time assumptions are reasonable and that this time-cost-quality trade-off model is sophisticated. PMID:24672351

  2. Comparing the economic and health benefits of different approaches to diagnosing Clostridium difficile infection.

    PubMed

    Bartsch, Sarah M; Umscheid, Craig A; Nachamkin, Irving; Hamilton, Keith; Lee, Bruce Y

    2015-01-01

    Accurate diagnosis of Clostridium difficile infection (CDI) is essential to effectively managing patients and preventing transmission. Despite the availability of several diagnostic tests, the optimal strategy is debatable and their economic values are unknown. We modified our previously existing C. difficile simulation model to determine the economic value of different CDI diagnostic approaches from the hospital perspective. We evaluated four diagnostic methods for a patient suspected of having CDI: 1) toxin A/B enzyme immunoassay, 2) glutamate dehydrogenase (GDH) antigen/toxin AB combined in one test, 3) nucleic acid amplification test (NAAT), and 4) GDH antigen/toxin AB combination test with NAAT confirmation of indeterminate results. Sensitivity analysis varied the proportion of those tested with clinically significant diarrhoea, the probability of CDI, NAAT cost, CDI treatment delay resulting from a false-negative test, length of stay, and diagnostic sensitivity and specificity. The GDH/toxin AB plus NAAT approach led to the timeliest treatment with the fewest unnecessary treatments given, resulted in the best bed management, and generated the lowest cost. The NAAT-alone approach also led to timely treatment. The GDH/toxin AB approach (without NAAT confirmation) resulted in a large number of delayed treatments, but in the fewest secondary colonisations. Results were robust to the sensitivity analysis. Choosing the right diagnostic approach is a matter of cost and test accuracy. GDH/toxin AB plus NAAT diagnosis led to the timeliest treatment and was the least costly. Copyright © 2014 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
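
    As a rough sketch of the economics being compared, the expected per-patient cost of a single-test strategy can be written in terms of prevalence, test accuracy, and downstream costs; all figures below are placeholders, not the calibrated inputs of the simulation model.

```python
def expected_cost(prev, sens, spec, c_test, c_treat, c_delay):
    """Expected per-patient cost of one test-and-treat strategy.

    prev: probability the tested patient truly has CDI
    c_delay: extra cost attributed to a false negative (delayed treatment)
    """
    p_treated = prev * sens + (1 - prev) * (1 - spec)   # true + false positives
    p_missed = prev * (1 - sens)                        # false negatives
    return c_test + p_treated * c_treat + p_missed * c_delay

# e.g. a cheap, less sensitive immunoassay vs a costlier, more sensitive NAAT
print(expected_cost(0.10, 0.80, 0.98, 15, 500, 2000))
print(expected_cost(0.10, 0.95, 0.95, 45, 500, 2000))
```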

  3. Simultaneous optimization of micro-heliostat geometry and field layout using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lazardjani, Mani Yousefpour; Kronhardt, Valentina; Dikta, Gerhard; Göttsche, Joachim

    2016-05-01

    A new optimization tool for micro-heliostat (MH) geometry and field layout is presented. The method aims at simultaneous performance improvement and cost reduction through iteration of heliostat geometry and field layout parameters. The tool was developed primarily for the optimization of a novel micro-heliostat concept devised at Solar-Institut Jülich (SIJ); however, the underlying optimization approach can be used for any heliostat type. During the optimization, performance is calculated using the ray-tracing tool SolCal, and the costs of the heliostats are calculated by use of a detailed cost function. A genetic algorithm is used to change heliostat geometry and field layout in an iterative process. Starting from an initial setup, the optimization tool generates several configurations of heliostat geometries and field layouts, and a cost-performance ratio is calculated for each configuration. Based on that ratio, the best geometry and field layout can be selected in each optimization step. This step is repeated until no significant improvement in the results is observed.

  4. Foraging optimally for home ranges

    USGS Publications Warehouse

    Mitchell, Michael S.; Powell, Roger A.

    2012-01-01

    Economic models predict the behavior of animals based on the presumption that natural selection has shaped behaviors important to an animal's fitness to maximize benefits over costs. Economic analyses have shown that territories of animals are structured by trade-offs between the benefits gained from resources and the costs of defending them. Intuitively, home ranges should be similarly structured, but the trade-offs are difficult to assess because there are no costs of defense; thus, economic models of home-range behavior are rare. We present economic models that predict how home ranges can be efficient with respect to spatially distributed resources, discounted for travel costs, under two strategies of optimization: resource maximization and area minimization. We show how constraints such as competitors can influence the structure of home ranges through resource depression, ultimately structuring the density of animals within a population and their distribution on a landscape. We present simulations based on these models to show how they can be generally predictive of home-range behavior and the mechanisms that structure the spatial distribution of animals. We also show how contiguous home ranges estimated statistically from location data can be misleading for animals that optimize home ranges on landscapes with patchily distributed resources. We conclude with a summary of how we applied our models to nonterritorial black bears (Ursus americanus) living in the mountains of North Carolina, where we found their home ranges were best predicted by an area-minimization strategy constrained by intraspecific competition within a social hierarchy. Economic models can provide strong inference about home-range behavior and the resources that structure home ranges by offering falsifiable, a priori hypotheses that can be tested with field observations.

  5. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel A.

    2016-11-01

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
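
    A minimal sketch of the figure of merit, assuming a simplified i.i.d. restart model: the expected total cost of stopping after t calls is the expected best objective value over t draws plus the cost-per-call times t, and the benchmark reports its minimum over t. The sample-based estimate below is illustrative, not the paper's exact formulation.

```python
import random

def expected_total_cost(run_objectives, cost_per_call, t, trials=2000):
    """Monte Carlo estimate of E[best objective over t calls] + c * t."""
    acc = 0.0
    for _ in range(trials):
        acc += min(random.choices(run_objectives, k=t))
    return acc / trials + cost_per_call * t

def optimal_stopping(run_objectives, cost_per_call, t_max=50):
    """Return (expected cost, t) at the cost-minimizing number of calls."""
    return min((expected_total_cost(run_objectives, cost_per_call, t), t)
               for t in range(1, t_max + 1))
```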

  6. 4E analysis and multi objective optimization of a micro gas turbine and solid oxide fuel cell hybrid combined heat and power system

    NASA Astrophysics Data System (ADS)

    Sanaye, Sepehr; Katebi, Arash

    2014-02-01

    Energy, exergy, economic, and environmental (4E) analysis and optimization of a hybrid solid oxide fuel cell and micro gas turbine (SOFC-MGT) system for use in combined generation of heat and power (CHP) are investigated in this paper. The hybrid system is modeled and the performance-related results are validated using available data in the literature. A multi-objective optimization approach based on a genetic algorithm is then incorporated, with eight system design parameters selected for the optimization procedure. System exergy efficiency and total cost rate (including capital or investment cost, operational cost, and the penalty cost of environmental emissions) are the two objectives. The effects of fuel unit cost, capital investment, and system power output on the optimum design parameters are also investigated. It is observed that the most sensitive and important design parameter in the hybrid system is the fuel cell current density, which has a significant effect on the balance between system cost and efficiency. The design point selected from the Pareto front of the optimization results indicates a total system exergy efficiency of 60.7%, an estimated electrical energy cost of $0.057 kW-1 h-1, and a payback period of about 6.3 years for the investment.

  7. Clinical application of pharmacogenetics: focusing on practical issues.

    PubMed

    Chang, Matthew T; McCarthy, Jeanette J; Shin, Jaekyu

    2015-01-01

    Recent large-scale genetic-based studies have transformed the field of pharmacogenetics, making it possible to identify, characterize, and leverage genetic information to inform patient care. Genetic testing can be used to alter drug selection, optimize drug dosing, and prevent unnecessary adverse events. As precision medicine becomes the mainstay in the clinic, it is critical for clinicians to utilize pharmacogenetics to guide patient care. One primary challenge is identifying the patients for whom genetic tests can potentially impact care. To address this challenge, our review highlights many practical issues clinicians may encounter: identifying candidate patients and clinical laboratories for pharmacogenetic testing, selecting highly curated resources to help assess test validity, reimbursing the costs of pharmacogenetic tests, and interpreting pharmacogenetic test results.

  8. CONDUIT: A New Multidisciplinary Integration Environment for Flight Control Development

    NASA Technical Reports Server (NTRS)

    Tischler, Mark B.; Colbourne, Jason D.; Morel, Mark R.; Biezad, Daniel J.; Levine, William S.; Moldoveanu, Veronica

    1997-01-01

    A state-of-the-art computational facility for aircraft flight control design, evaluation, and integration called CONDUIT (Control Designer's Unified Interface) has been developed. This paper describes the CONDUIT tool and case study applications to complex rotary- and fixed-wing fly-by-wire flight control problems. Control system analysis and design optimization methods are presented, including the definition of design specifications and system models within CONDUIT, and the multi-objective function optimization (CONSOL-OPTCAD) used to tune the selected design parameters. Design examples are based on flight test programs for which extensive data are available for validation. CONDUIT is used to analyze baseline control laws against pertinent military handling qualities and control system specifications. In both case studies, CONDUIT successfully exploits trade-offs between forward loop and feedback dynamics to significantly improve the expected handling qualities and minimize the required actuator authority. The CONDUIT system provides a new environment for integrated control system analysis and design, and has the potential to significantly reduce the time and cost of control system flight test optimization.

  9. Testing of Strategies for the Acceleration of the Cost Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ponciroli, Roberto; Vilim, Richard B.

    The general problem addressed in the Nuclear-Renewable Hybrid Energy System (N-R HES) project is finding the optimum economical dispatch (ED) and capacity planning solutions for hybrid energy systems. In the present test-problem configuration, the N-R HES unit is composed of three electrical power-generating components, i.e. the Balance of Plant (BOP), the Secondary Energy Source (SES), and the Energy Storage (ES). In addition, there is an Industrial Process (IP), which is devoted to hydrogen generation. At this preliminary stage, the goal is to find the power outputs of each one of the N-R HES unit components (BOP, SES, ES) and the IP hydrogen production level that maximize the unit profit while simultaneously satisfying individual component operational constraints. The optimization problem is meant to be solved in the Risk Analysis Virtual Environment (RAVEN) framework. The dynamic response of the N-R HES unit components is simulated by using dedicated object-oriented models written in the Modelica modeling language. Though this code coupling provides for very accurate predictions, the ensuing optimization problem is characterized by a very large number of solution variables. To ease the computational burden and to improve the path to a converged solution, a method to better estimate the initial guess for the optimization problem solution was developed. The proposed approach led to the definition of a suitable Monte Carlo-based optimization algorithm (called the preconditioner), which provides an initial guess for the optimal N-R HES power dispatch and the optimal installed capacity for each one of the unit components. The preconditioner samples a set of stochastic power scenarios for each one of the N-R HES unit components, and then for each of them the corresponding value of a suitably defined cost function is evaluated. After having simulated a sufficient number of power histories, the configuration which ensures the highest profit is selected as the optimal one. The component physical dynamics are represented through suitable ramp constraints, which considerably simplify the numerical solution. In order to test the capabilities of the proposed approach, only the dispatch problem is tackled in the present report, i.e. a reference unit configuration is assumed, and each one of the N-R HES unit components is assumed to have a fixed installed capacity. As for the next steps, the main improvement will concern the operation strategy of the ES facility. In particular, in order to describe a more realistic battery commitment strategy, the ES operation will be regulated according to electricity price forecasts.
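
    A minimal sketch of the preconditioner idea, with sample_scenario and profit as hypothetical callables standing in for the stochastic scenario sampler and the cost-function evaluation described above:

```python
def precondition(sample_scenario, profit, n_samples=1000):
    """Keep the most profitable of n randomly sampled dispatch profiles.

    sample_scenario: callable returning a random power history for each
        component, e.g. {"BOP": [...], "SES": [...], "ES": [...], "H2": [...]},
        already satisfying the ramp-rate constraints
    profit: callable evaluating the economic cost function of a profile
    """
    best, best_profit = None, float("-inf")
    for _ in range(n_samples):
        candidate = sample_scenario()
        p = profit(candidate)
        if p > best_profit:
            best, best_profit = candidate, p
    return best, best_profit   # initial guess handed to the RAVEN optimizer
```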

  10. Integrated strategic and tactical biomass-biofuel supply chain optimization.

    PubMed

    Lin, Tao; Rodríguez, Luis F; Shastri, Yogendra N; Hansen, Alan C; Ting, K C

    2014-03-01

    To ensure effective biomass feedstock provision for large-scale biofuel production, an integrated biomass supply chain optimization model was developed to minimize annual biomass-ethanol production costs by optimizing both strategic and tactical planning decisions simultaneously. The mixed integer linear programming model optimizes activities ranging from biomass harvesting, packing, in-field transportation, stacking, transportation, preprocessing, and storage to ethanol production and distribution. The numbers, locations, and capacities of facilities, as well as biomass and ethanol distribution patterns, are key strategic decisions, while biomass production, delivery, and operating schedules and inventory monitoring are key tactical decisions. The model was implemented to study a Miscanthus-ethanol supply chain in Illinois. The base case results showed unit Miscanthus-ethanol production costs of $0.72 L-1 of ethanol. Biorefinery-related costs account for 62% of the total costs, followed by biomass procurement costs. Sensitivity analysis showed that a 50% reduction in biomass yield would increase unit production costs by 11%. Copyright © 2014 Elsevier Ltd. All rights reserved.
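
    A toy version of such a strategic/tactical MILP, sketched with the open-source PuLP modeler: binary variables select facilities (strategic) while continuous variables route biomass (tactical). All sets and coefficients are illustrative, not the Illinois Miscanthus data.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

farms, depots = ["f1", "f2"], ["d1", "d2"]
supply = {"f1": 120, "f2": 80}          # t/yr available at each farm
open_cost = {"d1": 900, "d2": 700}      # annualized facility cost, $
ship_cost = {("f1", "d1"): 4, ("f1", "d2"): 6,
             ("f2", "d1"): 5, ("f2", "d2"): 3}   # $/t
demand = 150                            # t/yr required downstream

m = LpProblem("biomass_chain", LpMinimize)
y = {d: LpVariable(f"open_{d}", cat=LpBinary) for d in depots}
x = {(f, d): LpVariable(f"ship_{f}_{d}", lowBound=0)
     for f in farms for d in depots}

# objective: facility (strategic) + transport (tactical) costs
m += lpSum(open_cost[d] * y[d] for d in depots) + \
     lpSum(ship_cost[f, d] * x[f, d] for f in farms for d in depots)
for f in farms:
    m += lpSum(x[f, d] for d in depots) <= supply[f]     # farm capacity
m += lpSum(x[f, d] for f in farms for d in depots) >= demand
for f in farms:
    for d in depots:
        m += x[f, d] <= supply[f] * y[d]   # ship only through open depots
m.solve()
```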

  11. Procedure for minimizing the cost per watt of photovoltaic systems

    NASA Technical Reports Server (NTRS)

    Redfield, D.

    1977-01-01

    A general analytic procedure is developed that provides a quantitative method for optimizing any element or process in the fabrication of a photovoltaic energy conversion system by minimizing its impact on the cost per watt of the complete system. By determining the effective value of any power loss associated with each element of the system, this procedure furnishes the design specifications that optimize the cost-performance tradeoffs for each element. A general equation is derived that optimizes the properties of any part of the system in terms of appropriate cost and performance functions, although the power-handling components are found to have a different character from the cell and array steps. Another principal result is that a fractional performance loss occurring at any cell- or array-fabrication step produces the same fractional increase in the cost per watt of the complete array. It also follows that no element or process step can be optimized correctly by considering only its own cost and performance.
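
    The result that a fractional power loss at a cell- or array-fabrication step produces the same fractional increase in cost per watt follows from cost per watt being C/(P(1 - delta)) ~ (C/P)(1 + delta) for small delta; a quick numerical check with illustrative numbers:

```python
C, P = 1000.0, 400.0   # illustrative array cost ($) and rated output (W)
for delta in (0.01, 0.05, 0.10):
    increase = C / (P * (1 - delta)) / (C / P) - 1   # fractional rise in $/W
    print(f"power loss {delta:.0%} -> cost/W up {increase:.2%} (~= {delta:.0%})")
```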

  12. Life-cycle cost as basis to optimize waste collection in space and time: A methodology for obtaining a detailed cost breakdown structure.

    PubMed

    Sousa, Vitor; Dias-Ferreira, Celia; Vaz, João M; Meireles, Inês

    2018-05-01

    Extensive research has been carried out on waste collection costs, mainly to differentiate the costs of distinct waste streams and to optimize waste collection services spatially (e.g., routes, number, and location of waste facilities). However, waste collection managers also face the challenge of optimizing assets in time, for instance deciding when to replace and how to maintain, or which technological solution to adopt. These issues require more detailed knowledge of the waste collection service's cost breakdown structure. The present research adapts the methodology for buildings' life-cycle cost (LCC) analysis, detailed in ISO 15686-5:2008, to waste collection assets. The proposed methodology is then applied to the waste collection assets owned and operated by a real municipality in Portugal (Cascais Ambiente - EMAC). The goal is to highlight the potential of the LCC tool in providing a baseline for time optimization of the waste collection service and assets, namely by assisting decisions regarding equipment operation and replacement.
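
    A minimal sketch of a present-value LCC calculation in the spirit of ISO 15686-5 (capital, recurring operation/maintenance streams, residual value), with illustrative figures rather than the municipality's actual cost data:

```python
def life_cycle_cost(capital, annual_costs, discount_rate, horizon_yrs,
                    residual_value=0.0):
    """Present-value life-cycle cost of one asset, ISO 15686-5 style.

    annual_costs: recurring cost streams per year, e.g.
                  {"operation": 25_000, "maintenance": 6_000}
    """
    pv = capital
    for year in range(1, horizon_yrs + 1):
        pv += sum(annual_costs.values()) / (1 + discount_rate) ** year
    pv -= residual_value / (1 + discount_rate) ** horizon_yrs
    return pv

# e.g. compare keeping a collection vehicle 8 vs 12 years before replacement
streams = {"operation": 25_000, "maintenance": 6_000}
print(life_cycle_cost(180_000, streams, 0.04, 8, residual_value=40_000) / 8)
print(life_cycle_cost(180_000, streams, 0.04, 12, residual_value=15_000) / 12)
```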

  13. Performance of a Limiting-Antigen Avidity Enzyme Immunoassay for Cross-Sectional Estimation of HIV Incidence in the United States

    PubMed Central

    Konikoff, Jacob; Brookmeyer, Ron; Longosz, Andrew F.; Cousins, Matthew M.; Celum, Connie; Buchbinder, Susan P.; Seage, George R.; Kirk, Gregory D.; Moore, Richard D.; Mehta, Shruti H.; Margolick, Joseph B.; Brown, Joelle; Mayer, Kenneth H.; Koblin, Beryl A.; Justman, Jessica E.; Hodder, Sally L.; Quinn, Thomas C.; Eshleman, Susan H.; Laeyendecker, Oliver

    2013-01-01

    Background A limiting antigen avidity enzyme immunoassay (HIV-1 LAg-Avidity assay) was recently developed for cross-sectional HIV incidence estimation. We evaluated the performance of the LAg-Avidity assay alone and in multi-assay algorithms (MAAs) that included other biomarkers. Methods and Findings Performance of testing algorithms was evaluated using 2,282 samples from individuals in the United States collected 1 month to >8 years after HIV seroconversion. The capacity of selected testing algorithms to accurately estimate incidence was evaluated in three longitudinal cohorts. When used in a single-assay format, the LAg-Avidity assay classified some individuals infected >5 years as assay positive and failed to provide reliable incidence estimates in cohorts that included individuals with long-term infections. We evaluated >500,000 testing algorithms that included the LAg-Avidity assay alone and MAAs with other biomarkers (BED capture immunoassay [BED-CEIA], BioRad-Avidity assay, HIV viral load, CD4 cell count), varying the assays and assay cutoffs. We identified an optimized 2-assay MAA that included the LAg-Avidity and BioRad-Avidity assays, and an optimized 4-assay MAA that included those assays as well as HIV viral load and CD4 cell count. The two optimized MAAs classified all 845 samples from individuals infected >5 years as MAA negative and estimated incidence within a year of sample collection. These two MAAs produced incidence estimates that were consistent with those from longitudinal follow-up of cohorts. A comparison of the laboratory assay costs of the MAAs was also performed, and we found that the costs associated with the optimal 2-assay MAA were substantially lower than those of the 4-assay MAA. Conclusions The LAg-Avidity assay did not perform well in a single-assay format, regardless of the assay cutoff. MAAs that include the LAg-Avidity and BioRad-Avidity assays, with or without viral load and CD4 cell count, provide accurate incidence estimates. PMID:24386116
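
    The decision logic of such MAAs reduces to threshold tests combined with AND conditions; the sketch below uses placeholder cutoffs for illustration, not the study's optimized values.

```python
def maa2_recent(lag_odn, biorad_ai, lag_cut=1.5, biorad_cut=40.0):
    """2-assay MAA: classify as 'recent' only if the sample is below
    threshold on BOTH avidity assays (placeholder cutoffs)."""
    return lag_odn < lag_cut and biorad_ai < biorad_cut

def maa4_recent(lag_odn, biorad_ai, viral_load, cd4,
                vl_cut=400.0, cd4_cut=200.0):
    """4-assay MAA: additionally require a detectable viral load and a CD4
    count above a floor before accepting a 'recent' classification."""
    return (maa2_recent(lag_odn, biorad_ai)
            and viral_load > vl_cut and cd4 > cd4_cut)
```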

  14. The application of simulation modeling to the cost and performance ranking of solar thermal power plants

    NASA Technical Reports Server (NTRS)

    Rosenberg, L. S.; Revere, W. R.; Selcuk, M. K.

    1981-01-01

    Small solar thermal power systems (up to 10 MWe in size) were tested. The solar thermal power plant ranking study was performed to aid experiment planning and to support decisions on selecting the most appropriate technological approach. Cost and performance were determined for given insolation conditions by utilizing the Solar Energy Simulation computer code (SESII). This model optimizes the size of the collector field and the energy storage subsystem for given engine-generator and energy transport characteristics. The development of the simulation tool, its operation, and the results of the analysis are discussed.

  15. Socially optimal electric driving range of plug-in hybrid electric vehicles

    DOE PAGES

    Kontou, Eleftheria; Yin, Yafeng; Lin, Zhenhong

    2015-07-25

    Our study determines the optimal electric driving range of plug-in hybrid electric vehicles (PHEVs) that minimizes the daily cost borne by society when using this technology. An optimization framework is developed and applied to datasets representing the US market. Results indicate that the optimal range is 16 miles, with an average social cost of $3.19 per day when exclusively charging at home, compared to $3.27 per day for driving a conventional vehicle. The optimal range is found to be sensitive to the cost of battery packs and the price of gasoline. Moreover, when workplace charging is available, the optimal electric driving range surprisingly increases from 16 to 22 miles, as larger batteries allow drivers to better take advantage of the charging opportunities to achieve longer electrified travel distances, yielding social cost savings. If workplace charging is available, the optimal deployment density is one workplace charger for every 3.66 vehicles. Finally, diversifying the battery size, i.e., introducing two or three distinct electric driving ranges to the market, could further decrease the average societal cost per PHEV by 7.45% and 11.5%, respectively.
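
    The structure of the underlying trade-off can be sketched as follows: battery cost amortized per mile of range versus fuel savings on the electrified fraction of daily travel. All cost figures and the daily-mileage sample are illustrative, not the study's calibrated US-market inputs.

```python
def daily_social_cost(aer, daily_miles, battery_per_mile_of_range=0.09,
                      elec_per_mile=0.04, gas_per_mile=0.12):
    """Expected daily cost of one PHEV with all-electric range `aer` (miles).

    daily_miles: sample of daily travel distances; all $ figures are
    placeholders, not the study's calibrated inputs.
    """
    energy = 0.0
    for d in daily_miles:
        ev = min(d, aer)                          # electrified miles
        energy += ev * elec_per_mile + (d - ev) * gas_per_mile
    return aer * battery_per_mile_of_range + energy / len(daily_miles)

days = [8, 15, 22, 30, 45, 60, 90]                # toy daily-mileage sample
best_range = min(range(5, 60), key=lambda r: daily_social_cost(r, days))
print(best_range, round(daily_social_cost(best_range, days), 2))
```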

  16. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods

    PubMed Central

    Zatsiorsky, Vladimir M.

    2011-01-01

    One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of the infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907

  17. PACE: Power-Aware Computing Engines

    DTIC Science & Technology

    2005-02-01

    …more costly than computation on our test platform, and it is memory access that dominates most lossless data compression algorithms. … A compression algorithm may be implemented with many different, yet reasonable, data structures … This section discusses data compression for low-bandwidth devices and optimizing algorithms for low energy. …

  18. Effect of plot and sample size on timing and precision of urban forest assessments

    Treesearch

    David J. Nowak; Jeffrey T. Walton; Jack C. Stevens; Daniel E. Crane; Robert E. Hoehn

    2008-01-01

    Accurate field data can be used to assess ecosystem services from trees and to improve urban forest management, yet little is known about the optimization of field data collection in the urban environment. Various field and Geographic Information System (GIS) tests were performed to help understand how time costs and precision of tree population estimates change with...

  19. Mathematical Modeling for Optimal System Testing under Fixed-cost Constraint

    DTIC Science & Technology

    2009-04-22

  20. Airfoil Design and Optimization by the One-Shot Method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1995-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
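
    A minimal illustration of the costate idea (not of the one-shot multigrid scheme itself): for a cost J(x(u)) constrained by a linear state equation A x = b(u), one adjoint solve yields the gradient with respect to all design variables at once.

```python
import numpy as np

def design_gradient(A, b_of_u, dbdu, dJdx, u):
    """Gradient of J(x(u)) subject to A x = b(u), via the costate system.

    dbdu: Jacobian of b w.r.t. the design u (n_state x n_design)
    dJdx: callable returning the gradient of the cost w.r.t. the state x
    """
    x = np.linalg.solve(A, b_of_u(u))       # state equation
    lam = np.linalg.solve(A.T, dJdx(x))     # single costate (adjoint) solve
    return lam @ dbdu(u)                    # chain rule: dJ/du = lam^T db/du
```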

  1. Airfoil optimization by the one-shot method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1994-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.

  2. Model-based optimal design of active cool thermal energy storage for maximal life-cycle cost saving from demand management in commercial buildings

    DOE PAGES

    Cui, Borui; Gao, Dian-ce; Xiao, Fu; ...

    2016-12-23

    This article provides a method for comprehensive evaluation of the cost-saving potential of active cool thermal energy storage (CTES) integrated with an HVAC system for demand management in non-residential buildings. The active storage is beneficial in shifting peak demand for peak load management (PLM) as well as in providing longer duration and larger capacity of demand response (DR). In this research, a model-based optimal design method using a genetic algorithm is developed to optimize the capacity of active CTES, aiming to maximize the life-cycle cost saving with respect to the capital cost associated with storage capacity as well as incentives from both fast DR and PLM. In the method, the active CTES operates under a fast DR control strategy during DR events and under the storage-priority operation mode to shift peak demand during normal days. The optimal storage capacities, maximum annual net cost saving, and corresponding power reduction set-points during DR events are obtained by using the proposed optimal design method. Lastly, this research provides guidance for comprehensive evaluation of the cost-saving potential of CTES integrated with HVAC systems for building demand management, including both fast DR and PLM.

  3. Model-based optimal design of active cool thermal energy storage for maximal life-cycle cost saving from demand management in commercial buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Borui; Gao, Dian-ce; Xiao, Fu

    This article provides a method for comprehensive evaluation of the cost-saving potential of active cool thermal energy storage (CTES) integrated with an HVAC system for demand management in non-residential buildings. The active storage is beneficial in shifting peak demand for peak load management (PLM) as well as in providing longer duration and larger capacity of demand response (DR). In this research, a model-based optimal design method using a genetic algorithm is developed to optimize the capacity of active CTES, aiming to maximize the life-cycle cost saving with respect to the capital cost associated with storage capacity as well as incentives from both fast DR and PLM. In the method, the active CTES operates under a fast DR control strategy during DR events and under the storage-priority operation mode to shift peak demand during normal days. The optimal storage capacities, maximum annual net cost saving, and corresponding power reduction set-points during DR events are obtained by using the proposed optimal design method. Lastly, this research provides guidance for comprehensive evaluation of the cost-saving potential of CTES integrated with HVAC systems for building demand management, including both fast DR and PLM.

  4. A Genetic Algorithm for the Generation of Packetization Masks for Robust Image Communication

    PubMed Central

    Zapata-Quiñones, Katherine; Duran-Faundez, Cristian; Gutiérrez, Gilberto; Lecuire, Vincent; Arredondo-Flores, Christopher; Jara-Lipán, Hugo

    2017-01-01

    Image interleaving has proven to be an effective way to provide robustness in image communication systems when resource limitations make reliable protocols unsuitable (e.g., in wireless camera sensor networks); however, the search for optimal interleaving patterns is scarcely tackled in the literature. In 2008, Rombaut et al. presented an interesting approach introducing a packetization mask generator based on Simulated Annealing (SA), including a cost function that allows assessing the suitability of a packetization pattern without extensive simulations. In this work, we present a complementary study of the non-trivial problem of generating optimal packetization patterns. We propose a genetic algorithm, as an alternative to the cited work, adopting the mentioned cost function, and compare it to the SA approach and a torus automorphism interleaver. In addition, we address the validation of the cost function and provide results on its implications for the quality of reconstructed images. Several scenarios based on visual sensor network applications were tested in a computer application. Results in terms of the selected cost function and the image quality metric PSNR show that our algorithm yields results similar to the other approaches. Finally, we discuss the obtained results and comment on open research challenges. PMID:28452934

  5. Development of a codon optimization strategy using the efor RED reporter gene as a test case

    NASA Astrophysics Data System (ADS)

    Yip, Chee-Hoo; Yarkoni, Orr; Ajioka, James; Wan, Kiew-Lian; Nathan, Sheila

    2018-04-01

    Synthetic biology is a platform that enables high-level synthesis of useful products such as pharmaceutically related drugs, bioplastics, and green fuels from synthetic DNA constructs. Large-scale expression of these products can be achieved in an industrially compliant host such as Escherichia coli. To maximise the production of recombinant proteins in a heterologous host, the genes of interest are usually codon optimized based on the codon usage of the host. However, the bioinformatics freeware available for standard codon optimization might not be ideal for determining the best sequence for the synthesis of synthetic DNA. Synthesis of incorrect sequences can prove to be a costly error; to avoid this, a codon optimization strategy was developed based on E. coli codon usage, using the efor RED reporter gene as a test case. This strategy replaces codons encoding serine, leucine, proline, and threonine with the most frequently used codons in E. coli, while codons encoding valine and glycine are substituted with the second most highly used codons in E. coli. Both the optimized and original efor RED genes were ligated to the pJS209 plasmid backbone using Gibson Assembly, and the recombinant DNAs were transformed into the E. coli E. cloni 10G strain. The fluorescence intensity per cell density of the optimized sequence was improved by 20% compared to the original sequence. Hence, the developed codon optimization strategy is proposed when designing an optimal sequence for heterologous protein production in E. coli.
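
    The substitution rule described above amounts to a small lookup table applied during back-translation; the table below is an illustrative rendering of the strategy (codon frequencies vary between usage tables), and fallback_codon_table is a hypothetical helper for residues the strategy leaves unchanged.

```python
# Illustrative rendering of the substitution rule: most-frequent E. coli
# codons for Ser, Leu, Pro, Thr; second-most-frequent for Val and Gly.
PREFERRED = {"S": "AGC", "L": "CTG", "P": "CCG", "T": "ACC",
             "V": "GTT", "G": "GGT"}

def optimize_cds(protein_seq, fallback_codon_table):
    """Back-translate a protein, swapping in preferred codons where defined.

    fallback_codon_table: hypothetical amino acid -> codon mapping used for
    residues not covered by the strategy.
    """
    return "".join(PREFERRED[aa] if aa in PREFERRED
                   else fallback_codon_table[aa]
                   for aa in protein_seq)
```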

  6. Combined use of Kappa Free Light Chain Index and Isoelectrofocusing of Cerebro-Spinal Fluid in Diagnosing Multiple Sclerosis: Performances and Costs.

    PubMed

    Crespi, Ilaria; Sulas, Maria Giovanna; Mora, Riccardo; Naldi, Paola; Vecchio, Domizia; Comi, Cristoforo; Cantello, Roberto; Bellomo, Giorgio

    2017-03-01

    Isoelectrofocusing (IEF) to detect oligoclonal bands (OCBs) in cerebrospinal fluid (CSF) is the gold standard approach for evaluating intrathecal immunoglobulin synthesis in multiple sclerosis (MS), but the kappa free light chain index (KFLCi) is emerging as an alternative marker, and the combined/sequential use of IEF and KFLCi has never been formally evaluated. CSF and serum albumin, IgG, kappa free light chains (kFLC), and lambda free light chains (lFLC) were measured by nephelometry; albumin, IgG, and kFLC quotients as well as Link and kFLC indexes were calculated; OCBs were evaluated by immunofixation. A total of 150 consecutive patients were investigated: 48 with MS, 32 with other neurological inflammatory diseases (NID), 62 with neurological non-inflammatory diseases (NNID), and 8 without any detectable neurological disease (NND). Both IEF and KFLCi showed similar accuracy as diagnostic tests for multiple sclerosis. The high sensitivity and specificity, together with the lower cost of KFLCi, suggested using this test first, followed by IEF as a confirmatory procedure. The sequential use of KFLCi and IEF showed high diagnostic efficiency, with cost reductions of 43% and 21% compared to the contemporary use of both tests and to the use of IEF alone in all patients, respectively. "Sequential testing" using KFLCi followed by IEF in MS represents an optimal procedure with accurate performance and lower costs.

  7. A case management tool for occupational health nurses: development, testing, and application.

    PubMed

    Mannon, J A; Conrad, K M; Blue, C L; Muran, S

    1994-08-01

    1. Case management is a process of coordinating an individual client's health care services to achieve optimal, quality care delivered in a cost effective manner. The case manager establishes a provider network, recommends treatment plans that assure quality and efficacy while controlling costs, monitors outcomes, and maintains a strong communication link among all the parties. 2. Through development of audit tools such as the one presented in this article, occupational health nurses can document case management activities and provide employers with measurable outcomes. 3. The Case Management Activity Checklist was tested using data from 61 firefighters' musculoskeletal injury cases. 4. The activities on the checklist are a step by step process: case identification/case disposition; assessment; return to work plan; resource identification; collaborative communication; and evaluation.

  8. Application of multi-objective optimization to pooled experiments of next generation sequencing for detection of rare mutations.

    PubMed

    Zilinskas, Julius; Lančinskas, Algirdas; Guarracino, Mario Rosario

    2014-01-01

    In this paper we propose mathematical models to plan a Next Generation Sequencing (NGS) experiment for detecting rare mutations in pools of patients. A mathematical optimization problem is formulated for optimal pooling, with respect to minimization of the experiment cost. Two different strategies for replicating patients in pools are then proposed, which have the advantage of decreasing overall costs. Finally, a multi-objective optimization formulation is proposed, in which the trade-off between the probability of detecting a mutation and the overall costs is taken into account. The proposed solutions are designed to deliver the following advantages: (i) the solution guarantees that mutations are detectable in the experimental setting, and (ii) the cost of the NGS experiment and its biological validation using Sanger sequencing is minimized. Simulations show that replicating pools can decrease overall experimental cost, making pooling an interesting option.
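
    A sketch of the feasibility requirement behind advantage (i): a variant carried by one heterozygous patient in a pool of k individuals is diluted to an allele fraction of 1/(2k) and must still yield enough supporting reads at the chosen coverage. The binomial model and all parameters below are illustrative, not the paper's formulation.

```python
from math import comb

def detection_probability(pool_size, coverage, min_alt_reads=4):
    """P(variant detectable) for one heterozygous carrier in a pool.

    The alt-allele fraction is 1/(2 * pool_size); detection requires at
    least min_alt_reads supporting reads out of `coverage` at the site.
    """
    p = 1.0 / (2 * pool_size)
    miss = sum(comb(coverage, i) * p**i * (1 - p)**(coverage - i)
               for i in range(min_alt_reads))
    return 1.0 - miss

print(detection_probability(pool_size=8, coverage=500))
```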

  9. Optimization of joint energy micro-grid with cold storage

    NASA Astrophysics Data System (ADS)

    Xu, Bin; Luo, Simin; Tian, Yan; Chen, Xianda; Xiong, Botao; Zhou, Bowen

    2018-02-01

    To reduce curtailment of distributed photovoltaic (PV) generation, to make full use of a joint energy micro-grid with cold storage, and to lower high operating costs, the economic dispatch of the joint energy micro-grid load is particularly important. Considering the different electricity prices during peak and valley periods, an optimization model is established that takes the minimum production cost and minimum PV curtailment fluctuation as its objectives. The linear weighted sum method and a genetic-taboo Particle Swarm Optimization (PSO) algorithm are used to solve the optimization model and obtain the optimal power supply output. Taking the garlic market in Henan as an example, the simulation results show that, by accounting for distributed PV and time-differentiated prices, the optimization strategy is able to reduce operating costs and accommodate PV power efficiently.

  10. Additive Manufacturing of Low Cost Upper Stage Propulsion Components

    NASA Technical Reports Server (NTRS)

    Protz, Christopher; Bowman, Randy; Cooper, Ken; Fikes, John; Taminger, Karen; Wright, Belinda

    2014-01-01

    NASA is currently developing Additive Manufacturing (AM) technologies and design tools aimed at reducing the costs and manufacturing time of regeneratively cooled rocket engine components. These Low Cost Upper Stage Propulsion (LCUSP) tasks are funded through NASA's Game Changing Development Program in the Space Technology Mission Directorate. The LCUSP project will develop a copper alloy additive manufacturing design process and develop and optimize the Electron Beam Freeform Fabrication (EBF3) manufacturing process to direct deposit a nickel alloy structural jacket and manifolds onto an SLM-manufactured GRCop chamber and Ni-alloy nozzle. In order to develop these processes, the project will characterize both the microstructural and mechanical properties of the SLM-produced GRCop-84, and will explore and document novel design techniques specific to AM combustion devices components. These manufacturing technologies will be used to build a 25K-class regenerative chamber and nozzle (to be used with tested DMLS injectors) that will be tested individually and as a system in hot-fire tests to demonstrate the applicability of the technologies. These tasks are expected to bring costs and manufacturing time down, as spacecraft propulsion systems typically comprise more than 70% of the total vehicle cost and account for a significant portion of the development schedule. Additionally, high-pressure/high-temperature combustion chambers and nozzles must be regeneratively cooled to survive their operating environment, causing their design to be time consuming and costly to build. LCUSP presents an opportunity to develop and demonstrate a process that can infuse these technologies into industry, build competition, and drive down the costs of future engines.

  11. Cost optimization for buildings with hybrid ventilation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Kun; Lu, Yan

    A method including: computing a total cost for a first zone in a building, wherein the total cost is equal to an actual energy cost of the first zone plus a thermal discomfort cost of the first zone; and heuristically optimizing the total cost to identify temperature setpoints for a mechanical heating/cooling system and a start time and an end time of the mechanical heating/cooling system, based on external weather data and occupancy data of the first zone.
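
    A minimal sketch of the claimed method, with energy_cost and discomfort_cost as hypothetical callables that would be driven by the external weather and occupancy data; the plain grid search shown here is one simple stand-in for the unspecified heuristic optimization.

```python
from itertools import product

def total_cost(setpoint, start_h, end_h, energy_cost, discomfort_cost):
    """Objective from the method above: actual energy cost of the zone
    plus its thermal discomfort cost, for one candidate schedule."""
    return (energy_cost(setpoint, start_h, end_h)
            + discomfort_cost(setpoint, start_h, end_h))

def heuristic_optimize(energy_cost, discomfort_cost):
    """Exhaustive search over a coarse grid of setpoints and on/off times."""
    grid = product(range(20, 27),          # cooling setpoint, degC
                   range(0, 24),           # mechanical system start hour
                   range(1, 25))           # mechanical system end hour
    feasible = ((s, a, b) for s, a, b in grid if a < b)
    return min(feasible,
               key=lambda v: total_cost(*v, energy_cost, discomfort_cost))
```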

  12. Replica Approach for Minimal Investment Risk with Cost

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-06-01

    In the present work, the optimal portfolio minimizing the investment risk with cost is discussed analytically, where an objective function is constructed in terms of two negative aspects of investment, the risk and cost. We note the mathematical similarity between the Hamiltonian in the mean-variance model and the Hamiltonians in the Hopfield model and the Sherrington-Kirkpatrick model, show that we can analyze this portfolio optimization problem by using replica analysis, and derive the minimal investment risk with cost and the investment concentration of the optimal portfolio. Furthermore, we validate our proposed method through numerical simulations.

  13. Low-Cost Bio-Based Phase Change Materials as an Energy Storage Medium in Building Envelopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biswas, Kaushik; Abhari, Mr. Ramin; Shukla, Dr. Nitin

    2015-01-01

    A promising approach to increasing the energy efficiency of buildings is the implementation of phase change materials (PCMs) in building envelope systems. Several studies have reported the energy saving potential of PCMs in building envelopes. However, wide application of PCMs in buildings has been inhibited, in part, by their high cost. This article describes a novel paraffin product made of naturally occurring fatty acids/glycerides trapped in high-density polyethylene (HDPE) pellets and its performance in a building envelope application, with the ultimate goal of commercializing a low-cost PCM platform. The low-cost PCM pellets were mixed with cellulose insulation, installed in external walls, and field-tested under natural weatherization conditions for a period of several months. In addition, several PCM samples and PCM-cellulose samples were prepared under controlled conditions for laboratory-scale testing. The laboratory tests were performed to determine the phase change properties of PCM-enhanced cellulose insulation at both microscopic and macroscopic levels. This article presents the data and analysis from the exterior test wall and the laboratory-scale test data. PCM behavior is influenced by the weather and interior conditions, the PCM phase change temperature, and the PCM distribution within the wall cavity, among other factors. Under optimal conditions, the field data showed up to a 20% reduction in weekly heat transfer through an external wall due to the PCM, compared to cellulose-only insulation.

  14. Space Transportation System (STS) propellant scavenging system study. Volume 3: Cost and work breakdown structure-dictionary

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Fundamentally, the volumes of the oxidizer and fuel propellant scavenged from the orbiter and external tank determine the size and weight of the scavenging system. The optimization of system dimensions and weights is driven by the requirement to minimize the use of partial length of the orbiter payload bay. Thus, the cost estimates begin with weights established for the optimum design. Both the design, development, test, and evaluation costs and the theoretical first-unit hardware production costs are estimated from parametric cost-weight scaling relations for four subsystems. For cryogenic propellants, the widely differing characteristics of the oxidizer and the fuel lead to two separate tank subsystems, in addition to the electrical and instrumentation subsystems. Hardware costs also involve quantity as an independent variable, since the number of production scavenging systems is not firm. For storable propellants, since the tankage volumes of the oxidizer and fuel are equal, the hardware production costs for these systems are lower than for cryogenic propellants.

  15. High Volume Pulsed EPC for T/R Modules in Satellite Constellation

    NASA Astrophysics Data System (ADS)

    Notarianni, Michael; Maynadier, Paul; Marin, Marc

    2014-08-01

    In the frame of the Iridium Next business, a mobile satellite service, Thales Alenia Space (TAS) has to produce more than 2400 x 65W and 162 x 250W pulsed Electronic Power Conditioners (EPC) to supply the RF transmit/receive modules that compose the active antenna of the satellites. The company has to deal with mass production constraints where cost, volume, and performance are crucial factors. Compared to previous constellations realized by TAS, the overall challenge is to make further improvements in a short time:
    - predictable electrical models
    - a deeper design-to-cost approach
    - streamlining improvements and test coverage
    As the active antenna drives the consumption of the payload, accurate performance was evaluated early owing to the simultaneous use of simulation (based on an average model) and breadboard tests. The necessary cost reduction was achieved through the large use of COTS (Components Off The Shelf). In order to secure cost and schedule, each manufacturing step has been optimized to maximize test coverage and guarantee high reliability. At this time, more than 200 flight models have already been manufactured, validating this approach. This paper focuses on the 65W EPC, but the same activities have been carried out on the 250W EPC.

  16. A Framework for Robust Multivariable Optimization of Integrated Circuits in Space Applications

    NASA Technical Reports Server (NTRS)

    DuMonthier, Jeffrey; Suarez, George

    2013-01-01

    Application Specific Integrated Circuit (ASIC) design for space applications involves the multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem which must be solved early in the development cycle of a system, because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and analyze the results in a way that facilitates identifying the trade-offs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort, as most common tasks are already encapsulated. Customization is required for the simulation test benches that determine performance metrics and for cost function computation. Templates provide a starting point for both, while toolbox functions minimize the code required. Once a test bench has been coded to optimize a particular circuit, it is also used to verify the final design. The combination of test bench and cost function can then serve as a template for similar circuits or be re-used to migrate the design to different processes by re-running it with the new process-specific device models. The system has been used in the design of time-to-digital converters for laser ranging and time-of-flight mass spectrometry to optimize analog, mixed-signal, and digital circuits such as charge-sensitive amplifiers, comparators, delay elements, radiation-tolerant dual-interlocked (DICE) flip-flops, and two-of-three voter gates.

  17. Optimization of Medication Use at Accountable Care Organizations.

    PubMed

    Wilks, Chrisanne; Krisle, Erik; Westrich, Kimberly; Lunner, Kristina; Muhlestein, David; Dubois, Robert

    2017-10-01

    Optimized medication use involves the effective use of medications for better outcomes, improved patient experience, and lower costs. Few studies systematically gather data on the actions accountable care organizations (ACOs) have taken to optimize medication use. To (a) assess how ACOs optimize medication use; (b) establish an association between efforts to optimize medication use and achievement on financial and quality metrics; (c) identify organizational factors that correlate with optimized medication use; and (d) identify barriers to optimized medication use. This cross-sectional study consisted of a survey and interviews that gathered information on the perceptions of ACO leadership. The survey contained a medication practices inventory (MPI) composed of 38 capabilities across 6 functional domains related to optimizing medication use. ACOs completed self-assessments that included rating each component of the MPI on a scale of 1 to 10. Fisher's exact tests, 2-proportions tests, t-tests, and logistic regression were used to test for associations between ACO scores on the MPI and performance on financial and quality metrics, and on ACO descriptive characteristics. Of the 847 ACOs that were contacted, 49 provided usable survey data. These ACOs rated their own system's ability to manage the quality and costs of optimizing medication use, providing a 64% and 31% affirmative response, respectively. Three ACOs achieved an overall MPI score of 8 or higher, 45 scored between 4 and 7.9, and 1 scored between 0 and 3.9. Using the 3 score groups, the study did not identify a relationship between MPI scores and achievement on financial or quality benchmarks, ACO provider type, member volume, date of ACO creation, or the presence of a pharmacist in a leadership position. Barriers to optimizing medication use relate to reimbursement for pharmacist integration, lack of health information technology interoperability, lack of data, feasibility issues, and physician buy-in. Compared with 2012 data, data on ACOs that participated in this study show that they continue to build effective strategies to optimize medication use. These ACOs struggle with both notification related to prescription use and measurement of the influence optimized medication use has on costs and quality outcomes. Compared with the earlier study, these data find that more ACOs are involving pharmacists directly in care, expanding the use of generics, electronically transmitting prescriptions, identifying gaps in care and potential adverse events, and educating patients on therapeutic alternatives. ACO-level policies that facilitate practices to optimize medication use are needed. Integrating pharmacists into care, giving both pharmacists and physicians access to clinical data, obtaining physician buy-in, and measuring the impact of practices to optimize medication use may improve these practices. This research was sponsored and funded by the National Pharmaceutical Council (NPC), an industry funded health policy research group that is not involved in lobbying or advocacy. Employees of the sponsor contributed to the research questions, determination of the relevance of the research questions, and the research design. Specifically, there was involvement in the survey and interview instruments. They also contributed to some data interpretation and revision of the manuscript. 
Leavitt Partners was hired by NPC to conduct research for this study and also serves a number of health care clients, including life sciences companies, provider organizations, accountable care organizations, and payers. Westrich and Dubois are employed by the NPC. Wilks, Krisle, Lunner, and Muhlestein are employed by Leavitt Partners and did not receive separate compensation. Study concept and design were contributed by Krisle, Dubois, and Muhlestein, along with Lunner and Westrich. Krisle and Muhlestein collected the data, and data interpretation was performed by Wilks, Krisle, and Muhlestein, along with Dubois and Westrich. The manuscript was written primarily by Wilks, along with Krisle and Muhlestein, and revised by Wilks, Westrich, Lunner, and Krisle. Preliminary versions of this work were presented at the following: National Council for Prescription Drug Programs Educational Summit, November 1, 2016; Academy Health 2016 Annual Research Meeting, June 27, 2016; Accountable Care Learning Collaborative Webinar, June 16, 2016; the 21st Annual PBMI Drug Benefit Conference, February 29, 2016; National Value-Based Payment and Pay for Performance Summit, February 17, 2016; National Accountable Care Congress, November 17, 2015; and American Journal of Managed Care's ACO Emerging Healthcare Delivery Coalition, Fall 2015 Live Meeting, October 15, 2015.

  18. Strategies for Diagnosing and Treating Suspected Acute Bacterial Sinusitis

    PubMed Central

    Balk, Ethan M; Zucker, Deborah R; Engels, Eric A; Wong, John B; Williams, John W; Lau, Joseph

    2001-01-01

    OBJECTIVE Symptoms suggestive of acute bacterial sinusitis are common. Available diagnostic and treatment options generate substantial costs with uncertain benefits. We assessed the cost-effectiveness of alternative management strategies to identify the optimal approach. DESIGN For such patients, we created a Markov model to examine four strategies: 1) no antibiotic treatment; 2) empirical antibiotic treatment; 3) clinical criteria-guided treatment; and 4) radiography-guided treatment. The model simulated a 14-day course of illness, included sinusitis prevalence, antibiotic side effects, sinusitis complications, direct and indirect costs, and symptom severity. Strategies costing less than $50,000 per quality-adjusted life year gained were considered “cost-effective.” MEASUREMENTS AND MAIN RESULTS For mild or moderate disease, basing antibiotic treatment on clinical criteria was cost-effective in clinical settings where sinusitis prevalence is within the range of 15% to 93% or 3% to 63%, respectively. For severe disease, or to prevent sinusitis or antibiotic side effect symptoms, use of clinical criteria was cost-effective in settings with lower prevalence (below 51% or 44%, respectively); empirical antibiotics was cost-effective with higher prevalence. Sinus radiography-guided treatment was never cost-effective for initial treatment. CONCLUSIONS Use of a simple set of clinical criteria to guide treatment is a cost-effective strategy in most clinical settings. Empirical antibiotics are cost-effective in certain settings; however, their use results in many unnecessary prescriptions. If this resulted in increased antibiotic resistance, costs would substantially rise and efficacy would fall. Newer, expensive antibiotics are of limited value. Additional testing is not cost-effective. Further studies are needed to find an accurate, low-cost diagnostic test for acute bacterial sinusitis. PMID:11679039

  19. Low cost Ku-band earth terminals for voice/data/facsimile

    NASA Technical Reports Server (NTRS)

    Kelley, R. L.

    1977-01-01

    A Ku-band satellite earth terminal capable of providing two-way voice/facsimile teleconferencing, 128 Kbps data, telephone, and high-speed imagery services is proposed. Optimized terminal cost and configuration are presented as a function of FDMA and TDMA approaches to multiple access. The entire terminal, from the antenna to microphones, speakers, and facsimile equipment, is considered. Component cost versus performance has been projected as a function of procurement size and of predicted hardware innovations and production techniques through 1985. The lowest-cost combinations of components have been determined using a computer optimization algorithm. The system requirements, including terminal EIRP and G/T, satellite size, power per spacecraft transponder, satellite antenna characteristics, and link propagation outage, were selected using a computerized system cost/performance optimization algorithm. System cost and terminal cost and performance requirements are presented as a function of the size of a nationwide U.S. network. Service costs are compared with typical conference travel costs to show the viability of the proposed terminal.

  20. Robotic lower limb prosthesis design through simultaneous computer optimizations of human and prosthesis costs

    NASA Astrophysics Data System (ADS)

    Handford, Matthew L.; Srinivasan, Manoj

    2016-02-01

    Robotic lower limb prostheses can improve the quality of life for amputees. Development of such devices, currently dominated by long prototyping periods, could be sped up by predictive simulations. In contrast to some amputee simulations which track experimentally determined non-amputee walking kinematics, here, we explicitly model the human-prosthesis interaction to produce a prediction of the user’s walking kinematics. We obtain simulations of an amputee using an ankle-foot prosthesis by simultaneously optimizing human movements and prosthesis actuation, minimizing a weighted sum of human metabolic and prosthesis costs. The resulting Pareto optimal solutions predict that increasing prosthesis energy cost, decreasing prosthesis mass, and allowing asymmetric gaits all decrease human metabolic rate for a given speed and alter human kinematics. The metabolic rates increase monotonically with speed. Remarkably, by performing an analogous optimization for a non-amputee human, we predict that an amputee walking with an appropriately optimized robotic prosthesis can have a lower metabolic cost - even lower than assuming that the non-amputee’s ankle torques are cost-free.
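
    Tracing a Pareto front by minimizing a weighted sum of two competing costs, as described above, can be sketched as follows. This is a generic illustration with toy quadratic objectives; the paper's actual human metabolic and prosthesis cost models are far richer:

        import numpy as np
        from scipy.optimize import minimize

        def human_cost(x):       # toy stand-in for the human metabolic cost
            return (x[0] - 1.0) ** 2 + 0.5 * x[1] ** 2

        def prosthesis_cost(x):  # toy stand-in for the prosthesis energy cost
            return x[0] ** 2 + (x[1] - 1.0) ** 2

        pareto = []
        for w in np.linspace(0.0, 1.0, 11):   # sweep the trade-off weight
            res = minimize(lambda x: w * human_cost(x)
                                     + (1 - w) * prosthesis_cost(x),
                           x0=np.zeros(2))
            pareto.append((human_cost(res.x), prosthesis_cost(res.x)))

    Each weight yields one Pareto-optimal pair of costs; sweeping the weight traces the trade-off curve from which design conclusions are drawn.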

  1. Wind tunnel technology for the development of future commercial aircraft

    NASA Technical Reports Server (NTRS)

    Szodruch, J.

    1986-01-01

    Requirements for new technologies in the area of civil aircraft design are mainly related to the high cost involved in the purchase of modern, fuel-saving aircraft. A second important factor is the long-term rise in the price of fuel. The demonstration of the benefits of new technologies, as far as these are related to aerodynamics, will, for the foreseeable future, still be based on wind tunnel measurements. Theoretical computation methods are very successfully used in design work, wing optimization, and estimation of the Reynolds number effect. However, wind tunnel tests are still needed to verify the feasibility of the considered concepts. Along with other costs, the cost of the wind tunnel tests needed for the development of an aircraft is steadily increasing. The present investigation is concerned with the effect of numerical aerodynamics and civil aircraft technology on the development of wind tunnels. Attention is given to the requirements for the wind tunnel, investigative methods, measurement technology, models, and the relation between wind tunnel experiments and theoretical methods.

  2. Development and use of computational techniques in Army Aviation research and development programs for crash resistant helicopter technology

    NASA Technical Reports Server (NTRS)

    Burrows, Leroy T.

    1993-01-01

    During the 1960's, over 30 full-scale aircraft crash tests were conducted by the Flight Safety Foundation under contract to the Aviation Applied Technology Directorate (AATD) of the U.S. Army Aviation Systems Command (AVSCOM). The purpose of these tests was to conduct crash injury investigations that would provide a basis for the formulation of sound crash resistance design criteria for light fixed-wing and rotary-wing aircraft. This resulted in the Crash Survival Design Criteria Designer's Guide, which was first published in 1967 and has been revised numerous times, most recently in 1989. Full-scale aircraft crash testing is an expensive way to investigate structural deformations of occupied spaces and to determine the decelerative loadings experienced by occupants in a crash. This gave initial impetus to the U.S. Army to develop analytical methods to predict the dynamic response of aircraft structures in a crash. It was believed that such analytical tools could be very useful in the preliminary design stage of a new helicopter system that is required to demonstrate a level of crash resistance, and that they would be more cost-effective than full-scale crash tests or numerous component design support tests. From an economic point of view, it is more efficient to optimize for the incorporation of crash resistance features early in the design stage. However, during preliminary design it is doubtful whether sufficient design details, which influence the exact plastic deformation shape of structural elements, will be available. The availability of simple procedures to predict energy absorption and load-deformation characteristics will allow the designer to initiate valuable cost, weight, and geometry tradeoff studies. The development of these procedures will require some testing of typical specimens. This testing should, as a minimum, verify the validity of proposed procedures for providing pertinent nonlinear load-deformation data. It was hoped that through the use of these analytical models, the designer could optimize aircraft design for crash resistance from both a weight and cost increment standpoint, thus enhancing the acceptance of the design criteria for crash resistance.

  3. Baseload Nitrate Salt Central Receiver Power Plant Design Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tilley, Drake; Kelly, Bruce; Burkholder, Frank

    The objectives of the work were to demonstrate that a 100 MWe central receiver plant, using nitrate salt as the receiver coolant, thermal storage medium, and heat transport fluid in the steam generator, can 1) operate, at full load, for 6,400 hours each year using only solar energy, and 2) satisfy the DOE levelized energy cost goal of $0.09/kWhe (real 2009 $). To achieve these objectives, the work incorporated a large range of tasks relating to many different aspects of a molten salt tower plant. The first phase of the project focused on developing a baseline design for a Molten Salt Tower and validating areas for improvement. Tasks included a market study, receiver design, heat exchanger design, preliminary heliostat design, solar field optimization, and baseline system design, including PFDs and P&IDs and a detailed cost estimate. The baseline plant met the initial goal of less than $0.14/kWhe and reinforced the need to reduce costs in several key areas to reach the overall $0.09/kWhe goal. The major improvements identified from Phase I were: 1) higher temperature salt to improve cycle efficiency and reduce storage requirements, 2) an improved receiver coating to increase the efficiency of the receiver, 3) a large receiver design to maximize storage and meet the baseload hours objective, and 4) a lower cost heliostat field. The second phase of the project looked at advancing the baseline tower with the identified improvements and included key prototypes. To validate increasing the standard solar salt temperature to 600 °C, a dynamic test was conducted at Sandia. The results ultimately proved the hypothesis incorrect, showing high oxide production and corrosion rates, and led to further testing of systems to mitigate the oxide production so that the salt temperature could be increased for a commercial plant. Foster Wheeler worked on the receiver design in both Phase I and Phase II, looking at both design and lowering costs by utilizing commercial fossil boiler manufacturing. The cost and design goals for the project were met with this task, but the most interesting results had to do with defining the failure modes and looking at a “shakedown analysis” of the combined creep-fatigue failure. A separate task looked at improving the absorber coatings on the receiver tubes to increase the efficiency of the receiver. Significant progress was made on developing a novel paint with a high absorptivity on par with the current Pyromark and with additional potential to be optimized further. Although the coating did not meet the emissivity goals, preliminary testing of the new paint shows it has the potential to be much more durable and to improve receiver efficiency through a higher average absorptivity over the coating's lifetime. Additional coatings were also designed, and modeled results met the project goals, but these were not tested. A rig for low-cycle fatigue testing of the full-length receiver tubes was designed and constructed, and testing is still under way. A novel small heliostat was developed through extensive brainstorming and down-selection. The concept was then detailed further with inputs from component testing, and eventually a full prototype was built and tested. This task met or exceeded the accuracy and structure goals and also beat the cost goal. This provides significant solar field cost savings for Abengoa that will be developed further for use in future commercial plants. Ultimately, the $0.09/kWhe (real 2009 $) and 6,400-hour goals of the project were met.

  4. Optimal management of a stochastically varying population when policy adjustment is costly.

    PubMed

    Boettiger, Carl; Bode, Michael; Sanchirico, James N; Lariviere, Jacob; Hastings, Alan; Armsworth, Paul R

    2016-04-01

    Ecological systems are dynamic and policies to manage them need to respond to that variation. However, policy adjustments will sometimes be costly, which means that fine-tuning a policy to track variability in the environment very tightly will only sometimes be worthwhile. We use a classic fisheries management problem, how to manage a stochastically varying population using annually varying quotas in order to maximize profit, to examine how costs of policy adjustment change optimal management recommendations. Costs of policy adjustment (changes in fishing quotas through time) could take different forms. For example, these costs may respond to the size of the change being implemented, or there could be a fixed cost any time a quota change is made. We show how different forms of policy costs have contrasting implications for optimal policies. Though it is frequently assumed that costs to adjusting policies will dampen variation in the policy, we show that certain cost structures can actually increase variation through time. We further show that failing to account for adjustment costs has a consistently worse economic impact than would assuming these costs are present when they are not.
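
    The qualitative findings can be reproduced with a small stochastic dynamic program in which the state is (stock, previous quota) and the reward is harvest profit minus a policy-adjustment penalty. The sketch below uses toy dynamics and parameter values of my own choosing, with a fixed cost charged whenever the quota changes:

        import numpy as np

        stocks = np.linspace(0.1, 1.0, 10)   # discretized fish stock levels
        quotas = np.linspace(0.0, 0.4, 5)    # allowed annual quota levels
        beta, c_fix = 0.95, 0.05             # discount factor; fixed cost of a quota change

        def step(s, q):
            """Harvest, then toy logistic regrowth, clipped to the state grid."""
            h = min(q, s)                    # cannot harvest more than the stock
            s_next = (s - h) + 0.5 * (s - h) * (1.0 - (s - h))
            return h, float(np.clip(s_next, stocks[0], stocks[-1]))

        V = np.zeros((len(stocks), len(quotas)))   # value over (stock, previous quota)
        for _ in range(200):                       # value iteration
            V_new = np.empty_like(V)
            for i, s in enumerate(stocks):
                for j in range(len(quotas)):
                    best = -np.inf
                    for k, q in enumerate(quotas):
                        h, s2 = step(s, q)
                        profit = h - c_fix * (k != j)   # revenue minus adjustment cost
                        i2 = int(np.abs(stocks - s2).argmin())
                        best = max(best, profit + beta * V[i2, k])
                    V_new[i, j] = best
            V = V_new

    Varying the form of the penalty (fixed versus proportional to the size of the change) in such a model is what reveals the contrasting effects on policy variability described above.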

  5. Evidence for composite cost functions in arm movement planning: an inverse optimal control approach.

    PubMed

    Berret, Bastien; Chiovetto, Enrico; Nori, Francesco; Pozzo, Thierry

    2011-10-01

    An important issue in motor control is understanding the basic principles underlying the accomplishment of natural movements. According to optimal control theory, the problem can be stated in these terms: what cost function do we optimize to coordinate the many more degrees of freedom than necessary to fulfill a specific motor goal? This question has not received a final answer yet, since what is optimized partly depends on the requirements of the task. Many cost functions were proposed in the past, and most of them were found to be in agreement with experimental data. Therefore, the actual principles on which the brain relies to achieve a certain motor behavior are still unclear. Existing results might suggest that movements are the result of minimizing not single but rather composite cost functions. In order to better clarify this last point, we consider an innovative experimental paradigm characterized by arm reaching with target redundancy. Within this framework, we make use of an inverse optimal control technique to automatically infer the (combination of) optimality criteria that best fit the experimental data. Results show that the subjects exhibited a consistent behavior during each experimental condition, even though the target point was not prescribed in advance. Inverse and direct optimal control together reveal that the average arm trajectories were best replicated when optimizing the combination of two cost functions, namely a mix between the absolute work of torques and the integrated squared joint acceleration. Our results thus support the cost combination hypothesis and demonstrate that the recorded movements were closely linked to the combination of two complementary functions related to mechanical energy expenditure and joint-level smoothness.
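
    In the notation common to such inverse optimal control studies, the inferred composite criterion can be written as a convex combination of the two elementary costs (the weight $\alpha$ is generic notation for illustration, not necessarily the authors' symbol):

        J(\alpha) = \alpha \int_0^T \sum_i \left| \tau_i \, \dot{q}_i \right| dt \; + \; (1 - \alpha) \int_0^T \| \ddot{q} \|^2 \, dt, \qquad 0 \le \alpha \le 1,

    where $\tau_i$ and $q_i$ denote the joint torques and joint angles; the first term measures the absolute work of torques and the second the integrated squared joint acceleration, and the inverse optimization estimates $\alpha$ from the recorded trajectories.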

  6. Testing the Birth Unit Design Spatial Evaluation Tool (BUDSET) in Australia: a pilot study.

    PubMed

    Foureur, Maralyn J; Leap, Nicky; Davis, Deborah L; Forbes, Ian F; Homer, Caroline E S

    2011-01-01

    To pilot test the Birth Unit Design Spatial Evaluation Tool (BUDSET) in an Australian maternity care setting to determine whether such an instrument can measure the optimality of different birth settings. Optimally designed spaces to give birth are likely to influence a woman's ability to experience physiologically normal labor and birth. This is important in the current industrialized environment, where increased caesarean section rates are causing concerns. The measurement of an optimal birth space is currently impossible, because there are limited tools available. A quantitative study was undertaken to pilot test the discriminant ability of the BUDSET in eight maternity units in New South Wales, Australia. Five auditors trained in the use of the BUDSET assessed the birth units using the BUDSET, which is based on 18 design principles and is divided into four domains (Fear Cascade, Facility, Aesthetics, and Support) with three to eight assessable items in each. Data were independently collected in eight birth units. Values for each of the domains were aggregated to provide an overall Optimality Score for each birth unit. A range of Optimality Scores was derived for each of the birth units (from 51 to 77 out of a possible 100 points). The BUDSET identified units with low-scoring domains. Essentially these were older units and conventional labor ward settings. The BUDSET provides a way to assess the optimality of birth units and determine which domain areas may need improvement. There is potential for improvements to existing birth spaces, and considerable improvement can be made with simple low-cost modifications. Further research is needed to validate the tool.

  7. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    PubMed

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
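
    The core effect has a compact textbook form: under a piecewise-linear error cost with unequal slopes $c_-$ for underestimation and $c_+$ for overestimation (notation mine), the estimate minimizing the expected cost is the quantile

        \hat{\theta} = F^{-1}\!\left( \frac{c_-}{c_- + c_+} \right),

    where $F$ is the distribution of the uncertain parameter. Whenever $c_- \neq c_+$, this optimal estimate is systematically shifted away from the maximum likelihood value, which is exactly the kind of deviation the model above formalizes.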

  8. A combined NLP-differential evolution algorithm approach for the optimization of looped water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2011-08-01

    This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.
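
    Step one, identifying the shortest-distance tree of the looped network, can be sketched with a standard Dijkstra pass; the multisource extension described above corresponds to seeding the heap with every source node at distance zero. This is illustrative code, not the authors' implementation:

        import heapq

        def shortest_distance_tree(adj, sources):
            """adj: {node: [(neighbor, length), ...]}; returns parent links of the tree.
            Seeding all sources at distance 0 gives the multisource extension."""
            dist = {v: float("inf") for v in adj}
            parent = {v: None for v in adj}
            heap = [(0.0, s) for s in sources]
            for s in sources:
                dist[s] = 0.0
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist[u]:
                    continue                      # stale heap entry
                for v, w in adj[u]:
                    if d + w < dist[v]:
                        dist[v], parent[v] = d + w, u
                        heapq.heappush(heap, (d + w, v))
            return parent   # tree edges (v, parent[v]); all other pipes are chords

    The tree pipes then go to the NLP solver in step two, while the chords keep their minimum allowable sizes until the DE refinement in step three.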

  9. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    PubMed

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% communication cost compared with the state-of-the-art switch-based scheme.
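
    A schematic form of such a joint ILP (a generic formulation for illustration, not the authors' exact model) is

        \min \; \sum_{f} \sum_{e} c_e \, x_{f,e} + \sum_{s} h_s \, y_s
        \quad \text{s.t.} \quad x_{f,\cdot} \ \text{forms a path for each flow } f, \qquad
        \sum_{s \in P_f(x)} y_s \ge 1 \ \ \forall f, \qquad x_{f,e}, \, y_s \in \{0,1\},

    where $x_{f,e}$ routes flow $f$ over edge $e$, $y_s$ marks switch $s$ as polled, and the coverage constraint requires each flow's chosen path $P_f(x)$ to traverse at least one polled switch; coupling the routing variables to the polling variables is what lets routing control reduce the polling cost.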

  10. Cost Overrun Optimism: Fact or Fiction

    DTIC Science & Technology

    2016-02-29

    [Only extraction residue survives of this record's text: fragments of the report's reference list (Horngren's Cost Accounting: A Managerial Emphasis, 7th ed.; Gansler's Affording Defense; a General Accounting Office report; Heise's review of the cost performance index) and the title page: “Cost Overrun Optimism: FACT or FICTION? Maj David D. Christensen, USAF. Program managers are advocates by…”]

  11. Fireworks Algorithm with Enhanced Fireworks Interaction.

    PubMed

    Zhang, Bei; Zheng, Yu-Jun; Zhang, Min-Xia; Chen, Sheng-Yong

    2017-01-01

    As a relatively new metaheuristic in swarm intelligence, fireworks algorithm (FWA) has exhibited promising performance on a wide range of optimization problems. This paper aims to improve FWA by enhancing fireworks interaction in three aspects: 1) Developing a new Gaussian mutation operator to make sparks learn from more exemplars; 2) Integrating the regular explosion operator of FWA with the migration operator of biogeography-based optimization (BBO) to increase information sharing; 3) Adopting a new population selection strategy that enables high-quality solutions to have high probabilities of entering the next generation without incurring high computational cost. The combination of the three strategies can significantly enhance fireworks interaction and thus improve solution diversity and suppress premature convergence. Numerical experiments on the CEC 2015 single-objective optimization test problems show the effectiveness of the proposed algorithm. The application to a high-speed train scheduling problem also demonstrates its feasibility in real-world optimization problems.
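
    The first of the three strategies, a Gaussian mutation that lets a spark learn from several exemplars, might look like the following sketch (a generic FWA-style operator made concrete; the function and parameter names are mine, not the paper's):

        import random

        def gaussian_mutation_spark(firework, exemplars, sigma=1.0):
            """Create a mutation spark that moves the firework toward a blend of
            several exemplar solutions, scaled by a shared Gaussian factor."""
            n = len(firework)
            blend = [sum(e[d] for e in exemplars) / len(exemplars) for d in range(n)]
            g = random.gauss(1.0, sigma)   # one Gaussian scale per spark, FWA-style
            return [firework[d] + g * (blend[d] - firework[d]) for d in range(n)]

    Drawing exemplars from several good fireworks, rather than only the best one, is what increases information sharing relative to the standard Gaussian spark.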

  12. Particle swarm optimization and gravitational wave data analysis: Performance on a binary inspiral testbed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Yan; Mohanty, Soumya D.; Center for Gravitational Wave Astronomy, Department of Physics and Astronomy, University of Texas at Brownsville, 80 Fort Brown, Brownsville, Texas 78520

    2010-03-15

    The detection and estimation of gravitational wave signals belonging to a parameterized family of waveforms requires, in general, the numerical maximization of a data-dependent function of the signal parameters. Because of noise in the data, the function to be maximized is often highly multimodal with numerous local maxima. Searching for the global maximum then becomes computationally expensive, which in turn can limit the scientific scope of the search. Stochastic optimization is one possible approach to reducing computational costs in such applications. We report results from a first investigation of the particle swarm optimization method in this context. The method is applied to a test bed motivated by the problem of detection and estimation of a binary inspiral signal. Our results show that particle swarm optimization works well in the presence of high multimodality, making it a viable candidate method for further applications in gravitational wave data analysis.

  13. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
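
    The resampling of the population mean and covariance given n observed data points can be sketched with SciPy's distributions. This shows the sampling step only; the output-variance model and the particle swarm allocation loop are omitted, and the exact hyperparameters in the paper may differ from the common choice used here:

        import numpy as np
        from scipy import stats

        def sample_population_params(x, rng=None):
            """Draw one (mean, covariance) realization for the input variables
            from data x (rows = observations), per the multivariate-t / Wishart
            scheme named in the abstract; parameterization is an assumption."""
            rng = np.random.default_rng(rng)
            n, p = x.shape
            xbar = x.mean(axis=0)
            S = np.cov(x, rowvar=False)
            mean = stats.multivariate_t.rvs(loc=xbar, shape=S / n, df=n - 1,
                                            random_state=rng)
            cov = stats.wishart.rvs(df=n - 1, scale=S / (n - 1), random_state=rng)
            return mean, cov

    Each realization feeds one run of the simulation; the spread of the output moments across realizations shrinks as n grows, which is the effect the optimal allocation of additional experiments exploits.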

  14. Coordinated and uncoordinated optimization of networks

    NASA Astrophysics Data System (ADS)

    Brede, Markus

    2010-06-01

    In this paper, we consider spatial networks that realize a balance between an infrastructure cost (the cost of wire needed to connect the network in space) and communication efficiency, measured by average shortest path length. A global optimization procedure yields network topologies in which this balance is optimized. These are compared with network topologies generated by a competitive process in which each node strives to optimize its own cost-communication balance. Three phases are observed in globally optimal configurations for different cost-communication trade-offs: (i) regular small worlds, (ii) starlike networks, and (iii) trees with a center of interconnected hubs. In the latter regime, i.e., for very expensive wire, power laws in the link length distributions P(w) ∝ w^-α are found, which can be explained by a hierarchical organization of the networks. In contrast, in the local optimization process the presence of sharp transitions between different network regimes depends on the dimension of the underlying space. Whereas for d=∞ sharp transitions between fully connected networks, regular small worlds, and highly cliquish periphery-core networks are found, for d=1 sharp transitions are absent and the power law behavior in the link length distribution persists over a much wider range of link cost parameters. The measured power law exponents are in agreement with the hypothesis that the locally optimized networks consist of multiple overlapping suboptimal hierarchical trees.

  15. Competing Air Quality and Water Conservation Co-benefits from Power Sector Decarbonization

    NASA Astrophysics Data System (ADS)

    Peng, W.; Wagner, F.; Mauzerall, D. L.; Ramana, M. V.; Zhai, H.; Small, M.; Zhang, X.; Dalin, C.

    2016-12-01

    Decarbonizing the power sector can reduce fossil-based generation and associated air pollution and water use. However, power sector configurations that prioritize air quality benefits can be different from those that maximize water conservation benefits. Despite extensive work to optimize the generation mix under an air pollution or water constraint, little research has examined electricity transmission networks and the choice of which fossil fuel units to displace in order to achieve both environmental objectives simultaneously. When air pollution and water stress occur in different regions, the optimal transmission and displacement decisions still depend on the priorities placed on air quality and water conservation benefits even if low-carbon generation planning is fixed. Here we use China as a test case, and develop a new optimization framework to study transmission and displacement decisions and the resulting air quality and water use impacts for six power sector decarbonization scenarios in 2030 (~50% of national generation is low carbon). We fix low-carbon generation in each scenario (e.g., type, location, quantity) and vary technology choices and deployment patterns across scenarios. The objective is to minimize the total physical costs (transmission costs and coal power generation costs) and the estimated environmental costs. Environmental costs are estimated by multiplying effective air pollutant emissions (EMeff, emissions weighted by population density) and effective water use (Weff, water use weighted by a local water stress index) by their unit economic values, Vem and Vw. We are hence able to examine the effect of varying policy priorities by imposing different combinations of Vem and Vw. In all six scenarios, we find that increasing the priority on air quality co-benefits (higher Vem) reduces air pollution impacts (lower EMeff) at the expense of lower water conservation (higher Weff), and vice versa. Such results can largely be explained by differences in optimal transmission decisions due to the different locations of air pollution and water stress in China (severe in the east and north, respectively). To achieve both co-benefits simultaneously, it is therefore critical to coordinate policies that reduce air pollution (pollution tax) and water use (water pricing) with power sector planning.

  16. Control of African swine fever epidemics in industrialized swine populations.

    PubMed

    Halasa, Tariq; Bøtner, Anette; Mortensen, Sten; Christensen, Hanne; Toft, Nils; Boklund, Anette

    2016-12-25

    African swine fever (ASF) is a notifiable infectious disease with a high impact on swine health. The disease is endemic in certain regions in the Baltic countries and has spread to Poland, constituting a risk of ASF spread toward Western Europe. Therefore, as part of contingency planning, it is important to explore strategies that can effectively control an epidemic of ASF. In this study, the epidemiological and economic effects of strategies to control the spread of ASF between domestic swine herds were examined using a published model (DTU-DADS-ASF). The control strategies were the basic EU and national strategy (Basic), the basic strategy plus pre-emptive depopulation of neighboring swine herds, and intensive surveillance of herds in the control zones, including testing live or dead animals. Virus spread via wild boar was not modelled. Under the basic control strategy, the median epidemic duration was predicted to be 21 days (5th and 95th percentiles: 1-55 days), the median number of infected herds was predicted to be 3 herds (1-8), and the total costs were predicted to be €326 million (€256-€442 million). Adding pre-emptive depopulation or intensive surveillance by testing live animals resulted in marginal improvements to the control of the epidemics. However, adding testing of dead animals in the protection and surveillance zones was predicted to be the optimal control scenario for an ASF epidemic in industrialized swine populations without contact with wild boar. This optimal scenario reduced the epidemic duration to 9 days (1-38) and the total costs to €294 million (€257-€392 million). Export losses were the driving force of the total costs of the epidemics. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Sizing a rainwater harvesting cistern by minimizing costs

    NASA Astrophysics Data System (ADS)

    Pelak, Norman; Porporato, Amilcare

    2016-10-01

    Rainwater harvesting (RWH) has the potential to reduce water-related costs by providing an alternate source of water, in addition to relieving pressure on public water sources and reducing stormwater runoff. Existing methods for determining the optimal size of the cistern component of a RWH system have various drawbacks, such as specificity to a particular region, dependence on numerical optimization, and/or failure to consider the costs of the system. In this paper a formulation is developed for the optimal cistern volume which incorporates the fixed and distributed costs of a RWH system while also taking into account the random nature of the depth and timing of rainfall, with a focus on RWH to supply domestic, nonpotable uses. With rainfall inputs modeled as a marked Poisson process, and by comparing the costs associated with building a cistern with the costs of externally supplied water, an expression for the optimal cistern volume is found which minimizes the water-related costs. The volume is a function of the roof area, water use rate, climate parameters, and costs of the cistern and of the external water source. This analytically tractable expression makes clear the dependence of the optimal volume on the input parameters. An analysis of the rainfall partitioning also characterizes the efficiency of a particular RWH system configuration and its potential for runoff reduction. The results are compared to the RWH system at the Duke Smart Home in Durham, NC, USA to show how the method could be used in practice.
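
    Although the paper derives the optimum analytically, the structure of the problem (fixed plus volume-proportional cistern costs traded against the expected cost of externally supplied water under Poisson-arriving storms) can be mimicked numerically. The sketch below is a toy simulation with invented parameter values, treating the capital cost as if amortized over a single year for simplicity:

        import numpy as np

        rng = np.random.default_rng(0)
        LAM, ALPHA = 0.2, 10.0     # storm frequency (1/day), mean depth (mm) -- invented
        ROOF, USE = 100.0, 0.15    # roof area (m2), daily demand (m3)        -- invented
        C_FIX, C_V, C_W = 500.0, 200.0, 2.0  # fixed $, $/m3 cistern, $/m3 external water

        def external_use_per_day(V, days=5_000):
            """Simulate storage dynamics; return mean external water bought per day."""
            level, external = 0.0, 0.0
            for _ in range(days):
                if rng.random() < LAM:                       # a storm arrives
                    depth = rng.exponential(ALPHA) / 1000.0  # mm -> m
                    level = min(V, level + depth * ROOF)     # capture up to capacity
                draw = min(USE, level)
                level -= draw
                external += USE - draw
            return external / days

        volumes = np.linspace(0.5, 20.0, 40)
        costs = [C_FIX + C_V * V + C_W * 365 * external_use_per_day(V)
                 for V in volumes]
        V_opt = volumes[int(np.argmin(costs))]   # numerical analogue of the optimum

    The analytical result in the paper plays the role of this brute-force minimum while making the dependence on roof area, demand, climate, and cost parameters explicit.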

  18. Comprehensive Evaluation of Biological Growth Control by Chlorine-Based Biocides in Power Plant Cooling Systems Using Tertiary Effluent

    PubMed Central

    Chien, Shih-Hsiang; Dzombak, David A.; Vidic, Radisav D.

    2013-01-01

    Recent studies have shown that treated municipal wastewater can be a reliable cooling water alternative to fresh water. However, elevated nutrient concentration and microbial population in wastewater lead to aggressive biological proliferation in the cooling system. Three chlorine-based biocides were evaluated for the control of biological growth in cooling systems using tertiary treated wastewater as makeup, based on their biocidal efficiency and cost-effectiveness. Optimal chemical regimens for achieving successful biological growth control were elucidated based on batch-, bench-, and pilot-scale experiments. Biocide usage and biological activity in planktonic and sessile phases were carefully monitored to understand biological growth potential and biocidal efficiency of the three disinfectants in this particular environment. Water parameters, such as temperature, cycles of concentration, and ammonia concentration in recirculating water, critically affected the biocide performance in recirculating cooling systems. Bench-scale recirculating tests were shown to adequately predict the biocide residual required for a pilot-scale cooling system. Optimal residuals needed for proper biological growth control were 1, 2–3, and 0.5–1 mg/L as Cl2 for NaOCl, preformed NH2Cl, and ClO2, respectively. Pilot-scale tests also revealed that Legionella pneumophila was absent from these cooling systems when using the disinfectants evaluated in this study. Cost analysis showed that NaOCl is the most cost-effective for controlling biological growth in power plant recirculating cooling systems using tertiary-treated wastewater as makeup. PMID:23781129

  19. Comprehensive Evaluation of Biological Growth Control by Chlorine-Based Biocides in Power Plant Cooling Systems Using Tertiary Effluent.

    PubMed

    Chien, Shih-Hsiang; Dzombak, David A; Vidic, Radisav D

    2013-06-01

    Recent studies have shown that treated municipal wastewater can be a reliable cooling water alternative to fresh water. However, elevated nutrient concentration and microbial population in wastewater lead to aggressive biological proliferation in the cooling system. Three chlorine-based biocides were evaluated for the control of biological growth in cooling systems using tertiary treated wastewater as makeup, based on their biocidal efficiency and cost-effectiveness. Optimal chemical regimens for achieving successful biological growth control were elucidated based on batch-, bench-, and pilot-scale experiments. Biocide usage and biological activity in planktonic and sessile phases were carefully monitored to understand biological growth potential and biocidal efficiency of the three disinfectants in this particular environment. Water parameters, such as temperature, cycles of concentration, and ammonia concentration in recirculating water, critically affected the biocide performance in recirculating cooling systems. Bench-scale recirculating tests were shown to adequately predict the biocide residual required for a pilot-scale cooling system. Optimal residuals needed for proper biological growth control were 1, 2-3, and 0.5-1 mg/L as Cl2 for NaOCl, preformed NH2Cl, and ClO2, respectively. Pilot-scale tests also revealed that Legionella pneumophila was absent from these cooling systems when using the disinfectants evaluated in this study. Cost analysis showed that NaOCl is the most cost-effective for controlling biological growth in power plant recirculating cooling systems using tertiary-treated wastewater as makeup.

  20. Cost-Effectiveness of C-Reactive Protein, Procalcitonin, and the Rochester Criteria: Three Diagnostic Strategies for the Identification of Serious Bacterial Infection in Febrile Infants Without a Source.

    PubMed

    Antonio Buendía, Jefferson; Colantonio, Lisandro

    2013-12-01

    The optimal practice management of highly febrile 1- to 3-month-old children without a focal source has been controversial. The release of a conjugate pneumococcal vaccine may reduce the rate of occult bacteremia and alter the utility of empiric testing. The objective of this study was to determine the cost-effectiveness of 3 different screening strategies for serious bacterial infections (SBI) in children presenting with fever without source in Argentina. A cost-effectiveness (CE) analysis was performed to compare the strategies of procalcitonin, C-reactive protein, and the Rochester criteria. A hypothetical cohort of 10,000 children who were 1 to 3 months of age and had a fever of >39°C and no source of infection was modeled for each strategy. Our main outcome measure was incremental CE ratios. C-reactive protein resulted in US$937 per correctly diagnosed case of SBI. The additional cost per additional correct diagnosis using procalcitonin versus C-reactive protein was US$6,127, while the Rochester criteria were dominated. C-reactive protein is the most cost-effective strategy to detect SBI in children with fever without source in Argentina. However, given the low proportion of correctly diagnosed cases (<80%) for all three tests, both in the literature and in our study, an individualized approach for children with fever is still necessary to optimize diagnostic investigations and treatment in different emergency care settings. © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by International Society for Pharmacoeconomics and Outcomes Research (ISPOR). All rights reserved.
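
    The incremental cost-effectiveness ratios quoted here follow the standard definition

        \text{ICER}_{A \to B} = \frac{C_B - C_A}{E_B - E_A},

    with costs $C$ and effectiveness $E$ (here, correctly diagnosed SBI cases); a strategy is "dominated" when some alternative is both cheaper and more effective, as the Rochester criteria were in this analysis.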

  1. Optimization of solid-state fermentation conditions for Trichoderma harzianum using an orthogonal test.

    PubMed

    Zhang, J D; Yang, Q

    2015-03-13

    The aim of this study was to develop a protocol for the production of fungal bio-pesticides with high efficiency, low cost, and non-polluting fermentation, while also increasing their survival rate under field conditions. This is the first study to develop biocontrol Trichoderma harzianum transformants TS1 that are resistant to benzimidazole fungicides. Agricultural corn stover and wheat bran waste were used as the medium and inducing carbon source for solid fermentation. Spore production was observed, and the method was optimized using single-factor tests with 4 factors at 3 levels in an orthogonal experimental design to determine the optimal culture conditions for T. harzianum TS1. In this step, we determined the best conditions for fermenting the biocontrol fungi. The optimal culture conditions for T. harzianum TS1 were: cultivation for 8 days, a straw-to-wheat-bran ratio of 1:3, ammonium persulfate as the nitrogen source, and a water content of 30 mL. Under optimal culture conditions, the sporulation of T. harzianum TS1 reached 1.49 × 10^10 CFU/g, which was 1.46-fold higher than that achieved before optimization. Increased sporulation of T. harzianum TS1 results in better utilization of space and nutrients to achieve control of plant pathogens. This method also allows for the recycling of agricultural waste straw.

  2. Optimal inventories for overhaul of repairable redundant systems - A Markov decision model

    NASA Technical Reports Server (NTRS)

    Schaefer, M. K.

    1984-01-01

    A Markovian decision model was developed to calculate the optimal inventory of repairable spare parts for an avionics control system for commercial aircraft. Total expected shortage costs, repair costs, and holding costs are minimized for a machine containing a single system of redundant parts. Transition probabilities are calculated for each repair state and repair rate, and optimal spare parts inventory and repair strategies are determined through linear programming. The linear programming solutions are given in a table.

  3. Energetic constraints, size gradients, and size limits in benthic marine invertebrates.

    PubMed

    Sebens, Kenneth P

    2002-08-01

    Populations of marine benthic organisms occupy habitats with a range of physical and biological characteristics. In the intertidal zone, energetic costs increase with temperature and aerial exposure, and prey intake increases with immersion time, generating size gradients with small individuals often found at upper limits of distribution. Wave action can have similar effects, limiting feeding time or success, although certain species benefit from wave dislodgment of their prey; this also results in gradients of size and morphology. The difference between energy intake and metabolic (and/or behavioral) costs can be used to determine an energetic optimal size for individuals in such populations. Comparisons of the energetic optimal size to the maximum predicted size based on mechanical constraints, and the ensuing mortality schedule, provides a mechanism to study and explain organism size gradients in intertidal and subtidal habitats. For species where the energetic optimal size is well below the maximum size that could persist under a certain set of wave/flow conditions, it is probable that energetic constraints dominate. When the opposite is true, populations of small individuals can dominate habitats with strong dislodgment or damage probability. When the maximum size of individuals is far below either energetic optima or mechanical limits, other sources of mortality (e.g., predation) may favor energy allocation to early reproduction rather than to continued growth. Predictions based on optimal size models have been tested for a variety of intertidal and subtidal invertebrates including sea anemones, corals, and octocorals. This paper provides a review of the optimal size concept, and employs a combination of the optimal energetic size model and life history modeling approach to explore energy allocation to growth or reproduction as the optimal size is approached.

  4. A Chaotic Particle Swarm Optimization-Based Heuristic for Market-Oriented Task-Level Scheduling in Cloud Workflow Systems.

    PubMed

    Li, Xuejun; Xu, Jia; Yang, Yun

    2015-01-01

    Cloud workflow system is a kind of platform service based on cloud computing. It facilitates the automation of workflow applications. Among cloud workflow systems and their counterparts, the market-oriented business model is one of the most prominent distinguishing factors. The optimization of task-level scheduling in cloud workflow systems is a hot topic. As the scheduling is an NP problem, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, they suffer from premature convergence in the optimization process and therefore cannot effectively reduce the cost. To solve these problems, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The chaotic sequence, with its high randomness, improves the diversity of solutions, and its regularity assures good global convergence. The adaptive inertia weight factor depends on the estimated value of the cost; it makes the scheduling avoid premature convergence by properly balancing global and local exploration. The experimental simulation shows that the cost obtained by our scheduling is always lower than that of the other two representative counterparts.
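
    The two ingredients named above, a logistic-map chaotic sequence replacing uniform random draws and an inertia weight adapted to the cost estimate, can be sketched as follows. This is a generic CPSO flavor; the constants and the exact adaptation rule are illustrative, not those of the paper:

        def logistic_map(z):
            """Chaotic sequence generator on (0, 1); mu = 4 gives full chaos."""
            return 4.0 * z * (1.0 - z)

        def update_particle(x, v, pbest, gbest, z, cost, cost_min, cost_avg,
                            w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
            # Adaptive inertia: particles with below-average cost get smaller w,
            # tightening local search; poor particles keep exploring globally.
            if cost <= cost_avg:
                w = w_min + (w_max - w_min) * (cost - cost_min) \
                    / (cost_avg - cost_min + 1e-12)
            else:
                w = w_max
            z1 = logistic_map(z)                 # chaotic numbers stand in for
            z2 = logistic_map(z1)                # the usual uniform randoms
            v_new = [w * vi + c1 * z1 * (p - xi) + c2 * z2 * (g - xi)
                     for vi, xi, p, g in zip(v, x, pbest, gbest)]
            x_new = [xi + vi for xi, vi in zip(x, v_new)]
            return x_new, v_new, z2              # carry the chaos state forward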

  5. Rethinking FCV/BEV Vehicle Range: A Consumer Value Trade-off Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Zhenhong; Greene, David L

    2010-01-01

    The driving range of FCV and BEV is often analyzed by simple analogy to conventional vehicles without proper consideration of differences in energy storage technology, infrastructure, and market context. This study proposes a coherent framework to optimize the driving range by minimizing costs associated with range, including upfront storage cost, fuel availability cost for FCV, and range anxiety cost for BEV. It is shown that the conventional assumption of FCV range can lead to overestimation of the FCV market barrier by over $8000 per vehicle in the near-term market. Such exaggeration of the FCV market barrier can be avoided with range optimization. Compared to the optimal BEV range, the 100-mile range chosen by automakers appears to be near optimal for modest drivers, but far less than optimal for frequent drivers. With range optimization, the probability that the BEV is unable to serve a long-trip day is generally less than 5%, depending on driving intensity. Range optimization can help diversify BEV products for different consumers. It is also demonstrated and argued that the FCV/BEV range should adapt to technology and infrastructure developments.

  6. Analysis and optimization of hybrid electric vehicle thermal management systems

    NASA Astrophysics Data System (ADS)

    Hamut, H. S.; Dincer, I.; Naterer, G. F.

    2014-02-01

    In this study, the thermal management system of a hybrid electric vehicle is optimized using single- and multi-objective evolutionary algorithms in order to maximize the exergy efficiency and minimize the cost and environmental impact of the system. The objective functions are defined and the decision variables, along with their respective system constraints, are selected for the analysis. In the multi-objective optimization, a Pareto frontier is obtained and a single desirable optimal solution is selected based on the LINMAP decision-making process. The corresponding solutions are compared against the exergetic, exergoeconomic, and exergoenvironmental single-objective optimization results. The results show that the exergy efficiency, total cost rate, and environmental impact rate for the baseline system are determined to be 0.29, 28 ¢/h, and 77.3 mPts/h, respectively. Moreover, based on the exergoeconomic optimization, 14% higher exergy efficiency and 5% lower cost can be achieved, compared to baseline parameters, at the expense of a 14% increase in the environmental impact. Based on the exergoenvironmental optimization, a 13% higher exergy efficiency and 5% lower environmental impact can be achieved at the expense of a 27% increase in the total cost.
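
    LINMAP picks, from the Pareto frontier, the solution closest to the non-attainable ideal point in a normalized objective space; schematically (notation mine, following common usage in exergoeconomic studies),

        d_i = \sqrt{ \sum_j \left( \hat{f}_{ij} - \hat{f}_j^{\,\mathrm{ideal}} \right)^2 }, \qquad i^{*} = \arg\min_i d_i,

    where $\hat{f}_{ij}$ is the normalized value of objective $j$ for Pareto solution $i$, and the ideal point collects the best attained value of each objective individually.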

  7. A Chaotic Particle Swarm Optimization-Based Heuristic for Market-Oriented Task-Level Scheduling in Cloud Workflow Systems

    PubMed Central

    Li, Xuejun; Xu, Jia; Yang, Yun

    2015-01-01

    Cloud workflow system is a kind of platform service based on cloud computing. It facilitates the automation of workflow applications. Among cloud workflow systems and their counterparts, the market-oriented business model is one of the most prominent distinguishing factors. The optimization of task-level scheduling in cloud workflow systems is a hot topic. As the scheduling is an NP problem, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, they suffer from premature convergence in the optimization process and therefore cannot effectively reduce the cost. To solve these problems, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The chaotic sequence, with its high randomness, improves the diversity of solutions, and its regularity assures good global convergence. The adaptive inertia weight factor depends on the estimated value of the cost; it makes the scheduling avoid premature convergence by properly balancing global and local exploration. The experimental simulation shows that the cost obtained by our scheduling is always lower than that of the other two representative counterparts. PMID:26357510

  8. DOUBLE SHELL TANK (DST) INTEGRITY PROJECT HIGH LEVEL WASTE CHEMISTRY OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WASHENFELDER DJ

    2008-01-22

    The U.S. Department of Energy's (DOE) Office of River Protection (ORP) has a continuing program for chemical optimization to better characterize corrosion behavior of High-Level Waste (HLW). The DOE controls the chemistry in its HLW to minimize the propensity for localized corrosion, such as pitting, and stress corrosion cracking (SCC) in nitrate-containing solutions. By improving the control of localized corrosion and SCC, the ORP can increase the life of the Double-Shell Tank (DST) carbon steel structural components and reduce overall mission costs. The carbon steel tanks at the Hanford Site are critical to the mission of safely managing stored HLW until it can be treated for disposal. The DOE has historically used additions of sodium hydroxide to retard corrosion processes in HLW tanks. This also increases the amount of waste to be treated. Reactions with carbon dioxide from the air and with solid chemical species in the tank continually deplete the hydroxide ion concentration, which then requires continued additions. The DOE can reduce overall costs for caustic addition and treatment of waste, and more effectively utilize waste storage capacity, by minimizing these chemical additions. Hydroxide addition is a means to control localized corrosion and stress corrosion cracking in carbon steel by providing a passive environment. The exact mechanism by which nitrate drives the corrosion process is not yet clear. SCC is less of a concern in the newer stress-relieved double-shell tanks due to reduced residual stress. The optimization of waste chemistry will further reduce the propensity for SCC. The corrosion testing performed to optimize waste chemistry included cyclic potentiodynamic polarization studies, slow strain rate tests, and stress intensity factor/crack growth rate determinations. Laboratory experimental evidence suggests that nitrite is a highly effective inhibitor for pitting and SCC in alkaline nitrate environments. Revision of the corrosion control strategies to a nitrite-based control, where there is no constant depletion mechanism as with hydroxide, should greatly enhance tank lifetime and tank space availability, and reduce downstream reprocessing costs by reducing chemical addition to the tanks.

  9. Value of information analysis optimizing future trial design from a pilot study on catheter securement devices.

    PubMed

    Tuffaha, Haitham W; Reynolds, Heather; Gordon, Louisa G; Rickard, Claire M; Scuffham, Paul A

    2014-12-01

    Value of information analysis has been proposed as an alternative to the standard hypothesis testing approach, which is based on type I and type II errors, in determining sample sizes for randomized clinical trials. However, in addition to sample size calculation, value of information analysis can optimize other aspects of research design such as possible comparator arms and alternative follow-up times, by considering trial designs that maximize the expected net benefit of research, which is the difference between the expected cost of the trial and the expected value of additional information. To apply value of information methods to the results of a pilot study on catheter securement devices to determine the optimal design of a future larger clinical trial. An economic evaluation was performed using data from a multi-arm randomized controlled pilot study comparing the efficacy of four types of catheter securement devices: standard polyurethane, tissue adhesive, bordered polyurethane and sutureless securement device. Probabilistic Monte Carlo simulation was used to characterize uncertainty surrounding the study results and to calculate the expected value of additional information. To guide the optimal future trial design, the expected costs and benefits of the alternative trial designs were estimated and compared. Analysis of the value of further information indicated that a randomized controlled trial on catheter securement devices is potentially worthwhile. Among the possible designs for the future trial, a four-arm study with 220 patients/arm would provide the highest expected net benefit corresponding to 130% return-on-investment. The initially considered design of 388 patients/arm, based on hypothesis testing calculations, would provide lower net benefit with return-on-investment of 79%. Cost-effectiveness and value of information analyses were based on the data from a single pilot trial which might affect the accuracy of our uncertainty estimation. Another limitation was that different follow-up durations for the larger trial were not evaluated. The value of information approach allows efficient trial design by maximizing the expected net benefit of additional research. This approach should be considered early in the design of randomized clinical trials. © The Author(s) 2014.
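
    The design comparison reduces to simple arithmetic once the expected value of sample information (EVSI) has been computed per candidate design: the expected net benefit of sampling is the population-scaled EVSI minus the trial cost, and return-on-investment figures like those quoted follow directly. The sketch below uses hypothetical inputs; the EVSI computation itself requires the probabilistic model:

        def expected_net_benefit(evsi_per_patient, population, trial_cost):
            """ENBS = population-level value of the trial information minus its cost."""
            value = evsi_per_patient * population
            enbs = value - trial_cost
            roi = enbs / trial_cost   # e.g., 1.30 -> "130% return-on-investment"
            return enbs, roi

        # Hypothetical illustration, not the study's numbers:
        enbs, roi = expected_net_benefit(evsi_per_patient=12.0,
                                         population=500_000,
                                         trial_cost=2_600_000)

    Evaluating this quantity across candidate designs (arms, sample sizes) is what identifies the design with the highest expected net benefit, as done for the 220 patients/arm option above.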

  10. Energy-efficient growth of phage Q Beta in Escherichia coli.

    PubMed

    Kim, Hwijin; Yin, John

    2004-10-20

    The role of natural selection in the optimal design of organisms is controversial. Optimal forms, functions, or behaviors of organisms have long been claimed without knowledge of how genotype contributes to phenotype, delineation of design constraints, or reference to alternative designs. Moreover, arguments for optimal designs have been often based on models that were difficult, if not impossible, to test. Here, we begin to address these issues by developing and probing a kinetic model for the intracellular growth of bacteriophage Q beta in Escherichia coli. The model accounts for the energetic costs of all template-dependent polymerization reactions, in ATP equivalents, including RNA-dependent RNA elongation by the phage replicase and synthesis of all phage proteins by the translation machinery of the E. coli host cell. We found that translation dominated phage growth, requiring 85% of the total energy expenditure. Only 10% of the total energy was applied to activities other than the direct synthesis of progeny phage components, reflecting primarily the cost of making the negative-strand RNA template that is needed for replication of phage genomic RNA. Further, we defined an energy efficiency of phage growth and showed its direct relationship to the yield of phage progeny. Finally, we performed a sensitivity analysis and found that the growth of wild-type phage was optimized for progeny yield or energy efficiency, suggesting that phage Q beta has evolved to optimally utilize the finite resources of its host cells.

  11. Method for Household Refrigerators Efficiency Increasing

    NASA Astrophysics Data System (ADS)

    Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.

    2017-11-01

    The relevance of optimizing the working-process parameters of air conditioning systems is demonstrated in this work. The research is performed using simulation modeling. The parameter optimization criteria are considered and the target functions analyzed, while the key factors of technical and economic optimization are discussed. In the multi-objective optimization, the optimal solution is found by minimizing a two-component objective vector constructed by the Pareto method of linear weighted compromises from the target functions for total capital costs and total operating costs. The problems are solved in the MathCAD environment. The results show that the technical and economic parameters of air conditioning systems deviate considerably from their minimum-cost values outside the neighborhood of the optimal solutions, and these deviations grow rapidly as the technical parameters move away from the values that are optimal for both capital investment and operating costs. The production and operation of conditioners with parameters that deviate considerably from the optimal values will lead to increased material and power costs. The research establishes the boundaries of the region of optimal values of the technical and economic parameters for the design of air conditioning systems.

  12. Recursive Optimization of Digital Circuits

    DTIC Science & Technology

    1990-12-14

    [Only extraction residue survives of this record's text: table-of-contents fragments listing appendices of the BORIS Recursive Optimization System software (DESIGN.S, PARSE.S, TABULAR.S, MDS.S, and COST.S files) and a non-MDS optimization of SAMPLE; no abstract was recovered.]

  13. "Optimal" Size and Schooling: A Relative Concept.

    ERIC Educational Resources Information Center

    Swanson, Austin D.

    Issues in economies of scale and optimal school size are discussed in this paper, which seeks to explain the curvilinear nature of the educational cost curve as a function of "transaction costs" and to establish "optimal size" as a relative concept. Based on the argument that educational consolidation has facilitated diseconomies of scale, the…

  14. Optimal design of the satellite constellation arrangement reconfiguration process

    NASA Astrophysics Data System (ADS)

    Fakoor, Mahdi; Bakhtiari, Majid; Soleymani, Mahshid

    2016-08-01

    In this article, a novel approach is introduced for satellite constellation reconfiguration based on Lambert's theorem. Several critical problems arise in the reconfiguration phase, such as minimizing the overall fuel cost, avoiding collisions between the satellites on the final orbital pattern, and determining the maneuvers necessary for the satellites to be deployed in the desired positions on the target constellation. To implement the reconfiguration phase of the satellite constellation arrangement at minimal cost, the hybrid Invasive Weed Optimization/Particle Swarm Optimization (IWO/PSO) algorithm is used to design sub-optimal transfer orbits for the satellites existing in the constellation. Also, the dynamics of the problem are modeled in such a way that optimal assignment of the satellites to the initial and target orbits and optimal orbital transfer are combined in one step. Finally, we argue that the presented idea, i.e. coupled non-simultaneous flight of satellites from the initial orbital pattern, leads to minimal cost. The obtained results show that employing the presented method reduces the cost of the reconfiguration process considerably.
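    The assignment sub-problem alone has a compact classical solution. As a loose illustration (with a randomly generated stand-in for the Lambert-derived delta-v costs, and without the paper's IWO/PSO machinery), SciPy's Hungarian-style solver finds the minimum-total-cost assignment of satellites to target slots:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical transfer-cost matrix: cost[i, j] is the delta-v (km/s) to
# move satellite i to slot j of the target constellation.
rng = np.random.default_rng(0)
cost = rng.uniform(0.1, 2.0, size=(6, 6))

rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
for i, j in zip(rows, cols):
    print(f"satellite {i} -> slot {j}: {cost[i, j]:.3f} km/s")
print(f"total delta-v: {cost[rows, cols].sum():.3f} km/s")
```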

  15. Cost model relationships between textile manufacturing processes and design details for transport fuselage elements

    NASA Technical Reports Server (NTRS)

    Metschan, Stephen L.; Wilden, Kurtis S.; Sharpless, Garrett C.; Andelman, Rich M.

    1993-01-01

    Textile manufacturing processes offer potential cost and weight advantages over traditional composite materials and processes for transport fuselage elements. In the current study, design cost modeling relationships between textile processes and element design details were developed. Such relationships are expected to help future aircraft designers to make timely decisions on the effect of design details and overall configurations on textile fabrication costs. The fundamental advantage of a design cost model is to ensure that the element design is cost effective for the intended process. Trade studies on the effects of processing parameters also help to optimize the manufacturing steps for a particular structural element. Two methods of analyzing design detail/process cost relationships developed for the design cost model were pursued in the current study. The first makes use of existing databases and alternative cost modeling methods (e.g. detailed estimating). The second compares design cost model predictions with data collected during the fabrication of seven-foot circumferential frames for ATCAS crown test panels. The process used in this case involves 2D dry braiding and resin transfer molding of curved 'J' cross section frame members having design details characteristic of the baseline ATCAS crown design.

  16. Automated parameterization of intermolecular pair potentials using global optimization techniques

    NASA Astrophysics Data System (ADS)

    Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk

    2014-12-01

    In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.

  17. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tao; Li, Cheng; Huang, Can

    Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.

  18. Development of optimization model for sputtering process parameter based on gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.

    2016-07-01

    In the RF magnetron sputtering process, the desirable layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film does not reach its intended level, the experiments have to be repeated until the desired quality is met. This research proposes the Gravitational Search Algorithm (GSA) as the optimization model to reduce the time and cost spent in thin film fabrication. The optimization model's engine has been developed in Java. The model is based on the GSA concept, which is inspired by the Newtonian laws of gravity and motion. In this research, the model is expected to optimize four deposition parameters: RF power, deposition time, oxygen flow rate and substrate temperature. The results are promising, and the model's performance on this parameter optimization problem is satisfactory. Future work could compare GSA with other nature-based algorithms and test them with various sets of data.
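    For readers unfamiliar with GSA, a minimal sketch follows (in Python rather than the paper's Java engine, with a toy sphere function standing in for the deposition-quality objective; all algorithm constants are illustrative). Each agent receives a mass derived from its fitness, attracts the others gravitationally, and moves under the resulting acceleration while the gravitational constant decays:

```python
import numpy as np

def gsa(objective, lo, hi, n_agents=20, n_iters=100, g0=100.0, alpha=20.0, seed=1):
    """Minimal Gravitational Search Algorithm sketch (minimization)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_agents, len(lo)))
    v = np.zeros_like(x)
    for t in range(n_iters):
        fit = np.apply_along_axis(objective, 1, x)
        best, worst = fit.min(), fit.max()
        m = (fit - worst) / (best - worst + 1e-12)   # better fitness -> larger mass
        m = m / (m.sum() + 1e-12)
        g = g0 * np.exp(-alpha * t / n_iters)        # decaying gravitational constant
        acc = np.zeros_like(x)
        for i in range(n_agents):
            for j in range(n_agents):
                if i != j:
                    diff = x[j] - x[i]
                    dist = np.linalg.norm(diff) + 1e-12
                    acc[i] += rng.random() * g * m[j] * diff / dist
        v = rng.random(x.shape) * v + acc            # stochastic inertia + pull
        x = np.clip(x + v, lo, hi)
    fit = np.apply_along_axis(objective, 1, x)
    return x[fit.argmin()], fit.min()

# Four decision variables, mirroring the four deposition parameters above.
best_x, best_f = gsa(lambda z: float(np.sum(z**2)), np.full(4, -5.0), np.full(4, 5.0))
print(best_x, best_f)
```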

  19. Research on Operation Strategy for Bundled Wind-thermal Generation Power Systems Based on Two-Stage Optimization Model

    NASA Astrophysics Data System (ADS)

    Sun, Congcong; Wang, Zhijie; Liu, Sanming; Jiang, Xiuchen; Sheng, Gehao; Liu, Tianyu

    2017-05-01

    Wind power has the advantages of being clean and non-polluting, and the development of bundled wind-thermal generation power systems (BWTGSs) is one of the important means to improve the wind power accommodation rate and implement the "clean alternative" on the generation side. A two-stage optimization strategy for BWTGSs considering wind speed forecasting results and load characteristics is proposed. By taking short-term wind speed forecasting results on the generation side and load characteristics on the demand side into account, a two-stage optimization model for BWTGSs is formulated. Using the environmental benefit index of BWTGSs as the objective function, and supply-demand balance and generator operation as the constraints, the first-stage optimization model is developed with chance-constrained programming theory. Using the operating cost of BWTGSs as the objective function, the second-stage optimization model is developed with a greedy algorithm. An improved PSO algorithm is employed to solve the model, and numerical tests verify the effectiveness of the proposed strategy.

  20. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DOE PAGES

    Ding, Tao; Li, Cheng; Huang, Can; ...

    2017-01-09

    Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.

  1. A bi-objective model for optimizing replacement time of age and block policies with consideration of spare parts’ availability

    NASA Astrophysics Data System (ADS)

    Alsyouf, Imad

    2018-05-01

    Reliability and availability of critical systems play an important role in achieving the stated objectives of engineering assets. The preventive replacement time affects the reliability of the components, and thus the number of system failures encountered and the associated downtime expenses. On the other hand, the spare parts inventory level is a very critical factor that affects the availability of the system. Usually, the decision maker has many conflicting objectives that should be considered simultaneously when selecting the optimal maintenance policy. The purpose of this research was to develop a bi-objective model to determine the preventive replacement time for three maintenance policies (age, block good-as-new, block bad-as-old) with consideration of spare parts' availability. A weighted comprehensive criterion method with two objectives, cost and availability, was used. The model was tested with a typical numerical example. The results demonstrated its effectiveness in enabling the decision maker to select the optimal maintenance policy under different scenarios, taking into account preferences with respect to conflicting objectives such as cost and availability.
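    A minimal sketch of the weighted-criterion idea, for the age policy alone, is given below (the Weibull lifetime, costs, and downtimes are invented, and the spare-parts dimension is omitted). The renewal-reward cost rate and steady-state availability are scalarized with a weight w, and the planned replacement age T is found by grid search:

```python
import numpy as np
from scipy import integrate, stats

life = stats.weibull_min(c=2.5, scale=1000.0)  # component lifetime (hours)
cp, cf = 100.0, 600.0    # preventive / failure replacement cost
dp, df = 5.0, 24.0       # preventive / failure downtime (hours)

def cycle_terms(T):
    R, F = life.sf(T), life.cdf(T)               # survive to T / fail before T
    uptime, _ = integrate.quad(life.sf, 0.0, T)  # expected uptime per cycle
    return R, F, uptime

def cost_rate(T):
    R, F, up = cycle_terms(T)
    return (cp * R + cf * F) / up                # renewal-reward cost per hour

def availability(T):
    R, F, up = cycle_terms(T)
    return up / (up + dp * R + df * F)

w = 0.5                                          # decision maker's preference
Ts = np.linspace(100.0, 3000.0, 300)
c0 = cost_rate(Ts[0])                            # crude normalization constant
J = [w * cost_rate(T) / c0 + (1 - w) * (1 - availability(T)) for T in Ts]
print(f"optimal preventive replacement age ~ {Ts[int(np.argmin(J))]:.0f} h")
```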

  2. Cure Cycle Optimization of Rapidly Cured Out-Of-Autoclave Composites.

    PubMed

    Dong, Anqi; Zhao, Yan; Zhao, Xinqing; Yu, Qiyong

    2018-03-13

    Out-of-autoclave prepreg typically needs a long cure cycle to guarantee good properties as a result of the low processing pressure applied. It is essential to reduce the manufacturing time, achieve real cost reduction, and take full advantage of the out-of-autoclave process. The focus of this paper is to reduce the cure cycle time and production cost while maintaining high laminate quality. A rapidly cured out-of-autoclave resin and the corresponding prepreg were independently developed. To determine a suitable rapid cure procedure for the developed prepreg, the effects of heating rate, initial cure temperature, dwelling time, and post-cure time on the final laminate quality were evaluated and the factors were then optimized. As a result, a rapid cure procedure was determined. The results showed that the resin infiltration could be completed at the end of the initial cure stage and no obvious voids could be seen in the laminate at this time. The laminate could achieve good internal quality using the optimized cure procedure. The mechanical test results showed that the laminates had a fiber volume fraction of 59-60% with a final glass transition temperature of 205 °C and excellent mechanical strength, especially in the flexural properties.

  3. Cure Cycle Optimization of Rapidly Cured Out-Of-Autoclave Composites

    PubMed Central

    Dong, Anqi; Zhao, Yan; Zhao, Xinqing; Yu, Qiyong

    2018-01-01

    Out-of-autoclave prepreg typically needs a long cure cycle to guarantee good properties as a result of the low processing pressure applied. It is essential to reduce the manufacturing time, achieve real cost reduction, and take full advantage of the out-of-autoclave process. The focus of this paper is to reduce the cure cycle time and production cost while maintaining high laminate quality. A rapidly cured out-of-autoclave resin and the corresponding prepreg were independently developed. To determine a suitable rapid cure procedure for the developed prepreg, the effects of heating rate, initial cure temperature, dwelling time, and post-cure time on the final laminate quality were evaluated and the factors were then optimized. As a result, a rapid cure procedure was determined. The results showed that the resin infiltration could be completed at the end of the initial cure stage and no obvious voids could be seen in the laminate at this time. The laminate could achieve good internal quality using the optimized cure procedure. The mechanical test results showed that the laminates had a fiber volume fraction of 59–60% with a final glass transition temperature of 205 °C and excellent mechanical strength, especially in the flexural properties. PMID:29534048

  4. Seismic waveform inversion best practices: regional, global and exploration test cases

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan; Tromp, Jeroen

    2016-09-01

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
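    The optimizer comparison carries over directly to generic libraries. As a loose illustration, using SciPy's optimizers on the Rosenbrock test function in place of a waveform misfit (each evaluation of which would require costly wavefield simulations), limited-memory BFGS can be compared with nonlinear conjugate gradients:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# A 50-dimensional nonconvex test problem standing in for a misfit/gradient
# pair; in waveform inversion the gradient would come from an adjoint solve.
x0 = np.full(50, -1.2)
for method in ("L-BFGS-B", "CG"):
    res = minimize(rosen, x0, jac=rosen_der, method=method,
                   options={"maxiter": 10000})
    print(f"{method:8s}: f = {res.fun:.3e} after {res.nfev} function evaluations")
```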

  5. Lessons Learned from Testing the Quality Cost Model of Advanced Practice Nursing (APN) Transitional Care

    PubMed Central

    Brooten, Dorothy; Naylor, Mary D.; York, Ruth; Brown, Linda P.; Munro, Barbara Hazard; Hollingsworth, Andrea O.; Cohen, Susan M.; Finkler, Steven; Deatrick, Janet; Youngblut, JoAnne M.

    2013-01-01

    Purpose To describe the development, testing, modification, and results of the Quality Cost Model of Advanced Practice Nurses (APNs) Transitional Care on patient outcomes and health care costs in the United States over 22 years, and to delineate what has been learned for nursing education, practice, and further research. Organizing Construct The Quality Cost Model of APN Transitional Care. Methods Review of published results of seven randomized clinical trials with very low birth-weight (VLBW) infants; women with unplanned cesarean births, high risk pregnancies, and hysterectomy surgery; elders with cardiac medical and surgical diagnoses and common diagnostic related groups (DRGs); and women with high risk pregnancies in which half of physician prenatal care was substituted with APN care. Ongoing work with the model is linking the process of APN care with the outcomes and costs of care. Findings APN intervention has consistently resulted in improved patient outcomes and reduced health care costs across groups. Groups with APN providers were rehospitalized for less time at less cost, reflecting early detection and intervention. Optimal number and timing of postdischarge home visits and telephone contacts by the APNs and patterns of rehospitalizations and acute care visits varied by group. Conclusions To keep people well over time, APNs must have depth of knowledge and excellent clinical and interpersonal skills that are the hallmark of specialist practice, an in-depth understanding of systems and how to work within them, and sufficient patient contact to effect positive outcomes at low cost. PMID:12501741

  6. Optimal seeding depth of five forb species from the Great Basin

    Treesearch

    Jennifer K. Rawlins; Val J. Anderson; Robert Johnson; Thomas Krebs

    2009-01-01

    Use of forbs in revegetation projects in the Great Basin is limited due to high seed cost and insufficient understanding of their germination and establishment requirements. We tested the effects of seeding depth from 0 to 25.4 mm (1 in) on emergence and survival in clay and sandy loam soils of 5 ecologically important forbs. Significantly less emergence occurred of...

  7. LENGTH-HETEROGENEITY POLYMERASE CHAIN REACTION (LH-PCR) AS AN INDICATOR OF STREAM SANITARY AND ECOLOGICAL CONDITION: OPTIMAL SAMPLE SIZE AND HOLDING CONDITIONS

    EPA Science Inventory

    The use of coliform plate count data to assess stream sanitary and ecological condition is limited by the need to store samples at 4 °C and analyze them within a 24-hour period. We are testing LH-PCR as an alternative tool to assess the bacterial load of streams, offering a cost ...

  8. Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization

    NASA Astrophysics Data System (ADS)

    Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar

    2017-04-01

    Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in present-day restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre with changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature inspired (NI) heuristic optimization methods are gaining popularity over traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is used to improve search efficiency by avoiding stagnation at a sub-optimal result. This work validates the performance of the PSO variants against the traditional solver GAMS for single-area as well as multi-area economic dispatch (MAED) on three test cases of a large 140-unit standard test system having complex constraints.
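    To make the basic machinery concrete, the sketch below solves a single-area, three-unit static dispatch with a plain PSO (the unit data and penalty weight are invented, and the linearly decreasing inertia weight is only a simple stand-in for the parameter automation mentioned above; tie-line, ramp-rate, and multi-area constraints are omitted):

```python
import numpy as np

# Quadratic fuel-cost coefficients and limits for three units (illustrative).
a = np.array([0.008, 0.009, 0.007])
b = np.array([7.0, 6.3, 6.8])
c = np.array([200.0, 180.0, 140.0])
pmin = np.array([100.0, 100.0, 50.0])
pmax = np.array([500.0, 400.0, 300.0])
demand = 850.0   # MW

def total_cost(p):
    fuel = np.sum(a * p**2 + b * p + c)
    return fuel + 1e4 * (p.sum() - demand)**2    # penalized power balance

rng = np.random.default_rng(3)
n_particles, n_iters = 40, 300
x = rng.uniform(pmin, pmax, size=(n_particles, 3))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.apply_along_axis(total_cost, 1, x)
gbest = pbest[pbest_f.argmin()].copy()

for t in range(n_iters):
    w = 0.9 - 0.5 * t / n_iters                  # time-varying inertia weight
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
    x = np.clip(x + v, pmin, pmax)
    f = np.apply_along_axis(total_cost, 1, x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"dispatch {np.round(gbest, 1)} MW, cost ~ {total_cost(gbest):.0f} $/h")
```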

  9. Optimal control of malaria: combining vector interventions and drug therapies.

    PubMed

    Khamis, Doran; El Mouden, Claire; Kura, Klodeta; Bonsall, Michael B

    2018-04-24

    The sterile insect technique and transgenic equivalents are considered promising tools for controlling vector-borne disease in an age of increasing insecticide and drug-resistance. Combining vector interventions with artemisinin-based therapies may achieve the twin goals of suppressing malaria endemicity while managing artemisinin resistance. While the cost-effectiveness of these controls has been investigated independently, their combined usage has not been dynamically optimized in response to ecological and epidemiological processes. An optimal control framework based on coupled models of mosquito population dynamics and malaria epidemiology is used to investigate the cost-effectiveness of combining vector control with drug therapies in homogeneous environments with and without vector migration. The costs of endemic malaria are weighed against the costs of administering artemisinin therapies and releasing modified mosquitoes using various cost structures. Larval density dependence is shown to reduce the cost-effectiveness of conventional sterile insect releases compared with transgenic mosquitoes with a late-acting lethal gene. Using drug treatments can reduce the critical vector control release ratio necessary to cause disease fadeout. Combining vector control and drug therapies is the most effective and efficient use of resources, and using optimized implementation strategies can substantially reduce costs.

  10. National Energy with Weather System Simulator (NEWS) Sets Bounds on Cost Effective Wind and Solar PV Deployment in the USA without the Use of Storage.

    NASA Astrophysics Data System (ADS)

    Clack, C.; MacDonald, A. E.; Alexander, A.; Dunbar, A. D.; Xie, Y.; Wilczak, J. M.

    2014-12-01

    The importance of weather-driven renewable energies for the United States energy portfolio is growing. The main perceived problems with weather-driven renewable energies are their intermittent nature, low power density, and high costs. In 2009, we began a large-scale investigation into the characteristics of weather-driven renewables. The project utilized the best available weather data assimilation model to compute high spatial and temporal resolution power datasets for the renewable resources of wind and solar PV. The weather model used is the Rapid Update Cycle for the years 2006-2008. The team also collated a detailed electrical load dataset for the contiguous USA from the Federal Energy Regulatory Commission for the same three-year period. The coincident time series of electrical load and weather data allow temporally correlated computations for optimal design over large geographic areas. The past two years have seen the development of a cost optimization mathematical model that designs electric power systems. The model plans the system and dispatches it on an hourly timescale. The system is designed to be reliable, reduce carbon, reduce the variability of renewable resources and move electricity about the whole domain. The system built would create the infrastructure needed to reduce carbon emissions to zero by 2050. The advantages of the system are reduced water demand, dual incomes for farmers, jobs for construction of the infrastructure, and price stability for energy. One important simplified test that was run included existing US carbon-free power sources, natural gas power when needed, and a High Voltage Direct Current power transmission network. This study shows that the costs and carbon emissions from an optimally designed national system decrease with geographic size. It shows that, with achievable estimates of wind and solar generation costs, the US could decrease its carbon emissions by up to 80% by the early 2030s, without an increase in electric costs. The key requirement would be a 48-state network of HVDC transmission, creating a national market for electricity not possible in the current AC grid. The study also showed how the price of natural gas fuel influenced the optimal system design.

  11. Optimal short-range trajectories for helicopters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slater, G.L.; Erzberger, H.

    1982-12-01

    An optimal flight path algorithm using a simplified altitude state model and an a priori climb-cruise-descent flight profile was developed and applied to determine minimum-fuel and minimum-cost trajectories for a helicopter flying a fixed-range trajectory. In addition, a method was developed for obtaining a performance model in simplified form which is based on standard flight manual data and which is applicable to the computation of optimal trajectories. The entire performance optimization algorithm is simple enough that on-line trajectory optimization is feasible with a relatively small computer. The helicopter model used is the Sikorsky S-61N. The results show that for this vehicle the optimal flight path and optimal cruise altitude can represent a 10% fuel saving on a minimum-fuel trajectory. The optimal trajectories show considerable variability because of helicopter weight, ambient winds, and the relative cost trade-off between time and fuel. In general, reasonable variations from the optimal velocities and cruise altitudes do not significantly degrade the optimal cost. For fuel-optimal trajectories, the optimum cruise altitude varies from the maximum (12,000 ft) to the minimum (0 ft) depending on helicopter weight.

  12. Edge grouping combining boundary and region information.

    PubMed

    Stahl, Joachim S; Wang, Song

    2007-10-01

    This paper introduces a new edge-grouping method to detect perceptually salient structures in noisy images. Specifically, we define a new grouping cost function in a ratio form, where the numerator measures the boundary proximity of the resulting structure and the denominator measures the area of the resulting structure. This area term introduces a preference towards detecting larger-size structures and, therefore, makes the resulting edge grouping more robust to image noise. To find the optimal edge grouping with the minimum grouping cost, we develop a special graph model with two different kinds of edges and then reduce the grouping problem to finding a special kind of cycle in this graph with a minimum cost in ratio form. This optimal cycle-finding problem can be solved in polynomial time by a previously developed graph algorithm. We implement this edge-grouping method, test it on both synthetic data and real images, and compare its performance against several available edge-grouping and edge-linking methods. Furthermore, we discuss several extensions of the proposed method, including the incorporation of the well-known grouping cues of continuity and intensity homogeneity, introducing a factor to balance the contributions from the boundary and region information, and the prevention of detecting self-intersecting boundaries.
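    The ratio-form objective admits a standard parametric reduction, sketched below on a toy graph (edge costs and area contributions are invented, and the paper's actual graph construction and polynomial-time cycle algorithm differ in detail). A cycle with cost/area ratio below λ exists exactly when the graph has a negative cycle under edge weights cost − λ·area, so a binary search on λ with Bellman-Ford negative-cycle detection converges to the optimal ratio:

```python
def has_negative_cycle(n, edges, lam):
    """Bellman-Ford negative-cycle test on weights (cost - lam * area)."""
    dist = [0.0] * n                    # all-zero init acts as a super-source
    for _ in range(n):
        updated = False
        for u, v, cost, area in edges:
            w = cost - lam * area
            if dist[u] + w < dist[v] - 1e-12:
                dist[v] = dist[u] + w
                updated = True
        if not updated:
            return False                # fixed point reached: no negative cycle
    return True                         # still relaxing after n rounds

def min_ratio_cycle(n, edges, lo=0.0, hi=100.0, iters=60):
    for _ in range(iters):              # binary search on the ratio value
        mid = 0.5 * (lo + hi)
        if has_negative_cycle(n, edges, mid):
            hi = mid                    # a cycle with ratio < mid exists
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy directed graph: (u, v, boundary_cost, area_contribution).
edges = [(0, 1, 2.0, 1.0), (1, 2, 2.0, 1.0), (2, 0, 2.0, 4.0),
         (0, 2, 9.0, 1.0), (2, 1, 1.0, 1.0), (1, 0, 3.0, 1.0)]
print(f"minimum cost/area ratio ~ {min_ratio_cycle(3, edges):.3f}")  # ~1.0
```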

  13. A Gradient Taguchi Method for Engineering Optimization

    NASA Astrophysics Data System (ADS)

    Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song

    2017-10-01

    To balance the robustness and the convergence speed of optimization, a novel hybrid algorithm consisting of the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of the levels of various factors, even when the number of levels and/or factors is quite large. The algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining numerical methods and vibration testing. For these problems, the proposed algorithm finds better elastic constants at less computational cost. The proposed algorithm therefore offers good robustness and fast convergence compared to some hybrid genetic algorithms.

  14. A performance-oriented power transformer design methodology using multi-objective evolutionary optimization

    PubMed Central

    Adly, Amr A.; Abd-El-Hafiz, Salwa K.

    2014-01-01

    Transformers are regarded as crucial components in power systems. Due to market globalization, power transformer manufacturers are facing an increasingly competitive environment that mandates the adoption of design strategies yielding better performance at lower costs. In this paper, a power transformer design methodology using multi-objective evolutionary optimization is proposed. Using this methodology, which is tailored to be target performance design-oriented, quick rough estimation of transformer design specifics may be inferred. Testing of the suggested approach revealed significant qualitative and quantitative match with measured design and performance values. Details of the proposed methodology as well as sample design results are reported in the paper. PMID:26257939

  15. A performance-oriented power transformer design methodology using multi-objective evolutionary optimization.

    PubMed

    Adly, Amr A; Abd-El-Hafiz, Salwa K

    2015-05-01

    Transformers are regarded as crucial components in power systems. Due to market globalization, power transformer manufacturers are facing an increasingly competitive environment that mandates the adoption of design strategies yielding better performance at lower costs. In this paper, a power transformer design methodology using multi-objective evolutionary optimization is proposed. Using this methodology, which is tailored to be target performance design-oriented, quick rough estimation of transformer design specifics may be inferred. Testing of the suggested approach revealed significant qualitative and quantitative match with measured design and performance values. Details of the proposed methodology as well as sample design results are reported in the paper.

  16. Comparison of primary optics in amonix CPV arrays

    NASA Astrophysics Data System (ADS)

    Nayak, Aditya; Kinsey, Geoffrey S.; Liu, Mingguo; Bagienski, William; Garboushian, Vahan

    2012-10-01

    The Amonix CPV system utilizes an acrylic Fresnel lens Primary Optical Element (POE) and a reflective Secondary Optical Element (SOE). Improvements in the optical design have contributed a more than 10% increase in rated power over the last year. In order to further optimize the optical power path, Amonix is examining various trade-offs in optics, including concentration, optical materials, reliability, and cost. A comparison of the optical materials used for manufacturing the primary optical element, and the optical design trade-offs used to maximize power output, will be presented. Optimization of the power path has led to the demonstration of a module lens-area efficiency of 35% in outdoor testing at Amonix.

  17. CATO: a CAD tool for intelligent design of optical networks and interconnects

    NASA Astrophysics Data System (ADS)

    Chlamtac, Imrich; Ciesielski, Maciej; Fumagalli, Andrea F.; Ruszczyk, Chester; Wedzinga, Gosse

    1997-10-01

    Increasing communication speed requirements have created great interest in very high speed optical and all-optical networks and interconnects. The design of these optical systems is a highly complex task, requiring the simultaneous optimization of various parts of the system, ranging from optical components' characteristics to access protocol techniques. Currently there are no computer aided design (CAD) tools on the market to support the interrelated design of all parts of optical communication systems, so the designer has to rely on costly and time consuming testbed evaluations. The objective of the CATO (CAD tool for optical networks and interconnects) project is to develop a prototype of an intelligent CAD tool for the specification, design, simulation and optimization of optical communication networks. CATO allows the user to build an abstract, possibly incomplete, model of the system and determine its expected performance. Based on design constraints provided by the user, CATO will automatically complete an optimum design, using mathematical programming techniques, intelligent search methods and artificial intelligence (AI). Initial design and testing of a CATO prototype (CATO-1) have recently been completed. The objective was to prove the feasibility of combining AI techniques, simulation techniques, an optical device library and a graphical user interface into a flexible CAD tool for obtaining optimal communication network designs in terms of system cost and performance. CATO-1 is an experimental tool for designing packet-switching wavelength division multiplexing all-optical communication systems using a LAN/MAN ring topology as the underlying network. The two specific AI algorithms incorporated are simulated annealing and a genetic algorithm. CATO-1 finds the optimal number of transceivers for each network node, using an objective function that includes the cost of the devices and the overall system performance.

  18. Exoskeleton plantarflexion assistance for elderly.

    PubMed

    Galle, S; Derave, W; Bossuyt, F; Calders, P; Malcolm, P; De Clercq, D

    2017-02-01

    The elderly are confronted with reduced physical capabilities and an increased metabolic energy cost of walking. Exoskeletons that assist walking have the potential to restore walking capacity by reducing the metabolic cost of walking. However, it is unclear if current exoskeletons can reduce energy cost in the elderly. Our goal was to study the effect of an exoskeleton that assists plantarflexion during push-off on the metabolic energy cost of walking in physically active and healthy elderly. Seven elderly subjects (age 69.3±3.5 y) walked on a treadmill (1.11 m/s) with normal shoes and with the exoskeleton both powered (with assistance) and powered-off (without assistance). After 20 min of habituation on a prior day and 5 min on the test day, subjects were able to walk with the exoskeleton, and assistance of the exoskeleton resulted in a reduction in metabolic cost of 12% versus walking with the exoskeleton powered-off. Walking with the exoskeleton was perceived as less fatiguing for the muscles compared to normal walking. Assistance resulted in a statistically nonsignificant reduction in metabolic cost of 4% versus walking with normal shoes, likely due to the penalty of wearing the exoskeleton powered-off. Also, exoskeleton mechanical power was relatively low compared to the previously identified optimal assistance magnitude in young adults. Future exoskeleton research should focus on further optimizing exoskeleton assistance for specific populations and on considerate integration of exoskeletons in rehabilitation or in daily life. As such, exoskeletons should allow people to walk longer or faster than without assistance and could result in an increase in physical activity and resulting health benefits. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Technical and economical optimization of a full-scale poultry manure treatment process: total ammonia nitrogen balance.

    PubMed

    Alejo-Alvarez, Luz; Guzmán-Fierro, Víctor; Fernández, Katherina; Roeckel, Marlene

    2016-11-01

    A full-scale process for the treatment of 80 tons per day of poultry manure was designed and optimized. A total ammonia nitrogen (TAN) balance was performed at steady state, considering the stoichiometry and the kinetic data from anaerobic digestion (AD) and anaerobic ammonia oxidation. The equipment, reactor design, investment costs, and operational costs were considered. The volume and cost objective functions optimized the process in terms of three variables: the water recycle ratio, the protein conversion during AD, and the TAN conversion in the process. The processes were compared with and without water recycle; savings of 70% and 43% in the annual fresh water consumption and the heating costs, respectively, were achieved. The optimal process complies with the Chilean environmental legislation limit of 0.05 g total nitrogen/L.

  20. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks

    PubMed Central

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. According to extensive simulations, it is found that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571
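    The flavor of the ILP can be conveyed with a toy set-cover-style instance (the switch names, polling costs, and routes below are invented, and the sketch ignores the flow-routing side of the joint problem): choose the cheapest set of switches to poll so that every flow traverses at least one polled switch.

```python
import pulp

switches = ["s1", "s2", "s3", "s4"]
poll_cost = {"s1": 4.0, "s2": 2.0, "s3": 3.0, "s4": 1.0}
flow_routes = {"f1": ["s1", "s2"], "f2": ["s2", "s3"], "f3": ["s3", "s4"]}

prob = pulp.LpProblem("polling_switch_selection", pulp.LpMinimize)
x = {s: pulp.LpVariable(f"x_{s}", cat="Binary") for s in switches}
prob += pulp.lpSum(poll_cost[s] * x[s] for s in switches)   # total polling cost
for f, route in flow_routes.items():
    prob += pulp.lpSum(x[s] for s in route) >= 1, f"cover_{f}"
prob.solve(pulp.PULP_CBC_CMD(msg=False))

chosen = [s for s in switches if x[s].value() > 0.5]
print(f"poll switches {chosen}, cost {pulp.value(prob.objective):.1f}")
```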

  1. Framework to evaluate the worth of hydraulic conductivity data for optimal groundwater resources management in ecologically sensitive areas

    NASA Astrophysics Data System (ADS)

    Feyen, Luc; Gorelick, Steven M.

    2005-03-01

    We propose a framework that combines simulation optimization with Bayesian decision analysis to evaluate the worth of hydraulic conductivity data for optimal groundwater resources management in ecologically sensitive areas. A stochastic simulation optimization management model is employed to plan regionally distributed groundwater pumping while preserving the hydroecological balance in wetland areas. Because predictions made by an aquifer model are uncertain, groundwater supply systems operate below maximum yield. Collecting data from the groundwater system can potentially reduce predictive uncertainty and increase safe water production. The price paid for improvement in water management is the cost of collecting the additional data. Efficient data collection using Bayesian decision analysis proceeds in three stages: (1) The prior analysis determines the optimal pumping scheme and profit from water sales on the basis of known information. (2) The preposterior analysis estimates the optimal measurement locations and evaluates whether each sequential measurement will be cost-effective before it is taken. (3) The posterior analysis then revises the prior optimal pumping scheme and consequent profit, given the new information. Stochastic simulation optimization employing a multiple-realization approach is used to determine the optimal pumping scheme in each of the three stages. The cost of new data must not exceed the expected increase in benefit obtained in optimal groundwater exploitation. An example based on groundwater management practices in Florida aimed at wetland protection showed that the cost of data collection more than paid for itself by enabling a safe and reliable increase in production.
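    The three-stage logic can be compressed into a toy two-state example (all probabilities, profits, and the data cost below are invented, and the groundwater model is replaced by a single uncertain "state"): compare the expected profit of the best pumping plan under the prior with the expected profit after observing a noisy measurement, and collect the data only if the expected gain exceeds its cost.

```python
import numpy as np

prior = np.array([0.5, 0.5])              # P(high K), P(low K)
profit = np.array([[120.0, 20.0],         # plan "pump aggressively", by state
                   [80.0, 70.0]])         # plan "pump conservatively"
likelihood = np.array([[0.8, 0.2],        # P(measurement says high | state)
                       [0.2, 0.8]])       # P(measurement says low  | state)
data_cost = 10.0

# (1) Prior analysis: best plan under current information.
prior_value = (profit @ prior).max()

# (2) Preposterior analysis: expected value of the optimal plan after data.
p_data = likelihood @ prior               # marginal probability of each outcome
post_value = sum(p_data[d] * (profit @ (likelihood[d] * prior / p_data[d])).max()
                 for d in (0, 1))

evsi = post_value - prior_value           # expected value of sample information
print(f"expected worth of the measurement = {evsi:.1f}; collect it: {evsi > data_cost}")
```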

  2. Efficient Geometry Minimization and Transition Structure Optimization Using Interpolated Potential Energy Surfaces and Iteratively Updated Hessians.

    PubMed

    Zheng, Jingjing; Frisch, Michael J

    2017-12-12

    An efficient geometry optimization algorithm based on interpolated potential energy surfaces with iteratively updated Hessians is presented in this work. At each step of geometry optimization (including both minimization and transition structure search), an interpolated potential energy surface is properly constructed by using the previously calculated information (energies, gradients, and Hessians/updated Hessians), and Hessians of the two latest geometries are updated in an iterative manner. The optimized minimum or transition structure on the interpolated surface is used for the starting geometry of the next geometry optimization step. The cost of searching the minimum or transition structure on the interpolated surface and iteratively updating Hessians is usually negligible compared with most electronic structure single gradient calculations. These interpolated potential energy surfaces are often better representations of the true potential energy surface in a broader range than a local quadratic approximation that is usually used in most geometry optimization algorithms. Tests on a series of large and floppy molecules and transition structures both in gas phase and in solutions show that the new algorithm can significantly improve the optimization efficiency by using the iteratively updated Hessians and optimizations on interpolated surfaces.
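    As a concrete instance of an iterative Hessian update, the sketch below applies the widely used direct-BFGS formula to one optimization step (the paper's update scheme and interpolated-surface construction are more elaborate; the quadratic test surface is invented):

```python
import numpy as np

def bfgs_update(H, s, y):
    """Direct BFGS update of an approximate Hessian H.

    s = x_new - x_old (geometry step), y = g_new - g_old (gradient change).
    """
    Hs = H @ s
    return (H
            + np.outer(y, y) / (y @ s)        # add curvature seen along the step
            - np.outer(Hs, Hs) / (s @ Hs))    # remove stale curvature along s

# Toy quadratic surface f(x) = 0.5 x^T A x, whose gradient is A x.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
H = np.eye(2)                                 # initial guess Hessian
x_old, x_new = np.array([1.0, 1.0]), np.array([0.6, 0.8])
s, y = x_new - x_old, A @ x_new - A @ x_old
print(bfgs_update(H, s, y))                   # curvature moves toward A
```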

  3. Multiobjective optimization of low impact development stormwater controls

    NASA Astrophysics Data System (ADS)

    Eckart, Kyle; McPhee, Zach; Bolisetti, Tirupati

    2018-07-01

    Green infrastructure such as Low Impact Development (LID) controls is being employed to manage urban stormwater and restore predevelopment hydrological conditions, besides improving stormwater runoff quality. Since runoff generation and infiltration processes are nonlinear, there is a need to identify the optimal combination of LID controls. A coupled optimization-simulation model was developed by linking the U.S. EPA Stormwater Management Model (SWMM) to the Borg Multiobjective Evolutionary Algorithm (Borg MOEA). The coupled model is capable of performing multiobjective optimization, using SWMM simulations to evaluate potential solutions to the optimization problem. The optimization-simulation tool was used to evaluate low impact development (LID) stormwater controls. A SWMM model was developed, calibrated, and validated for a sewershed in Windsor, Ontario, and LID stormwater controls were tested for three different return periods. LID implementation strategies were optimized using the optimization-simulation model for five different implementation scenarios for each of the three storm events, with the objectives of minimizing peak flow in the storm sewers, reducing total runoff, and minimizing cost. For the sewershed in Windsor, Ontario, the peak runoff and total runoff volume were reduced by 13% and 29%, respectively.

  4. Optimal joint management of a coastal aquifer and a substitute resource

    NASA Astrophysics Data System (ADS)

    Moreaux, M.; Reynaud, A.

    2004-06-01

    This article characterizes the optimal joint management of a coastal aquifer and a costly water substitute. For this purpose we use a mathematical representation of the aquifer that incorporates the displacement of the interface between the seawater and the freshwater of the aquifer. We identify the spatial cost externalities created by users on each other and we show that the optimal water supply depends on the location of users. Users located in the coastal zone exclusively use the costly substitute. Those located in the more upstream area are supplied from the aquifer. At the optimum their withdrawal must take into account the cost externalities they generate on users located downstream. Last, users located in a median zone use the aquifer with a surface transportation cost. We show that the optimum can be implemented in a decentralized economy through a very simple Pigouvian tax. Finally, the optimal and decentralized extraction policies are simulated on a very simple example.

  5. Optimal policies for aggregate recycling from decommissioned forest roads.

    PubMed

    Thompson, Matthew; Sessions, John

    2008-08-01

    To mitigate the adverse environmental impact of forest roads, especially degradation of endangered salmonid habitat, many public and private land managers in the western United States are actively decommissioning roads where practical and affordable. Road decommissioning is associated with reduced long-term environmental impact. When decommissioning a road, it may be possible to recover some aggregate (crushed rock) from the road surface. Aggregate is used on many low volume forest roads to reduce wheel stresses transferred to the subgrade, reduce erosion, reduce maintenance costs, and improve driver comfort. Previous studies have demonstrated the potential for aggregate to be recovered and used elsewhere on the road network, at a reduced cost compared to purchasing aggregate from a quarry. This article investigates the potential for aggregate recycling to provide an economic incentive to decommission additional roads by reducing transport distance and aggregate procurement costs for other actively used roads. Decommissioning additional roads may, in turn, result in improved aquatic habitat. We present real-world examples of aggregate recycling and discuss the advantages of doing so. Further, we present mixed integer formulations to determine optimal levels of aggregate recycling under economic and environmental objectives. Tested on an example road network, incorporation of aggregate recycling demonstrates substantial cost-savings relative to a baseline scenario without recycling, increasing the likelihood of road decommissioning and reduced habitat degradation. We find that aggregate recycling can result in up to 24% in cost savings (economic objective) and up to 890% in additional length of roads decommissioned (environmental objective).

  6. An air-liquid contactor for large-scale capture of CO2 from air.

    PubMed

    Holmes, Geoffrey; Keith, David W

    2012-09-13

    We present a conceptually simple method for optimizing the design of a gas-liquid contactor for capture of carbon dioxide from ambient air, or 'air capture'. We apply the method to a slab geometry contactor that uses components, design and fabrication methods derived from cooling towers. We use mass transfer data appropriate for capture using a strong NaOH solution, combined with engineering and cost data derived from engineering studies performed by Carbon Engineering Ltd, and find that the total costs for air contacting alone (no regeneration) can be of the order of $60 per tonne CO2. We analyse the reasons why our cost estimate diverges from that of other recent reports and conclude that the divergence arises from fundamental design choices rather than from differences in costing methodology. Finally, we review the technology risks and conclude that they can be readily addressed by prototype testing.

  7. Preparation and properties of low-cost graphene counter electrodes for dye-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Wu, Qishuang; Shen, Yue; Wang, Qiandi; Gu, Feng; Cao, Meng; Wang, Linjun

    2013-12-01

    With the advantages of excellent electrical properties, high catalytic activity and low-cost preparation, graphene is one of the most promising carbon materials for replacing expensive Pt as the counter electrode in dye-sensitized solar cells (DSSCs). In this paper, graphene counter electrodes were obtained by a simple doctor-blade coating method on fluorine-doped tin oxide (FTO) substrates. The samples were investigated by X-ray diffraction (XRD), Raman spectroscopy and scanning electron microscopy (SEM). The low-cost graphene electrodes were then applied in typical sandwich-type DSSCs with TiO2 or ZnO photoanodes, and their photoelectric conversion efficiencies (η) were about 4.34% and 2.28%, respectively, which were a little lower than those of Pt electrodes but much higher than those of graphite electrodes. This trend was consistent with the electrochemical impedance spectroscopy (EIS) results. With process optimization, low-cost graphene electrodes can be applied in DSSCs.

  8. Impact of Capital and Current Costs Changes of the Incineration Process of the Medical Waste on System Management Cost

    NASA Astrophysics Data System (ADS)

    Walery, Maria Jolanta

    2017-12-01

    The article describes optimization studies aimed at analysing the impact of changes in the capital and current costs of medical waste incineration on the cost of system management and its structure. The study was conducted on the example of the medical waste management system in the Podlaskie Province, in north-eastern Poland. The operational research carried out under the optimization study was divided into two stages of optimization calculations with assumed technical and economic parameters of the system. In the first stage, the lowest cost of operating the analysed system was generated; in the second, the influence of the input parameter of the system, i.e. the capital and current costs of medical waste incineration, on the economic efficiency index (E) and the spatial structure of the system was determined. Optimization studies were conducted for increases of 25%, 50%, 75% and 100% in the capital and current costs of the incineration process. As a result of the calculations, the highest cost of system operation, 3143.70 PLN/t, was obtained under the assumption of a 100% increase in the capital and current costs of the incineration process, with the economic efficiency index (E) increasing by about 97% relative to run 1.

  9. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to a balance between the quantity supplied and the quantity demanded. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the optimal solution found by PSO.

  10. Optimized design and control of an off grid solar PV/hydrogen fuel cell power system for green buildings

    NASA Astrophysics Data System (ADS)

    Ghenai, C.; Bettayeb, M.

    2017-11-01

    Modelling, simulation, optimization and control strategies are used in this study to design a stand-alone solar PV/fuel cell/battery/generator hybrid power system to serve the electrical load of a commercial building. The main objective is to design an off-grid energy system that meets the desired electric load of the commercial building with a high renewable fraction, low emissions and a low cost of energy. The goal is to manage the energy consumption of the building, reduce the associated cost, and switch from a grid-tied fossil fuel power system to an off-grid renewable and cleaner power system. An energy audit was performed to determine the energy consumption of the building. Hourly simulations, modelling and optimization were performed to determine the performance and cost of the hybrid power configurations under different control strategies. The results show that the hybrid off-grid solar PV/fuel cell/generator/battery/inverter power system offers the best performance among the tested system architectures. Of the total energy generated by the off-grid hybrid power system, 73% is produced by the solar PV, 24% by the fuel cell and 3% by the backup diesel generator. The produced power meets the entire AC load of the building without power shortage (<0.1%). The hybrid power system produces 18.2% excess power that can be used to serve the thermal load of the building. The proposed hybrid power system is sustainable, economically viable and environmentally friendly: a high renewable fraction (66.1%), a low levelized cost of energy (92 $/MWh), and low carbon dioxide emissions (24 kg CO2/MWh) are achieved.

  11. New catalysts for coal liquefaction and new nanocrystalline catalysts synthesis methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linehan, J.C.; Matson, D.W.; Darab, J.G.

    1994-09-01

    The use of coal as a source of transportation fuel is currently economically unfavorable due to an abundant world petroleum supply and the relatively high cost of coal liquefaction. Consequently, a reduction in the cost of coal liquefaction, for example by using less and/or less costly catalysts or lower liquefaction temperatures, must be accomplished if coal is to play a significant role as a source of liquid feedstock for the petrochemical industry. The authors and others have investigated the applicability of using inexpensive iron-based catalysts in place of more costly and environmentally hazardous metal catalysts for direct coal liquefaction. Iron-based catalysts can be effective in liquefying coal and in promoting carbon-carbon bond cleavage in model compounds. The authors have been involved in an ongoing effort to develop and optimize iron-based powders for use in coal liquefaction and related petrochemical applications. Research efforts in this area have been directed at three general areas. The authors have explored ways to optimize the effectiveness of catalyst precursor species through the use of nanocrystalline materials and/or finely divided powders. In this effort, the authors have developed two new nanophase material production techniques, Modified Reverse Micelle (MRM) and the Rapid Thermal Decomposition of precursors in Solution (RTDS). A second effort has been aimed at optimizing the effectiveness of catalysts by variations in other factors. To this end, the authors have investigated the effect that the crystalline phase has on the capacity of iron-based oxide and oxyhydroxide powders to be effectively converted to an active catalyst phase under liquefaction conditions. And finally, the authors have developed methods to produce active catalyst precursor powders in quantities sufficient for pilot-scale testing. Major results in these three areas are summarized.

  12. Optimizing conjunctive use of surface water and groundwater resources with stochastic dynamic programming

    NASA Astrophysics Data System (ADS)

    Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter

    2014-05-01

    Optimal management of conjunctive use of surface water and groundwater has been attempted with different algorithms in the literature. In this study, a hydro-economic modelling approach to optimize conjunctive use of scarce surface water and groundwater resources under uncertainty is presented. A stochastic dynamic programming (SDP) approach is used to minimize the basin-wide total costs arising from water allocations and water curtailments. Dynamic allocation problems with inclusion of groundwater resources proved to be more complex to solve with SDP than pure surface water allocation problems due to head-dependent pumping costs. These dynamic pumping costs strongly affect the total costs and can lead to non-convexity of the future cost function. The water user groups (agriculture, industry, domestic) are characterized by inelastic demands and fixed water allocation and water supply curtailment costs. As in traditional SDP approaches, one step-ahead sub-problems are solved to find the optimal management at any time knowing the inflow scenario and reservoir/aquifer storage levels. These non-linear sub-problems are solved using a genetic algorithm (GA) that minimizes the sum of the immediate and future costs for given surface water reservoir and groundwater aquifer end storages. The immediate cost is found by solving a simple linear allocation sub-problem, and the future costs are assessed by interpolation in the total cost matrix from the following time step. Total costs for all stages, reservoir states, and inflow scenarios are used as future costs to drive a forward moving simulation under uncertain water availability. The use of a GA to solve the sub-problems is computationally more costly than a traditional SDP approach with linearly interpolated future costs. However, in a two-reservoir system the future cost function would have to be represented by a set of planes, and strict convexity in both the surface water and groundwater dimension cannot be maintained. The optimization framework based on the GA is still computationally feasible and represents a clean and customizable method. The method has been applied to the Ziya River basin, China. The basin is located on the North China Plain and is subject to severe water scarcity, which includes surface water droughts and groundwater over-pumping. The head-dependent groundwater pumping costs will enable assessment of the long-term effects of increased electricity prices on the groundwater pumping. The coupled optimization framework is used to assess realistic alternative development scenarios for the basin. In particular the potential for using electricity pricing policies to reach sustainable groundwater pumping is investigated.
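    The backbone of the approach, a backward one-step-ahead recursion over discretized storage states and inflow scenarios, can be sketched in a few lines (a single surface reservoir with invented numbers and a fixed curtailment cost; the groundwater aquifer, head-dependent pumping costs, and the GA used for the non-convex sub-problems are all omitted):

```python
import numpy as np

storages = np.arange(0, 11)             # discretized storage states (units)
inflows = np.array([0, 2, 4])           # inflow scenarios per stage
p_inflow = np.array([0.3, 0.4, 0.3])    # scenario probabilities
demand, curtail_cost = 4, 10.0          # fixed demand, cost per unit shortfall
T = 12                                  # planning stages (e.g. months)

future = np.zeros(len(storages))        # terminal future-cost function
for t in reversed(range(T)):
    new_future = np.empty_like(future)
    for s in storages:
        best = np.inf
        for release in range(demand + 1):            # candidate allocations
            cost = 0.0
            for q, p in zip(inflows, p_inflow):
                supplied = min(release, s + q)        # cannot release more than held
                s_next = min(max(s + q - release, 0), storages[-1])
                cost += p * (curtail_cost * (demand - supplied) + future[s_next])
            best = min(best, cost)
        new_future[s] = best
    future = new_future

print("expected total cost from each starting storage:", np.round(future, 1))
```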

  13. Cost-effectiveness of prostate cancer screening: a simulation study based on ERSPC data.

    PubMed

    Heijnsdijk, E A M; de Carvalho, T M; Auvinen, A; Zappa, M; Nelen, V; Kwiatkowski, M; Villers, A; Páez, A; Moss, S M; Tammela, T L J; Recker, F; Denis, L; Carlsson, S V; Wever, E M; Bangma, C H; Schröder, F H; Roobol, M J; Hugosson, J; de Koning, H J

    2015-01-01

    The results of the European Randomized Study of Screening for Prostate Cancer (ERSPC) trial showed a statistically significant 29% prostate cancer mortality reduction for the men screened in the intervention arm, and a 23% negative impact on the life-years gained because of quality of life. However, alternative prostate-specific antigen (PSA) screening strategies may exist that optimize the effects on mortality reduction, quality of life, overdiagnosis, and costs. Based on data from the ERSPC trial, we predicted the numbers of prostate cancers diagnosed, prostate cancer deaths averted, life-years and quality-adjusted life-years (QALY) gained, and cost-effectiveness of 68 screening strategies starting at age 55 years, with a PSA threshold of 3, using microsimulation modeling. The screening strategies varied by the age at which screening stops and by screening interval (one to 14 years, or a single lifetime screen), and therefore by number of tests. Screening at short intervals of three years or less was more cost-effective than using longer intervals. Screening at ages 55 to 59 years with two-year intervals had an incremental cost-effectiveness ratio of $73 000 per QALY gained and was considered optimal. With this strategy, the lifetime prostate cancer mortality reduction was predicted to be 13%, and 33% of the screen-detected cancers were overdiagnosed. If better quality of life in the post-treatment period could be achieved, an older age of 65 to 72 years for ending screening became optimal. Prostate cancer screening can be cost-effective when it is limited to two or three screens between ages 55 to 59 years. Screening above age 63 years is less cost-effective because of the loss of QALYs due to overdiagnosis.
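
    The strategy comparison above rests on incremental cost-effectiveness ratios computed along the efficiency frontier. The sketch below shows that standard calculation; the strategy names and all cost/QALY figures are invented placeholders, not the study's outputs:

```python
# Sketch: cost-effectiveness frontier with dominance and extended-dominance
# removal, then ICERs between adjacent frontier strategies. Data invented.
def icers_of(front):
    return [(front[k][1] - front[k - 1][1]) / (front[k][2] - front[k - 1][2])
            for k in range(1, len(front))]

def icer_frontier(strategies):
    """strategies: (name, cost, qaly) tuples -> (frontier, ICER list)."""
    pts = sorted(strategies, key=lambda s: (s[1], -s[2]))
    front = []
    for p in pts:
        if front and p[2] <= front[-1][2]:
            continue                      # dominated: costs more, gains less
        front.append(p)
    while True:                           # extended dominance: ICERs must rise
        ic = icers_of(front)
        k = next((k for k in range(1, len(ic)) if ic[k] < ic[k - 1]), None)
        if k is None:
            return front, ic
        del front[k]                      # middle strategy is never optimal

strategies = [("no screening", 0, 10.000),
              ("once in a lifetime", 150, 10.004),
              ("55-59, 4-y interval", 420, 10.006),
              ("55-59, 2-y interval", 650, 10.009)]
front, ic = icer_frontier(strategies)
for (name, _, _), ratio in zip(front[1:], ic):
    print(f"{name}: ${ratio:,.0f} per QALY gained")
```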

  14. Production of Low Cost Carbon-Fiber through Energy Optimization of Stabilization Process.

    PubMed

    Golkarnarenji, Gelayol; Naebe, Minoo; Badii, Khashayar; Milani, Abbas S; Jazar, Reza N; Khayyam, Hamid

    2018-03-05

    To produce high-quality, low-cost carbon fiber-based composites, optimizing the carbon fiber production process and the resulting fiber properties is one of the main keys. The stabilization process is the most important step in carbon fiber production; it consumes a large amount of energy, and its optimization can reduce the cost to a large extent. In this study, two intelligent optimization techniques, namely Support Vector Regression (SVR) and Artificial Neural Network (ANN), were studied and compared on a limited dataset for predicting a physical property (density) of oxidatively stabilized PAN fiber (OPF) in the second zone of a stabilization oven within a carbon fiber production line. The results were then used to optimize the energy consumption of the process. The case study can be beneficial to chemical industries involved in carbon fiber manufacturing for assessing and optimizing different stabilization process conditions at large.
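
    A minimal sketch of the model comparison the abstract describes, using scikit-learn's SVR and MLPRegressor on a synthetic stand-in dataset (the features, ranges, and target relationship are invented; the study used measured oven data):

```python
# Sketch: compare SVR and a small neural network for predicting OPF density
# from oven conditions. The synthetic data below stands in for process data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# invented features: zone temperature (C), residence time (min), draw ratio
X = rng.uniform([220, 10, 1.0], [260, 60, 3.0], size=(120, 3))
y = (1.30 + 0.002 * (X[:, 0] - 220) + 0.001 * X[:, 1]
     + rng.normal(0, 0.005, 120))                  # density (g/cm^3), synthetic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = [
    ("SVR", make_pipeline(StandardScaler(), SVR(C=10, epsilon=0.001))),
    ("ANN", make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 16),
                                       max_iter=5000, random_state=0))),
]
for name, model in models:
    model.fit(X_tr, y_tr)
    print(name, "R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```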

  15. Production of Low Cost Carbon-Fiber through Energy Optimization of Stabilization Process

    PubMed Central

    Golkarnarenji, Gelayol; Naebe, Minoo; Badii, Khashayar; Milani, Abbas S.; Jazar, Reza N.; Khayyam, Hamid

    2018-01-01

    To produce high-quality, low-cost carbon fiber-based composites, optimizing the carbon fiber production process and the resulting fiber properties is one of the main keys. The stabilization process is the most important step in carbon fiber production; it consumes a large amount of energy, and its optimization can reduce the cost to a large extent. In this study, two intelligent optimization techniques, namely Support Vector Regression (SVR) and Artificial Neural Network (ANN), were studied and compared on a limited dataset for predicting a physical property (density) of oxidatively stabilized PAN fiber (OPF) in the second zone of a stabilization oven within a carbon fiber production line. The results were then used to optimize the energy consumption of the process. The case study can be beneficial to chemical industries involved in carbon fiber manufacturing for assessing and optimizing different stabilization process conditions at large. PMID:29510592

  16. Optimal power flow with optimal placement TCSC device on 500 kV Java-Bali electrical power system using genetic Algorithm-Taguchi method

    NASA Astrophysics Data System (ADS)

    Apribowo, Chico Hermanu Brillianto; Ibrahim, Muhammad Hamka; Wicaksono, F. X. Rian

    2018-02-01

    The growing load burden and the increasing complexity of the power system have created a need to optimize power system operation. Optimal power flow (OPF) with optimal placement and rating of a thyristor controlled series capacitor (TCSC) is an effective approach for determining the economic cost of operating the plant and regulating power flow in the power system. The purpose of this study is to minimize the total generation cost by choosing the location and optimal rating of TCSCs using a genetic algorithm combined with design-of-experiments techniques (GA-DOE). In simulations of the 500 kV Java-Bali system with five TCSC compensators, the proposed method reduced the generation cost by 0.89% compared with OPF without TCSC.

  17. Search for a new economic optimum in the management of household waste in Tiaret city (western Algeria).

    PubMed

    Asnoune, M; Abdelmalek, F; Djelloul, A; Mesghouni, K; Addou, A

    2016-11-01

    In household waste management, the objective is always to conceive an optimal integrated system, where 'optimal' and 'integrated' refer to matching the waste with techniques of treatment, valorization, and elimination, generally at the lowest possible cost. The optimization of household waste management using operational methodologies has not yet been applied in any Algerian district. We proposed an optimization of the valorization of household waste in Tiaret city in order to lower the total management cost. The methodology is modelled by non-linear mathematical equations using 28 decision variables and aims to optimally assign the seven components of household waste (i.e. plastic, cardboard/paper, glass, metals, textiles, organic matter and others) among four treatment centres [i.e. waste-to-energy (WTE) or incineration, composting (CM), anaerobic digestion (ANB) or methanization, and landfilling (LF)]. The analysis of the results shows that the variation in total cost is mainly due to the assignment of waste among the treatment centres and that certain treatments cannot be applied to household waste in Tiaret city. On the other hand, certain valorization techniques were favoured by the optimization. In this work, four scenarios were proposed to optimize the system cost, and the modelling shows that the mixed scenario (the three treatment centres CM, ANB, LF) offers the best combination of waste treatment technologies, with an optimal solution for the system (cost and profit).
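
    The assignment structure lends itself to a small allocation model. The sketch below is a linear simplification of the paper's non-linear formulation, with invented tonnages, unit costs, and capacities (a cost of 999 marks a pairing treated as inapplicable):

```python
# Sketch: assign waste-component tonnages to treatment routes at minimum net
# cost (treatment cost minus valorization revenue). All figures invented.
import numpy as np
from scipy.optimize import linprog

components = ["plastic", "paper", "glass", "metal", "textile", "organic", "other"]
tonnes = np.array([12, 18, 6, 4, 5, 45, 10], dtype=float)
routes = ["WTE", "CM", "ANB", "LF"]
cost = np.array([            # net cost per tonne; 999 = inapplicable pairing
    [30, 999, 999, 25],      # plastic
    [28, 40, 45, 25],        # cardboard/paper
    [999, 999, 999, 20],     # glass
    [999, 999, 999, 15],     # metals
    [32, 999, 999, 26],      # textiles
    [45, 18, 15, 30],        # organic matter
    [35, 999, 999, 22],      # other
], dtype=float)

# mass balance: each component fully assigned; route capacities as A_ub
A_eq = np.kron(np.eye(len(components)), np.ones(len(routes)))
A_cap = np.kron(np.ones((1, len(components))), np.eye(len(routes)))
cap = np.array([40, 60, 30, 1e6])        # invented route capacities (t/day)

res = linprog(cost.ravel(), A_ub=A_cap, b_ub=cap, A_eq=A_eq, b_eq=tonnes,
              bounds=(0, None))
alloc = res.x.reshape(len(components), len(routes))
print("total net cost:", round(res.fun, 1))
```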

  18. Memory and Energy Optimization Strategies for Multithreaded Operating System on the Resource-Constrained Wireless Sensor Node

    PubMed Central

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng

    2015-01-01

    Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory- and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using a stack-shifting hybrid scheduling approach. Unlike traditional multithreaded OSs, in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically using the stack-shifting technique. As a result, the memory waste caused by static pre-reservation is avoided. In addition to the stack-shifting dynamic allocation approach, a hybrid scheduling mechanism that decreases both the thread-scheduling overhead and the number of thread stacks is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared with that of a traditional multithreaded OS. Not only the memory cost but also the energy cost is optimized in LiveOS, achieved by using the multi-core “context aware” and multi-core “power-off/wakeup” energy conservation approaches. With these approaches, the energy cost of LiveOS can be reduced by more than 30% compared with a single-core WSN system. The memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make a multithreaded OS feasible on memory-constrained WSN nodes. PMID:25545264

  19. Emailing Drones: From Design to Test Range to ARS Offices and into the Field

    NASA Astrophysics Data System (ADS)

    Fuka, D. R.; Singer, S.; Rodriguez, R., III; Collick, A.; Cunningham, A.; Kleinman, P. J. A.; Manoukis, N. C.; Matthews, B.; Ralston, T.; Easton, Z. M.

    2017-12-01

    Unmanned aerial vehicles (UAVs or `drones') are one of the newest tools available for collecting geo- and biological-science data in the field, though today's commercial drones come in only a small range of options. While scientific research has benefitted from the enhanced topographic and surface characterization data that UAVs can provide through traditional image-based remote sensing techniques, drones have significantly greater mission-specific potential than is currently utilized. The reasons for this under-utilization are twofold: 1) with their broad capabilities comes the need for careful implementation, and as such, the FAA and other regulatory agencies around the world have blanket regulations that can inhibit new designs from being implemented; and 2) current multi-mission, multi-payload commercial drones have to be over-designed to compensate for the fact that they are very difficult to stabilize for multiple payloads, leading to a much higher cost than necessary. For this project, we explore and demonstrate a workflow to optimize the design, testing, approval, and implementation of embarrassingly inexpensive mission-specific drones, with two use cases. The first follows the process from design (at VTech and UH Hilo) to field implementation (by USDA-ARS in PA and Extension in VA) of several custom water quality monitoring drones, printed on demand at ARS and Extension offices after testing at the Pan-Pacific UAS Test Range Complex (PPUTRC). This type of customized drone can allow for an increased understanding of the transition from non-point source to point source agri-chemical and pollutant transport in watershed systems. The second use case follows the same process, resulting in customized drones with pest-specific traps built into the design. This class of customized drone can facilitate IPM pest monitoring programs nationwide, decreasing the intensive and costly quarantine and population elimination measures that currently exist. This multi-institutional project works toward an optimized workflow where scientists can quickly 1) customize drones to meet specific purposes, 2) have them tested at FAA Test Ranges, and 3) get them certified and working in the field, while 4) cutting their cost to significantly less than what is currently available.

  20. Reliability Study of Solder Paste Alloy for the Improvement of Solder Joint at Surface Mount Fine-Pitch Components.

    PubMed

    Rahman, Mohd Nizam Ab; Zubir, Noor Suhana Mohd; Leuveano, Raden Achmad Chairdino; Ghani, Jaharah A; Mahmood, Wan Mohd Faizal Wan

    2014-12-02

    The significant increase in metal costs has forced the electronics industry to develop new materials and methods to reduce costs while maintaining customers' high-quality expectations. This paper considers the problem faced by most electronics manufacturers of reducing costly materials, by introducing a solder paste with an alloy composition of tin 98.3%, silver 0.3%, and copper 0.7%, used for mounting fine-pitch surface mount components on a Printed Wiring Board (PWB). The reliability of the solder joint between electronic components and the PWB is evaluated through the dynamic characteristic test, the thermal shock test, and the Taguchi method after the printing process. After experimenting with the dynamic characteristic test and thermal shock test on 20 boards, the solder paste was still able to provide a high-quality solder joint. In particular, the Taguchi method is used to determine the optimal control parameters and noise factors of the Solder Printer (SP) machine that affect solder volume and solder height. The control parameters include table separation distance, squeegee speed, squeegee pressure, and table speed of the SP machine. The results show that the most significant parameter for solder volume is squeegee pressure (2.0 mm), while for solder height it is the table speed of the SP machine (2.5 mm/s).
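
    In a Taguchi analysis, parameter levels are typically ranked by signal-to-noise (S/N) ratio. A minimal sketch of the "larger-is-better" S/N computation follows; the replicate measurements and levels are invented, not the paper's data:

```python
# Sketch: rank Solder Printer parameter levels by Taguchi S/N ratio
# ("larger is better" for solder volume). Replicate values are invented.
import numpy as np

def sn_larger_is_better(y):
    """S/N = -10 * log10( mean(1/y^2) ); higher is better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# invented replicate solder-volume readings per squeegee-pressure level
levels = {"1.5 mm": [92, 95, 90], "2.0 mm": [99, 101, 100], "2.5 mm": [94, 96, 93]}
for level, y in levels.items():
    print(level, "->", round(sn_larger_is_better(y), 2), "dB")
```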

  1. Reliability Study of Solder Paste Alloy for the Improvement of Solder Joint at Surface Mount Fine-Pitch Components

    PubMed Central

    Rahman, Mohd Nizam Ab.; Zubir, Noor Suhana Mohd; Leuveano, Raden Achmad Chairdino; Ghani, Jaharah A.; Mahmood, Wan Mohd Faizal Wan

    2014-01-01

    The significant increase in metal costs has forced the electronics industry to develop new materials and methods to reduce costs while maintaining customers’ high-quality expectations. This paper considers the problem faced by most electronics manufacturers of reducing costly materials, by introducing a solder paste with an alloy composition of tin 98.3%, silver 0.3%, and copper 0.7%, used for mounting fine-pitch surface mount components on a Printed Wiring Board (PWB). The reliability of the solder joint between electronic components and the PWB is evaluated through the dynamic characteristic test, the thermal shock test, and the Taguchi method after the printing process. After experimenting with the dynamic characteristic test and thermal shock test on 20 boards, the solder paste was still able to provide a high-quality solder joint. In particular, the Taguchi method is used to determine the optimal control parameters and noise factors of the Solder Printer (SP) machine that affect solder volume and solder height. The control parameters include table separation distance, squeegee speed, squeegee pressure, and table speed of the SP machine. The results show that the most significant parameter for solder volume is squeegee pressure (2.0 mm), while for solder height it is the table speed of the SP machine (2.5 mm/s). PMID:28788270

  2. Meta-Analysis and Cost Comparison of Empirical versus Pre-Emptive Antifungal Strategies in Hematologic Malignancy Patients with High-Risk Febrile Neutropenia.

    PubMed

    Fung, Monica; Kim, Jane; Marty, Francisco M; Schwarzinger, Michaël; Koo, Sophia

    2015-01-01

    Invasive fungal disease (IFD) causes significant morbidity and mortality in hematologic malignancy patients with high-risk febrile neutropenia (FN). These patients therefore often receive empirical antifungal therapy. Diagnostic test-guided pre-emptive antifungal therapy has been evaluated as an alternative treatment strategy in these patients. We conducted an electronic search for literature comparing empirical versus pre-emptive antifungal strategies in FN among adult hematologic malignancy patients. We systematically reviewed 9 studies, including randomized controlled trials, cohort studies, and feasibility studies. Random- and fixed-effect models were used to generate pooled relative risk estimates of IFD detection, IFD-related mortality, overall mortality, and rates and duration of antifungal therapy. Heterogeneity was measured via Cochran's Q test, the I2 statistic, and between-study τ2. Incorporating these parameters and the direct costs of drugs and diagnostic testing, we constructed a comparative costing model for the two strategies. We conducted probabilistic sensitivity analysis on pooled estimates and one-way sensitivity analyses on other key parameters with uncertain estimates. Nine published studies met inclusion criteria. Compared with empirical antifungal therapy, pre-emptive strategies were associated with significantly lower antifungal exposure (RR 0.48, 95% CI 0.27-0.85) and duration, without an increase in IFD-related mortality (RR 0.82, 95% CI 0.36-1.87) or overall mortality (RR 0.95, 95% CI 0.46-1.99). The pre-emptive strategy cost $324 less per FN episode than the empirical approach (95% credible interval -$291.88 to $418.65). However, the cost difference was influenced by relatively small changes in the costs of antifungal therapy and diagnostic testing. Compared with empirical antifungal therapy, pre-emptive antifungal therapy in patients with high-risk FN may decrease antifungal use without increasing mortality. We demonstrate a state of economic equipoise between empirical and diagnostic-directed pre-emptive antifungal treatment strategies in the current literature, influenced by small changes in the cost of antifungal therapy and diagnostic testing. This work emphasizes the need for optimization of existing fungal diagnostic strategies, development of more efficient diagnostic strategies, and less toxic and more cost-effective antifungals.
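
    A minimal sketch of the fixed-effect pooling step behind estimates like RR 0.48 (95% CI 0.27-0.85), using inverse-variance weighting on the log scale plus Cochran's Q and the I2 statistic; the per-study numbers below are invented:

```python
# Sketch: fixed-effect pooled relative risk via inverse-variance weighting,
# with heterogeneity statistics. Per-study RRs and CIs are invented.
import numpy as np
from scipy import stats

rr = np.array([0.45, 0.55, 0.40])          # per-study relative risks
ci_hi = np.array([0.90, 1.10, 0.85])       # upper 95% CI bounds
se = (np.log(ci_hi) - np.log(rr)) / 1.96   # back out log-scale standard errors
w = 1.0 / se**2                            # inverse-variance weights

log_pooled = np.sum(w * np.log(rr)) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))
pooled = np.exp(log_pooled)
ci = np.exp(log_pooled + np.array([-1.96, 1.96]) * se_pooled)

Q = np.sum(w * (np.log(rr) - log_pooled) ** 2)          # Cochran's Q
I2 = max(0.0, (Q - (len(rr) - 1)) / Q) * 100 if Q > 0 else 0.0
p_het = 1 - stats.chi2.cdf(Q, df=len(rr) - 1)
print(f"pooled RR {pooled:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), "
      f"I^2 {I2:.0f}%, Q p-value {p_het:.2f}")
```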

  3. Soaring energetics and glide performance in a moving atmosphere

    PubMed Central

    Reynolds, Kate V.; Thomas, Adrian L. R.

    2016-01-01

    Here, we analyse the energetics, performance and optimization of flight in a moving atmosphere. We begin by deriving a succinct expression describing all of the mechanical energy flows associated with gliding, dynamic soaring and thermal soaring, which we use to explore the optimization of gliding in an arbitrary wind. We use this optimization to revisit the classical theory of the glide polar, which we expand upon in two significant ways. First, we compare the predictions of the glide polar for different species under the various published models. Second, we derive a glide optimization chart that maps every combination of headwind and updraft speed to the unique combination of airspeed and inertial sink rate at which the aerodynamic cost of transport is expected to be minimized. With these theoretical tools in hand, we test their predictions using empirical data collected from a captive steppe eagle (Aquila nipalensis) carrying an inertial measurement unit, global positioning system, barometer and pitot tube. We show that the bird adjusts airspeed in relation to headwind speed as expected if it were seeking to minimize its aerodynamic cost of transport, but find only weak evidence to suggest that it adjusts airspeed similarly in response to updrafts during straight and interthermal glides. This article is part of the themed issue ‘Moving in a moving medium: new perspectives on flight’. PMID:27528788

  4. An EGO-like optimization framework for sensor placement optimization in modal analysis

    NASA Astrophysics Data System (ADS)

    Morlier, Joseph; Basile, Aniello; Chiplunkar, Ankit; Charlotte, Miguel

    2018-07-01

    In aircraft design, ground/flight vibration tests are conducted to extract the aircraft’s modal parameters (natural frequencies, damping ratios and mode shapes), also known as the modal basis. The main problem in aircraft modal identification is the large number of sensors needed, which increases operational time and costs. The goal of this paper is to minimize the number of sensors by optimizing their locations in order to reconstruct a truncated modal basis of N mode shapes with a high level of reconstruction accuracy. There are several methods for solving sensor placement optimization (SPO) problems, but here an original approach is established, based on an iterative mode-shape reconstruction process using an adaptive Kriging metamodel, so-called efficient global optimization (EGO)-SPO. The main idea is to solve an optimization problem in which the sensor locations are the variables and the objective function is defined by maximizing a trace criterion on the so-called AutoMAC matrix. Results on a 2D wing demonstrate a 30% reduction in sensors using the EGO-SPO strategy.
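
    The AutoMAC objective can be sketched compactly. The snippet below scores candidate sensor subsets by the off-diagonal terms of the AutoMAC of the truncated mode shapes (a common surrogate for keeping modes distinguishable); the mode-shape matrix is random and exhaustive search stands in for the paper's EGO/Kriging loop:

```python
# Sketch: score sensor subsets via the AutoMAC of truncated mode shapes.
# PHI and the subset search are illustrative, not the paper's wing model.
import itertools
import numpy as np

rng = np.random.default_rng(1)
PHI = rng.normal(size=(12, 4))        # 12 candidate locations x 4 target modes

def automac(phi):
    g = phi.T @ phi                   # modal cross-products
    d = np.sqrt(np.diag(g))
    return (g / np.outer(d, d)) ** 2  # MAC matrix; diagonal == 1

def score(rows):
    m = automac(PHI[list(rows), :])
    return np.max(m - np.eye(m.shape[0]))      # worst off-diagonal term

best = min(itertools.combinations(range(12), 6), key=score)
print("best 6-sensor set:", best, "| max off-diagonal MAC:",
      round(score(best), 3))
```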

  5. Application of Multi-Objective Human Learning Optimization Method to Solve AC/DC Multi-Objective Optimal Power Flow Problem

    NASA Astrophysics Data System (ADS)

    Cao, Jia; Yan, Zheng; He, Guangyu

    2016-06-01

    This paper introduces an efficient algorithm, the multi-objective human learning optimization method (MOHLO), to solve the AC/DC multi-objective optimal power flow problem (MOPF). First, the AC/DC MOPF model including wind farms is constructed, with three objective functions: operating cost, power loss, and pollutant emission. Combining the non-dominated sorting technique and the crowding distance index, the MOHLO method is derived, involving an individual learning operator, a social learning operator, a random exploration learning operator and adaptive strategies. Both the proposed MOHLO method and the non-dominated sorting genetic algorithm II (NSGA-II) are tested on an improved IEEE 30-bus AC/DC hybrid system. Simulation results show that the MOHLO method has excellent search efficiency and a powerful ability to locate optimal solutions. Above all, the MOHLO method obtains a more complete Pareto front than the NSGA-II method. However, the choice of the optimal solution from the Pareto front depends mainly on whether the decision makers prioritize economics or energy saving and emission reduction.
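
    A minimal sketch of the crowding-distance measure that both MOHLO and NSGA-II use (alongside non-dominated sorting) to keep the Pareto front well spread; the three-objective values below (cost, loss, emission) are invented:

```python
# Sketch: crowding distance for one non-dominated front. Data invented.
import numpy as np

def crowding_distance(F):
    """F: (n_points, n_objectives) array of one non-dominated front."""
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        d[order[0]] = d[order[-1]] = np.inf        # keep extreme points
        span = F[order[-1], j] - F[order[0], j]
        if span == 0:
            continue
        # each interior point is credited with the gap between its neighbours
        d[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return d

front = np.array([[1.0, 9.0, 4.0], [2.0, 7.0, 5.0],
                  [3.0, 6.0, 2.0], [5.0, 2.0, 3.0]])
print(crowding_distance(front))    # larger distance -> better spread candidate
```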

  6. A stochastic discrete optimization model for designing container terminal facilities

    NASA Astrophysics Data System (ADS)

    Zukhruf, Febri; Frazila, Russ Bona; Burhani, Jzolanda Tsavalista

    2017-11-01

    Because uncertainty substantially affects total transportation cost, it is an important consideration in container terminals, which incorporate several modes and transshipment processes. This paper presents a stochastic discrete optimization model for designing a container terminal, involving decisions on facility improvement actions. The container terminal operation model is constructed by accounting for variation in demand and facility performance. In addition, to illustrate a conflicting issue that arises in practice in terminal operation, the model also takes into account the possible increase in facility delays due to the increasing amount of equipment, especially container trucks. These variations reflect the uncertainty in container terminal operation. A Monte Carlo simulation is invoked to propagate the variations by sampling from the observed distributions. The problem is cast as a combinatorial optimization problem for investigating the optimal facility improvement decision. A new variant of glow-worm swarm optimization (GSO), rarely explored in the transportation field, is proposed to solve the optimization. The model's applicability is tested using the actual characteristics of a container terminal.
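
    The inner evaluation loop of such a model can be sketched as follows: each candidate improvement decision is scored by simulating uncertain demand and equipment performance, and the discrete design space is then searched (exhaustive enumeration stands in here for the GSO metaheuristic; every distribution and cost figure is invented):

```python
# Sketch: Monte Carlo evaluation of expected total cost for candidate
# terminal designs, including a congestion penalty for extra trucks.
import numpy as np

rng = np.random.default_rng(42)

def expected_cost(n_cranes, n_trucks, n_samples=5000):
    demand = rng.normal(1000, 150, n_samples)            # containers/day
    crane_rate = rng.normal(30, 3, n_samples) * n_cranes # uncertain performance
    # more trucks raise throughput but add congestion delay (invented form)
    truck_rate = 20 * n_trucks / (1 + 0.02 * n_trucks)
    service = np.minimum(crane_rate, truck_rate)
    backlog = np.maximum(demand - service, 0)            # unserved demand
    capex = 500 * n_cranes + 30 * n_trucks
    return capex + 2.0 * backlog.mean()

designs = [(c, t) for c in range(2, 6) for t in range(20, 61, 10)]
best = min(designs, key=lambda d: expected_cost(*d))
print("best (cranes, trucks):", best)
```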

  7. Costs, effectiveness, and workload impact of management strategies for women with an adnexal mass.

    PubMed

    Havrilesky, Laura J; Dinan, Michaela; Sfakianos, Gregory P; Curtis, Lesley H; Barnett, Jason C; Van Gorp, Toon; Myers, Evan R

    2015-01-01

    We compared the estimated clinical outcomes, costs, and physician workload resulting from available strategies for deciding which women with an adnexal mass should be referred to a gynecologic oncologist. We used a microsimulation model to compare five referral strategies: 1) American Congress of Obstetricians and Gynecologists (ACOG) guidelines, 2) Multivariate Index Assay (MIA) algorithm, 3) Risk of Malignancy Algorithm (ROMA), 4) CA125 alone with lowered cutoff values to prioritize test sensitivity over specificity, and 5) referral of all women (Refer All). Test characteristics and relative survival were obtained from the literature and from data from a biomarker validation study. Medical costs were estimated using Medicare reimbursements. Travel costs were estimated using discharge data from Surveillance, Epidemiology and End Results-Medicare and State Inpatient Databases. Analyses were performed separately for pre- and postmenopausal women (60 000 "subjects" in each), repeated 10 000 times. Refer All was cost-effective compared with less expensive strategies in both postmenopausal women (incremental cost-effectiveness ratio [ICER] $9423 per life-year saved [LYS] compared with CA125) and premenopausal women (ICER $10 644/LYS compared with CA125), but would result in an additional 73 cases/year/subspecialist. MIA was more expensive and less effective than Refer All in pre- and postmenopausal women. If Refer All is not a viable option, CA125 is an optimal strategy in postmenopausal women. Referral of all women to a subspecialist is an efficient strategy for managing women with adnexal masses requiring surgery, assuming sufficient capacity for the additional surgical volume. If a test-based triage strategy is needed, CA125 with lowered cutoff values is a cost-effective strategy.

  8. Cost effectiveness of OptiMal® rapid diagnostic test for malaria in remote areas of the Amazon Region, Brazil

    PubMed Central

    2010-01-01

    Background In areas with limited structure in place for microscopy diagnosis, rapid diagnostic tests (RDT) have been demonstrated to be effective. Method The cost-effectiveness of OptiMal® and thick smear microscopy was estimated and compared. Data were collected in remote areas of 12 municipalities in the Brazilian Amazon. Data sources included the National Malaria Control Programme of the Ministry of Health, the National Healthcare System reimbursement table, hospitalization records, primary data collected from the municipalities, and the scientific literature. The perspective was that of the Brazilian public health system, the analytical horizon was from the start of fever until the diagnostic result was provided to the patient, and the temporal reference was the year 2006. The results were expressed as cost per adequately diagnosed case in 2006 U.S. dollars. Sensitivity analysis was performed considering key model parameters. Results In the base case scenario, considering 92% and 95% sensitivity for thick smear microscopy for Plasmodium falciparum and Plasmodium vivax, respectively, and 100% specificity for both species, thick smear microscopy is more costly and more effective, with an incremental cost estimated at US$549.9 per adequately diagnosed case. In sensitivity analysis, when the sensitivity and specificity of microscopy for P. vivax were 0.90 and 0.98, respectively, and when its sensitivity for P. falciparum was 0.83, the RDT was more cost-effective than microscopy. Conclusion Microscopy is more cost-effective than OptiMal® in these remote areas if the high accuracy of microscopy is maintained in the field. Decisions regarding the use of rapid tests for the diagnosis of malaria in these areas depend on current microscopy accuracy in the field. PMID:20937094
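
    A minimal sketch of the comparison's arithmetic: expected adequately diagnosed cases are computed from sensitivity/specificity and prevalence, and the incremental cost per extra adequate diagnosis follows. The prevalences, accuracies, and unit costs below are invented, not the study's field data:

```python
# Sketch: incremental cost per additional adequately diagnosed case when
# comparing microscopy with an RDT. All inputs are invented placeholders.
def adequately_diagnosed(prev_fal, prev_viv, sens_f, sens_v, spec):
    """Expected share of patients receiving a correct diagnosis."""
    healthy = 1 - prev_fal - prev_viv
    return prev_fal * sens_f + prev_viv * sens_v + healthy * spec

prev_f, prev_v = 0.05, 0.15
micro = {"cost": 8.5,
         "eff": adequately_diagnosed(prev_f, prev_v, 0.92, 0.95, 1.00)}
rdt = {"cost": 6.0,
       "eff": adequately_diagnosed(prev_f, prev_v, 0.88, 0.90, 0.98)}

icer = (micro["cost"] - rdt["cost"]) / (micro["eff"] - rdt["eff"])
print(f"incremental cost per extra adequate diagnosis: ${icer:.1f}")
```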

  9. Application of a New Integrated Decision Support Tool (i-DST) for Urban Water Infrastructure: Analyzing Water Quality Compliance Pathways for Three Los Angeles Watersheds

    NASA Astrophysics Data System (ADS)

    Gallo, E. M.; Hogue, T. S.; Bell, C. D.; Spahr, K.; McCray, J. E.

    2017-12-01

    Receiving streams and waterbodies in urban watersheds are increasingly polluted by stormwater runoff. The implementation of Green Infrastructure (GI), which includes Low Impact Developments (LIDs) and Best Management Practices (BMPs), within a watershed aims to mitigate the effects of urbanization by reducing pollutant loads, runoff volume, and storm peak flow. Stormwater modeling is generally used to assess the impact of GI implemented within a watershed. These modeling tools are useful for determining the optimal suite of GI to maximize pollutant load reduction and minimize cost. However, stormwater management for most resource managers and communities also includes grey and hybrid stormwater infrastructure. An integrated decision support tool, called i-DST, that allows for the optimization and comprehensive life-cycle cost assessment of grey, green, and hybrid stormwater infrastructure is currently being developed. The i-DST tool will evaluate optimal stormwater runoff management by taking into account the diverse economic, environmental, and societal needs associated with watersheds across the United States. Three watersheds in southern California serve as test sites and assist in the development and initial application of the i-DST tool. The Ballona Creek, Dominguez Channel, and Los Angeles River watersheds are located in highly urbanized Los Angeles County. The water quality of the river channels flowing through each is impaired by heavy metals, including copper, lead, and zinc. However, despite the watersheds being adjacent to one another within the same county, modeling results using the EPA System for Urban Stormwater Treatment and Analysis INtegration (SUSTAIN) found that the optimal path to compliance in each watershed differs significantly. The differences include varied costs, suites of BMPs, and ancillary benefits. This research analyzes how the economic, physical, and hydrological differences between the three watersheds shape the optimal plan for stormwater management.

  10. Optimal Cut-Off Points of Fasting Plasma Glucose for Two-Step Strategy in Estimating Prevalence and Screening Undiagnosed Diabetes and Pre-Diabetes in Harbin, China

    PubMed Central

    Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

    2015-01-01

    To identify optimal cut-off points of fasting plasma glucose (FPG) for a two-step strategy in screening abnormal glucose metabolism and estimating prevalence in the general Chinese population, a population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2-hour post-load glucose from the oral glucose tolerance test in all participants. The screening potential of FPG, the cost per case identified by the two-step strategy, and optimal FPG cut-off points are described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent or 9.0% of participants were diagnosed with pre-diabetes using 2003 ADA criteria or 1999 WHO criteria, respectively. The optimal FPG cut-off points for the two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence were reduced to nearly 38% for pre-diabetes and 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in a hyperglycemic condition. Using optimal FPG cut-off points for the two-step strategy in the Chinese population may be more effective and less costly by reducing missed diagnoses of hyperglycemic conditions. PMID:25785585
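
    A minimal sketch of the trade-off the study quantifies: sweep candidate FPG cut-offs, and for each compute the sensitivity of the first-stage screen and the cost per case identified when screen-positives go on to the confirmatory OGTT. The glucose distributions and unit costs are synthetic; only the 12.7% prevalence is taken from the abstract:

```python
# Sketch: choosing an FPG cut-off for a two-step screen by trading
# sensitivity against cost per case identified. Data synthetic.
import numpy as np

rng = np.random.default_rng(7)
n = 8000
diabetic = rng.random(n) < 0.127                 # prevalence from the abstract
fpg = np.where(diabetic,
               rng.normal(7.2, 1.5, n),          # invented diseased distribution
               rng.normal(5.0, 0.6, n))          # invented healthy distribution

COST_FPG, COST_OGTT = 5.0, 50.0                  # invented unit costs per test
for cut in [4.9, 5.0, 5.3, 5.6, 6.1]:
    screen_pos = fpg >= cut
    found = (screen_pos & diabetic).sum()        # cases reaching the OGTT
    sens = found / diabetic.sum()
    total = n * COST_FPG + screen_pos.sum() * COST_OGTT
    print(f"cut {cut}: sensitivity {sens:.2f}, cost/case {total / found:.0f}")
```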

  11. Optimization of a yeast RNA interference system for controlling gene expression and enabling rapid metabolic engineering.

    PubMed

    Crook, Nathan C; Schmitz, Alexander C; Alper, Hal S

    2014-05-16

    Reduction of endogenous gene expression is a fundamental operation of metabolic engineering, yet current methods for gene knockdown (i.e., genome editing) remain laborious and slow, especially in yeast. In contrast, RNA interference allows facile and tunable gene knockdown via a simple plasmid transformation step, enabling metabolic engineers to rapidly prototype knockdown strategies in multiple strains before expending significant cost to undertake genome editing. Although RNAi is naturally present in a myriad of eukaryotes, it has only been recently implemented in Saccharomyces cerevisiae as a heterologous pathway and so has not yet been optimized as a metabolic engineering tool. In this study, we elucidate a set of design principles for the construction of hairpin RNA expression cassettes in yeast and implement RNA interference to quickly identify routes for improvement of itaconic acid production in this organism. The approach developed here enables rapid prototyping of knockdown strategies and thus accelerates and reduces the cost of the design-build-test cycle in yeast.

  12. Finite element analyses of continuous filament ties for masonry applications : final report for the Arquin Corporation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinones, Armando, Sr.; Bibeau, Tiffany A.; Ho, Clifford Kuofei

    2008-08-01

    Finite-element analyses were performed to simulate the response of a hypothetical vertical masonry wall subject to different lateral loads with and without continuous horizontal filament ties laid between rows of concrete blocks. A static loading analysis and cost comparison were also performed to evaluate optimal materials and designs for the spacers affixed to the filaments. Results showed that polypropylene, ABS, and polyethylene (high density) were suitable materials for the spacers based on performance and cost, and the short T-spacer design was optimal based on its performance and functionality. Simulations of vertical walls subject to static loads representing 100 mph winds (0.2 psi) and a seismic event (0.66 psi) showed that the simulated walls performed similarly and adequately when subject to these loads with and without the ties. Additional simulations and tests are required to assess the performance of actual walls with and without the ties under greater loads and more realistic conditions (e.g., cracks, non-linear response).

  13. Adaptive partially hidden Markov models with application to bilevel image coding.

    PubMed

    Forchhammer, S; Rasmussen, T S

    1999-01-01

    Partially hidden Markov models (PHMMs) have previously been introduced. The transition and emission/output probabilities from hidden states, as known from the HMMs, are conditioned on the past. This way, the HMM may be applied to images introducing the dependencies of the second dimension by conditioning. In this paper, the PHMM is extended to multiple sequences with a multiple token version and adaptive versions of PHMM coding are presented. The different versions of the PHMM are applied to lossless bilevel image coding. To reduce and optimize the model cost and size, the contexts are organized in trees and effective quantization of the parameters is introduced. The new coding methods achieve results that are better than the JBIG standard on selected test images, although at the cost of increased complexity. By the minimum description length principle, the methods presented for optimizing the code length may apply as guidance for training (P)HMMs for, e.g., segmentation or recognition purposes. Thereby, the PHMM models provide a new approach to image modeling.

  14. SeHCAT [tauroselcholic (selenium-75) acid] for the investigation of bile acid malabsorption and measurement of bile acid pool loss: a systematic review and cost-effectiveness analysis.

    PubMed

    Riemsma, R; Al, M; Corro Ramos, I; Deshpande, S N; Armstrong, N; Lee, Y-C; Ryder, S; Noake, C; Krol, M; Oppe, M; Kleijnen, J; Severens, H

    2013-12-01

    The principal diagnosis/indication for this assessment is chronic diarrhoea due to bile acid malabsorption (BAM). Diarrhoea can be defined as the abnormal passage of loose or liquid stools more than three times daily and/or a daily stool weight > 200 g per day and is considered to be chronic if it persists for more than 4 weeks. The cause of chronic diarrhoea in adults is often difficult to ascertain and patients may undergo several investigations without a definitive cause being identified. BAM is one of several causes of chronic diarrhoea and results from failure to absorb bile acids (which are required for the absorption of dietary fats and sterols in the intestine) in the distal ileum. For people with chronic diarrhoea with unknown cause and in people with Crohn's disease and chronic diarrhoea with unknown cause (i.e. before resection): (1) What are the effects of selenium-75-homocholic acid taurine (SeHCAT) compared with no SeHCAT in terms of chronic diarrhoea, other health outcomes and costs? (2) What are the effects of bile acid sequestrants (BASs) compared with no BASs in people with a positive or negative SeHCAT test? (3) Does a positive or negative SeHCAT test predict improvement in terms of chronic diarrhoea, other health outcomes and costs? A systematic review was conducted to summarise the evidence on the clinical effectiveness of SeHCAT for the assessment of BAM and the measurement of bile acid pool loss. Search strategies were based on target condition and intervention, as recommended in the Centre for Reviews and Dissemination (CRD) guidance for undertaking reviews in health care and the Cochrane Handbook for Diagnostic Test Accuracy Reviews. The following databases were searched up to April 2012: MEDLINE; MEDLINE In-Process & Other Non-Indexed Citations; EMBASE; the Cochrane Databases; Database of Abstracts of Reviews of Effects; Health Technology Assessment (HTA) Database; and Science Citation Index. Research registers and conference proceedings were also searched. Systematic review methods followed the principles outlined in the CRD guidance for undertaking reviews in health care and the National Institute for Health and Care Excellence (NICE) Diagnostic Assessment Programme interim methods statement. In the health economic analysis, the cost-effectiveness of SeHCAT for the assessment of BAM, in patients with chronic diarrhoea, was estimated in two different populations. The first is the population of patients with chronic diarrhoea with unknown cause and symptoms suggestive of diarrhoea-predominant irritable bowel syndrome (IBS-D) and the second population concerns patients with Crohn's disease without ileal resection with chronic diarrhoea. For each population, three models were combined: (1) a short-term decision tree that models the diagnostic pathway and initial response to treatment (first 6 months); (2) a long-term Markov model that estimates the lifetime costs and effects for patients initially receiving BAS; and (3) a long-term Markov model that estimates the lifetime costs and effects for patients initially receiving regular treatment (IBS-D treatment in the first population and Crohn's treatment in the second population). Incremental cost-effectiveness ratios were estimated as additional cost per additional responder in the short term (first 6 months) and per additional quality-adjusted life-year (QALY) in the long term (lifetime). We found three studies assessing the relationship between the SeHCAT test and response to treatment with cholestyramine. 
However, the studies had small numbers of patients with chronic diarrhoea of unknown cause, and they used different cut-offs to define BAM. For the short term (first 6 months), when trial of treatment is not considered as a comparator, the optimal choice depends on the willingness to pay for an additional responder. For lower values (between £1500 and £4600) the choice will be no SeHCAT in all scenarios; for higher values either SeHCAT 10% or SeHCAT 15% becomes cost-effective. For the lifetime perspective, the various scenarios showed widely differing results: in the threshold range of £20,000-30,000 per QALY gained, we found the optimal choice to be either no SeHCAT, SeHCAT 5% (only IBS-D) or SeHCAT 15%. When trial of treatment is considered a comparator, the analysis showed that for the short term, trial of treatment is the optimal choice across a range of scenarios. For the lifetime perspective with trial of treatment, again the various scenarios show widely differing results. Depending on the scenario, in the threshold range of £20,000-30,000 per QALY gained, we found the optimal choice to be either trial of treatment, no SeHCAT or SeHCAT 15%. In conclusion, the various analyses show that for both populations considerable decision uncertainty exists and that no firm conclusions can be formulated about which strategy is optimal. Standardisation of the definition of a positive SeHCAT test should be the first step in assessing the usefulness of this test. As there is no reference standard for the diagnosis of BAM and SeHCAT testing provides a continuous measure of metabolic function, diagnostic test accuracy (DTA) studies are not the most appropriate study design. However, in studies where all patients are tested with SeHCAT and all patients are treated with BASs, response to treatment can provide a surrogate reference standard; further DTA studies of this type may provide information on the ability of SeHCAT to predict response to BASs. A potentially more informative option would be multivariate regression modelling of treatment response (dependent variable), with the SeHCAT result and other candidate clinical predictors as covariates. Such a study design could also inform the definition of a positive SeHCAT result. The study is registered as PROSPERO CRD42012001911 and was funded by the National Institute for Health Research Health Technology Assessment programme.

  15. Hydrothermal Liquefaction and Upgrading of Municipal Wastewater Treatment Plant Sludge: A Preliminary Techno-Economic Analysis, Rev.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snowden-Swan, Lesley J.; Zhu, Yunhua; Jones, Susanne B.

    A preliminary process model and techno-economic analysis (TEA) was completed for fuel produced from hydrothermal liquefaction (HTL) of sludge waste from a municipal wastewater treatment plant (WWTP) and subsequent biocrude upgrading. The model is adapted from previous work by Jones et al. (2014) for algae HTL, using experimental data generated in fiscal year 2015 (FY15) bench-scale HTL testing of sludge waste streams. Testing was performed on sludge samples received from Metro Vancouver's Annacis Island WWTP (Vancouver, B.C.) as part of a collaborative project with the Water Environment and Reuse Foundation (WERF). The full set of sludge HTL testing data from this effort will be documented in a separate report to be issued by WERF. This analysis is based on limited testing data and therefore should be considered preliminary. In addition, the testing was conducted with the goal of successful operation, and therefore does not represent an optimized process. Future refinements are necessary to improve the robustness of the model, including a cross-check of modeled biocrude components with the experimental GCMS data and investigation of the equipment costs most appropriate at the relatively small scales used here. Environmental sustainability metrics analysis is also needed to understand the broader impact of this technology pathway. The base case scenario for the analysis consists of 10 HTL plants, each processing 100 dry U.S. ton/day (92.4 ton/day on a dry, ash-free basis) of sludge waste and producing 234 barrels per stream day (BPSD) of biocrude, feeding into a centralized biocrude upgrading facility that produces 2,020 barrels per standard day of final fuel. This scale was chosen based upon initial wastewater treatment plant data collected by PNNL's resource assessment team from the EPA's Clean Watersheds Needs Survey database (EPA 2015a) and a rough estimate of the potential sludge availability within a 100-mile radius. In addition, we received valuable feedback from the wastewater treatment industry as part of the WERF collaboration that helped form the basis for the selected HTL and upgrading plant scales and feedstock credit (current cost of disposal). It is assumed that the sludge is currently disposed of at $16.20/wet ton ($46/dry ton at 35% solids; $50/ton on a dry, ash-free basis), and this is included as a feedstock credit in the operating costs. The base case assumptions result in a minimum biocrude selling price of $3.8/gge and a minimum final upgraded fuel selling price of $4.9/gge. Several areas of process improvement and refinements to the analysis have the potential to significantly improve economics relative to the base case:
    • Optimization of HTL sludge feed solids content
    • Optimization of HTL biocrude yield
    • Optimization of HTL reactor liquid hourly space velocity (LHSV)
    • Optimization of fuel yield from hydrotreating
    • Combined large and small HTL scales specific to regions (e.g., metropolitan and suburban plants)
    Combined improvements believed to be achievable in these areas can potentially reduce the minimum selling price of biocrude and final upgraded fuel by about 50%. Further improvements may be possible through recovery of higher-value components from the HTL aqueous phase, as being investigated under separate PNNL projects. Upgrading the biocrude at an existing petroleum refinery could also reduce the minimum fuel selling price (MFSP), although this option requires further testing to ensure compatibility and mitigation of risks to a refinery. Finally, recycling the HTL aqueous phase product stream back to the headworks of the WWTP (with no catalytic hydrothermal gasification treatment) can significantly reduce cost. This option is uniquely appropriate for application at a water treatment facility but also requires further investigation to determine any technical and economic challenges related to the extra chemical oxygen demand (COD) associated with the recycled water.

  16. Cost-effectiveness analysis of pharmacogenetic-guided warfarin dosing in Thailand.

    PubMed

    Chong, Huey Yi; Saokaew, Surasak; Dumrongprat, Kuntika; Permsuwan, Unchalee; Wu, David Bin-Chia; Sritara, Piyamitr; Chaiyakunapruk, Nathorn

    2014-12-01

    Pharmacogenetic (PGx) testing is a useful tool for guiding physicians in initiating an optimal warfarin dose. To implement such a strategy, evidence on its economic value is needed. This study aimed to determine the cost-effectiveness of PGx-guided warfarin dosing compared with usual care (UC). A decision-analytic model was used to compare projected lifetime costs and quality-adjusted life years (QALYs) accrued to warfarin users through PGx or UC for a hypothetical cohort of 1,000 patients. The model was populated with relevant information from a systematic review and an electronic hospital database. Incremental cost-effectiveness ratios (ICERs) were calculated from healthcare system and societal perspectives. All costs are presented in year-2013 values. A series of sensitivity analyses was performed to determine the robustness of the findings. From the healthcare system perspective, PGx increases QALYs by 0.002 and costs by 2,959 THB (99 USD) compared with UC. Thus, the ICER is 1,477,042 THB (49,234 USD) per QALY gained. From the societal perspective, PGx results in 0.002 QALYs gained and increases costs by 2,953 THB (98 USD) compared with UC (ICER 1,473,852 THB [49,128 USD] per QALY gained). Results are sensitive to the risk ratio (RR) of major bleeding in VKORC1 variants, the efficacy of PGx-guided dosing, and the cost of the PGx test. Our finding suggests that PGx-guided warfarin dosing is unlikely to be a cost-effective intervention in Thailand. This evidence assists policy makers and clinicians in efficiently allocating scarce resources.

  17. A duality framework for stochastic optimal control of complex systems

    DOE PAGES

    Malikopoulos, Andreas A.

    2016-01-01

    In this study, we address the problem of minimizing the long-run expected average cost of a complex system consisting of interactive subsystems. We formulate a multiobjective optimization problem over the one-stage expected costs of the subsystems and provide a duality framework to prove that the control policy yielding the Pareto optimal solution minimizes the average cost criterion of the system. We provide conditions for existence and a geometric interpretation of the solution. For practical situations with constraints consistent with those studied here, our results imply that the Pareto control policy may be of value when we seek to derive the optimal control policy online in complex systems.
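
    In generic notation (not the paper's own symbols), the construction can be sketched as follows: each subsystem i contributes a long-run average cost, and a Pareto-optimal policy can be recovered by weighted-sum scalarization:

```latex
% Long-run expected average cost of subsystem i under policy \pi:
J_i(\pi) = \limsup_{T \to \infty} \frac{1}{T}\,
  \mathbb{E}\!\left[\sum_{t=0}^{T-1} c_i\bigl(x_t, \pi(x_t)\bigr)\right],
  \qquad i = 1, \dots, N.

% Weighted-sum scalarization of the multiobjective problem:
\pi^{*}(\alpha) \in \arg\min_{\pi}\; \sum_{i=1}^{N} \alpha_i\, J_i(\pi),
  \qquad \alpha_i \ge 0,\; \sum_{i=1}^{N} \alpha_i = 1.
```

    The abstract's claim is then that, under the paper's conditions, the policy yielding the Pareto optimal solution of the subsystem costs also minimizes the system-wide average cost criterion.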

  18. New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program

    NASA Technical Reports Server (NTRS)

    Strain, D.; Levy, R.

    1986-01-01

    The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full model computer runs. The addition of the new reflective symmetry analysis design capabilities to the IDEAS program allows processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full model iterative design results identical to those of actual full model program executions at substantially reduced cost, time, and computer storage.

  19. Optimizing sterilization logistics in hospitals.

    PubMed

    van de Klundert, Joris; Muls, Philippe; Schadd, Maarten

    2008-03-01

    This paper deals with the optimization of the flow of sterile instruments in hospitals, which takes place between the sterilization department and the operating theatre. This topic is of particular interest in view of hospitals' current attempts to cut costs by outsourcing sterilization tasks. Oftentimes, outsourcing implies placing the sterilization unit at a larger distance, hence introducing a longer logistic loop, which may result in lower instrument availability and higher cost. This paper discusses the optimization problems that have to be solved when redesigning processes so as to improve material availability and reduce cost. We consider changing the logistic management principles, the use of visibility information, and optimizing the composition of the nets of sterile materials.

  20. Optimal Design and Operation of Permanent Irrigation Systems

    NASA Astrophysics Data System (ADS)

    Oron, Gideon; Walker, Wynn R.

    1981-01-01

    Solid-set pressurized irrigation system design and operation are studied with optimization techniques to determine the minimum-cost distribution system. The principle of the analysis is to divide the irrigation system into subunits in such a manner that the trade-offs among energy, piping, and equipment costs are selected at the minimum-cost point. The optimization procedure involves a non-linear, mixed-integer approach capable of achieving a variety of optimal solutions, leading to significant conclusions with regard to the design and operation of the system. Factors investigated include field geometry, the effect of pressure head, consumptive use rates, smaller flow rates in the pipe system, and outlet (sprinkler or emitter) discharge.
