Quality and Growth Implications of Incremental Costing Models for Distance Education Units
ERIC Educational Resources Information Center
Crawford, C. B.; Gould, Lawrence V.; King, Dennis; Parker, Carl
2010-01-01
The purpose of this article is to explore the quality and growth implications that emerge from various incremental costing models applied to distance education units. Prior research on costing models is reviewed, and three competing costing models useful in the current distance education environment are discussed. Specifically, the simple costing model, unit…
A case-mix classification system for explaining healthcare costs using administrative data in Italy.
Corti, Maria Chiara; Avossa, Francesco; Schievano, Elena; Gallina, Pietro; Ferroni, Eliana; Alba, Natalia; Dotto, Matilde; Basso, Cristina; Netti, Silvia Tiozzo; Fedeli, Ugo; Mantoan, Domenico
2018-03-04
The Italian National Health Service (NHS) provides universal coverage to all citizens, granting primary and hospital care with a copayment system for outpatient and drug services. Financing of Local Health Trusts (LHTs) is based on a capitation system adjusted only for age, gender and area of residence. We applied a risk-adjustment system (Johns Hopkins Adjusted Clinical Groups System, ACG® System) in order to explain health care costs using routinely collected administrative data in the Veneto Region (North-eastern Italy). All residents in the Veneto Region were included in the study. The ACG system was applied to classify the regional population based on the following information sources for the year 2015: Hospital Discharges, Emergency Room visits, Chronic disease registry for copayment exemptions, ambulatory visits, medications, the Home care database, and drug prescriptions. Simple linear regressions were used to contrast an age-gender model to models incorporating more comprehensive risk measures aimed at predicting health care costs. A simple age-gender model explained only 8% of the variance of 2015 total costs. Adding diagnoses-related variables provided a 23% increase, while pharmacy based variables provided an additional 17% increase in explained variance. The adjusted R-squared of the comprehensive model was 6 times that of the simple age-gender model. ACG System provides substantial improvement in predicting health care costs when compared to simple age-gender adjustments. Aging itself is not the main determinant of the increase of health care costs, which is better explained by the accumulation of chronic conditions and the resulting multimorbidity. Copyright © 2018. Published by Elsevier B.V.
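The regression comparison described above can be sketched with synthetic data. Everything below (population size, coefficients, the chronic-condition counts) is an illustrative invention, not the Veneto figures; it only demonstrates why adding diagnosis-based variables raises explained variance over an age-gender model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical synthetic population: age, gender, and a chronic-condition
# count drive annual cost. Coefficients are illustrative, not Veneto data.
age = rng.integers(18, 95, n)
gender = rng.integers(0, 2, n)
conditions = rng.poisson(0.02 * age)               # morbidity rises with age
cost = (200 + 5 * age + 50 * gender
        + 900 * conditions + rng.normal(0, 800, n))

def r_squared(features, y):
    """R^2 of an ordinary-least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), features])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_demo = r_squared(np.column_stack([age, gender]), cost)
r2_full = r_squared(np.column_stack([age, gender, conditions]), cost)
# The diagnosis-based variable explains far more of the cost variance.
```

Because the morbidity variable carries most of the cost signal, the age-gender model's R² stays low even though age correlates with condition counts, mirroring the paper's finding that aging per se is not the main cost driver.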
Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J
2014-06-01
We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with cost per error avoided at £79 (US$131). We aimed to estimate cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches 59 % probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
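The ICER arithmetic behind these results can be sketched as follows, using the deterministic per-practice figures quoted above (the small difference from the published −£3,037 reflects rounding in the reported inputs):

```python
def icer(d_cost, d_qaly):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY.

    Returns (ratio, dominant); 'dominant' is True when the intervention
    is both cheaper (d_cost < 0) and more effective (d_qaly > 0).
    """
    ratio = d_cost / d_qaly if d_qaly != 0 else float("inf")
    return ratio, (d_cost < 0 and d_qaly > 0)

# Deterministic per-practice figures quoted above: PINCER saved 2,679
# pounds and gained 0.81 QALYs versus simple feedback.
ratio, dominant = icer(d_cost=-2679.0, d_qaly=0.81)
```

A negative ratio with dominance means the intervention both saves money and adds QALYs, which is why PINCER "dominates" simple feedback despite the tiny probabilistic differences.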
Simple Elasticity Modeling and Failure Prediction for Composite Flexbeams
NASA Technical Reports Server (NTRS)
Makeev, Andrew; Armanios, Erian; OBrien, T. Kevin (Technical Monitor)
2001-01-01
A simple 2D boundary element analysis, suitable for developing cost effective models for tapered composite laminates, is presented. Constant stress and displacement elements are used. Closed-form fundamental solutions are derived. Numerical results are provided for several configurations to illustrate the accuracy of the model.
A comprehensive cost model for NASA data archiving
NASA Technical Reports Server (NTRS)
Green, J. L.; Klenk, K. F.; Treinish, L. A.
1990-01-01
A simple archive cost model has been developed to help predict NASA's archiving costs. The model covers data management activities from the beginning of the mission through launch, acquisition, and support of retrospective users by the long-term archive; it is capable of determining the life cycle costs for archived data depending on how the data need to be managed to meet user requirements. The model, which currently contains 48 equations with a menu-driven user interface, is available for use on an IBM PC or AT.
Software Development Cost Estimation Executive Summary
NASA Technical Reports Server (NTRS)
Hihn, Jairus M.; Menzies, Tim
2006-01-01
Identify simple fully validated cost models that provide estimation uncertainty with cost estimate. Based on COCOMO variable set. Use machine learning techniques to determine: a) Minimum number of cost drivers required for NASA domain based cost models; b) Minimum number of data records required and c) Estimation Uncertainty. Build a repository of software cost estimation information. Coordinating tool development and data collection with: a) Tasks funded by PA&E Cost Analysis; b) IV&V Effort Estimation Task and c) NASA SEPG activities.
Modal cost analysis for simple continua
NASA Technical Reports Server (NTRS)
Hu, A.; Skelton, R. E.; Yang, T. Y.
1988-01-01
The most popular finite element codes are based upon appealing theories of convergence of modal frequencies. For example, the popularity of cubic elements for beam-like structures is due to the rapid convergence of modal frequencies and stiffness properties. However, for those problems in which the primary consideration is the accuracy of response of the structure at specified locations, it is more important to obtain accuracy in the modal costs than in the modal frequencies. The modal cost represents the contribution of a mode in the norm of the response vector. This paper provides a complete modal cost analysis for simple continua such as beam-like structures. Upper bounds are developed for mode truncation errors in the model reduction process and modal cost analysis dictates which modes to retain in order to reduce the model for control design purposes.
Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach
NASA Astrophysics Data System (ADS)
Xiao, T.
2012-12-01
One of the most important components in urban land cover mapping is mapping accuracy assessment. Many statistical models have been developed to help design simple schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sampling size is crucial to implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design and sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
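The sample-size/cost trade-off can be sketched with the standard proportion-estimation formula; the per-site cost rates below are hypothetical placeholders, not the study's values:

```python
import math

def sample_size(p=0.5, margin=0.05, z=1.96):
    """Samples needed to estimate a proportion (e.g. map accuracy)
    to within +/- margin at the confidence level implied by z."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

def survey_cost(n, travel=12.0, field=8.0, lab=5.0):
    """Total cost = transportation + field collection + lab analysis.
    Per-site rates are hypothetical placeholders."""
    return n * (travel + field + lab)

n = sample_size(margin=0.05)        # 385 sites at ~95% confidence
tight = sample_size(margin=0.025)   # halving the margin ~quadruples n
budget = survey_cost(n)
```

The quadratic blow-up of `n` as the margin shrinks is exactly why a cost component matters: each increment of accuracy gets progressively more expensive.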
The use of cluster analysis techniques in spaceflight project cost risk estimation
NASA Technical Reports Server (NTRS)
Fox, G.; Ebbeler, D.; Jorgensen, E.
2003-01-01
Project cost risk is the uncertainty in final project cost, contingent on initial budget, requirements and schedule. For a proposed mission, a dynamic simulation model relying for some of its input on a simple risk elicitation is used to identify and quantify systemic cost risk.
Long-range planning cost model for support of future space missions by the deep space network
NASA Technical Reports Server (NTRS)
Sherif, J. S.; Remer, D. S.; Buchanan, H. R.
1990-01-01
A simple model is suggested to do long-range planning cost estimates for Deep Space Network (DSN) support of future space missions. The model estimates total DSN preparation costs and the annual distribution of these costs for long-range budgetary planning. The cost model is based on actual DSN preparation costs from four space missions: Galileo, Voyager (Uranus), Voyager (Neptune), and Magellan. The model was tested against the four projects and gave cost estimates that range from 18 percent above the actual total preparation costs of the projects to 25 percent below. The model was also compared to two other independent projects: Viking and Mariner Jupiter/Saturn (MJS later became Voyager). The model gave cost estimates that range from 2 percent (for Viking) to 10 percent (for MJS) below the actual total preparation costs of these missions.
ERIC Educational Resources Information Center
Ehrmann, Stephen C.; Milam, John H., Jr.
2003-01-01
This volume describes for educators how to create simple models of the full costs of educational innovations, including the costs for time devoted to the activity, space needed for the activity, etc. Examples come from educational uses of technology in higher education in the United States and China. Real case studies illustrate the method in use:…
The Location of Sales Offices and the Attraction of Cities.
ERIC Educational Resources Information Center
Holmes, Thomas J.
2005-01-01
This paper examines how manufacturers locate sales offices across cities. Sales office costs are assumed to have four components: a fixed cost, a frictional cost for out-of-town sales, a cost-reducing knowledge spillover related to city size, and an idiosyncratic match quality for each firm-city pair. A simple theoretical model is developed and is…
An Inexpensive Robotics Laboratory.
ERIC Educational Resources Information Center
Inigo, R. M.; Angulo, J. M.
1985-01-01
Describes the design and implementation of a simple robot manipulator. The manipulator has three degrees of freedom and is controlled by a general purpose microcomputer. The basis for the manipulator (which costs under $100) is a simple working model of a crane. (Author/JN)
2017-02-08
cost benefit of the technology. 7.1 COST MODEL A simple cost model for the technology is presented so that a remediation professional can understand… reporting costs. The benefit of the qPCR analyses is that they allow the user to determine if aerobic cometabolism is possible. Because the PHE and… of Chlorinated Ethylenes
Input-Output Modeling and Control of the Departure Process of Congested Airports
NASA Technical Reports Server (NTRS)
Pujet, Nicolas; Delcaire, Bertrand; Feron, Eric
2003-01-01
A simple queueing model of busy airport departure operations is proposed. This model is calibrated and validated using available runway configuration and traffic data. The model is then used to evaluate preliminary control schemes aimed at alleviating departure traffic congestion on the airport surface. The potential impact of these control strategies on direct operating costs, environmental costs and overall delay is quantified and discussed.
Yu, Yangyang R; Abbas, Paulette I; Smith, Carolyn M; Carberry, Kathleen E; Ren, Hui; Patel, Binita; Nuchtern, Jed G; Lopez, Monica E
2016-12-01
As reimbursement programs shift to value-based payment models emphasizing quality and efficient healthcare delivery, there exists a need to better understand process management to unearth true costs of patient care. We sought to identify cost-reduction opportunities in simple appendicitis management by applying a time-driven activity-based costing (TDABC) methodology to this high-volume surgical condition. Process maps were created using medical record time stamps. Labor capacity cost rates were calculated using national median physician salaries, weighted nurse-patient ratios, and hospital cost data. Consumable costs for supplies, pharmacy, laboratory, and food were derived from the hospital general ledger. Time-driven activity-based costing resulted in precise per-minute calculation of personnel costs. Highest costs were in the operating room ($747.07), hospital floor ($388.20), and emergency department ($296.21). Major contributors to length of stay were emergency department evaluation (270min), operating room availability (395min), and post-operative monitoring (1128min). The TDABC model led to $1712.16 in personnel costs and $1041.23 in consumable costs for a total appendicitis cost of $2753.39. Inefficiencies in healthcare delivery can be identified through TDABC. Triage-based standing delegation orders, advanced practice providers, and same day discharge protocols are proposed cost-reducing interventions to optimize value-based care for simple appendicitis. II. Copyright © 2016 Elsevier Inc. All rights reserved.
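The TDABC mechanics (capacity cost rates applied to process-map minutes) can be sketched as follows. The salaries, capacities, and step times below are hypothetical stand-ins, not the paper's calibrated inputs:

```python
def capacity_cost_rate(annual_salary, minutes_per_year):
    """TDABC capacity cost rate: per-minute cost of a resource."""
    return annual_salary / minutes_per_year

def process_cost(steps):
    """Sum of (minutes x per-minute rate) over mapped process steps."""
    return sum(minutes * rate for minutes, rate in steps)

# Hypothetical inputs: a surgeon at $400k/yr and a nurse at $80k/yr,
# each with 100,000 practical minutes of annual capacity.
surgeon = capacity_cost_rate(400_000, 100_000)   # $4.00/min
nurse = capacity_cost_rate(80_000, 100_000)      # $0.80/min

appendectomy = process_cost([
    (45, surgeon),    # operating-room time
    (270, nurse),     # emergency-department evaluation
    (60, nurse),      # post-operative monitoring checks
])
```

Because every minute is priced, the model immediately shows where interventions pay off: shaving the long, nurse-staffed waits yields savings even though the per-minute rate there is lowest.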
Oda, Hitomi; Miyauchi, Akira; Ito, Yasuhiro; Sasai, Hisanori; Masuoka, Hiroo; Yabuta, Tomonori; Fukushima, Mitsuhiro; Higashiyama, Takuya; Kihara, Minoru; Kobayashi, Kaoru; Miya, Akihiro
2017-01-30
The incidence of thyroid cancer is increasing rapidly in many countries, raising the societal costs of thyroid cancer care. We previously reported that active surveillance of low-risk papillary microcarcinoma had fewer unfavorable events than immediate surgery, while the oncological outcomes of the two managements were similarly excellent. Here we calculated the medical costs of these two managements. We created a model of the flow of these managements, based on our previous study. The flow and costs include the steps of diagnosis, surgery, prescription of medicine, recurrence, salvage surgery for recurrence, and care for 10 years after diagnosis. The costs were calculated according to typical clinical practice at Kuma Hospital under the Japanese Health Care Insurance System. If conversion surgeries were not considered, the 'simple cost' of active surveillance for 10 years was 167,780 yen/patient. If there were no recurrences, the 'simple cost' of immediate surgery was 794,770 to 1,086,070 yen/patient, depending on the type of surgery and postoperative medication. The 'simple cost' of surgery was thus 4.7 to 6.5 times that of surveillance. When conversion surgeries and recurrences were considered, the 'total cost' of active surveillance for 10 years became 225,695 yen/patient. When recurrences were considered, the 'total cost' of immediate surgery was 928,094 yen/patient. At Kuma Hospital in Japan, the 10-year total cost of immediate surgery was therefore 4.1 times as expensive as active surveillance.
ERIC Educational Resources Information Center
Norris, Robert G.
A cost-effectiveness model is presented for academic administrators to use in making evaluation and planning decisions related directly to the instructional activities of academic departments. The advantages seen in the model are that it is simple and flexible, concentrates on balancing income generated by the department to expenses incurred, and…
A simple rule for the costs of vigilance: empirical evidence from a social forager.
Cowlishaw, Guy; Lawes, Michael J.; Lightbody, Margaret; Martin, Alison; Pettifor, Richard; Rowcliffe, J. Marcus
2004-01-01
It is commonly assumed that anti-predator vigilance by foraging animals is costly because it interrupts food searching and handling time, leading to a reduction in feeding rate. When food handling does not require visual attention, however, a forager may handle food while simultaneously searching for the next food item or scanning for predators. We present a simple model of this process, showing that when the length of such compatible handling time Hc is long relative to search time S, specifically Hc/S > 1, it is possible to perform vigilance without a reduction in feeding rate. We test three predictions of this model regarding the relationships between feeding rate, vigilance and the Hc/S ratio, with data collected from a wild population of social foragers (samango monkeys, Cercopithecus mitis erythrarchus). These analyses consistently support our model, including our key prediction: as Hc/S increases, the negative relationship between feeding rate and the proportion of time spent scanning becomes progressively shallower. This pattern is more strongly driven by changes in median scan duration than scan frequency. Our study thus provides a simple rule that describes the extent to which vigilance can be expected to incur a feeding rate cost. PMID:15002768
Courville, Xan F; Tomek, Ivan M; Kirkland, Kathryn B; Birhle, Marian; Kantor, Stephen R; Finlayson, Samuel R G
2012-02-01
To perform a cost-effectiveness analysis to evaluate preoperative use of mupirocin in patients with total joint arthroplasty (TJA). Simple decision tree model. Outpatient TJA clinical setting. Hypothetical cohort of patients with TJA. A simple decision tree model compared 3 strategies in a hypothetical cohort of patients with TJA: (1) obtaining preoperative screening cultures for all patients, followed by administration of mupirocin to patients with cultures positive for Staphylococcus aureus; (2) providing empirical preoperative treatment with mupirocin for all patients without screening; and (3) providing no preoperative treatment or screening. We assessed the costs and benefits over a 1-year period. Data inputs were obtained from a literature review and from our institution's internal data. Utilities were measured in quality-adjusted life-years, and costs were measured in 2005 US dollars. Incremental cost-effectiveness ratio. The treat-all and screen-and-treat strategies both had lower costs and greater benefits, compared with the no-treatment strategy. Sensitivity analysis revealed that this result is stable even if the cost of mupirocin was over $100 and the cost of SSI ranged between $26,000 and $250,000. Treating all patients remains the best strategy when the prevalence of S. aureus carriers and surgical site infection is varied across plausible values as well as when the prevalence of mupirocin-resistant strains is high. Empirical treatment with mupirocin ointment or use of a screen-and-treat strategy before TJA is performed is a simple, safe, and cost-effective intervention that can reduce the risk of SSI. S. aureus decolonization with nasal mupirocin for patients undergoing TJA should be considered. Level II, economic and decision analysis.
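The decision-tree comparison reduces to an expected-cost calculation per strategy. The probabilities and costs below are hypothetical placeholders, chosen only to illustrate the qualitative result that empirical treatment can dominate no treatment:

```python
def expected_cost(strategy_cost, p_ssi, ssi_cost):
    """Expected one-year cost of a strategy in a simple decision tree:
    upfront strategy cost plus probability-weighted infection cost."""
    return strategy_cost + p_ssi * ssi_cost

# Hypothetical inputs, not the paper's calibrated values.
SSI_COST = 60_000.0
no_treatment = expected_cost(0.0, 0.020, SSI_COST)
treat_all = expected_cost(25.0, 0.012, SSI_COST)
screen_treat = expected_cost(40.0, 0.013, SSI_COST)

best = min([("no treatment", no_treatment),
            ("treat all", treat_all),
            ("screen and treat", screen_treat)], key=lambda kv: kv[1])
```

The structure explains the paper's sensitivity result: because an SSI is so expensive relative to mupirocin, even a small absolute reduction in infection probability outweighs treating everyone.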
Economic decision making and the application of nonparametric prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2007-01-01
Sustained increases in energy prices have focused attention on gas resources in low permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are large. Planning and development decisions for extraction of such resources must be area-wide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm the decision to enter such plays depends on reconnaissance level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional scale cost functions. The context of the worked example is the Devonian Antrim shale gas play, Michigan Basin. One finding relates to selection of the resource prediction model to be used with economic models. Models which can best predict aggregate volume over larger areas (many hundreds of sites) may lose granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined by extraneous factors. The paper also shows that when these simple prediction models are used to strategically order drilling prospects, the gain in gas volume over volumes associated with simple random site selection amounts to 15 to 20 percent. It also discusses why the observed benefit of updating predictions from results of new drilling, as opposed to following static predictions, is somewhat smaller. Copyright 2007, Society of Petroleum Engineers.
Simple Levelized Cost of Energy (LCOE) Calculator Documentation | Energy
Analysis | NREL. Documentation for NREL's Simple Levelized Cost of Energy (LCOE) Calculator: adjust the sliders to suitable values for each of the cost and performance inputs.
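A minimal LCOE calculation of the kind such a calculator performs might look like this; the discount rate, lifetime, and wind-like inputs are assumptions for illustration, not NREL defaults:

```python
def crf(rate, years):
    """Capital recovery factor: converts an upfront cost into an
    equivalent level annual payment at the given discount rate."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def simple_lcoe(capital, fixed_om, capacity_factor, rate=0.07, years=20):
    """$/kWh: annualized capital plus fixed O&M ($/kW-yr), divided by
    annual energy per kW of capacity (8760 h x capacity factor)."""
    annual_energy_kwh = 8760 * capacity_factor
    return (capital * crf(rate, years) + fixed_om) / annual_energy_kwh

# Illustrative wind-like inputs: $1,500/kW capital, $40/kW-yr O&M, CF 0.35.
lcoe = simple_lcoe(1500.0, 40.0, 0.35)
```

The structure makes the sensitivity obvious: LCOE scales inversely with capacity factor and roughly linearly with capital cost and discount rate.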
Renewable Energy Resources Portfolio Optimization in the Presence of Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behboodi, Sahand; Chassin, David P.; Crawford, Curran
In this paper we introduce a simple cost model of renewable integration and demand response that can be used to determine the optimal mix of generation and demand response resources. The model includes production cost, demand elasticity, uncertainty costs, capacity expansion costs, retirement and mothballing costs, and wind variability impacts to determine the hourly cost and revenue of electricity delivery. The model is tested on the 2024 planning case for British Columbia and we find that cost is minimized with about 31% renewable generation. We also find that demand response does not have a significant impact on cost at the hourly level. The results suggest that the optimal level of renewable resource is not sensitive to a carbon tax or demand elasticity, but it is highly sensitive to the renewable resource installation cost.
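The optimization described above can be sketched as a one-dimensional sweep of a cost curve over the renewable share. The coefficients are invented, chosen only so the toy curve's minimum lands near the ~31% share the study reports:

```python
def system_cost(share, conventional=60.0, renewable=40.0, integration=32.0):
    """Toy $/MWh curve: renewable energy displaces costlier conventional
    generation, while integration costs (uncertainty, capacity expansion,
    wind variability) grow with the square of the renewable share.
    Coefficients are invented, not the BC 2024 planning-case inputs."""
    return ((1 - share) * conventional + share * renewable
            + integration * share ** 2)

shares = [i / 100 for i in range(101)]
best = min(shares, key=system_cost)   # minimum near a ~31% share
```

The convex integration term is what produces an interior optimum: below it, cheap renewable energy wins; above it, variability costs swamp the fuel savings.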
Longevity suppresses conflict in animal societies
Port, Markus; Cant, Michael A.
2013-01-01
Models of social conflict in animal societies generally assume that within-group conflict reduces the value of a communal resource. For many animals, however, the primary cost of conflict is increased mortality. We develop a simple inclusive fitness model of social conflict that takes this cost into account. We show that longevity substantially reduces the level of within-group conflict, which can lead to the evolution of peaceful animal societies if relatedness among group members is high. By contrast, peaceful outcomes are never possible in models where the primary cost of social conflict is resource depletion. Incorporating mortality costs into models of social conflict can explain why many animal societies are so remarkably peaceful despite great potential for conflict. PMID:24088564
NASA Astrophysics Data System (ADS)
Song, Seok-Jeong; Kim, Tae-Il; Kim, Youngmi; Nam, Hyoungsik
2018-05-01
Recently, a simple, sensitive, and low-cost fluorescent indicator has been proposed to determine water contents in organic solvents, drugs, and foodstuffs. The change of water content leads to the change of the indicator's fluorescence color under ultraviolet (UV) light. Whereas the water content values could be estimated from the spectrum obtained by a bulky and expensive spectrometer in the previous research, this paper demonstrates a simple and low-cost camera-based water content measurement scheme with the same fluorescent water indicator. Water content is calculated over the range of 0-30% by quadratic polynomial regression models with color information extracted from the captured images of samples. Especially, several color spaces such as RGB, xyY, L∗a∗b∗, u′v′, HSV, and YCBCR have been investigated to establish the optimal color information features over both linear and nonlinear RGB data given by a camera before and after gamma correction. In the end, a 2nd order polynomial regression model along with HSV in a linear domain achieves the minimum mean square error of 1.06% for a 3-fold cross validation method. Additionally, the resultant water content estimation model is implemented and evaluated in an off-the-shelf Android-based smartphone.
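The regression step can be sketched with `numpy.polyfit`. The feature/water-content calibration pairs below are fabricated for illustration, not the paper's measurements:

```python
import numpy as np

# Fabricated calibration pairs: a hue-like color feature extracted from
# photographs of the indicator versus known water content (%).
feature = np.array([0.10, 0.18, 0.27, 0.35, 0.44, 0.52, 0.61])
water = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])

# 2nd-order polynomial regression, the model order chosen in the paper.
coeffs = np.polyfit(feature, water, deg=2)
predict = np.poly1d(coeffs)

rmse = float(np.sqrt(np.mean((predict(feature) - water) ** 2)))
```

In practice the fit would be validated on held-out samples (the paper uses 3-fold cross validation) rather than on the training pairs as done here.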
A-Priori Tuning of Modified Magnussen Combustion Model
NASA Technical Reports Server (NTRS)
Norris, A. T.
2016-01-01
In the application of CFD to turbulent reacting flows, one of the main limitations to predictive accuracy is the chemistry model. Using a full or skeletal kinetics model may provide good predictive ability, however, at considerable computational cost. Adding the ability to account for the interaction between turbulence and chemistry improves the overall fidelity of a simulation but adds to this cost. An alternative is the use of simple models, such as the Magnussen model, which has negligible computational overhead, but lacks general predictive ability except for cases that can be tuned to the flow being solved. In this paper, a technique will be described that allows the tuning of the Magnussen model for an arbitrary fuel and flow geometry without the need to have experimental data for that particular case. The tuning is based on comparing the results of the Magnussen model and full finite-rate chemistry when applied to perfectly and partially stirred reactor simulations. In addition, a modification to the Magnussen model is proposed that allows the upper kinetic limit for the reaction rate to be set, giving better physical agreement with full kinetic mechanisms. This procedure allows a simple reacting model to be used in a predictive manner, and affords significant savings in computational costs for simulations.
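A sketch of the eddy-dissipation (Magnussen) rate with the kinetic cap described above, assuming the standard model form; the constants A and B, the species mass fractions, and the turbulence frequency are illustrative values:

```python
def magnussen_rate(eps_over_k, fuel, oxid, prod, s, A=4.0, B=0.5):
    """Standard eddy-dissipation (Magnussen) rate: mixing-limited,
    scaled by the turbulence frequency eps/k and the scarcest species.
    s is the stoichiometric oxidizer-to-fuel mass ratio."""
    return A * eps_over_k * min(fuel, oxid / s, B * prod / (1 + s))

def capped_rate(mixing_rate, kinetic_limit):
    """Modification discussed above: never let the mixing-limited rate
    exceed an upper kinetic limit for the reaction rate."""
    return min(mixing_rate, kinetic_limit)

r_mix = magnussen_rate(eps_over_k=50.0, fuel=0.05, oxid=0.20,
                       prod=0.10, s=3.5)
r = capped_rate(r_mix, kinetic_limit=5.0)   # cap not binding here
```

The cap matters in regions of intense turbulence, where eps/k is large and the uncapped mixing rate would otherwise exceed what chemistry can physically deliver.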
[Cost variation in care groups?]
Mohnen, S M; Molema, C C M; Steenbeek, W; van den Berg, M J; de Bruin, S R; Baan, C A; Struijs, J N
2017-01-01
Is the simple mean of the costs per diabetes patient a suitable tool with which to compare care groups? Do the total costs of care per diabetes patient really give the best insight into care group performance? Cross-sectional, multi-level study. The 2009 insurance claims of 104,544 diabetes patients managed by care groups in the Netherlands were analysed. The data were obtained from Vektis care information centre. For each care group we determined the mean costs per patient of all the curative care and diabetes-specific hospital care using the simple mean method, then repeated it using the 'generalized linear mixed model'. We also calculated for which proportion the differences found could be attributed to the care groups themselves. The mean costs of the total curative care per patient were €3,092 - €6,546; there were no significant differences between care groups. The mixed model method resulted in less variation (€2,884 - €3,511), and there were a few significant differences. We found a similar result for diabetes-specific hospital care and the ranking position of the care groups proved to be dependent on the method used. The care group effect was limited, although it was greater in the diabetes-specific hospital costs than in the total costs of curative care (6.7% vs. 0.4%). The method used to benchmark care groups carries considerable weight. Simply stated, determining the mean costs of care (still often done) leads to an overestimation of the differences between care groups. The generalized linear mixed model is more accurate and yields better comparisons. However, the fact remains that 'total costs of care' is a faulty indicator since care groups have little impact on them. A more informative indicator is 'costs of diabetes-specific hospital care' as these costs are more influenced by care groups.
Forecasting Pell Program Applications Using Structural Aggregate Models.
ERIC Educational Resources Information Center
Cavin, Edward S.
1995-01-01
Demand for Pell Grant financial aid has become difficult to predict when using the current microsimulation model. This paper proposes an alternative model that uses aggregate data (based on individuals' microlevel decisions and macrodata on family incomes, college costs, and opportunity wages) and avoids some limitations of simple linear models.…
Proton facility economics: the importance of "simple" treatments.
Johnstone, Peter A S; Kerstiens, John; Helsper, Richard
2012-08-01
Given the cost and debt incurred to build a modern proton facility, impetus exists to minimize treatment of patients with complex setups because of their slower throughput. The aim of this study was to determine how many "simple" cases are necessary given different patient loads simply to recoup construction costs and debt service, without beginning to cover salaries, utilities, beam costs, and so on. Simple cases are ones that can be performed quickly because of an easy setup for the patient or because the patient is to receive treatment to just one or two fields. A "standard" construction cost and debt for 1, 3, and 4 gantry facilities were calculated from public documents of facilities built in the United States, with 100% of the construction funded through standard 15-year financing at 5% interest. Clinical best case (that each room was completely scheduled with patients over a 14-hour workday) was assumed, and a statistical analysis was modeled with debt, case mix, and payer mix moving independently. Treatment times and reimbursement data from the investigators' facility for varying complexities of patients were extrapolated for varying numbers treated daily. Revenue assumptions of $X per treatment were assumed both for pediatric cases (a mix of Medicaid and private payer) and state Medicare simple case rates. Private payer reimbursement averages $1.75X per treatment. The number of simple patients required daily to cover construction and debt service costs was then derived. A single gantry treating only complex or pediatric patients would need to apply 85% of its treatment slots simply to service debt. However, that same room could cover its debt treating 4 hours of simple patients, thus opening more slots for complex and pediatric patients. A 3-gantry facility treating only complex and pediatric cases would not have enough treatment slots to recoup construction and debt service costs at all. 
For a 4-gantry center, focusing on complex and pediatric cases alone, there would not be enough treatment slots to cover even 60% of debt service. Personnel and recurring costs and profit further reduce the business case for performing more complex patients. Debt is not variable with capacity. Absent philanthropy, financing a modern proton center requires treating a case load emphasizing simple patients even before operating costs and any profit are achieved. Copyright © 2012 American College of Radiology. Published by Elsevier Inc. All rights reserved.
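The debt-service arithmetic reduces to a loan amortization plus a slots calculation. The $30M principal and $1,000-per-fraction revenue below are placeholders, since the abstract withholds the actual $X rate:

```python
def annual_debt_service(principal, rate=0.05, years=15):
    """Level annual payment on fully amortized construction debt
    (15-year financing at 5%, as in the study's assumptions)."""
    return principal * rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def simple_slots_needed(principal, revenue_per_treatment, treatment_days=250):
    """Daily 'simple' treatment slots whose revenue just covers debt
    service (before salaries, utilities, beam costs, or profit)."""
    daily_debt = annual_debt_service(principal) / treatment_days
    return daily_debt / revenue_per_treatment

# Placeholder single-gantry build: $30M financed at 5% over 15 years,
# $1,000 per simple fraction (the abstract's $X rate is undisclosed).
slots = simple_slots_needed(30_000_000, 1_000.0)   # ~11.6 slots/day
```

Because debt service is fixed while throughput is capped by room-hours, every complex case that displaces several simple ones pushes the break-even point further out of reach.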
A Simple "in Vitro" Culture of Freshwater Prawn Embryos for Laboratory Investigations
ERIC Educational Resources Information Center
Porntrai, Supaporn; Damrongphol, Praneet
2008-01-01
Giant freshwater prawn ("Macrobrachium rosenbergii" De Man) embryos can be cultured "in vitro" to hatching in 15% (v/v) artificial seawater (ASW). This technique can be applied as a bioassay for testing toxicity or for the effects of various substances on embryo development and can be used as a simple and low-cost model for…
Development of a solar-powered residential air conditioner: Economic analysis
NASA Technical Reports Server (NTRS)
1975-01-01
The results of investigations aimed at the development of cost models to be used in the economic assessment of Rankine-powered air conditioning systems for residential applications are summarized. The rationale used in the development of the cost model was to: (1) collect cost data on complete systems and on the major equipment used in these systems; (2) reduce these data and establish relationships between cost and other engineering parameters such as weight, size, power level, etc.; and (3) derive simple correlations from which cost-to-the-user can be calculated from performance requirements. The equipment considered in the survey included heat exchangers, fans, motors, and turbocompressors. This kind of hardware represents more than 2/3 of the total cost of conventional air conditioners.
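Step (3), deriving simple cost correlations, is often a power-law fit of cost against an engineering parameter such as weight. A hedged sketch of that fitting step, with invented data points rather than the report's survey data:

```python
import math

# Illustrative least-squares fit of a power-law cost correlation
# cost = a * weight^b in log-log space. Data points are invented.

def fit_power_law(weights, costs):
    xs = [math.log(w) for w in weights]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b

weights = [10, 20, 40, 80]        # e.g. heat-exchanger weight, kg
costs = [120, 205, 350, 600]      # e.g. equipment cost, dollars
a, b = fit_power_law(weights, costs)
predicted = a * 50 ** b           # cost estimate for a 50 kg unit
```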
Evaluating the cost effectiveness of environmental projects: Case studies in aerospace and defense
NASA Technical Reports Server (NTRS)
Shunk, James F.
1995-01-01
Using the replacement technology of high pressure waterjet decoating systems as an example, a simple methodology is presented for developing a cost effectiveness model. The model uses a four-step process to formulate an economic justification designed for presentation to decision makers as an assessment of the value of the replacement technology over conventional methods. Three case studies from major U.S. and international airlines are used to illustrate the methodology and resulting model. Tax and depreciation impacts are also presented as potential additions to the model.
The Impact of Uncertainty and Irreversibility on Investments in Online Learning
ERIC Educational Resources Information Center
Oslington, Paul
2004-01-01
Uncertainty and irreversibility are central to online learning projects, but have been neglected in the existing educational cost-benefit analysis literature. This paper builds some simple illustrative models of the impact of irreversibility and uncertainty, and shows how different types of cost and demand uncertainty can have substantial impacts…
Webcam camera as a detector for a simple lab-on-chip time based approach.
Wongwilai, Wasin; Lapanantnoppakhun, Somchai; Grudpan, Supara; Grudpan, Kate
2010-05-15
A modified webcam used as a small, low-cost detector was demonstrated with a simple lab-on-chip reactor, allowing real-time continuous monitoring of the reaction zone. Acid-base neutralization with phenolphthalein indicator was used as a model reaction. The fading of the indicator's pink color as the acidic solution diffused into the basic solution zone was recorded as the change in the red, green, and blue color channels (%RGB). The change was related to acid concentration. A low-cost, portable, semi-automated analysis system was achieved.
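Reducing the detector signal to channel percentages is simple image arithmetic; the sketch below is generic, not the authors' code, and the pixel values are invented:

```python
# Hedged sketch: averaging a region of interest from a webcam frame
# into percentage red/green/blue intensities. Pixels are (R, G, B)
# tuples as produced by any imaging library.

def percent_rgb(pixels):
    """Mean R, G, B of a pixel list, each as a percentage of their sum."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    total = r + g + b
    return (100 * r / total, 100 * g / total, 100 * b / total)

# Pink (red-dominant) region vs. a faded, near-grey region:
pink = percent_rgb([(230, 120, 160), (225, 115, 155)])
faded = percent_rgb([(200, 198, 199)])
```

As the pink fades, the red-channel percentage falls toward one third, which is what relates the %RGB trace to acid concentration.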
Applications of a stump-to-mill computer model to cable logging planning
Chris B. LeDoux
1986-01-01
Logging cost simulators and data from logging cost studies have been assembled and converted into a series of simple equations that can be used to estimate the stump-to-mill cost of cable logging in mountainous terrain of the Eastern United States. These equations are based on the use of two small and four medium-sized cable yarders and are applicable for harvests of...
Algebraic Turbulence-Chemistry Interaction Model
NASA Technical Reports Server (NTRS)
Norris, Andrew T.
2012-01-01
The results of a series of Perfectly Stirred Reactor (PSR) and Partially Stirred Reactor (PaSR) simulations are compared to each other over a wide range of operating conditions. It is found that the PaSR results can be simulated by a PSR solution with just an adjusted chemical reaction rate. A simple expression has been developed that gives the required change in reaction rate for a PSR solution to simulate the PaSR results. This expression is the basis of a simple turbulence-chemistry interaction model. The interaction model that has been developed is intended for use with simple one-step global reaction mechanisms and for steady-state flow simulations. Due to the simplicity of the model there is very little additional computational cost in adding it to existing CFD codes.
Market frictions: A unified model of search costs and switching costs
Wilson, Chris M.
2012-01-01
It is well known that search costs and switching costs can create market power by constraining the ability of consumers to change suppliers. While previous research has examined each cost in isolation, this paper demonstrates the benefits of examining the two types of friction in unison. The paper shows how subtle distinctions between the two costs can provide important differences in their effects upon consumer behaviour, competition and welfare. In addition, the paper also illustrates a simple empirical methodology for estimating separate measures of both costs, while demonstrating a potential bias that can arise if only one cost is considered. PMID:25550674
Using a crowdsourced approach for monitoring water level in a remote Kenyan catchment
NASA Astrophysics Data System (ADS)
Weeser, Björn; Jacobs, Suzanne; Rufino, Mariana; Breuer, Lutz
2017-04-01
Hydrological models and effective water management strategies succeed only if they are based on reliable data. Decreasing costs of technical equipment lower the barrier to creating comprehensive monitoring networks and allow measurements at high spatial and temporal resolution. However, such networks depend on specialised equipment, supervision, and maintenance, producing high running expenses. This becomes particularly challenging for remote areas, and low-income countries often lack the capacity to run such networks. Delegating simple measurements to citizens living close to relevant monitoring points may reduce costs and increase public awareness. Here we present our experiences with a crowdsourced approach for monitoring water levels in remote catchments in Kenya. We established a low-cost system consisting of thirteen simple water level gauges and a Raspberry Pi based SMS server for data handling. Volunteers read the water level and transmit their records in a simple text message. These messages are automatically processed, and real-time feedback on data quality is given. During the first year, more than 1,200 valid, high-quality records were collected. In summary, these simple techniques for data collection, transmission, and processing created an open platform with the potential to reach volunteers without the need for special equipment. Even though the temporal resolution of measurements cannot be controlled and peak flows might be missed, the data can still be considered a valuable enhancement for developing management strategies and for hydrological modelling.
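The server-side processing step might look like the following sketch. The message format ("&lt;gauge id&gt; &lt;level&gt;") and the plausibility limits are assumptions for illustration, not the project's actual protocol:

```python
import re

# Hedged sketch of an SMS-server parsing step: extract a gauge ID and
# water level from a text message and return either the reading or an
# error string for the real-time feedback message. The gauge IDs and
# plausible ranges below are hypothetical.

GAUGE_RANGES_CM = {"G01": (0, 300), "G02": (0, 500)}

def parse_reading(sms_text):
    """Return (gauge_id, level_cm), or an error string for feedback."""
    m = re.match(r"^\s*([A-Z]\d{2})\s+(\d+(?:\.\d+)?)\s*$", sms_text)
    if not m:
        return "error: message not understood, send e.g. 'G01 142'"
    gauge, level = m.group(1), float(m.group(2))
    if gauge not in GAUGE_RANGES_CM:
        return "error: unknown gauge id"
    lo, hi = GAUGE_RANGES_CM[gauge]
    if not lo <= level <= hi:
        return "error: level outside plausible range, please re-check"
    return (gauge, level)
```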
10 CFR 436.23 - Estimated simple payback time.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Methodology and Procedures for Life Cycle Cost Analyses § 436.23 Estimated simple payback time. The estimated simple payback time is the number of years required for the cumulative value of energy or water cost savings less future non-fuel or non-water costs to equal the investment costs of the building energy or...
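When the annual savings and non-fuel costs are constant, the regulation's definition reduces to simple division; a sketch with illustrative cash flows:

```python
# Sketch of the estimated simple payback computation defined above:
# years until cumulative (energy/water savings minus non-fuel costs)
# equals the investment cost. Cash-flow figures are illustrative.

def simple_payback_years(investment, annual_savings, annual_nonfuel_costs=0.0):
    net = annual_savings - annual_nonfuel_costs
    if net <= 0:
        return float("inf")  # savings never recover the investment
    return investment / net

years = simple_payback_years(investment=12000, annual_savings=2500,
                             annual_nonfuel_costs=500)  # -> 6.0
```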
An Analytic Model for the Success Rate of a Robotic Actuator System in Hitting Random Targets.
Bradley, Stuart
2015-11-20
Autonomous robotic systems are increasingly used in a wide range of applications such as precision agriculture, medicine, and the military. These systems share common features, which often include an action by an "actuator" interacting with a target. While simulations and measurements exist for the success rate of hitting targets by some systems, there is a dearth of analytic models that can give insight into, and guidance on the optimization of, new robotic systems. The present paper develops a simple model for estimating the success rate of hitting random targets from a moving platform. The model has two main dimensionless parameters: the ratio of actuator spacing to target diameter, and the ratio of platform distance moved (between actuator "firings") to the target diameter. It is found that regions of parameter space with a specified high success rate are described by simple equations, providing guidance on design. A "cost function" is introduced which, when minimized, optimizes design, operating, and risk-mitigation costs.
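The setting can be explored with a small Monte Carlo sketch (not the paper's analytic model): firing points form a grid with cross-track spacing s and along-track spacing d, both in units of the target diameter, and a hit occurs when some firing point falls within half a diameter of the target centre:

```python
import random
import math

# Hedged Monte Carlo sketch of the hit-rate setting. Only the two
# dimensionless ratios matter, so lengths are in target diameters.

def hit_rate(s, d, trials=20000, seed=1):
    """Fraction of random targets (diameter 1) hit by a firing grid with
    cross-track spacing s and along-track spacing d."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = rng.uniform(0, s)   # target centre within one grid cell
        y = rng.uniform(0, d)
        dx = min(x, s - x)      # distance to nearest firing point, each axis
        dy = min(y, d - y)
        if math.hypot(dx, dy) <= 0.5:
            hits += 1
    return hits / trials

dense = hit_rate(0.5, 0.5)   # spacing half a diameter: every target is hit
sparse = hit_rate(4.0, 4.0)  # spacing four diameters: low success rate
```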
Economic decision making and the application of nonparametric prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2008-01-01
Sustained increases in energy prices have focused attention on gas resources in low-permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are often large. Planning and development decisions for extraction of such resources must be areawide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm, the decision to enter such plays depends on reconnaissance-level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional-scale cost functions. The context of the worked example is the Devonian Antrim-shale gas play in the Michigan basin. One finding relates to selection of the resource prediction model to be used with economic models. Models chosen because they can best predict aggregate volume over larger areas (many hundreds of sites) smooth out granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined arbitrarily by extraneous factors. The analysis shows a 15-20% gain in gas volume when these simple models are applied to order drilling prospects strategically rather than to choose drilling locations randomly. Copyright © 2008 Society of Petroleum Engineers.
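A generic example of the "simple nonparametric local regression" idea, here a Gaussian-kernel Nadaraya-Watson smoother rather than the authors' exact estimator, with invented site data:

```python
import math

# Hedged sketch of kernel-weighted local regression for predicting a
# quantity (e.g. recoverable volume) at an untested site from nearby
# observations. Site coordinates and volumes are invented.

def local_regression(x0, xs, ys, bandwidth):
    """Gaussian-kernel weighted average of observed ys near x0."""
    weights = [math.exp(-0.5 * ((x0 - x) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

sites = [0.0, 1.0, 2.0, 3.0]     # hypothetical drilled-site coordinates
volumes = [1.2, 1.5, 0.9, 1.1]   # observed volumes (arbitrary units)
pred = local_regression(1.5, sites, volumes, bandwidth=0.5)
```

Note how the smoother trades granularity for stability: a wide bandwidth predicts aggregate volume well but flattens site-to-site differences, which is exactly the loss of detail the abstract flags.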
ERIC Educational Resources Information Center
Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel
2015-01-01
A simple procedure to convert protein data bank files (.pdb) into a stereolithography file (.stl) using VMD software (Virtual Molecular Dynamic) is reported. This tutorial allows generating, with a very simple protocol, three-dimensional customized structures that can be printed by a low-cost 3D-printer, and used for teaching chemical education…
Development of a simple and low cost microbioreactor for high-throughput bioprocessing.
Rahman, Pattanathu K S M; Pasirayi, Godfrey; Auger, Vincent; Ali, Zulfiqur
2009-02-01
A simple microbioreactor for high-throughput bioprocessing, made from low-cost polytetrafluoroethylene (PTFE) tubes with a working volume of 1.5 ml, is described. We have developed a microfluidic system that handles a small population of cells of a model microorganism, Pseudomonas aeruginosa DS10-129. Under the conditions of the microbioreactor, the organism produced extracellular secondary metabolites using nutrient broth modified with glycerol. Pyocyanins were isolated from the fermented medium as a metabolite of interest. The antibiotic properties of pyocyanin were effective against a number of microorganisms, including Staphylococcus aureus, S. epidermidis, Bacillus subtilis, Micrococcus luteus and Saccharomyces cerevisiae. Batch fermentation of the model organism in the microbioreactor was compared to shake-flask and conventional bench fermenter methods. Results obtained from the microbioreactor compared favourably with the conventional processes.
Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas
2004-01-01
Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...
Review of Statistical Methods for Analysing Healthcare Resources and Costs
Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G
2011-01-01
We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344
Meningomyelocele Simulation Model: Pre-surgical Management–Technical Report
Angert, Robert M
2018-01-01
This technical report describes the creation of a myelomeningocele model of a newborn baby. This is a simple, low-cost, and easy-to-assemble model that allows the medical team to practice the delivery room management of a newborn with myelomeningocele. The report includes scenarios and a suggested checklist with which the model can be employed. PMID:29713576
Modification Propagation in Complex Networks
NASA Astrophysics Data System (ADS)
Mouronte, Mary Luz; Vargas, María Luisa; Moyano, Luis Gregorio; Algarra, Francisco Javier García; Del Pozo, Luis Salvador
To keep up with rapidly changing conditions, business systems and their associated networks are growing more intricate than ever before. As a result, network management and operation costs not only rise but become difficult even to measure. This must be regarded as a major constraint on system optimization initiatives, as well as a setback to the economic benefits they are meant to deliver. In this work we introduce a simple model for estimating the relative cost associated with modification propagation in complex architectures. Our model can be used to anticipate costs caused by network evolution, as well as for planning and evaluating future architecture development while optimizing benefits.
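One way such a relative-cost estimate can be sketched is to count the network elements a change reaches within a bounded number of hops; the graph and the cost measure below are illustrative choices, not the paper's actual model:

```python
from collections import deque

# Hedged sketch: a modification at one element propagates along network
# dependencies; relative cost is taken here as the number of elements
# touched within a given propagation depth.

def propagation_cost(graph, start, max_depth):
    """Number of elements reached within max_depth hops of a change."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return len(seen)

net = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"]}
cost = propagation_cost(net, "a", max_depth=2)  # touches a, b, c, d
```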
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through the addition of 25 kW solar dynamic (SD) power modules. Extensive thermal and power cycle modeling capabilities have been developed; these are powerful tools in Station design and analysis, but prove cumbersome and costly for simple component preliminary design studies. To aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates the heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations.
A Linear City Model with Asymmetric Consumer Distribution
Azar, Ofer H.
2015-01-01
The article analyzes a linear-city model where the consumer distribution can be asymmetric, which is important because in real markets this distribution is often asymmetric. The model yields equilibrium price differences, even though the firms’ costs are equal and their locations are symmetric (at the two endpoints of the city). The equilibrium price difference is proportional to the transportation cost parameter and does not depend on the good's cost. The firms' markups are also proportional to the transportation cost. The two firms’ prices will be equal in equilibrium if and only if half of the consumers are located to the left of the city’s midpoint, even if other characteristics of the consumer distribution are highly asymmetric. An extension analyzes what happens when the firms have different costs and how the two sources of asymmetry – the consumer distribution and the cost per unit – interact together. The model can be useful as a tool for further development by other researchers interested in applying this simple yet flexible framework for the analysis of various topics. PMID:26034984
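The equilibrium logic can be checked numerically. In the standard setup, firms sit at the endpoints 0 and 1, a consumer at x pays p1 + t·x or p2 + t·(1 − x), so the indifferent consumer is x* = (p2 − p1 + t)/(2t). The grid-search best-response iteration and the example CDFs below are illustrative choices, not the article's derivation:

```python
# Hedged numerical sketch of the linear-city game with an arbitrary
# consumer-location CDF F on [0, 1]. Equal unit costs; firms at 0 and 1.

def clamp01(x):
    return min(max(x, 0.0), 1.0)

def equilibrium(F, t, cost, steps=600, iters=60):
    """Iterated grid-search best responses; returns approximate prices."""
    grid = [cost + 3.0 * t * i / steps for i in range(1, steps + 1)]
    p1 = p2 = cost + t
    for _ in range(iters):
        # Firm 1 serves consumers left of x*, firm 2 those to the right.
        p1 = max(grid, key=lambda p: (p - cost) *
                 F(clamp01((p2 - p + t) / (2.0 * t))))
        p2 = max(grid, key=lambda p: (p - cost) *
                 (1.0 - F(clamp01((p - p1 + t) / (2.0 * t)))))
    return p1, p2

# Uniform consumers: the textbook symmetric outcome, both prices = cost + t.
sym = equilibrium(lambda x: x, t=1.0, cost=0.0)
# Consumers massed toward firm 2's end (CDF x^2): prices differ.
asym = equilibrium(lambda x: x * x, t=1.0, cost=0.0)
```

Consistent with the article, the symmetric CDF yields equal prices proportional to the transport cost, while mass shifted away from the midpoint produces an equilibrium price difference.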
The Productivity Dilemma in Workplace Health Promotion.
Cherniack, Martin
2015-01-01
Worksite-based programs to improve workforce health and well-being (Workplace Health Promotion, WHP) have been advanced as conduits for improved worker productivity and decreased health care costs. There has been a countervailing contention in health economics that the return on investment (ROI) does not merit preventive health investment. METHODS/PROCEDURES: Pertinent studies were reviewed and their results reconsidered. A simple economic model is presented based on conventional and alternative assumptions used in cost-benefit analysis (CBA), such as discounting and negative value. The issues are presented as three conceptual dilemmas. In some occupations, such as nursing, the utility of patient survival and staff health is undervalued. WHP may miss important components of work-related health risk. Altering assumptions on discounting and eliminating the drag of negative value radically change the CBA value. Simple monetization of a work life and calculation of return on workforce health investment as a simple alternate opportunity involve highly selective interpretations of productivity and utility.
Optimal Government Subsidies to Universities in the Face of Tuition and Enrollment Constraints
ERIC Educational Resources Information Center
Easton, Stephen T.; Rockerbie, Duane W.
2008-01-01
This paper develops a simple static model of an imperfectly competitive university operating under government-imposed constraints on the ability to raise tuition fees and increase enrollments. The model has particular applicability to Canadian universities. Assuming an average cost pricing rule, rules for adequate government subsidies (operating…
Heat Transfer Modeling of Jet Vane Thrust Vector Control (TVC) Systems.
1987-12-01
Cost and complexity, including materials, labor, design, and fabrication. b. Effectiveness and ability to perform two- and three-axis control. c. … The SCRS routine contains the simple-chemical-reaction model of combustion, the theoretical basis of which is found in the book …
Chen, Ingrid T; Aung, Tin; Thant, Hnin Nwe Nwe; Sudhinaraset, May; Kahn, James G
2015-02-05
The emergence of artemisinin-resistant Plasmodium falciparum parasites in Southeast Asia threatens global malaria control efforts. One strategy to counter this problem is a subsidy of malaria rapid diagnostic tests (RDTs) and artemisinin-based combination therapy (ACT) within the informal private sector, where the majority of malaria care in Myanmar is provided. A study in Myanmar evaluated the effectiveness of financial incentives vs information, education and counselling (IEC) in driving the proper use of subsidized malaria RDTs among informal private providers. This cost-effectiveness analysis compares intervention options. A decision tree was constructed in a spreadsheet to estimate the incremental cost-effectiveness ratios (ICERs) among four strategies: no intervention, simple subsidy, subsidy with financial incentives, and subsidy with IEC. Model inputs included programmatic costs (in dollars), malaria epidemiology and observed study outcomes. Data sources included expenditure records, study data and scientific literature. Model outcomes included the proportion of properly and improperly treated individuals with and without P. falciparum malaria, and associated disability-adjusted life years (DALYs). Results are reported as ICERs in US dollars per DALY averted. One-way sensitivity analysis assessed how outcomes depend on uncertainty in inputs. ICERs from the least to most expensive intervention are: $1,169/DALY averted for simple subsidy vs no intervention, $185/DALY averted for subsidy with financial incentives vs simple subsidy, and $200/DALY averted for a subsidy with IEC vs subsidy with financial incentives. Due to decreasing ICERs, each strategy was also compared to no intervention. The subsidy with IEC was the most favourable, costing $639/DALY averted compared with no intervention. 
One-way sensitivity analysis shows that ICERs are most affected by programme costs, RDT uptake, treatment-seeking behaviour, and the prevalence and virulence of non-malarial fevers. In conclusion, private provider subsidies with IEC or a combination of IEC and financial incentives may be a good investment for malaria control.
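The incremental comparison described above is simple arithmetic once strategies are ordered by cost. The figures below are invented stand-ins with the same strategy ordering as the study, not its actual data:

```python
# Hedged sketch of incremental cost-effectiveness ratios (ICERs):
# each strategy is compared with the next-cheapest one.

def icers(strategies):
    """ICER of each strategy vs the next-cheapest, sorted by cost.
    strategies: list of (name, cost, dalys_averted)."""
    ordered = sorted(strategies, key=lambda s: s[1])
    out = []
    for (_, c0, e0), (name, c1, e1) in zip(ordered, ordered[1:]):
        out.append((name, (c1 - c0) / (e1 - e0)))
    return out

strategies = [                                   # hypothetical figures
    ("no intervention", 0.0, 0.0),
    ("simple subsidy", 50_000.0, 45.0),
    ("subsidy + incentives", 60_000.0, 95.0),
    ("subsidy + IEC", 70_000.0, 145.0),
]
results = icers(strategies)
```

A falling ICER down the list, as the study found, signals extended dominance and motivates its additional comparison of each strategy against no intervention.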
Levelized Cost of Energy Calculator | Energy Analysis | NREL
The levelized cost of energy (LCOE) calculator provides a simple calculator for utility-scale energy projects; additional factors would need to be included for a thorough analysis. To estimate a simple cost of energy, use the slider controls.
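The calculator implements the standard LCOE definition: discounted lifetime costs divided by discounted lifetime energy output. A sketch with illustrative inputs:

```python
# Sketch of the standard LCOE formula. All input figures are
# illustrative, not NREL's defaults.

def lcoe(capital, annual_om, annual_mwh, discount_rate, years):
    """Levelized cost of energy in $/MWh."""
    disc = [(1 + discount_rate) ** -t for t in range(1, years + 1)]
    costs = capital + sum(annual_om * d for d in disc)
    energy = sum(annual_mwh * d for d in disc)
    return costs / energy

value = lcoe(capital=1.5e6, annual_om=30_000, annual_mwh=2_500,
             discount_rate=0.07, years=25)
```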
Compact divided-pupil line-scanning confocal microscope for investigation of human tissues
NASA Astrophysics Data System (ADS)
Glazowski, Christopher; Peterson, Gary; Rajadhyaksha, Milind
2013-03-01
Divided-pupil line-scanning confocal microscopy (DPLSCM) can provide a simple and low-cost approach for imaging of human tissues with pathology-like nuclear and cellular detail. Using results from a multidimensional numerical model of DPLSCM, we found optimal pupil configurations for improved axial sectioning, as well as control of speckle noise in the case of reflectance imaging. The modeling results guided the design and construction of a simple (10 component) microscope, packaged within the footprint of an iPhone, and capable of cellular resolution. We present the optical design with experimental video-images of in-vivo human tissues.
Nitrogen in the Baltic Sea--policy implications of stock effects.
Hart, Rob; Brady, Mark
2002-09-01
We develop an optimal control model for cost-effective management of pollution, including two state variables, pollution stock and ecosystem quality. We apply it to Baltic Sea pollution by nitrogen leachates from agriculture. We present a sophisticated, non-linear model of leaching abatement costs, and a simple model of nitrogen stocks. We find that significant abatement is achievable at reasonable cost, despite the countervailing effects of existing agricultural policies such as price supports. Successful abatement should lead to lower nitrogen stocks in the sea in 5 years or less. However, the rate of ecosystem recovery is less certain. The results are highly dependent on the rate of self-cleaning of the Baltic Sea, and less so on the discount rate. Choice of target has a radical effect on the abatement path chosen. Cost-effectiveness demands such a choice, and should therefore be used with care when stock effects are present.
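The "simple model of nitrogen stocks" idea can be illustrated as a single stock with first-order self-cleaning and an annual load net of abatement; all parameter values below are invented, not the paper's calibration:

```python
# Hedged sketch of a one-state nitrogen-stock model: the stock decays
# at a constant self-cleaning rate and receives load minus abatement.

def simulate_stock(n0, load, abatement, decay, years):
    """Yearly nitrogen stock path under constant inflow and decay."""
    n = n0
    path = [n]
    for _ in range(years):
        n = n * (1 - decay) + (load - abatement)
        path.append(n)
    return path

baseline = simulate_stock(n0=100.0, load=10.0, abatement=0.0,
                          decay=0.05, years=5)
abated = simulate_stock(n0=100.0, load=10.0, abatement=6.0,
                        decay=0.05, years=5)
```

Under this sketch, abatement lowers the stock within five years, consistent with the abstract; the long-run level (load − abatement)/decay shows why the results hinge on the self-cleaning rate.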
Predicting Cost/Performance Trade-Offs for Whitney: A Commodity Computing Cluster
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.; Nitzberg, Bill; VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)
1997-01-01
Recent advances in low-end processor and network technology have made it possible to build a "supercomputer" out of commodity components. We develop simple models of the NAS Parallel Benchmarks version 2 (NPB 2) to explore the cost/performance trade-offs involved in building a balanced parallel computer supporting a scientific workload. We develop closed form expressions detailing the number and size of messages sent by each benchmark. Coupling these with measured single processor performance, network latency, and network bandwidth, our models predict benchmark performance to within 30%. A comparison based on total system cost reveals that current commodity technology (200 MHz Pentium Pros with 100baseT Ethernet) is well balanced for the NPBs up to a total system cost of around $1,000,000.
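The modeling approach reduces to a three-term cost: compute time, per-message latency, and per-byte bandwidth. A sketch with placeholder numbers (the paper derives the actual message counts and sizes in closed form):

```python
# Hedged sketch of a simple parallel performance model of the kind used
# for the NPB 2 benchmarks. Inputs below are invented placeholders.

def predicted_time(compute_s, n_msgs, total_bytes, latency_s, bandwidth_bps):
    """Runtime = compute + message latency + transfer time (seconds)."""
    return compute_s + n_msgs * latency_s + total_bytes * 8 / bandwidth_bps

# Hypothetical benchmark on a 100baseT-class interconnect:
t = predicted_time(compute_s=120.0, n_msgs=50_000, total_bytes=2e9,
                   latency_s=100e-6, bandwidth_bps=100e6)
```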
Use of paired simple and complex models to reduce predictive bias and quantify uncertainty
NASA Astrophysics Data System (ADS)
Doherty, John; Christensen, Steen
2011-12-01
Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions insofar as these are understood at any particular study site. The latter are often represented stochastically, reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using such models. That analysis must reflect a lack of knowledge of spatial hydraulic property details, while at the same time being subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promote good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. However, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other, while allowing access to the benefits each offers. It provides a theoretical analysis of the simplification process from a subspace point of view, yielding insights into the costs of model simplification, and into how some of these costs may be reduced.
It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.
Sail Plan Configuration Optimization for a Modern Clipper Ship
NASA Astrophysics Data System (ADS)
Gerritsen, Margot; Doyle, Tyler; Iaccarino, Gianluca; Moin, Parviz
2002-11-01
We investigate the use of gradient-based and evolutionary algorithms for sail shape optimization. We present preliminary results for the optimization of sheeting angles for the rig of the future three-masted clipper yacht Maltese Falcon. This yacht will be equipped with square-rigged masts made up of yards of circular-arc cross section. This design is especially attractive for megayachts because it provides a large sail area while maintaining aerodynamic and structural efficiency. The rig remains almost rigid over a large range of wind conditions, so a simple geometrical model can be constructed without accounting for the true flying shape. The sheeting angle optimization studies are performed using both gradient-based cost function minimization and evolutionary algorithms. The fluid flow is modeled by the Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras turbulence model. Unstructured non-conforming grids are used to increase robustness and computational efficiency. The optimization process is automated by integrating the system components (geometry construction, grid generation, flow solver, force calculator, optimization). We compare the optimization results to those obtained previously through user-controlled parametric studies using simple cost functions and user intuition. We also investigate the effectiveness of various cost functions in the optimization (driving-force maximization, maximization of the ratio of driving force to heeling force).
Terakado, Shingo; Glass, Thomas R; Sasaki, Kazuhiro; Ohmura, Naoya
2014-01-01
A simple new model for estimating the screening performance (false positive and false negative rates) of a given test for a specific sample population is presented. The model is shown to give good results on a test population, and is used to estimate the performance on a sampled population. Using the model developed in conjunction with regulatory requirements and the relative costs of the confirmatory and screening tests allows evaluation of the screening test's utility in terms of cost savings. Testers can use the methods developed to estimate the utility of a screening program using available screening tests with their own sample populations.
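The core trade-off described above, whether a cheap screen ahead of an expensive confirmatory test saves money for a given sample population, can be sketched as an expected-cost calculation. All rates and costs below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: expected per-sample cost of a screen-then-confirm
# strategy versus confirmatory testing alone. All rates and costs are
# illustrative assumptions, not values from the paper.

def screen_and_confirm_cost(prevalence, sensitivity, specificity,
                            cost_screen, cost_confirm):
    """Screen every sample; send only screen-positives on to the
    confirmatory test. Screen false negatives are never confirmed."""
    p_screen_positive = (prevalence * sensitivity
                         + (1 - prevalence) * (1 - specificity))
    return cost_screen + p_screen_positive * cost_confirm

# Illustrative numbers: 5% prevalence, 95% sensitive, 90% specific screen.
screened = screen_and_confirm_cost(0.05, 0.95, 0.90,
                                   cost_screen=5.0, cost_confirm=50.0)
confirm_only = 50.0  # every sample goes straight to the confirmatory test

print(f"screen-and-confirm: ${screened:.2f}/sample")
print(f"confirm-only:       ${confirm_only:.2f}/sample")
```

Whether the screen pays off depends on the screen-positive rate in the sampled population, which is exactly why the paper's false positive/negative estimates matter for the utility calculation.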
System cost/performance analysis (study 2.3). Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Kazangey, T.
1973-01-01
The relationships between performance, safety, cost, and schedule parameters were identified and quantified in support of an overall effort to generate program models and methodology that provide insight into a total space vehicle program. A specific space vehicle system, the attitude control system (ACS), was used, and a modeling methodology was selected that develops a consistent set of quantitative relationships among performance, safety, cost, and schedule, based on the characteristics of the components utilized in candidate mechanisms. These descriptive equations were developed for a three-axis, earth-pointing, mass expulsion ACS. A data base describing typical candidate ACS components was implemented, along with a computer program to perform sample calculations. This approach, implemented on a computer, is capable of determining the effect of a change in functional requirements to the ACS mechanization and the resulting cost and schedule. By a simple extension of this modeling methodology to the other systems in a space vehicle, a complete space vehicle model can be developed. Study results and recommendations are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Northrop, G.M.
1975-06-01
Societal consequences of the availability, under Title II, Public Law 92-513, of information on crashworthiness, crash repair cost, routine maintenance and repair cost, and insurance cost are investigated. Surveys of small groups of private passenger car buyers and fleet buyers were conducted, and the results were analyzed. Three simple computer models were prepared: (1) an Accident Model to compare the number of occupants suffering fatal or serious injuries under assumed car-buying behavior with and without the availability of Title II information and changes made by car manufacturers that modify crashworthiness and car weight; (2) a New Car Sales Model to determine the impact of car-buying behavior on 22 societal elements involving consumer expenditures and employment, sales margin, and value added for dealers, car manufacturers, and industrial suppliers; and (3) a Car Operations Model to determine the impact of car-buying behavior on the total gasoline consumption cost, crash repair cost, routine maintenance, repair cost, and insurance cost. Projections of car-buying behavior over a 10-year period (1976-1985) were made and results presented in the form of 10-year average values of the percent difference between results under 'With Title II' and 'Without Title II' information.
Opportunity Cost: A Reexamination
ERIC Educational Resources Information Center
Parkin, Michael
2016-01-01
Is opportunity cost an ambiguous and arbitrary concept or a simple, straightforward, and fruitful one? This reexamination of opportunity cost addresses this question, and shows that opportunity cost is an ambiguous concept because "two" definitions are in widespread use. One of the definitions is indeed simple, fruitful, and one that…
USDA-ARS?s Scientific Manuscript database
Soil moisture monitoring with in situ technology is a time consuming and costly endeavor for which a method of increasing the resolution of spatial estimates across in situ networks is necessary. Using a simple hydrologic model, the resolution of an in situ watershed network can be increased beyond...
Su, Bin-Guang; Chen, Shao-Fen; Yeh, Shu-Hsing; Shih, Po-Wen; Lin, Ching-Chiang
2016-11-01
To cope with the government's policies to reduce medical costs, Taiwan's healthcare service providers are striving to survive by pursuing profit maximization through cost control. This article aimed to present the results of cost evaluation using activity-based costing performed in the laboratory in order to throw light on the differences between costs and the payment system of National Health Insurance (NHI). This study analyzed the data of costs and income of the clinical laboratory. Direct costs belong to their respective sections of the department. The department's shared costs, including public expenses and administrative assigned costs, were allocated to the department's respective sections. A simple regression equation was created to predict profit and loss, and evaluate the department's break-even point, fixed cost, and contribution margin ratio. In the clinical chemistry and seroimmunology sections, the cost per test was lower than the NHI payment and their major laboratory tests had revenues with a profitability ratio of 8.7%, while the other sections had a higher cost per test than the NHI payment and their major tests were in deficit. The study found a simple linear regression model as follows: "Balance = −84,995 + 0.543 × income (R² = 0.544)". In order to avoid deficit, laboratories are suggested to increase test volumes, enhance laboratory test specialization, and reach marginal scale. A hospital could integrate with regional medical institutions through alliances or OEM methods to increase volumes to reach marginal scale and reduce laboratory costs, enhancing the level and quality of laboratory medicine.
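The reported regression lends itself to a standard cost-volume-profit reading: the intercept as fixed cost, the slope as the contribution margin ratio, and the break-even income as their quotient. A minimal sketch of that calculation, under that interpretive assumption:

```python
# Break-even analysis from the reported regression
#   Balance = -84,995 + 0.543 * income  (R^2 = 0.544)
# Reading the intercept as fixed cost and the slope as the contribution
# margin ratio follows standard cost-volume-profit analysis; this reading
# is an assumption about the model, not a statement from the paper.

fixed_cost = 84_995          # absolute value of the intercept
contribution_margin = 0.543  # slope: balance gained per unit of income

break_even_income = fixed_cost / contribution_margin

def balance(income):
    return -fixed_cost + contribution_margin * income

print(f"break-even income: {break_even_income:,.0f}")
print(f"balance at break-even: {balance(break_even_income):.2f}")
```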
Time-driven activity-based costing: A dynamic value assessment model in pediatric appendicitis.
Yu, Yangyang R; Abbas, Paulette I; Smith, Carolyn M; Carberry, Kathleen E; Ren, Hui; Patel, Binita; Nuchtern, Jed G; Lopez, Monica E
2017-06-01
Healthcare reform policies are emphasizing value-based healthcare delivery. We hypothesize that time-driven activity-based costing (TDABC) can be used to appraise healthcare interventions in pediatric appendicitis. Triage-based standing delegation orders, surgical advanced practice providers, and a same-day discharge protocol were implemented to target deficiencies identified in our initial TDABC model. Post-intervention process maps for a hospital episode were created using electronic time stamp data for simple appendicitis cases during February to March 2016. Total personnel and consumable costs were determined using TDABC methodology. The post-intervention TDABC model featured 6 phases of care, 33 processes, and 19 personnel types. Our interventions reduced duration and costs in the emergency department (-41min, -$23) and pre-operative floor (-57min, -$18). While post-anesthesia care unit duration and costs increased (+224min, +$41), the same-day discharge protocol eliminated post-operative floor costs (-$306). Our model incorporating all three interventions reduced total direct costs by 11% ($2753.39 to $2447.68) and duration of hospitalization by 51% (1984min to 966min). Time-driven activity-based costing can dynamically model changes in our healthcare delivery as a result of process improvement interventions. It is an effective tool to continuously assess the impact of these interventions on the value of appendicitis care. Level of evidence: II. Type of study: Economic Analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
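The TDABC mechanics behind these phase-level figures are simple to sketch: each process step costs its duration times the per-minute capacity cost rate of the personnel performing it, plus consumables. All personnel types, durations, and rates below are hypothetical placeholders, not the study's data.

```python
# Minimal TDABC sketch: cost of one care episode as the sum over process
# steps of (minutes used) * (per-minute capacity cost rate of the personnel
# type), plus consumables. All rates and durations are hypothetical.

phases = [
    # (phase, personnel type, minutes, cost rate in $/min)
    ("emergency department", "ED physician",   30, 2.50),
    ("emergency department", "triage nurse",   15, 0.80),
    ("operating room",       "surgeon",        45, 3.00),
    ("operating room",       "OR nurse",       60, 0.90),
    ("post-anesthesia care", "PACU nurse",    120, 0.85),
]
consumables = 400.00  # supplies, medications (hypothetical)

personnel_cost = sum(minutes * rate for _, _, minutes, rate in phases)
total_cost = personnel_cost + consumables
print(f"personnel ${personnel_cost:.2f} + consumables ${consumables:.2f}"
      f" = ${total_cost:.2f}")
```

Because cost is tied to measured minutes, a process change (such as the same-day discharge protocol above) shows up directly as deleted or shortened rows.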
Hospitalization Cost Model of Pediatric Surgical Treatment of Chiari Type 1 Malformation.
Lam, Sandi K; Mayer, Rory R; Luerssen, Thomas G; Pan, I Wen
2016-12-01
To develop a cost model for hospitalization costs of surgery among children with Chiari malformation type 1 (CM-1) and to examine risk factors for increased costs. Data were extracted from the US National Healthcare Cost and Utilization Project 2009 Kids' Inpatient Database. The study cohort was comprised of patients aged 0-20 years who underwent CM-1 surgery. Patient charges were converted to costs by cost-to-charge ratios. Simple and multivariable generalized linear models were used to construct cost models and to determine factors associated with increased hospital costs of CM-1 surgery. A total of 1075 patients were included. Median age was 11 years (IQR 5-16 years). Payers included public (32.9%) and private (61.5%) insurers. Median wage-adjusted cost and length-of-stay for CM-1 surgery were US $13 598 (IQR $10 475-$18 266) and 3 days (IQR 3-4 days). Higher costs were found at freestanding children's hospitals: average incremental-increased cost (AIIC) was US $5155 (95% CI $2067-$8749). Factors most associated with increased hospitalization costs were patients with device-dependent complex chronic conditions (AIIC $20 617, 95% CI $13 721-$29 026) and medical complications (AIIC $13 632, 95% CI $7163-$21 845). Neurologic and neuromuscular, metabolic, gastrointestinal, and other congenital genetic defect complex chronic conditions were also associated with higher hospital costs. This study examined cost drivers for surgery for CM-1; the results may serve as a starting point in informing the development of financial risk models, such as bundled payments or prospective payment systems for these procedures. Beyond financial implications, the study identified specific risk factors associated with increased costs. Copyright © 2016 Elsevier Inc. All rights reserved.
Liu, Nan; D'Aunno, Thomas
2012-04-01
To develop simple stylized models for evaluating the productivity and cost-efficiencies of different practice models to involve nurse practitioners (NPs) in primary care, and in particular to generate insights on what affects the performance of these models and how. The productivity of a practice model is defined as the maximum number of patients that can be accounted for by the model under a given timeliness-to-care requirement; cost-efficiency is measured by the corresponding annual cost per patient in that model. Appropriate queueing analysis is conducted to generate formulas and values for these two performance measures. Model parameters for the analysis are extracted from the previous literature and survey reports. Sensitivity analysis is conducted to investigate the model performance under different scenarios and to verify the robustness of findings. Employing an NP, whose salary is usually lower than a primary care physician, may not be cost-efficient, in particular when the NP's capacity is underutilized. Besides provider service rates, workload allocation among providers is one of the most important determinants for the cost-efficiency of a practice model involving NPs. Capacity pooling among providers could be a helpful strategy to improve efficiency in care delivery. The productivity and cost-efficiency of a practice model depend heavily on how providers organize their work and a variety of other factors related to the practice environment. Queueing theory provides useful tools to take into account these factors in making strategic decisions on staffing and panel size selection for a practice model. © Health Research and Educational Trust.
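The paper's productivity measure, the maximum panel size sustainable under a timeliness-to-care requirement, can be illustrated with the simplest queueing case. In an M/M/1 queue with service rate μ, expected time in system is W = 1/(μ − λ), so requiring W ≤ T caps the arrival rate at λ = μ − 1/T. This is only a one-server caricature of the paper's models, and every parameter value below is hypothetical.

```python
# M/M/1 sketch of panel-size productivity under a timeliness constraint.
# The paper's queueing models are richer; this only illustrates the logic.
# All parameter values are hypothetical.

def max_panel_size(service_rate, max_wait, visit_rate):
    """service_rate: patients/day a provider can see (mu)
    max_wait: allowed expected time in system, in days (T)
    visit_rate: visits per patient per day
    M/M/1 expected time in system is W = 1/(mu - lambda);
    requiring W <= T gives lambda <= mu - 1/T."""
    max_arrival_rate = service_rate - 1.0 / max_wait
    if max_arrival_rate <= 0:
        return 0
    return int(max_arrival_rate / visit_rate)

# Hypothetical: provider sees 20 patients/day, target <= half a day in
# system, patients average 4 visits/year (4/365 visits per day).
panel = max_panel_size(service_rate=20.0, max_wait=0.5, visit_rate=4 / 365)
print(f"maximum panel size: {panel} patients")
```

Cost-efficiency then follows by dividing the model's annual cost (salaries plus overhead) by this panel size, which is why underutilized NP capacity shows up as a higher cost per patient.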
Keep it simple? Predicting primary health care costs with clinical morbidity measures
Brilleman, Samuel L.; Gravelle, Hugh; Hollinghurst, Sandra; Purdy, Sarah; Salisbury, Chris; Windmeijer, Frank
2014-01-01
Models of the determinants of individuals’ primary care costs can be used to set capitation payments to providers and to test for horizontal equity. We compare the ability of eight measures of patient morbidity and multimorbidity to predict future primary care costs and examine capitation payments based on them. The measures were derived from four morbidity descriptive systems: 17 chronic diseases in the Quality and Outcomes Framework (QOF); 17 chronic diseases in the Charlson scheme; 114 Expanded Diagnosis Clusters (EDCs); and 68 Adjusted Clinical Groups (ACGs). These were applied to patient records of 86,100 individuals in 174 English practices. For a given disease description system, counts of diseases and sets of disease dummy variables had similar explanatory power. The EDC measures performed best followed by the QOF and ACG measures. The Charlson measures had the worst performance but still improved markedly on models containing only age, gender, deprivation and practice effects. Comparisons of predictive power for different morbidity measures were similar for linear and exponential models, but the relative predictive power of the models varied with the morbidity measure. Capitation payments for an individual patient vary considerably with the different morbidity measures included in the cost model. Even for the best fitting model large differences between expected cost and capitation for some types of patient suggest incentives for patient selection. Models with any of the morbidity measures show higher cost for more deprived patients but the positive effect of deprivation on cost was smaller in better fitting models. PMID:24657375
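The comparison of explanatory power across morbidity measures comes down to comparing R² across cost models. A toy version on synthetic data, where cost is generated mainly from a chronic-condition count, shows why a morbidity-count model outperforms an age-only model; the data and the univariate regressions here are illustrative only, far simpler than the paper's specifications.

```python
# Sketch: comparing the explanatory power (R^2) of an age-only cost model
# with a morbidity-count model, on synthetic data generated so that cost
# is driven mainly by the number of chronic conditions. Univariate OLS in
# plain Python; the paper's models are far richer.
import random

random.seed(1)
n = 2000
age = [random.uniform(18, 90) for _ in range(n)]
# Older patients accumulate more conditions, plus noise.
conditions = [max(0, int(a / 20 + random.gauss(0, 1.5))) for a in age]
cost = [100 + 250 * c + random.gauss(0, 200) for c in conditions]

def r_squared(x, y):
    """R^2 of a univariate least-squares fit of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    ss_res = sum((b - my - slope * (a - mx)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot

r2_age = r_squared(age, cost)
r2_morbidity = r_squared(conditions, cost)
print(f"R^2 age only:        {r2_age:.3f}")
print(f"R^2 condition count: {r2_morbidity:.3f}")
```

Age still explains some cost here, but only through its correlation with the condition count, which mirrors the paper's finding that morbidity measures improve markedly on age-gender models.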
DOT National Transportation Integrated Search
2017-06-01
The structural deterioration of aging infrastructure systems and the costs of repairing these systems is an increasingly important issue worldwide. Structural health monitoring (SHM), most commonly visual inspection and condition rating, has proven t...
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.
1993-01-01
The issues of how to avoid future surprise growth in Mission Operations and Data Analysis (MO&DA) costs and how to minimize total MO&DA costs for planetary missions are discussed within the context of JPL mission operations support. It is argued that there is no simple, single solution: the entire Project life-cycle must be addressed. It is concluded that cost models that can predict both MO&DA cost as well as Ground System development costs are needed. The first year MO&DA budget plotted against the total of ground and flight systems developments is shown. In order to better recognize changes and control costs in general, a modified funding line item breakdown is recommended to distinguish between development costs (prelaunch and postlaunch) and MO&DA costs.
Multibody model reduction by component mode synthesis and component cost analysis
NASA Technical Reports Server (NTRS)
Spanos, J. T.; Mingori, D. L.
1990-01-01
The classical assumed-modes method is widely used in modeling the dynamics of flexible multibody systems. According to the method, the elastic deformation of each component in the system is expanded in a series of spatial and temporal functions known as modes and modal coordinates, respectively. This paper focuses on the selection of component modes used in the assumed-modes expansion. A two-stage component modal reduction method is proposed combining Component Mode Synthesis (CMS) with Component Cost Analysis (CCA). First, each component model is truncated such that the contribution of the high frequency subsystem to the static response is preserved. Second, a new CMS procedure is employed to assemble the system model and CCA is used to further truncate component modes in accordance with their contribution to a quadratic cost function of the system output. The proposed method is demonstrated with a simple example of a flexible two-body system.
A simplified financial model for automatic meter reading
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, S.M.
1994-01-15
The financial model proposed here (which can be easily adapted for electric, gas, or water) combines aspects of "life cycle," "consumer value" and "revenue based" approaches and addresses intangible benefits. A simple value tree of one-word descriptions clarifies the relationship between level of investment and level of value, visually relating increased value to increased cost. The model computes the numerical present values of capital costs, recurring costs, and revenue benefits over a 15-year period for the seven configurations: manual reading of existing or replacement standard meters (MMR), manual reading using electronic, hand-held retrievers (EMR), remote reading of inaccessible meters via hard-wired receptacles (RMR), remote reading of meters adapted with pulse generators (RMR-P), remote reading of meters adapted with absolute dial encoders (RMR-E), offsite reading over a few hundred feet with mobile radio (OMR), and fully automatic reading using telephone or an equivalent network (AMR). In the model, of course, the costs of installing the configurations are clearly listed under each column. The model requires only four annualized inputs and seven fixed-cost inputs that are rather easy to obtain.
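The model's core computation, a 15-year net present value per configuration from capital cost, recurring cost, and revenue benefit, can be sketched as follows. The per-configuration numbers and the 8% discount rate are hypothetical placeholders, not the article's inputs.

```python
# Sketch of the model's core computation: 15-year net present value of a
# metering configuration from capital cost, recurring cost, and revenue
# benefit. Configuration numbers and discount rate are hypothetical.

def npv(capital, annual_cost, annual_benefit, rate=0.08, years=15):
    """Net present value: upfront capital outlay, then annual net cash
    flows discounted at `rate` over `years`."""
    return -capital + sum((annual_benefit - annual_cost) / (1 + rate) ** t
                          for t in range(1, years + 1))

configurations = {
    "MMR (manual reading)":      npv(capital=0,   annual_cost=60, annual_benefit=0),
    "EMR (hand-held retriever)": npv(capital=40,  annual_cost=45, annual_benefit=5),
    "AMR (network reading)":     npv(capital=150, annual_cost=10, annual_benefit=30),
}
for name, value in sorted(configurations.items(), key=lambda kv: -kv[1]):
    print(f"{name}: NPV per meter ${value:.2f}")
```

Ranking configurations by NPV per meter is what visually relates "increased value to increased cost" in the article's value tree: higher capital can still win once recurring savings are discounted over the full period.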
The Productivity Dilemma in Workplace Health Promotion
Cherniack, Martin
2015-01-01
Background. Worksite-based programs to improve workforce health and well-being (Workplace Health Promotion (WHP)) have been advanced as conduits for improved worker productivity and decreased health care costs. There has been a countervailing health economics contention that return on investment (ROI) does not merit preventive health investment. Methods/Procedures. Pertinent studies were reviewed and results reconsidered. A simple economic model is presented based on conventional and alternate assumptions used in cost benefit analysis (CBA), such as discounting and negative value. The issues are presented in the format of 3 conceptual dilemmas. Principal Findings. In some occupations such as nursing, the utility of patient survival and staff health is undervalued. WHP may miss important components of work related health risk. Altering assumptions on discounting and eliminating the drag of negative value radically change the CBA value. Significance. Simple monetization of a work life and calculation of return on workforce health investment as a simple alternate opportunity involve highly selective interpretations of productivity and utility. PMID:26380374
Soldering to a single atomic layer
NASA Astrophysics Data System (ADS)
Girit, Çağlar Ö.; Zettl, A.
2007-11-01
The standard technique to make electrical contact to nanostructures is electron beam lithography. This method has several drawbacks including complexity, cost, and sample contamination. We present a simple technique to cleanly solder submicron sized, Ohmic contacts to nanostructures. To demonstrate, we contact graphene, a single atomic layer of carbon, and investigate low- and high-bias electronic transport. We set lower bounds on the current carrying capacity of graphene. A simple model allows us to obtain device characteristics such as mobility, minimum conductance, and contact resistance.
Long, Keith R.; Singer, Donald A.
2001-01-01
Determining the economic viability of mineral deposits of various sizes and grades is a critical task in all phases of mineral supply, from land-use management to mine development. This study evaluates two simple tools for estimating the economic viability of porphyry copper deposits mined by open-pit, heap-leach methods when only limited information on these deposits is available. These two methods are useful for evaluating deposits that either (1) are undiscovered deposits predicted by a mineral resource assessment, or (2) have been discovered but for which little data has been collected or released. The first tool uses ordinary least-squares regression analysis of cost and operating data from selected deposits to estimate a predictive relationship between mining rate, itself estimated from deposit size, and capital and operating costs. The second method uses cost models developed by the U.S. Bureau of Mines (Camm, 1991) updated using appropriate cost indices. We find that the cost model method works best for estimating capital costs and the empirical model works best for estimating operating costs for mines to be developed in the United States.
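Mine cost regressions of this kind are conventionally fit as power laws, cost = a · rate^b, estimated by ordinary least squares on log-transformed data. A minimal sketch with synthetic data points (the values below are illustrative, not the study's dataset):

```python
# Sketch of the empirical approach: fit a power-law cost model
#   capital_cost = a * (mining_rate ** b)
# by ordinary least squares on log-transformed data, the functional form
# conventional in mine cost models. Data points below are synthetic.
import math

# (mining rate in kt/day, capital cost in $M) -- illustrative only
data = [(10, 80), (20, 140), (40, 250), (80, 430), (160, 760)]

logs = [(math.log(r), math.log(c)) for r, c in data]
n = len(logs)
mx = sum(x for x, _ in logs) / n
my = sum(y for _, y in logs) / n
b = (sum((x - mx) * (y - my) for x, y in logs)
     / sum((x - mx) ** 2 for x, _ in logs))   # scale exponent
a = math.exp(my - b * mx)                     # scale coefficient

predict = lambda rate: a * rate ** b
print(f"capital_cost ~= {a:.1f} * rate^{b:.2f}")
print(f"predicted cost at 100 kt/day: ${predict(100):.0f}M")
```

An exponent below one captures the economies of scale that make deposit size, via mining rate, the dominant driver of capital cost.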
On the measurement and valuation of travel time variability due to incidents on freeways
DOT National Transportation Integrated Search
1999-12-01
Incidents on freeways frequently cause long, unanticipated delays, increasing the economic cost of travel to motorists. This paper provides a simple model for estimating the mean and variance of time lost due to incidents on freeways. It also reviews...
O’Brien, J. Patrick; Malvankar, Nikhil S.
2017-01-01
Anaerobic microorganisms play a central role in several environmental processes and regulate global biogeochemical cycling of nutrients and minerals. Many anaerobic microorganisms are important for the production of bioenergy and biofuels. However, the major hurdle in studying anaerobic microorganisms in the laboratory is the requirement for sophisticated and expensive gassing stations and glove boxes to create and maintain the anaerobic environment. This appendix presents a simple design for a gassing station that can be used readily by an inexperienced investigator for cultivation of anaerobic microorganisms. In addition, this appendix also details the low-cost assembly of bioelectrochemical systems and outlines a simplified procedure for cultivating and analyzing bacterial cell cultures and biofilms that produce electric current, using Geobacter sulfurreducens as a model organism. PMID:27858972
Estimation of surface temperature in remote pollution measurement experiments
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1978-01-01
A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.
A Simple Exoskeleton That Assists Plantarflexion Can Reduce the Metabolic Cost of Human Walking
Malcolm, Philippe; Derave, Wim; Galle, Samuel; De Clercq, Dirk
2013-01-01
Background Even though walking can be sustained for great distances, considerable energy is required for plantarflexion around the instant of opposite leg heel contact. Different groups attempted to reduce metabolic cost with exoskeletons but none could achieve a reduction beyond the level of walking without exoskeleton, possibly because there is no consensus on the optimal actuation timing. The main research question of our study was whether it is possible to obtain a higher reduction in metabolic cost by tuning the actuation timing. Methodology/Principal Findings We measured metabolic cost by means of respiratory gas analysis. Test subjects walked with a simple pneumatic exoskeleton that assists plantarflexion with different actuation timings. We found that the exoskeleton can reduce metabolic cost by 0.18±0.06 W kg⁻¹ or 6±2% (standard error of the mean) (p = 0.019) below the cost of walking without exoskeleton if actuation starts just before opposite leg heel contact. Conclusions/Significance The optimum timing that we found concurs with the prediction from a mathematical model of walking. While the present exoskeleton was not ambulant, measurements of joint kinetics reveal that the required power could be recycled from knee extension deceleration work that occurs naturally during walking. This demonstrates that it is theoretically possible to build future ambulant exoskeletons that reduce metabolic cost, without power supply restrictions. PMID:23418524
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Montazeri, Mona; Farrokhi-Asl, Hamed; Rafiei, Hamed
2016-12-01
Mixed-model assembly lines are increasingly accepted in many industrial environments to meet the growing trend of greater product variability, diversification of customer demands, and shorter life cycles. In this research, a new mathematical model is presented considering balancing a mixed-model U-line and human-related issues, simultaneously. The objective function consists of two separate components. The first part of the objective function is related to balance problem. In this part, objective functions are minimizing the cycle time, minimizing the number of workstations, and maximizing the line efficiencies. The second part is related to human issues and consists of hiring cost, firing cost, training cost, and salary. To solve the presented model, two well-known multi-objective evolutionary algorithms, namely non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, have been used. A simple solution representation is provided in this paper to encode the solutions. Finally, the computational results are compared and analyzed.
HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.
Juusola, Jessie L; Brandeau, Margaret L
2016-04-01
To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
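Because the model is linear, its base-case prioritization (fund the most cost-effective program fully before the next) can be sketched as a greedy allocation under a fixed budget. The programs match the paper's examples, but every benefit rate and spending cap below is a hypothetical placeholder, and the sketch omits the paper's diseconomies of scale and subadditive benefits.

```python
# Sketch of the linear model's logic: with linear costs and benefits, the
# optimal allocation funds programs in descending order of benefit per
# dollar until the budget is exhausted. All numbers are hypothetical
# placeholders, not the paper's estimates.

def allocate(budget, programs):
    """programs: list of (name, qalys_per_dollar, max_spend).
    Greedy by cost-effectiveness, valid because the model is linear."""
    plan = {}
    for name, qalys_per_dollar, max_spend in sorted(
            programs, key=lambda p: -p[1]):
        spend = min(budget, max_spend)
        plan[name] = spend
        budget -= spend
    return plan

programs = [
    ("CBE",  0.00050, 20e6),   # community-based education
    ("ART",  0.00020, 60e6),   # antiretroviral therapy scale-up
    ("PrEP", 0.00005, 40e6),   # preexposure prophylaxis
]
plan = allocate(budget=70e6, programs=programs)
print(plan)
```

With these placeholder rates the budget funds CBE fully, then ART, leaving nothing for PrEP, the same ordering as the paper's base case; diseconomies of scale would cap each program's efficient spending level below `max_spend`.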
Gastroschisis Simulation Model: Pre-surgical Management Technical Report.
Rosen, Orna; Angert, Robert M
2017-03-22
This technical report describes the creation of a gastroschisis model for a newborn. This is a simple, low-cost task trainer that provides the opportunity for Neonatology providers, including fellows, residents, nurse practitioners, physician assistants, and nurses, to practice the management of a baby with gastroschisis after birth and prior to surgery. Included is a suggested checklist with which the model can be employed. The details can be modified to suit different learning objectives.
NASA Technical Reports Server (NTRS)
Bao, Han P.; Samareh, J. A.
2000-01-01
The primary objective of this paper is to demonstrate the use of process-based manufacturing and assembly cost models in a traditional performance-focused multidisciplinary design and optimization process. The use of automated cost-performance analysis is an enabling technology that could bring realistic process-based manufacturing and assembly cost into multidisciplinary design and optimization. In this paper, we present a new methodology for incorporating process costing into a standard multidisciplinary design optimization process. Material, manufacturing process, and assembly process costs could then be used as the objective function for the optimization method. A case study involving forty-six different configurations of a simple wing is presented, indicating that a design based on performance criteria alone may not necessarily be the most affordable as far as manufacturing and assembly cost is concerned.
Vernon, John A; Hughen, W Keener; Johnson, Scott J
2005-05-01
In the face of significant real healthcare cost inflation, pressured budgets, and ongoing launches of myriad technology of uncertain value, payers have formalized new valuation techniques that represent a barrier to entry for drugs. Cost-effectiveness analysis predominates among these methods, which involves differencing a new technological intervention's marginal costs and benefits with a comparator's, and comparing the resulting ratio to a payer's willingness-to-pay threshold. In this paper we describe how firms are able to model the feasible range of future product prices when making in-licensing and developmental Go/No-Go decisions by considering payers' use of the cost-effectiveness method. We illustrate this analytic method with a simple deterministic example and then incorporate stochastic assumptions using both analytic and simulation methods. Using this strategic approach, firms may reduce product development and in-licensing risk.
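The pricing logic described, solving the payer's cost-effectiveness constraint for the maximum feasible price, can be sketched deterministically and then made stochastic, as the paper does. Requiring ICER = (price + other incremental costs) / ΔQALY ≤ λ gives the price ceiling directly. All inputs below are hypothetical placeholders.

```python
# Sketch of threshold pricing under a cost-effectiveness constraint: the
# maximum price at which the incremental cost-effectiveness ratio (ICER)
# stays within the payer's willingness-to-pay threshold. All inputs are
# hypothetical placeholders.
import random

def max_price(wtp_threshold, delta_qaly, other_incremental_cost):
    """Solve (price + other_incremental_cost) / delta_qaly <= wtp
    for the treatment-course price."""
    return wtp_threshold * delta_qaly - other_incremental_cost

# Deterministic case: 0.40 incremental QALYs, $5,000 extra
# administration/monitoring cost, $50,000/QALY threshold.
p = max_price(wtp_threshold=50_000, delta_qaly=0.40,
              other_incremental_cost=5_000)
print(f"maximum feasible price: ${p:,.0f}")

# Stochastic case: uncertainty in incremental QALYs propagated by
# simple Monte Carlo simulation.
random.seed(7)
draws = [max_price(50_000, random.gauss(0.40, 0.08), 5_000)
         for _ in range(10_000)]
draws.sort()
print(f"10th percentile price: ${draws[len(draws) // 10]:,.0f}")
```

A Go/No-Go decision would then compare a conservative quantile of the feasible price distribution against the development or in-licensing cost.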
Techno-economic assessment of novel vanadium redox flow batteries with large-area cells
NASA Astrophysics Data System (ADS)
Minke, Christine; Kunz, Ulrich; Turek, Thomas
2017-09-01
The vanadium redox flow battery (VRFB) is a promising electrochemical storage system for stationary megawatt-class applications. The currently limited cell area determined by the bipolar plate (BPP) could be enlarged significantly with a novel extruded large-area plate. For the first time a techno-economic assessment of VRFB in a power range of 1 MW-20 MW and energy capacities of up to 160 MWh is presented on the basis of the production cost model of large-area BPP. The economic model is based on the configuration of a 250 kW stack and the overall system including stacks, power electronics, electrolyte and auxiliaries. Final results include a simple function for the calculation of system costs within the above described scope. In addition, the impact of cost reduction potentials for key components (membrane, electrode, BPP, vanadium electrolyte) on stack and system costs is quantified and validated.
Adsorptive Removal of Toxic Chromium from Waste-Water Using Wheat Straw and Eupatorium adenophorum
Song, Dagang; Pan, Kaiwen; Tariq, Akash; Azizullah, Azizullah; Sun, Feng; Li, Zilong; Xiong, Qinli
2016-01-01
Environmental pollution with heavy metals is a serious issue worldwide, posing threats to humans, animals, and plants and to the stability of the overall ecosystem. Chromium (Cr) is one of the most hazardous heavy metals, with a highly carcinogenic and recalcitrant nature. The aim of the present study was to prepare a low-cost biosorbent from wheat straw and Eupatorium adenophorum through a simple carbonization process, capable of efficiently removing Cr(VI) from wastewater. The adsorbent was prepared from the studied plants by a very simple carbonization method that excludes an activation step. Several factors, such as pH, contact time, sorbent dosage, and temperature, were investigated to identify ideal conditions. The Langmuir, Freundlich, and Temkin models were used to analyze the adsorption equilibrium isotherm data, while pseudo-first-order, pseudo-second-order, external diffusion, and intra-particle diffusion models were used to analyze the kinetic data. The results revealed 99.9% Cr(VI) removal at a solution pH of 1.0. Among the tested models, the Langmuir model fitted the data most closely. Adsorption capacity increased with temperature, revealing the endothermic nature of Cr(VI) adsorption. The maximum Cr(VI) adsorption capacity of E. adenophorum and wheat straw was 89.22 mg per gram of adsorbent at 308 K. The kinetic data closely followed the pseudo-second-order model. The present study revealed the high potential of E. adenophorum and wheat straw for producing a low-cost adsorbent to remove Cr(VI) from contaminated water. PMID:27911906
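The Langmuir fit reported above is commonly done on the linearized form Ce/qe = 1/(qm·KL) + Ce/qm, whose slope and intercept recover the capacity qm and affinity KL. A sketch under stated assumptions: qm = 89.22 mg/g echoes the capacity in the abstract, while KL = 0.5 L/mg and the Ce grid are invented for illustration.

```python
# Hedged sketch: recovering Langmuir parameters from the linearized
# isotherm Ce/qe = 1/(qm*KL) + Ce/qm by ordinary least squares.
# qm = 89.22 mg/g mirrors the abstract; KL and Ce values are invented.

def ols(x, y):
    """Slope and intercept of the least-squares line y = b*x + a."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return b, my - b * mx

qm_true, KL_true = 89.22, 0.5
Ce = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0]                        # equilibrium conc., mg/L
qe = [qm_true * KL_true * c / (1 + KL_true * c) for c in Ce]  # Langmuir uptake, mg/g

slope, intercept = ols(Ce, [c / q for c, q in zip(Ce, qe)])
qm_fit = 1 / slope            # slope = 1/qm
KL_fit = slope / intercept    # intercept = 1/(qm*KL)
print(round(qm_fit, 2), round(KL_fit, 2))  # 89.22 0.5
```

Because the synthetic data lie exactly on the isotherm, the fit recovers both parameters; with real batch data the scatter of this line is what the isotherm comparison in the study adjudicates.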
Eckert, Kristen A; Carter, Marissa J; Lansingh, Van C; Wilson, David A; Furtado, João M; Frick, Kevin D; Resnikoff, Serge
2015-01-01
To estimate the annual loss of productivity from blindness and moderate to severe visual impairment (MSVI) using simple models (analogous to how a rapid assessment model relates to a comprehensive model) based on minimum wage (MW) and gross national income (GNI) per capita (US$, 2011). Cost of blindness (COB) was calculated for the age group ≥50 years in nine sample countries by assuming the loss of current MW and loss of GNI per capita. It was assumed that all individuals work until 65 years old and that half of visual impairment prevalent in the ≥50 years age group is prevalent in the 50-64 years age group. For cost of MSVI (COMSVI), individual wage and GNI loss of 30% was assumed. Results were compared with the values of the uncorrected refractive error (URE) model of productivity loss. COB (MW method) ranged from $0.1 billion in Honduras to $2.5 billion in the United States, and COMSVI ranged from $0.1 billion in Honduras to $5.3 billion in the US. COB (GNI method) ranged from $0.1 million in Honduras to $7.8 billion in the US, and COMSVI ranged from $0.1 billion in Honduras to $16.5 billion in the US. Most GNI method values were near equivalent to those of the URE model. Although most people with blindness and MSVI live in developing countries, the highest productivity losses are in high income countries. The global economy could improve if eye care were made more accessible and more affordable to all.
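The minimum-wage costing rule above is simple arithmetic: half of the ≥50 prevalent cases are assumed to be of working age, each losing the full wage (blindness) or 30% of it (MSVI). A sketch with invented prevalence and wage figures:

```python
# Rough sketch of the minimum-wage (MW) costing rule described above.
# The prevalence counts and annual wage are hypothetical; the 50%
# working-age share and 30% wage loss for MSVI follow the abstract.

def productivity_loss(prevalent_50plus, annual_wage, loss_share=1.0):
    working_age = 0.5 * prevalent_50plus  # half of >=50 assumed aged 50-64
    return working_age * annual_wage * loss_share

cob = productivity_loss(200_000, 15_000)                       # blindness: full wage
comsvi = productivity_loss(800_000, 15_000, loss_share=0.30)   # MSVI: 30% of wage
print(round(cob), round(comsvi))  # 1500000000 1800000000
```

Swapping GNI per capita in for the wage gives the abstract's second method; the structure of the calculation is unchanged.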
A fast dynamic grid adaption scheme for meteorological flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiedler, B.H.; Trapp, R.J.
1993-10-01
The continuous dynamic grid adaption (CDGA) technique is applied to a compressible, three-dimensional model of a rising thermal. The computational cost, per grid point per time step, of using CDGA instead of a fixed, uniform Cartesian grid is about 53% of the total cost of the model with CDGA. The use of general curvilinear coordinates contributes 11.7% to this total, calculating and moving the grid 6.1%, and continually updating the transformation relations 20.7%. Costs due to calculations that involve the gridpoint velocities (as well as some substantial unexplained costs) contribute the remaining 14.5%. A simple way to limit the cost of calculating the grid is presented. The grid is adapted by solving an elliptic equation for gridpoint coordinates on a coarse grid and then interpolating the full finite-difference grid. In this application, the additional costs per grid point of CDGA are shown to be easily offset by the savings resulting from the reduction in the required number of grid points. In the simulation of the thermal, costs are reduced by a factor of 3 compared with those of a companion model with a fixed, uniform Cartesian grid. 8 refs., 8 figs.
A Simple Low-Cost Lock-In Amplifier for the Laboratory
ERIC Educational Resources Information Center
Sengupta, Sandip K.; Farnham, Jessica M.; Whitten, James E.
2005-01-01
The creation of a simple, low-cost lock-in amplifier (LIA) suitable for use in the chemistry teaching laboratory is described. The use of integrated circuits, and the few components needed to adequately accomplish lock-in amplification, limited the total cost of construction to under US$100.
Low cost charged-coupled device (CCD) based detectors for Shiga toxins activity analysis
USDA-ARS's Scientific Manuscript database
To improve food safety there is a need to develop simple, low-cost sensitive devices for detection of foodborne pathogens and their toxins. We describe a simple and relatively low-cost webcam-based detector which can be used for various optical detection modalities, including fluorescence, chemilumi...
Variability in Costs across Hospital Wards. A Study of Chinese Hospitals
Adam, Taghreed; Evans, David B.; Ying, Bian; Murray, Christopher J. L.
2014-01-01
Introduction Analysts estimating the costs or cost-effectiveness of health interventions requiring hospitalization often cut corners because they lack data and the costs of undertaking full step-down costing studies are high. They sometimes use the costs taken from a single hospital, sometimes use simple rules of thumb for allocating total hospital costs between general inpatient care and the outpatient department, and sometimes use the average cost of an inpatient bed-day instead of a ward-specific cost. Purpose In this paper we explore for the first time the extent and the causes of variation in ward-specific costs across hospitals, using data from China. We then use the resulting model to show how ward-specific costs for hospitals outside the data set could be estimated using information on the determinants identified in the paper. Methodology Ward-specific costs estimated using step-down costing methods from 41 hospitals in 12 provinces of China were used. We used seemingly unrelated regressions to identify the determinants of variability in the ratio of the costs of specific wards to that of the outpatient department, and explain how this can be used to generate ward-specific unit costs. Findings Ward-specific unit costs varied considerably across hospitals, ranging from 1 to 24 times the unit cost in the outpatient department; average unit costs are not a good proxy for costs in specialty wards. The most important sources of variability were the number of staff and the level of capacity utilization. Practice Implications More careful hospital costing studies are clearly needed. In the meantime, we have shown that in China it is possible to estimate ward-specific unit costs taking into account key determinants of variability in costs across wards. This might well be a better alternative than using simple rules of thumb or using estimates from a single study. PMID:24874566
NASA Astrophysics Data System (ADS)
Asiedu, Mercy Nyamewaa; Simhal, Anish; Lam, Christopher T.; Mueller, Jenna; Chaudhary, Usamah; Schmitt, John W.; Sapiro, Guillermo; Ramanujam, Nimmi
2018-02-01
The World Health Organization recommends visual inspection with acetic acid (VIA) and/or Lugol's iodine (VILI) for cervical cancer screening in low-resource settings. Human interpretation of diagnostic indicators for visual inspection is qualitative, subjective, and has high inter-observer discordance, which could lead both to adverse outcomes for the patient and to unnecessary follow-ups. In this work, we present a simple method for automatic feature extraction and classification for Lugol's iodine cervigrams acquired with a low-cost, miniature, digital colposcope. Algorithms to preprocess expert physician-labelled cervigrams and to extract simple but powerful color-based features are introduced. The features are used to train a support vector machine model to classify cervigrams based on expert physician labels. The selected framework achieved a sensitivity, specificity, and accuracy of 89.2%, 66.7%, and 80.6% against the majority diagnosis of the expert physicians in discriminating cervical intraepithelial neoplasia (CIN+) relative to normal tissues. The proposed classifier also achieved an area under the curve of 0.84 when trained with the majority diagnosis of the expert physicians. The results suggest that utilizing simple color-based features may enable unbiased automation of VILI cervigrams, opening the door to a full system of low-cost data acquisition complemented with automatic interpretation.
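The pipeline outlined above (color-based features plus a trained linear model) can be caricatured in a few lines. This is a toy sketch only: a perceptron stands in for the authors' SVM, and the "images" are tiny synthetic pixel lists, not cervigram data.

```python
# Toy sketch of color-feature extraction plus a linear classifier.
# A perceptron stands in for the SVM; images and labels are synthetic.

def color_features(pixels):
    """Mean and spread of the red-to-green ratio over an image's pixels."""
    ratios = [r / max(g, 1) for r, g, _ in pixels]
    mean = sum(ratios) / len(ratios)
    var = sum((x - mean) ** 2 for x in ratios) / len(ratios)
    return [mean, var, 1.0]  # trailing 1.0 is a bias term

def train_perceptron(X, y, epochs=50, lr=0.1):
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):  # labels yi in {-1, +1}
            if yi * sum(wj * xj for wj, xj in zip(w, xi)) <= 0:
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
    return w

# Synthetic "lesion" images are redder (higher R:G) than "normal" ones.
normal = [[(90, 120, 100)] * 16, [(100, 130, 110)] * 16]
lesion = [[(180, 90, 80)] * 16, [(170, 85, 90)] * 16]
X = [color_features(p) for p in normal + lesion]
y = [-1, -1, +1, +1]
w = train_perceptron(X, y)
preds = [1 if sum(wj * xj for wj, xj in zip(w, x)) > 0 else -1 for x in X]
print(preds)  # [-1, -1, 1, 1]
```

The point of the sketch is only that hand-crafted color summaries can already be linearly separable; the paper's contribution is doing this with real cervigrams, physician labels, and a properly validated SVM.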
USDA-ARS's Scientific Manuscript database
The advent of next-generation sequencing technologies has been a boon to the cost-effective development of molecular markers, particularly in non-model species. Here, we demonstrate the efficiency of microsatellite or simple sequence repeat (SSR) marker development from short-read sequences using th...
Russell, Heidi; Swint, J. Michael; Lal, Lincy; Meza, Jane; Walterhouse, David; Hawkins, Douglas S.; Okcu, M. Fatih
2015-01-01
Background Recent Children’s Oncology Group trials for low-risk rhabdomyosarcoma attempted to reduce therapy while maintaining excellent outcomes. D9602 delivered 45 weeks of outpatient vincristine and dactinomycin (VA) for patients in Subgroup A. ARST0331 reduced the duration of therapy to 22 weeks but added four doses of cyclophosphamide to VA for patients in Subset 1. Failure-free survival was similar. We undertook a cost minimization comparison to help guide future decision-making. Procedure Addressing the costs of treatment from the healthcare perspective we modeled a simple decision-analytic model from aggregate clinical trial data. Medical care inputs and probabilities were estimated from trial reports and focused chart review. Costs of radiation, surgery and off-therapy surveillance were excluded. Unit costs were obtained from literature and national reimbursement and inpatient utilization databases and converted to 2012 US dollars. Model uncertainty was assessed with first-order sensitivity analysis. Results Direct medical costs were $46,393 for D9602 and $43,261 for ARST0331 respectively, making ARST0331 the less costly strategy. Dactinomycin contributed the most to D9602 total costs but varied with age (42–69%). Chemotherapy administration costs accounted for the largest proportion of ARST0331 total costs (39–57%). ARST0331 incurred fewer costs than D9602 under most alternative distributive models and alternative clinical practice assumptions. Conclusions Cost analysis suggests that ARST0331 may incur fewer costs than D9602 from the healthcare system’s perspective. Attention to the services driving the costs provides directions for future efficiency improvements. Future studies should prospectively consider the patient and family’s perspective. PMID:24453105
Ranked set sampling: cost and optimal set size.
Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying
2002-12-01
McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
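The mechanics of balanced RSS can be sketched directly: draw k sets of k units, rank each set, keep the i-th ranked unit from the i-th set, and average the kept units. In this illustration the population, set size, and seed are invented, and ranking is assumed perfect (units ranked by their true values), which the paper's ranking-error models relax.

```python
# Illustrative sketch of balanced ranked set sampling (RSS) with set
# size k. Population, k, cycle count, and seed are invented.

import random

def rss_estimate(population, k, cycles, rng):
    kept = []
    for _ in range(cycles):
        for i in range(k):
            s = sorted(rng.sample(population, k))
            kept.append(s[i])  # keep i-th order statistic of the i-th set
    return sum(kept) / len(kept)

rng = random.Random(42)
population = [rng.gauss(50, 10) for _ in range(10_000)]
est = rss_estimate(population, k=4, cycles=25, rng=rng)
print(abs(est - sum(population) / len(population)) < 5.0)  # True: unbiased
```

Note the cost asymmetry the paper studies: each cycle ranks k*k units but measures only k of them, which is only worthwhile when ranking is much cheaper than measurement, and the optimal k trades ranking cost and ranking error against that variance gain.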
Fortwaengler, Kurt; Parkin, Christopher G.; Neeser, Kurt; Neumann, Monika; Mast, Oliver
2017-01-01
The modeling approach described here is designed to support the development of spreadsheet-based simple predictive models. It is based on 3 pillars: association of the complications with HbA1c changes, incidence of the complications, and average cost per event of the complication. For each pillar, the goal of the analysis was (1) to find results for a large diversity of populations with a focus on countries/regions, diabetes type, age, diabetes duration, baseline HbA1c value, and gender; (2) to assess the range of incidences and associations previously reported. Unlike simple predictive models, which mostly are based on only 1 source of information for each of the pillars, we conducted a comprehensive, systematic literature review. Each source found was thoroughly reviewed and only sources meeting quality expectations were considered. The approach helps avoid the unintended use of extreme data. The user can utilize (1) one of the found sources, (2) the found range as validation for the found figures, or (3) the average of all found publications for an expedited estimate. The modeling approach is intended for use in average insulin-treated diabetes populations in which the baseline HbA1c values are within an average range (6.5% to 11.5%); it is not intended for use in individuals or unique diabetes populations (eg, gestational diabetes). Because the modeling approach only considers diabetes-related complications that are positively associated with HbA1c decreases, the costs of negatively associated complications (eg, severe hypoglycemic events) must be calculated separately. PMID:27510441
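The three-pillar arithmetic reduces to a per-complication product: (association with a one-point HbA1c change) x (incidence) x (average cost per event). A sketch follows; the complication list and every number in it are invented placeholders, not figures from the review.

```python
# Sketch of the three-pillar calculation described above. All incidence,
# association, and cost figures are invented placeholders.

def expected_savings(population, hba1c_drop, complications):
    total = 0.0
    for c in complications:
        # events avoided = people * baseline incidence * relative
        # reduction per HbA1c point * achieved HbA1c drop
        avoided = population * c["incidence"] * c["rr_per_point"] * hba1c_drop
        total += avoided * c["cost_per_event"]
    return total

complications = [
    {"name": "myocardial infarction", "incidence": 0.010,
     "rr_per_point": 0.14, "cost_per_event": 25_000},
    {"name": "severe retinopathy", "incidence": 0.008,
     "rr_per_point": 0.30, "cost_per_event": 8_000},
]
print(round(expected_savings(10_000, 0.5, complications)))  # 271000
```

The review's contribution is supplying vetted ranges for each of the three factors per population, so a spreadsheet model of this shape is not driven by a single, possibly extreme, source.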
Statistical methodologies for the control of dynamic remapping
NASA Technical Reports Server (NTRS)
Saltz, J. H.; Nicol, D. M.
1986-01-01
Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping two qualitatively different kinds of problems. One problem assumes that performance deteriorates gradually, the other assumes that performance deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.
Numerical modelling of CIGS/CdS solar cell
NASA Astrophysics Data System (ADS)
Devi, Nisha; Aziz, Anver; Datta, Shouvik
2018-05-01
In this work, we design and analyze a Cu(In,Ga)Se2 (CIGS) solar cell using the simulation software Solar Cell Capacitance Simulator in One Dimension (SCAPS-1D). The conventional CIGS solar cell uses various layers, such as intrinsic ZnO/aluminium-doped ZnO as the transparent oxide, an MgF2 antireflection layer, and an electron back reflection (EBR) layer at the CIGS/Mo interface, to achieve good power conversion efficiency. We replace this conventional design with a simple model that is easier to fabricate and reduces the cost of the cell because fewer materials are used. The newly designed CIGS solar cell model is ITO/CIGS/OVC/CdS/metal contact, where OVC is an ordered vacancy compound. Even at very low illumination, this simple structure yields good results. We simulate the CIGS solar cell model by varying physical parameters of the CIGS layer such as thickness, carrier density, band gap, and temperature.
Economic modeling of HIV treatments.
Simpson, Kit N
2010-05-01
To review the general literature on microeconomic modeling and key points that must be considered in the general assessment of economic modeling reports; to discuss the evolution of HIV economic models, identifying models that illustrate this development over time as well as examples of current studies; and to recommend improvements in HIV economic modeling. Recent economic modeling studies of HIV include examinations of scaling up antiretroviral (ARV) therapy in South Africa, screening prior to use of abacavir, preexposure prophylaxis, early start of ARV in developing countries, and cost-effectiveness comparisons of specific ARV drugs using data from clinical trials. These studies all used extensively published second-generation Markov models in their analyses. There have been attempts to simplify approaches to cost-effectiveness estimates by using simple decision trees or cost-effectiveness calculations with short time horizons. However, these approaches leave out important cumulative economic effects that will not appear early in a treatment. Many economic modeling studies were identified in the 'gray' literature, but limited descriptions precluded an assessment of their adherence to modeling guidelines, and thus of the validity of their findings. There is a need to develop third-generation models that accommodate new knowledge about adherence, adverse effects, and viral resistance.
Gastroschisis Simulation Model: Pre-surgical Management Technical Report
Angert, Robert M
2017-01-01
This technical report describes the creation of a gastroschisis model for a newborn. This is a simple, low-cost task trainer that provides the opportunity for Neonatology providers, including fellows, residents, nurse practitioners, physician assistants, and nurses, to practice the management of a baby with gastroschisis after birth and prior to surgery. Included is a suggested checklist with which the model can be employed. The details can be modified to suit different learning objectives. PMID:28439484
Dewa, Carolyn S; Hoch, Jeffrey S
2014-06-01
This article estimates the net benefit for a company incorporating a collaborative care model into its return-to-work program for workers on short-term disability related to a mental disorder. Using a simple decision model, we explored the net benefit and its uncertainty. The breakeven point occurs when the average short-term disability episode is reduced by at least 7 days. In addition, 85% of the time, benefits could outweigh costs. Model results and sensitivity analyses indicate that organizational benefits can be greater than the costs of incorporating a collaborative care model into a return-to-work program for workers on short-term disability related to a mental disorder. The results also demonstrate how the probability of a program's effectiveness and the magnitude of its effectiveness are key factors that determine whether the benefits of a program outweigh its costs.
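The breakeven logic the abstract describes can be written as one line of arithmetic: the program pays for itself once the expected reduction in disability days covers the program cost per case. In this sketch the daily disability cost and program cost are invented; they are chosen so the breakeven lands at 7 days only to echo the abstract's result by construction.

```python
# Minimal sketch of the breakeven calculation. The daily disability
# cost and per-case program cost are hypothetical.

def net_benefit(days_saved, daily_disability_cost, program_cost_per_case):
    return days_saved * daily_disability_cost - program_cost_per_case

def breakeven_days(daily_disability_cost, program_cost_per_case):
    return program_cost_per_case / daily_disability_cost

print(breakeven_days(300.0, 2_100.0))   # 7.0 -> days saved needed to break even
print(net_benefit(10, 300.0, 2_100.0))  # 900.0 -> positive above breakeven
```

The paper's probabilistic claim (benefits exceed costs 85% of the time) comes from putting a distribution on `days_saved` and the program's effectiveness rather than treating them as fixed.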
Comparative study of disinfectants for use in low-cost gravity driven household water purifiers.
Patil, Rajshree A; Kausley, Shankar B; Balkunde, Pradeep L; Malhotra, Chetan P
2013-09-01
Point-of-use (POU) gravity-driven household water purifiers have been proven to be a simple, low-cost and effective intervention for reducing the impact of waterborne diseases in developing countries. The goal of this study was to compare commonly used water disinfectants for their feasibility of adoption in low-cost POU water purifiers. The potency of each candidate disinfectant was evaluated by conducting a batch disinfection study for estimating the concentration of disinfectant needed to inactivate a given concentration of the bacterial strain Escherichia coli ATCC 11229. Based on the concentration of disinfectant required, the size, weight and cost of a model purifier employing that disinfectant were estimated. Model purifiers based on different disinfectants were compared and disinfectants which resulted in the most safe, compact and inexpensive purifiers were identified. Purifiers based on bromine, tincture iodine, calcium hypochlorite and sodium dichloroisocyanurate were found to be most efficient, cost effective and compact with replacement parts costing US$3.60-6.00 for every 3,000 L of water purified and are thus expected to present the most attractive value proposition to end users.
Pricing and Welfare in Health Plan Choice.
Bundorf, M Kate; Levin, Jonathan; Mahoney, Neale
2012-12-01
Premiums in health insurance markets frequently do not reflect individual differences in costs, either because consumers have private information or because prices are not risk rated. This creates inefficiencies when consumers self-select into plans. We develop a simple econometric model to study this problem and estimate it using data on small employers. We find a welfare loss of 2-11 percent of coverage costs compared to what is feasible with risk rating. Only about one-quarter of this is due to inefficiently chosen uniform contribution levels. We also investigate the reclassification risk created by risk rating individual incremental premiums, finding only a modest welfare cost.
Measuring Function for Medicare Inpatient Rehabilitation Payment
Carter, Grace M.; Relles, Daniel A.; Ridgeway, Gregory K.; Rimes, Carolyn M.
2003-01-01
We studied 186,766 Medicare discharges to the community in 1999 from 694 inpatient rehabilitation facilities (IRF). Statistical models were used to examine the relationship of functional items and scales to accounting cost within impairment categories. For most items, more independence leads to lower costs. However, two items are not associated with cost in the expected way. The probable causes of these anomalies are discussed along with implications for payment policy. We present the rules used to construct administratively simple, homogeneous, resource use groups that provide reasonable incentives for access and quality care and that determine payments under the new IRF prospective payment system (PPS). PMID:12894633
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.
Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin
2015-02-01
To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
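The Pareto rule defined above is a plain dominance check: an input set stays on the frontier iff no other set fits every calibration target at least as well and at least one target strictly better. A sketch, where goodness-of-fit values are per-target distances (lower is better) and all numbers are invented:

```python
# Sketch of Pareto-frontier identification over calibration input sets.
# Goodness-of-fit values are per-target misfits (lower = better); the
# input-set names and numbers are invented.

def dominates(a, b):
    """True if `a` fits all targets at least as well as `b`, one strictly."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_frontier(gof_by_input):
    return [name for name, g in gof_by_input.items()
            if not any(dominates(h, g)
                       for other, h in gof_by_input.items() if other != name)]

gof = {
    "set A": (0.10, 0.90),
    "set B": (0.50, 0.50),
    "set C": (0.90, 0.10),
    "set D": (0.60, 0.60),  # dominated by set B on both targets
}
print(pareto_frontier(gof))  # ['set A', 'set B', 'set C']
```

Note what the frontier avoids: a weighted-sum score would rank A, B, and C against each other depending on the chosen weights, whereas the dominance rule keeps all three and discards only D, which no weighting could ever prefer.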
A novel medical information management and decision model for uncertain demand optimization.
Bi, Ya
2015-01-01
Accurately planning the procurement volume is an effective measure for controlling medicine inventory costs, but uncertain demand makes it difficult to decide on the procurement volume accurately. For biomedicines whose demand is sensitive to time and season, fitting the uncertain demand with fuzzy mathematics is clearly better than using general random distribution functions. The objective was to establish a novel medical information management and decision model for uncertain demand optimization. A novel optimal management and decision model under uncertain demand is presented, based on fuzzy mathematics and a new comprehensively improved particle swarm algorithm. The model can effectively reduce medicine inventory costs. The proposed improved particle swarm optimization is a simple and effective algorithm that improves the fuzzy inference and hence effectively reduces the computational complexity of the optimal management and decision model. The new model can therefore be used for accurate decisions on procurement volume under uncertain demand.
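The combination the abstract describes can be caricatured with a standard (not the paper's "comprehensively improved") particle swarm minimizing an inventory cost under a crude three-scenario, fuzzy-style demand weighting. Every constant and the cost function itself are invented for illustration.

```python
# Hedged sketch: a plain particle swarm optimizer choosing a procurement
# quantity q under uncertain demand. The cost function, demand scenarios,
# weights, and PSO constants are all invented; the paper's model and
# improved PSO variant are richer than this.

import random

def inventory_cost(q, hold=2.0, short=10.0):
    # expected cost over three demand scenarios (triangular-fuzzy-style
    # weights on low / peak / high demand)
    cost = 0.0
    for d, w in ((80, 0.25), (100, 0.5), (140, 0.25)):
        cost += w * (hold * max(q - d, 0) + short * max(d - q, 0))
    return cost

def pso(f, lo, hi, n=30, iters=200, seed=7):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]  # particle positions
    vs = [0.0] * n                                # particle velocities
    pbest = xs[:]                                 # personal bests
    gbest = min(xs, key=f)                        # global best
    for _ in range(iters):
        for i in range(n):
            vs[i] = (0.7 * vs[i]
                     + 1.5 * rng.random() * (pbest[i] - xs[i])
                     + 1.5 * rng.random() * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i]
    return gbest

q_star = pso(inventory_cost, 0, 200)
print(120 < q_star < 160)  # True: optimum sits near the kink at q = 140
```

With a 10:2 shortage-to-holding cost ratio the optimum here is to cover even the high-demand scenario (q = 140); the paper's point is that a well-tuned swarm finds such optima cheaply even when the fuzzy demand model makes the cost surface awkward for closed-form methods.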
A Controlled Drug-Delivery Experiment Using Alginate Beads
ERIC Educational Resources Information Center
Farrell, Stephanie; Vernengo, Jennifer
2012-01-01
This paper describes a simple, cost-effective experiment which introduces students to drug delivery and modeling using alginate beads. Students produce calcium alginate beads loaded with drug and measure the rate of release from the beads for systems having different stir rates, geometries, extents of cross-linking, and drug molecular weight.…
Climate analyses to assess risks from invasive forest insects: Simple matching to advanced models
Robert C. Venette
2017-01-01
Purpose of Review. The number of invasive alien insects that adversely affect trees and forests continues to increase as do associated ecological, economic, and sociological impacts. Prevention strategies remain the most cost-effective approach to address the issue, but risk management decisions, particularly those affecting international trade,...
Social Trust and the Growth of Schooling
ERIC Educational Resources Information Center
Bjornskov, Christian
2009-01-01
The paper develops a simple model to examine how social trust might affect the growth of schooling through lowering transaction costs associated with employing educated individuals. In a sample of 52 countries, the paper thereafter provides empirical evidence that trust has led to faster growth of schooling in the period 1960-2000. The findings…
Essays on Experimental Economics and Education
ERIC Educational Resources Information Center
Ogawa, Scott Richard
2013-01-01
In Chapter 1 I consider three separate explanations for how price affects the usage rate of a purchased product: Screening, signaling, and sunk-cost bias. I propose an experimental design that disentangles the three effects. Furthermore, in order to quantify and compare these effects I introduce a simple structural model and show that the…
Lee, Jin-Woong; Chung, Jiyong; Cho, Min-Young; Timilsina, Suman; Sohn, Keemin; Kim, Ji Sik; Sohn, Kee-Sun
2018-06-20
An extremely simple bulk sheet made of a piezoresistive carbon nanotube (CNT)-Ecoflex composite can act as a smart keypad that is portable, disposable, and flexible enough to be carried crushed inside the pocket of a pair of trousers. Both a rigid-button-imbedded, rollable (or foldable) pad and a patterned flexible pad have been introduced for use as portable keyboards. Herein, we suggest a bare, bulk, macroscale piezoresistive sheet as a replacement for these complex devices that are achievable only through high-cost fabrication processes such as patterning-based coating, printing, deposition, and mounting. A deep-learning technique based on deep neural networks (DNN) enables this extremely simple bulk sheet to play the role of a smart keypad without the use of complicated fabrication processes. To develop this keypad, instantaneous electrical resistance change was recorded at several locations on the edge of the sheet along with the exact information on the touch position and pressure for a huge number of random touches. The recorded data were used for training a DNN model that could eventually act as a brain for a simple sheet-type keypad. This simple sheet-type keypad worked perfectly and outperformed all of the existing portable keypads in terms of functionality, flexibility, disposability, and cost.
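The trained network essentially maps a vector of edge-resistance changes to a touch location. As a stand-in for the DNN, a nearest-neighbour lookup over invented calibration signatures illustrates that mapping (the signatures and key labels below are hypothetical):

```python
import math

def nearest_key(reading, calibration):
    """Map a vector of edge-resistance changes to a key label by nearest
    neighbour -- a stand-in here for the trained deep network described
    above, not the authors' method."""
    return min(calibration, key=lambda k: math.dist(reading, calibration[k]))

# Hypothetical resistance-change signatures recorded for three key positions.
calibration = {"A": (0.9, 0.1, 0.2), "B": (0.2, 0.8, 0.3), "C": (0.1, 0.3, 0.9)}
key = nearest_key((0.85, 0.15, 0.25), calibration)
```

The DNN plays the same role but interpolates between such signatures, which is what lets a featureless bulk sheet resolve arbitrary touch positions and pressures.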
Cost versus control: Understanding ownership through outsourcing in hospitals.
Dalton, Christina Marsh; Warren, Patrick L
2016-07-01
For-profit hospitals in California contract out services much more intensely than either private nonprofit or public hospitals. To explain why, we build a model in which the outsourcing decision is a trade-off between cost and control. Since nonprofit firms are more restricted in how they consume net revenues, they experience more rapidly diminishing value of a dollar saved, and they are less attracted to a low-cost but low-control outsourcing opportunity than a for-profit firm is. This difference is exaggerated in services where the benefits of controlling the details of production are particularly important but minimized when a fixed-cost shock raises the marginal value of a dollar of cost savings. We test these predictions in a panel of California hospitals, finding evidence for each, and show that the set of services that private nonprofits are particularly interested in controlling (physician-intensive services) is very different from the set that public hospitals are particularly interested in controlling (labor-intensive services). These results suggest that a model of public or nonprofit make-or-buy decisions should be more than a simple relabeling of a model derived in the for-profit context. Copyright © 2016 Elsevier B.V. All rights reserved.
Alkoshi, Salem; Maimaiti, Namaitijiang; Dahlui, Maznah
2014-01-01
Background Rotavirus infection is a major cause of childhood diarrhea in Libya. The objective of this study is to evaluate the cost-effectiveness of rotavirus vaccination in that country. Methods We used a published decision tree model that has been adapted to the Libyan situation to analyze a birth cohort of 160,000 children. The evaluation of diarrhea events in three public hospitals helped to estimate the rotavirus burden. The economic analysis was done from two perspectives: health care provider and societal. Univariate sensitivity analyses were conducted to assess uncertainty in some values of the variables selected. Results The three hospitals received 545 diarrhea patients aged ≤5 years, with 311 (57%) rotavirus-positive test results during a 9-month period. The societal cost for treatment of a case of rotavirus diarrhea was estimated at US$ 661/event. The incremental cost-effectiveness ratio with a vaccine price of US$ 27 per course was US$ 8,972 per quality-adjusted life year gained from the health care perspective. From a societal perspective, the analysis shows cost savings of around US$ 16 per child. Conclusion The model shows that rotavirus vaccination could be an economically very attractive intervention in Libya. PMID:25499622
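The headline figure is an incremental cost-effectiveness ratio (ICER): extra cost divided by extra health gain. A minimal sketch with invented numbers (not the study's inputs, which yielded US$ 8,972 per QALY):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Illustrative only: the new strategy adds US$ 2,700 in cost and 0.3 QALYs
# per child versus the comparator, giving roughly 9,000 US$/QALY.
example = icer(3000.0, 1.0, 300.0, 0.7)
```

A negative ICER with positive QALY gain (lower cost, better outcomes) is what the societal-perspective result above describes: the intervention is cost-saving.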
Partitioning the Metabolic Cost of Human Running: A Task-by-Task Approach
Arellano, Christopher J.; Kram, Rodger
2014-01-01
Compared with other species, humans can be very tractable and thus an ideal “model system” for investigating the metabolic cost of locomotion. Here, we review the biomechanical basis for the metabolic cost of running. Running has been historically modeled as a simple spring-mass system whereby the leg acts as a linear spring, storing, and returning elastic potential energy during stance. However, if running can be modeled as a simple spring-mass system with the underlying assumption of perfect elastic energy storage and return, why does running incur a metabolic cost at all? In 1980, Taylor et al. proposed the “cost of generating force” hypothesis, which was based on the idea that elastic structures allow the muscles to transform metabolic energy into force, and not necessarily mechanical work. In 1990, Kram and Taylor then provided a more explicit and quantitative explanation by demonstrating that the rate of metabolic energy consumption is proportional to body weight and inversely proportional to the time of foot-ground contact for a variety of animals ranging in size and running speed. With a focus on humans, Kram and his colleagues then adopted a task-by-task approach and initially found that the metabolic cost of running could be “individually” partitioned into body weight support (74%), propulsion (37%), and leg-swing (20%). Summing all these biomechanical tasks leads to a paradoxical overestimation of 131%. To further elucidate the possible interactions between these tasks, later studies quantified the reductions in metabolic cost in response to synergistic combinations of body weight support, aiding horizontal forces, and leg-swing-assist forces. This synergistic approach revealed that the interactive nature of body weight support and forward propulsion comprises ∼80% of the net metabolic cost of running. The task of leg-swing at most comprises ∼7% of the net metabolic cost of running and is independent of body weight support and forward propulsion. 
In our recent experiments, we have continued to refine this task-by-task approach, demonstrating that maintaining lateral balance comprises only 2% of the net metabolic cost of running. In contrast, arm-swing reduces the cost by ∼3%, indicating a net metabolic benefit. Thus, by considering the synergistic nature of body weight support and forward propulsion, as well as the tasks of leg-swing and lateral balance, we can account for 89% of the net metabolic cost of human running. PMID:24838747
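The closing tally can be reproduced directly from the percentages above:

```python
# Task-by-task shares of the net metabolic cost of running, using the
# synergistic estimates summarized in the review (percent of net cost).
tasks = {
    "body-weight support + forward propulsion": 80.0,
    "leg swing": 7.0,
    "lateral balance": 2.0,
}
# Arm swing is a net *benefit* (~3% reduction), so it is not an additive
# cost term and does not enter the sum.
accounted = sum(tasks.values())  # share of the net cost accounted for
```

This is why the synergistic approach accounts for 89% of the net cost, whereas naively summing the original independent estimates (74% + 37% + 20%) paradoxically overshoots to 131%.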
NASA Astrophysics Data System (ADS)
Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang; Li, Geng
2016-11-01
Travelers' route adjustment behaviors in a congested road traffic network are acknowledged to form a dynamic game process among travelers. Proportional-Switch Adjustment Process (PSAP) models, which have a concise structure and an intuitive behavioral rule, have been extensively investigated to characterize route choice behavior; unfortunately, most of them have limitations, such as the flow over-adjustment problem of the discrete PSAP model and the reliance on absolute cost differences for route adjustment. This paper proposes a relative-Proportion-based Route Adjustment Process (rePRAP) that maintains the advantages of PSAP while overcoming these limitations. The rePRAP describes the situation in which travelers on a higher-cost route switch to lower-cost alternatives at a rate that depends solely on the relative cost differences between the higher-cost route and its alternatives. It is verified to be consistent with the principle of the rational behavior adjustment process. The equivalence among user equilibrium (UE), the stationary path flow pattern and the stationary link flow pattern is established, which can be applied to judge whether a given network traffic flow has reached UE by detecting whether the link flow pattern is stationary. The stability theorem is proved by the Lyapunov function approach. A simple example is tested to demonstrate the effectiveness of the rePRAP model.
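A two-route toy network illustrates the relative-proportion switching rule (a minimal sketch; the linear link-cost functions and step size below are invented, and the real rePRAP is defined over general networks):

```python
def reprap_step(flows, costs, alpha=0.5):
    """One relative-proportion route-adjustment step on a two-route network.

    Travelers on the costlier route switch to the cheaper one at a rate
    driven by the *relative* cost difference (c_hi - c_lo) / c_hi, rather
    than the absolute difference used in classic PSAP models.
    """
    c = [costs[0](flows[0]), costs[1](flows[1])]
    hi = 0 if c[0] > c[1] else 1
    lo = 1 - hi
    if c[hi] == 0:
        return list(flows)
    switch = alpha * flows[hi] * (c[hi] - c[lo]) / c[hi]
    new = list(flows)
    new[hi] -= switch
    new[lo] += switch
    return new

# Linear link costs; total demand 10. The user-equilibrium split is (5, 5),
# where both routes cost 20.
costs = (lambda f: 10 + 2 * f, lambda f: 5 + 3 * f)
flows = [10.0, 0.0]
for _ in range(200):
    flows = reprap_step(flows, costs)
```

Because the switch rate vanishes as the relative cost gap closes, the iteration settles at the user-equilibrium flow pattern, which is the stationarity property the paper exploits.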
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geller, Drew Adam; Backhaus, Scott N.
Control of consumer electrical devices for providing electrical grid services is expanding in both the scope and the diversity of loads that are engaged in control, but there are few experimentally-based models of these devices suitable for control designs and for assessing the cost of control. A laboratory-scale test system is developed to experimentally evaluate the use of a simple window-mount air conditioner for electrical grid regulation services. The experimental test bed is a single, isolated air conditioner embedded in a test system that both emulates the thermodynamics of an air-conditioned room and also isolates the air conditioner from the real-world external environmental and human variables that perturb the careful measurements required to capture a model that fully characterizes both the control response functions and the cost of control. The control response functions and cost of control are measured using harmonic perturbation of the temperature set point and a test protocol that further isolates the air conditioner from low-frequency environmental variability.
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through addition of 25 kW solar dynamic (SD) power modules. The SD module rejects waste heat from the power conversion cycle to space through a pumped-loop, multi-panel, deployable radiator. The baseline radiator configuration was defined during the Space Station conceptual design phase and is a function of the state point and heat rejection requirements of the power conversion unit. Requirements determined by the overall station design such as mass, system redundancy, micrometeoroid and space debris impact survivability, launch packaging, costs, and thermal and structural interaction with other station components have also been design drivers for the radiator configuration. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations. A brief description and discussion of the numerical model, its capabilities and limitations, and results of the parametric studies performed are presented.
McLeod, Melissa; Blakely, Tony; Kvizhinadze, Giorgi; Harris, Ricci
2014-01-01
A critical first step toward incorporating equity into cost-effectiveness analyses is to appropriately model interventions by population subgroups. In this paper we use a standardized treatment intervention to examine the impact of using ethnic-specific (Māori and non-Māori) data in cost-utility analyses for three cancers. We estimate gains in health-adjusted life years (HALYs) for a simple intervention (20% reduction in excess cancer mortality) for lung, female breast, and colon cancers, using Markov modeling. Base models include ethnic-specific cancer incidence with other parameters either turned off or set to non-Māori levels for both groups. Subsequent models add ethnic-specific cancer survival, morbidity, and life expectancy. Costs include intervention and downstream health system costs. For the three cancers, including existing inequalities in background parameters (population mortality and comorbidities) for Māori attributes less value to a year of life saved compared to non-Māori and lowers the relative health gains for Māori. In contrast, ethnic inequalities in cancer parameters have less predictable effects. Despite Māori having higher excess mortality from all three cancers, modeled health gains for Māori were less from the lung cancer intervention than for non-Māori but higher for the breast and colon interventions. Cost-effectiveness modeling is a useful tool in the prioritization of health services. But there are important (and sometimes counterintuitive) implications of including ethnic-specific background and disease parameters. In order to avoid perpetuating existing ethnic inequalities in health, such analyses should be undertaken with care.
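A stripped-down survival calculation illustrates the key interaction (hypothetical mortality rates, far simpler than the study's Markov models and not its Māori/non-Māori parameters): the same 20% cut in excess cancer mortality yields smaller modeled life-year gains when background mortality is higher, which is why including existing inequalities in background parameters attributes less value to the intervention for the disadvantaged group.

```python
def life_years(excess_mort, background_mort, horizon=40):
    """Discrete-time survival for a cancer cohort: annual probability of
    surviving is (1 - background mortality) * (1 - cancer excess mortality).
    Returns expected undiscounted, unweighted life-years over the horizon."""
    alive, total = 1.0, 0.0
    for _ in range(horizon):
        alive *= (1.0 - background_mort) * (1.0 - excess_mort)
        total += alive
    return total

# Same intervention (excess mortality 0.10 -> 0.08) under two hypothetical
# background-mortality levels (e.g., fewer vs. more comorbidities).
gain_low_bg  = life_years(0.08, 0.01) - life_years(0.10, 0.01)
gain_high_bg = life_years(0.08, 0.03) - life_years(0.10, 0.03)
```

Here `gain_high_bg < gain_low_bg`: competing background mortality truncates the life-years an averted cancer death can return.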
Optimum profit model considering production, quality and sale problem
NASA Astrophysics Data System (ADS)
Chen, Chung-Ho; Lu, Chih-Lun
2011-12-01
Chen and Liu ['Procurement Strategies in the Presence of the Spot Market-an Analytical Framework', Production Planning and Control, 18, 297-309] presented the optimum profit model between the producers and the purchasers for a supply chain system with a pure procurement policy. However, their model, with a simple manufacturing cost, did not consider the cost incurred by the customer during use. In this study, the modified Chen and Liu model is addressed for determining the optimum product and process parameters. The authors propose a modified Chen and Liu model under a two-stage screening procedure. A surrogate variable having a high correlation with the measurable quality characteristic will be directly measured in the first stage. The measurable quality characteristic will be directly measured in the second stage when the product decision cannot be determined in the first stage. The customer's usage cost will be measured by adopting Taguchi's quadratic quality loss function. The optimum purchaser's order quantity, the producer's product price and the process quality level will be jointly determined by maximising the expected profit between them.
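Taguchi's quadratic loss, used here for the customer's usage cost, is simple to state: cost grows with the squared deviation of the quality characteristic from its target. A minimal sketch (the calibration constant below is illustrative, not from the paper):

```python
def taguchi_loss(y, target, k):
    """Taguchi's quadratic quality loss: the cost to the customer of a unit
    whose quality characteristic y deviates from its target value."""
    return k * (y - target) ** 2

# Calibrate k so that a deviation of 2 units costs $8 (illustrative).
k = 8.0 / 2 ** 2
loss_on_target = taguchi_loss(10.0, 10.0, k)  # no loss exactly on target
loss_off = taguchi_loss(11.0, 10.0, k)
```

The quadratic form penalizes any deviation from target, not just out-of-specification units, which is what lets the modified model trade process quality level against expected profit.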
Artificial neural networks using complex numbers and phase encoded weights.
Michel, Howard E; Awwal, Abdul Ahad S
2010-04-01
The model of a simple perceptron using phase-encoded inputs and complex-valued weights is proposed. The aggregation function, activation function, and learning rule for the proposed neuron are derived and applied to Boolean logic functions and simple computer vision tasks. The complex-valued neuron (CVN) is shown to be superior to traditional perceptrons. An improvement of 135% over the theoretical maximum of 104 linearly separable problems (of three variables) solvable by conventional perceptrons is achieved without additional logic, neuron stages, or higher order terms such as those required in polynomial logic gates. The application of CVN in distortion invariant character recognition and image segmentation is demonstrated. Implementation details are discussed, and the CVN is shown to be very attractive for optical implementation since optical computations are naturally complex. The cost of the CVN is less in all cases than the traditional neuron when implemented optically. Therefore, all the benefits of the CVN can be obtained without additional cost. However, on those implementations dependent on standard serial computers, CVN will be more cost effective only in those applications where its increased power can offset the requirement for additional neurons.
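A single complex-valued neuron with phase-encoded inputs can compute XOR, which no real-weighted perceptron can. A minimal sketch (the hand-picked weights and the phase-sector activation are illustrative choices in the spirit of the complex-valued neuron literature, not the paper's derived learning rule):

```python
import cmath
import math

def cvn(x1, x2, w=(0j, 1 + 0j, 1j)):
    """Single complex-valued neuron with phase-encoded Boolean inputs.

    Inputs 0/1 are encoded as phases 0 and pi (+1 and -1 on the unit
    circle); aggregation is an ordinary weighted sum with complex weights;
    the activation fires according to the phase sector of the net input.
    """
    s1 = cmath.exp(1j * math.pi * x1)
    s2 = cmath.exp(1j * math.pi * x2)
    z = w[0] + w[1] * s1 + w[2] * s2
    theta = cmath.phase(z)
    return 1 if math.sin(2 * theta) < 0 else 0

xor_truth_table = [cvn(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

With these weights the four input patterns land at net-input phases ±45° and ±135°, and the sector-based activation separates them into the XOR classes — one neuron, no hidden layer or higher-order terms.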
Use of 3D Printing for Custom Wind Tunnel Fabrication
NASA Astrophysics Data System (ADS)
Gagorik, Paul; Bates, Zachary; Issakhanian, Emin
2016-11-01
Small-scale wind tunnels for the most part are fairly simple to produce with standard building equipment. However, the intricate bell housing and inlet shape of an Eiffel type wind tunnel, as well as the transition from diffuser to fan in a rectangular tunnel can present design and construction obstacles. With the help of 3D printing, these shapes can be custom designed in CAD models and printed in the lab at very low cost. The undergraduate team at Loyola Marymount University has built a custom benchtop tunnel for gas turbine film cooling experiments. 3D printing is combined with conventional construction methods to build the tunnel. 3D printing is also used to build the custom tunnel floor and interchangeable experimental pieces for various experimental shapes. This simple and low-cost tunnel is a custom solution for specific engineering experiments for gas turbine technology research.
Optically transparent frequency selective surfaces on flexible thin plastic substrates
NASA Astrophysics Data System (ADS)
Dewani, Aliya A.; O'Keefe, Steven G.; Thiel, David V.; Galehdar, Amir
2015-02-01
A novel 2D simple low cost frequency selective surface was screen printed on thin (0.21 mm), flexible transparent plastic substrate (relative permittivity 3.2). It was designed, fabricated and tested in the frequency range 10-20 GHz. The plane wave transmission and reflection coefficients agreed with numerical modelling. The effective permittivity and thickness of the backing sheet has a significant effect on the frequency characteristics. The stop band frequency reduced from 15 GHz (no backing) to 12.5 GHz with polycarbonate. The plastic substrate thickness beyond 1.8 mm has minimal effect on the resonant frequency. While the inner element spacing controls the stop-band frequency, the substrate thickness controls the bandwidth. The screen printing technique provided a simple, low cost FSS fabrication method to produce flexible, conformal, optically transparent and bio-degradable FSS structures which can find their use in electromagnetic shielding and filtering applications in radomes, reflector antennas, beam splitters and polarizers.
ERIC Educational Resources Information Center
Pe´rez, Eduardo
2015-01-01
The procedure of a physical chemistry experiment for university students must be designed in a way that the accuracy and precision of the measurements are properly maintained. However, in many cases, that requires costly and sophisticated equipment not readily available in developing countries. A simple, low-cost experiment to determine isobaric…
Choisy, Marc; de Roode, Jacobus C
2014-08-01
Animal medication against parasites can occur either as a genetically fixed (constitutive) or phenotypically plastic (induced) behavior. Taking the tritrophic interaction between the monarch butterfly Danaus plexippus, its protozoan parasite Ophryocystis elektroscirrha, and its food plant Asclepias spp. as a test case, we develop a game-theory model to identify the epidemiological (parasite prevalence and virulence) and environmental (plant toxicity and abundance) conditions that predict the evolution of genetically fixed versus phenotypically plastic forms of medication. Our model shows that the relative benefits (the antiparasitic properties of medicinal food) and costs (side effects of medicine, the costs of searching for medicine, and the costs of plasticity itself) crucially determine whether medication is genetically fixed or phenotypically plastic. Our model suggests that animals evolve phenotypic plasticity when parasite risk (a combination of virulence and prevalence and thus a measure of the strength of parasite-mediated selection) is relatively low to moderately high and genetically fixed medication when parasite risk becomes very high. The latter occurs because at high parasite risk, the costs of plasticity are outweighed by the benefits of medication. Our model provides a simple and general framework to study the conditions that drive the evolution of alternative forms of animal medication.
Indirect reciprocity can overcome free-rider problems on costly moral assessment.
Sasaki, Tatsuya; Okada, Isamu; Nakai, Yutaka
2016-07-01
Indirect reciprocity is one of the major mechanisms of the evolution of cooperation. Because constant monitoring and accurate evaluation in moral assessments tend to be costly, indirect reciprocity can be exploited by cost evaders. A recent study crucially showed that a cooperative state achieved by indirect reciprocators is easily destabilized by cost evaders in the case with no supportive mechanism. Here, we present a simple and widely applicable solution that considers pre-assessment of cost evaders. In the pre-assessment, those who fail to pay for costly assessment systems are assigned a nasty image that leads to them being rejected by discriminators. We demonstrate that considering the pre-assessment can crucially stabilize reciprocal cooperation for a broad range of indirect reciprocity models. In particular, for the leading social norms, we analyse the conditions under which a prosocial state becomes locally stable. © 2016 The Authors.
Upstream solutions to coral reef conservation: The payoffs of smart and cooperative decision-making.
Oleson, Kirsten L L; Falinski, Kim A; Lecky, Joey; Rowe, Clara; Kappel, Carrie V; Selkoe, Kimberly A; White, Crow
2017-04-15
Land-based source pollutants (LBSP) actively threaten coral reef ecosystems globally. To achieve the greatest conservation outcome at the lowest cost, managers could benefit from appropriate tools that evaluate the benefits (in terms of LBSP reduction) and costs of implementing alternative land management strategies. Here we use a spatially explicit predictive model (InVEST-SDR) that quantifies change in sediment reaching the coast for evaluating the costs and benefits of alternative threat-abatement scenarios. We specifically use the model to examine trade-offs among possible agricultural road repair management actions (water bars to divert runoff and gravel to protect the road surface) across the landscape in West Maui, Hawaii, USA. We investigated changes in sediment delivery to coasts and costs incurred from management decision-making that is (1) cooperative or independent among landowners, and focused on (2) minimizing costs, reducing sediment, or both. The results illuminate which management scenarios most effectively minimize sediment while also minimizing the cost of mitigation efforts. We find targeting specific "hotspots" within all individual parcels is more cost-effective than targeting all road segments. The best outcomes are achieved when landowners cooperate and target cost-effective road repairs, however, a cooperative strategy can be counter-productive in some instances when cost-effectiveness is ignored. Simple models, such as the one developed here, have the potential to help managers make better choices about how to use limited resources. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
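The cost-effective targeting described above amounts to ranking candidate repairs by sediment reduction per dollar and funding them until the budget runs out. A greedy sketch with invented per-segment values (the paper's InVEST-SDR model supplies the real estimates):

```python
def pick_repairs(segments, budget):
    """Greedy hotspot targeting: fund road-segment repairs in ascending
    order of cost per ton of sediment avoided until the budget is spent.

    segments: (name, sediment_reduction_tons, cost_usd) tuples.
    Returns the chosen segment names and total sediment avoided.
    """
    chosen, spent, saved = [], 0.0, 0.0
    for name, tons, cost in sorted(segments, key=lambda s: s[2] / s[1]):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
            saved += tons
    return chosen, saved

# Hypothetical segments: two high-yield "hotspots" and two ordinary roads.
segments = [("hotspot-1", 40.0, 8_000), ("road-A", 10.0, 9_000),
            ("hotspot-2", 25.0, 6_000), ("road-B", 5.0, 7_000)]
chosen, saved = pick_repairs(segments, budget=15_000)
```

Under the same budget, spreading repairs evenly across all segments would avoid far less sediment — the paper's point about cooperation being counter-productive when cost-effectiveness is ignored.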
Mehra, Tarun; Koljonen, Virve; Seifert, Burkhardt; Volbracht, Jörk; Giovanoli, Pietro; Plock, Jan; Moos, Rudolf Maria
2015-01-01
Reimbursement systems have difficulties depicting the actual cost of burn treatment, leaving care providers with a significant financial burden. Our aim was to establish a simple and accurate reimbursement model compatible with prospective payment systems. A total of 370 966 electronic medical records of patients discharged in 2012 to 2013 from Swiss university hospitals were reviewed. A total of 828 cases of burns including 109 cases of severe burns were retained. Costs, revenues and earnings for severe and nonsevere burns were analysed and a linear regression model predicting total inpatient treatment costs was established. The median total costs per case for severe burns was tenfold higher than for nonsevere burns (179 949 CHF [167 353 EUR] vs 11 312 CHF [10 520 EUR], interquartile ranges 96 782-328 618 CHF vs 4 874-27 783 CHF, p <0.001). The median earnings per case for nonsevere burns were 588 CHF (547 EUR) (interquartile range -6 720 - 5 354 CHF), whereas severe burns incurred a large financial loss to care providers, with median earnings of -33 178 CHF (-30 856 EUR) (interquartile range -95 533 - 23 662 CHF). Differences were highly significant (p <0.001). Our linear regression model predicting total costs per case with length of stay (LOS) as independent variable had an adjusted R2 of 0.67 (p <0.001 for LOS). Severe burns are systematically underfunded within the Swiss reimbursement system. Flat-rate DRG-based refunds poorly reflect the actual treatment costs. In conclusion, we suggest a reimbursement model based on a per diem rate for treatment of severe burns.
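The proposed reimbursement model is a linear regression of total cost on length of stay. A minimal ordinary-least-squares sketch on hypothetical data (not the Swiss hospital records):

```python
def fit_line(xs, ys):
    """Ordinary least squares for cost = a + b * length_of_stay."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical cases: LOS in days vs. total inpatient cost in CHF.
los = [3, 7, 14, 30, 60]
cost = [9_000, 18_000, 33_000, 70_000, 140_000]
a, b = fit_line(los, cost)
predicted = a + b * 20  # predicted cost of a 20-day stay
```

A per-diem rate for severe burns corresponds to reimbursing along this fitted line (slope `b` per day) rather than at a flat DRG rate.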
On the optimal sizing of batteries for electric vehicles and the influence of fast charge
NASA Astrophysics Data System (ADS)
Verbrugge, Mark W.; Wampler, Charles W.
2018-04-01
We provide a brief summary of advanced battery technologies and a framework (i.e., a simple model) for assessing electric-vehicle (EV) architectures and associated costs to the customer. The end result is a quantitative model that can be used to calculate the optimal EV range (which maps back to the battery size and performance), including the influence of fast charge. We are seeing two technological pathways emerging: fast-charge-capable batteries versus batteries with much higher energy densities (and specific energies) but without the capability to fast charge. How do we compare and contrast the two alternatives? This work seeks to shed light on the question. We consider costs associated with the cells, added mass due to the use of larger batteries, and charging, three factors common in such analyses. In addition, we consider a new cost input, namely, the cost of adaption, corresponding to the days a customer would need an alternative form of transportation, as the EV would not have sufficient range on those days.
NASA Astrophysics Data System (ADS)
Laramie, Sydney M.; Milshtein, Jarrod D.; Breault, Tanya M.; Brushett, Fikile R.; Thompson, Levi T.
2016-09-01
Non-aqueous redox flow batteries (NAqRFBs) have recently received considerable attention as promising high energy density, low cost grid-level energy storage technologies. Despite these attractive features, NAqRFBs are still at an early stage of development and innovative design techniques are necessary to improve performance and decrease costs. In this work, we investigate multi-electron transfer, common ion exchange NAqRFBs. Common ion systems decrease the supporting electrolyte requirement, which subsequently improves active material solubility and decreases electrolyte cost. Voltammetric and electrolytic techniques are used to study the electrochemical performance and chemical compatibility of model redox active materials, iron(II) tris(2,2′-bipyridine) tetrafluoroborate (Fe(bpy)3(BF4)2) and ferrocenylmethyl dimethyl ethyl ammonium tetrafluoroborate (Fc1N112-BF4). These results help disentangle complex cycling behavior observed in flow cell experiments. Further, a simple techno-economic model demonstrates the cost benefits of employing common ion exchange NAqRFBs, afforded by decreasing the salt and solvent contributions to total chemical cost. This study highlights two new concepts, common ion exchange and multi-electron transfer, for NAqRFBs through a demonstration flow cell employing model active species. In addition, the compatibility analysis developed for asymmetric chemistries can apply to other promising species, including organics, metal coordination complexes (MCCs) and mixed MCC/organic systems, enabling the design of low cost NAqRFBs.
Cost-effectiveness on a local level: whether and when to adopt a new technology.
Woertman, Willem H; Van De Wetering, Gijs; Adang, Eddy M M
2014-04-01
Cost-effectiveness analysis has become a widely accepted tool for decision making in health care. The standard textbook cost-effectiveness analysis focuses on whether to make the switch from an old or common practice technology to an innovative technology, and in doing so, it takes a global perspective. In this article, we are interested in a local perspective, and we look at the questions of whether and when the switch from old to new should be made. A new approach to cost-effectiveness from a local (e.g., a hospital) perspective, by means of a mathematical model for cost-effectiveness that explicitly incorporates time, is proposed. A decision rule is derived for establishing whether a new technology should be adopted, as well as a general rule for establishing when it pays to postpone adoption by 1 more period, and a set of decision rules that can be used to determine the optimal timing of adoption. Finally, a simple example is presented to illustrate our model and how it leads to optimal decision making in a number of cases.
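The "whether and when" question above can be sketched as a simple discounted comparison: adopt in the period where the net present value of adoption peaks, and postpone as long as waiting one more period raises it. The constant benefit and declining switching cost below are my own illustrative assumptions, not the authors' decision rules.

```python
# Toy adoption-timing sketch: NPV (valued at time 0) of adopting the
# new technology at period `adopt_at`, given a per-period benefit
# stream and a time-varying one-off switching cost.
def npv_adopt(adopt_at, horizon, benefit, switch_cost, r):
    disc = lambda t: 1.0 / (1.0 + r) ** t
    gains = sum(benefit(t) * disc(t) for t in range(adopt_at, horizon))
    return gains - switch_cost(adopt_at) * disc(adopt_at)

benefit = lambda t: 100.0             # hypothetical net benefit per period
cost = lambda t: 900.0 * 0.9 ** t     # switching cost falls 10% per period
values = [npv_adopt(t, 20, benefit, cost, 0.05) for t in range(10)]
best = max(range(10), key=lambda t: values[t])   # optimal adoption period
```

Here immediate adoption passes the usual cost-effectiveness test, yet waiting is still better for a few periods because the switching cost is falling faster than benefits are being forgone; this is the local-perspective timing effect the article formalizes.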
Shillcutt, Samuel D; LeFevre, Amnesty E; Fischer-Walker, Christa L; Taneja, Sunita; Black, Robert E; Mazumder, Sarmila
2017-01-01
This study evaluates the cost-effectiveness of the DAZT program for scaling up treatment of acute child diarrhea in Gujarat, India, using a net-benefit regression framework. Costs were calculated from societal and caregivers' perspectives, and effectiveness was assessed in terms of coverage of zinc and of both zinc and Oral Rehydration Salts (ORS). Regression models were tested as simple linear regression, with a specified set of covariates, and with a specified set of covariates and interaction terms; linear regression with endogenous treatment effects was used as the reference case. The DAZT program was cost-effective with over 95% certainty above $5.50 and $7.50 per appropriately treated child in the unadjusted and adjusted models respectively, with specifications including interaction terms being cost-effective with 85-97% certainty. Findings from this study should be combined with other evidence when considering decisions to scale up programs such as the DAZT program to promote the use of ORS and zinc to treat child diarrhea.
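In its simplest unadjusted form, net-benefit regression converts each child's outcome and cost into a net benefit NB_i = λ·E_i − C_i for a willingness-to-pay λ, then estimates the treated-vs-control difference. The sketch below shows only that unadjusted form with invented data; the study's analysis additionally used covariates, interaction terms and endogenous treatment effects.

```python
# Unadjusted net-benefit contrast: with a single binary regressor, the
# OLS slope equals the difference in group means of NB_i.
def incremental_net_benefit(lam, treated, effect, cost):
    nb = [lam * e - c for e, c in zip(effect, cost)]
    t_arm = [v for v, z in zip(nb, treated) if z]
    c_arm = [v for v, z in zip(nb, treated) if not z]
    return sum(t_arm) / len(t_arm) - sum(c_arm) / len(c_arm)

# Hypothetical children: treatment raises coverage but costs more.
treated = [0, 0, 0, 0, 1, 1, 1, 1]
covered = [0, 0, 1, 0, 1, 1, 0, 1]   # appropriately treated (1 = yes)
cost = [2, 3, 2, 3, 5, 6, 5, 6]      # USD per child, hypothetical
inb_low = incremental_net_benefit(5.0, treated, covered, cost)
inb_high = incremental_net_benefit(10.0, treated, covered, cost)
```

The program in this toy example becomes cost-effective once λ crosses the incremental cost-effectiveness ratio (here $3.00 / 0.5 = $6 per appropriately treated child), mirroring how the study's thresholds of $5.50 and $7.50 arise.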
Kruger, Jen; Pollard, Daniel; Basarir, Hasan; Thokala, Praveen; Cooke, Debbie; Clark, Marie; Bond, Rod; Heller, Simon; Brennan, Alan
2015-10-01
Health economic modeling has paid limited attention to the effects that patients' psychological characteristics have on the effectiveness of treatments. This case study tests 1) the feasibility of incorporating psychological prediction models of treatment response within an economic model of type 1 diabetes, 2) the potential value of providing treatment to a subgroup of patients, and 3) the cost-effectiveness of providing treatment to a subgroup of responders defined using 5 different algorithms. Multiple linear regressions were used to investigate relationships between patients' psychological characteristics and treatment effectiveness. Two psychological prediction models were integrated with a patient-level simulation model of type 1 diabetes. Expected value of individualized care analysis was undertaken. Five different algorithms were used to provide treatment to a subgroup of predicted responders. A cost-effectiveness analysis compared using the algorithms to providing treatment to all patients. The psychological prediction models had low predictive power for treatment effectiveness. Expected value of individualized care results suggested that targeting education at responders could be of value. The cost-effectiveness analysis suggested, for all 5 algorithms, that providing structured education to a subgroup of predicted responders would not be cost-effective. The psychological prediction models tested did not have sufficient predictive power to make targeting treatment cost-effective. The psychological prediction models are simple linear models of psychological behavior. Collection of data on additional covariates could potentially increase statistical power. By collecting data on psychological variables before an intervention, we can construct predictive models of treatment response to interventions. These predictive models can be incorporated into health economic models to investigate more complex service delivery and reimbursement strategies.
© The Author(s) 2015.
The effect of exchange rates on southern pine exports
H.W. Wisdom; James E. Granskog
2003-01-01
Changes in exchange rates affect southern pine exports by changing the cost of southern wood in foreign markets. A strong dollar discourages exports; a weak dollar encourages exports. A simple economic export market model is developed to determine whether changes in the exchange rates in foreign markets for southern pine products have, in fact, led to significant...
ERIC Educational Resources Information Center
Neuberger, Hans; Nicholas, George
Included in this manual, written for secondary school and college teachers, are descriptions of demonstration models, experiments pertaining to some of the fundamental and applied meteorological concepts, and instructions for making simple weather observations. The criteria for selection of topics were ease and cost of constructing apparatus as well…
Petrinco, Michele; Pagano, Eva; Desideri, Alessandro; Bigi, Riccardo; Ghidina, Marco; Ferrando, Alberto; Cortigiani, Lauro; Merletti, Franco; Gregori, Dario
2009-01-01
Several methodological problems arise when health outcomes and resource utilization are collected at different sites. To avoid misleading conclusions in multi-center economic evaluations, the center effect needs to be taken into adequate consideration. The aim of this article is to compare several models, which make use of different amounts of information about the enrolling center. To model the association of total medical costs with the levels of two sets of covariates, one at patient and one at center level, we considered four statistical models, based on the Gamma model in the class of the Generalized Linear Models with a log link, which use different amounts of information on the enrolling centers. Models were applied to Cost of Strategies after Myocardial Infarction data, an international randomized trial on costs of uncomplicated acute myocardial infarction (AMI). The simple center effect adjustment based on a single random effect results in a more conservative estimation of the parameters as compared with approaches which make use of deeper information on the centers' characteristics. This study shows, with reference to a real multicenter trial, that center information cannot be neglected and should be collected and included in the analysis, preferably in combination with one or more random effects, thereby also taking into account the heterogeneity among centers due to unobserved center characteristics.
Two-part payments for the reimbursement of investments in health technologies.
Levaggi, Rosella; Moretto, Michele; Pertile, Paolo
2014-04-01
The paper studies the impact of alternative reimbursement systems on two provider decisions: whether to adopt a technology whose provision requires a sunk investment cost and how many patients to treat with it. Using a simple economic model we show that the optimal pricing policy involves a two-part payment: a price equal to the marginal cost of the patient whose benefit of treatment equals the cost of provision, and a separate payment for the partial reimbursement of capital costs. Departures from this scheme, which are frequent in DRG tariff systems designed around the world, lead to a trade-off between the objective of making effective technologies available to patients and the need to ensure appropriateness in use. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.
2018-03-01
Methods to estimate the strain-life curve, divided into three categories (simple approximations, artificial neural network-based approaches and continuum damage mechanics models), were examined, and their accuracy was assessed in strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. Simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed inconsistency in estimation of the fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting early stages of crack initiation. This model requires more experimental data for calibration than approaches using simple approximations. As a result of the different theories underlying the analyzed methods, the different approaches have different strengths and weaknesses. However, it was found that the group of parametric equations categorized as simple approximations are the easiest for practical use, their applicability having already been verified for a broad range of materials.
Analysis of multigrid methods on massively parallel computers: Architectural implications
NASA Technical Reports Server (NTRS)
Matheson, Lesley R.; Tarjan, Robert E.
1993-01-01
We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests that an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
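The effect of a high communication initiation cost can be seen in the standard latency-bandwidth cost model: sending W words split into messages of at most m words costs ceil(W/m)·α + W·β. The parameter values below are illustrative assumptions, not the paper's measured machine parameters.

```python
# Latency-bandwidth sketch: alpha is the fixed per-message initiation
# cost, beta the per-word transfer cost.
import math

def transfer_cost(words, alpha, beta, max_message_words):
    messages = math.ceil(words / max_message_words)
    return messages * alpha + words * beta

# With a high initiation cost, one long message beats many short ones.
alpha, beta, words = 1000.0, 1.0, 1000
long_msgs = transfer_cost(words, alpha, beta, 1000)   # 1 message
short_msgs = transfer_cost(words, alpha, beta, 100)   # 10 messages
```

This is why the analysis concludes that efficient transmission of long messages (or a much lower initiation cost) is needed: the fixed cost term dominates when transfers are fragmented.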
Pressure ulcers management: an economic evaluation.
Foglia, E; Restelli, U; Napoletano, A M; Coclite, D; Porazzi, E; Bonfanti, M; Croce, D
2012-03-01
Pressure ulcer management represents a growing problem for medical and social health care systems all over the world, particularly in European Union countries where the incidence of pressure ulcers in older persons (> 60 years of age) is predicted to rise. The aim of this study was to provide evidence for the lower impact on economic resources of using advanced dressings for the treatment of pressure ulcers with respect to conventional simple dressings. Two different models of analysis, derived from Activity Based Costing and Health Technology Assessment, were used to measure, over a 30-day period, the direct costs incurred by pressure ulcer treatment for community-residing patients receiving integrated home care. Although the mean cost per home care visit was higher in the advanced dressings patient group than in the simple dressings one (€22.31 versus €16.03), analysis of the data revealed that the cost of using advanced dressings was lower due to fewer home care visits (11 versus 22). The results underline the fact that decision-makers need to improve their understanding of the advantages of taking a long-term view with regards to the purchase and use of materials. This could produce considerable savings of resources in addition to improving treatment efficacy for the benefit of patients and the health care system.
NASA Astrophysics Data System (ADS)
Starosolski, Zbigniew; Ezon, David S.; Krishnamurthy, Rajesh; Dodd, Nicholas; Heinle, Jeffrey; Mckenzie, Dean E.; Annapragada, Ananth
2017-03-01
We developed a technology that allows a simple desktop 3D printer with a dual extruder to fabricate flexible 3D models of major aortopulmonary collateral arteries (MAPCAs). The study was designed to assess whether the flexible 3D printed models could help during the surgical planning phase. Simple FDM 3D printers are inexpensive, versatile and easy to maintain, but complications arise when the designed model is complex and has tubular structures less than 2 mm in diameter. The advantages of FDM printers are cost and simplicity of use. We used precisely selected materials to overcome the obstacles listed above. A dual extruder allows two different materials to be used while printing, which is especially important in the case of fragile structures like pulmonary vessels and their supporting structures; the latter should not be removed by hand, to avoid truncation of the model. We utilized water-soluble PVA for the supporting structures and Poro-Lay filament for the flexible model of the aortopulmonary collateral arteries. Poro-Lay differs from other flexible, polymer-based filaments: it is rigid while printing, which allows structures small in diameter to be printed, and it becomes flexible, soft to the touch and gelatinous after the printed model is washed with water. Using both PVA and Poro-Lay gives a major advantage: the supporting structures are washed out and flexibility is achieved in a single washing operation, saving time and avoiding the human error involved in cleaning the model by hand. We evaluated six models in a MAPCA surgical planning study. This approach is also cost-effective: the average cost of materials per print is less than $15, and models are printed in the facility without any delays. The flexibility of the 3D printed models approximates soft tissues properly, mimicking aortopulmonary collateral arteries. The models also have educational value for both residents and patients' families.
The simplified flexible 3D printing process could also help with models of other soft tissue pathologies such as aneurysms, ventricular septal defects and other vascular anomalies.
A Nuclear Waste Management Cost Model for Policy Analysis
NASA Astrophysics Data System (ADS)
Barron, R. W.; Hill, M. C.
2017-12-01
Although integrated assessments of climate change policy have frequently identified nuclear energy as a promising alternative to fossil fuels, these studies have often treated nuclear waste disposal very simply. Simple assumptions about nuclear waste are problematic because they may not be adequate to capture relevant costs and uncertainties, which could result in suboptimal policy choices. Modeling nuclear waste management costs is a cross-disciplinary, multi-scale problem that involves economic, geologic and environmental processes that operate at vastly different temporal scales. Similarly, the climate-related costs and benefits of nuclear energy are dependent on environmental sensitivity to CO2 emissions and radiation, nuclear energy's ability to offset carbon emissions, and the risk of nuclear accidents, factors which are all deeply uncertain. Alternative value systems further complicate the problem by suggesting different approaches to valuing intergenerational impacts. Effective policy assessment of nuclear energy requires an integrated approach to modeling nuclear waste management that (1) bridges disciplinary and temporal gaps, (2) supports an iterative, adaptive process that responds to evolving understandings of uncertainties, and (3) supports a broad range of value systems. This work develops the Nuclear Waste Management Cost Model (NWMCM). NWMCM provides a flexible framework for evaluating the cost of nuclear waste management across a range of technology pathways and value systems. We illustrate how NWMCM can support policy analysis by estimating how different nuclear waste disposal scenarios developed using the NWMCM framework affect the results of a recent integrated assessment study of alternative energy futures and their effects on the cost of achieving carbon abatement targets. 
Results suggest that the optimism reflected in previous works is fragile: Plausible nuclear waste management costs and discount rates appropriate for intergenerational cost-benefit analysis produce many scenarios where nuclear energy is economically unattractive.
Development of a Training Model for Laparoscopic Common Bile Duct Exploration
Rodríguez, Omaira; Benítez, Gustavo; Sánchez, Renata; De la Fuente, Liliana
2010-01-01
Background: Training and experience of the surgical team are fundamental for the safety and success of complex surgical procedures, such as laparoscopic common bile duct exploration. Methods: We describe an inert, simple, very low-cost, and readily available training model. Created using a “black box” and basic medical and surgical material, it allows training in the fundamental steps necessary for laparoscopic biliary tract surgery, namely, (1) intraoperative cholangiography, (2) transcystic exploration, and (3) laparoscopic choledochotomy, and t-tube insertion. Results: The proposed model has allowed for the development of the skills necessary for partaking in said procedures, contributing to its development and diminishing surgery time as the trainee advances down the learning curve. Further studies are directed towards objectively determining the impact of the model on skill acquisition. Conclusion: The described model is simple and readily available allowing for accurate reproduction of the main steps and maneuvers that take place during laparoscopic common bile duct exploration, with the purpose of reducing failure and complications. PMID:20529526
Active disturbance rejection controller for chemical reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Both, Roxana; Dulf, Eva H.; Muresan, Cristina I., E-mail: roxana.both@aut.utcluj.ro
2015-03-10
In the petrochemical industry, the synthesis of the oxo-alcohol 2-ethylhexanol (a plasticizer alcohol) is of high importance; it is achieved through hydrogenation of 2-ethylhexenal inside catalytic trickle-bed three-phase reactors. For these types of processes the use of advanced control strategies is suitable due to their nonlinear behavior and extreme sensitivity to load changes and other disturbances. Due to the complexity of the mathematical model, one approach was to use a simple linear model of the process in combination with an advanced control algorithm, such as robust control, which takes into account the model uncertainties, the disturbances and command signal limitations. However, the resulting controller is complex, requiring costly hardware. This paper proposes a simple integer-order control scheme using a linear model of the process, based on the active disturbance rejection method. By treating the model dynamics as a common disturbance and actively rejecting it, active disturbance rejection control (ADRC) can achieve the desired response. Simulation results are provided to demonstrate the effectiveness of the proposed method.
Optimal ordering and production policy for a recoverable item inventory system with learning effect
NASA Astrophysics Data System (ADS)
Tsai, Deng-Maw
2012-02-01
This article presents two models for determining an optimal integrated economic order quantity and economic production quantity policy in a recoverable manufacturing environment. The models assume that the unit production time of the recovery process decreases with the increase in total units produced as a result of learning. A fixed proportion of used products are collected from customers and then recovered for reuse. The recovered products are assumed to be in good condition and acceptable to customers. Constant demand can be satisfied by utilising both newly purchased products and recovered products. The aim of this article is to show how to minimise total inventory-related cost. The total cost functions of the two models are derived and two simple search procedures are proposed to determine optimal policy parameters. Numerical examples are provided to illustrate the proposed models. In addition, sensitivity analyses have also been performed and are discussed.
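As background to the recoverable-item models above, the classical economic order quantity balances ordering cost against holding cost; the article's models extend this kind of cost function with recovery and learning effects. The sketch below shows only the textbook baseline plus a simple search of the sort the article employs for its richer cost functions; the parameter values are illustrative.

```python
# Textbook EOQ baseline (no learning effect, no recovery stream):
# Q* = sqrt(2 * D * S / H) minimizes ordering plus holding cost.
import math

def eoq(demand, order_cost, holding_cost):
    return math.sqrt(2 * demand * order_cost / holding_cost)

def total_cost(q, demand, order_cost, holding_cost):
    # per-period ordering cost plus average holding cost
    return demand / q * order_cost + q / 2 * holding_cost

# A simple search over integer order quantities, mirroring the
# article's use of simple search procedures for its two models.
demand, s, h = 1200, 50.0, 2.0      # units/yr, $/order, $/unit/yr
q_star = eoq(demand, s, h)
best_q = min(range(1, 1001), key=lambda q: total_cost(q, demand, s, h))
```

The integer search lands on the nearest whole quantity to the closed-form optimum, which is why simple search procedures suffice once the total cost function is convex.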
Review of early assessment models of innovative medical technologies.
Fasterholdt, Iben; Krahn, Murray; Kidholm, Kristian; Yderstræde, Knud Bonnet; Pedersen, Kjeld Møller
2017-08-01
Hospitals increasingly make decisions regarding the early development of and investment in technologies, but a formal evaluation model for assisting hospitals early on in assessing the potential of innovative medical technologies is lacking. This article provides an overview of models for early assessment in different health organisations and discusses which models hold most promise for hospital decision makers. A scoping review of published studies between 1996 and 2015 was performed using nine databases. The following information was collected: decision context, decision problem, and a description of the early assessment model. 2362 articles were identified and 12 studies fulfilled the inclusion criteria. An additional 12 studies were identified and included in the review by searching reference lists. The majority of the 24 early assessment studies were variants of traditional cost-effectiveness analysis. Around one fourth of the studies presented an evaluation model with a broader focus than cost-effectiveness. Uncertainty was mostly handled by simple sensitivity or scenario analysis. This review shows that evaluation models using known methods for assessing cost-effectiveness are most prevalent in early assessment but seem ill-suited for early assessment in hospitals. Four models provided some usable elements for the development of a hospital-based model. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Buss, Aaron T; Wifall, Tim; Hazeltine, Eliot; Spencer, John P
2014-02-01
People are typically slower when executing two tasks than when only performing a single task. These dual-task costs are initially robust but are reduced with practice. Dux et al. (2009) explored the neural basis of dual-task costs and learning using fMRI. Inferior frontal junction (IFJ) showed a larger hemodynamic response on dual-task trials compared with single-task trials early in learning. As dual-task costs were eliminated, dual-task hemodynamics in IFJ reduced to single-task levels. Dux and colleagues concluded that the reduction of dual-task costs is accomplished through increased efficiency of information processing in IFJ. We present a dynamic field theory of response selection that addresses two questions regarding these results. First, what mechanism leads to the reduction of dual-task costs and associated changes in hemodynamics? We show that a simple Hebbian learning mechanism is able to capture the quantitative details of learning at both the behavioral and neural levels. Second, is efficiency isolated to cognitive control areas such as IFJ, or is it also evident in sensory motor areas? To investigate this, we restrict Hebbian learning to different parts of the neural model. None of the restricted learning models showed the same reductions in dual-task costs as the unrestricted learning model, suggesting that efficiency is distributed across cognitive control and sensory motor processing systems.
Ponce, Carlos; Bravo, Carolina; Alonso, Juan Carlos
2014-01-01
Studies evaluating agri-environmental schemes (AES) usually focus on responses of single species or functional groups. Analyses are generally based on simple habitat measurements but ignore food availability and other important factors. This can limit our understanding of the ultimate causes determining the reactions of birds to AES. We investigated these issues in detail and throughout the main seasons of a bird's annual cycle (mating, postfledging and wintering) in a dry cereal farmland in a Special Protection Area for farmland birds in central Spain. First, we modeled four bird response parameters (abundance, species richness, diversity and “Species of European Conservation Concern” [SPEC]-score), using detailed food availability and vegetation structure measurements (food models). Second, we fitted new models, built using only substrate composition variables (habitat models). Whereas habitat models revealed that both, fields included and not included in the AES benefited birds, food models went a step further and included seed and arthropod biomass as important predictors, respectively, in winter and during the postfledging season. The validation process showed that food models were on average 13% better (up to 20% in some variables) in predicting bird responses. However, the cost of obtaining data for food models was five times higher than for habitat models. This novel approach highlighted the importance of food availability-related causal processes involved in bird responses to AES, which remained undetected when using conventional substrate composition assessment models. Despite their higher costs, measurements of food availability add important details to interpret the reactions of the bird community to AES interventions and thus facilitate evaluating the real efficiency of AES programs. PMID:25165523
Estimating the global costs of vitamin A capsule supplementation: a review of the literature.
Neidecker-Gonzales, Oscar; Nestel, Penelope; Bouis, Howarth
2007-09-01
Vitamin A supplementation reduces child mortality. It is estimated that 500 million vitamin A capsules are distributed annually. Policy recommendations have assumed that the supplementation programs offer a proven technology at a relatively low cost of around US$0.10 per capsule. To review data on costs of vitamin A supplementation to analyze the key factors that determine program costs, and to attempt to model these costs as a function of per capita income figures. Using data from detailed cost studies in seven countries, this study generated comparable cost categories for analysis, and then used the correlation between national incomes and wage rates to postulate a simple model where costs of vitamin A supplementation are regressed on per capita incomes. Costs vary substantially by country and depend principally on the cost of labor, which is highly correlated with per capita income. Two other factors driving costs are whether the program is implemented in conjunction with other health programs, such as National Immunization Days (which lowers costs), and coverage in rural areas (which increases costs). Labor accounts for 70% of total costs, both for paid staff and for volunteers, while the capsules account for less than 5%. Marketing, training, and administration account for the remaining 25%. Total costs are lowest (roughly US$0.50 per capsule) in Africa, where wages and incomes are lowest, US$1 in developing countries in Asia, and US$1.50 in Latin America. Overall, this study derives a much higher global estimate of costs of around US$1 per capsule.
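The reported pattern (labor around 70% of cost and wage-driven, capsules under 5%, marketing/training/administration the remaining 25%) can be turned into a back-of-envelope decomposition in which wage-linked components scale with per-capita income. This is my own illustrative construction, not the authors' regression model, and the reference cost of US$1 per capsule is taken from the abstract.

```python
# Toy cost decomposition (assumption): wage-driven components scale
# linearly with per-capita income relative to a reference country
# where one delivered capsule costs about 1 USD in total.
def cost_per_capsule(income_ratio, base_cost=1.00):
    labor = 0.70 * base_cost * income_ratio    # paid staff and volunteers
    capsule = 0.05 * base_cost                 # traded good, roughly fixed
    other = 0.25 * base_cost * income_ratio    # marketing/training/admin
    return labor + capsule + other

low = cost_per_capsule(0.5)    # lower-income setting
high = cost_per_capsule(1.5)   # higher-income setting
```

Under these assumed shares, the toy model reproduces the rough ordering in the abstract: about US$0.50 in the lowest-wage settings, US$1 at the reference, and near US$1.50 where wages are 50% higher.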
Microeconomics of yield learning and process control in semiconductor manufacturing
NASA Astrophysics Data System (ADS)
Monahan, Kevin M.
2003-06-01
Simple microeconomic models that directly link yield learning to profitability in semiconductor manufacturing have been rare or non-existent. In this work, we review such a model and provide links to inspection capability and cost. Using a small number of input parameters, we explain current yield management practices in 200mm factories. The model is then used to extrapolate requirements for 300mm factories, including the impact of technology transitions to 130nm design rules and below. We show that the dramatic increase in value per wafer at the 300mm transition becomes a driver for increasing metrology and inspection capability and sampling. These analyses correlate well with actual factory data and often identify millions of dollars in potential cost savings. We demonstrate this using the example of grating-based overlay metrology for the 65nm node.
Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered
2011-01-01
Background: Reliable exposure data are a vital concern in medical epidemiology and intervention studies. The present study addresses the need of the medical researcher to spend monetary resources devoted to exposure assessment with optimal cost-efficiency, i.e. to obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover non-linear cost scenarios as well. Methods: Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results: Explicit mathematical rules for identifying the optimal allocation could be developed when cost functions were linear, whereas non-linear cost functions implied that part or all of the optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set.
Conclusions: The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and on the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
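For the linear-cost special case discussed above, the optimal allocation can be found by brute-force search: for each candidate number of occasions per subject, take as many subjects as the budget allows and evaluate the variance of the estimated group mean. A minimal sketch with a two-stage model and illustrative parameter names (not the authors' notation):

```python
def optimal_allocation(budget, c_subject, c_occasion,
                       var_between, var_within, n_max=50):
    """Search the number of measurement occasions per subject (linear costs).

    Variance of the estimated group mean with k subjects, n occasions each:
        var = var_between / k + var_within / (k * n)
    Linear budget constraint: k * (c_subject + n * c_occasion) <= budget.
    Returns (subjects, occasions_per_subject, variance), or None if the
    budget cannot fund even one subject.
    """
    best = None
    for n in range(1, n_max + 1):
        k = int(budget // (c_subject + n * c_occasion))  # affordable subjects
        if k < 1:
            continue
        var = var_between / k + var_within / (k * n)
        if best is None or var < best[2]:
            best = (k, n, var)
    return best
```

Consistent with the Results above, when subject recruitment is cheap relative to setting up occasions, the search returns one occasion per subject; only when recruitment is expensive and between-subject variance is small does repeating measurements pay off.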
Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.
Mathiassen, Svend Erik; Bolin, Kristian
2011-05-21
ERIC Educational Resources Information Center
Dahl, Robyn Mieko; Droser, Mary L.
2016-01-01
University earth science departments seeking to establish meaningful geoscience outreach programs often pursue large-scale, grant-funded programs. Although this type of outreach is highly successful, it is also extremely costly, and grant funding can be difficult to secure. Here, we present the Geoscience Education Outreach Program (GEOP), a…
In vivo neuronal calcium imaging in C. elegans.
Chung, Samuel H; Sun, Lin; Gabel, Christopher V
2013-04-10
The nematode worm C. elegans is an ideal model organism for relatively simple, low-cost neuronal imaging in vivo. Its small transparent body and simple, well-characterized nervous system allow identification and fluorescence imaging of any neuron within the intact animal. Simple immobilization techniques with minimal impact on the animal's physiology allow extended time-lapse imaging. The development of genetically encoded calcium-sensitive fluorophores such as cameleon and GCaMP allows in vivo imaging of neuronal calcium, reporting on both cell physiology and neuronal activity. Numerous transgenic strains expressing these fluorophores in specific neurons are readily available or can be constructed using well-established techniques. Here, we describe detailed procedures for measuring calcium dynamics within a single neuron in vivo using both GCaMP and cameleon. We discuss the advantages and disadvantages of both, as well as various methods of sample preparation (animal immobilization) and image analysis. Finally, we present results from two experiments: 1) using GCaMP to measure the sensory response of a specific neuron to an external electrical field, and 2) using cameleon to measure the physiological calcium response of a neuron to traumatic laser damage. Calcium imaging techniques such as these are used extensively in C. elegans and have been extended to measurements in freely moving animals, in multiple neurons simultaneously, and in comparisons across genetic backgrounds. C. elegans presents a robust and flexible system for in vivo neuronal imaging with advantages over other model systems in technical simplicity and cost.
The Interrelationship between Promoter Strength, Gene Expression, and Growth Rate
Klesmith, Justin R.; Detwiler, Emily E.; Tomek, Kyle J.; Whitehead, Timothy A.
2014-01-01
In exponentially growing bacteria, expression of heterologous protein impedes cellular growth rates. A quantitative understanding of the relationship between expression and growth rate will advance our ability to forward-engineer bacteria, which is important for metabolic engineering and synthetic biology applications. A recent study described a scaling model based on optimal allocation of ribosomes for protein translation. This model quantitatively predicts a linear relationship between microbial growth rate and heterologous protein expression with no free parameters. With the aim of validating this model, we have rigorously quantified the fitness cost of gene expression by using a library of synthetic constitutive promoters to drive expression of two separate proteins (eGFP and amiE) in E. coli in different strains and growth media. In all cases, we demonstrate that the fitness cost is consistent with the previous findings. We expand upon the previous theory by introducing a simple promoter activity model to quantitatively predict how basal promoter strength relates to growth rate and protein expression. We then estimate the amount of protein expression needed to support high flux through a heterologous metabolic pathway and predict the sizable fitness cost associated with enzyme production. This work has broad implications across the applied biological sciences because it allows prediction of the interplay between promoter strength, protein expression, and the resulting cost to microbial growth rates. PMID:25286161
Cost Models for MMC Manufacturing Processes
NASA Technical Reports Server (NTRS)
Elzey, Dana M.; Wadley, Haydn N. G.
1996-01-01
The quality cost modeling (QCM) tool is intended to be a relatively simple-to-use device for obtaining a first-order assessment of the quality-cost relationship for a given process-material combination. The QCM curve is a plot of cost versus quality (an index indicating microstructural quality), which is unique for a given process-material combination. The QCM curve indicates the tradeoff between cost and performance, thus enabling one to evaluate affordability. Additionally, the effect of changes in process design, raw materials, and process conditions on the cost-quality relationship can be evaluated. Such results might indicate the most efficient means to obtain improved quality at reduced cost by process design refinements, the implementation of sensors and models for closed loop process control, or improvement in the properties of raw materials being fed into the process. QCM also allows alternative processes for producing the same or similar material to be compared in terms of their potential for producing competitively priced, high quality material. Aside from demonstrating the usefulness of the QCM concept, this is one of the main foci of the present research program, namely to compare processes for making continuous fiber reinforced, metal matrix composites (MMC's). Two processes, low pressure plasma spray deposition and tape casting are considered for QCM development. This document consists of a detailed look at the design of the QCM approach, followed by discussion of the application of QCM to each of the selected MMC manufacturing processes along with results, comparison of processes, and finally, a summary of findings and recommendations.
The Economic Impact of Blindness in Europe.
Chakravarthy, Usha; Biundo, Eliana; Saka, Rasit Omer; Fasser, Christina; Bourne, Rupert; Little, Julie-Anne
2017-08-01
To estimate the annual loss of productivity from blindness and moderate to severe visual impairment (MSVI) in the population aged >50 years in the European Union (EU). We estimated the cost of lost productivity using three simple models reported in the literature based on (1) minimum wage (MW), (2) gross national income (GNI), and (3) purchasing power parity-adjusted gross domestic product (GDP-PPP) losses. In the first two models, assumptions included that all individuals worked until 65 years of age, and that half of all visual impairment cases in the >50-year age group would be in those aged between 50 and 65 years. Loss of productivity was estimated to be 100% for blind individuals and 30% for those with MSVI. None of these models included direct medical costs related to visual impairment. The estimated number of blind people in the EU population aged >50 years is ~1.28 million, with a further 9.99 million living with MSVI. Based on the three models, the estimated cost of blindness is €7.81 billion, €6.29 billion and €17.29 billion and that of MSVI €18.02 billion, €24.80 billion and €39.23 billion, with their combined costs €25.83 billion, €31.09 billion and €56.52 billion, respectively. The estimates from the MW and adjusted GDP-PPP models were generally comparable, whereas the GNI model estimates were higher, probably reflecting the lack of adjustment for unemployment. The cost of blindness and MSVI in the EU is substantial. Wider use of available cost-effective treatment and prevention strategies may reduce the burden significantly.
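The minimum-wage (MW) model outlined above reduces to simple arithmetic: counts of affected people, an assumed working-age fraction, productivity-loss weights of 100% (blindness) and 30% (MSVI), and a wage figure. A hedged sketch with placeholder inputs (the wage value below is hypothetical, not the paper's):

```python
def productivity_loss(n_blind, n_msvi, annual_wage,
                      working_fraction=0.5,  # half of the >50 group aged 50-65
                      loss_blind=1.0, loss_msvi=0.3):
    """Annual productivity loss, in the same currency as annual_wage.

    Mirrors the MW-style model: full productivity loss for blindness,
    30% loss for MSVI, applied to the assumed working-age fraction.
    """
    return (n_blind * working_fraction * loss_blind +
            n_msvi * working_fraction * loss_msvi) * annual_wage
```

Called with the abstract's population estimates (1.28 million blind, 9.99 million with MSVI) and an assumed annual wage, this yields an EU-wide loss in the tens of billions of euros, in line with the magnitudes reported above.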
Cost-efficiency trade-off and the design of thermoelectric power generators.
Yazawa, Kazuaki; Shakouri, Ali
2011-09-01
The energy conversion efficiency of today's thermoelectric generators is significantly lower than that of conventional mechanical engines. Almost all of the existing research is focused on materials to improve the conversion efficiency. Here we propose a general framework to study the cost-efficiency trade-off for thermoelectric power generation. A key factor is the optimization of thermoelectric modules together with their heat source and heat sinks. Full electrical and thermal co-optimization yields a simple analytical expression for the optimum design. Based on this model, power output per unit mass can be maximized. We show that the fractional area coverage of thermoelectric elements in a module could play a significant role in reducing the cost of power generation systems.
Singer, Y
1997-08-01
A constant rebalanced portfolio is an asset allocation algorithm which keeps the same distribution of wealth among a set of assets over a period of time. Recently, there has been work on on-line portfolio selection algorithms which are competitive with the best constant rebalanced portfolio determined in hindsight (Cover, 1991; Helmbold et al., 1996; Cover and Ordentlich, 1996). By their nature, these algorithms employ the assumption that high returns can be achieved using a fixed asset allocation strategy. However, stock markets are far from stationary, and in many cases the wealth achieved by a constant rebalanced portfolio is much smaller than the wealth achieved by an ad hoc investment strategy that adapts to changes in the market. In this paper we present an efficient portfolio selection algorithm that is able to track a changing market. We also describe a simple extension of the algorithm for the case of a general transaction cost, including the transaction cost models recently investigated in (Blum and Kalai, 1997). We provide a simple analysis of the competitiveness of the algorithm and check its performance on real stock data from the New York Stock Exchange accumulated over a 22-year period. On this data, our algorithm outperforms all the algorithms referenced above, with and without transaction costs.
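The baseline notion here, a constant rebalanced portfolio, can be sketched in a few lines: each period the wealth grows by the weighted sum of the assets' price relatives, and the holdings are rebalanced back to fixed proportions. The example data are illustrative, not drawn from the paper's NYSE data:

```python
import numpy as np

def crp_wealth(price_relatives, weights):
    """Wealth multiplier achieved by a constant rebalanced portfolio.

    price_relatives: T x m array; entry (t, j) is asset j's price ratio
    over period t. weights: length-m allocation, rebalanced back to these
    proportions at the start of every period (transaction costs ignored).
    """
    growth = np.asarray(price_relatives, float) @ np.asarray(weights, float)
    return float(np.prod(growth))
```

A classic illustration: with cash (relative always 1) and an asset that alternately doubles and halves, either asset alone gains nothing, but the uniform (1/2, 1/2) rebalanced portfolio grows by 1.5 × 0.75 = 1.125 every two periods.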
Ion thruster performance model
NASA Technical Reports Server (NTRS)
Brophy, J. R.
1984-01-01
A model of ion thruster performance is developed for high flux density, cusped magnetic field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature. The model and experiments indicate that thruster performance may be described in terms of only four thruster configuration dependent parameters and two operating parameters. The model also suggests that improved performance should be exhibited by thruster designs which extract a large fraction of the ions produced in the discharge chamber, which have good primary electron and neutral atom containment and which operate at high propellant flow rates.
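The abstract's formulation, an average energy to produce a plasma ion and a fraction of ions extracted into the beam, implies a simple relation for the beam ion energy cost. The sketch below is an interpretation of that wording, not the paper's full set of equations:

```python
def beam_ion_energy_cost(plasma_ion_cost, extracted_fraction):
    """Discharge energy spent per beam ion (eV/ion).

    plasma_ion_cost: average energy to produce an ion in the discharge
    chamber plasma (eV/ion). extracted_fraction: fraction of produced
    ions that are extracted to form the beam (0 < fraction <= 1).
    """
    return plasma_ion_cost / extracted_fraction
```

For example, a 150 eV/ion plasma cost with half the ions extracted gives a 300 eV/ion beam cost; extracting a larger fraction of the ions produced lowers the beam ion energy cost, consistent with the design conclusion above.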
ERIC Educational Resources Information Center
Downs, Nathan; Larsen, Kim; Parisi, Alfio; Schouten, Peter; Brennan, Chris
2012-01-01
A practical exercise for developing a simple cost-effective solar ultraviolet radiation dosimeter is presented for use by middle school science students. Specifically, this exercise investigates a series of experiments utilising the historical blue print reaction, combining ammonium iron citrate and potassium hexacyanoferrate to develop an…
Mori, Amani T; Ngalesoni, Frida; Norheim, Ole F; Robberstad, Bjarne
2014-09-15
Dihydroartemisinin-piperaquine (DhP) is highly recommended for the treatment of uncomplicated malaria. This study aims to compare the costs, health benefits and cost-effectiveness of DhP and artemether-lumefantrine (AL), alongside "do-nothing" as a baseline comparator, in order to consider the appropriateness of DhP as a first-line anti-malarial drug for children in Tanzania. A cost-effectiveness analysis was performed using a Markov decision model, from a provider's perspective. The study used cost data from Tanzania and secondary effectiveness data from a review of articles from sub-Saharan Africa. Probabilistic sensitivity analysis was used to incorporate uncertainties in the model parameters. In addition, sensitivity analyses were used to test plausible variations of key parameters, and the key assumptions were tested in scenario analyses. The model predicts that DhP is more cost-effective than AL, with an incremental cost-effectiveness ratio (ICER) of US$ 12.40 per DALY averted. This result relies on the assumption that compliance to treatment with DhP is higher than that with AL due to its relatively simple once-a-day dosage regimen. When compliance was assumed to be identical for the two drugs, AL was more cost-effective than DhP, with an ICER of US$ 12.54 per DALY averted. DhP is, however, slightly more likely to be cost-effective at a willingness-to-pay threshold of US$ 150 per DALY averted. Dihydroartemisinin-piperaquine is a very cost-effective anti-malarial drug. The findings support its use as an alternative first-line drug for the treatment of uncomplicated malaria in children in Tanzania and in other sub-Saharan African countries with similar healthcare infrastructures and epidemiology of malaria.
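The ICER quoted above is a simple ratio of incremental cost to incremental effect. A sketch with hypothetical inputs, since the abstract does not report the underlying cost and DALY totals:

```python
def icer(cost_new, cost_ref, dalys_averted_new, dalys_averted_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra DALY averted."""
    return (cost_new - cost_ref) / (dalys_averted_new - dalys_averted_ref)

# Hypothetical illustration: suppose the new drug costs US$ 310 more per
# treated cohort and averts 25 more DALYs than the comparator.
extra_cost_per_daly = icer(1310.0, 1000.0, 125.0, 100.0)
```

The decision rule then compares this ratio to the willingness-to-pay threshold (US$ 150 per DALY averted in the study): below the threshold, the intervention is considered cost-effective.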
Hepatic Resection for Colorectal Liver Metastases: A Cost-Effectiveness Analysis
Beard, Stephen M.; Holmes, Michael; Price, Charles; Majeed, Ali W.
2000-01-01
Objective To analyze the cost-effectiveness of resection for liver metastases compared with standard nonsurgical cytotoxic treatment. Summary Background Data The efficacy of hepatic resection for metastases from colorectal cancer has been debated, despite reported 5-year survival rates of 20% to 40%. Resection is confined to specialized centers and is not widely available, perhaps because of lack of appropriate expertise, resources, or awareness of its efficacy. The cost-effectiveness of resection is important from the perspective of managed care in the United States and for the commissioning of health services in the United Kingdom. Methods A simple decision-based model was developed to evaluate the marginal costs and health benefits of hepatic resection. Estimates of resectability for liver metastases were taken from UK-reported case series data. The results of 100 hepatic resections conducted in Sheffield from 1997 to 1999 were used for the cost calculation of liver resection. Survival data from published series of resections were compiled to estimate the incremental cost per life-year gained (LYG) because of the short period of follow-up in the Sheffield series. Results Hepatic resection for colorectal liver metastases provides an estimated marginal benefit of 1.6 life-years (undiscounted) at a marginal cost of £6,742. If 17% of patients have only palliative resections, the overall cost per LYG is approximately £5,236 (£5,985 with discounted benefits). If potential benefits are extended to include 20-year survival rates, these figures fall to approximately £1,821 (£2,793 with discounted benefits). Further univariate sensitivity analysis of key model parameters showed the cost per LYG to be consistently less than £15,000. Conclusion In this model, hepatic resection appears highly cost-effective compared with nonsurgical treatments for colorectal-related liver metastases. PMID:11088071
Estimating the cost of blood: past, present, and future directions.
Shander, Aryeh; Hofmann, Axel; Gombotz, Hans; Theusinger, Oliver M; Spahn, Donat R
2007-06-01
Understanding the costs associated with blood products requires sophisticated knowledge about transfusion medicine and is attracting the attention of clinical and administrative healthcare sectors worldwide. To improve outcomes, blood usage must be optimized and expenditures controlled so that resources may be channeled toward other diagnostic, therapeutic, and technological initiatives. Estimating blood costs, however, is a complex undertaking, surpassing simple supply versus demand economics. Shrinking donor availability and application of a precautionary principle to minimize transfusion risks are factors that continue to drive the cost of blood products upward. Recognizing that historical accounting attempts to determine blood costs have varied in scope, perspective, and methodology, new approaches have been initiated to identify all potential cost elements related to blood and blood product administration. Activities are also under way to tie these elements together in a comprehensive and practical model that will be applicable to all single-donor blood products without regard to practice type (e.g., academic, private, multi- or single-center clinic). These initiatives, their rationale, importance, and future directions are described.
How much does a tokamak reactor cost?
NASA Astrophysics Data System (ADS)
Freidberg, J.; Cerfon, A.; Ballinger, S.; Barber, J.; Dogra, A.; McCarthy, W.; Milanese, L.; Mouratidis, T.; Redman, W.; Sandberg, A.; Segal, D.; Simpson, R.; Sorensen, C.; Zhou, M.
2017-10-01
The cost of a fusion reactor is of critical importance to its ultimate acceptability as a commercial source of electricity. While there are general rules of thumb for scaling both the overnight cost and the levelized cost of electricity, the corresponding relations are not very accurate or universally agreed upon. We have carried out a series of scaling studies of tokamak reactor costs based on reasonably sophisticated plasma and engineering models. The analysis is largely analytic, requiring only a simple numerical code, thus allowing a very large number of designs. Importantly, the studies are aimed at plasma physicists rather than fusion engineers. The goals are to assess the pros and cons of steady-state burning plasma experiments and reactors. One specific set of results discusses the benefits of higher magnetic fields, now possible because of the recent development of high-temperature rare-earth superconductors (REBCO); with this goal in mind, we calculate quantitative expressions, including both scaling and multiplicative constants, for cost and major radius as a function of central magnetic field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ian Metzger, Jesse Dean
2010-12-31
This software requires inputs of simple water fixture inventory information and calculates the water/energy and cost benefits of various retrofit opportunities. The tool includes water conservation measures for low-flow toilets, low-flow urinals, low-flow faucets, and low-flow showerheads. It calculates water savings, energy savings, demand reduction, cost savings, and building life-cycle costs, including simple payback, discounted payback, net present value, and savings-to-investment ratio. In addition, the tool also displays the environmental benefits of a project.
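The financial outputs the tool reports (simple payback, net present value, savings-to-investment ratio) follow standard formulas. A minimal sketch, assuming constant annual savings over the analysis period:

```python
def simple_payback(install_cost, annual_savings):
    """Years to recover the retrofit cost, ignoring discounting."""
    return install_cost / annual_savings

def net_present_value(install_cost, annual_savings, years, discount_rate):
    """NPV of a retrofit with constant annual savings over `years`."""
    pv = sum(annual_savings / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return pv - install_cost

def savings_to_investment_ratio(install_cost, annual_savings, years, discount_rate):
    """SIR: present value of savings divided by the installed cost."""
    pv = sum(annual_savings / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return pv / install_cost
```

An SIR above 1 (equivalently, a positive NPV) indicates a cost-effective retrofit over the chosen life cycle.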
NASA Astrophysics Data System (ADS)
Armas, Iuliana; Bostenaru Dan, Maria
2010-05-01
The COST action TU0801 "Semantic enrichment of 3D city models for sustainable urban development" aims at using ontologies to enrich three-dimensional models of cities. Such models can be used for various purposes, one of them being disaster management. COST actions are European networks of nationally funded projects, with the European Science Foundation funding the networking activities. Romania adhered to the above-mentioned COST action in 2009, the nationally funded project being concerned with the use of GIS for the vulnerability to hazards of the city of Bucharest. Among the networking activities Romanian representatives participated in are a training school on 3D GIS for disaster management (with two trainees) and a working group and management committee meeting. Within the nationally funded project, we aim to further develop the working group's tasks on usability and guidance of semantically enriched city models. In this contribution we show how we aim to achieve this. One of the issues is how to extrude GIS data to achieve a simple 3D representation for a pilot area in the historic centre of Bucharest. Another is how to use this for the study of urbanism aspects, ranging from visual urban composition to the complex 3D aspects of restoration projects, including the addition of new floors to buildings.
A low cost, simple, portable instrument for the measurement of infra-red reflectance of paints
NASA Astrophysics Data System (ADS)
Marson, F.
1982-05-01
The construction and design of a low cost, simple, portable infra-red reflectometer which can be used to estimate the reflectance of paint films in the 800 nm region is described. The infra-red reflectances of a range of lustreless, semigloss and gloss olive drab camouflage paints determined using this instrument are compared to those obtained using modified commercial equipment and to the reflectances measured at 800 nm using a Cary model 17 spectrophotometer. The new reflectometer was shown to be superior to the modified commercial instrument currently specified in Australian government paint specifications and to be capable of estimating the reflectance of olive drab paints to within about one per cent of the Cary derived reflectance values. The reflectance values for a range of 24 experimental coatings made with pigments of varying absorption in the infra-red region are used to illustrate the effect of the instrument's spectral response and the necessity of establishing a reliable working standard.
Brian hears: online auditory processing using vectorization over channels.
Fontaine, Bertrand; Goodman, Dan F M; Benichoux, Victor; Brette, Romain
2011-01-01
The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in "Brian Hears," a library for the spiking neural network simulator package "Brian." This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations.
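The core idea, vectorizing the filter state update across frequency channels rather than looping over channels, can be illustrated with a bank of first-order low-pass filters. This is a deliberate simplification: Brian Hears itself implements full cochlear filter cascades, not the toy filter below.

```python
import numpy as np

def filterbank_lowpass(signal, cutoff_hz, fs):
    """Run one mono signal through a bank of first-order low-pass filters,
    one per channel, updating every channel's state with a single vector
    operation per sample instead of a per-channel loop."""
    cutoff_hz = np.asarray(cutoff_hz, dtype=float)
    # Per-channel smoothing coefficient for a first-order IIR low-pass.
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)
    out = np.empty((len(signal), len(cutoff_hz)))
    state = np.zeros(len(cutoff_hz))
    for i, x in enumerate(signal):
        state = state + alpha * (x - state)  # all channels at once
        out[i] = state
    return out
```

Because the inner update is a single NumPy vector operation over thousands of channels, the Python interpretation overhead per sample is amortized across the whole filterbank, which is the efficiency argument made above.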
A simple and fast method for extraction and quantification of cryptophyte phycoerythrin.
Thoisen, Christina; Hansen, Benni Winding; Nielsen, Søren Laurentius
2017-01-01
The microalgal pigment phycoerythrin (PE) is of commercial interest as a natural colorant in food and cosmetics, as well as a fluoroprobe for laboratory analysis. Several methods for extraction and quantification of PE are available, but they typically involve various extraction buffers, repetitive freeze-thaw cycles and liquid nitrogen, making extraction procedures more complicated. A simple method for extraction of PE from cryptophytes is described using standard laboratory materials and equipment. The cryptophyte cells on the filters were disrupted at -80 °C and phosphate buffer was added for extraction at 4 °C, followed by absorbance measurement. The cryptophyte Rhodomonas salina was used as a model organism.
• Simple method for extraction and quantification of phycoerythrin from cryptophytes.
• Minimal usage of equipment and chemicals, and low labor costs.
• Applicable for industrial and biological purposes.
Organizing for low cost space operations - Status and plans
NASA Technical Reports Server (NTRS)
Lee, C.
1976-01-01
Design features of the Space Transportation System (vehicle reuse, low cost expendable components, simple payload interfaces, standard support systems) must be matched by economical operational methods to achieve low operating and payload costs. Users will be responsible for their own payloads and will be charged according to the services they require. Efficient use of manpower, simple documentation, simplified test, checkout, and flight planning are firm goals, together with flexibility for quick response to varying user needs. Status of the Shuttle hardware, plans for establishing low cost procedures, and the policy for user charges are discussed.
NASA Astrophysics Data System (ADS)
Mollaei, Zeinab; Davary, Kamran; Majid Hasheminia, Seyed; Faridhosseini, Alireza; Pourmohamad, Yavar
2018-04-01
Due to the uncertainty concerning the location of flow paths on active alluvial fans, alluvial fan floods can be more dangerous than riverine floods. The United States Federal Emergency Management Agency (FEMA) has used a simple stochastic model named FAN for delineating alluvial fan flood hazards, a practice followed for many years. In the last decade, this model has been criticized as a consequence of the development of more complex computer models. This study was conducted on three alluvial fans located in northeast and southeast Iran using a combination of the FAN model, the hydraulic portion of the FLO-2D model, and geomorphological information. Initial stages included three steps: (a) identifying the alluvial fans' landforms, (b) determining the active and inactive areas of the alluvial fans, and (c) delineating the 100-year flood within these selected areas. This information was used as input to the three approaches: (i) the FLO-2D model, (ii) the geomorphological method, and (iii) the FAN model. Thereafter, the results of each model were obtained, and geographical information system (GIS) layers were created and overlaid. Afterwards, the results were evaluated and compared using a scoring system. The goal of this research was to introduce a simple but effective solution for estimating flood hazards. It was concluded that the integrated method proposed in this study is superior at projecting alluvial fan flood hazards with minimal required input data, simplicity, and affordability, which are the primary goals of such comprehensive studies. These advantages are especially relevant in underdeveloped and developing countries, which often lack detailed data and cannot financially support costly projects. Furthermore, such a highly cost-effective method could also be advantageous and pragmatic for developed countries.
Niu, Ji-Cheng; Zhou, Ting; Niu, Li-Li; Xie, Zhen-Sheng; Fang, Fang; Yang, Fu-Quan; Wu, Zhi-Yong
2018-02-01
In this work, fast isoelectric focusing (IEF) was successfully implemented on an open paper fluidic channel for simultaneous concentration and separation of proteins from a complex matrix. With this simple device, IEF can be finished in 10 min with a resolution of 0.03 pH units and a concentration factor of 10, as estimated with colored model proteins using smartphone-based colorimetric detection. Fast detection of albumin from human serum and glycated hemoglobin (HbA1c) from blood cells was demonstrated. In addition, off-line identification of the model proteins from the IEF fractions with matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) was also shown. This paper-based analytical device (PAD) IEF is potentially useful either for point-of-care testing (POCT) or biomarker analysis as a cost-effective sample pretreatment method.
Simple calculation of ab initio melting curves: Application to aluminum.
Robert, Grégory; Legrand, Philippe; Arnault, Philippe; Desbiens, Nicolas; Clérouin, Jean
2015-03-01
We present a simple, fast, and promising method to compute the melting curves of materials with ab initio molecular dynamics. It is based on the two-phase thermodynamic model of Lin et al. [J. Chem. Phys. 119, 11792 (2003)] and its improved version given by Desjarlais [Phys. Rev. E 88, 062145 (2013)]. In this model, the velocity autocorrelation function is used to calculate the contribution of nuclear motion to the entropy of the solid and liquid phases. It is then possible to find the thermodynamic conditions of equal Gibbs free energy between these phases, which define the melting curve. A first benchmark on the face-centered cubic melting curve of aluminum from 0 to 300 GPa demonstrates how to obtain an accuracy of 5%-10%, comparable to the most sophisticated methods, at a much lower computational cost.
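The equal-Gibbs-free-energy criterion described above can be sketched numerically: at fixed pressure, the melting temperature is where the solid and liquid free-energy curves cross. The curves below are toy illustrations chosen to cross near 933 K (aluminum's ambient-pressure melting point), not ab initio data:

```python
import numpy as np

def melting_temperature(T, G_solid, G_liquid):
    """Return the temperature where G_solid and G_liquid cross, i.e. where
    the two phases are in equilibrium at fixed pressure. Uses linear
    interpolation between tabulated points."""
    dG = G_solid - G_liquid
    i = int(np.flatnonzero(np.sign(dG[:-1]) != np.sign(dG[1:]))[0])
    return T[i] - dG[i] * (T[i + 1] - T[i]) / (dG[i + 1] - dG[i])

# Toy free-energy curves (arbitrary units) crossing near 933 K.
T = np.linspace(800.0, 1200.0, 401)
G_s = -1.0e-3 * (T - 800.0)
G_l = 0.133 - 2.0e-3 * (T - 800.0)

print(round(melting_temperature(T, G_s, G_l), 1))  # 933.0
```

In the actual method, the free energies come from the two-phase entropy model rather than analytic curves, but the crossing condition is the same.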
Accounting for the relationship between per diem cost and LOS when estimating hospitalization costs.
Ishak, K Jack; Stolar, Marilyn; Hu, Ming-yi; Alvarez, Piedad; Wang, Yamei; Getsios, Denis; Williams, Gregory C
2012-12-01
Hospitalization costs in clinical trials are typically derived by multiplying the length of stay (LOS) by an average per-diem (PD) cost from external sources. This assumes that PD costs are independent of LOS. Resource utilization in early days of the stay is usually more intense, however, and thus, the PD cost for a short hospitalization may be higher than for longer stays. The shape of this relationship is unlikely to be linear, as PD costs would be expected to gradually plateau. This paper describes how to model the relationship between PD cost and LOS using flexible statistical modelling techniques. An example based on a clinical study of clevidipine for the treatment of peri-operative hypertension during hospitalizations for cardiac surgery is used to illustrate how inferences about cost-savings associated with good blood pressure (BP) control during the stay can be affected by the approach used to derive hospitalization costs. Data on the cost and LOS of hospitalizations for coronary artery bypass grafting (CABG) from the Massachusetts Acute Hospital Case Mix Database (the MA Case Mix Database) were analyzed to link LOS to PD cost, factoring in complications that may have occurred during the hospitalization or post-discharge. The shape of the relationship between LOS and PD costs in the MA Case Mix was explored graphically in a regression framework. A series of statistical models, from those based on a simple logarithmic transformation of LOS to more flexible models using LOcally wEighted Scatterplot Smoothing (LOESS) techniques, were considered. A final model was selected, using simplicity and parsimony as guiding principles in addition to traditional fit statistics (like Akaike's Information Criterion, or AIC). This mapping was applied in ECLIPSE to predict an LOS-specific PD cost, and then a total cost of hospitalization. These were then compared for patients who had good vs. poor peri-operative blood-pressure control.
The MA Case Mix dataset included data from over 10,000 patients. Visual inspection of PD vs. LOS revealed a non-linear relationship. A logarithmic model and a series of LOESS and piecewise-linear models with varying connection points were tested. The logarithmic model was ultimately favoured for its fit and simplicity. Using this mapping in the ECLIPSE trials, we found that good peri-operative BP control was associated with a cost savings of $5,366 when costs were derived using the mapping, compared with savings of $7,666 obtained using the traditional approach of calculating the cost. PD costs vary systematically with LOS, with short stays being associated with high PD costs that drop gradually and level off. The shape of the relationship may differ in other settings. It is important to assess this and model the observed pattern, as this may have an impact on conclusions based on derived hospitalization costs.
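The favoured logarithmic mapping can be sketched in a few lines. The data below are synthetic and the coefficients illustrative, not the MA Case Mix estimates; the point is only the shape of the fit, with per-diem cost falling and levelling off as LOS grows:

```python
import numpy as np

def fit_pd_model(los, pd_cost):
    """Least-squares fit of PD = a + b*ln(LOS); b < 0 captures the high
    per-diem cost of short stays levelling off for longer ones."""
    b, a = np.polyfit(np.log(los), pd_cost, 1)
    return a, b

def total_cost(a, b, los):
    """LOS-specific total hospitalization cost: predicted per-diem times LOS."""
    return (a + b * np.log(los)) * los

# Synthetic stays: per-diem cost drops with LOS and levels off.
rng = np.random.default_rng(0)
los = rng.integers(1, 30, size=500).astype(float)
pd_obs = 6000.0 - 1200.0 * np.log(los) + rng.normal(0.0, 200.0, size=500)

a, b = fit_pd_model(los, pd_obs)
print(b < 0)                                      # short stays cost more per day
print(total_cost(a, b, 4.0) < total_cost(a, b, 8.0))  # but total still rises with LOS
```

Using the fitted per-diem instead of a flat average is what shrinks the derived cost difference between patient groups whose stays differ in length.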
Mobile Number Portability in Europe
2005-08-01
Notes on the Balassa-Samuelson effect, No. 3/2002, published in: Stefan Reitz (ed.): Theoretical and Economic-Policy Aspects of the Internatio...However, the argument is slightly more complex. Using a simple model with differentiated networks, Buehler and Haucap (2004) show that the incumbent's...Elasticities: The above arguments suggest that it is more difficult to gain market share in the presence of switching costs, as undercutting needs to be
The Instability of Instability
1991-05-01
thermodynamic principles, changes cannot be effected without some cost. The decision-making associated with Model I can be viewed as rational behavior. Consider...number Democratic simple majority voting is perhaps the most widely used method of group decision making in our time. Current theory, based on...incorporate any of several plausible characteristics of decision-making, then the instability theorems do not hold and in fact the probability of
Analyzing costs of space debris mitigation methods
NASA Astrophysics Data System (ADS)
Wiedemann, C.; Krag, H.; Bendisch, J.; Sdunnus, H.
The steadily increasing number of space objects poses a considerable hazard to all kinds of spacecraft. To reduce the risks to future space missions, different debris mitigation measures and spacecraft protection techniques have been investigated over the last years. However, economic efficiency has not yet been considered in this context, and the economic background is not always clear to satellite operators and the space industry. Current studies aim to evaluate the mission costs due to space debris in a business-as-usual (no mitigation) scenario compared to the mission costs with debris mitigation. The aim is an estimation of the time until the investment in debris mitigation will lead to an effective reduction of mission costs. This paper presents the results of investigations on the key problems of cost estimation for spacecraft and the influence of debris mitigation and shielding on cost. Shielding a satellite can be an effective method to protect the spacecraft against debris impact. Mitigation strategies like the reduction of orbital lifetime and de- or re-orbiting of non-operational satellites are methods to control the space debris environment; these methods increase costs. In a first step, the overall costs of different types of unmanned satellites are analyzed. The key problem is that it is not possible to provide a simple cost model that applies to all types of satellites: unmanned spacecraft differ greatly in mission, complexity of design, payload, and operational lifetime. It is important to classify relevant cost parameters and investigate their influence on the respective mission. The theory of empirical cost estimation and existing cost models are discussed. A selected cost model is simplified and generalized for application to all operational satellites. In a next step, the influence of space debris on cost is treated when the implementation of mitigation strategies is considered.
NASA Technical Reports Server (NTRS)
Sterk, Steve; Chesley, Stephan
2008-01-01
The upcoming retirement of the Baby Boomers will leave a workforce age gap between the younger generation (the future NASA decision makers) and the gray beards. This paper reflects on the average age of the workforce across NASA Centers, the aerospace industry, and other government agencies such as DoD. It digs into productivity and realization factors and how they are applied to bi-monthly (payroll) data for true full-time equivalent (FTE) calculations that could be used at each of the NASA Centers and in other business systems now at the forefront of implementation. The paper offers some comparative cost analyses and solutions, from simple FTE cost-estimating relationships (CERs) to CERs for monthly time-phasing activities for small research projects that start and are completed within a government fiscal year. It presents the results of a parametric study investigating the cost-effectiveness of alternative performance-based CERs and how they feed into a Center's forward pricing rate proposals (FPRPs). True CERs based on a younger workforce will have some effect on the labor rates used in both commercial cost models and internal home-grown cost models, which may impact the productivity factors for future NASA missions.
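The realization-factor idea mentioned above can be sketched generically: discount paid payroll hours down to productive, chargeable hours before counting FTEs. The period length, factor value, and formula here are illustrative assumptions, not NASA's actual method:

```python
STANDARD_PAY_PERIOD_HOURS = 80.0   # assumed two-week payroll period

def true_fte(paid_hours, realization_factor):
    """Convert payroll hours to a 'true' FTE count: the realization factor
    discounts paid hours to productive, chargeable hours. Generic
    illustration only; actual Center formulas may differ."""
    return (paid_hours * realization_factor) / STANDARD_PAY_PERIOD_HOURS

print(round(true_fte(80.0, 0.85), 2))   # a fully paid employee counts as 0.85 FTE
```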
Heil, John R; Nordeste, Ricardo F; Charles, Trevor C
2011-04-01
Here we report a simple cost-effective device for screening colonies on plates for expression of the monomeric red fluorescent protein mRFP1 and the fluorescent dye Nile red. This device can be built from any simple light source, in our case a Quebec Colony Counter, and cost-effective theatre gels. The device can be assembled in as little as 20 min, and it produces excellent results when screening a large number of colonies.
A simple and low-cost permanent magnet system for NMR
NASA Astrophysics Data System (ADS)
Chonlathep, K.; Sakamoto, T.; Sugahara, K.; Kondo, Y.
2017-02-01
We have developed a simple, easy-to-build, and low-cost magnet system for NMR, whose homogeneity is about 4×10^-4 at 57 mT, built with a pair of commercially available ferrite magnets. This homogeneity corresponds to about 90 Hz spectral resolution at the 2.45 MHz hydrogen Larmor frequency. The material cost of this NMR magnet system is little more than $100. The components can be printed with a 3D printer.
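As a quick plausibility check (our own arithmetic, using the standard proton gyromagnetic ratio rather than anything from the paper), the hydrogen Larmor frequency at 57 mT does come out near the quoted 2.45 MHz:

```python
# Proton gyromagnetic ratio gamma/(2*pi) in MHz per tesla (standard value).
GAMMA_H_MHZ_PER_T = 42.577

B0_T = 0.057                               # the 57 mT ferrite magnet pair
f_larmor_MHz = GAMMA_H_MHZ_PER_T * B0_T    # Larmor frequency, f = (gamma/2pi) * B0

print(round(f_larmor_MHz, 2))              # ~2.43, consistent with the quoted 2.45 MHz
```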
Elements of complexity in subsurface modeling, exemplified with three case studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freedman, Vicky L.; Truex, Michael J.; Rockhold, Mark
2017-04-03
There are complexity elements to consider when applying subsurface flow and transport models to support environmental analyses. Modelers balance the benefits and costs of modeling along the spectrum of complexity, taking into account the attributes of more simple models (e.g., lower cost, faster execution, easier to explain, less mechanistic) and the attributes of more complex models (higher cost, slower execution, harder to explain, more mechanistic and technically defensible). In this paper, modeling complexity is examined with respect to considering this balance. The discussion of modeling complexity is organized into three primary elements: 1) modeling approach, 2) description of process, and 3) description of heterogeneity. Three examples are used to examine these complexity elements. Two of the examples use simulations generated from a complex model to develop simpler models for efficient use in model applications. The first example is designed to support performance evaluation of soil vapor extraction remediation in terms of groundwater protection. The second example investigates the importance of simulating different categories of geochemical reactions for carbon sequestration and selecting appropriate simplifications for use in evaluating sequestration scenarios. In the third example, the modeling history for a uranium-contaminated site demonstrates that conservative parameter estimates were inadequate surrogates for complex, critical processes and there is discussion on the selection of more appropriate model complexity for this application. All three examples highlight how complexity considerations are essential to create scientifically defensible models that achieve a balance between model simplification and complexity.
From market games to real-world markets
NASA Astrophysics Data System (ADS)
Jefferies, P.; Hart, M. L.; Hui, P. M.; Johnson, N. F.
2001-04-01
This paper uses the development of multi-agent market models to present a unified approach to the joint questions of how financial market movements may be simulated, predicted, and hedged against. We first present the results of agent-based market simulations in which traders equipped with simple buy/sell strategies and limited information compete in speculatory trading. We examine the effect of different market clearing mechanisms and show that implementation of a simple Walrasian auction leads to unstable market dynamics. We then show that a more realistic out-of-equilibrium clearing process leads to dynamics that closely resemble real financial movements, with fat-tailed price increments, clustered volatility and high volume autocorrelation. We then show that replacing the `synthetic' price history used by these simulations with data taken from real financial time-series leads to the remarkable result that the agents can collectively learn to identify moments in the market where profit is attainable. Hence on real financial data, the system as a whole can perform better than random. We then employ the formalism of Bouchaud in conjunction with agent based models to show that in general risk cannot be eliminated from trading with these models. We also show that, in the presence of transaction costs, the risk of option writing is greatly increased. This risk, and the costs, can however be reduced through the use of a delta-hedging strategy with modified, time-dependent volatility structure.
Coins and Costs: A Simple and Rapid Assessment of Basic Financial Knowledge
ERIC Educational Resources Information Center
Willner, Paul; Bailey, Rebecca; Dymond, Simon; Parry, Rhonwen
2011-01-01
Introduction: We describe a simple and rapid screening test for basic financial knowledge that is suitable for administration to people with mild intellectual disabilities. Method: The Coins and Costs test asks respondents to name coins, and to estimate prices of objects ranging between 1 British Pound (an ice cream) and 100K British Pounds (a…
Kelishadi, Roya; Ziaee, Vahid; Ardalan, Gelayol; Namazi, Ascieh; Noormohammadpour, Pardis; Ghayour-Mobarhan, Majid; Sadraei, Hoda; Mirmoghtadaee, Parisa; Poursafa, Parinaz
2010-01-01
Objective To provide a low-cost, simple model of culturally appropriate facilities for improving physical activity among girls and their mothers through an after-school program, and to determine the changes in anthropometric indexes after this trial. Methods This national study was conducted in 2006-2007 in 7 provinces of Iran with different socioeconomic situations. Female students in the 7th through 10th grades and their mothers were selected by random cluster sampling. In each province, 24 sessions of after-school aerobic physical activity were held for 90 minutes, two days a week, for 3 months, at school sites in the afternoon. Findings The study comprised 410 participants (204 mothers and 206 daughters), with mean ages of 15.86±1.01 years for the girls and 40.71±6.3 years for the mothers. The focus group discussions showed that, in general, both mothers and daughters were satisfied with the program and found it feasible and successful. After the trial, the indexes of generalized and abdominal obesity improved significantly both in the girls and in their mothers (P-value <0.0001 for weight, body mass index, and waist circumference). Conclusion Our findings may provide a low-cost, simple, and effective model of motivation for physical activity with targeted interventions for girls and their mothers. We suggest that the success of this trial might be a result of the bonding and companionship of mothers and daughters. Such a model can be integrated into existing health and education systems to increase physical activity levels. PMID:23056741
Si, L; Winzenberg, T M; Palmer, A J
2014-01-01
This review examined the evolution of health economic models used in evaluations of clinical approaches to preventing osteoporotic fractures. Models have improved, with medical continuance increasingly recognized as a contributor to health and economic outcomes, and with advances in epidemiological data. Model-based health economic evaluation studies are increasingly used to investigate the cost-effectiveness of osteoporotic fracture prevention and treatment. The objective of this study was to carry out a systematic review of the evolution of health economic models used in the evaluation of osteoporotic fracture prevention. Electronic searches of MEDLINE and EMBASE were carried out using a predefined search strategy. Inclusion and exclusion criteria were used to select relevant studies. The reference lists of included studies were searched to identify any potential study not captured by the electronic search. Data on country, interventions, type of fracture prevention, evaluation perspective, type of model, time horizon, fracture sites, expressed costs, types of costs included, and effectiveness measurement were extracted. Seventy-four models were described in 104 publications, of which 69% were European. Earlier models focused mainly on hip, vertebral, and wrist fractures, but later models included multiple fracture sites (humerus, pelvis, tibia, and other fractures). Modeling techniques have evolved from simple decision trees, through deterministic Markov processes, to individual patient simulation models accounting for uncertainty in multiple parameters. Treatment continuance has increasingly been taken into account in the models in the last decade. Models have evolved in their complexity and emphasis, with medical continuance becoming increasingly recognized as a contributor to health and economic outcomes.
This evolution may be driven in part by the desire to capture all the important differentiating characteristics of medications under scrutiny, as well as the advancement in epidemiological data relevant to osteoporosis fractures.
Hendrix, Kristin S; Downs, Stephen M; Brophy, Ginger; Carney Doebbeling, Caroline; Swigonski, Nancy L
2013-01-01
Most state Medicaid programs reimburse physicians for providing fluoride varnish, yet the only published studies of cost-effectiveness do not show cost-savings. Our objective is to apply state-specific claims data to an existing published model to quickly and inexpensively estimate the cost-savings of a policy consideration to better inform decisions; specifically, to assess whether Indiana Medicaid children's restorative service rates met the threshold to generate cost-savings. Threshold analysis was based on the 2006 model by Quiñonez et al. Simple calculations were used to "align" the Indiana Medicaid data with the published model. Quarterly likelihoods that a child would receive treatment for caries were annualized. The probability of a tooth developing a cavitated lesion was multiplied by the probability of using restorative services. Finally, this rate of restorative services given cavitation was multiplied by 1.5 to generate the threshold to attain cost-savings. Restorative services utilization rates, extrapolated from available Indiana Medicaid claims, were compared with these thresholds. For children 1-2 years old, restorative services utilization was 2.6 percent, which was below the 5.8 percent threshold for cost-savings. However, for children 3-5 years of age, restorative services utilization was 23.3 percent, exceeding the 14.5 percent threshold that suggests cost-savings. Combining a published model with state-specific data, we were able to quickly and inexpensively demonstrate that restorative service utilization rates for children 36 months and older in Indiana are high enough that fluoride varnish regularly applied by physicians to children starting at 9 months of age could save Medicaid funds over a 3-year horizon. © 2013 American Association of Public Health Dentistry.
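The threshold logic described above reduces to a few lines. The utilization and threshold figures are taken from the abstract; the function names and the independence assumption in the annualization step are our own:

```python
def annualized(quarterly_p):
    """Annual probability of at least one caries treatment, from a quarterly
    probability (complement rule, assuming independent quarters)."""
    return 1.0 - (1.0 - quarterly_p) ** 4

def varnish_saves_money(utilization, threshold):
    """Per the threshold analysis: fluoride varnish is cost-saving when observed
    restorative-service utilization exceeds the threshold (1.5 times the modeled
    rate of restoration given cavitation)."""
    return utilization > threshold

print(varnish_saves_money(0.026, 0.058))   # ages 1-2: below threshold -> False
print(varnish_saves_money(0.233, 0.145))   # ages 3-5: above threshold -> True
```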
Prediction of power requirements for a longwall armored face conveyor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broadfoot, A.R.; Betz, R.E.
1995-12-31
Longwall armored face conveyors (AFCs) have traditionally been designed using a combination of heuristics and simple models. However, as longwalls increase in length, these design procedures are proving inadequate. The result has been either costly loss of production due to AFC stalling or component failure, or larger-than-necessary capital investment due to overdesign. To allow accurate estimation of the power requirements for an AFC, this paper develops a comprehensive model of all the friction forces associated with the AFC. Power requirement predictions obtained from these models are then compared with measurements from two mine faces.
The Instructional Cost Index. A Simplified Approach to Interinstitutional Cost Comparison.
ERIC Educational Resources Information Center
Beatty, George, Jr.; And Others
The paper describes a simple, yet effective method of computing a comparative index of instructional costs. The Instructional Cost Index identifies direct cost differentials among instructional programs. Cost differentials are described in terms of differences among numerical values of variables that reflect fundamental academic and resource…
The dominance of the herbicide resistance cost in several Arabidopsis thaliana mutant lines.
Roux, Fabrice; Gasquez, Jacques; Reboud, Xavier
2004-01-01
Resistance evolution depends upon the balance between advantage and disadvantage (cost) conferred in treated and untreated areas. By analyzing morphological characters and simple fitness components, the cost associated with each of eight herbicide resistance alleles (acetolactate synthase, cellulose synthase, and auxin-induced target genes) was studied in the model plant Arabidopsis thaliana. The use of allele-specific PCR to discriminate between heterozygous and homozygous plants was used to provide insights into the dominance of the resistance cost, a parameter rarely described. Morphological characters appear more sensitive than fitness (seed production) because 6 vs. 4 differences between resistant and sensitive homozygous plants were detected, respectively. Dominance levels for the fitness cost ranged from recessivity (csr1-1, ixr1-2, and axr1-3) to dominance (axr2-1) to underdominance (aux1-7). Furthermore, the dominance level of the herbicide resistance trait did not predict the dominance level of the cost of resistance. The relationship of our results to theoretical predictions of dominance and the consequences of fitness cost and its dominance in resistance management are discussed. PMID:15020435
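The dominance level of a fitness cost can be summarized by a coefficient h, with h = 0 for a fully recessive cost, h = 1 for a fully dominant cost, and h > 1 for underdominance. The formula below is a standard population-genetics definition, not taken from the paper, and the seed-production values are hypothetical:

```python
def cost_dominance(w_sensitive, w_hetero, w_resistant):
    """Dominance coefficient h of a resistance cost: 0 = fully recessive
    (heterozygote fitness equals the sensitive homozygote), 1 = fully dominant
    (equals the resistant homozygote), >1 = underdominant (below both)."""
    return (w_sensitive - w_hetero) / (w_sensitive - w_resistant)

# Hypothetical seed-production values, for illustration only.
print(cost_dominance(100.0, 100.0, 80.0))  # 0.0 -> recessive cost (e.g. csr1-1)
print(cost_dominance(100.0, 80.0, 80.0))   # 1.0 -> dominant cost (e.g. axr2-1)
print(cost_dominance(100.0, 70.0, 80.0))   # 1.5 -> underdominance (e.g. aux1-7)
```

This is why genotyping heterozygotes with allele-specific PCR matters: h cannot be estimated from the two homozygote classes alone.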
Influence of task switching costs on colony homeostasis
NASA Astrophysics Data System (ADS)
Jeanson, Raphaël; Lachaud, Jean-Paul
2015-06-01
In social insects, division of labour allows colonies to optimise the allocation of workers across all available tasks to satisfy colony requirements. The maintenance of stable conditions within colonies (homeostasis) requires that some individuals move inside the nest to monitor colony needs and execute unattended tasks. We developed a simple theoretical model to explore how worker mobility inside the nest and task switching costs influence the maintenance of stable levels of task-associated stimuli. Our results indicate that worker mobility in large colonies generates important task switching costs and is detrimental to colony homeostasis. Our study suggests that the balance between benefits and costs associated with the mobility of workers patrolling inside the nest depends on colony size. We propose that several species of ants with diverse life-history traits should be appropriate to test the prediction that the proportion of mobile workers should vary during colony ontogeny.
Steady flow model user's guide
NASA Astrophysics Data System (ADS)
Doughty, C.; Hellstrom, G.; Tsang, C. F.; Claesson, J.
1984-07-01
Sophisticated numerical models that solve the coupled mass and energy transport equations for nonisothermal fluid flow in a porous medium have been used to match analytical results and field data for aquifer thermal energy storage (ATES) systems. As an alternative for the ATES problem, the Steady Flow Model (SFM), a simplified but fast numerical model, was developed. A steady, purely radial flow field is prescribed in the aquifer and incorporated into the heat transport equation, which is then solved numerically. While the radial flow assumption limits the range of ATES systems that can be studied using the SFM, it greatly simplifies use of this code. The preparation of input is quite simple compared to that for a sophisticated coupled mass and energy model, and the SFM is far cheaper to run. The simple flow field allows use of a special calculational mesh that eliminates the numerical dispersion usually associated with the numerical solution of convection problems. The problem is defined, the algorithms used to solve it are outlined, and the input and output of the SFM are described.
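A steady, purely radial flow field has a simple closed form, which is part of what makes the SFM cheap: the velocity at radius r follows directly from the injection rate. The expression below is an assumed textbook form for illustration; the SFM's actual formulation may differ in detail:

```python
import math

def radial_pore_velocity(Q, r, b, porosity):
    """Pore velocity of steady, purely radial flow: injection rate Q (m^3/day)
    spread over a cylindrical shell of radius r (m) in an aquifer of thickness
    b (m), divided by porosity to get the pore (seepage) velocity."""
    return Q / (2.0 * math.pi * r * b * porosity)

# Velocity falls off as 1/r away from the well.
v10 = radial_pore_velocity(500.0, 10.0, 20.0, 0.25)
v20 = radial_pore_velocity(500.0, 20.0, 20.0, 0.25)
print(round(v10 / v20, 6))   # 2.0
```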
Strategies for Diagnosing and Treating Suspected Acute Bacterial Sinusitis
Balk, Ethan M; Zucker, Deborah R; Engels, Eric A; Wong, John B; Williams, John W; Lau, Joseph
2001-01-01
OBJECTIVE Symptoms suggestive of acute bacterial sinusitis are common. Available diagnostic and treatment options generate substantial costs with uncertain benefits. We assessed the cost-effectiveness of alternative management strategies to identify the optimal approach. DESIGN For such patients, we created a Markov model to examine four strategies: 1) no antibiotic treatment; 2) empirical antibiotic treatment; 3) clinical criteria-guided treatment; and 4) radiography-guided treatment. The model simulated a 14-day course of illness, included sinusitis prevalence, antibiotic side effects, sinusitis complications, direct and indirect costs, and symptom severity. Strategies costing less than $50,000 per quality-adjusted life year gained were considered “cost-effective.” MEASUREMENTS AND MAIN RESULTS For mild or moderate disease, basing antibiotic treatment on clinical criteria was cost-effective in clinical settings where sinusitis prevalence is within the range of 15% to 93% or 3% to 63%, respectively. For severe disease, or to prevent sinusitis or antibiotic side effect symptoms, use of clinical criteria was cost-effective in settings with lower prevalence (below 51% or 44%, respectively); empirical antibiotics was cost-effective with higher prevalence. Sinus radiography-guided treatment was never cost-effective for initial treatment. CONCLUSIONS Use of a simple set of clinical criteria to guide treatment is a cost-effective strategy in most clinical settings. Empirical antibiotics are cost-effective in certain settings; however, their use results in many unnecessary prescriptions. If this resulted in increased antibiotic resistance, costs would substantially rise and efficacy would fall. Newer, expensive antibiotics are of limited value. Additional testing is not cost-effective. Further studies are needed to find an accurate, low-cost diagnostic test for acute bacterial sinusitis. PMID:11679039
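The $50,000-per-QALY decision rule used in the model is a standard incremental cost-effectiveness comparison. A minimal sketch with hypothetical numbers (the cost and QALY deltas are illustrative, not values from the study):

```python
WTP_THRESHOLD = 50_000.0  # willingness to pay, dollars per QALY gained

def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio of one strategy vs. a comparator."""
    return delta_cost / delta_qaly

def is_cost_effective(delta_cost, delta_qaly, threshold=WTP_THRESHOLD):
    """A strategy is 'cost-effective' when its ICER falls below the threshold."""
    return icer(delta_cost, delta_qaly) < threshold

# Hypothetical: a strategy costs $40 more and gains 0.002 QALY per patient.
print(is_cost_effective(40.0, 0.002))   # ICER = $20,000/QALY -> True
```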
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.
Optimization Under Uncertainty of Site-Specific Turbine Configurations
NASA Astrophysics Data System (ADS)
Quick, J.; Dykes, K.; Graf, P.; Zahle, F.
2016-09-01
Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. If there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.
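The idea of a risk appetite entering the optimization can be sketched with a toy risk-adjusted objective, mean plus a risk-aversion weight times the standard deviation of cost of energy over sampled wind scenarios. Everything below (the two configurations, the performance curve, the costs) is invented for illustration and is not the empirical cost model used in the paper:

```python
# Toy optimization under uncertainty: the "aggressive" design is cheap and
# tuned to the expected wind speed; the "conservative" design costs more but
# tolerates off-design conditions. A risk-averse designer penalizes
# variability via mean + k*std of cost of energy (COE).
import math
import random
import statistics

random.seed(1)
wind = [random.gauss(7.5, 1.2) for _ in range(2000)]  # uncertain site wind, m/s

def coe(config, w):
    cost, width = config
    # invented performance curve: output falls off away from 7.5 m/s
    return cost / math.exp(-((w - 7.5) / width) ** 2)

configs = {"aggressive": (0.9, 2.5), "conservative": (1.2, 4.0)}

def risk_adjusted(config, k):
    vals = [coe(config, w) for w in wind]
    return statistics.mean(vals) + k * statistics.stdev(vals)

for k in (0.0, 1.0):
    best = min(configs, key=lambda name: risk_adjusted(configs[name], k))
    print(f"risk aversion k={k}: choose {best}")
```

The risk-neutral designer (k=0) picks the cheap aggressive design; adding risk aversion (k=1) flips the choice to the conservative one, mirroring the abstract's observation that high resource uncertainty pushes the optimum toward more conservative configurations.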
Development of a funding, cost, and spending model for satellite projects
NASA Technical Reports Server (NTRS)
Johnson, Jesse P.
1989-01-01
The need for a predictive budget/funding model is obvious. The current models used by the Resource Analysis Office (RAO) predict the total costs of satellite projects. An effort was conducted to extend the modeling capabilities from total budget analysis to analysis of total budget and budget outlays over time. A statistically based, data-driven methodology was used to derive and develop the model. The budget data for the last 18 GSFC-sponsored satellite projects were analyzed and used to build a funding model that describes the historical spending patterns. The raw data consisted of dollars spent in each specific year and their 1989 dollar equivalents. These data were converted to the standard format used by the RAO group and placed in a database. A simple statistical analysis was performed to calculate the gross statistics associated with project length and project cost and the conditional statistics on project length and project cost. The modeling approach used is derived from the theory of embedded statistics, which states that properly analyzed data will produce the underlying generating function. The process of funding large-scale projects over extended periods of time is described by Life Cycle Cost Models (LCCM). The data were analyzed to find a model in the generic form of an LCCM. The model developed is based on a Weibull function whose parameters are found by both nonlinear optimization and nonlinear regression. In order to use this model it is necessary to transform the problem from a dollar/time space to a percentage-of-total-budget/time space. This transformation is equivalent to moving to a probability space. By using the basic rules of probability, the validity of both the optimization and the regression steps is ensured. This statistically significant model is then integrated and inverted. The resulting output represents a project schedule which relates the amount of money spent to the percentage of project completion.
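The core idea, cumulative spending as a fraction of total budget following a Weibull curve, can be sketched as a small fit. The yearly spend fractions below are invented, and a crude grid search stands in for the paper's nonlinear optimization/regression:

```python
# Hedged sketch: fit a Weibull CDF, F(t) = 1 - exp(-(t/scale)**shape),
# to cumulative spend fractions of a hypothetical 6-year project.
import math

def weibull_cdf(t, shape, scale):
    return 1.0 - math.exp(-((t / scale) ** shape))

# invented cumulative spend fractions at yearly intervals
years    = [1, 2, 3, 4, 5, 6]
observed = [0.05, 0.22, 0.48, 0.72, 0.90, 0.98]

def sse(shape, scale):
    return sum((weibull_cdf(t, shape, scale) - f) ** 2
               for t, f in zip(years, observed))

# crude grid search in place of true nonlinear regression
best = min(((sh / 10, sc / 10) for sh in range(10, 40) for sc in range(20, 60)),
           key=lambda p: sse(*p))
print("fitted (shape, scale):", best)
print("predicted fraction complete at year 3: %.2f" % weibull_cdf(3, *best))
```

Inverting the fitted curve (finding the time at which a given fraction of the budget is spent) then yields the schedule-versus-spend relationship the abstract describes.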
Simulation Based Low-Cost Composite Process Development at the US Air Force Research Laboratory
NASA Technical Reports Server (NTRS)
Rice, Brian P.; Lee, C. William; Curliss, David B.
2003-01-01
Low-cost composite research in the US Air Force Research Laboratory, Materials and Manufacturing Directorate, Organic Matrix Composites Branch has focused on the theme of affordable performance. Practically, this means that we take a very broad view when considering the affordability of composites. Factors such as material costs, labor costs, and recurring and nonrecurring manufacturing costs are balanced against performance to arrive at the relative affordability vs. performance measure of merit. The research efforts discussed here are two projects focused on affordable processing of composites. The first topic is the use of a neural network scheme to model cure reaction kinetics, then utilize the kinetics coupled with simple heat transport models to predict, in real time, future exotherms and control them. The neural network scheme is demonstrated to be very robust and a much more efficient method than the mechanistic cure modeling approach. This enables very practical low-cost processing of thick composite parts. The second project is liquid composite molding (LCM) process simulation. LCM processing of large 3D integrated composite parts has been demonstrated to be a very cost effective way to produce large integrated aerospace components; specific examples of LCM processes are resin transfer molding (RTM), vacuum assisted resin transfer molding (VARTM), and other similar approaches. LCM process simulation is a critical part of developing an LCM process approach. Flow simulation enables the development of the most robust approach to introducing resin into complex preforms. Furthermore, LCM simulation can be used in conjunction with flow front sensors to control the LCM process in real time to account for preform or resin variability.
NASA Technical Reports Server (NTRS)
Prince, Frank A.
2017-01-01
Building a parametric cost model is hard work. The data is noisy and often does not behave like we want it to. We need statistics to give us an indication of the goodness of our models, but statistics can be manipulated and can mislead. On top of all of that, our own very human biases can lead us astray, causing us to see patterns in the noise and draw false conclusions from the data. Yet, it is the data itself that is the foundation for making better cost estimates and cost models. I believe the mistake we often make is that we believe our models are representative of the data; that our models summarize the experiences, the knowledge, and the stories contained in the data. However, it is the opposite that is true. Our models are but imitations of reality. They give us trends, but not truth. The experiences, the knowledge, and the stories that we need in order to make good cost estimates are bound up in the data. You cannot separate good cost estimating from a knowledge of the historical data. One final thought. It is our attempts to make sense out of the randomness that lead us astray. In order to make progress as cost modelers and cost estimators, we must accept that there are real limitations on our ability to model the past and predict the future. I do not believe we should throw up our hands and say this is the best we can do. Rather, to see real improvement we must first recognize these limitations, avoid the easy but misleading solutions, and seek to find ways to better model the world we live in. I don't have any simple solutions. Perhaps the answers lie in better data or in a totally different approach to simulating how the world works. All I know is that we must do our best to speak truth to ourselves and our customers. Misleading ourselves and our customers will, in the end, result in an inability to have a positive impact on those we serve.
Modelling cost-effectiveness of different vasectomy methods in India, Kenya, and Mexico
Seamans, Yancy; Harner-Jay, Claudia M
2007-01-01
Background Vasectomy is generally considered a safe and effective method of permanent contraception. The historical effectiveness of vasectomy has been questioned by recent research results indicating that the most commonly used method of vasectomy – simple ligation and excision (L and E) – appears to have a relatively high failure rate, with reported pregnancy rates as high as 4%. Updated methods such as fascial interposition (FI) and thermal cautery can lower the rate of failure but may require additional financial investments and may not be appropriate for low-resource clinics. In order to better compare the cost-effectiveness of these different vasectomy methods, we modelled the costs of different vasectomy methods using cost data collected in India, Kenya, and Mexico and effectiveness data from the latest published research. Methods The costs associated with providing vasectomies were determined in each country through interviews with clinic staff. Costs collected were economic, direct, programme costs of fixed vasectomy services but did not include large capital expenses or general recurrent costs for the health care facility. Estimates of the time required to provide service were gained through interviews and training costs were based on the total costs of vasectomy training programmes in each country. Effectiveness data were obtained from recent published studies and comparative cost-effectiveness was determined using cost per couple years of protection (CYP). Results In each country, the labour to provide the vasectomy and follow-up services accounts for the greatest portion of the overall cost. Because each country almost exclusively used one vasectomy method at all of the clinics included in the study, we modelled costs based on the additional material, labour, and training costs required in each country. 
Using a model of a robust vasectomy program, more effective methods such as FI and thermal cautery reduce the cost per CYP of a vasectomy by $0.08 – $0.55. Conclusion Based on the results presented, more effective methods of vasectomy – including FI, thermal cautery, and thermal cautery combined with FI – are more cost-effective than L and E alone. Analysis shows that for a programme in which a minimum of 20 clients undergo vasectomies per month, the cost per CYP is reduced in all three countries by updated vasectomy methods. PMID:17629921
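The cost-per-CYP comparison can be sketched as follows. All inputs are invented for illustration; in particular, the 10-CYP-per-vasectomy crediting convention and the costs are assumptions, not figures from the study (only the 4% L and E failure rate appears in the abstract):

```python
# Back-of-envelope cost per couple-year of protection (CYP) for two
# vasectomy methods: a more effective method yields more effective
# protection per procedure at a slightly higher procedure cost.
CYP_PER_VASECTOMY = 10.0  # assumed crediting convention, not from the paper

def cost_per_cyp(procedure_cost, failure_rate):
    effective_cyp = CYP_PER_VASECTOMY * (1.0 - failure_rate)
    return procedure_cost / effective_cyp

lig_excision = cost_per_cyp(procedure_cost=100.00, failure_rate=0.04)
cautery_fi   = cost_per_cyp(procedure_cost=101.50, failure_rate=0.01)
print(f"L and E:      ${lig_excision:.2f} per CYP")
print(f"cautery + FI: ${cautery_fi:.2f} per CYP")
print(f"saving:       ${lig_excision - cautery_fi:.2f} per CYP")
```

With these assumed inputs the updated method saves about $0.16 per CYP, which happens to fall inside the $0.08-$0.55 range reported above; the study's actual savings come from country-specific material, labour, and training costs.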
NASA Astrophysics Data System (ADS)
Ye, Xiaosheng; Shi, Hui; He, Xiaoxiao; Yu, Yanru; He, Dinggeng; Tang, Jinlu; Lei, Yanli; Wang, Kemin
2016-01-01
As a star material in cancer theranostics, photoresponsive gold (Au) nanostructures may still have drawbacks, such as low thermal conductivity, irradiation-induced melting effect and high cost. To solve the problem, copper (Cu) with a much higher thermal conductivity and lower cost was introduced to generate a novel Cu-Au alloy nanostructure produced by a simple, gentle and one-pot synthetic method. Having the good qualities of both Cu and Au, the irregularly-shaped Cu-Au alloy nanostructures showed several advantages over traditional Au nanorods, including a broad and intense near-infrared (NIR) absorption band from 400 to 1100 nm, an excellent heating performance under laser irradiation at different wavelengths and even a notable photostability against melting. Then, via a simple conjugation of fluorophore-labeled aptamers on the Cu-Au alloy nanostructures, active targeting and signal output were simultaneously introduced, thus constructing a theranostic platform based on fluorophore-labeled, aptamer-coated Cu-Au alloy nanostructures. By using human leukemia CCRF-CEM cancer and Cy5-labeled aptamer Sgc8c (Cy5-Sgc8c) as the model, a selective fluorescence imaging and NIR photothermal therapy was successfully realized for both in vitro cancer cells and in vivo tumor tissues. It was revealed that Cy5-Sgc8c-coated Cu-Au alloy nanostructures were not only capable of robust target recognition and stable signal output for molecular imaging in complex biological systems, but also killed target cancer cells in mice with only five minutes of 980 nm irradiation. The platform was found to be simple, stable, biocompatible and highly effective, and shows great potential as a versatile tool for cancer theranostics. Electronic supplementary information (ESI) available: Fig. S1, S2 and Table S1. See DOI: 10.1039/c5nr07017a
AskIT Service Desk Support Value Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashcraft, Phillip Lynn; Cummings, Susan M.; Fogle, Blythe G.
The value model discussed herein provides an accurate and simple calculation of the funding required to adequately staff the AskIT Service Desk (SD). The model is incremental – only technical labor cost is considered. All other costs, such as management, equipment, buildings, HVAC, and training, are considered common elements of providing any labor-related IT service. Depending on the amount of productivity loss and the number of hours a defect remains unresolved, resolving work from the SD is unquestionably an economic winner; the average cost of $16 per SD resolution can commonly translate to cost avoidance well over $100. Attempting to extract too much from the SD will likely create a significant downside. The analysis used to develop the value model indicates that the utilization of the SD is very high (approximately 90%). As a benchmark, consider a comment from a manager at Vitalyst (a commercial IT service desk) that their utilization target is approximately 60%. While high SD utilization is impressive, over the long term it is likely to cause unwanted consequences to staff such as higher turnover, illness, or burnout. A better solution is to staff the SD so that analysts have time to improve skills through training, develop knowledge, improve processes, collaborate with peers, and improve customer relationship skills.
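The incremental value calculation described above can be sketched in a few lines. Only the $16-per-resolution average comes from the abstract; the hourly rate, hours unresolved, and productivity-loss fraction are invented for illustration:

```python
# Hedged sketch of the SD value model: cost avoided by resolving a defect
# promptly, compared against the incremental labor cost of the resolution.
def value_of_resolution(loaded_hourly_rate, hours_unresolved, productivity_loss):
    """Productivity cost avoided by resolving a defect vs. leaving it open."""
    return loaded_hourly_rate * hours_unresolved * productivity_loss

cost_per_resolution = 16.00          # average SD cost, from the abstract
avoided = value_of_resolution(loaded_hourly_rate=75.0,   # assumed
                              hours_unresolved=8.0,      # assumed
                              productivity_loss=0.25)    # assumed
print(f"avoided cost ${avoided:.2f} vs resolution cost ${cost_per_resolution:.2f}")
print(f"net value ${avoided - cost_per_resolution:.2f}")
```

Even with modest assumptions the avoided cost ($150 here) comfortably exceeds the $16 resolution cost, consistent with the "well over $100" claim above.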
NASA Astrophysics Data System (ADS)
Rosyidi, C. N.; Puspitoingrum, W.; Jauhari, W. A.; Suhardi, B.; Hamada, K.
2016-02-01
The specification of tolerances has a significant impact on product quality and final production cost. A company should pay careful attention to component and product tolerances so that it can produce a good quality product at the lowest cost. Tolerance allocation has been widely used to solve the problem of selecting a particular process or supplier. Before the selection process, however, the company must first analyse whether a component should be made in house (make), purchased from a supplier (buy), or sourced through a combination of both. This paper discusses an optimization model of process and supplier selection that minimizes the manufacturing costs and the fuzzy quality loss. The model can also be used to determine the allocation of components to the selected processes or suppliers. Tolerance, process capability and production capacity are three important constraints that affect the decision. A fuzzy quality loss function is used in this paper to describe the semantics of quality, in which the product quality level is divided into several grades. The implementation of the proposed model is demonstrated by solving a numerical example problem involving a simple assembled product consisting of three components. A metaheuristic approach was implemented in the OptQuest software from Oracle Crystal Ball to obtain the optimal solution of the numerical example.
Jamison, Aaron; Benjamin, Larry; Lockington, David
2018-06-06
Surgical adjuncts in cataract surgery are often perceived as sometimes necessary, always expensive, particularly in the "lean" cost-saving era. However, prevention of a surgical complication, rather than subsequent management, should always be the preferred strategy. We wished to model real-world costs associated with surgical adjunct use and test the cataract surgery maxim: "if you think of it, use it". We compared UK list prices for equipment and related costs of preventing vitreous loss (VL) via use of surgical adjuncts vs its subsequent management in a hypothetical cataract surgery scenario of a white swollen cataract with a moderately dilated pupil. The original surgery cost for the "cautious with adjuncts, no complications" approach was £943.54, including adjuncts costing £137.47. In the "minimalist, no adjunct" scenario, management of VL using the Anterior Vitrectomy Kit cost £142.45, and additional management and follow-up costs resulted in a total cost of £1178.20 (£234.66 (25%) more expensive). If the patient was left aphakic, an additional operation for secondary iris clip IOL insertion and further follow-up to address the impact of the complication ultimately cost £2124.67 overall. An additional initial spend on surgical adjuncts of £137.47 could potentially prevent £1293.60 (a 9× increase) in direct costs in this scenario. Through simple scenario modelling, we have demonstrated the cost benefits provided by the use of precautionary surgical adjuncts during cataract surgery. VL costs significantly more in terms of complication management and follow-up. This supports the cataract surgeon's maxim: "if you think of it, use it".
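The scenario arithmetic above reduces to a simple comparison. The figures below are the list-price totals quoted in the abstract, arranged so the incremental differences can be checked directly:

```python
# Cost comparison using the totals quoted in the abstract (arithmetic only).
cautious_total   = 943.54   # surgery including £137.47 of adjuncts
adjunct_cost     = 137.47
minimalist_total = 1178.20  # VL managed intraoperatively + extra follow-up
aphakic_total    = 2124.67  # worst case: secondary iris clip IOL operation

print(f"VL scenario costs £{minimalist_total - cautious_total:.2f} "
      f"more than the cautious approach")
print(f"worst (aphakic) case costs £{aphakic_total - cautious_total:.2f} more")
```

This reproduces the £234.66 difference quoted above; the abstract's £1293.60 figure is the authors' own accounting of preventable direct costs in the worst-case pathway.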
Assessing map accuracy in a remotely sensed, ecoregion-scale cover map
Edwards, T.C.; Moisen, Gretchen G.; Cutler, D.R.
1998-01-01
Landscape- and ecoregion-based conservation efforts increasingly use a spatial component to organize data for analysis and interpretation. A challenge particular to remotely sensed cover maps generated from these efforts is how best to assess the accuracy of the cover maps, especially when they can exceed 1000s of km2 in size. Here we develop and describe a methodological approach for assessing the accuracy of large-area cover maps, using as a test case the 21.9 million ha cover map developed for Utah Gap Analysis. As part of our design process, we first reviewed the effect of intracluster correlation and a simple cost function on the relative efficiency of cluster sample designs to simple random designs. Our design ultimately combined clustered and subsampled field data stratified by ecological modeling unit and accessibility (hereafter a mixed design). We next outline estimation formulas for simple map accuracy measures under our mixed design and report results for eight major cover types and the three ecoregions mapped as part of the Utah Gap Analysis. Overall accuracy of the map was 83.2% (SE=1.4). Within ecoregions, accuracy ranged from 78.9% to 85.0%. Accuracy by cover type varied, ranging from a low of 50.4% for barren to a high of 90.6% for man-modified. In addition, we examined gains in efficiency of our mixed design compared with a simple random sample approach. In regard to precision, our mixed design was more precise than a simple random design, given fixed sample costs. We close with a discussion of the logistical constraints facing attempts to assess the accuracy of large-area, remotely sensed cover maps.
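A design-based overall-accuracy estimate for a stratified assessment like this one can be sketched as an area-weighted sum of per-stratum accuracies. The strata, weights, and counts below are invented (this is not the Utah Gap Analysis data, and it omits the cluster-correlation adjustments the paper develops):

```python
# Hedged sketch: stratified overall accuracy with a simple binomial
# standard error, ignoring intracluster correlation.
import math

# per-stratum: (map-area weight, n sampled, n correctly classified)
strata = [(0.50, 200, 172), (0.30, 150, 118), (0.20, 100, 79)]

acc = sum(w * (c / n) for w, n, c in strata)                    # weighted accuracy
var = sum(w**2 * (c / n) * (1 - c / n) / (n - 1) for w, n, c in strata)
print(f"overall accuracy {100 * acc:.1f}% (SE = {100 * math.sqrt(var):.1f})")
```

With these invented counts the estimate is 82.4% with an SE near 1.8 percentage points, the same form of result (accuracy plus SE) the abstract reports.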
Heuristics and Biases in Military Decision Making
2010-10-01
rationality and is based on a linear, step-based model that generates a specific course of action and is useful for the examination of problems that...exhibit stability and are underpinned by assumptions of “technical-rationality.”5 The Army values MDMP as the sanctioned approach for solving...theory) which sought to describe human behavior as a rational maximization of cost-benefit decisions, Kahneman and Tversky provided a simple
The Structured Intuitive Model for Product Line Economics (SIMPLE)
2005-02-01
units are features and use cases. A feature is just as nebulous as a requirement, but techniques such as feature-oriented domain analysis (FODA) [Kang 90...cost avoidance DM design modified DOCU degree of documentation GQM Goal Question Metric FODA feature-oriented domain analysis IM integration effort...Hess, J.; Novak, W.; & Peterson, A. Feature-Oriented Domain Analysis (FODA) Feasibility Study (CMU/SEI-90-TR-021, ADA235785). Pittsburgh, PA
Some Results Bearing on the Value of Improvements of Membranes for Reverse Osmosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamont, A
2006-03-08
This analysis evaluates the potential economic benefits that could result from the improvements in the permeability of membranes for reverse osmosis. The discussion provides a simple model of the operation of a reverse osmosis plant. It examines the change in the operation that might result from improvements in the membrane and computes the cost of water as a function of the membrane permeability.
Perona, Paolo; Dürrenmatt, David J; Characklis, Gregory W
2013-03-30
We propose a theoretical river modeling framework for generating variable flow patterns in diverted streams (i.e., no reservoir). Using a simple economic model and the principle of equal marginal utility in an inverse fashion, we first quantify the benefit of the water that goes to the environment in relation to that of the anthropic activity. Then, we obtain exact expressions for optimal water allocation rules between the two competing uses, as well as the related statistical distributions. These rules are applied using both synthetic and observed streamflow data, to demonstrate that this approach may be useful in 1) generating more natural flow patterns in the river reach downstream of the diversion, thus reducing the ecodeficit; 2) obtaining a more enlightened economic interpretation of Minimum Flow Release (MFR) strategies; and 3) comparing the long-term costs and benefits of variable versus MFR policies, showing the greater ecological sustainability of this new approach. Copyright © 2013 Elsevier Ltd. All rights reserved.
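The equal-marginal-utility principle invoked above says that a fixed flow Q is optimally split when the marginal benefit of the last unit is the same for both uses. A minimal numerical sketch, with invented diminishing-returns benefit functions (not the paper's calibrated model):

```python
# Sketch of equal-marginal-utility allocation: split flow Q between the
# environment (q) and withdrawal (Q - q) so marginal benefits are equal,
# which maximizes the sum of two concave benefit functions.
def marginal_env(q):
    return 1.0 / (1.0 + q)        # assumed diminishing environmental benefit

def marginal_use(q):
    return 0.8 / (0.5 + q)        # assumed diminishing withdrawal benefit

def allocate(Q):
    """Bisection on q: marginal_env(q) - marginal_use(Q - q) is decreasing."""
    lo, hi = 0.0, Q
    for _ in range(60):
        q = (lo + hi) / 2
        if marginal_env(q) > marginal_use(Q - q):
            lo = q
        else:
            hi = q
    return q

q_env = allocate(10.0)
print(f"environmental flow {q_env:.2f}, withdrawal {10.0 - q_env:.2f}")
```

For these assumed benefit curves the split can also be solved in closed form (1/(1+q) = 0.8/(10.5-q) gives q = 9.7/1.8 ≈ 5.39), which the bisection reproduces.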
Trans-African Hydro-Meteorological Observatory
NASA Astrophysics Data System (ADS)
van de Giesen, N.; Andreini, M.; Selker, J.
2009-04-01
Our computing capacity to model hydrological processes is such that we can readily model every hectare of the globe's surface in real time. Satellites provide us with important state observations that allow us to calibrate our models and estimate model errors. Still, ground observations will remain necessary to obtain data that cannot readily be observed from space. Hydro-meteorological data availability is particularly scarce in Africa. This presentation launches a simple idea by which Africa can leapfrog into a new era of closely knit environmental observation networks. The basic idea is the design of a robust measurement station, based on the smart use of new sensors without moving parts. For example, instead of using a Eu 5000 long-wave pyrgeometer, a factory calibrated IR microwave oven sensor is used that costs less than Eu 10. In total, each station should cost Eu 200 or less. Every 30 km, one station will be installed, being equivalent to 20,000 stations for all of sub-Saharan Africa. The roll-out will follow the XO project ("$100 computer") and focus on high schools. The stations will be accompanied by an educational package that allows high school children to learn about their environment, measurements, electronics, and mathematical modeling. Total program costs lie around MEu 18.
Characterization of simple wireless neurostimulators and sensors.
Gulick, Daniel W; Towe, Bruce C
2014-01-01
A single diode with a wireless power source and electrodes can act as an implantable stimulator or sensor. We have built such devices using RF and ultrasound power coupling. These simple devices could drastically reduce the size, weight, and cost of implants for applications where efficiency is not critical. However, a shortcoming has been a lack of control: any movement of the external power source would change the power coupling, thereby changing the stimulation current or modulating the sensor response. To correct for changes in power and signal coupling, we propose to use harmonic signals from the device. The diode acts as a frequency multiplier, and the harmonics it emits contain information about the drive level and bias. A simplified model suggests that estimation of power, electrode bias, and electrode resistance is possible from information contained in radiated harmonics even in the presence of significant noise. We also built a simple RF-powered stimulator with an onboard voltage limiter.
A simple and low-cost permanent magnet system for NMR.
Chonlathep, K; Sakamoto, T; Sugahara, K; Kondo, Y
2017-02-01
We have developed a simple, easy-to-build, and low-cost magnet system for NMR, whose homogeneity is about 4×10^-4 at 57 mT, using a pair of commercially available ferrite magnets. This homogeneity corresponds to about 90 Hz spectral resolution at 2.45 MHz, the hydrogen Larmor frequency. The material cost of this NMR magnet system is little more than $100. The components can be printed by a 3D printer. Copyright © 2016 Elsevier Inc. All rights reserved.
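The quoted operating frequency follows from the Larmor relation f = γB, a standard check worth making explicit (γ for ¹H is approximately 42.577 MHz/T):

```python
# Quick check of the abstract's numbers: proton Larmor frequency at 57 mT.
GAMMA_H = 42.577e6   # gyromagnetic ratio of 1H, Hz per tesla
B0 = 0.057           # field strength, tesla (57 mT)

f_larmor = GAMMA_H * B0
print(f"Larmor frequency ~ {f_larmor / 1e6:.2f} MHz")
```

This gives about 2.43 MHz, consistent with the roughly 2.45 MHz quoted in the abstract.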
Simple Automatic File Exchange (SAFE) to Support Low-Cost Spacecraft Operation via the Internet
NASA Technical Reports Server (NTRS)
Baker, Paul; Repaci, Max; Sames, David
1998-01-01
Various issues associated with Simple Automatic File Exchange (SAFE) are presented in viewgraph form. Specific topics include: 1) Packet telemetry, Internet IP networks and cost reduction; 2) Basic functions and technical features of SAFE; 3) Project goals, including low-cost satellite transmission to data centers to be distributed via an Internet; 4) Operations with a replicated file protocol; 5) File exchange operation; 6) Ground stations as gateways; 7) Lessons learned from demonstrations and tests with SAFE; and 8) Feedback and future initiatives.
A fast analytical undulator model for realistic high-energy FEL simulations
NASA Astrophysics Data System (ADS)
Tatchyn, R.; Cremer, T.
1997-02-01
A number of leading FEL simulation codes used for modeling gain in the ultralong undulators required for SASE saturation in the <100 Å range employ simplified analytical models both for field and error representations. Although it is recognized that both the practical and theoretical validity of such codes could be enhanced by incorporating realistic undulator field calculations, the computational cost of doing this can be prohibitive, especially for point-to-point integration of the equations of motion through each undulator period. In this paper we describe a simple analytical model suitable for modeling realistic permanent magnet (PM), hybrid/PM, and non-PM undulator structures, and discuss selected techniques for minimizing computation time.
Turbulent shear layers in confining channels
NASA Astrophysics Data System (ADS)
Benham, Graham P.; Castrejon-Pita, Alfonso A.; Hewitt, Ian J.; Please, Colin P.; Style, Rob W.; Bird, Paul A. D.
2018-06-01
We present a simple model for the development of shear layers between parallel flows in confining channels. Such flows are important across a wide range of topics from diffusers, nozzles and ducts to urban air flow and geophysical fluid dynamics. The model approximates the flow in the shear layer as a linear profile separating uniform-velocity streams. Both the channel geometry and wall drag affect the development of the flow. The model shows good agreement with both particle image velocimetry experiments and computational turbulence modelling. The simplicity and low computational cost of the model allows it to be used for benchmark predictions and design purposes, which we demonstrate by investigating optimal pressure recovery in diffusers with non-uniform inflow.
Course Keeping Control of an Autonomous Boat using Low Cost Sensors
NASA Astrophysics Data System (ADS)
Yu, Zhenyu; Bao, Xinping; Nonami, Kenzo
This paper discusses the course keeping control problem for a small autonomous boat using low cost sensors. Compared with full-scale ships, a small boat is more sensitive to environmental disturbances because of its small size and low inertia. The sensors available on the boat are a low cost GPS and a rate gyro, while the compass commonly used in ship control is absent. The combined effect of disturbance, poor accuracy and significant delay in GPS measurement makes it a challenging task to achieve good performance. In this paper, we propose a simple dynamic model for the boat's horizontal motion. The model is based on Nomoto's model and can be seen as an extension of it. The model describes the dynamics between rudder deflection and the boat's velocity vector angle, while Nomoto's model describes those between rudder deflection and the boat's yaw angle. With the proposed model there is no need for a yaw sensor for control if the boat's moving direction can be measured; GPS is a convenient device for that job. Based on the derived model, we apply the mixed H2/H∞ control method to design the controller. It guarantees robust stability and, at the same time, optimizes performance in the sense of the H2 norm. The experimental data show that the proposed approach is effective and useful.
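The first-order Nomoto model the paper builds on relates yaw rate r to rudder deflection δ by T·ṙ + r = K·δ. A minimal simulation sketch, with invented gain and time constant (the paper's extension replaces the yaw angle with the velocity vector angle, which is not modeled here):

```python
# Forward-Euler integration of the first-order Nomoto yaw model:
#   T * r_dot + r = K * delta,   psi_dot = r
K, T = 0.3, 2.0      # assumed Nomoto gain (1/s) and time constant (s)
dt = 0.01            # integration step, s
r = psi = 0.0        # yaw rate (rad/s) and heading (rad)
delta = 0.1          # constant rudder deflection (rad)

for _ in range(int(20 / dt)):          # simulate 20 s
    r_dot = (K * delta - r) / T
    r += r_dot * dt
    psi += r * dt

print(f"yaw rate -> {r:.4f} rad/s (steady state K*delta = {K * delta:.4f})")
```

After many time constants the yaw rate settles at K·δ, the steady-turn behavior that makes Nomoto-type models convenient for heading-control design.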
Evaporation estimation of rift valley lakes: comparison of models.
Melesse, Assefa M; Abtew, Wossenu; Dessalegne, Tibebe
2009-01-01
Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the world. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with an acceptable level of accuracy. Remote sensing approaches can also be applied to large areas where meteorological data are not available and field-scale data collection is costly, time consuming and difficult. We studied the applicability of the Simple Method (Abtew Method) of lake evaporation estimation and of a remote sensing-based surface energy balance approach in areas like the Rift Valley regions of Ethiopia. Lake evaporation estimates from the Simple Method and from remote sensing were compared to the Penman, Energy Balance, Pan, Radiation and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate a good correspondence of the models' outputs to those of the above methods. Comparison of the 1986 and 2000 monthly lake ET from Landsat images to the Simple and Penman Methods shows that the remote sensing and surface energy balance approach is promising for large-scale applications to understand the spatial variation of the latent heat flux.
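The Simple (Abtew) Method referenced above is a one-coefficient, radiation-based equation, E = K·Rs/λ. A minimal sketch, assuming the commonly cited coefficient K = 0.53 and a latent heat of vaporization of 2.45 MJ/kg (values not stated in this abstract):

```python
def abtew_simple_evaporation(rs_mj_m2_day, k=0.53, latent_heat=2.45):
    """Abtew 'Simple Method': E = K * Rs / lambda.
    rs: incoming solar radiation (MJ m^-2 day^-1); returns evaporation in
    mm/day (1 kg of water per m^2 equals 1 mm depth). K = 0.53 is the
    coefficient commonly cited for the method; treat it as an assumption."""
    return k * rs_mj_m2_day / latent_heat

e = abtew_simple_evaporation(20.0)  # e.g. Rs = 20 MJ m^-2 day^-1
```

Its appeal in data-scarce regions is clear from the sketch: solar radiation is the only required input.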
Public Housing: A Tailored Approach to Energy Retrofits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dentz, J.; Conlin, F.; Podorson, D.
2014-06-01
Over one million HUD-supported public housing units provide rental housing for eligible low-income families across the country. A survey of over 100 public housing authorities (PHAs) indicated a high level of interest in developing low cost solutions that improve energy efficiency and can be seamlessly included in the refurbishment process. Further, PHAs have incentives (both internal and external) to reduce utility bills. ARIES worked with two PHAs to develop packages of energy efficiency retrofit measures the PHAs can cost effectively implement with their own staffs in the normal course of housing operations, at the time when units are refurbished between occupancies. The energy efficiency turnover protocols emphasized air infiltration reduction, duct sealing and measures that improve equipment efficiency. ARIES documented implementation in ten housing units. Reductions in average air leakage were 16-20% and duct leakage reductions averaged 38%. Total source energy consumption savings were estimated at 6-10% based on BEopt modeling, with a simple payback of 1.7 to 2.2 years. Implementation challenges were encountered, mainly related to required operational changes and budgetary constraints. Nevertheless, simple measures can feasibly be accomplished by PHA staff at low or no cost. At typical housing unit turnover rates, these measures could impact hundreds of thousands of units per year nationally.
Preliminary Analysis of a Water Shield for a Surface Power Reactor
NASA Technical Reports Server (NTRS)
Pearson, J. Boise
2006-01-01
A water-based shielding system is being investigated for use on initial lunar surface power systems. The use of water may lower overall cost (as compared with development costs for other materials) and simplify setup and handling operations. The thermal hydraulic performance of the shield is of significant interest; the mechanism for transferring heat through the shield is natural convection. A simple 1-D thermal model indicates that natural convection is necessary to maintain acceptable temperatures and pressures in the water shield. CFD analysis is performed to quantify the natural convection in the shield, and predicts sufficient natural convection to transfer heat through the shield with small temperature gradients. A test program will be designed to experimentally verify the thermal hydraulic performance of the shield and to anchor the CFD models to experimental results.
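The necessity of natural convection can be illustrated with a conduction-only estimate from Fourier's law; the heat flux and shield thickness below are assumed for illustration and are not taken from the paper:

```python
def conduction_delta_t(q_flux_w_m2, thickness_m, k_w_mk=0.6):
    """Temperature drop across a slab if heat moved by conduction alone:
    dT = q'' * L / k (Fourier's law); k = 0.6 W/m-K for water."""
    return q_flux_w_m2 * thickness_m / k_w_mk

# assumed numbers: 1 kW/m^2 through half a metre of water
dT = conduction_delta_t(1000.0, 0.5)
```

A drop of this size (hundreds of kelvin) would boil the water, so buoyancy-driven convection must carry most of the heat, consistent with the 1-D model's conclusion.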
A process model to estimate biodiesel production costs.
Haas, Michael J; McAloon, Andrew J; Yee, Winnie C; Foglia, Thomas A
2006-03-01
'Biodiesel' is the name given to a renewable diesel fuel that is produced from fats and oils. It consists of the simple alkyl esters of fatty acids, most typically the methyl esters. We have developed a computer model to estimate the capital and operating costs of a moderately sized industrial biodiesel production facility. The major process operations in the plant were continuous-process vegetable oil transesterification, and ester and glycerol recovery. The model was designed using contemporary process simulation software, and current reagent, equipment and supply costs, following current production practices. Crude, degummed soybean oil was specified as the feedstock. Annual production capacity of the plant was set at 37,854,118 l (10 × 10^6 gal). Facility construction costs were calculated to be US$11.3 million. The largest contributors to the equipment cost, accounting for nearly one third of expenditures, were storage tanks to contain a 25 day capacity of feedstock and product. At a value of US$0.52/kg ($0.236/lb) for feedstock soybean oil, a biodiesel production cost of US$0.53/l ($2.00/gal) was predicted. The single greatest contributor to this value was the cost of the oil feedstock, which accounted for 88% of total estimated production costs. An analysis of the dependence of production costs on the cost of the feedstock indicated a direct linear relationship between the two, with a change of US$0.020/l ($0.075/gal) in product cost per US$0.022/kg ($0.01/lb) change in oil cost. Process economics included the recovery of coproduct glycerol generated during biodiesel production, and its sale into the commercial glycerol market as an 80% w/w aqueous solution, which reduced production costs by approximately 6%. The production cost of biodiesel was found to vary inversely and linearly with variations in the market value of glycerol, increasing by US$0.0022/l ($0.0085/gal) for every US$0.022/kg ($0.01/lb) reduction in glycerol value. The model is flexible in that it can be modified to calculate the effects on capital and production costs of changes in feedstock cost, changes in the type of feedstock employed, changes in the value of the glycerol coproduct, and changes in process chemistry and technology.
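The reported feedstock sensitivity is a straight line through the base case; a sketch using only the figures quoted in the abstract:

```python
def biodiesel_cost_per_litre(oil_cost_per_kg,
                             base_oil=0.52, base_product=0.53,
                             slope=0.020 / 0.022):
    """Linear feedstock sensitivity reported in the abstract: product cost
    changes US$0.020/l per US$0.022/kg change in oil cost, through the
    base case of US$0.53/l at US$0.52/kg oil."""
    return base_product + slope * (oil_cost_per_kg - base_oil)

c_base = biodiesel_cost_per_litre(0.52)    # base case
c_up = biodiesel_cost_per_litre(0.542)     # oil up by US$0.022/kg
```

With feedstock at 88% of production cost, the near-unit elasticity implied by this line is unsurprising.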
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making and are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of model results against uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
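An acceptability curve of the kind described can be sketched as the fraction of probabilistic sensitivity analysis (PSA) draws in which a chosen policy has the highest net monetary benefit at each willingness-to-pay value. The two policies and their cost/effect distributions below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# synthetic PSA draws of (cost, effect) for two policies -- illustrative only
cost = {"A": rng.normal(1000, 100, n), "B": rng.normal(1500, 150, n)}
effect = {"A": rng.normal(1.0, 0.1, n), "B": rng.normal(1.2, 0.1, n)}

def acceptability(base, wtp):
    """Fraction of PSA draws in which policy `base` has the highest net
    monetary benefit NMB = wtp*effect - cost: one point on an
    acceptability curve."""
    nmb = {k: wtp * effect[k] - cost[k] for k in cost}
    best = np.maximum(nmb["A"], nmb["B"])
    return float(np.mean(nmb[base] == best))

curve = [acceptability("B", w) for w in (0, 2500, 5000, 10000)]
```

The costlier but more effective policy B is rarely "accepted" at low willingness to pay and increasingly accepted as the threshold rises.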
A simple hyperbolic model for communication in parallel processing environments
NASA Technical Reports Server (NTRS)
Stoica, Ion; Sultan, Florin; Keyes, David
1994-01-01
We introduce a model for communication costs in parallel processing environments called the 'hyperbolic model,' which generalizes two-parameter dedicated-link models in an analytically simple way. Dedicated interprocessor links parameterized by a latency and a transfer rate that are independent of load are assumed by many existing communication models; such models are unrealistic for workstation networks. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes that initiate the sending and receiving of the information and in which internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. The direction of graph edges specifies the flow of the information carried through messages. Each CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. The parameters are evaluated in the limits of very large and very small messages. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining an approximation for the service time that is exact in both large and small limits. The model is validated on a dedicated Ethernet network of workstations by experiments with communication subprograms arising in scientific applications, for which a tight fit of the model predictions with actual measurements of the communication and synchronization time between end processes is demonstrated. The model is then used to evaluate the performance of two simple parallel scientific applications from partial differential equations: domain decomposition and time-parallel multigrid. In an appropriate limit, we also show the compatibility of the hyperbolic model with the recently proposed LogP model.
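As an illustration of the two-parameter idea, here is one hyperbola with the stated limiting behaviour, s(0) = α (latency) and s(m) → m/β (transfer rate) for large messages, together with a serial-reduction rule that is exact in both limits. The functional form and the reduction rule are assumptions for illustration; the paper's exact expressions may differ:

```python
import math

def service_time(m, alpha, beta):
    """One hyperbola with the limiting behaviour described in the abstract:
    s(0) = alpha (latency) and s(m) ~ m/beta for large m (rate beta).
    This particular form is an assumption, not the paper's expression."""
    return math.sqrt(alpha ** 2 + (m / beta) ** 2)

def reduce_serial(blocks):
    """Collapse a chain of communication blocks (alpha_i, beta_i) into one
    equivalent (alpha, beta), exact in both size limits: latencies add,
    inverse transfer rates add."""
    alpha = sum(a for a, _ in blocks)
    inv_rate = sum(1.0 / b for _, b in blocks)
    return alpha, 1.0 / inv_rate

# e.g. two CBs in series: 100 us @ 10 MB/s and 50 us @ 5 MB/s
alpha, beta = reduce_serial([(1e-4, 1e7), (5e-5, 5e6)])
```

The reduction preserves exactly the two quantities the model is fitted to, which is why the equivalent form stays accurate at both extremes of message size.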
Kim, Sun-Young; Sweet, Steven; Chang, Joshua; Goldie, Sue J
2011-06-16
Immunization policymakers at global and local levels need to establish priorities among new vaccines competing for limited resources. However, comparison of the potential impact of single vaccination programs is challenging, primarily due to the limited number of vaccine analyses as well as their differing analytic approaches and reporting formats. The purpose of this study is to provide early insight into how the comparative impact of different new vaccines could be assessed in resource-poor settings with respect to affordability, cost-effectiveness, and distributional equity. We compared the health, economic, and financial consequences of introducing two new vaccines, against rotavirus and human papillomavirus (HPV), in 72 GAVI-eligible countries using a number of different outcome measures to evaluate affordability, cost-effectiveness, and distributional equity. We use simple static models to standardize the analytic framework and improve comparability between the two new vaccines. These simple models were validated by leveraging previously developed, more complex models for rotavirus and HPV. With 70% coverage of a single-age cohort of infants and pre-adolescent girls, the lives saved with rotavirus (~274,000) and HPV vaccines (~286,000) are similar, although the timing of averted mortality differs; rotavirus-attributable deaths occur in close proximity to infection, while HPV-related cancer deaths occur largely after age 30. Deaths averted per 1000 vaccinated are 5.2 (rotavirus) and 12.6 (HPV). Disability-adjusted life years (DALYs) averted were ~7.15 million (rotavirus) and ~1.30 million (HPV), reflecting the greater influence of discounting on the latter, given the lag time between vaccination and averted cancer. In most countries (68 for rotavirus and 66 for HPV, at the cost of I$25 per vaccinated individual) the incremental cost per DALY averted was lower than each country's GDP per capita.
Financial resources required for vaccination with rotavirus are higher than with HPV since both genders are vaccinated. While the lifesaving benefits of rotavirus and HPV vaccines will be realized at different times, the number of lives saved over each target population's lifetime will be similar. Model-based analyses that use a standardized analytic approach and generate comparable outputs can enrich the priority-setting dialogue. Although new vaccines may be deemed cost-effective, other factors including affordability and distributional equity need to be considered in different settings. We caution that for priority setting in an individual country, more rigorous comparisons should be performed, using more comprehensive models and considering all relevant vaccines and delivery strategies.
Harpold, Adrian A.; Burns, Douglas A.; Walter, M.T.; Steenhuis, Tammo S.
2013-01-01
Describing the distribution of aquatic habitats and the health of biological communities can be costly and time-consuming; therefore, simple, inexpensive methods to scale observations of aquatic biota to watersheds that lack data would be useful. In this study, we explored the potential of a simple "hydrogeomorphic" model to predict the effects of acid deposition on macroinvertebrate, fish, and diatom communities in 28 sub-watersheds of the 176-km2 Neversink River basin in the Catskill Mountains of New York State. The empirical model was originally developed to predict stream-water acid neutralizing capacity (ANC) using the watershed slope and drainage density. Because ANC is known to be strongly related to aquatic biological communities in the Neversink, we speculated that the model might correlate well with biotic indicators of ANC response. The hydrogeomorphic model was strongly correlated to several measures of macroinvertebrate and fish community richness and density, but less strongly correlated to diatom acid tolerance. The model was also strongly correlated to biological communities in 18 sub-watersheds that were independent of the model development, with the linear correlation capturing the strongly acidic nature of small upland watersheds. Overall, we demonstrated the applicability of geospatial data sets and a simple hydrogeomorphic model for estimating aquatic biological communities in areas with stream-water acidification, allowing estimates where no direct field observations are available. Similar modeling approaches have the potential to complement or refine expensive and time-consuming measurements of aquatic biota populations and to aid in regional assessments of aquatic health.
Indiana chronic disease management program risk stratification analysis.
Li, Jingjin; Holmes, Ann M; Rosenman, Marc B; Katz, Barry P; Downs, Stephen M; Murray, Michael D; Ackermann, Ronald T; Inui, Thomas S
2005-10-01
The objective of this study was to compare the ability of risk stratification models derived from administrative data to classify groups of patients for enrollment in a tailored chronic disease management program. This study included 19,548 Medicaid patients with chronic heart failure or diabetes in the Indiana Medicaid data warehouse during 2001 and 2002. To predict costs (total claims paid) in FY 2002, we considered candidate predictor variables available in FY 2001, including patient characteristics, the number and type of prescription medications, laboratory tests, pharmacy charges, and utilization of primary, specialty, inpatient, emergency department, nursing home, and home health care. We built prospective models to identify patients with different levels of expenditure. Model fit was assessed using R² statistics, whereas discrimination was assessed using the weighted kappa statistic, predictive ratios, and the area under the receiver operating characteristic curve. We found that a simple least-squares regression model, in which logged total charges in FY 2002 were regressed on the log of total charges in FY 2001, the number of prescriptions filled in FY 2001, and the FY 2001 eligibility category, performed as well as more complex models. This simple 3-parameter model had an R² of 0.30 and, in terms of classification efficiency, had a sensitivity of 0.57, a specificity of 0.90, an area under the receiver operating characteristic curve of 0.80, and a weighted kappa statistic of 0.51. This simple model based on readily available administrative data stratified Medicaid members according to predicted future utilization as well as more complicated models did.
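The 3-parameter model described is ordinary least squares on three administrative predictors. A self-contained sketch on synthetic data (the coefficients and distributions below are invented; only the model form follows the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
log_cost_prior = rng.normal(8.0, 1.0, n)   # log of FY2001 total charges
n_rx = rng.poisson(10, n)                  # prescriptions filled in FY2001
elig = rng.integers(0, 2, n)               # eligibility category (dummy)
# synthetic "true" relationship for illustration only
log_cost_next = (1.0 + 0.6 * log_cost_prior + 0.03 * n_rx + 0.4 * elig
                 + rng.normal(0, 0.5, n))

# the abstract's simple model: OLS of logged FY2002 charges on the three
# FY2001 predictors
X = np.column_stack([np.ones(n), log_cost_prior, n_rx, elig])
coef, *_ = np.linalg.lstsq(X, log_cost_next, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((log_cost_next - pred) ** 2) \
       / np.sum((log_cost_next - log_cost_next.mean()) ** 2)
```

Stratification then amounts to ranking members by `pred` and enrolling the top tier.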
NASA Astrophysics Data System (ADS)
Haji, Shaker; Durazi, Amal; Al-Alawi, Yaser
2018-05-01
In this study, the feed-in tariff (FIT) scheme was considered to facilitate an effective introduction of renewable energy in the Kingdom of Bahrain. An economic model was developed for the estimation of feasible FIT rates for photovoltaic (PV) electricity on a residential scale. The calculations of FIT rates were based mainly on the local solar radiation, the cost of a grid-connected PV system, the operation and maintenance cost, and the provided financial support. The net present value and internal rate of return methods were selected for model evaluation with the guide of simple payback period to determine the cost of energy and feasible FIT rates under several scenarios involving different capital rebate percentages, loan down payment percentages, and PV system costs. Moreover, to capitalise on the FIT benefits, its impact on the stakeholders beyond the households was investigated in terms of natural gas savings, emissions cutback, job creation, and PV-electricity contribution towards the energy demand growth. The study recommended the introduction of the FIT scheme in the Kingdom of Bahrain due to its considerable benefits through a setup where each household would purchase the PV system through a loan, with the government and the electricity customers sharing the FIT cost.
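The evaluation methods named above (net present value, guided by simple payback) can be sketched for a candidate FIT rate. All numbers below are illustrative assumptions, not the study's inputs:

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def simple_payback(capex, annual_net_revenue):
    """Years to recover the capital cost, ignoring discounting."""
    return capex / annual_net_revenue

# illustrative numbers only (not from the study): a 5 kWp residential system
capex = 6000.0            # USD, grid-connected PV system cost
annual_kwh = 5 * 1800     # kWh/yr, assumed yield at a high-irradiance site
fit_rate = 0.09           # USD/kWh, candidate FIT rate
o_and_m = 100.0           # USD/yr, operation and maintenance
years = 20

net = annual_kwh * fit_rate - o_and_m
cash = [-capex] + [net] * years
project_npv = npv(0.05, cash)           # 5% discount rate, assumed
payback = simple_payback(capex, net)
```

A feasible FIT rate in this framing is one that drives the NPV to at least zero (equivalently, makes the internal rate of return clear the discount rate) within the system's lifetime.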
ERIC Educational Resources Information Center
Meeks, Glenn E.; Fisher, Ricki; Loveless, Warren
Personnel involved in planning or developing schools lack the costing tools that will enable them to determine educational technology costs. This report presents an overview of the technology costing process and the general costs used in estimating educational technology systems on a macro-budget basis, along with simple cost estimates for…
Linear regression metamodeling as a tool to summarize and present simulation model results.
Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M
2013-10-01
Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
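The core mechanic — regressing PSA outcomes on standardized inputs so that the intercept estimates the expected outcome and the coefficients rank parameter influence — can be sketched on a toy model (the distributions and decision model below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
# two uncertain inputs drawn for a PSA -- illustrative distributions
p_cure = rng.beta(20, 80, n)       # probability of cure
cost_tx = rng.gamma(100, 50, n)    # treatment cost

# toy decision-model output: net monetary benefit at a willingness to pay
wtp = 50_000
net_benefit = wtp * p_cure - cost_tx

# metamodel: regress the outcome on *standardized* inputs
Z = np.column_stack([(p_cure - p_cure.mean()) / p_cure.std(),
                     (cost_tx - cost_tx.mean()) / cost_tx.std()])
X = np.column_stack([np.ones(n), Z])
coef, *_ = np.linalg.lstsq(X, net_benefit, rcond=None)
# coef[0] estimates the expected outcome over the PSA;
# |coef[1:]| rank the inputs by how much their uncertainty moves the outcome
```

Because every PSA draw informs every coefficient, the ranking is more stable than a one-factor-at-a-time sweep of the same budget, which is the reliability advantage the abstract notes.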
Randomized shortest-path problems: two related models.
Saerens, Marco; Achbany, Youssef; Fouss, François; Yen, Luh
2009-08-01
This letter addresses the problem of designing the transition probabilities of a finite Markov chain (the policy) in order to minimize the expected cost for reaching a destination node from a source node while maintaining a fixed level of entropy spread throughout the network (the exploration). It is motivated by the following scenario. Suppose you have to route agents through a network in some optimal way, for instance, by minimizing the total travel cost-nothing particular up to now-you could use a standard shortest-path algorithm. Suppose, however, that you want to avoid pure deterministic routing policies in order, for instance, to allow some continual exploration of the network, avoid congestion, or avoid complete predictability of your routing strategy. In other words, you want to introduce some randomness or unpredictability in the routing policy (i.e., the routing policy is randomized). This problem, which will be called the randomized shortest-path problem (RSP), is investigated in this work. The global level of randomness of the routing policy is quantified by the expected Shannon entropy spread throughout the network and is provided a priori by the designer. Then, necessary conditions to compute the optimal randomized policy-minimizing the expected routing cost-are derived. Iterating these necessary conditions, reminiscent of Bellman's value iteration equations, allows computing an optimal policy, that is, a set of transition probabilities in each node. Interestingly and surprisingly enough, this first model, while formulated in a totally different framework, is equivalent to Akamatsu's model (1996), appearing in transportation science, for a special choice of the entropy constraint. We therefore revisit Akamatsu's model by recasting it into a sum-over-paths statistical physics formalism allowing easy derivation of all the quantities of interest in an elegant, unified way.
For instance, it is shown that the unique optimal policy can be obtained by solving a simple linear system of equations. This second model is therefore more convincing because of its computational efficiency and soundness. Finally, simulation results obtained on simple, illustrative examples show that the models behave as expected.
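The necessary conditions iterated above are reminiscent of Bellman's value iteration; a Boltzmann/softmin recursion in the same spirit can be sketched as follows. This is an illustrative stand-in, not the letter's formulation: here a temperature parameter T plays the role of the entropy constraint, and the graph and costs are invented.

```python
import math

# tiny directed graph: edge costs, destination node 'd' (value fixed at 0)
edges = {"s": {"a": 1.0, "b": 4.0}, "a": {"d": 5.0}, "b": {"d": 1.0}}

def soft_values(T, iters=200):
    """Softmin value iteration: v(i) = -T*log(sum_j exp(-(c_ij + v_j)/T)).
    As T -> 0 this approaches ordinary shortest-path values; larger T
    spreads probability (entropy) over more paths."""
    v = {"s": 0.0, "a": 0.0, "b": 0.0, "d": 0.0}
    for _ in range(iters):
        for i, nbrs in edges.items():
            v[i] = -T * math.log(sum(math.exp(-(c + v[j]) / T)
                                     for j, c in nbrs.items()))
    return v

def policy(v, T, i):
    """Randomized routing policy: Boltzmann distribution over successors."""
    w = {j: math.exp(-(c + v[j]) / T) for j, c in edges[i].items()}
    z = sum(w.values())
    return {j: wj / z for j, wj in w.items()}

v_cold = soft_values(T=0.01)  # near-deterministic: best route is s->b->d, cost 5
```

At low temperature the policy concentrates on the cheapest route; at higher temperature it deliberately spreads traffic, which is exactly the congestion-avoidance motivation given above.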
Could CT screening for lung cancer ever be cost effective in the United Kingdom?
Whynes, David K
2008-01-01
Background The absence of trial evidence makes it impossible to determine whether or not mass screening for lung cancer would be cost effective and, indeed, whether a clinical trial to investigate the problem would be justified. Attempts have been made to resolve this issue by modelling, although the complex models developed to date have required more real-world data than are currently available. Being founded on unsubstantiated assumptions, they have produced estimates with wide confidence intervals and of uncertain relevance to the United Kingdom. Method I develop a simple, deterministic model of a screening regimen potentially applicable to the UK. The model includes only a limited number of parameters, for the majority of which values have already been established in non-trial settings. The component costs of screening are derived from government guidance and from published audits, whilst the values for test parameters are derived from clinical studies. The expected health gains as a result of screening are calculated by combining published survival data for screened and unscreened cohorts with data from Life Tables. When a degree of uncertainty over a parameter value exists, I use a conservative estimate, i.e. one likely to make screening appear less, rather than more, cost effective. Results The incremental cost effectiveness ratio of a single screen amongst a high-risk male population is calculated to be around £14,000 per quality-adjusted life year gained. The average cost of this screening regimen per person screened is around £200. It is possible that, when obtained experimentally in any future trial, parameter values will be found to differ from those previously obtained in non-trial settings.
On the basis both of differing assumptions about evaluation conventions and of reasoned speculations as to how test parameters and costs might behave under screening, the model generates cost effectiveness ratios as high as around £20,000 and as low as around £7,000. Conclusion It is evident that eventually being able to identify a cost effective regimen of CT screening for lung cancer in the UK is by no means an unreasonable expectation. PMID:18302756
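The headline figures combine in one line of arithmetic: the incremental cost-effectiveness ratio (ICER) is incremental cost divided by incremental QALYs, so the quoted £200 per person screened and £14,000/QALY jointly imply the average QALY gain per person screened:

```python
def icer(cost_per_screened, qaly_gain_per_screened):
    """Incremental cost-effectiveness ratio of screening vs no screening,
    per person screened: incremental cost / incremental QALYs."""
    return cost_per_screened / qaly_gain_per_screened

# figures from the abstract: ~GBP 200 per person screened, ~GBP 14,000/QALY
implied_gain = 200 / 14000        # ~0.014 QALY per person screened
ratio = icer(200, implied_gain)
```

The sensitivity range quoted above (£7,000 to £20,000 per QALY) corresponds to this gain roughly doubling or falling by a third at fixed cost.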
A simple integrated assessment approach to global change simulation and evaluation
NASA Astrophysics Data System (ADS)
Ogutu, Keroboto; D'Andrea, Fabio; Ghil, Michael
2016-04-01
We formulate and study the Coupled Climate-Economy-Biosphere (CoCEB) model, which constitutes the basis of our idealized integrated assessment approach to simulating and evaluating global change. CoCEB is composed of a physical climate module, based on Earth's energy balance, and an economy module that uses endogenous economic growth with physical and human capital accumulation. A biosphere model is likewise under study and will be coupled to the existing two modules. We concentrate on the interactions between the two subsystems: the effect of climate on the economy, via damage functions, and the effect of the economy on climate, via a control of the greenhouse gas emissions. Simple functional forms of the relation between the two subsystems permit simple interpretations of the coupled effects. The CoCEB model is used to make hypotheses on the long-term effect of investment in emission abatement, and on the comparative efficacy of different approaches to abatement, in particular by investing in low carbon technology, in deforestation reduction or in carbon capture and storage (CCS). The CoCEB model is very flexible and transparent, and it allows one to easily formulate and compare different functional representations of climate change mitigation policies. Using different mitigation measures and their cost estimates, as found in the literature, one is able to compare these measures in a coherent way.
The Foot's Arch and the Energetics of Human Locomotion.
Stearne, Sarah M; McDonald, Kirsty A; Alderson, Jacqueline A; North, Ian; Oxnard, Charles E; Rubenson, Jonas
2016-01-19
The energy-sparing spring theory of the foot's arch has become central to interpretations of the foot's mechanical function and evolution. Using a novel insole technique that restricted compression of the foot's longitudinal arch, this study provides the first direct evidence that arch compression/recoil during locomotion contributes to lowering energy cost. Restricting arch compression near maximally (~80%) during moderate-speed (2.7 m s(-1)) level running increased metabolic cost by 6.0% (p < 0.001, d = 0.67; unaffected by foot strike technique). A simple model shows that the metabolic energy saved by the arch is largely explained by the passive-elastic work it supplies that would otherwise be done by active muscle. Both experimental and model data confirm that it is the end-range of arch compression that dictates the energy-saving role of the arch. Restricting arch compression had no effect on the cost of walking or incline running (3°), commensurate with the smaller role of passive-elastic mechanics in these gaits. These findings substantiate the elastic energy-saving role of the longitudinal arch during running, and suggest that arch supports used in some footwear and orthotics may increase the cost of running.
Dykes, Patricia C; Wantland, Dean; Whittenburg, Luann; Lipsitz, Stuart; Saba, Virginia K
2013-01-01
While nursing activities represent a significant proportion of inpatient care, there are no reliable methods for determining nursing costs based on the actual services provided by the nursing staff. Capture of data to support accurate measurement and reporting on the cost of nursing services is fundamental to effective resource utilization. Adopting standard terminologies that support tracking both the quality and the cost of care could reduce the data entry burden on direct care providers. This pilot study evaluated the feasibility of using a standardized nursing terminology, the Clinical Care Classification System (CCC), for developing a reliable costing method for nursing services. Two different approaches were explored: the Relative Value Unit (RVU) method and the simple cost-to-time method. We found that the simple cost-to-time method was more accurate and more transparent in its derivation than the RVU method and may support a more consistent and reliable approach for costing nursing services.
Willingness to pay and cost of illness for changes in health capital depreciation.
Ried, W
1996-01-01
The paper investigates the relationship between the willingness to pay and the cost of illness approach with respect to the evaluation of the economic burden due to adverse health effects. The basic intertemporal framework is provided by Grossman's pure investment model, while effects on individual morbidity are taken to be generated by marginal changes in the rate of health capital depreciation. More specifically, both the simple example of purely temporary changes and the more general case of persistent variations in health capital depreciation are discussed. The analysis generates two principal findings. First, for a class of identical individuals, cost as measured by the cost of illness approach is demonstrated to provide a lower bound on the true welfare cost to the individual, i.e. cost as given by the willingness to pay approach. Moreover, the cost of illness is increasing in the size of the welfare loss. Second, if one takes into account the possible heterogeneity of individuals, a clear relationship between the cost values supplied by the two approaches no longer exists. As an example, the impact of variations in either financial wealth or health capital endowment is discussed. Thus, diversity in individual type turns out to blur the link between cost of illness and the true economic cost.
Hoomans, Ties; Abrams, Keith R; Ament, Andre J H A; Evers, Silvia M A A; Severens, Johan L
2009-10-01
Decision making about resource allocation for guideline implementation to change clinical practice is inevitably undertaken in a context of uncertainty surrounding the cost-effectiveness of both clinical guidelines and implementation strategies. Adopting a total net benefit approach, a model was recently developed to overcome problems with the use of combined ratio statistics when analyzing decision uncertainty. To demonstrate the stochastic application of the model, we consider decision making about the adoption of an audit and feedback strategy for implementing a guideline recommending intensive blood glucose control in type 2 diabetes in primary care in the Netherlands. An integrated Bayesian approach to decision modeling and evidence synthesis is adopted, using Markov Chain Monte Carlo simulation in WinBUGS. Data on model parameters are gathered from various sources, with effectiveness of implementation being estimated using pooled, random-effects meta-analysis. Decision uncertainty is illustrated using cost-effectiveness acceptability curves and frontier. Decisions about whether to adopt intensified glycemic control and whether to adopt audit and feedback change with the maximum value that decision makers are willing to pay for health gain. Through simultaneously incorporating uncertain economic evidence on both the guideline and the implementation strategy, the cost-effectiveness acceptability curves and cost-effectiveness acceptability frontier show an increase in decision uncertainty concerning guideline implementation. The stochastic application in diabetes care demonstrates that the model provides a simple and useful tool for quantifying and exploring the (combined) uncertainty associated with decision making about adopting guidelines and implementation strategies and, therefore, for informing decisions about efficient resource allocation to change clinical practice.
Modeling diffuse phosphorus emissions to assist in best management practice designing
NASA Astrophysics Data System (ADS)
Kovacs, Adam; Zessner, Matthias; Honti, Mark; Clement, Adrienne
2010-05-01
A diffuse emission modeling tool has been developed, which is appropriate to support decision-making in watershed management. The PhosFate (Phosphorus Fate) tool allows planning best management practices (BMPs) in catchments and simulating their possible impacts on the phosphorus (P) loads. PhosFate is a simple fate model to calculate diffuse P emissions and their transport within a catchment. The model is a semi-empirical, catchment-scale, distributed-parameter and long-term (annual) average model. It has two main parts: (a) the emission and (b) the transport model. The main input data of the model are digital maps (elevation, soil types and land-use categories), statistical data (crop yields, animal numbers, fertilizer amounts and precipitation distribution) and point information (precipitation, meteorology, soil humus content, point source emissions and reservoir data). The emission model calculates the diffuse P emissions at their source. It computes the basic elements of the hydrology as well as the soil loss. The model determines the accumulated P surplus of the topsoil and distinguishes the dissolved and the particulate P forms. Emissions are calculated according to the different pathways (surface runoff, erosion and leaching). The main outputs are the spatial distribution (cell values) of the runoff components, the soil loss and the P emissions within the catchment. The transport model joins the independent cells based on the flow tree and follows the further fate of the emitted P from each cell to the catchment outlets. Surface runoff and P fluxes are accumulated along the tree, and the field and in-stream retention of the particulate forms are computed. In the case of base flow and subsurface P loads, only channel transport is taken into account owing to poorly known hydrogeological conditions. During channel transport, point sources and reservoirs are also considered.
The main results of the transport algorithm are the discharge and the dissolved and sediment-bound P load values at any arbitrary point within the catchment. Finally, a simple design procedure has been built to plan BMPs in the catchments, simulate their possible impacts on diffuse P fluxes and calculate their approximate costs. Both source and transport controlling measures have been incorporated into the planning procedure. The model also allows examination of the impacts of changes in fertilizer application, point source emissions and climate on the river loads. Besides this, a simple optimization algorithm has been developed to select the most effective source areas (real hot spots) that should be targeted by the interventions. The fate model performed well in Hungarian pilot catchments. Using the calibrated and validated model, different management scenarios were worked out and their effects and costs evaluated and compared. The results show that the approach is suitable for effectively designing BMP measures at the local scale. Combined application of the source and transport controlling BMPs can result in high P reduction efficiency. Optimization of the interventions can markedly reduce the area demand of the necessary BMPs, and consequently the establishment costs. The model can be coupled with a larger-scale catchment model to form a "screening and planning" modeling system.
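The cell-to-outlet accumulation step of the transport model can be sketched as follows; the four-cell flow tree, emissions, and constant retention factor are invented for illustration and are not PhosFate's actual parameterization:

```python
# Minimal sketch of accumulating P loads along a flow tree (assumed
# structure, not PhosFate's code): each cell emits P and points to one
# downstream cell; a constant retention factor applies per transfer step.
downstream = {0: 2, 1: 2, 2: 3, 3: None}      # cell -> receiving cell (None = outlet)
emission = {0: 1.0, 1: 0.5, 2: 0.2, 3: 0.1}   # kg P emitted per cell
retention = 0.8                               # fraction passed on per hop

def load_at_outlet(downstream, emission, retention):
    """Route every cell's emission down the tree, applying field/in-stream
    retention at each hop, and sum the loads arriving at the outlet."""
    load = 0.0
    for cell, e in emission.items():
        p, cur = e, cell
        while downstream[cur] is not None:
            p *= retention
            cur = downstream[cur]
        load += p
    return load

# cell 0: 1.0*0.8*0.8 = 0.64; cell 1: 0.5*0.8*0.8 = 0.32;
# cell 2: 0.2*0.8 = 0.16; cell 3 (at outlet): 0.1; total = 1.22
```

A real grid derives `downstream` from the elevation map's flow directions; the structure of the accumulation is the same.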
Brian Hears: Online Auditory Processing Using Vectorization Over Channels
Fontaine, Bertrand; Goodman, Dan F. M.; Benichoux, Victor; Brette, Romain
2011-01-01
The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in “Brian Hears,” a library for the spiking neural network simulator package “Brian.” This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations. PMID:21811453
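The channel-vectorization idea can be illustrated with a bank of one-pole low-pass filters, a deliberate simplification (Brian Hears itself implements gammatone and other cochlear filters):

```python
import numpy as np

# Minimal sketch of vectorization over frequency channels (not Brian Hears'
# API): one first-order low-pass filter per channel, all channels updated
# at once so the interpreted Python loop runs over samples only.
fs = 44_100.0
freqs = np.logspace(np.log10(20), np.log10(20_000), 3000)  # ~3000 channels
alpha = np.exp(-2 * np.pi * freqs / fs)                    # per-channel pole

def filterbank(x):
    """Apply all 3000 one-pole filters to a mono signal x; the per-sample
    update is a single vectorized operation across channels."""
    y = np.zeros(len(freqs))
    out = np.empty((len(x), len(freqs)))
    for n, sample in enumerate(x):            # interpreted loop: samples
        y = alpha * y + (1 - alpha) * sample  # vectorized: channels
        out[n] = y
    return out

out = filterbank(np.ones(8))  # step input: every channel rises toward 1
```

With 3000 channels, the per-sample vector operation dominates the interpretation overhead, which is exactly the cost structure the paper exploits.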
Scaling laws between population and facility densities.
Um, Jaegon; Son, Seung-Woo; Lee, Sung-Ik; Jeong, Hawoong; Kim, Beom Jun
2009-08-25
When a new facility like a grocery store, a school, or a fire station is planned, its location should ideally be determined by the necessities of people who live nearby. Empirically, it has been found that there exists a positive correlation between facility and population densities. In the present work, we investigate the ideal relation between the population and the facility densities within the framework of an economic mechanism governing microdynamics. In previous studies based on the global optimization of facility positions in minimizing the overall travel distance between people and facilities, it was shown that the facility density D and the population density rho should follow a simple power law D ~ rho^(2/3). In our empirical analysis, on the other hand, the power-law exponent alpha in D ~ rho^alpha is not a fixed value but spreads over a broad range depending on facility type. To explain this discrepancy in alpha, we propose a model based on economic mechanisms that mimic the competitive balance between the profit of the facilities and the social opportunity cost for populations. Through our simple, microscopically driven model, we show that commercial facilities driven by profit have alpha = 1, whereas public facilities driven by the social opportunity cost have alpha = 2/3. We simulate this model to find the optimal positions of facilities on a real U.S. map and show that the results are consistent with the empirical data.
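The contrast between the two exponents can be checked numerically; the densities below are synthetic, generated from the ideal power laws rather than taken from the paper's U.S. data:

```python
import numpy as np

# Toy check of the scaling D ~ rho^alpha (synthetic, noiseless data):
# public facilities should show alpha ~ 2/3, commercial ones alpha ~ 1.
rho = np.logspace(1, 4, 50)          # population density
D_public = 0.5 * rho ** (2 / 3)      # ideal public-facility density
D_comm = 0.1 * rho ** 1.0            # ideal commercial-facility density

def fitted_exponent(rho, D):
    """Estimate alpha by a degree-1 least-squares fit in log-log space."""
    alpha, _ = np.polyfit(np.log(rho), np.log(D), 1)
    return alpha
```

On real data the fit is noisy, which is why the paper observes a broad spread of exponents across facility types.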
Prediction of power requirements for a longwall armored face conveyor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broadfoot, A.R.; Betz, R.E.
1997-01-01
Longwall armored face conveyors (AFCs) have traditionally been designed using a combination of heuristics and simple models. However, as longwalls increase in length, these design procedures are proving to be inadequate. The result has been either a costly loss of production due to AFC stalling or component failure, or larger-than-necessary capital investment due to overdesign. In order to allow accurate estimation of the power requirements for an AFC, this paper develops a comprehensive model of all the friction forces associated with the AFC. Power requirement predictions obtained from these models are then compared with measurements from two mine faces.
A color prediction model for imagery analysis
NASA Technical Reports Server (NTRS)
Skaley, J. E.; Fisher, J. R.; Hardy, E. E.
1977-01-01
A simple model has been devised to selectively construct several points within a scene using multispectral imagery. The model correlates black-and-white density values to color components of diazo film so as to maximize the color contrast of two or three points per composite. The CIE (Commission Internationale de l'Eclairage) color coordinate system is used as a quantitative reference to locate these points in color space. Superimposed on this quantitative reference is a perceptional framework which functionally contrasts color values in a psychophysical sense. This methodology permits a more quantitative approach to the manual interpretation of multispectral imagery while resulting in improved accuracy and lower costs.
Simulated breeding with QU-GENE graphical user interface.
Hathorn, Adrian; Chapman, Scott; Dieters, Mark
2014-01-01
Comparing the efficiencies of breeding methods with field experiments is a costly, long-term process. QU-GENE is a highly flexible genetic and breeding simulation platform capable of simulating the performance of a range of different breeding strategies and for a continuum of genetic models ranging from simple to complex. In this chapter we describe some of the basic mechanics behind the QU-GENE user interface and give a simplified example of how it works.
EDF's studies and first choices regarding the design of electrical equipment
NASA Technical Reports Server (NTRS)
Paris, Michel; Metzger, Gisele; Pays, Michel; Pasdeloup, Maurice
1988-01-01
In the performance of its studies and in its first choices, Electricite de France has taken into account the three parameters judged essential for its electrical installations: flammability and flame propagation; smoke opacity; and corrosiveness and toxicity of emitted gases. In this research, materials tests have been widely developed in order to ensure simple manufacturing controls and to reduce the costly testing of near-full-size models.
A piezo-ring-on-chip microfluidic device for simple and low-cost mass spectrometry interfacing.
Tsao, Chia-Wen; Lei, I-Chao; Chen, Pi-Yu; Yang, Yu-Liang
2018-02-12
Mass spectrometry (MS) interfacing technology provides the means for incorporating microfluidic processing with post MS analysis. In this study, we propose a simple piezo-ring-on-chip microfluidic device for the controlled spraying of MALDI-MS targets. This device uses a low-cost, commercially-available ring-shaped piezoelectric acoustic atomizer (piezo-ring) directly integrated into a polydimethylsiloxane microfluidic device to spray the sample onto the MS target substrate. The piezo-ring-on-chip microfluidic device's design, fabrication, and actuation, and its pulsatile pumping effects were evaluated. The spraying performance was examined by depositing organic matrix samples onto the MS target substrate by using both an automatic linear motion motor, and manual deposition. Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) was performed to analyze the peptide samples on the MALDI target substrates. Using our technique, model peptides at 10^-6 M concentration can be successfully detected. The results also indicate that the piezo-ring-on-chip approach forms finer matrix crystals and presents better MS signal uniformity with little sample consumption compared to the conventional pipetting method.
Rodríguez-Limas, William A; Pastor, Ana Ruth; Esquivel-Soto, Ernesto; Esquivel-Guadarrama, Fernando; Ramírez, Octavio T; Palomares, Laura A
2014-05-19
Rotavirus is the most common cause of severe diarrhea in many animal species of economic interest. A simple, safe and cost-effective vaccine is required for the control and prevention of rotavirus in animals. In this study, we evaluated the use of Saccharomyces cerevisiae extracts containing rotavirus-like particles (RLP) as a vaccine candidate in an adult mouse model. Two doses of 1 mg of yeast extract containing rotavirus proteins (between 0.3 and 3 μg) resulted in an immunological response capable of reducing the replication of rotavirus after infection. Viral shedding in all mice groups diminished in comparison with the control group when challenged with 100 50% diarrhea doses (DD50) of murine rotavirus strain EDIM. Interestingly, when immunizing intranasally, protection against rotavirus infection was observed even when no increase in rotavirus-specific antibody titers was evident, suggesting that cellular responses were responsible for protection. Our results indicate that raw yeast extracts containing rotavirus proteins and RLP are a simple, cost-effective alternative for veterinary vaccines against rotavirus. Copyright © 2014 Elsevier Ltd. All rights reserved.
Darmon, Nicole; Ferguson, Elaine L; Briend, André
2002-12-01
Economic constraints may contribute to the unhealthy food choices observed among low socioeconomic groups in industrialized countries. The objective of the present study was to predict the food choices a rational individual would make to reduce his or her food budget, while retaining a diet as close as possible to the average population diet. Isoenergetic diets were modeled by linear programming. To ensure these diets were consistent with habitual food consumption patterns, departure from the average French diet was minimized and constraints that limited portion size and the amount of energy from food groups were introduced into the models. A cost constraint was introduced and progressively strengthened to assess the effect of cost on the selection of foods by the program. Strengthening the cost constraint reduced the proportion of energy contributed by fruits and vegetables, meat and dairy products and increased the proportion from cereals, sweets and added fats, a pattern similar to that observed among low socioeconomic groups. This decreased the nutritional quality of modeled diets, notably the lowest cost linear programming diets had lower vitamin C and beta-carotene densities than the mean French adult diet (i.e., <25% and 10% of the mean density, respectively). These results indicate that a simple cost constraint can decrease the nutrient densities of diets and influence food selection in ways that reproduce the food intake patterns observed among low socioeconomic groups. They suggest that economic measures will be needed to effectively improve the nutritional quality of diets consumed by these populations.
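The effect of a tightening cost constraint can be sketched with a two-food toy version of the program (hypothetical prices and energy densities; the study used the full French food database and a proper linear-programming solver):

```python
# Minimal sketch of the modeling idea with two foods (invented numbers):
# keep the diet as close as possible to the average diet while holding
# energy constant and respecting a shrinking cost budget.
energy = {"vegetables": 0.4, "cereals": 3.5}   # MJ per 100 g (assumed)
price = {"vegetables": 0.30, "cereals": 0.08}  # euros per 100 g (assumed)
avg = {"vegetables": 4.0, "cereals": 2.0}      # average diet, 100 g units
target_energy = 0.4 * 4.0 + 3.5 * 2.0          # keep energy constant

def cheapest_feasible_diet(budget, steps=2001):
    """Scan diets on the constant-energy line and return the one closest
    to the average diet that fits the budget (a 1-D stand-in for the LP)."""
    best, best_dep = None, float("inf")
    for i in range(steps):
        veg = 10.0 * i / (steps - 1)
        cer = (target_energy - energy["vegetables"] * veg) / energy["cereals"]
        if cer < 0:
            continue
        cost = price["vegetables"] * veg + price["cereals"] * cer
        if cost > budget:
            continue
        dep = (veg - avg["vegetables"]) ** 2 + (cer - avg["cereals"]) ** 2
        if dep < best_dep:
            best, best_dep = {"vegetables": veg, "cereals": cer}, dep
    return best
```

Tightening the budget pushes the expensive, nutrient-dense food (vegetables) out and the cheap energy source (cereals) in, reproducing the pattern the abstract describes.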
Optimal short-range trajectories for helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slater, G.L.; Erzberger, H.
1982-12-01
An optimal flight path algorithm using a simplified altitude state model and an a priori climb-cruise-descent flight profile was developed and applied to determine minimum fuel and minimum cost trajectories for a helicopter flying a fixed-range trajectory. In addition, a method was developed for obtaining a performance model in simplified form which is based on standard flight manual data and which is applicable to the computation of optimal trajectories. The entire performance optimization algorithm is simple enough that on-line trajectory optimization is feasible with a relatively small computer. The helicopter model used is the Sikorsky S-61N. The results show that for this vehicle the optimal flight path and optimal cruise altitude can represent a 10% fuel saving on a minimum fuel trajectory. The optimal trajectories show considerable variability because of helicopter weight, ambient winds, and the relative cost trade-off between time and fuel. In general, reasonable variations from the optimal velocities and cruise altitudes do not significantly degrade the optimal cost. For fuel-optimal trajectories, the optimum cruise altitude varies from the maximum (12,000 ft) to the minimum (0 ft) depending on helicopter weight.
Dong, Hengjin; Buxton, Martin
2006-01-01
The objective of this study is to apply a Markov model to compare the cost-effectiveness of total knee replacement (TKR) using computer-assisted surgery (CAS) with that of TKR using a conventional manual method in the absence of formal clinical trial evidence. A structured search was carried out to identify evidence relating to the clinical outcome, cost, and effectiveness of TKR. Nine Markov states were identified based on the progress of the disease after TKR. Effectiveness was expressed by quality-adjusted life years (QALYs). The simulation was carried out initially for 120 monthly cycles, starting with 1,000 TKRs. A discount rate of 3.5 percent was used for both cost and effectiveness in the incremental cost-effectiveness analysis. Then, a probabilistic sensitivity analysis was carried out using a Monte Carlo approach with 10,000 iterations. Computer-assisted TKR was a long-term cost-effective technology, but the QALYs gained were small. After the first 2 years, computer-assisted TKR was dominant: it was cheaper and produced more QALYs. The incremental cost-effectiveness ratio (ICER) was sensitive to the "effect of CAS," to the CAS extra cost, and to the utility of the state "Normal health after primary TKR," but it was not sensitive to the utilities of other Markov states. Both probabilistic and deterministic analyses produced similar cumulative serious or minor complication rates and complex or simple revision rates. They also produced similar ICERs. Compared with conventional TKR, computer-assisted TKR is a cost-saving technology in the long term and may offer small additional QALYs. The "effect of CAS" is to reduce revision rates and complications through more accurate and precise alignment, and although the conclusions from the model, even when allowing for a full probabilistic analysis of uncertainty, are clear, the "effect of CAS" on the rate of revisions awaits long-term clinical evidence.
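A stripped-down version of such a Markov cohort model can be sketched as follows; the three states, transition probabilities, utilities, and costs are invented for illustration and are far simpler than the study's nine-state model:

```python
import numpy as np

# Minimal two-strategy Markov cohort sketch (hypothetical numbers):
# monthly cycles, discounted costs and QALYs, ICER at the end.
states = ["well", "revision", "dead"]
P_conv = np.array([[0.990, 0.008, 0.002],    # conventional TKR
                   [0.700, 0.298, 0.002],
                   [0.000, 0.000, 1.000]])
P_cas = np.array([[0.994, 0.004, 0.002],     # CAS: fewer revisions (assumed)
                  [0.700, 0.298, 0.002],
                  [0.000, 0.000, 1.000]])
utility = np.array([0.85, 0.50, 0.0]) / 12   # QALYs accrued per month
cost = np.array([20.0, 2000.0, 0.0])         # cost per month in state

def run(P, extra_cost=0.0, cycles=120, annual_discount=0.035):
    d = (1 + annual_discount) ** (-1 / 12)   # monthly discount factor
    x = np.array([1.0, 0.0, 0.0])            # cohort starts in "well"
    total_cost, total_qaly = extra_cost, 0.0
    for t in range(cycles):
        total_cost += d ** t * (x @ cost)
        total_qaly += d ** t * (x @ utility)
        x = x @ P                            # advance the cohort one cycle
    return total_cost, total_qaly

c0, q0 = run(P_conv)
c1, q1 = run(P_cas, extra_cost=235.0)        # assumed CAS equipment cost
icer = (c1 - c0) / (q1 - q0)                 # negative here: CAS dominates
```

With these assumed numbers the avoided revision costs outweigh the CAS surcharge, mirroring the "cheaper with more QALYs" dominance reported in the abstract.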
Pham-The, Hai; Casañola-Martin, Gerardo; Garrigues, Teresa; Bermejo, Marival; González-Álvarez, Isabel; Nguyen-Hai, Nam; Cabrera-Pérez, Miguel Ángel; Le-Thi-Thu, Huong
2016-02-01
In many absorption, distribution, metabolism, and excretion (ADME) modeling problems, imbalanced data could negatively affect the classification performance of machine learning algorithms. Solutions for handling imbalanced datasets have been proposed, but their application to ADME modeling tasks is underexplored. In this paper, various strategies including cost-sensitive learning and resampling methods were studied to tackle the moderate imbalance problem of a large Caco-2 cell permeability database. Simple physicochemical molecular descriptors were utilized for data modeling. Support vector machine classifiers were constructed and compared using multiple comparison tests. Results showed that the models developed on the basis of resampling strategies displayed better performance than the cost-sensitive classification models, especially in the case of oversampled data, where misclassification rates for the minority class were 0.11 and 0.14 for the training and test sets, respectively. A consensus model with enhanced applicability domain was subsequently constructed and showed improved performance. This model was used to predict a set of randomly selected high-permeability reference drugs according to the biopharmaceutics classification system. Overall, this study provides a comparison of numerous rebalancing strategies and displays the effectiveness of oversampling methods to deal with imbalanced permeability data problems.
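The resampling strategy favored in the abstract can be sketched with simple random oversampling (illustrative data, not the Caco-2 permeability set):

```python
import numpy as np

# Minimal sketch of random oversampling for an imbalanced two-class set:
# duplicate minority samples (with replacement) until classes are balanced
# before fitting any classifier.
rng = np.random.default_rng(42)

def oversample(X, y):
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = []
    for c in classes:
        members = np.flatnonzero(y == c)
        # sample each class with replacement up to the majority-class count
        idx.append(rng.choice(members, size=n_max, replace=True))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

X = rng.normal(size=(100, 3))
y = np.array([0] * 90 + [1] * 10)            # 9:1 imbalance
X_bal, y_bal = oversample(X, y)              # now 90:90
```

Oversampling must be applied only to the training split; leaking duplicated minority samples into the test set would inflate performance estimates.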
van Baarlen, Peter; van Belkum, Alex; Thomma, Bart P H J
2007-02-01
Relatively simple eukaryotic model organisms such as the genetic model weed plant Arabidopsis thaliana possess an innate immune system that shares important similarities with its mammalian counterpart. In fact, some human pathogens infect Arabidopsis and cause overt disease with human symptomology. In such cases, decisive elements of the plant's immune system are likely to be targeted by the same microbial factors that are necessary for causing disease in humans. These similarities can be exploited to identify elementary microbial pathogenicity factors and their corresponding targets in a green host. This circumvents important cost aspects that often frustrate studies in humans or animal models and, in addition, results in facile ethical clearance.
New generation of elastic network models.
López-Blanco, José Ramón; Chacón, Pablo
2016-04-01
The intrinsic flexibility of proteins and nucleic acids can be grasped from remarkably simple mechanical models of particles connected by springs. In recent decades, Elastic Network Models (ENMs) combined with Normal Mode Analysis widely confirmed their ability to predict biologically relevant motions of biomolecules and soon became a popular methodology to reveal large-scale dynamics in multiple structural biology scenarios. The simplicity, robustness, low computational cost, and relatively high accuracy are the reasons behind the success of ENMs. This review focuses on recent advances in the development and application of ENMs, paying particular attention to combinations with experimental data. Successful application scenarios include large macromolecular machines, structural refinement, docking, and evolutionary conservation. Copyright © 2015 Elsevier Ltd. All rights reserved.
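The core ENM computation, a Hessian built from identical pairwise springs followed by normal-mode extraction, can be sketched as follows (a generic anisotropic-network construction, not any specific ENM package):

```python
import numpy as np

# Minimal anisotropic-network-model sketch: identical springs between
# particles within a cutoff; normal modes come from the eigendecomposition
# of the 3N x 3N Hessian.
def enm_modes(coords, cutoff=8.0, k=1.0):
    n = len(coords)
    H = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue
            block = -k * np.outer(d, d) / r2      # 3x3 super-element
            H[3*i:3*i+3, 3*j:3*j+3] += block
            H[3*j:3*j+3, 3*i:3*i+3] += block
            H[3*i:3*i+3, 3*i:3*i+3] -= block
            H[3*j:3*j+3, 3*j:3*j+3] -= block
    evals, evecs = np.linalg.eigh(H)              # ascending eigenvalues
    return evals, evecs

# A non-degenerate tetrahedron of 4 particles: 12 modes, of which 6 are
# near-zero rigid-body translations and rotations.
coords = np.array([[0.0, 0, 0], [3.0, 0, 0], [0, 3.0, 0], [0, 0, 3.0]])
evals, modes = enm_modes(coords)
```

The lowest nonzero modes are the ones ENM studies interpret as biologically relevant collective motions.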
A measurement-based performability model for a multiprocessor system
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.
1987-01-01
A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development from the raw error data to the estimation of cumulative reward is described. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
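The semi-Markov reward idea can be sketched with a toy three-state cycle using non-exponential (Weibull) holding times; all parameters are invented, not the measured multiprocessor data:

```python
import random

# Minimal semi-Markov reward sketch: states with Weibull holding times
# (not simple exponentials) and a per-state reward rate; simulate the
# time-averaged cumulative reward (performability).
random.seed(1)
reward_rate = {"normal": 1.0, "degraded": 0.5, "failed": 0.0}
next_state = {"normal": "degraded", "degraded": "failed", "failed": "normal"}
shape = {"normal": 1.8, "degraded": 0.7, "failed": 1.0}   # Weibull shapes
scale = {"normal": 100.0, "degraded": 10.0, "failed": 2.0}

def cumulative_reward(horizon=10_000.0):
    t, s, reward = 0.0, "normal", 0.0
    while t < horizon:
        # random.weibullvariate(alpha, beta): alpha = scale, beta = shape
        hold = random.weibullvariate(scale[s], shape[s])
        hold = min(hold, horizon - t)
        reward += reward_rate[s] * hold   # reward accrues at the state's rate
        t += hold
        s = next_state[s]
    return reward / horizon               # time-averaged performability
```

Setting every shape parameter to 1.0 recovers the exponential (pure Markov) special case, which is exactly the assumption the measured data rejected.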
Reducing the healthcare costs of urban air pollution: the South African experience.
Leiman, Anthony; Standish, Barry; Boting, Antony; van Zyl, Hugo
2007-07-01
Air pollutants often have adverse effects on human health. This paper investigates and ranks a set of policy and technological interventions intended to reduce such health costs in the high population density areas of South Africa. It initially uses a simple benefit-cost rule, later extended to capture sectoral employment impacts. Although the focus of state air quality legislation is on industrial pollutants, the most efficient interventions were found to be at household level. These included such low-cost interventions as training householders to place kindling above rather than below the coal in a fireplace and insulating roofs. The first non-household policies to emerge involved vehicle fuels and technologies. Most proposed industrial interventions failed a simple cost-benefit test. The paper's policy messages are that interventions should begin with households and that further industry controls are not yet justifiable in their present forms as these relate to the health care costs of such interventions.
Optimization Under Uncertainty of Site-Specific Turbine Configurations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quick, J.; Dykes, K.; Graf, P.
Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. Lastly, if there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.
Optimization under Uncertainty of Site-Specific Turbine Configurations: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quick, Julian; Dykes, Katherine; Graf, Peter
Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. If there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.
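The risk-appetite effect can be sketched with a made-up one-parameter cost model (not the study's empirical cost model): higher risk aversion penalizes the spread of cost of energy across wind-resource draws and favors a design less exposed to the resource.

```python
import numpy as np

# Minimal optimization-under-uncertainty sketch (invented cost model):
# choose a turbine rating r to minimize mean cost of energy plus a
# risk-aversion penalty on its standard deviation over wind draws.
rng = np.random.default_rng(7)
wind = np.clip(rng.normal(8.0, 1.5, 5000), 3.0, None)  # uncertain wind (m/s)

def lcoe(r, v):
    """Toy cost of energy: capital grows with rating; energy capture is v^3
    capped at the rating, so small ratings are insensitive to the resource."""
    capital = 1000.0 + 10.0 * r
    energy = np.minimum(v ** 3, r)
    return capital / energy

def optimal_rating(risk_aversion, grid=np.linspace(100.0, 1000.0, 46)):
    scores = [lcoe(r, wind).mean() + risk_aversion * lcoe(r, wind).std()
              for r in grid]
    return grid[int(np.argmin(scores))]
```

Because large ratings ride the full variability of v^3 while small ratings are usually capped, increasing the risk-aversion weight pulls the optimum toward the smaller, more conservative design, the qualitative behavior the abstract reports.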
Optimization Under Uncertainty of Site-Specific Turbine Configurations
Quick, J.; Dykes, K.; Graf, P.; ...
2016-10-03
Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. Lastly, if there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yazawa, Kazuaki; Shakouri, Ali
The energy conversion efficiency of today's thermoelectric generators is significantly lower than that of conventional mechanical engines. Almost all of the existing research is focused on materials to improve the conversion efficiency. Here we propose a general framework to study the cost-efficiency trade-off for thermoelectric power generation. A key factor is the optimization of thermoelectric modules together with their heat source and heat sinks. Full electrical and thermal co-optimization yields a simple analytical expression for the optimum design. Based on this model, power output per unit mass can be maximized. We show that the fractional area coverage of thermoelectric elements in a module could play a significant role in reducing the cost of power generation systems.
A Cost-Utility Model of Care for Peristomal Skin Complications
Inglese, Gary; Manson, Andrea; Townshend, Arden
2016-01-01
PURPOSE: The aim of this study was to evaluate the economic and humanistic implications of using ostomy components to prevent subsequent peristomal skin complications (PSCs) in individuals who experience an initial, leakage-related PSC event. DESIGN: Cost-utility analysis. METHODS: We developed a simple decision model to consider, from a payer's perspective, PSCs managed with and without the use of ostomy components over 1 year. The model evaluated the extent to which outcomes associated with the use of ostomy components (PSC events avoided; quality-adjusted life days gained) offset the costs associated with their use. RESULTS: Our base case analysis of 1000 hypothetical individuals over 1 year assumes that using ostomy components following a first PSC reduces recurrent events versus PSC management without components. In this analysis, component acquisition costs were largely offset by lower resource use for ostomy supplies (barriers; pouches) and lower clinical utilization to manage PSCs. The overall annual average resource use for individuals using components was about 6.3% ($139) higher versus individuals not using components. Each PSC event avoided yielded, on average, 8 additional quality-adjusted life days over 1 year. CONCLUSIONS: In our analysis, (1) acquisition costs for ostomy components were offset in whole or in part by the use of fewer ostomy supplies to manage PSCs and (2) use of ostomy components to prevent PSCs produced better outcomes (fewer repeat PSC events; more health-related quality-adjusted life days) over 1 year compared to not using components. PMID:26633166
10 CFR 455.63 - Cost-effectiveness testing.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) The simple payback period of each energy conservation measure (except measures to shift demand, or...), by the estimated annual cost savings accruing from the measure (adjusted for demand charges), as... non-renewable fuels displaced less the annual cost of the renewable fuel, if any, and the annual cost...
10 CFR 455.63 - Cost-effectiveness testing.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) The simple payback period of each energy conservation measure (except measures to shift demand, or...), by the estimated annual cost savings accruing from the measure (adjusted for demand charges), as... non-renewable fuels displaced less the annual cost of the renewable fuel, if any, and the annual cost...
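The simple payback computation the rule describes reduces to one division; the figures below are illustrative, and the rule's detailed adjustments (demand charges, renewable fuel costs) are omitted:

```python
# Minimal sketch of a simple-payback test in the spirit of 10 CFR 455.63
# (invented numbers; not the regulation's full adjustment procedure):
# payback = installed cost / net annual cost savings.
def simple_payback(installed_cost, annual_savings, annual_op_cost=0.0):
    """Years to recover the measure's cost from net annual savings."""
    net = annual_savings - annual_op_cost
    if net <= 0:
        return float("inf")          # the measure never pays back
    return installed_cost / net

years = simple_payback(12_000.0, 1_500.0)   # -> 8.0 years
```

A measure passes a payback screen when this figure falls below the program's threshold life; measures with non-positive net savings are screened out immediately.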
Sicras-Mainar, Antoni; Velasco-Velasco, Soledad; Navarro-Artieda, Ruth; Blanca Tamayo, Milagrosa; Aguado Jodar, Alba; Ruíz Torrejón, Amador; Prados-Torres, Alexandra; Violan-Fors, Concepción
2012-06-01
To compare three methods of measuring multiple morbidity according to the use of health resources (cost of care) in primary healthcare (PHC). Retrospective study using computerized medical records. Thirteen PHC teams in Catalonia (Spain). Assigned patients requiring care in 2008. The socio-demographic variables were co-morbidity and costs. Methods of comparison were: a) Combined Comorbidity Index (CCI): an in-house index developed from the scores of acute and chronic episodes; b) Charlson Index (ChI); and c) Adjusted Clinical Groups case-mix: resource use bands (RUB). The cost model was constructed by differentiating between fixed (operational) and variable costs. Three multiple linear regression models were developed to assess the explanatory power of each measurement of co-morbidity, compared using the coefficient of determination (R^2), p<.05. The study included 227,235 patients. The mean unit cost was €654.2. The CCI explained R^2=50.4%, the ChI R^2=29.2% and the RUB R^2=39.7% of the variability in cost. The behaviour of the CCI is acceptable, albeit with low scores (1 to 3 points), showing inconclusive results. The CCI may be a simple method of predicting PHC costs in routine clinical practice. If confirmed, these results will allow improvements in the comparison of the case-mix. Copyright © 2011 Elsevier España, S.L. All rights reserved.
Cost effectiveness of drug eluting coronary artery stenting in a UK setting: cost-utility study.
Bagust, A; Grayson, A D; Palmer, N D; Perry, R A; Walley, T
2006-01-01
To assess the cost effectiveness of drug eluting stents (DES) compared with conventional stents for treatment of symptomatic coronary artery disease in the UK. Cost-utility analysis of audit-based patient subgroups by means of a simple economic model. Tertiary care. 12 month audit data for 2884 patients receiving percutaneous coronary intervention with stenting at the Cardiothoracic Centre Liverpool between January 2000 and December 2002. Risk of repeat revascularisation within 12 months of index procedure and reduction in risk from use of DES. Economic modelling was used to estimate the cost-utility ratio and threshold price premium. Four factors were identified for patients undergoing elective surgery (n = 1951) and two for non-elective surgery (n = 933) to predict risk of repeat revascularisation within 12 months. Most patients fell within the subgroup with lowest risk (57% of the elective surgery group with 5.6% risk and 91% of the non-elective surgery group with 9.9% risk). Modelled cost-utility ratios were acceptable for only one group of high risk patients undergoing non-elective surgery (only one patient in audit data). Restricting the number of DES for each patient improved results marginally: 4% of stents could then be drug eluting on economic grounds. The threshold price premium justifying 90% substitution of conventional stents was estimated to be £112 (212 USD; 162 EUR) for sirolimus stents or £89 (167 USD; 130 EUR) for paclitaxel stents. At current UK prices, DES are not cost effective compared with conventional stents except for a small minority of patients. Although the technology is clearly effective, general substitution is not justified unless the price premium falls substantially.
Energy technologies evaluated against climate targets using a cost and carbon trade-off curve.
Trancik, Jessika E; Cross-Call, Daniel
2013-06-18
Over the next few decades, severe cuts in emissions from energy will be required to meet global climate-change mitigation goals. These emission reductions imply a major shift toward low-carbon energy technologies, and the economic cost and technical feasibility of mitigation are therefore highly dependent upon the future performance of energy technologies. However, existing models do not readily translate into quantitative targets against which we can judge the dynamic performance of technologies. Here, we present a simple, new model for evaluating energy-supply technologies and their improvement trajectories against climate-change mitigation goals. We define a target for technology performance in terms of the carbon intensity of energy, consistent with emission reduction goals, and show how the target depends upon energy demand levels. Because the cost of energy determines the level of adoption, we then compare supply technologies to one another and to this target based on their position on a cost and carbon trade-off curve and how the position changes over time. Applying the model to U.S. electricity, we show that the target for carbon intensity will approach zero by midcentury for commonly cited emission reduction goals, even under a high demand-side efficiency scenario. For Chinese electricity, the carbon intensity target is relaxed and less certain because of lesser emission reductions and greater variability in energy demand projections. Examining a century-long database on changes in the cost-carbon space, we find that the magnitude of changes in cost and carbon intensity that are required to meet future performance targets is not unprecedented, providing some evidence that these targets are within engineering reach. The cost and carbon trade-off curve can be used to evaluate the dynamic performance of existing and new technologies against climate-change mitigation goals.
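The carbon-intensity target described above is, at its core, a ratio of allowed emissions to projected energy demand. A minimal sketch of that logic, with made-up placeholder figures rather than the paper's data:

```python
# Carbon intensity target = allowed emissions / projected energy demand.
# All figures below are illustrative placeholders, not values from the paper.

def intensity_target(base_emissions_mt, reduction_fraction, demand_twh):
    """Allowed carbon intensity in gCO2/kWh, given an emissions cut
    relative to a base year and a demand projection."""
    allowed_mt = base_emissions_mt * (1 - reduction_fraction)
    return allowed_mt / demand_twh * 1000  # Mt/TWh -> g/kWh

# An 80% cut from a 2000 MtCO2 base year, under two demand projections:
low = intensity_target(2000, 0.80, demand_twh=4000)
high = intensity_target(2000, 0.80, demand_twh=6000)
print(round(low, 1), round(high, 1))
```

The demand dependence is visible directly: higher projected demand tightens (lowers) the allowable intensity, which is why the Chinese target in the abstract is described as less certain under greater demand variability.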
Three essays on auction markets
NASA Astrophysics Data System (ADS)
Shunda, Nicholas James
This dissertation contains a series of theoretical investigations of auction markets. The essays it contains cover wholesale electricity markets, a popular selling mechanism on eBay, and supplier entry into multi-unit procurement auctions. The study in Chapter 1 compares the procurement cost-minimizing and productive efficiency performance of the auction mechanism used by independent system operators in wholesale electricity auction markets in the U.S. with that of a proposed alternative. The current practice allocates energy contracts as if the auction featured a discriminatory final payment method when, in fact, the markets are uniform price auctions. The proposed alternative explicitly accounts for the market-clearing price during the allocation phase. We find that the proposed alternative largely outperforms the current practice on the basis of procurement costs in the context of simple auction markets featuring both day-ahead and real-time auctions and that the procurement cost advantage of the alternative is complete when we simulate the effects of increased competition. We also find that a tradeoff between the objectives of procurement cost minimization and productive efficiency emerges in our simple auction markets and persists in the face of increased competition. The study in Chapter 2 considers a possible rationale for an auction with a buy price. In an auction with a buy price, the seller provides bidders with an option to end the auction early by accepting a transaction at a posted price. The "Buy-It-Now" option on eBay is a leading example of an auction with a buy price. The study develops a model of an auction with a buy price in which bidders use the auction's reserve price and buy price to formulate a reference price. 
The model both explains why a revenue-maximizing seller would want to augment her auction with a buy price and demonstrates that the seller sets a higher reserve price when she can affect the bidders' reference price through the auction's reserve price and buy price than when she can affect the bidders' reference price through the auction's reserve price only. Introducing a small reference-price effect can shrink the range of buy prices bidders are willing to exercise. The comparative statics properties of bidding behavior are in sharp contrast to equilibrium behavior in other models where the existence and size of the auction's buy price have no effect on bidding behavior. The study in Chapter 3 investigates endogenous entry in multi-unit auctions. We formulate and study models of multi-unit discriminatory and uniform price auctions and investigate the entry incentives and procurement costs they generate in equilibrium. We study two types of endogenous entry: in auctions with "interim entry costs," suppliers know their private cost information before deciding whether or not to undertake entry; in auctions with ex ante entry costs, suppliers do not know their private cost information before deciding whether or not to enter. The discriminatory and uniform price auctions are efficient and procurement cost equivalent in all the environments we study. With interim entry costs, the two auctions provide identical entry incentives. In contrast, with ex ante entry costs, suppliers enter the discriminatory auction at a higher rate than they enter the uniform price auction.
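The contrast in Chapter 1 between discriminatory and uniform final payment methods can be illustrated with a toy multi-unit procurement auction; the bid values below are hypothetical:

```python
# Toy procurement auction: buy `demand` units from the lowest bids.
# Under discriminatory pricing each accepted supplier is paid its own bid;
# under uniform pricing all accepted suppliers are paid the market-clearing
# (highest accepted) bid. Bids are hypothetical, one unit per supplier.

def procurement_cost(bids, demand, rule):
    accepted = sorted(bids)[:demand]
    if rule == "discriminatory":
        return sum(accepted)
    if rule == "uniform":
        return max(accepted) * demand
    raise ValueError(f"unknown rule: {rule}")

bids = [20, 25, 30, 40, 55]  # $/MWh
disc = procurement_cost(bids, demand=3, rule="discriminatory")
unif = procurement_cost(bids, demand=3, rule="uniform")
print(disc, unif)  # 75 90
```

With identical bids the uniform rule costs weakly more ex post, but equilibrium bidding differs across the two rules, so realized procurement costs need not rank this way; that interaction between the payment rule and bidding incentives is what the dissertation's simulations probe.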
Zhang, Xiaomei; Yu, Hongwen; Yang, Hongjun; Wan, Yuchun; Hu, Hong; Zhai, Zhuang; Qin, Jieming
2015-01-01
A simple sol-gel method using non-toxic and cost-effective precursors has been developed to prepare graphene oxide (GO)/cellulose bead (GOCB) composites for removal of dye pollutants. Taking advantage of the combined benefits of GO and cellulose, the prepared GOCB composites exhibit excellent removal efficiency towards malachite green (>96%) and can be reused more than five times through a simple filtration method. The high decontamination performance of the GOCB system is strongly dependent on the amount of GO encapsulated, temperature, and pH value. In addition, the adsorption behavior of this new adsorbent fits well with the Langmuir isotherm and a pseudo-second-order kinetic model. Copyright © 2014 Elsevier Inc. All rights reserved.
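Adsorption data of this kind are commonly checked against the Langmuir isotherm, q_e = q_max·K·C_e / (1 + K·C_e), often via its linearized form C_e/q_e = C_e/q_max + 1/(K·q_max). A minimal fitting sketch with synthetic data (the parameter values are illustrative, not the paper's):

```python
import numpy as np

# Synthetic equilibrium data generated from a known Langmuir isotherm
# (q_max and K below are illustrative, not fitted values from the paper)
q_max, K = 120.0, 0.05                                   # mg/g, L/mg
Ce = np.array([5, 10, 20, 50, 100, 200], dtype=float)    # equilibrium conc., mg/L
qe = q_max * K * Ce / (1 + K * Ce)                       # adsorbed amount, mg/g

# Linearized Langmuir fit: Ce/qe = Ce/q_max + 1/(K*q_max)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
q_max_fit = 1 / slope
K_fit = slope / intercept
print(round(q_max_fit, 1), round(K_fit, 3))
```

A good linear fit of C_e/q_e against C_e (and a recovered q_max close to the plateau capacity) is the usual evidence that the Langmuir model describes the adsorbent.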
A design study for a simple-to-fly, constant attitude light aircraft
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Humphreys, D. E.; Montoya, R. J.; Rickard, W. W.; Wilkinson, I. E.
1973-01-01
The activities during a four-year study by doctoral students to evolve in detail a design for a simple-to-fly, constant attitude light airplane are described. The study indicated that such aircraft could materially reduce the hazards to light airplane occupants which arise from the high pilot work load and poor visibility that occur during landing. Preliminary cost studies indicate that in volume production this system would increase the cost of the aircraft in roughly the same fashion that automatic transmission, power steering, power brakes, and cruise control increase the cost of a compact car.
Uncertainty and the Social Cost of Methane Using Bayesian Constrained Climate Models
NASA Astrophysics Data System (ADS)
Errickson, F. C.; Anthoff, D.; Keller, K.
2016-12-01
Social cost estimates of greenhouse gases are important for the design of sound climate policies and are also plagued by uncertainty. One major source of uncertainty stems from the simplified representation of the climate system used in the integrated assessment models that provide these social cost estimates. We explore how uncertainty over the social cost of methane varies with the way physical processes and feedbacks in the methane cycle are modeled by (i) coupling three different methane models to a simple climate model, (ii) using MCMC to perform a Bayesian calibration of the three coupled climate models that simulates direct sampling from the joint posterior probability density function (pdf) of model parameters, and (iii) producing probabilistic climate projections that are then used to calculate the Social Cost of Methane (SCM) with the DICE and FUND integrated assessment models. We find that including a temperature feedback in the methane cycle acts as an additional constraint during the calibration process and results in a correlation between the tropospheric lifetime of methane and several climate model parameters. This correlation is not seen in the models lacking this feedback. Several of the estimated marginal pdfs of the model parameters also exhibit different distributional shapes and expected values depending on the methane model used. As a result, probabilistic projections of the climate system out to the year 2300 exhibit different levels of uncertainty and magnitudes of warming for each of the three models under an RCP8.5 scenario. We find these differences in climate projections result in differences in the distributions and expected values for our estimates of the SCM. We also examine uncertainty about the SCM by performing a Monte Carlo analysis using a distribution for the climate sensitivity while holding all other climate model parameters constant. 
Our SCM estimates using the Bayesian calibration are lower and exhibit less uncertainty about extremely high values in the right tail of the distribution compared to the Monte Carlo approach. This finding has important climate policy implications and suggests previous work that accounts for climate model uncertainty by only varying the climate sensitivity parameter may overestimate the SCM.
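The qualitative point of that comparison, that varying only climate sensitivity from a wide distribution can produce a heavier right tail than a jointly constrained calibration, can be illustrated with a toy Monte Carlo. The distributions and the damage proxy below are invented for illustration; they are not the study's calibration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

def damages(ecs):
    """Toy proxy: marginal damages grow convexly with equilibrium warming.
    Purely illustrative, not an IAM damage function."""
    return 10.0 * ecs ** 2

# (a) vary climate sensitivity alone, drawn from a wide, heavy-tailed prior
ecs_prior = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=n)
# (b) a Bayesian-style joint calibration typically narrows the distribution
ecs_post = rng.lognormal(mean=np.log(3.0), sigma=0.2, size=n)

tail_a = np.quantile(damages(ecs_prior), 0.99)
tail_b = np.quantile(damages(ecs_post), 0.99)
print(tail_a > tail_b)  # True: heavier right tail when only ECS is varied
```

Because damages are convex in warming, the extra spread in the unconstrained case inflates the upper quantiles disproportionately, which is the mechanism behind the overestimation the abstract describes.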
Mesoscale carbon sequestration site screening and CCS infrastructure analysis.
Keating, Gordon N; Middleton, Richard S; Stauffer, Philip H; Viswanathan, Hari S; Letellier, Bruce C; Pasqualini, Donatella; Pawar, Rajesh J; Wolfsberg, Andrew V
2011-01-01
We explore carbon capture and sequestration (CCS) at the mesoscale, a level of study between regional carbon accounting and highly detailed reservoir models for individual sites. We develop an approach to CO2 sequestration site screening for industries or energy development policies that involves identification of appropriate sequestration basin, analysis of geologic formations, definition of surface sites, design of infrastructure, and analysis of CO2 transport and storage costs. Our case study involves carbon management for potential oil shale development in the Piceance-Uinta Basin, CO and UT. This study uses new capabilities of the CO2-PENS model for site screening, including reservoir capacity, injectivity, and cost calculations for simple reservoirs at multiple sites. We couple this with a model of optimized source-sink-network infrastructure (SimCCS) to design pipeline networks and minimize CCS cost for a given industry or region. The CLEAR(uff) dynamical assessment model calculates the CO2 source term for various oil production levels. Nine sites in a 13,300 km² area have the capacity to store 6.5 GtCO2, corresponding to shale-oil production of 1.3 Mbbl/day for 50 years (about 1/4 of U.S. crude oil production). Our results highlight the complex, nonlinear relationship between the spatial deployment of CCS infrastructure and the oil-shale production rate.
Linking Physical Climate Research and Economic Assessments of Mitigation Policies
NASA Astrophysics Data System (ADS)
Stainforth, David; Calel, Raphael
2017-04-01
Evaluating climate change policies requires economic assessments which balance the costs and benefits of climate action. A certain class of Integrated Assessment Models (IAMs) is widely used for this type of analysis; DICE, PAGE and FUND are three of the most influential. In the economics community there has been much discussion and debate about the economic assumptions implemented within these models. Two aspects in particular have gained much attention: i) the costs of damages resulting from climate change - the so-called damage function, and ii) the choice of discount rate applied to future costs and benefits. There has, however, been rather little attention given to the consequences of the choices made in the physical climate models within these IAMs. Here we discuss the practical aspects of the implementation of the physical models in these IAMs, as well as the implications of choices made in these physical science components for economic assessments [1]. We present a simple breakdown of how these IAMs differently represent the climate system as a consequence of differing underlying physical models, different parametric assumptions (for parameters representing, for instance, feedbacks and ocean heat uptake) and different numerical approaches to solving the models. We present the physical and economic consequences of these differences and reflect on how we might better incorporate the latest physical science understanding in economic models of this type. [1] Calel, R. and Stainforth D.A., "On the Physics of Three Integrated Assessment Models", Bulletin of the American Meteorological Society, in press.
Gerke, Oke; Poulsen, Mads H; Høilund-Carlsen, Poul Flemming
2015-01-01
Diagnostic studies of accuracy targeting sensitivity and specificity are commonly done in a paired design in which all modalities are applied in each patient, whereas cost-effectiveness and cost-utility analyses are usually assessed either directly alongside, or indirectly by stochastic modeling based on, larger randomized controlled trials (RCTs). However, the conduct of RCTs is hampered in an environment such as ours, in which technology is rapidly evolving; as a result, relatively few RCTs are available. Therefore, we investigated to what extent paired diagnostic studies of accuracy can also be used to shed light on the economic implications of considering a new diagnostic test. We propose a simple decision-tree model-based cost-utility analysis of a diagnostic test compared to the current standard procedure, and exemplify this approach with published data from lymph node staging of prostate cancer. Average procedure costs were taken from the Danish Diagnosis Related Groups Tariff in 2013, and life expectancy was estimated for an ideal 60-year-old patient based on prostate cancer stage and prostatectomy or radiation and chemotherapy. Quality-adjusted life-years (QALYs) were deduced from the literature, and an incremental cost-effectiveness ratio (ICER) was used to compare lymph node dissection with respective histopathological examination (reference standard) and 18F-fluoromethylcholine positron emission tomography/computed tomography (FCH-PET/CT). Lower bounds on the sensitivity and specificity of FCH-PET/CT were established at which replacing the reference standard with FCH-PET/CT comes with a trade-off between worse effectiveness and lower costs. Compared to the reference standard in a diagnostic accuracy study, any imperfection in the accuracy of a diagnostic test implies that replacing the reference standard generates a loss in effectiveness and utility.
We conclude that diagnostic studies of accuracy can be put to a more extensive use, over and above a mere indication of sensitivity and specificity of an imaging test, and that health economic considerations should be undertaken when planning a prospective diagnostic accuracy study. These endeavors will prove especially fruitful when comparing several imaging techniques with one another, or the same imaging technique using different tracers, with an independent reference standard for the evaluation of results.
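The core ICER computation in such a decision-tree comparison is simple arithmetic on incremental costs and effects. A sketch with hypothetical costs and QALYs (not the Danish tariff figures from the study):

```python
def icer(cost_new, effect_new, cost_ref, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY.
    When the new test is cheaper but less effective (both increments
    negative), the ratio reads as savings given up per QALY lost."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# Hypothetical example: imaging-based staging is cheaper but, because of
# imperfect sensitivity/specificity, slightly less effective.
cost_surgery, qaly_surgery = 12_000.0, 10.00   # reference standard
cost_pet,     qaly_pet     = 9_000.0,  9.95    # imaging-based staging
ratio = icer(cost_pet, qaly_pet, cost_surgery, qaly_surgery)
print(round(ratio))  # savings per QALY forgone
```

The "lower bounds of sensitivity and specificity" analysis amounts to finding the accuracy at which this trade-off crosses a chosen willingness-to-pay threshold.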
Kahn, James G.; Jiwani, Aliya; Gomez, Gabriela B.; Hawkes, Sarah J.; Chesson, Harrell W.; Broutet, Nathalie; Kamb, Mary L.; Newman, Lori M.
2014-01-01
Background Syphilis in pregnancy imposes a significant global health and economic burden. More than half of cases result in serious adverse events, including infant mortality and infection. The annual global burden from mother-to-child transmission (MTCT) of syphilis is estimated at 3.6 million disability-adjusted life years (DALYs) and $309 million in medical costs. Syphilis screening and treatment is simple, effective, and affordable, yet, worldwide, most pregnant women do not receive these services. We assessed cost-effectiveness of scaling-up syphilis screening and treatment in existing antenatal care (ANC) programs in various programmatic, epidemiologic, and economic contexts. Methods and Findings We modeled the cost, health impact, and cost-effectiveness of expanded syphilis screening and treatment in ANC, compared to current services, for 1,000,000 pregnancies per year over four years. We defined eight generic country scenarios by systematically varying three factors: current maternal syphilis testing and treatment coverage, syphilis prevalence in pregnant women, and the cost of healthcare. We calculated program and net costs, DALYs averted, and net costs per DALY averted over four years in each scenario. Program costs are estimated at $4,142,287 – $8,235,796 per million pregnant women (2010 USD). Net costs, adjusted for averted medical care and current services, range from net savings of $12,261,250 to net costs of $1,736,807. The program averts an estimated 5,754 – 93,484 DALYs, yielding net savings in four scenarios, and a cost per DALY averted of $24 – $111 in the four scenarios with net costs. Results were robust in sensitivity analyses. Conclusions Eliminating MTCT of syphilis through expanded screening and treatment in ANC is likely to be highly cost-effective by WHO-defined thresholds in a wide range of settings. Countries with high prevalence, low current service coverage, and high healthcare cost would benefit most. 
Future analyses can be tailored to countries using local epidemiologic and programmatic data. PMID:24489931
Natural wind variability triggered drop in German redispatch volume and costs from 2015 to 2016.
Wohland, Jan; Reyers, Mark; Märker, Carolin; Witthaut, Dirk
2018-01-01
Avoiding dangerous climate change necessitates the decarbonization of electricity systems within the next few decades. In Germany, this decarbonization is based on increased exploitation of variable renewable electricity sources such as wind and solar power. While system security has remained constantly high, the integration of renewables causes additional costs. In 2015, the costs of grid management reached an all-time high of about €1 billion. Despite the addition of renewable capacity, these costs dropped substantially in 2016. We therefore investigate the effect of natural climate variability on grid management costs. Focusing on redispatch as a main cost driver, we show that the decline was triggered by natural wind variability. In particular, we find that 2016 was a weak year in terms of wind generation averages and the occurrence of westerly circulation weather types. Moreover, we show that a simple model based on the wind generation time series is skillful in detecting redispatch events on timescales of weeks and beyond. As a consequence, fluctuations in annual redispatch costs on the order of hundreds of millions of euros need to be understood, and communicated, as a normal feature of the current system arising from natural wind variability.
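A detection model of the kind described, flagging likely redispatch periods from the wind generation time series alone, can be sketched as a simple threshold classifier scored by hit rate versus false-alarm rate. The data and threshold below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(7)
weeks = 200
wind_gen = rng.gamma(shape=4.0, scale=2.0, size=weeks)  # synthetic weekly wind feed-in

# Synthetic "truth": congestion (and hence redispatch) occurs in high-wind weeks
redispatch = wind_gen > np.quantile(wind_gen, 0.8)

# Simple model: predict redispatch whenever wind generation exceeds a threshold
threshold = np.quantile(wind_gen, 0.75)
predicted = wind_gen > threshold

hit_rate = (predicted & redispatch).sum() / redispatch.sum()
false_alarm = (predicted & ~redispatch).sum() / (~redispatch).sum()
print(hit_rate > false_alarm)  # True: the model has skill on this toy data
```

A hit rate exceeding the false-alarm rate is the minimal notion of skill; on real data one would score the wind-based model against recorded redispatch volumes rather than a constructed truth.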
A simple approach to lifetime learning in genetic programming-based symbolic regression.
Azad, Raja Muhammad Atif; Ryan, Conor
2014-01-01
Genetic programming (GP) coarsely models natural evolution to evolve computer programs. Unlike in nature, where individuals can often improve their fitness through lifetime experience, the fitness of GP individuals generally does not change during their lifetime, and there is usually no opportunity to pass on acquired knowledge. This paper introduces the Chameleon system to address this discrepancy and augment GP with lifetime learning by adding a simple local search that operates by tuning the internal nodes of individuals. Although not the first attempt to combine local search with GP, its simplicity means that it is easy to understand and cheap to implement. A simple cache is added which leverages the local search to reduce the tuning cost to a small fraction of the expected cost, and we provide a theoretical upper limit on the maximum tuning expense given the average tree size of the population and show that this limit grows very conservatively as the average tree size of the population increases. We show that Chameleon uses available genetic material more efficiently by exploring more actively than with standard GP, and demonstrate that not only does Chameleon outperform standard GP (on both training and test data) over a number of symbolic regression type problems, it does so by producing smaller individuals and it works harmoniously with two other well-known extensions to GP, namely, linear scaling and a diversity-promoting tournament selection method.
Accounting for the drug life cycle and future drug prices in cost-effectiveness analysis.
Hoyle, Martin
2011-01-01
Economic evaluations of health technologies typically assume constant real drug prices and model only the cohort of patients currently eligible for treatment. It has recently been suggested that, in the UK, we should assume that real drug prices decrease at 4% per annum and, in New Zealand, that real drug prices decrease at 2% per annum and at patent expiry the drug price falls. It has also recently been suggested that we should model multiple future incident cohorts. In this article, the cost effectiveness of drugs is modelled based on these ideas. Algebraic expressions are developed to capture all costs and benefits over the entire life cycle of a new drug. The lifetime of a new drug in the UK, a key model parameter, is estimated as 33 years, based on the historical lifetime of drugs in England over the last 27 years. Under the proposed methodology, cost effectiveness is calculated for seven new drugs recently appraised in the UK. Cost effectiveness as assessed in the future is also estimated. Whilst the article is framed in mathematics, the findings and recommendations are also explained in non-mathematical language. The 'life-cycle correction factor' is introduced, which is used to convert estimates of cost effectiveness as traditionally calculated into estimates under the proposed methodology. Under the proposed methodology, all seven drugs appear far more cost effective in the UK than published. For example, the incremental cost-effectiveness ratio decreases by 46%, from £61,900 to £33,500 per QALY, for cinacalcet versus best supportive care for end-stage renal disease, and by 45%, from £31,100 to £17,000 per QALY, for imatinib versus interferon-α for chronic myeloid leukaemia. Assuming real drug prices decrease over time, the chance that a drug is publicly funded increases over time, and is greater when modelling multiple cohorts than with a single cohort. 
Using the methodology (compared with traditional methodology) all drugs in the UK and New Zealand are predicted to be more cost effective. It is suggested that the willingness-to-pay threshold should be reduced in the UK and New Zealand. The ranking of cost effectiveness will change with drugs assessed as relatively more cost effective and medical devices and surgical procedures relatively less cost effective than previously thought. The methodology is very simple to implement. It is suggested that the model should be parameterized for other countries.
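The effect of assuming declining real drug prices can be sketched as a discounted cost stream over the drug's lifetime. The 33-year lifetime and 4% decline follow the article's UK setup; the annual cost and discount rate are illustrative assumptions:

```python
# Present value of annual drug costs over a drug's life cycle, with and
# without an assumed real price decline. Annual cost and discount rate
# are illustrative; the 33-year lifetime and 4% decline follow the text.

def pv_costs(annual_cost, years, discount_rate, price_decline=0.0):
    return sum(
        annual_cost * (1 - price_decline) ** t / (1 + discount_rate) ** t
        for t in range(years)
    )

constant = pv_costs(10_000, years=33, discount_rate=0.035)
declining = pv_costs(10_000, years=33, discount_rate=0.035, price_decline=0.04)
print(round(declining / constant, 2))  # lifetime drug costs shrink substantially
```

Because the benefit side is unaffected while the cost side shrinks, the ICER falls, which is the direction of the cinacalcet and imatinib revisions quoted above; the article's 'life-cycle correction factor' packages this ratio analytically.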
SIGNALING EFFICACY DRIVES THE EVOLUTION OF LARGER SEXUAL ORNAMENTS BY SEXUAL SELECTION
Tazzyman, Samuel J; Iwasa, Yoh; Pomiankowski, Andrew
2014-01-01
Why are there so few small secondary sexual characters? Theoretical models predict that sexual selection should lead to reduction as often as exaggeration, and yet we mainly associate secondary sexual ornaments with exaggerated features such as the peacock's tail. We review the literature on mate choice experiments for evidence of reduced sexual traits. This shows that reduced ornamentation is effectively impossible in certain types of ornamental traits (behavioral, pheromonal, or color-based traits, and morphological ornaments for which the natural selection optimum is no trait), but that there are many examples of morphological traits that would permit reduction. Yet small sexual traits are very rarely seen. We analyze a simple mathematical model of Fisher's runaway process (the null model for sexual selection). Our analysis shows that the imbalance cannot be wholly explained by larger ornaments being less costly than smaller ornaments, nor by preferences for larger ornaments being less costly than preferences for smaller ornaments. Instead, we suggest that asymmetry in signaling efficacy limits runaway to trait exaggeration. PMID:24099137
Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks
Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.
2011-01-01
Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
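The iterative "most dissimilar site" selection can be sketched as a greedy farthest-point search over standardized environmental variables; the candidate grid and factor values below are synthetic, and the seeding rule (start near the environmental mean) is an assumption of this sketch rather than the study's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic candidate sites described by standardized environmental factors
# (e.g., temperature, precipitation, elevation, vegetation score)
sites = rng.normal(size=(500, 4))

def select_dissimilar(sites, k):
    """Greedy farthest-point selection: start at the site nearest the
    environmental mean, then repeatedly add the candidate whose distance
    to its closest already-selected site is largest."""
    chosen = [int(np.argmin(np.linalg.norm(sites - sites.mean(0), axis=1)))]
    while len(chosen) < k:
        dist_to_chosen = np.min(
            np.linalg.norm(sites[:, None] - sites[chosen], axis=2), axis=1
        )
        chosen.append(int(np.argmax(dist_to_chosen)))
    return chosen

picks = select_dissimilar(sites, k=8)
print(len(picks), len(set(picks)))  # 8 distinct sites
```

Each added site maximizes environmental novelty relative to the sites already selected, which is why a handful of iterations can capture most of the domain's environmental envelope.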
24 CFR 200.97 - Adjustments resulting from cost certification.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Adjustments resulting from cost certification. (a) Fee simple site. Upon receipt of the mortgagor's... held under a leasehold or other interest less than a fee, the cost, if any, of acquiring the leasehold or other interest is considered an allowable expense which may be added to actual cost provided that...
Proton beam therapy and accountable care: the challenges ahead.
Elnahal, Shereef M; Kerstiens, John; Helsper, Richard S; Zietman, Anthony L; Johnstone, Peter A S
2013-03-15
Proton beam therapy (PBT) centers have drawn increasing public scrutiny for their high cost. The behavior of such facilities is likely to change under the Affordable Care Act. We modeled how accountable care reform may affect the financial standing of PBT centers and their incentives to treat complex patient cases. We used operational data and publicly listed Medicare rates to model the relationship between financial metrics for PBT center performance and case mix (defined as the percentage of complex cases, such as pediatric central nervous system tumors). Financial metrics included total daily revenues and debt coverage (daily revenues - daily debt payments). Fee-for-service (FFS) and accountable care (ACO) reimbursement scenarios were modeled. Sensitivity analyses were performed around the room time required to treat noncomplex cases: simple (30 minutes), prostate (24 minutes), and short prostate (15 minutes). Sensitivity analyses were also performed for total machine operating time (14, 16, and 18 h/d). Reimbursement under ACOs could reduce daily revenues in PBT centers by up to 32%. The incremental revenue gained by replacing 1 complex case with noncomplex cases was lowest for simple cases and highest for short prostate cases. ACO rates reduced this incremental incentive by 53.2% for simple cases and 41.7% for short prostate cases. To cover daily debt payments after ACO rates were imposed, 26% fewer complex patients were allowable at varying capital costs and interest rates. Only facilities with total machine operating times of 18 hours per day would cover debt payments in all scenarios. Debt-financed PBT centers will face steep challenges to remain financially viable after ACO implementation. Paradoxically, reduced reimbursement for noncomplex cases will require PBT centers to treat more such cases over cases for which PBT has demonstrated superior outcomes. Relative losses will be highest for those facilities focused primarily on treating noncomplex cases. 
Copyright © 2013 Elsevier Inc. All rights reserved.
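The financial model described can be sketched as a simple capacity-and-case-mix calculation: room time per case determines daily throughput, and reimbursement per case determines revenue. All rates, times, and debt figures below are invented placeholders, not the study's Medicare data:

```python
# Toy PBT-center model: daily revenue and debt coverage as a function of
# case mix. All rates, times, and debt figures are illustrative.

def daily_revenue(hours, complex_share, rate_complex=2000.0, rate_simple=1000.0,
                  mins_complex=60.0, mins_simple=24.0):
    minutes = hours * 60
    # average room minutes per treatment slot under the given case mix
    avg_mins = complex_share * mins_complex + (1 - complex_share) * mins_simple
    slots = minutes / avg_mins
    avg_rate = complex_share * rate_complex + (1 - complex_share) * rate_simple
    return slots * avg_rate

debt_payment = 45_000.0  # daily debt service, illustrative
for mix in (0.0, 0.25, 0.5):
    rev = daily_revenue(hours=16, complex_share=mix)
    print(mix, round(rev), rev - debt_payment > 0)
```

Even in this toy version, complex cases earn less per room-minute than short noncomplex cases, so raising the complex share lowers daily revenue, which is the perverse incentive the study quantifies under ACO rates.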
Asymmetric public goods game cooperation through pest control.
Reeves, T; Ohtsuki, H; Fukui, S
2017-12-21
Cooperation in a public goods game has been studied extensively to find the conditions for sustaining the commons, yet the effect of asymmetry between agents has been explored very little. Here we study a game theoretic model of cooperation for pest control among farmers. In our simple model, each farmer has a paddy of the same size arranged adjacently on a line. A pest outbreak occurs at an abandoned paddy at one end of the line, directly threatening the frontier farmer adjacent to it. Each farmer pays a cost of his or her choice to an agricultural collective, and the total sum held by the collective is used for pest control, with success probability increasing with the sum. Because the farmers' incentives depend on their distance from the pest outbreak, our model is an asymmetric public goods game. We derive each farmer's cost strategy at the Nash equilibrium. We find that asymmetry among farmers leads to a few unexpected outcomes. The individual costs at the equilibrium do not necessarily increase with how much the future is valued but rather show threshold behavior. Moreover, an increase in the number of farmers can sometimes paradoxically undermine pest prevention. A comparison with a symmetric public goods game model reveals that the farmer at the greatest risk pays a disproportionate amount of cost in the asymmetric game, making the use of agricultural lands less sustainable. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Schmidt, James R; De Houwer, Jan; Rothermund, Klaus
2016-12-01
The current paper presents an extension of the Parallel Episodic Processing model. The model is developed for simulating behaviour in performance (i.e., speeded response time) tasks and learns to anticipate both how and when to respond based on retrieval of memories of previous trials. With one fixed parameter set, the model is shown to successfully simulate a wide range of different findings. These include: practice curves in the Stroop paradigm, contingency learning effects, learning acquisition curves, stimulus-response binding effects, mixing costs, and various findings from the attentional control domain. The results demonstrate several important points. First, the same retrieval mechanism parsimoniously explains stimulus-response binding, contingency learning, and practice effects. Second, as performance improves with practice, any effects will shrink with it. Third, a model of simple learning processes is sufficient to explain phenomena that are typically (but perhaps incorrectly) interpreted in terms of higher-order control processes. More generally, we argue that computational models with a fixed parameter set and wider breadth should be preferred over those that are restricted to a narrow set of phenomena. Copyright © 2016 Elsevier Inc. All rights reserved.
Simple predictive model for Early Childhood Caries of Chilean children.
Fierro Monti, Claudia; Pérez Flores, M; Brunotto, M
2014-01-01
Early Childhood Caries (ECC) is the most prevalent chronic disease of childhood in both industrialized and developing countries, and it remains a public health problem, mainly affecting populations considered vulnerable, despite being preventable. The purpose of this study was to obtain a simple predictive model based on risk factors for improving public health strategies for ECC prevention in 3-5-year-old children. Clinical, environmental and psycho-socio-cultural data of children (n=250) aged 3-5 years, of both genders, from the Health Centers were recorded in a clinical history and behavioral survey. 24% of children presented behavioral problems (bizarre behavior was the most commonly observed). The variables associated with dmf ≥ 4 were difficult child temperament (OR=2.43 [1.34, 4.40]) and home stress (OR=3.14 [1.54, 6.41]). The model for males had higher accuracy for ECC (AUC=78%, p<0.001) than the others. Based on the results, we propose a model in which oral hygiene, sugar intake, male gender, and difficult temperament are the main factors for predicting ECC. This model could be a promising tool for cost-effective early childhood caries control.
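As a rough illustration of how reported odds ratios can enter a logistic risk score, here is a hypothetical sketch; the intercept and the additive structure are our assumptions, not the paper's fitted model:

```python
# Hypothetical logistic risk score built only from the two reported odds
# ratios (difficult temperament OR = 2.43, home stress OR = 3.14). The
# intercept and additive form are invented; the paper's fitted model is
# not reproduced here.
import math

def ecc_risk(difficult_temperament, home_stress, intercept=-1.5):
    logit = (intercept
             + math.log(2.43) * difficult_temperament
             + math.log(3.14) * home_stress)
    return 1.0 / (1.0 + math.exp(-logit))  # logistic link

low_risk = ecc_risk(0, 0)
high_risk = ecc_risk(1, 1)  # both risk factors present
```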
NASA Astrophysics Data System (ADS)
Javed, Hassan; Armstrong, Peter
2015-08-01
The efficiency bar for a Minimum Equipment Performance Standard (MEPS) generally aims to minimize energy consumption and life cycle cost of a given chiller type and size category serving a typical load profile. Compressor type has a significant chiller performance impact. Performance of screw and reciprocating compressors is expressed in terms of pressure ratio and speed for a given refrigerant and suction density. Isentropic efficiency for a screw compressor is strongly affected by under- and over-compression (UOC) processes. The theoretical simple physical UOC model involves a compressor-specific (but sometimes unknown) volume index parameter and the real gas properties of the refrigerant used. Isentropic efficiency is estimated by the UOC model and a bi-cubic, used to account for flow, friction and electrical losses. The unknown volume index, a smoothing parameter (to flatten the UOC model peak) and bi-cubic coefficients are identified by curve fitting to minimize an appropriate residual norm. Chiller performance maps are produced for each compressor type by selecting optimized sub-cooling and condenser fan speed options in a generic component-based chiller model. SEER is computed from hourly loads (for a typical building in the climate of interest) and the chiller's specific power under the same hourly conditions. An empirical UAE cooling load model, scalable to any equipment capacity, is used to establish proposed UAE MEPS. Annual electricity use and cost, determined from SEER and annual cooling load, and chiller component cost data are used to find optimal chiller designs and perform life-cycle cost comparison between screw and reciprocating compressor-based chillers. This process may be applied to any climate/load model in order to establish optimized MEPS for any country and/or region.
Improvements, testing and development of the ADM-τ sub-grid surface tension model for two-phase LES
NASA Astrophysics Data System (ADS)
Aniszewski, Wojciech
2016-12-01
In this paper, a specific subgrid term occurring in Large Eddy Simulation (LES) of two-phase flows is investigated. This and other subgrid terms are presented; we subsequently elaborate on the existing models for them and re-formulate the ADM-τ model for sub-grid surface tension previously published by these authors. This paper presents a substantial, conceptual simplification over the original model version, accompanied by a decrease in its computational cost. At the same time, it addresses the issues the original model version faced, e.g. it introduces non-isotropic applicability criteria based on the resolved interface's principal curvature radii. Additionally, this paper introduces more thorough testing of the ADM-τ model, in both simple and complex flows.
Model Checking Satellite Operational Procedures
NASA Astrophysics Data System (ADS)
Cavaliere, Federico; Mari, Federico; Melatti, Igor; Minei, Giovanni; Salvo, Ivano; Tronci, Enrico; Verzino, Giovanni; Yushtein, Yuri
2011-08-01
We present a model checking approach for the automatic verification of satellite operational procedures (OPs). Building a model for a complex system such as a satellite is a hard task. We overcome this obstacle by using a suitable simulator (SIMSAT) for the satellite. Our approach aims at improving OP quality assurance by automatic exhaustive exploration of all possible simulation scenarios. Moreover, our solution decreases OP verification costs by using a model checker (CMurphi) to automatically drive the simulator. We model OPs as user-executed programs observing the simulator telemetries and sending telecommands to the simulator. In order to assess the feasibility of our approach we present experimental results on a simple meaningful scenario. Our results show that we can save up to 90% of verification time.
75 FR 57719 - Federal Acquisition Regulation; TINA Interest Calculations
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-22
... the term ``simple interest'' as the requirement for calculating interest for TINA cost impacts with.... Revising the date of the clause; and b. Removing from paragraph (e)(1) ``Simple interest'' and adding...) ``Simple interest'' and adding ``Interest compounded daily, as required by 26 U.S.C. 6622,'' in its place...
Simple economic evaluation and applications experiments for photovoltaic systems for remote sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rios, M. Jr.
1980-01-01
A simple evaluation of the cost effectiveness of photovoltaic systems is presented. The evaluation is based on a comparison of breakeven costs of photovoltaic (PV) arrays with the levelized costs of two alternative energy sources: (1) extension of the utility grid and (2) diesel generators. A selected number of PV applications experiments that are in progress in remote areas of the US are summarized. These applications experiments range from a 23 watt insect survey trap to a 100 kW PV system for a national park complex. It is concluded that PV systems for remote areas are now cost effective in remote small applications with commercially available technology and will be cost competitive for intermediate scale systems (approx. 10 kW) in the 1980s if the DOE 1986 Commercial Readiness Goals are achieved.
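The breakeven logic can be sketched as a levelized-cost comparison; every number below (capital cost, O&M, annual output, capital recovery factor) is an illustrative assumption, not a figure from the report:

```python
# Levelized-cost breakeven sketch in the spirit of the evaluation. All
# numbers (capital, O&M, annual output, capital recovery factor) are
# illustrative placeholders, not figures from the report.

def levelized_cost(capital, annual_om, annual_kwh, crf=0.1):
    """Levelized energy cost ($/kWh) via a capital recovery factor."""
    return (capital * crf + annual_om) / annual_kwh

pv = levelized_cost(capital=12000, annual_om=100, annual_kwh=1500)
diesel = levelized_cost(capital=3000, annual_om=1400, annual_kwh=1500)
pv_breaks_even = pv < diesel  # PV wins when its levelized cost is lower
```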
Domestic refrigeration appliances in Poland: Potential for improving energy efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyers, S.; Schipper, L.; Lebot, B.
1993-08-01
This report is based on information collected from the main Polish manufacturer of refrigeration appliances. We describe their production facilities, and show that the energy consumption of their models for domestic sale is substantially higher than the average for similar models made in W. Europe. Lack of data and uncertainty about future production costs in Poland limit our evaluation of the cost-effective potential to increase energy efficiency, but it appears likely that considerable improvement would be economic from a societal perspective. Many design options are likely to have a simple payback of less than five years. We found that the production facilities are in need of substantial modernization in order to produce higher quality and more efficient appliances. We discuss policy options that could help to build a market for more efficient appliances in Poland and thereby encourage investment to produce such equipment.
Currie, Adrian; Sterelny, Kim
2017-04-01
We argue that narratives are central to the success of historical reconstruction. Narrative explanation involves tracing causal trajectories across time. The construction of narrative, then, often involves postulating relatively speculative causal connections between comparatively well-established events. But speculation is not always idle or harmful: it also aids in overcoming local underdetermination by forming scaffolds from which new evidence becomes relevant. Moreover, as our understanding of the past's causal milieus becomes richer, the constraints on narrative plausibility become increasingly strict: a narrative's admissibility does not turn on mere logical consistency with background data. Finally, narrative explanation and explanation generated by simple, formal models complement one another. Where models often achieve isolation and precision at the cost of simplification and abstraction, narratives can track complex changes in a trajectory over time at the cost of simplicity and precision. In combination both allow us to understand and explain highly complex historical sequences. Copyright © 2017 Elsevier Ltd. All rights reserved.
Data-driven modeling of solar-powered urban microgrids
Halu, Arda; Scala, Antonio; Khiyami, Abdulaziz; González, Marta C.
2016-01-01
Distributed generation takes center stage in today’s rapidly changing energy landscape. Particularly, locally matching demand and generation in the form of microgrids is becoming a promising alternative to the central distribution paradigm. Infrastructure networks have long been a major focus of complex networks research with their spatial considerations. We present a systemic study of solar-powered microgrids in the urban context, obeying real hourly consumption patterns and spatial constraints of the city. We propose a microgrid model and study its citywide implementation, identifying the self-sufficiency and temporal properties of microgrids. Using a simple optimization scheme, we find microgrid configurations that result in increased resilience under cost constraints. We characterize load-related failures solving power flows in the networks, and we show the robustness behavior of urban microgrids with respect to optimization using percolation methods. Our findings hint at the existence of an optimal balance between cost and robustness in urban microgrids. PMID:26824071
NASA Astrophysics Data System (ADS)
Morton de Lachapelle, David; Challet, Damien
2010-07-01
Despite the availability of very detailed data on financial markets, agent-based modeling is hindered by the lack of information about real trader behavior. This makes it impossible to validate agent-based models, which are thus reverse-engineering attempts. This work is a contribution towards building a set of stylized facts about the traders themselves. Using the client database of Swissquote Bank SA, the largest online Swiss broker, we find empirical relationships between turnover, account values and the number of assets in which a trader is invested. A theory based on simple mean-variance portfolio optimization that crucially includes variable transaction costs is able to reproduce faithfully the observed behaviors. We finally argue that our results bring to light the collective ability of a population to construct a mean-variance portfolio that takes into account the structure of transaction costs.
Maximum Power Point tracking charge controllers for telecom applications -- Analysis and economics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wills, R.H.
Simple charge controllers connect photovoltaic modules directly to the battery bank, resulting in a significant power loss if the battery bank voltage differs greatly from the PV Maximum Power Point (MPP) voltage. Recent modeling work at AES has shown that dc-dc converter type MPP tracking charge controllers can deliver more than 30% more energy from PV modules to the battery when the PV modules are cool and the battery state of charge is low--this is typically both the worst case condition (i.e., winter) and also the design condition that determines the PV array size. Economic modeling, based on typical telecom system installed costs, shows benefits of more than $3/Wp for MPPT over conventional charge controllers in this application--a value that greatly exceeds the additional cost of the dc-dc converter.
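The economic claim reduces to simple per-peak-watt arithmetic. In the sketch below, the >30% energy gain echoes the abstract, while the $10/Wp installed-value base is our placeholder, not AES data:

```python
# Illustrative arithmetic for the MPPT benefit: the fractional energy gain
# (>30% in the worst case, per the abstract) converted to value per peak
# watt. The $10/Wp installed-value base is our placeholder, not AES data.

def mppt_benefit_per_wp(energy_gain_frac, value_per_wp_base):
    """Extra value ($/Wp) attributable to MPP tracking."""
    return energy_gain_frac * value_per_wp_base

benefit = mppt_benefit_per_wp(0.30, 10.0)
```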
An Evaluation of the High Level Architecture (HLA) as a Framework for NASA Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reid, Michael R.; Powers, Edward I. (Technical Monitor)
2000-01-01
The High Level Architecture (HLA) is a current US Department of Defense and an industry (IEEE-1516) standard architecture for modeling and simulations. It provides a framework and set of functional rules and common interfaces for integrating separate and disparate simulators into a larger simulation. The goal of the HLA is to reduce software costs by facilitating the reuse of simulation components and by providing a runtime infrastructure to manage the simulations. In order to evaluate the applicability of the HLA as a technology for NASA space mission simulations, a Simulations Group at Goddard Space Flight Center (GSFC) conducted a study of the HLA and developed a simple prototype HLA-compliant space mission simulator. This paper summarizes the prototyping effort and discusses the potential usefulness of the HLA in the design and planning of future NASA space missions with a focus on risk mitigation and cost reduction.
FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooker, A.; Gonder, J.; Wang, L.
2015-05-04
The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over batches of real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).
A simplified life-cycle cost comparison of various engines for small helicopter use
NASA Technical Reports Server (NTRS)
Civinskas, K. C.; Fishbach, L. M.
1974-01-01
A ten-year, life-cycle cost comparison is made of the following engines for small helicopter use: (1) simple turboshaft; (2) regenerative turboshaft; (3) compression-ignition reciprocator; (4) spark-ignited rotary; and (5) spark-ignited reciprocator. Based on a simplified analysis and somewhat approximate data, the simple turboshaft engine apparently has the lowest costs for mission times up to just under 2 hours. At 2 hours and above, the regenerative turboshaft appears promising. The reciprocating and rotary engines are less attractive, requiring from 10 percent to 80 percent more aircraft to have the same total payload capability as a given number of turbine powered craft. A nomogram was developed for estimating total costs of engines not covered in this study.
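A ten-year life-cycle comparison of this kind can be sketched as acquisition cost plus operating cost over the mission hours; all inputs below are hypothetical, not the study's engine data:

```python
# Minimal ten-year life-cycle-cost sketch: acquisition plus operating cost
# per flight hour. All inputs are hypothetical, not the study's engine data.

def life_cycle_cost(acquisition, fuel_per_hr, maint_per_hr,
                    hours_per_yr, years=10):
    return acquisition + (fuel_per_hr + maint_per_hr) * hours_per_yr * years

simple_turboshaft = life_cycle_cost(60000, 40, 25, hours_per_yr=500)
regenerative = life_cycle_cost(80000, 30, 27, hours_per_yr=500)
# At high utilization the regenerative engine's lower fuel burn can offset
# its higher acquisition cost, echoing the crossover the study reports.
```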
Cost-effectiveness analysis of interventions for migraine in four low- and middle-income countries.
Linde, Mattias; Steiner, Timothy J; Chisholm, Dan
2015-02-18
Evidence of the cost and effects of interventions for reducing the global burden of migraine remains scarce. Our objective was to estimate the population-level cost-effectiveness of evidence-based migraine interventions and their contributions towards reducing current burden in low- and middle-income countries. Using a standard WHO approach to cost-effectiveness analysis (CHOICE), we modelled core set intervention strategies for migraine, taking account of coverage and efficacy as well as non-adherence. The setting was primary health care including pharmacies. We modelled 26 intervention strategies implemented during 10 years. These included first-line acute and prophylactic drugs, and the expected consequences of adding consumer-education and provider-training. Total population-level costs and effectiveness (healthy life years [HLY] gained) were combined to form average and incremental cost-effectiveness ratios. We executed runs of the model for the general populations of China, India, Russia and Zambia. Of the strategies considered, acute treatment of attacks with acetylsalicylic acid (ASA) was by far the most cost-effective and generated a HLY for less than US$ 100. Adding educational actions increased annual costs by 1-2 US cents per capita of the population. Cost-effectiveness ratios then became slightly less favourable but still less than US$ 100 per HLY gained for ASA. An incremental cost of > US$ 10,000 would have to be paid per extra HLY by adding a triptan in a stepped-care treatment paradigm. For prophylaxis, amitriptyline was more cost-effective than propranolol or topiramate. Self-management with simple analgesics was by far the most cost-effective strategy for migraine treatment in low- and middle-income countries and represents a highly efficient use of health resources. 
Consumer education and provider training are expected to accelerate progress towards desired levels of coverage and adherence, cost relatively little to implement, and can therefore be considered also economically attractive. Evidence-based interventions for migraine should have as much a claim on scarce health resources as those for other chronic, non-communicable conditions that impose a significant burden on societies.
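The cost-effectiveness ratios quoted above follow standard definitions; a minimal sketch with invented totals follows (only the rough US$100/HLY and US$10,000/HLY orders of magnitude echo the abstract):

```python
# Standard average and incremental cost-effectiveness ratios (CER, ICER).
# Totals are invented; only the rough US$100/HLY and US$10,000/HLY orders
# of magnitude echo the abstract.

def cer(total_cost, hly_gained):
    """Average cost per healthy life year (HLY) gained."""
    return total_cost / hly_gained

def icer(cost_b, hly_b, cost_a, hly_a):
    """Incremental cost per extra HLY of strategy B over strategy A."""
    return (cost_b - cost_a) / (hly_b - hly_a)

asa_cer = cer(total_cost=9e6, hly_gained=100_000)
triptan_icer = icer(2.0e7, 100_500, 9e6, 100_000)
```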
2014-01-01
Background Cost-effectiveness analyses (CEAs) that use patient-specific data from a randomized controlled trial (RCT) are popular, yet such CEAs are criticized because they neglect to incorporate evidence external to the trial. A popular method for quantifying uncertainty in a RCT-based CEA is the bootstrap. The objective of the present study was to further expand the bootstrap method of RCT-based CEA for the incorporation of external evidence. Methods We utilize the Bayesian interpretation of the bootstrap and derive the distribution for the cost and effectiveness outcomes after observing the current RCT data and the external evidence. We propose simple modifications of the bootstrap for sampling from such posterior distributions. Results In a proof-of-concept case study, we use data from a clinical trial and incorporate external evidence on the effect size of treatments to illustrate the method in action. Compared to the parametric models of evidence synthesis, the proposed approach requires fewer distributional assumptions, does not require explicit modeling of the relation between external evidence and outcomes of interest, and is generally easier to implement. A drawback of this approach is potential computational inefficiency compared to the parametric Bayesian methods. Conclusions The bootstrap method of RCT-based CEA can be extended to incorporate external evidence, while preserving its appealing features such as no requirement for parametric modeling of cost and effectiveness outcomes. PMID:24888356
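The Bayesian-bootstrap idea can be illustrated with Dirichlet-weighted resampling of patient-level outcomes. This is a generic sketch under our own assumptions, not the authors' exact algorithm (which further adjusts the resampling to reflect external evidence):

```python
# Generic Bayesian-bootstrap sketch: Dirichlet(1,...,1)-weighted resampling
# of patient-level outcomes. This illustrates the machinery only; the
# authors' method additionally tilts the weights toward external evidence.
import random

def bayesian_bootstrap_means(values, n_rep=2000, seed=0):
    rng = random.Random(seed)
    n = len(values)
    means = []
    for _ in range(n_rep):
        # Dirichlet(1,...,1) weights via normalized unit exponentials
        w = [rng.expovariate(1.0) for _ in range(n)]
        s = sum(w)
        means.append(sum(wi / s * v for wi, v in zip(w, values)))
    return means

costs = [1200, 900, 1500, 1100, 1300]  # fabricated per-patient costs
posterior_mean = sum(bayesian_bootstrap_means(costs)) / 2000
```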
Optimal assessment of multiple cues.
Fawcett, Tim W; Johnstone, Rufus A
2003-01-01
In a wide range of contexts from mate choice to foraging, animals are required to discriminate between alternative options on the basis of multiple cues. How should they best assess such complex multicomponent stimuli? Here, we construct a model to investigate this problem, focusing on a simple case where a 'chooser' faces a discrimination task involving two cues. These cues vary in their accuracy and in how costly they are to assess. As an example, we consider a mate-choice situation where females choose between males of differing quality. Our model predicts the following: (i) females should become less choosy as the cost of finding new males increases; (ii) females should prioritize cues differently depending on how choosy they are; (iii) females may sometimes prioritize less accurate cues; and (iv) which cues are most important depends on the abundance of desirable mates. These predictions are testable in mate-choice experiments where the costs of choice can be manipulated. Our findings are applicable to other discrimination tasks besides mate choice, for example a predator's choice between palatable and unpalatable prey, or an altruist's choice between kin and non-kin. PMID:12908986
Espinosa, Gabriela; Annapragada, Ananth
2013-10-01
We evaluated three diagnostic strategies with the objective of comparing the current standard of care for individuals presenting with acute chest pain and no history of coronary artery disease (CAD) with a novel diagnostic strategy using an emerging technology (blood-pool contrast agent [BPCA]) to identify the potential benefits and cost reductions. A decision analytic model of diagnostic strategies and outcomes using a BPCA and a conventional agent for CT angiography (CTA) in patients with acute chest pain was built. The model was used to evaluate three diagnostic strategies: CTA using a BPCA followed by invasive coronary angiography (ICA), CTA using a conventional agent followed by ICA, and ICA alone. The use of the two CTA-based triage tests before ICA in a population with a CAD prevalence of less than 47% was predicted to be more cost-effective than ICA alone. Using the base-case values and a cost premium for BPCA over the conventional CT agent (cost of BPCA ≈ 5× that of a conventional agent) showed that CTA with a BPCA before ICA was the most cost-effective strategy; the other strategies were ruled out by simple dominance. The model depends strongly on the rates of complications from the diagnostic tests included in the model. In a population with an elevated risk of contrast-induced nephropathy (CIN), even a significant premium cost per BPCA dose still left CTA using a BPCA more cost-effective than CTA using a conventional agent. A similar effect was observed for potential complications resulting from the BPCA injection. Conversely, given a comparable complication rate from the BPCA, CTA using a conventional agent would be the optimal strategy. BPCAs could have a significant impact on the diagnosis of acute chest pain, in particular for populations with high incidences of CIN.
In addition, a BPCA strategy could garner further savings if currently excluded phenomena including renal disease and incidental findings were included in the decision model.
New simple and low-cost methods for periodic checks of Cyclone® Plus Storage Phosphor System.
Edalucci, Elisabetta; Maffione, Anna Margherita; Fornasier, Maria Rosa; De Denaro, Mario; Scian, Giovanni; Dore, Franca; Rubello, Domenico
2017-01-01
The recent widespread use of the Cyclone® Plus Storage Phosphor System, especially in European countries, as an imaging system for quantification of the radiochemical purity of radiopharmaceuticals has raised the problem of establishing the periodic controls required by European legislation. We describe simple, low-cost methods for Cyclone® Plus quality controls, which can be useful for evaluating the performance of this imaging system.
More memory under evolutionary learning may lead to chaos
NASA Astrophysics Data System (ADS)
Diks, Cees; Hommes, Cars; Zeppini, Paolo
2013-02-01
We show that an increase of memory of past strategy performance in a simple agent-based innovation model, with agents switching between costly innovation and cheap imitation, can be quantitatively stabilising while at the same time qualitatively destabilising. As memory in the fitness measure increases, the amplitude of price fluctuations decreases, but at the same time a bifurcation route to chaos may arise. The core mechanism leading to the chaotic behaviour in this model with strategy switching is that the map obtained for the system with memory is a convex combination of an increasing linear function and a decreasing non-linear function.
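The core mechanism, a map that is a convex combination of an increasing linear function and a decreasing non-linear function, can be illustrated with toy functional forms of our own choosing (not the paper's calibrated model):

```python
# Toy version of the mechanism: iterate a map that is a convex combination
# (weight w) of an increasing linear part and a decreasing non-linear part.
# The functional forms are our choice for illustration, not the paper's.

def next_state(x, w):
    increasing_linear = 0.5 * x
    decreasing_nonlinear = 1.0 / (1.0 + x)
    return w * increasing_linear + (1 - w) * decreasing_nonlinear

x = 0.5
for _ in range(100):
    x = next_state(x, w=0.3)  # settles near a fixed point for this w
```

Varying `w` changes both the amplitude of fluctuations and the stability of the fixed point, which is the qualitative trade-off the paper analyzes.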
Environmental Impacts of Stover Removal in the Corn Belt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alicia English; Wallace E. Tyner; Juan Sesmero
2012-08-01
When considering the market for biomass from corn stover resources, erosion and soil quality issues are important. Removal of stover can be beneficial in some areas, especially when coordinated with other conservation practices, such as vegetative barrier strips and cover crops. However, benefits are highly dependent on several factors, namely whether farmers see costs and benefits associated with erosion and the tradeoffs with the removal of biomass. This paper uses results from an integrated RUSLE2/WEPS model to incorporate six different regime choices, covering management, harvest and conservation, into a simple profit maximization model to show these tradeoffs.
Tyson, Adam L.; Hilton, Stephen T.; Andreae, Laura C.
2015-01-01
The cost of 3D printing has reduced dramatically over the last few years and is now within reach of many scientific laboratories. This work presents an example of how 3D printing can be applied to the development of custom laboratory equipment that is specifically adapted for use with the novel brain tissue clearing technique, CLARITY. A simple, freely available online software tool was used, along with consumer-grade equipment, to produce a brain slicing chamber and a combined antibody staining and imaging chamber. Using standard 3D printers we were able to produce research-grade parts in an iterative manner at a fraction of the cost of commercial equipment. 3D printing provides a reproducible, flexible, simple and cost-effective method for researchers to produce the equipment needed to quickly adopt new methods. PMID:25797056
A simple and low-cost platform technology for producing pexiganan antimicrobial peptide in E. coli.
Zhao, Chun-Xia; Dwyer, Mirjana Dimitrijev; Yu, Alice Lei; Wu, Yang; Fang, Sheng; Middelberg, Anton P J
2015-05-01
Antimicrobial peptides, as a new class of antibiotics, have generated tremendous interest as potential alternatives to classical antibiotics. However, the large-scale production of antimicrobial peptides remains a significant challenge. This paper reports a simple and low-cost chromatography-free platform technology for producing antimicrobial peptides in Escherichia coli (E. coli). A fusion protein comprising a variant of the helical biosurfactant protein DAMP4 and the known antimicrobial peptide pexiganan is designed by joining the two polypeptides, at the DNA level, via an acid-sensitive cleavage site. The resulting DAMP4(var)-pexiganan fusion protein expresses at high level and solubility in recombinant E. coli, and a simple heat-purification method was applied to disrupt cells and deliver high-purity DAMP4(var)-pexiganan protein. Simple acid cleavage successfully separated the DAMP4 variant protein and the antimicrobial peptide. Antimicrobial activity tests confirmed that the bio-produced antimicrobial peptide has the same antimicrobial activity as the equivalent product made by conventional chemical peptide synthesis. This simple and low-cost platform technology can be easily adapted to produce other valuable peptide products, and opens a new manufacturing approach for producing antimicrobial peptides at large scale using the tools and approaches of biochemical engineering. © 2014 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tucker, Michael C.; Phillips, Adam; Weber, Adam Z.
We proposed and developed an all-iron redox flow battery for end users without access to an electricity grid. The concept is a low-cost battery that the user assembles and discharges, then disposes of the active materials. Our design goals are: (1) minimize upfront cost, (2) maximize discharge energy, and (3) utilize non-toxic and environmentally benign materials. These are different goals than typically considered for electrochemical battery technology, which provides the opportunity for a novel solution. The selected materials are: low-carbon-steel negative electrode, paper separator, porous-carbon-paper positive electrode, and electrolyte solution containing 0.5 m Fe₂(SO₄)₃ active material and 1.2 m NaCl supporting electrolyte. With these materials, an average power density around 20 mW cm⁻² and a maximum energy density of 11.5 Wh L⁻¹ are achieved. A simple cost model indicates the consumable materials cost US$6.45 per kWh, or only US$0.034 per mobile phone charge.
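The "simple cost model" mentioned amounts to dividing consumables cost by discharge energy. A hedged sketch follows; the 12 Wh phone-battery capacity and the input dollar figures are our assumptions, not the paper's bill of materials:

```python
# Back-of-envelope version of the "simple cost model": consumable materials
# cost per kWh of discharge energy, and per phone charge. The 12 Wh phone
# battery and the dollar inputs below are assumptions, not the paper's
# bill of materials.

def cost_per_kwh(materials_cost_usd, discharge_energy_wh):
    return materials_cost_usd / (discharge_energy_wh / 1000.0)

def cost_per_phone_charge(usd_per_kwh, phone_battery_wh=12.0):
    return usd_per_kwh * phone_battery_wh / 1000.0

per_kwh = cost_per_kwh(6.45, discharge_energy_wh=1000.0)
```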
Whatever the cost? Information integration in memory-based inferences depends on cognitive effort.
Hilbig, Benjamin E; Michalkiewicz, Martha; Castela, Marta; Pohl, Rüdiger F; Erdfelder, Edgar
2015-05-01
One of the most prominent models of probabilistic inferences from memory is the simple recognition heuristic (RH). The RH theory assumes that judgments are based on recognition in isolation, such that other information is ignored. However, some prior research has shown that available knowledge is not generally ignored. In line with the notion of adaptive strategy selection--and, thus, a trade-off between accuracy and effort--we hypothesized that information integration crucially depends on how easily accessible information beyond recognition is, how much confidence decision makers have in this information, and how (cognitively) costly it is to acquire. In three experiments, we thus manipulated (a) the availability of information beyond recognition, (b) the subjective usefulness of this information, and (c) the cognitive costs associated with acquiring this information. In line with the predictions, we found that RH use decreased substantially the more easily and confidently information beyond recognition could be integrated, and increased substantially with increasing cognitive costs.
Predicting all-cause risk of 30-day hospital readmission using artificial neural networks.
Jamei, Mehdi; Nisnevich, Aleksandr; Wetchler, Everett; Sudat, Sylvia; Liu, Eric
2017-01-01
Avoidable hospital readmissions not only contribute to the high costs of healthcare in the US, but also have an impact on the quality of care for patients. Large scale adoption of Electronic Health Records (EHR) has created the opportunity to proactively identify patients with high risk of hospital readmission, and apply effective interventions to mitigate that risk. To that end, in the past, numerous machine-learning models have been employed to predict the risk of 30-day hospital readmission. However, the need for an accurate and real-time predictive model, suitable for hospital setting applications still exists. Here, using data from more than 300,000 hospital stays in California from Sutter Health's EHR system, we built and tested an artificial neural network (NN) model based on Google's TensorFlow library. Through comparison with other traditional and non-traditional models, we demonstrated that neural networks are great candidates to capture the complexity and interdependency of various data fields in EHRs. LACE, the current industry standard, showed a precision (PPV) of 0.20 in identifying high-risk patients in our database. In contrast, our NN model yielded a PPV of 0.24, which is a 20% improvement over LACE. Additionally, we discussed the predictive power of Social Determinants of Health (SDoH) data, and presented a simple cost analysis to assist hospitalists in implementing helpful and cost-effective post-discharge interventions.
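The quoted 20% figure is the relative change in precision between the two models. A minimal sketch, with hypothetical confusion-matrix counts chosen only to reproduce the quoted PPVs:

```python
# Positive predictive value (precision) and the relative improvement the
# abstract reports. The counts are hypothetical, chosen to match the PPVs.
def ppv(true_pos, false_pos):
    return true_pos / (true_pos + false_pos)

lace_ppv = ppv(200, 800)                        # 0.20, the LACE baseline
nn_ppv = ppv(240, 760)                          # 0.24, the NN model
relative_gain = (nn_ppv - lace_ppv) / lace_ppv  # 0.20, i.e. a 20% improvement
```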
Third-party punishment as a costly signal of high continuation probabilities in repeated games.
Jordan, Jillian J; Rand, David G
2017-05-21
Why do individuals pay costs to punish selfish behavior, even as third-party observers? A large body of research suggests that reputation plays an important role in motivating such third-party punishment (TPP). Here we focus on a recently proposed reputation-based account (Jordan et al., 2016) that invokes costly signaling. This account proposed that "trustworthy type" individuals (who are incentivized to cooperate with others) typically experience lower costs of TPP, and thus that TPP can function as a costly signal of trustworthiness. Specifically, it was argued that some but not all individuals face incentives to cooperate, making them high-quality and trustworthy interaction partners; and, because the same mechanisms that incentivize cooperation also create benefits for using TPP to deter selfish behavior, these individuals are likely to experience reduced costs of punishing selfishness. Here, we extend this conceptual framework by providing a concrete, "from-the-ground-up" model demonstrating how this process could work in the context of repeated interactions incentivizing both cooperation and punishment. We show how individual differences in the probability of future interaction can create types that vary in whether they find cooperation payoff-maximizing (and thus make high-quality partners), as well as in their net costs of TPP - because a higher continuation probability increases the likelihood of receiving rewards from the victim of the punished transgression (thus offsetting the cost of punishing). We also provide a simple model of dispersal that demonstrates how types that vary in their continuation probabilities can stably coexist, because the payoff from remaining in one's local environment (i.e. not dispersing) decreases with the number of others who stay. Together, this model demonstrates, from the ground up, how TPP can serve as a costly signal of trustworthiness arising from exposure to repeated interactions. Copyright © 2017 Elsevier Ltd. 
All rights reserved.
3D Printed Surgical Simulation Models as educational tool by maxillofacial surgeons.
Werz, S M; Zeichner, S J; Berg, B-I; Zeilhofer, H-F; Thieringer, F
2018-02-26
The aim of this study was to evaluate whether inexpensive 3D models can be suitable to train surgical skills to dental students or oral and maxillofacial surgery residents. Furthermore, we wanted to know which of the most common filament materials, acrylonitrile butadiene styrene (ABS) or polylactic acid (PLA), can better simulate human bone according to surgeons' subjective perceptions. Upper and lower jaw models were produced with common 3D desktop printers, ABS and PLA filament, and silicone rubber for soft tissue simulation. Those models were given to 10 blinded, experienced maxillofacial surgeons to perform sinus lift and wisdom teeth extraction. Evaluation was made using a questionnaire. Because of slightly different density and filament prices, each silicone-covered model cost between 1.40-1.60 USD (ABS) and 1.80-2.00 USD (PLA) based on 2017 material costs. Ten experienced raters took part in the study. All raters deemed the models suitable for surgical education. No significant differences between ABS and PLA were found, with both having distinct advantages. The study demonstrated that 3D printing with inexpensive printing filaments is a promising method for training oral and maxillofacial surgery residents or dental students in selected surgical procedures. With a simple and cost-efficient manufacturing process, models of actual patient cases can be produced on a small scale, simulating many kinds of surgical procedures. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Mami, Fares
The aeronautical sector, responsible for about 3% of world greenhouse gas emissions, projects 70% growth in its emissions by 2025 and 300% to 500% by 2050 compared with 2005 levels. Decision-makers must therefore be supported in their design choices so that the environmental aspect is integrated into decision-making. Our industrial partner in the aeronautical sector has developed expertise in Life Cycle Assessment (LCA) and seeks to integrate costs and environmental impacts systematically into the ecodesign of products. Based on the literature review and the objectives of this research, we propose an eco-efficiency model that integrates LCA with Life Cycle Costing (LCC). This model is consistent with defined cost-reduction and environmental-impact-reduction targets and allows a simple interpretation of the results while minimizing data-collection effort. The model is applied to 3D printing as an alternative production process in the manufacturing of an aircraft blocker door. 3D printing is a new production technology that works by adding material and presents interesting opportunities for reducing costs and environmental impacts, particularly in the aeronautical domain. The results showed that 3D printing, when combined with improvement of the part's topology, improves both the costs and the environmental impacts of the part's life cycle. Nevertheless, the results are sensitive to the productivity of the 3D printing machine, particularly for costs when that productivity is reduced. This eco-efficiency model presents several opportunities for improvement. A more elaborate definition of the environmental-impact-reduction objectives would make it possible to steer design choices toward eco-efficiency considerations at a macro level. Moreover, integrating the social dimension into the model is an important step toward operationalizing the company's environmental and social responsibility commitments.
Invariance and optimality in the regulation of an enzyme
2013-01-01
Background: The Michaelis-Menten equation, proposed a century ago, describes the kinetics of enzyme-catalyzed biochemical reactions. Since then, this equation has been used in countless, increasingly complex models of cellular metabolism, often including time-dependent enzyme levels. However, even for a single reaction, there remains a fundamental disconnect between our understanding of the reaction kinetics and the regulation of that reaction through changes in the abundance of active enzyme. Results: We revisit the Michaelis-Menten equation under the assumption of a time-dependent enzyme concentration. We show that all temporal enzyme profiles with the same average enzyme level yield identical substrate degradation: a simple analytical conclusion that can be thought of as an invariance principle, and which we validate experimentally using a β-galactosidase assay. The ensemble of all time-dependent enzyme trajectories with the same average concentration constitutes a space of functions. We develop a simple model of biological fitness which assigns a cost to each of these trajectories (in the form of a function of functions, i.e. a functional). We then show how one can use variational calculus to analytically infer temporal enzyme profiles that minimize the overall enzyme cost. In particular, by separately treating the static costs of amino acid sequestration and the dynamic costs of protein production, we identify a fundamental cellular tradeoff. Conclusions: The overall metabolic outcome of a reaction described by Michaelis-Menten kinetics is ultimately determined by the average concentration of the enzyme during a given time interval. This invariance, in analogy to path-independent phenomena in physics, suggests a new way in which variational calculus can be employed to address biological questions. Together, our results point to possible avenues for a unified approach to studying metabolism and its regulation. 
Reviewers: This article was reviewed by Sergei Maslov, William Hlavacek, and Daniel Kahn. PMID:23522082
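The invariance claim is easy to check numerically: with dS/dt = -kcat E(t) S/(Km + S), separating variables gives Km ln(S0/S) + (S0 - S) = kcat ∫E(t) dt, so the final substrate level depends only on the time-average of E. A sketch with illustrative parameters (not the paper's assay values):

```python
import math

# Numerical check of the invariance principle: under Michaelis-Menten kinetics
# dS/dt = -kcat * E(t) * S / (Km + S), substrate consumed over [0, T] depends
# only on the time-average of E(t). All parameter values are illustrative.
def integrate(enzyme, S0=10.0, kcat=1.0, Km=1.0, T=5.0, dt=1e-4):
    S, t = S0, 0.0
    for _ in range(int(T / dt)):        # forward Euler, small step
        S += -kcat * enzyme(t) * S / (Km + S) * dt
        t += dt
    return S

flat = lambda t: 1.0                                            # constant, mean 1
wavy = lambda t: 1.0 + 0.5 * math.sin(2.0 * math.pi * t / 5.0)  # same mean on [0, 5]

S_flat = integrate(flat)
S_wavy = integrate(wavy)
# both runs end at the same substrate level, up to integration error
```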
Computational algorithm to evaluate product disassembly cost index
NASA Astrophysics Data System (ADS)
Zeid, Ibrahim; Gupta, Surendra M.
2002-02-01
Environmentally conscious manufacturing is an important paradigm in today's engineering practice. Disassembly is a crucial factor in implementing this paradigm. Disassembly allows the reuse and recycling of parts and products that have reached the end of their life cycle. There are many questions that must be answered before a disassembly decision can be reached. The most important question is economic: the cost of disassembly versus the cost of scrapping a product must always be considered. This paper develops a computational tool that allows decision-makers to calculate the disassembly cost of a product. The tool makes it simple to perform 'what if' scenarios fairly quickly. The tool is Web based and has two main parts. The front-end part is a Web page and runs on the client side in a Web browser, while the back-end part is a disassembly engine (servlet) that has disassembly knowledge and costing algorithms and runs on the server side. The tool is based on the client/server model that is pervasively utilized throughout the World Wide Web. An example is used to demonstrate the implementation and capabilities of the tool.
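A sketch of the kind of disassembly-versus-scrap comparison such a costing engine performs; the per-part disassembly times, recovered values, and labor rate below are hypothetical, not from the paper's example:

```python
# Disassemble-or-scrap decision: recovered part value minus labor cost,
# compared against the product's scrap value. All figures are hypothetical.
def disassembly_net_value(parts, labor_rate_per_min):
    """Sum of recovered part values minus disassembly labor cost."""
    total = 0.0
    for minutes, recovered_value in parts:
        total += recovered_value - minutes * labor_rate_per_min
    return total

parts = [(2.0, 1.50), (5.0, 4.00), (1.0, 0.20)]  # (minutes, resale/recycle USD)
net = disassembly_net_value(parts, labor_rate_per_min=0.50)
scrap_value = 1.0
disassemble = net > scrap_value                  # the 'what if' comparison
```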
van Rosmalen, Joost; Toy, Mehlika; O'Mahony, James F
2013-08-01
Markov models are a simple and powerful tool for analyzing the health and economic effects of health care interventions. These models are usually evaluated in discrete time using cohort analysis. The use of discrete time assumes that changes in health states occur only at the end of a cycle period. Discrete-time Markov models only approximate the process of disease progression, as clinical events typically occur in continuous time. The approximation can yield biased cost-effectiveness estimates for Markov models with long cycle periods and if no half-cycle correction is made. The purpose of this article is to present an overview of methods for evaluating Markov models in continuous time. These methods use mathematical results from stochastic process theory and control theory. The methods are illustrated using an applied example on the cost-effectiveness of antiviral therapy for chronic hepatitis B. The main result is a mathematical solution for the expected time spent in each state in a continuous-time Markov model. It is shown how this solution can account for age-dependent transition rates and discounting of costs and health effects, and how the concept of tunnel states can be used to account for transition rates that depend on the time spent in a state. The applied example shows that the continuous-time model yields more accurate results than the discrete-time model but does not require much computation time and is easily implemented. In conclusion, continuous-time Markov models are a feasible alternative to cohort analysis and can offer several theoretical and practical advantages.
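For intuition, the simplest continuous-time case has a closed-form occupancy: with a single constant transition rate q out of the initial state, the expected time spent there over [0, T] is the integral of e^(-qt), i.e. (1 - e^(-qT))/q. A toy two-state sketch (not the article's hepatitis B model):

```python
import math

# Toy continuous-time Markov chain: constant rate q from state 0 to an
# absorbing state 1. Expected time in state 0 over [0, T] is the integral
# of exp(-q*t) from 0 to T, which equals (1 - exp(-q*T)) / q.
def occupancy_two_state(q, T):
    t0 = (1.0 - math.exp(-q * T)) / q
    return t0, T - t0          # time in state 0, time in absorbing state 1

t_healthy, t_sick = occupancy_two_state(q=0.1, T=10.0)
```

Note the contrast with a discrete-time cohort model, where the same quantity is approximated by summing state membership at cycle boundaries and typically needs a half-cycle correction.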
Determining the full costs of medical education in Thai Binh, Vietnam: a generalizable model.
Bicknell, W J; Beggs, A C; Tham, P V
2001-12-01
We summarize a model for determining the full cost of educating a medical student at Thai Binh Medical School in Vietnam. This is the first full-cost analysis of medical education in a low-income country in over 20 years. We emphasize policy implications and the importance of looking at the educational costs and service roles of the major health professions. In Vietnam fully subsidized medical education has given way to a system combining student-paid tuition and fees with decreased government subsidies. Full cost information facilitates resource management, setting tuition charges at a school and adjusting budget allocations between medical schools, teaching hospitals, and health centres. When linked to quality indicators, trends within and useful comparisons between schools are possible. Cost comparisons between different types of providers can assist policy-makers in judging the appropriateness of expenditures per graduate for nursing and allied health education versus physician education. If privatization of medical education is considered, cost analysis allows policy-makers to know the full costs of educating physicians including the subsidies required in clinical settings. Our approach is intuitively simple and provides useful, understandable new information to managers and policy-makers. The full cost per medical graduate in 1997 was 111 462 989 Vietnamese Dong (US$9527). The relative expenditure per Vietnamese physician educated was 2.8 times the expenditure in the United States when adjusted for GNP per capita. Preliminary findings suggest that, within Vietnam, the cost to educate a physician is 14 times the cost of educating a nurse. Given the direct costs of physician education, the lifetime earnings of physicians and the costs that physicians generate for the use of health services and supplies, it is remarkable that so little attention is paid to the costs of educating physicians. 
Studies of this type can provide the quantitative basis for vital human resource and health services policy considerations.
Nishikiori, Nobuyuki; Van Weezenbeek, Catharina
2013-02-02
Despite the progress made in the past decade, tuberculosis (TB) control still faces significant challenges. In many countries with declining TB incidence, the disease tends to concentrate in vulnerable populations that often have limited access to health care. In light of the limitations of the current case-finding approach and the global urgency to improve case detection, active case-finding (ACF) has been suggested as an important complementary strategy to accelerate tuberculosis control especially among high-risk populations. The present exercise aims to develop a model that can be used for country-level project planning. A simple deterministic model was developed to calculate the number of estimated TB cases diagnosed and the associated costs of diagnosis. The model was designed to compare cost-effectiveness parameters, such as the cost per case detected, for different diagnostic algorithms when they are applied to different risk populations. The model was transformed into a web-based tool that can support national TB programmes and civil society partners in designing ACF activities. According to the model output, tuberculosis active case-finding can be a costly endeavor, depending on the target population and the diagnostic strategy. The analysis suggests the following: (1) Active case-finding activities are cost-effective only if the tuberculosis prevalence among the target population is high. (2) Extensive diagnostic methods (e.g. X-ray screening for the entire group, use of sputum culture or molecular diagnostics) can be applied only to very high-risk groups such as TB contacts, prisoners or people living with human immunodeficiency virus (HIV) infection. (3) Basic diagnostic approaches such as TB symptom screening are always applicable although the diagnostic yield is very limited. The cost-effectiveness parameter was sensitive to local diagnostic costs and the tuberculosis prevalence of target populations. 
The prioritization of appropriate target populations and careful selection of cost-effective diagnostic strategies are critical prerequisites for rational active case-finding activities. A decision to conduct such activities should be based on the setting-specific cost-effectiveness analysis and programmatic assessment. A web-based tool was developed and is available to support national tuberculosis programmes and partners in the formulation of cost-effective active case-finding activities at the national and subnational levels.
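A minimal version of the deterministic screening model described above: cost per case detected is the cost per person screened divided by (prevalence × algorithm sensitivity). The prevalence, sensitivity, and unit-cost values below are assumed for illustration, not the tool's defaults.

```python
# Deterministic ACF model: screening N people costs N * c and detects
# N * prevalence * sensitivity cases, so cost per case = c / (prev * sens).
# All input figures are assumptions for illustration.
def cost_per_case_detected(prevalence, sensitivity, cost_per_person_usd):
    return cost_per_person_usd / (prevalence * sensitivity)

# symptom screening in a high-prevalence group vs. X-ray in a low-prevalence one
high_risk = cost_per_case_detected(0.05, 0.7, 2.0)   # roughly 57 USD per case
general = cost_per_case_detected(0.001, 0.9, 10.0)   # roughly 11,000 USD per case
```

The two scenarios reproduce the model's core finding: the same algorithm can be reasonable in a high-prevalence group and prohibitively expensive in the general population.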
Why fruit rots: theoretical support for Janzen's theory of microbe-macrobe competition.
Ruxton, Graeme D; Wilkinson, David M; Schaefer, H Martin; Sherratt, Thomas N
2014-05-07
We present a formal model of Janzen's influential theory that competition for resources between microbes and vertebrates causes microbes to be selected to make these resources unpalatable to vertebrates. That is, fruit rots, seeds mould and meat spoils, in part, because microbes gain a selective advantage if they can alter the properties of these resources to avoid losing the resources to vertebrate consumers. A previous model had failed to find circumstances in which such a costly spoilage trait could flourish; here, we present a simple analytic model of a general situation where costly microbial spoilage is selected and persists. We argue that the key difference between the two models lies in their treatments of microbial dispersal. If microbial dispersal is sufficiently spatially constrained that different resource items can have differing microbial communities, then spoilage will be selected; however, if microbial dispersal has a strong homogenizing effect on the microbial community then spoilage will not be selected. We suspect that both regimes will exist in the natural world, and suggest how future empirical studies could explore the influence of microbial dispersal on spoilage.
McGrath, Brian; Buckius, Michelle T; Grim, Rod; Bell, Theodore; Ahuja, Vanita
2011-12-01
Laparoscopic appendectomy (LA) has become more acceptable for the treatment of appendicitis over the last decade; however, its cost benefit compared to open appendectomy (OA) remains under debate. The purpose of this study is to evaluate the utilization of LA and its cost effectiveness based on total hospital charges stratified by complexity of disease and complications compared to OA. Nationwide Inpatient Sample data from 1998 to 2008 with the principal diagnosis of appendicitis were included. Appendicitis cases were divided by simple and complex (peritonitis or abscess) and subdivided by OA, LA, and lap converted to open (CONV). Total charges (2008 value), length of stay (LOS), and complications were assessed by disease presentation and operative approach. Between 1998 and 2008, 1,561,518 (54.3%) OA, 1,231,643 (42.8%) LA, and 84,662 (2.9%) CONV appendectomies were performed. LA had shorter LOS (2 d) than OA (3 d) and CONV (5 d) (P<0.001). CONV (7.4%) cases had more complications than OA (3.7%) and LA (2.6%). LA ($19,978) and CONV ($28,103) are costlier than OA ($15,714) based on normalized cost for simple and complex diseases (P<0.001). LA is more prevalent but its cost is higher in both simple and complex cases. Cost and complications increase if the case is converted to open. OA remains the most cost effective approach for patients with acute appendicitis. Copyright © 2011 Elsevier Inc. All rights reserved.
An interval model updating strategy using interval response surface models
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin
2015-08-01
Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. However in practice, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information of a structure. At this moment an interval model updating procedure shows its superiority in the aspect of problem simplification since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be maximally avoided leading to accurate estimation of parameter intervals. Meanwhile, the establishment of an interval inverse problem is highly simplified, accompanied by a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and also against a set of experimentally tested steel plates.
A Reduced-Order Model For Zero-Mass Synthetic Jet Actuators
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.; Vatsa, Veer S.
2007-01-01
Accurate details of the general performance of fluid actuators are desirable over a range of flow conditions, within some predetermined error tolerance. Designers typically model actuators with different levels of fidelity depending on the acceptable level of error in each circumstance. Crude properties of the actuator (e.g., peak mass rate and frequency) may be sufficient for some designs, while detailed information is needed for other applications (e.g., multiple actuator interactions). This work attempts to address two primary objectives. The first objective is to develop a systematic methodology for approximating realistic 3-D fluid actuators, using quasi-1-D reduced-order models. Near full fidelity can be achieved with this approach at a fraction of the cost of full simulation and only a modest increase in cost relative to most actuator models used today. The second objective, which is a direct consequence of the first, is to determine the approximate magnitude of errors committed by actuator model approximations of various fidelities. This objective attempts to identify which model (ranging from simple orifice exit boundary conditions to full numerical simulations of the actuator) is appropriate for a given error tolerance.
Islip Housing Authority Energy Efficiency Turnover Protocols, Islip, New York (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2014-08-01
More than 1 million HUD-supported public housing units provide rental housing for eligible low-income families across the country. A survey of over 100 public housing authorities (PHAs) across the country indicated that there is a high level of interest in developing low-cost solutions that improve energy efficiency and can be seamlessly included in the refurbishment process. Further, PHAs have incentives (both internal and external) to reduce utility bills. ARIES worked with two PHAs to develop packages of energy efficiency retrofit measures the PHAs can cost-effectively implement with their own staffs in the normal course of housing operations at the time when units are refurbished between occupancies. The energy efficiency turnover protocols emphasized air infiltration reduction, duct sealing and measures that improve equipment efficiency. ARIES documented implementation in ten housing units. Reductions in average air leakage were 16-20% and duct leakage reductions averaged 38%. Total source energy consumption savings was estimated at 6-10% based on BEopt modeling with a simple payback of 1.7 to 2.2 years. Implementation challenges were encountered mainly related to required operational changes and budgetary constraints. Nevertheless, simple measures can feasibly be accomplished by PHA staff at low or no cost. At typical housing unit turnover rates, these measures could impact hundreds of thousands of units per year nationally.
Vehmeijer, Maarten; van Eijnatten, Maureen; Liberton, Niels; Wolff, Jan
2016-08-01
Fractures of the orbital floor are often a result of traffic accidents or interpersonal violence. To date, numerous materials and methods have been used to reconstruct the orbital floor. However, simple and cost-effective 3-dimensional (3D) printing technologies for the treatment of orbital floor fractures are still sought. This study describes a simple, precise, cost-effective method of treating orbital fractures using 3D printing technologies in combination with autologous bone. Enophthalmos and diplopia developed in a 64-year-old female patient with an orbital floor fracture. A virtual 3D model of the fracture site was generated from computed tomography images of the patient. The fracture was virtually closed using spline interpolation. Furthermore, a virtual individualized mold of the defect site was created, which was manufactured using an inkjet printer. The tangible mold was subsequently used during surgery to sculpture an individualized autologous orbital floor implant. Virtual reconstruction of the orbital floor and the resulting mold enhanced the overall accuracy and efficiency of the surgical procedure. The sculptured autologous orbital floor implant showed an excellent fit in vivo. The combination of virtual planning and 3D printing offers an accurate and cost-effective treatment method for orbital floor fractures. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
The construction and operation of a low-cost poultry waste digester.
Steinsberger, S C; Shih, J C
1984-05-01
A simple and low-cost poultry waste digester (PWD) was constructed to treat the waste from 4000 caged laying hens on University Research Unit No. 2 at North Carolina State University. The system was built basically of a plastic lining with insulation, a heating system, a hot-water tank, and other metering equipment. It was operated at 50 degrees C and pH 7.5-8.0. The initiation of methane production was achieved using the indigenous microflora in the poultry waste. At an optimal loading rate (7.5 kg volatile solids/m(3) day), the PWD produced biogas (55% methane) at a rate of 4.0 m(3)/m(3) day. The PWD was biologically stable and able to tolerate temporary overloads and shutdowns. A higher loading rate failed to maintain a high gas production rate and caused drops in methane content and pH value. Under optimal conditions, a positive energy balance was demonstrated with a net surplus of 50.6% of the gross energy. For methane production, the PWD system was proved to be technically feasible. The simple design and inexpensive materials used for this model could significantly reduce the cost of digestion compared to more conventional systems. More studies are needed to determine the durability, the required maintenance of the system, and the most economical method of biogas and solid residue utilization.
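A back-of-envelope check of the abstract's figures: multiplying the volumetric biogas rate by the methane fraction gives the digester's methane productivity.

```python
# Methane productivity implied by the abstract's figures: biogas output per
# unit digester volume times the reported methane fraction.
biogas_m3_per_m3_day = 4.0     # biogas produced per m3 of digester per day
methane_fraction = 0.55        # reported 55% methane content
methane_m3_per_m3_day = biogas_m3_per_m3_day * methane_fraction  # 2.2
```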
Public Housing: A Tailored Approach to Energy Retrofits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dentz, J.; Conlin, F.; Podorson, D.
2016-02-18
Over one million HUD-supported public housing units provide rental housing for eligible low-income families across the country. A survey of over 100 public housing authorities (PHAs) across the country indicated that there is a high level of interest in developing low-cost solutions that improve energy efficiency and can be seamlessly included in the refurbishment process. Further, PHAs have incentives (both internal and external) to reduce utility bills. ARIES worked with four PHAs to develop packages of energy efficiency retrofit measures the PHAs can cost-effectively implement with their own staffs in the normal course of housing operations at the time when units are refurbished between occupancies. The energy efficiency turnover protocols emphasized air infiltration reduction, duct sealing, and measures that improve equipment efficiency. ARIES documented implementation in 18 housing units. Reductions in average air leakage were 16%, and duct leakage reductions averaged 23%. Total source energy consumption savings due to implemented measures were estimated at 3-10% based on BEopt modeling, with a simple payback of 1.6 to 2.5 years. Implementation challenges were encountered, mainly related to required operational changes and budgetary constraints. Nevertheless, simple measures can feasibly be accomplished by PHA staff at low or no cost. At typical housing unit turnover rates, these measures could impact hundreds of thousands of units per year nationally.
Simple Retrofit High-Efficiency Natural Gas Water Heater Field Test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoenbauer, Ben
High-performance water heaters are typically more time consuming and costly to install in retrofit applications, making them difficult to justify economically. However, recent advancements in high performance water heaters have targeted the retrofit market, simplifying installations and reducing costs. Four high efficiency natural gas water heaters designed specifically for retrofit applications were installed in single-family homes along with detailed monitoring systems to characterize their savings potential, their installed efficiencies, and their ability to meet household demands. The water heaters tested for this project were designed to improve the cost-effectiveness and increase market penetration of high efficiency water heaters in the residential retrofit market. The retrofit high efficiency water heaters achieved their goals of reducing costs, matching the savings potential and installed efficiency of other high efficiency water heaters, and meeting the necessary capacity, in order to improve cost-effectiveness. However, the improvements were not sufficient to achieve simple paybacks of less than ten years for the incremental cost compared to a minimum efficiency heater. Significant changes would be necessary to reduce the simple payback to six years or less. Annual energy savings in the range of $200 would also reduce paybacks to less than six years. These energy savings would require either significantly higher fuel costs (greater than $1.50 per therm) or very high usage (around 120 gallons per day). For current incremental costs, the water heater efficiency would need to be similar to that of a heat pump water heater to deliver a six year payback.
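The payback framing used throughout this abstract is a simple division of incremental cost by annual savings. The $200/year savings and six-year threshold come from the abstract; the $1,200 incremental cost below is a hypothetical placeholder chosen to illustrate that threshold:

```python
def simple_payback(incremental_cost_usd, annual_savings_usd):
    """Years to recover the extra up-front cost from annual energy savings."""
    return incremental_cost_usd / annual_savings_usd

# $200/yr of energy savings against a hypothetical $1,200 incremental cost
print(f"{simple_payback(1200.0, 200.0):.1f} years")  # → 6.0 years
```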
In-plane cost-effective magnetically actuated valve for microfluidic applications
NASA Astrophysics Data System (ADS)
Pugliese, Marco; Ferrara, Francesco; Bramanti, Alessandro Paolo; Gigli, Giuseppe; Maiorano, Vincenzo
2017-04-01
We present a new in-plane magnetically actuated microfluidic valve. Its simple design includes a circular area joining two channels lying on the same plane. The area is parted by a septum lying on and adhering to a magneto-active polymeric ‘floor’ membrane, keeping the channels normally separated (valve closed). Under the action of a magnetic field, the membrane collapses, letting the liquid flow below the septum (valve open). The valve was extensively characterized experimentally, and modeled and optimized theoretically. The growing interest in lab-on-chip devices, especially for diagnostics and precision medicine, is driving researchers towards smart, efficient and low-cost solutions for the management of biological samples. In this context, the valve developed in this work represents a useful building block for microfluidic applications requiring precise flow control, its main features being easy and rapid manufacturing, biocompatibility and low cost.
Science and society test VI: Energy economics
NASA Astrophysics Data System (ADS)
Hafemeister, David W.
1982-01-01
Simple numerical estimates are developed in order to quantify a variety of energy economics issues. The Verhulst equation, which considers the effect of finite resources on petroleum production, is modified to take into account supply and demand economics. Numerical and analytical solutions to these differential equations are presented in terms of supply and demand elasticity functions, various finite resources, and the rate of increase in fuel costs. The indirect cost per barrel of imported oil from OPEC is shown to be about the same as the direct cost. These effects, as well as those of discounted benefits and deregulation, are used in a calculation of payback periods for various energy-conserving devices. A phenomenological model for market penetration is developed along with the factors for future energy growth rates. A brief analysis of the economic returns of the ''house doctor'' program to retrofit houses for energy conservation is presented.
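The abstract's economic modification of the Verhulst equation is not reproduced there; as a minimal sketch, the unmodified logistic form for cumulative production Q(t) from a finite resource Q_inf can be integrated by forward Euler (all parameter values below are illustrative):

```python
def verhulst(q0, q_inf, k, t_end, dt=0.01):
    """Forward-Euler integration of dQ/dt = k * Q * (1 - Q / q_inf)."""
    q, t = q0, 0.0
    while t < t_end:
        q += dt * k * q * (1.0 - q / q_inf)
        t += dt
    return q

# Cumulative production approaches the finite resource limit q_inf.
print(verhulst(q0=1.0, q_inf=100.0, k=0.5, t_end=40.0))
```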
Task 28: Web Accessible APIs in the Cloud Trade Study
NASA Technical Reports Server (NTRS)
Gallagher, James; Habermann, Ted; Jelenak, Aleksandar; Lee, Joe; Potter, Nathan; Yang, Muqun
2017-01-01
This study explored three candidate architectures for serving NASA Earth Science Hierarchical Data Format Version 5 (HDF5) data via Hyrax running on Amazon Web Services (AWS). We studied the cost and performance for each architecture using several representative Use-Cases. The objectives of the project were to: (1) conduct a trade study to identify one or more high performance integrated solutions for storing and retrieving NASA HDF5 and Network Common Data Format Version 4 (netCDF4) data in a cloud (web object store) environment, the target environment being Amazon Web Services (AWS) Simple Storage Service (S3); (2) conduct the needed level of software development to properly evaluate solutions in the trade study and to obtain required benchmarking metrics for input into government decisions on potential follow-on prototyping; and (3) develop a cloud cost model for the preferred data storage solution (or solutions) that accounts for different granulation and aggregation schemes as well as cost and performance trades.
Chesson, Harrell W; Forhan, Sara E; Gottlieb, Sami L; Markowitz, Lauri E
2008-08-18
We estimated the health and economic benefits of preventing recurrent respiratory papillomatosis (RRP) through quadrivalent human papillomavirus (HPV) vaccination. We applied a simple mathematical model to estimate the averted costs and quality-adjusted life years (QALYs) saved by preventing RRP in children whose mothers had been vaccinated at age 12 years. Under base case assumptions, the prevention of RRP would avert an estimated USD 31 (range: USD 2-178) in medical costs (2006 US dollars) and save 0.00016 QALYs (range: 0.00001-0.00152) per 12-year-old girl vaccinated. Including the benefits of RRP reduced the estimated cost per QALY gained by HPV vaccination by roughly 14-21% in the base case and by <2% to >100% in the sensitivity analyses. More precise estimates of the incidence of RRP are needed, however, to quantify this impact more reliably.
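The abstract's per-girl figures (USD 31 averted and 0.00016 QALYs gained) shift a cost-per-QALY ratio as sketched below. The $500 baseline vaccination cost and 0.01 baseline QALY gain are hypothetical placeholders, so the resulting percentage reduction will not match the paper's 14-21%:

```python
def cost_per_qaly(net_cost_usd, qalys_gained):
    """Cost-effectiveness ratio in USD per quality-adjusted life year."""
    return net_cost_usd / qalys_gained

base_cost, base_qalys = 500.0, 0.01           # hypothetical per-girl baseline
without_rrp = cost_per_qaly(base_cost, base_qalys)
with_rrp = cost_per_qaly(base_cost - 31.0, base_qalys + 0.00016)
print(f"{without_rrp:,.0f} -> {with_rrp:,.0f} USD per QALY")
# → 50,000 -> 46,161 USD per QALY
```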
Laser Speckle Photography: Some Simple Experiments for the Undergraduate Laboratory.
ERIC Educational Resources Information Center
Bates, B.; And Others
1986-01-01
Describes simple speckle photography experiments which are easy to set up and require only low cost standard laboratory equipment. Included are procedures for taking single, double, and multiple exposures. (JN)
Technology in rural transportation: "Simple Solutions"
DOT National Transportation Integrated Search
1997-10-01
The Rural Outreach Project: Simple Solutions Report contains the findings of a research effort aimed at identifying and describing proven, cost-effective, low-tech solutions for rural transportation-related problems or needs. Through a process ...
Assessing the costs attributed to project delay during project pre-construction stages
DOT National Transportation Integrated Search
2016-03-01
This project for the Texas Department of Transportation (TxDOT) developed a simple but sound methodology for estimating the cost of delaying most types of highway projects. Researchers considered the cost of delays during the pre-construction pha...
Performability modeling based on real data: A case study
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1988-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
Performability modeling based on real data: A case study
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1987-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
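The reward construction described in both abstracts can be illustrated with a toy chain: weight each state's reward (relative service delivered) by its long-run occupancy. The papers require a semi-Markov process; for brevity this sketch uses a plain discrete-time chain, and the three states, transition probabilities, and rewards are invented:

```python
def steady_state(P, iters=200):
    """Power-iterate a row-stochastic matrix to its stationary distribution."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return pi

P = [[0.90, 0.08, 0.02],   # normal   -> {normal, degraded, error}
     [0.50, 0.45, 0.05],   # degraded -> ...
     [0.70, 0.10, 0.20]]   # error    -> ...
reward = [1.0, 0.6, 0.0]   # relative service rate delivered in each state
pi = steady_state(P)
print(f"expected reward rate: {sum(p * r for p, r in zip(pi, reward)):.3f}")
# → expected reward rate: 0.920
```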
NASA Astrophysics Data System (ADS)
Cowden, J. R.; Watkins, D. W.; Mihelcic, J. R.; Fry, L. M.
2007-12-01
Urban populations now exceed rural populations worldwide, creating unique challenges in providing basic services, especially in developing countries where informal or illegal settlements grow in peri-urban areas. West Africa is an acute example of the problems created by rapid urban growth, with high levels of urban poverty and low water and sanitation access rates. Although considerable effort has been made in providing improved water access and urban services to slum communities, research indicates that clean water access rates are not keeping up with urbanization rates in several areas of the world and that rapidly growing slum communities are beginning to overwhelm many prior water improvement projects. In the face of these challenges, domestic rainwater harvesting is proposed as a technologically appropriate and economically viable option for enhancing water supplies to urban slum households. However, assessing the reliability, potential health impacts, and overall cost-effectiveness of these systems on a regional level is difficult for several reasons. First, long daily rainfall records are not readily available in much of the developing world, including many regions of sub-Saharan Africa. Second, significant uncertainties exist in the relevant cost, water use, and health data. Third, to estimate the potential future impacts at the regional scale, various global change scenarios should be investigated. Finally, in addition to these technical challenges, there is also a need to develop relatively simple and transparent assessment methods for informing policy makers. A procedure is presented for assessment of domestic rainwater harvesting systems using a combination of scenario, sensitivity, and trade-off analyses. Using data from West Africa, simple stochastic weather models are developed to generate rainfall sequences for the region, which are then used to estimate the reliability of providing a range of per capita water supplies.
Next, a procedure is proposed for quantifying the health impacts of improved water supplies, and sensitivity analysis of cost and health data provides an indication of cost-effectiveness. Climate change impacts are assessed via weather model parameter adjustment according to statistical downscaling of general circulation model output. Future work involving the interpolation of model parameters to ungaged sites, incorporation of additional global change scenarios (e.g., population, emissions), and extension of the procedure to a full Monte Carlo analysis will be discussed as time allows.
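The "simple stochastic weather model" step can be sketched as a two-state Markov-chain rainfall generator driving a tank mass balance. Every parameter below (wet/dry transition probabilities, mean rain depth, roof area, tank size, per-household demand) is an illustrative assumption, not a value from the study:

```python
import random

random.seed(1)

def simulate_reliability(days=3650, p_wet_after_dry=0.3, p_wet_after_wet=0.6,
                         mean_rain_mm=8.0, roof_m2=20.0, tank_l=2000.0,
                         demand_l=100.0):
    """Fraction of simulated days on which full daily demand is met from the tank."""
    wet, storage, days_met = False, tank_l / 2.0, 0
    for _ in range(days):
        wet = random.random() < (p_wet_after_wet if wet else p_wet_after_dry)
        if wet:
            # 1 mm of rain over 1 m^2 of roof = 1 litre captured
            storage += random.expovariate(1.0 / mean_rain_mm) * roof_m2
        storage = min(storage, tank_l)   # tank overflows beyond capacity
        draw = min(storage, demand_l)
        storage -= draw
        days_met += draw >= demand_l
    return days_met / days

print(f"fraction of days demand fully met: {simulate_reliability():.2f}")
```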
Accurate Behavioral Simulator of All-Digital Time-Domain Smart Temperature Sensors by Using SIMULINK
Chen, Chun-Chi; Chen, Chao-Lieh; Lin, You-Ting
2016-01-01
This study proposes a new behavioral simulator that uses SIMULINK for all-digital CMOS time-domain smart temperature sensors (TDSTSs) for performing rapid and accurate simulations. Inverter-based TDSTSs offer the benefits of low cost and simple structure for temperature-to-digital conversion and have been developed. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluations. However, such tools require extremely long simulation time and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) by which the TDSTSs evaluate temperature behavior. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in SIMULINK environment was developed to substantially accelerate the simulation and simplify the evaluation procedures. Experiments demonstrated that the simulation results of the proposed simulator have favorable agreement with those obtained from HSPICE simulations, showing that the proposed simulator functions successfully. This is the first behavioral simulator addressing the rapid simulation of TDSTSs. PMID:27509507
Advanced Hydrogen Liquefaction Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz, Joseph; Kromer, Brian; Neu, Ben
2011-09-28
The project identified and quantified ways to reduce the cost of hydrogen liquefaction, and reduce the cost of hydrogen distribution. The goal was to reduce the power consumption by 20% and then to reduce the capital cost. Optimizing the process, improving process equipment, and improving ortho-para conversion significantly reduced the power consumption of liquefaction, but by less than 20%. Because the efficiency improvement was less than the target, the program was stopped before the capital cost was addressed. These efficiency improvements could provide a benefit to the public to improve the design of future hydrogen liquefiers. The project increased the understanding of hydrogen liquefaction by modeling different processes and thoroughly examining ortho-para separation and conversion. The process modeling provided a benefit to the public because the project incorporated para hydrogen into the process modeling software, so liquefaction processes can be modeled more accurately than using only normal hydrogen. Adding catalyst to the first heat exchanger, a simple method to reduce liquefaction power, was identified, analyzed, and quantified. The demonstrated performance of ortho-para separation is sufficient for at least one identified process concept to show reduced power cost when compared to hydrogen liquefaction processes using conventional ortho-para conversion. The impact of improved ortho-para conversion can be significant because ortho-para conversion uses about 20-25% of the total liquefaction power, but performance improvement is necessary to realize a substantial benefit. Most of the energy used in liquefaction is for gas compression. Improvements in hydrogen compression will have a significant impact on overall liquefier efficiency. Improvements to turbines, heat exchangers, and other process equipment will have less impact.
Automated external defibrillators in schools?
Cornelis, Charlotte; Calle, Paul; Mpotos, Nicolas; Monsieurs, Koenraad
2015-06-01
Automated external defibrillators (AEDs) placed in public locations can save lives of cardiac arrest victims. In this paper, we try to estimate the cost-effectiveness of AED placement in Belgian schools. This would allow school policy makers to make an evidence-based decision about an on-site AED project. We developed a simple mathematical model containing literature data on the incidence of cardiac arrest with a shockable rhythm, the feasibility and effectiveness of defibrillation by on-site AEDs, and the survival benefit. This was coupled to a rough estimation of the minimal costs to initiate an AED project. According to the model described above, AED projects in all Belgian schools may save 5 patients annually. A rough estimate of the minimal costs to initiate an AED project is 660 EUR per year. As there are about 6000 schools in Belgium, a national AED project in all schools would imply an annual cost of at least 3,960,000 EUR, resulting in 5 lives saved. As our literature survey shows that AED use in schools is feasible and effective, the placement of these devices in all Belgian schools is certainly worth considering. The major counter-arguments are the very low incidence and the high costs to set up a school-based AED programme. Our review may fuel the discussion about whether or not school-based AED projects represent good value for money and should be preferred above other health care interventions.
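The abstract's headline numbers follow from one line of arithmetic over its stated inputs:

```python
# Figures from the abstract: ~6000 Belgian schools, a minimal 660 EUR/year
# per school, and an estimated 5 lives saved annually nationwide.
schools, cost_per_school_eur, lives_saved = 6000, 660, 5
total_eur = schools * cost_per_school_eur
print(f"{total_eur:,} EUR/yr -> {total_eur // lives_saved:,} EUR per life saved")
# → 3,960,000 EUR/yr -> 792,000 EUR per life saved
```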
Cost comparison: limb salvage versus amputation in diabetic patients with charcot foot.
Gil, Joseph; Schiff, Adam P; Pinzur, Michael S
2013-08-01
The negative impact on health-related quality of life in patients with Charcot foot has prompted operative correction of the acquired deformity. Comparative effectiveness financial models are being introduced to provide valuable information to assist clinical decision making. Seventy-six patients with Charcot foot underwent operative correction with the use of circular external fixation. Thirty-eight (50%) had osteomyelitis. A control group was created from 17 diabetic patients who successfully underwent transtibial amputation and prosthetic fitting during the same period. Cost of care during the 12 months following surgery was derived from inpatient hospitalization, placement in a rehabilitation unit or skilled nursing facility, home health care including parenteral antibiotic therapy, physical therapy, and purchase of prosthetic devices or footwear. Fifty-three of the patients with limb salvage (69.7%) did not require inpatient rehabilitation. Their average cost of care was $56,712. Fourteen of the patients with amputation (82.4%) required inpatient rehabilitation, with an average cost of $49,251. Many surgeons now favor operative correction of Charcot foot deformity. This investigation provides preliminary data on the relative cost of transtibial amputation and prosthetic limb fitting compared with limb salvage. Even simple comparative effectiveness models such as this one may provide valuable information for planning resource allocation for similarly complex groups of patients. Level III, economic and decision analysis.
Optimal control of hydroelectric facilities
NASA Astrophysics Data System (ADS)
Zhao, Guangzhi
This thesis considers a simple yet realistic model of pump-assisted hydroelectric facilities operating in a market with time-varying but deterministic power prices. Both deterministic and stochastic water inflows are considered. The fluid mechanical and engineering details of the facility are described by a model containing several parameters. We present a dynamic programming algorithm for optimizing either the total energy produced or the total cash generated by these plants. The algorithm allows us to give the optimal control strategy as a function of time and to see how this strategy, and the associated plant value, varies with water inflow and electricity price. We investigate various cases. For a single pumped storage facility experiencing deterministic power prices and water inflows, we investigate the varying behaviour for an oversimplified constant turbine- and pump-efficiency model with simple reservoir geometries. We then generalize this simple model to include more realistic turbine efficiencies, situations with more complicated reservoir geometry, and the introduction of dissipative switching costs between various control states. We find many results which reinforce our physical intuition about this complicated system as well as results which initially challenge, though later deepen, this intuition. One major lesson of this work is that the optimal control strategy does not differ much between two differing objectives of maximizing energy production and maximizing its cash value. We then turn our attention to the case of stochastic water inflows. We present a stochastic dynamic programming algorithm which can find an on-average optimal control in the face of this randomness. As the operator of a facility must be more cautious when inflows are random, the randomness destroys facility value. Following this insight we quantify exactly how much a perfect hydrological inflow forecast would be worth to a dam operator. 
In our final chapter we discuss the challenging problem of optimizing a sequence of two hydro dams sharing the same river system. The complexity of this problem is magnified and we just scratch its surface here. The thesis concludes with suggestions for future work in this fertile area. Keywords: dynamic programming, hydroelectric facility, optimization, optimal control, switching cost, turbine efficiency.
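A stripped-down version of the dynamic programming described in this thesis can be written as backward induction over hours with a discretized reservoir, deterministic prices, and a single round-trip pump efficiency. The capacities, the 75% pump efficiency, and the price series below are illustrative assumptions, and leftover water is crudely assigned zero terminal value:

```python
def optimize(prices, levels=10, step=1.0, eta_pump=0.75):
    """Backward induction: value[l] = best future cash with l reservoir units."""
    value = [0.0] * (levels + 1)          # terminal condition: leftover water worth 0
    for price in reversed(prices):
        new = []
        for level in range(levels + 1):
            best = value[level]                                   # idle
            if level >= 1:                                        # generate one unit
                best = max(best, price * step + value[level - 1])
            if level < levels:                                    # pump one unit in
                best = max(best, value[level + 1] - price * step / eta_pump)
            new.append(best)
        value = new
    return value

hourly_prices = [20, 10, 50, 15, 60]      # illustrative day-ahead prices
v = optimize(hourly_prices)
print(f"value starting half full: {v[5]:.1f}")  # → value starting half full: 155.0
```

With plenty of stored water and zero terminal value, the optimal policy here simply generates every hour; starting empty, the plant instead pumps at the two cheapest hours to sell at the two most expensive ones.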
Segmenting high-cost Medicare patients into potentially actionable cohorts.
Joynt, Karen E; Figueroa, Jose F; Beaulieu, Nancy; Wild, Robert C; Orav, E John; Jha, Ashish K
2017-03-01
Providers are assuming growing responsibility for healthcare spending, and prior studies have shown that spending is concentrated in a small proportion of patients. Using simple methods to segment these patients into clinically meaningful subgroups may be a useful and accessible strategy for targeting interventions to control costs. Using Medicare fee-for-service claims from 2011 (baseline year, used to determine comorbidities and subgroups) and 2012 (spending year), we used basic demographics and comorbidities to group beneficiaries into 6 cohorts, defined by expert opinion and consultation: under-65 disabled/ESRD, frail elderly, major complex chronic, minor complex chronic, simple chronic, and relatively healthy. We considered patients in the highest 10% of spending to be "high-cost." A total of 611,245 beneficiaries were high-cost; these patients were less often white (76.2% versus 80.9%) and more often dually eligible (37.0% versus 18.3%). By segment, frail patients were the most likely (46.2%) to be high-cost, followed by the under-65 (14.3%) and major complex chronic (11.1%) groups; fewer than 5% of the beneficiaries in the other cohorts were high-cost in the spending year. The frail elderly ($70,196) and under-65 disabled/ESRD ($71,210) high-cost groups had the highest spending; spending in the frail high-cost group was driven by inpatient ($23,704) and post-acute care ($24,080), while the under-65 disabled/ESRD group spent more through Part D costs ($23,003). Simple criteria can segment Medicare beneficiaries into clinically meaningful subgroups with different spending profiles. Under delivery system reform, interventions that focus on frail or disabled patients may have particularly high value as providers seek to reduce spending. IV. Copyright © 2016 Elsevier Inc. All rights reserved.
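The cohort assignment can be sketched as a first-match rule cascade. The six labels come from the abstract, but the age, frailty-flag, and comorbidity-count criteria below are hypothetical stand-ins for the authors' clinical definitions:

```python
def segment(age, frail, n_major_chronic, n_minor_chronic):
    """Assign a beneficiary to the first matching cohort (illustrative rules)."""
    if age < 65:
        return "under-65 disabled/ESRD"
    if frail:
        return "frail elderly"
    if n_major_chronic >= 2:
        return "major complex chronic"
    if n_major_chronic == 1:
        return "minor complex chronic"
    if n_minor_chronic >= 1:
        return "simple chronic"
    return "relatively healthy"

print(segment(age=72, frail=False, n_major_chronic=2, n_minor_chronic=1))
# → major complex chronic
```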
Economic Evaluation of a Hybrid Desalination System Combining Forward and Reverse Osmosis
Choi, Yongjun; Cho, Hyeongrak; Shin, Yonghyun; Jang, Yongsun; Lee, Sangho
2015-01-01
This study seeks to evaluate the performance and economic feasibility of the forward osmosis (FO)-reverse osmosis (RO) hybrid process and to propose a guideline by which this hybrid process might be made more price-competitive in the field. A solution-diffusion model modified with film theory was applied to analyze the effects of concentration polarization and of the water and salt transport coefficients on flux, recovery, seawater concentration, and treated wastewater in the FO stage of an FO-RO hybrid system. A simple cost model was applied to analyze the effects of flux, recovery of the FO process, energy, and membrane cost on the FO-RO hybrid process. The simulation results showed that the water transport coefficient and internal concentration polarization resistance are very important factors affecting performance in the FO process; however, the effect of the salt transport coefficient does not seem to be large. It was also found that the flux and recovery of the FO process, the FO membrane cost, and the electricity cost are very important factors that influence the water cost of an FO-RO hybrid system. This hybrid system can be price-competitive with RO systems when its recovery rate is very high, the flux and membrane cost of the FO are similar to those of the RO, and the electricity cost is high. The most important factor in commercializing the FO process is enhancing performance (e.g., flux and recovery of FO membranes). PMID:26729176
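A "simple cost model" of the kind the abstract applies can be sketched by levelizing membrane replacement and electricity over annual product water. Every numeric input below is an illustrative placeholder, and capital cost is ignored for brevity:

```python
def unit_water_cost(flux_lmh, area_m2, hours_per_yr=8000, elec_kwh_per_m3=2.5,
                    elec_price_usd=0.10, membrane_usd_per_m2=30.0,
                    membrane_life_yr=5):
    """Levelized product-water cost in USD/m^3 (membrane + electricity only)."""
    annual_m3 = flux_lmh * area_m2 * hours_per_yr / 1000.0  # L/(m^2 h) -> m^3/yr
    membrane_annual = membrane_usd_per_m2 * area_m2 / membrane_life_yr
    energy_annual = elec_kwh_per_m3 * elec_price_usd * annual_m3
    return (membrane_annual + energy_annual) / annual_m3

print(f"~${unit_water_cost(flux_lmh=10.0, area_m2=1000.0):.3f}/m^3")
# → ~$0.325/m^3
```

Raising the flux spreads the fixed membrane charge over more water, which is one way to see the abstract's conclusion that FO flux and membrane cost drive the hybrid system's water cost.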
Assessing predation risk: optimal behaviour and rules of thumb.
Welton, Nicky J; McNamara, John M; Houston, Alasdair I
2003-12-01
We look at a simple model in which an animal makes behavioural decisions over time in an environment in which all parameters are known to the animal except predation risk. In the model there is a trade-off between gaining information about predation risk and anti-predator behaviour. All predator attacks lead to death for the prey, so that the prey learns about predation risk by virtue of the fact that it is still alive. We show that it is not usually optimal to behave as if the current unbiased estimate of the predation risk is its true value. We consider two different ways to model reproduction; in the first scenario the animal reproduces throughout its life until it dies, and in the second scenario expected reproductive success depends on the level of energy reserves the animal has gained by some point in time. For both of these scenarios we find results on the form of the optimal strategy and give numerical examples which compare optimal behaviour with behaviour under simple rules of thumb. The numerical examples suggest that the value of the optimal strategy over the rules of thumb is greatest when there is little current information about predation risk, learning is not too costly in terms of predation, and it is energetically advantageous to learn about predation. We find that for the model and parameters investigated, a very simple rule of thumb such as 'use the best constant control' performs well.
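The "learning by staying alive" mechanism has a clean conjugate form: with a Gamma(a, b) prior on the attack rate and an exponential survival likelihood, surviving exposure time t yields a Gamma(a, b + t) posterior, so the estimated risk decays the longer the animal survives. The prior parameters below are illustrative, and this sketches only the estimation idea, not the paper's full model coupling the estimate to behaviour:

```python
def posterior_mean_attack_rate(a, b, exposure_time):
    """Mean of the Gamma(a, b + t) posterior after surviving exposure time t."""
    return a / (b + exposure_time)

print(posterior_mean_attack_rate(1.0, 10.0, 0.0))   # prior mean → 0.1
print(posterior_mean_attack_rate(1.0, 10.0, 40.0))  # after surviving t=40 → 0.02
```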
Colston, Josh; Saboyá, Martha
2013-05-01
We present an example of a tool for quantifying the burden, the population in need of intervention, and the resources needed for the control of soil-transmitted helminth (STH) infection at multiple administrative levels for the region of Latin America and the Caribbean (LAC). The tool relies on published STH prevalence data along with data on the distribution of several STH transmission determinants for 12,273 sub-national administrative units in 22 LAC countries taken from national censuses. Data on these determinants were aggregated into a single risk index based on a conceptual framework, and the statistical significance of the association between this index and the STH prevalence indicators was tested using simple linear regression. The coefficient and constant from the output of this regression were then used in a regression formula applied to the risk index values for all of the administrative units in order to model the estimated prevalence of each STH species. We then combined these estimates with population data, treatment thresholds, and unit cost data to calculate total control costs. The model predicts an annual cost of around US$ 1.7 million for the procurement of preventive chemotherapy and a total cost of US$ 47 million for implementing a comprehensive STH control programme targeting an estimated 78.7 million school-aged children according to the WHO guidelines throughout the entirety of the countries included in the study. Considerable savings could potentially be made by embedding STH control interventions within existing health programmes and systems. A study of this scope is prone to many limitations, which restrict the interpretation of the results and the uses to which its findings may be put. We discuss several of these limitations.
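The regression-then-extrapolate step can be sketched as follows; all numbers (risk indices, prevalences, threshold, unit cost) are invented for illustration and are not the study's data:

```python
# Fit prevalence ~ risk index on surveyed units, predict prevalence for
# unsurveyed units, then cost treatment where a threshold is exceeded.
def fit_line(xs, ys):
    """Ordinary least squares for one predictor: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

risk = [0.1, 0.3, 0.5, 0.7]          # risk index for surveyed units
prev = [0.05, 0.15, 0.26, 0.35]      # observed STH prevalence
a, b = fit_line(risk, prev)

def programme_cost(risk_index, population, unit_cost, threshold=0.2):
    """Treat the whole target population where modelled prevalence
    exceeds the threshold (a WHO-style rule of thumb, illustrative only)."""
    p = a + b * risk_index            # modelled prevalence
    return population * unit_cost if p >= threshold else 0.0
```

Summing `programme_cost` over all administrative units gives the kind of regional total the abstract reports.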
Essays on competition in electricity markets
NASA Astrophysics Data System (ADS)
Bustos Salvagno, Ricardo Javier
The first chapter shows how technology decisions affect entry in commodity markets with oligopolistic competition, like the electricity market. I demonstrate an entry deterrence effect that works through cost uncertainty. A technology's cost uncertainty affects spot market expected profits through forward market trades. Therefore, incentives to engage in forward trading shape firms' decisions on production technologies. I show that high-cost but low-risk technologies are adopted by risk-averse incumbents to deter entry. Strategic technology adoption can end in an equilibrium where high-cost technologies prevail over low-cost but riskier ones. In the case of incumbents who are less risk-averse than entrants, entry deterrence is achieved by choosing riskier technologies. The main results do not depend on who chooses their technology first. Chapter two examines the Chilean experience with auctions for long-term supply contracts in electricity markets from 2006 to 2011. Using a divisible-good auction model, I provide a theoretical framework that explains bidding behavior in terms of expected spot prices and contracting positions. The model is extended to include potential strategic behavior in contracting decisions. Empirical estimations confirm the main determinants of bidding behavior and show heterogeneity in the marginal cost of over-contracting depending on size and incumbency. Chapter three analyzes the lag in capacity expansion in the Chilean electricity market from 2000 to 2004. Though regarded as a result of regulatory uncertainty, the role of delays in the construction of a large hydro-power plant has been overlooked by the literature. We argue that those delays postponed projected investment and gave small windows of opportunity that only incumbents could take advantage of.
We are able to retrace the history of investments through real-time information from the regulator's reports and a simple model enables us to explain the effect of those delays on suggested and under-construction investments.
NASA Astrophysics Data System (ADS)
Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.
2016-01-01
Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ˜1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.
Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests
NASA Technical Reports Server (NTRS)
Douglas, Freddie; Bourgeois, Edit Kaminsky
2005-01-01
The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).
Uncertainty, imprecision, and the precautionary principle in climate change assessment.
Borsuk, M E; Tomassini, L
2005-01-01
Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.
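When the probability measure is only known to lie in a set, expectations become bounds, and the minimum-upper-expected-cost criterion picks the action whose worst-case expectation is smallest. A toy sketch with an invented class of three measures (the outcomes, costs, and probabilities are illustrative, not the paper's climate-economics inputs):

```python
# Imprecise-probability decision sketch: evaluate each action's upper
# expected cost over a small class of candidate probability measures.
outcomes = [0, 1, 2]                     # e.g. low/mid/high climate damage
cost = {                                  # cost of each emissions action
    "high_emissions": [0.0, 5.0, 20.0],
    "low_emissions":  [3.0, 4.0, 6.0],
}
measures = [                              # the imprecise class: several pmfs
    [0.7, 0.2, 0.1],
    [0.5, 0.3, 0.2],
    [0.3, 0.4, 0.3],
]

def upper_expected_cost(action):
    """Worst-case (upper) expectation of cost over the class of measures."""
    return max(sum(p * c for p, c in zip(m, cost[action])) for m in measures)

# Precautionary criterion: minimize the upper expected cost.
best = min(cost, key=upper_expected_cost)
```

A larger class of measures widens the bounds and can flip the decision, which is the paper's point about the need for care in selecting the class.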
Combining Statistics and Physics to Improve Climate Downscaling
NASA Astrophysics Data System (ADS)
Gutmann, E. D.; Eidhammer, T.; Arnold, J.; Nowak, K.; Clark, M. P.
2017-12-01
Getting useful information from climate models is an ongoing problem that has plagued climate science and hydrologic prediction for decades. While it is possible to develop statistical corrections for climate models that mimic current climate almost perfectly, this does not necessarily guarantee that future changes are portrayed correctly. In contrast, convection permitting regional climate models (RCMs) have begun to provide an excellent representation of the regional climate system purely from first principles, providing greater confidence in their change signal. However, the computational cost of such RCMs prohibits the generation of ensembles of simulations or long time periods, thus limiting their applicability for hydrologic applications. Here we discuss a new approach combining statistical corrections with physical relationships for a modest computational cost. We have developed the Intermediate Complexity Atmospheric Research model (ICAR) to provide a climate and weather downscaling option that is based primarily on physics for a fraction of the computational requirements of a traditional regional climate model. ICAR also enables the incorporation of statistical adjustments directly within the model. We demonstrate that applying even simple corrections to precipitation while the model is running can improve the simulation of land atmosphere feedbacks in ICAR. For example, by incorporating statistical corrections earlier in the modeling chain, we permit the model physics to better represent the effect of mountain snowpack on air temperature changes.
NASA Astrophysics Data System (ADS)
Bergasa-Caceres, Fernando; Rabitz, Herschel A.
2013-06-01
A model of protein folding kinetics is applied to study the effects of macromolecular crowding on protein folding rate and stability. Macromolecular crowding is found to promote a decrease of the entropic cost of folding of proteins that produces an increase of both the stability and the folding rate. The acceleration of the folding rate due to macromolecular crowding is shown to be a topology-dependent effect. The model is applied to the folding dynamics of the murine prion protein (121-231). The differential effect of macromolecular crowding as a function of protein topology suffices to make non-native configurations relatively more accessible.
Forces between permanent magnets: experiments and model
NASA Astrophysics Data System (ADS)
González, Manuel I.
2017-03-01
This work describes a very simple, low-cost experimental setup designed for measuring the force between permanent magnets. The experiment consists of placing one of the magnets on a balance, attaching the other magnet to a vertical height gauge, aligning carefully both magnets and measuring the load on the balance as a function of the gauge reading. A theoretical model is proposed to compute the force, assuming uniform magnetisation and based on laws and techniques accessible to undergraduate students. A comparison between the model and the experimental results is made, and good agreement is found at all distances investigated. In particular, it is also found that the force behaves as r^-4 at large distances, as expected.
Optimization study for the experimental configuration of CMB-S4
NASA Astrophysics Data System (ADS)
Barron, Darcy; Chinone, Yuji; Kusaka, Akito; Borril, Julian; Errard, Josquin; Feeney, Stephen; Ferraro, Simone; Keskitalo, Reijo; Lee, Adrian T.; Roe, Natalie A.; Sherwin, Blake D.; Suzuki, Aritoki
2018-02-01
The CMB Stage 4 (CMB-S4) experiment is a next-generation, ground-based experiment that will measure the cosmic microwave background (CMB) polarization to unprecedented accuracy, probing the signature of inflation, the nature of cosmic neutrinos, relativistic thermal relics in the early universe, and the evolution of the universe. CMB-S4 will consist of O(500,000) photon-noise-limited detectors that cover a wide range of angular scales in order to probe the cosmological signatures from both the early and late universe. It will measure a wide range of microwave frequencies to cleanly separate the CMB signals from galactic and extra-galactic foregrounds. To advance the progress towards designing the instrument for CMB-S4, we have established a framework to optimize the instrumental configuration to maximize its scientific output. The framework combines cost and instrumental models with a cosmology forecasting tool, and evaluates the scientific sensitivity as a function of various instrumental parameters. The cost model also allows us to perform the analysis under a fixed-cost constraint, optimizing for the scientific output of the experiment given finite resources. In this paper, we report our first results from this framework, using simplified instrumental and cost models. We have primarily studied two classes of instrumental configurations: arrays of large-aperture telescopes with diameters ranging from 2–10 m, and hybrid arrays that combine small-aperture telescopes (0.5-m diameter) with large-aperture telescopes. We explore performance as a function of telescope aperture size, distribution of the detectors into different microwave frequencies, survey strategy and survey area, low-frequency noise performance, and balance between small and large aperture telescopes for hybrid configurations. Both types of configurations must cover both large (~ degree) and small (~ arcmin) angular scales, and the performance depends on assumptions for performance vs. angular scale. 
The configurations with large-aperture telescopes have a shallow optimum around 4–6 m in aperture diameter, assuming that large telescopes can achieve good performance for low-frequency noise. We explore some of the uncertainties of the instrumental model and cost parameters, and we find that the optimum has a weak dependence on these parameters. The hybrid configuration shows an even broader optimum, spanning a range of 4–10 m in aperture for the large telescopes. We also present two strawperson configurations as an outcome of this optimization study, and we discuss some ideas for improving the simple cost and instrumental models used here. There are several areas of this analysis that deserve further improvement. In our forecasting framework, we adopt a simple two-component foreground model with spatially varying power-law spectral indices. We estimate de-lensing performance statistically and ignore non-idealities such as anisotropic mode coverage, boundary effects, and possible foreground residuals. Instrumental systematics, which are not accounted for in our analyses, may also influence the conceptual design. Further study of the instrumental and cost models will be one of the main areas of study by the entire CMB-S4 community. We hope that our framework will be useful for estimating the influence of these improvements in the future, and we will incorporate them in order to further improve the optimization.
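The fixed-cost optimization described above can be illustrated with a toy grid search; the cost and sensitivity scalings below are invented stand-ins for the paper's instrument and cost models:

```python
# Toy fixed-budget instrument optimization: search over aperture and
# detector count, maximizing a made-up sensitivity metric subject to a
# cost cap. Scalings are illustrative assumptions only.
def cost(aperture_m, n_det):
    return 5.0 * aperture_m ** 2 + 1e-4 * n_det      # arbitrary units

def sensitivity(aperture_m, n_det):
    # resolution improves with aperture; noise falls as sqrt(n_det)
    return (aperture_m * n_det) ** 0.5

def optimize(budget, apertures, det_counts):
    """Return the feasible (aperture, n_det) pair with best sensitivity."""
    feasible = [(a, n) for a in apertures for n in det_counts
                if cost(a, n) <= budget]
    return max(feasible, key=lambda an: sensitivity(*an))
```

Sweeping the budget in such a framework traces out the kind of shallow optimum in aperture diameter that the abstract reports.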
Paul, Nicholas A; Svensson, Carl Johan; de Nys, Rocky; Steinberg, Peter D
2014-01-01
All of the theory and most of the data on the ecology and evolution of chemical defences derive from terrestrial plants, which have considerable capacity for internal movement of resources. In contrast, most macroalgae (seaweeds) have no or very limited capacity for resource translocation, meaning that trade-offs between growth and defence, for example, should be localised rather than systemic. This may change the predictions of chemical defence theories for seaweeds. We developed a model that mimicked the simple growth pattern of the red seaweed Asparagopsis armata, which is composed of repeating clusters of somatic cells and cells which contain deterrent secondary chemicals (gland cells). To do this we created a distinct growth curve for the somatic cells and another for the gland cells using empirical data. The somatic growth function was linked to the growth function for defence via differential equations modelling, which effectively generated a trade-off between growth and defence as these neighbouring cells develop. By treating growth and defence as separate functions we were also able to model a trade-off in growth of 2-3% under most circumstances. However, we found contrasting evidence for this trade-off in the empirical relationships between growth and defence, depending on the light level under which the alga was cultured. After developing a model that incorporated both branching and cell division rates, we formally demonstrated that positive correlations between growth and defence are predicted in many circumstances and also that allocation costs, if they exist, will be constrained by the intrinsic growth patterns of the seaweed. Growth patterns could therefore explain contrasting evidence for the cost of constitutive chemical defence in many studies, highlighting the need to consider the fundamental biology and ontogeny of organisms when assessing allocation theories for defence.
Estimating Colloidal Contact Model Parameters Using Quasi-Static Compression Simulations.
Bürger, Vincent; Briesen, Heiko
2016-10-05
For colloidal particles interacting in suspensions, clusters, or gels, contact models should attempt to include all physical phenomena experimentally observed. One critical point when formulating a contact model is to ensure that the interaction parameters can be easily obtained from experiments. Experimental determinations of contact parameters for particles either are based on bulk measurements for simulations on the macroscopic scale or require elaborate setups for obtaining tangential parameters such as using atomic force microscopy. However, on the colloidal scale, a simple method is required to obtain all interaction parameters simultaneously. This work demonstrates that quasi-static compression of a fractal-like particle network provides all the necessary information to obtain particle interaction parameters using a simple spring-based contact model. These springs provide resistances against all degrees of freedom associated with two-particle interactions, and include critical forces or moments where such springs break, indicating a bond-breakage event. A position-based cost function is introduced to show the identifiability of the two-particle contact parameters, and a discrete, nonlinear, and non-gradient-based global optimization method (simplex with simulated annealing, SIMPSA) is used to minimize the cost function calculated from deviations of particle positions. Results show that, in principle, all necessary contact parameters for an arbitrary particle network can be identified, although numerical efficiency as well as experimental noise must be addressed when applying this method. Such an approach lays the groundwork for identifying particle-contact parameters from a position-based particle analysis for a colloidal system using just one experiment. 
Spring constants also directly influence the time step of the discrete-element method, and a detailed knowledge of all necessary interaction parameters will help to improve the efficiency of colloidal particle simulations.
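The identification step can be sketched with a position-based least-squares cost and a bare-bones annealing loop standing in for SIMPSA; the one-parameter "network" below is deliberately trivial, and all values are invented:

```python
import math, random

def position_cost(params, observed, model):
    """Sum of squared deviations between modelled and observed positions."""
    return sum((model(params, i) - x) ** 2 for i, x in enumerate(observed))

def anneal(cost, x0, step=0.5, t0=1.0, cooling=0.995, iters=4000, seed=1):
    """Minimal simulated-annealing loop (a stand-in for SIMPSA):
    Gaussian proposals, Metropolis acceptance, geometric cooling."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = cost(cand)
        # Always accept improvements; accept worse moves with Boltzmann prob.
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest
```

In the real problem the parameter vector would hold all spring constants and critical forces of the contact model, and the cost would come from re-simulating the compressed particle network.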
Dynamic coal mine model. [Generic feedback-loop model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamilton, M.S.
1978-01-01
This study examines the determinants of the productive life cycle of a single hypothetical coal mine. The article addresses the questions of how long the mine will operate, what its annual production will be, and what percentage of the resource base will be recovered. As greatly expanded production requires capital investment, the investment decision is singled out as the principal determinant of the mine's dynamic behavior. A simple dynamic feedback loop model was constructed, the performance of which is compared with actual data to see how well the model can reproduce known behavior. Exogenous variables, such as the price of coal, the wage rate, operating costs, and the tax structure, are then changed to see how these changes affect the mine's performance.
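A feedback-loop model of this kind can be sketched as a minimal stock-flow simulation in which production depletes reserves and profits fund capacity investment; every parameter below is invented for illustration:

```python
# Minimal stock-flow sketch of a mine life cycle: the investment loop
# grows capacity while the depletion loop eventually chokes production.
def simulate(reserves=1000.0, capacity=10.0, years=60,
             price=20.0, op_cost=12.0, invest_frac=0.3):
    history = []
    for _ in range(years):
        output = min(capacity, reserves)   # cannot mine more than is left
        reserves -= output
        profit = (price - op_cost) * output
        capacity += invest_frac * profit / 100.0  # investment adds capacity
        capacity *= 0.98                          # equipment wears out
        history.append(output)
    return history, reserves
```

Changing the exogenous inputs (price, operating cost, investment fraction) shifts both the mine's lifetime and the fraction of the resource base recovered, which is the kind of sensitivity experiment the abstract describes.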
NASA Astrophysics Data System (ADS)
Errico, F.; Ichchou, M.; De Rosa, S.; Bareille, O.; Franco, F.
2018-06-01
The stochastic response of periodic flat and axial-symmetric structures, subjected to random and spatially-correlated loads, is here analysed through an approach based on the combination of a wave finite element and a transfer matrix method. Despite its lower computational cost, the present approach retains the accuracy of classic finite element methods. When dealing with homogeneous structures, the accuracy also extends to higher frequencies without increasing the time of calculation. Depending on the complexity of the structure and the frequency range, the computational cost can be reduced by more than two orders of magnitude. The presented methodology is validated for both simple and complex structural shapes, under deterministic and random loads.
Immersion frying for the thermal drying of sewage sludge: an economic assessment.
Peregrina, Carlos; Rudolph, Victor; Lecomte, Didier; Arlabosse, Patricia
2008-01-01
This paper presents an economic study of a novel thermal fry-drying technology which transforms sewage sludge and recycled cooking oil (RCO) into a solid fuel. The process is shown to have significant potential advantage in terms of capital costs (by factors of several times) and comparable operating costs. Three potential variants of the process have been simulated and costed in terms of both capital and operating requirements for a commercial scale of operation. The differences are in the energy recovery systems, which include a simple condensation of the evaporated water and two different heat pump configurations. Simple condensation provides the simplest process, but the energy efficiency gain of an open heat pump offsets this, making it economically somewhat more attractive. In terms of operating costs, current sludge dryers are dominated by maintenance and energy requirements, while for fry-drying these are comparatively small. Fry-drying running costs are dominated by provision of makeup waste oil. Cost reduction could focus on cheaper waste oil, e.g. from grease trap waste.
Wolfger, Barbara; Manns, Braden J; Barkema, Herman W; Schwartzkopf-Genswein, Karen S; Dorin, Craig; Orsel, Karin
2015-03-01
New technologies to identify diseased feedlot cattle in early stages of illness have been developed to reduce costs and welfare impacts associated with bovine respiratory disease (BRD). However, the economic value of early BRD detection has never been assessed. The objective was to simulate cost differences between two BRD detection methods during the first 61 d on feed (DOF) applied in moderate- to large-sized feedlots using an automated recording system (ARS) for feeding behavior and the current industry standard, pen-checking (visual appraisal confirmed by rectal temperature). Economic impact was assessed with a cost analysis in a simple decision model. Scenarios for Canadian and US feedlots with high- and low-risk cattle were modeled, and uncertainty was estimated using extensive sensitivity analyses. Input costs and probabilities were mainly extracted from publicly accessible market observations and a large-scale US feedlot study. In the baseline scenario, we modeled high-risk cattle with a treatment rate of 20% within the first 61 DOF in a feedlot of >8000 cattle in Canada. Early BRD detection was estimated to result in a relative risk of 0.60 in retreatment and 0.66 in mortality compared to pen-checking (based on previously published estimates). The additional cost of monitoring health with ARS was 13.68 Canadian dollars (CAD) per steer. Scenario analyses for similarly sized US feedlots and for low-risk cattle with a treatment rate of 8% were included to account for variability in costs and probabilities in various cattle populations. Considering the cost of monitoring, all relevant treatment costs and sale price, ARS was more costly than visual appraisal during the first 61 DOF by CAD 9.61 and CAD 9.69 per steer in Canada and the US, respectively. This cost difference increased in low-risk cattle in Canada to CAD 12.45.
Early BRD detection with ARS became less expensive if the costs for the system decreased to less than CAD 4.06/steer, or if the underlying true BRD incidence (not treatment rate) within the first 61 DOF exceeded 47%. The model was robust to variability in the remaining input variables. Some of the assumptions in the baseline analyses were conservative and may have underestimated the real value of early BRD detection. Systems such as ARS may reduce treatment costs in some scenarios, but the investment costs are currently too high to be cost-effective when used solely for BRD detection compared to pen-checking. Copyright © 2014 Elsevier B.V. All rights reserved.
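The structure of such a per-steer cost comparison in a simple decision model can be sketched as below; the monitoring cost, treatment rate, and relative risks are taken loosely from the abstract (CAD 13.68, 20%, 0.60 and 0.66), while the remaining prices are invented:

```python
# Expected per-steer cost of BRD management under one detection method.
def expected_cost(monitor_cost, treat_rate, treat_cost,
                  retreat_risk, retreat_cost, mort_risk, mort_loss):
    return (monitor_cost
            + treat_rate * (treat_cost
                            + retreat_risk * retreat_cost
                            + mort_risk * mort_loss))

# Baseline pen-checking vs ARS, which scales retreatment and mortality
# risks by the relative risks 0.60 and 0.66 but adds monitoring cost.
pen_checking = expected_cost(0.0, 0.20, 30.0, 0.25, 30.0, 0.02, 1500.0)
ars = expected_cost(13.68, 0.20, 30.0, 0.25 * 0.60, 30.0, 0.02 * 0.66, 1500.0)
extra = ars - pen_checking   # positive: ARS costs more per steer here
```

Varying the inputs in this structure is exactly the sensitivity analysis the study performs, e.g. finding the monitoring price at which `extra` crosses zero.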
Beyond Born-Mayer: Improved models for short-range repulsion in ab initio force fields
Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.; ...
2016-06-23
Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.
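The contrast between the two functional forms is easy to state in code. The polynomial prefactor below is the overlap form for two identical exponential densities, used here as an illustration of a "Slater-like" correction rather than the paper's exact expression; the A and b values are arbitrary:

```python
import math

# Born-Mayer: V(r) = A * exp(-b r), a bare exponential.
def born_mayer(r, A=1000.0, b=3.0):
    return A * math.exp(-b * r)

# Slater-like: the overlap of two exponential densities multiplies the
# exponential by a distance-dependent polynomial, softening the decay.
def slater_like(r, A=1000.0, b=3.0):
    br = b * r
    return A * (1.0 + br + br * br / 3.0) * math.exp(-br)
```

The ratio of the two grows with distance, which is one way of seeing why a bare exponential fitted at short range underestimates the repulsive wall further out.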
A simple way to improve AGN feedback prescription in SPH simulations
NASA Astrophysics Data System (ADS)
Zubovas, Kastytis; Bourne, Martin A.; Nayakshin, Sergei
2016-03-01
Active galactic nuclei (AGN) feedback is an important ingredient in galaxy evolution, however its treatment in numerical simulations is necessarily approximate, requiring subgrid prescriptions due to the dynamical range involved in the calculations. We present a suite of smoothed particle hydrodynamics simulations designed to showcase the importance of the choice of a particular subgrid prescription for AGN feedback. We concentrate on two approaches to treating wide-angle AGN outflows: thermal feedback, where thermal and kinetic energy is injected into the gas surrounding the supermassive black hole (SMBH) particle, and virtual particle feedback, where energy is carried by tracer particles radially away from the AGN. We show that the latter model produces a far more complex structure around the SMBH, which we argue is a more physically correct outcome. We suggest a simple improvement to the thermal feedback model - injecting the energy into a cone, rather than spherically symmetrically - and show that this markedly improves the agreement between the two prescriptions, without requiring any noticeable increase in the computational cost of the simulation.
A simple ion implanter for material modifications in agriculture and gemmology
NASA Astrophysics Data System (ADS)
Singkarat, S.; Wijaikhum, A.; Suwannakachorn, D.; Tippawan, U.; Intarasiri, S.; Bootkul, D.; Phanchaisri, B.; Techarung, J.; Rhodes, M. W.; Suwankosum, R.; Rattanarin, S.; Yu, L. D.
2015-12-01
In our efforts in developing ion beam technology for novel applications in biology and gemmology, an economic simple compact ion implanter especially for the purpose was constructed. The designing of the machine was aimed at providing our users with a simple, economic, user friendly, convenient and easily operateable ion implanter for ion implantation of biological living materials and gemstones for biotechnological applications and modification of gemstones, which would eventually contribute to the national agriculture, biomedicine and gem-industry developments. The machine was in a vertical setup so that the samples could be placed horizontally and even without fixing; in a non-mass-analyzing ion implanter style using mixed molecular and atomic nitrogen (N) ions so that material modifications could be more effective; equipped with a focusing/defocusing lens and an X-Y beam scanner so that a broad beam could be possible; and also equipped with a relatively small target chamber so that living biological samples could survive from the vacuum period during ion implantation. To save equipment materials and costs, most of the components of the machine were taken from decommissioned ion beam facilities. The maximum accelerating voltage of the accelerator was 100 kV, ideally necessary for crop mutation induction and gem modification by ion beams from our experience. N-ion implantation of local rice seeds and cut gemstones was carried out. Various phenotype changes of grown rice from the ion-implanted seeds and improvements in gemmological quality of the ion-bombarded gemstones were observed. The success in development of such a low-cost and simple-structured ion implanter provides developing countries with a model of utilizing our limited resources to develop novel accelerator-based technologies and applications.
Predicting the propagation of concentration and saturation fronts in fixed-bed filters.
Callery, O; Healy, M G
2017-10-15
The phenomenon of adsorption is widely exploited across a range of industries to remove contaminants from gases and liquids. Much recent research has focused on identifying low-cost adsorbents which have the potential to be used as alternatives to expensive industry standards like activated carbons. Evaluating these emerging adsorbents entails a considerable amount of labor-intensive and costly testing and analysis. This study proposes a simple, low-cost method to rapidly assess the suitability of novel media for use in large-scale adsorption filters. The filter media investigated in this study were low-cost adsorbents which have been found to be capable of removing dissolved phosphorus from solution, namely: i) aluminum drinking water treatment residual, and ii) crushed concrete. Data collected from multiple small-scale column tests were used to construct a model capable of describing and predicting the progression of adsorbent saturation and the associated effluent concentration breakthrough curves. This model was used to predict the performance of long-term, large-scale filter columns packed with the same media. The approach proved highly successful, and just 24-36 h of experimental data from the small-scale column experiments were found to provide sufficient information to predict the performance of the large-scale filters for up to three months. Copyright © 2017 Elsevier Ltd. All rights reserved.
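A common way to describe fixed-bed breakthrough, assumed here as a stand-in for the authors' fitted model, is the Thomas equation; all parameter values below are invented:

```python
import math

# Thomas model: C/C0 = 1 / (1 + exp((kT/Q) * (q0*m - C0*V))), where kT is
# the rate constant, Q the flow rate, q0 the adsorption capacity, m the
# adsorbent mass, C0 the influent concentration, and V the throughput.
def thomas_ct_c0(V, kT, Q, q0, m, C0):
    """Effluent/influent concentration ratio after throughput volume V."""
    return 1.0 / (1.0 + math.exp(kT / Q * (q0 * m - C0 * V)))
```

Fitting the few free parameters to short small-scale column runs, then evaluating the curve for large-scale column dimensions, is the kind of extrapolation the study performs.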
The Foot’s Arch and the Energetics of Human Locomotion
Stearne, Sarah M.; McDonald, Kirsty A.; Alderson, Jacqueline A.; North, Ian; Oxnard, Charles E.; Rubenson, Jonas
2016-01-01
The energy-sparing spring theory of the foot’s arch has become central to interpretations of the foot’s mechanical function and evolution. Using a novel insole technique that restricted compression of the foot’s longitudinal arch, this study provides the first direct evidence that arch compression/recoil during locomotion contributes to lowering energy cost. Restricting arch compression near maximally (~80%) during moderate-speed (2.7 m s−1) level running increased metabolic cost by +6.0% (p < 0.001, d = 0.67; unaffected by foot strike technique). A simple model shows that the metabolic energy saved by the arch is largely explained by the passive-elastic work it supplies that would otherwise be done by active muscle. Both experimental and model data confirm that it is the end-range of arch compression that dictates the energy-saving role of the arch. Restricting arch compression had no effect on the cost of walking or incline running (3°), commensurate with the smaller role of passive-elastic mechanics in these gaits. These findings substantiate the elastic energy-saving role of the longitudinal arch during running, and suggest that arch supports used in some footwear and orthotics may increase the cost of running. PMID:26783259
Kate, Rohit J.; Swartz, Ann M.; Welch, Whitney A.; Strath, Scott J.
2016-01-01
Wearable accelerometers can be used to objectively assess physical activity. However, the accuracy of this assessment depends on the underlying method used to process the time series data obtained from accelerometers. Several methods have been proposed that use this data to identify the type of physical activity and estimate its energy cost. Most of the newer methods employ some machine learning technique along with suitable features to represent the time series data. This paper experimentally compares several of these techniques and features on a large dataset of 146 subjects doing eight different physical activities wearing an accelerometer on the hip. Besides features based on statistics, distance-based features and simple discrete features taken straight from the time series were also evaluated. On the physical activity type identification task, the results show that using more features significantly improves performance. Choice of machine learning technique was also found to be important. However, on the energy cost estimation task, choice of features and machine learning technique were found to be less influential. On that task, separate energy cost estimation models trained specifically for each type of physical activity were found to be more accurate than a single model trained for all types of physical activities. PMID:26862679
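As an illustration of the kind of pipeline the paper compares (statistical features computed per accelerometer window, fed to a classifier), here is a minimal numpy-only sketch; a nearest-centroid rule stands in for the machine learning techniques evaluated, and the signals and class labels are synthetic.

```python
import numpy as np

def features(window):
    """Simple statistical features of one accelerometer window."""
    w = np.asarray(window, dtype=float)
    q75, q25 = np.percentile(w, [75, 25])
    return np.array([w.mean(), w.std(), w.min(), w.max(), q75 - q25])

def nearest_centroid(train_X, train_y, x):
    """Classify x by distance to per-class feature centroids."""
    labels = sorted(set(train_y))
    centroids = {l: np.mean([f for f, y in zip(train_X, train_y) if y == l],
                            axis=0) for l in labels}
    return min(labels, key=lambda l: np.linalg.norm(x - centroids[l]))

rng = np.random.default_rng(0)
walk = [rng.normal(0.0, 1.0, 100) for _ in range(10)]  # high-variance signal
sit = [rng.normal(0.0, 0.1, 100) for _ in range(10)]   # low-variance signal
X = [features(w) for w in walk + sit]
y = ["walk"] * 10 + ["sit"] * 10

# a fresh high-variance window should land near the "walk" centroid
pred = nearest_centroid(X, y, features(rng.normal(0.0, 1.0, 100)))
```

Real activity-type classifiers use many more features and stronger learners, but the window-to-features-to-label structure is the same.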
A Comprehensive and Cost-Effective Computer Infrastructure for K-12 Schools
NASA Technical Reports Server (NTRS)
Warren, G. P.; Seaton, J. M.
1996-01-01
Since 1993, NASA Langley Research Center has been developing and implementing a low-cost Internet connection model, including system architecture, training, and support, to provide Internet access for an entire network of computers. This infrastructure allows local area networks which exceed 50 machines per school to independently access the complete functionality of the Internet by connecting to a central site, using state-of-the-art commercial modem technology, through a single standard telephone line. By locating high-cost resources at this central site and sharing these resources and their costs among the school districts throughout a region, a practical, efficient, and affordable infrastructure for providing scalable Internet connectivity has been developed. As the demand for faster Internet access grows, the model has a simple expansion path that eliminates the need to replace major system components and re-train personnel. Observations of Internet usage within an educational environment, particularly school classrooms, have shown that after an initial period of 'surfing,' the Internet traffic becomes repetitive. By automatically storing requested Internet information on a high-capacity networked disk drive at the local site (network based disk caching), then updating this information only when it changes, well over 80 percent of the Internet traffic that leaves a location can be eliminated by retrieving the information from the local disk cache.
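The local disk-caching idea can be sketched as a tiny version-aware cache: repeat requests are served locally, and upstream fetches happen only on a miss or when the stored version is stale. The request pattern below is invented to mirror the repetitive classroom traffic the text describes.

```python
class DiskCache:
    """Minimal local network disk cache: serve repeats from the local store,
    fetching upstream only on a miss or a changed version."""

    def __init__(self):
        self.store = {}
        self.upstream_fetches = 0

    def get(self, url, version):
        if self.store.get(url) == version:
            return "local"          # cache hit: no outbound Internet traffic
        self.store[url] = version   # miss or stale: fetch upstream and store
        self.upstream_fetches += 1
        return "upstream"

cache = DiskCache()
requests = ["a", "b", "a", "a", "b", "a", "a", "b", "a", "a"]
hits = sum(cache.get(url, version=1) == "local" for url in requests)
hit_rate = hits / len(requests)  # 0.8 for this repetitive pattern
```

With only two distinct pages requested ten times, eight of the ten requests never leave the site, which is the effect the text reports for real classroom traffic.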
INCORPORATING ENVIRONMENTAL OUTCOMES INTO A HEALTH ECONOMIC MODEL.
Marsh, Kevin; Ganz, Michael; Nørtoft, Emil; Lund, Niels; Graff-Zivin, Joshua
2016-01-01
Traditional economic evaluations for most health technology assessments (HTAs) have previously not included environmental outcomes. With the growing interest in reducing the environmental impact of human activities, the need to consider how to include environmental outcomes into HTAs has increased. We present a simple method of doing so. We adapted an existing clinical-economic model to include environmental outcomes (carbon dioxide [CO2] emissions) to predict the consequences of adding insulin to an oral antidiabetic (OAD) regimen for patients with type 2 diabetes mellitus (T2DM) over 30 years, from the United Kingdom payer perspective. Epidemiological, efficacy, healthcare costs, utility, and carbon emissions data were derived from published literature. A scenario analysis was performed to explore the impact of parameter uncertainty. The addition of insulin to an OAD regimen increases costs by 2,668 British pounds per patient and is associated with 0.36 additional quality-adjusted life-years per patient. The insulin-OAD combination regimen generates more treatment and disease management-related CO2 emissions per patient (1,686 kg) than the OAD-only regimen (310 kg), but generates fewer emissions associated with treating complications (3,019 kg versus 3,337 kg). Overall, adding insulin to OAD therapy generates an extra 1,057 kg of CO2 emissions per patient over 30 years. The model offers a simple approach for incorporating environmental outcomes into health economic analyses, to support a decision-maker's objective of reducing the environmental impact of health care. Further work is required to improve the accuracy of the approach; in particular, the generation of resource-specific environmental impacts.
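The incremental arithmetic behind the headline figures can be reproduced directly from the abstract's numbers (the 1 kg difference from the reported 1,057 kg comes from rounding in the published inputs):

```python
# Per-patient figures taken from the abstract (UK payer, 30-year horizon)
co2_treatment = {"insulin_oad": 1686.0, "oad_only": 310.0}       # kg CO2
co2_complications = {"insulin_oad": 3019.0, "oad_only": 3337.0}  # kg CO2

def total_co2(arm):
    return co2_treatment[arm] + co2_complications[arm]

# net extra emissions from adding insulin: treatment emissions rise,
# complication-related emissions fall
extra_co2 = total_co2("insulin_oad") - total_co2("oad_only")  # ~1,058 kg

# conventional incremental cost per QALY from the same abstract
cost_per_qaly = 2668.0 / 0.36  # GBP per additional QALY, about 7,411
```

Framing the environmental outcome in the same incremental-difference structure as costs and QALYs is exactly what lets it slot into an existing health economic model.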
Predicting dietary intakes with simple food recall information: a case study from rural Mozambique.
Rose, D; Tschirley, D
2003-10-01
Improving dietary status is an important development objective, but monitoring of progress in this area can be too costly for many low-income countries. This paper demonstrates a simple, inexpensive technique for monitoring household diets in Mozambique. Secondary analysis of data from an intensive field survey on household food consumption and agricultural practices, known as the Nampula/Cabo Delgado Study (NCD). In total, 388 households in 16 villages from a stratified random sample of rural areas in Nampula and Cabo Delgado provinces in northern Mozambique. The NCD employed a quantitative 24-h food recall on two nonconsecutive days in each of three different seasons. A dietary intake prediction model was developed with linear regression techniques based on NCD nutrient intake data and easy-to-collect variables, such as food group consumption and household size. The model was used to predict the prevalence of low intakes among subsamples from the field study using only easy-to-collect variables. Using empirical data for the harvest season from the original NCD study, 40% of households had low energy intakes, whereas rates of low intake for protein, vitamin A, and iron were 14, 94, and 39%, respectively. The model developed here predicted that 42% would have low energy intakes and that 12, 93, and 35% would have low protein, vitamin A, and iron intakes, respectively. Similarly close predictions were found using an aggregate index of overall diet quality. This work demonstrates the potential for using low-cost methods for monitoring dietary intake in Mozambique.
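The paper's actual regression is not given in the abstract, so the following numpy sketch only illustrates the method in miniature: regress intake on easy-to-collect predictors (food-group count, household size), then estimate the prevalence of low intakes from the predictions. All data, coefficients, and the cutoff are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 388  # households, matching the NCD sample size
household_size = rng.integers(1, 9, n).astype(float)
food_groups = rng.integers(1, 7, n).astype(float)  # groups eaten in 24 h
# synthetic per-capita energy intake (kcal); coefficients are hypothetical
energy = (900.0 + 180.0 * food_groups - 40.0 * household_size
          + rng.normal(0.0, 150.0, n))

# fit the prediction model on the easy-to-collect variables only
X = np.column_stack([np.ones(n), food_groups, household_size])
beta, *_ = np.linalg.lstsq(X, energy, rcond=None)

# compare observed vs model-predicted prevalence of low intake
threshold = 1500.0  # hypothetical cutoff for "low" energy intake
prev_obs = float(np.mean(energy < threshold))
prev_pred = float(np.mean(X @ beta < threshold))
```

The quantity of interest for monitoring is the prevalence, not individual intakes, which is why a coarse regression on cheap predictors can still be useful.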
The impact of Pulpwood Rail Freight Costs on the Minnesota-Wisconsin Pulpwood Market
David C. Lothner
1976-01-01
Transportation costs affect the marketing and utilization of pulpwood. Their impact on the procurement and utilization of pulpwood often proves difficult to measure because it is hard to derive an average annual measure of transportation cost. This note, by means of a simple index method for measuring regional interstate pulpwood rail freight costs, illustrates...
NASA Astrophysics Data System (ADS)
Strassmann, Kuno M.; Joos, Fortunat
2018-05-01
The Bern Simple Climate Model (BernSCM) is a free open-source re-implementation of a reduced-form carbon cycle-climate model which has been used widely in previous scientific work and IPCC assessments. BernSCM represents the carbon cycle and climate system with a small set of equations for the heat and carbon budget, the parametrization of major nonlinearities, and the substitution of complex component systems with impulse response functions (IRFs). The IRF approach allows cost-efficient yet accurate substitution of detailed parent models of climate system components with near-linear behavior. Illustrative simulations of scenarios from previous multimodel studies show that BernSCM is broadly representative of the range of the climate-carbon cycle response simulated by more complex and detailed models. Model code (in Fortran) was written from scratch with transparency and extensibility in mind, and is provided open source. BernSCM makes scientifically sound carbon cycle-climate modeling available for many applications. Supporting up to decadal time steps with high accuracy, it is suitable for studies with high computational load and for coupling with integrated assessment models (IAMs), for example. Further applications include climate risk assessment in a business, public, or educational context and the estimation of CO2 and climate benefits of emission mitigation options.
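The IRF idea can be illustrated with a carbon-cycle impulse response of the Bern type (a constant term plus decaying exponentials); the coefficients below are as commonly quoted from IPCC AR4, but BernSCM itself is more elaborate, so treat this Python sketch as conceptual only.

```python
import numpy as np

# Fraction of a CO2 pulse remaining airborne after t years.
A = [0.217, 0.259, 0.338, 0.186]
TAU = [np.inf, 172.9, 18.51, 1.186]  # years; the first term never decays

def irf(t):
    t = np.asarray(t, dtype=float)
    return sum(a * np.exp(-t / tau) for a, tau in zip(A, TAU))

def airborne(emissions, dt=1.0):
    """Airborne carbon perturbation (GtC) from an annual emission series,
    by discrete convolution with the impulse response."""
    n = len(emissions)
    t = np.arange(n) * dt
    return np.convolve(emissions, irf(t))[:n]

pulse = np.zeros(100)
pulse[0] = 100.0               # a one-off 100 GtC emission pulse
remaining = airborne(pulse)    # roughly 43 GtC still airborne after 50 years
```

The convolution substitutes for running the full parent ocean/land models, which is what makes the IRF approach cheap enough for integrated assessment work.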
Invisible Cost Effective Mechanics for Anterior Space Closure.
Jumle, Aatish Vinod; Bagrecha, Saurabh; Gharat, Ninad; Misal, Abhijit; Toshniwal, N G
2015-01-01
The shifting paradigm towards invisible orthodontic treatment, together with growing patient awareness, has drawn attention to the most esthetic treatment approaches. Lingual treatment has proved successful and is very well accepted by patients. The problem that persists is its high cost, which not all patients can afford. This article is an effort to treat a simple Class I malocclusion with anterior spacing using a simple, esthetic, cost-effective approach with acceptable results where esthetics plays a priority role.
A novel 360-degree shape measurement using a simple setup with two mirrors and a laser MEMS scanner
NASA Astrophysics Data System (ADS)
Jin, Rui; Zhou, Xiang; Yang, Tao; Li, Dong; Wang, Chao
2017-09-01
There is no denying that 360-degree shape measurement technology plays an important role in the field of three-dimensional optical metrology. Traditional optical 360-degree shape measurement methods are mainly of two kinds: the first achieves 360-degree measurements by placing multiple scanners; the second obtains a 360-degree shape model through a high-precision rotating device. The former increases the number of scanners and is costly, while the latter's rotating device makes it time consuming. This paper presents a low-cost and fast optical 360-degree shape measurement method, which possesses the advantages of being fully static, fast and inexpensive. The measuring system consists of two mirrors at a certain angle, a laser projection system, a stereoscopic calibration block, and two cameras. Most importantly, the laser MEMS scanner achieves precise movement of the laser stripes without any movement mechanism, improving measurement accuracy and efficiency. Furthermore, a novel stereo calibration technology presented in this paper can achieve point cloud registration and thereby produce the 360-degree model of objects. A stereoscopic calibration block with special coded patterns on six sides is used in this novel stereo calibration method, through which the 360-degree models of objects can be obtained quickly.
When could a stigma program to address mental illness in the workplace break even?
Dewa, Carolyn S; Hoch, Jeffrey S
2014-10-01
To explore basic requirements for a stigma program to produce sufficient savings to pay for itself (that is, break even). A simple economic model was developed to compare reductions in total short-term disability (SDIS) cost relative to a stigma program's costs. A 2-way sensitivity analysis is used to illustrate conditions under which this break-even scenario occurs. Using estimates from the literature for the SDIS costs, this analysis shows that a stigma program can provide value added even if there is no reduction in the length of an SDIS leave. To break even, a stigma program with no reduction in the length of an SDIS leave would need to prevent at least 2.5 SDIS claims in an organization of 1000 workers. Similarly, a stigma program can break even with no reduction in the number of SDIS claims if it is able to reduce SDIS episodes by at least 7 days in an organization of 1000 employees. Modelling results, such as those presented in our paper, provide information to help occupational health payers become prudent buyers in the mental health market place. While in most cases, the required reductions seem modest, the real test of both the model and the program occurs once a stigma program is piloted and evaluated in a real-world setting.
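The break-even conditions reduce to dividing program cost by the value of an avoided SDIS claim or an avoided SDIS day. The sketch below uses hypothetical cost inputs chosen so the outputs line up with the thresholds reported in the abstract (2.5 claims, or about 7 days per episode, per 1,000 workers):

```python
def break_even_claims(program_cost, cost_per_claim):
    """SDIS claims that must be prevented for savings to equal program cost
    (assuming no change in leave length)."""
    return program_cost / cost_per_claim

def break_even_days(program_cost, annual_claims, cost_per_day):
    """Days each SDIS episode must shorten for savings to equal program cost
    (assuming no change in the number of claims)."""
    return program_cost / (annual_claims * cost_per_day)

# hypothetical inputs for a 1,000-worker organization
claims_needed = break_even_claims(program_cost=25_000.0,
                                  cost_per_claim=10_000.0)   # 2.5 claims
days_needed = break_even_days(program_cost=25_000.0, annual_claims=18,
                              cost_per_day=200.0)  # ~6.9, i.e. about 7 days
```

The same two-way structure underlies the sensitivity analysis described in the abstract: vary program cost and per-claim or per-day SDIS cost, and read off the break-even frontier.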
Time- and cost-saving apparatus for analytical sample filtration
William R. Kenealy; Joseph C. Destree
2005-01-01
Simple and cost-effective protocols were developed for removing particulates from samples prior to analysis by high performance liquid chromatography and gas chromatography. A filter and vial holder were developed for use with a 96-well filtration plate. The device saves preparation time and costs.
Kapoor, Ritika; Martinez-Vega, Rosario; Dong, Di; Tan, Sharlene Yanying; Leo, Yee-Sin; Lee, Cheng-Chuan; Sung, Cynthia; Ng, Oon-Tek; Archuleta, Sophia; Teo, Yik-Ying
2015-02-01
Abacavir (ABC) is one of the more affordable antiretroviral drugs used for controlling HIV. Although similar in efficacy to current first-line drugs, its limited usage in Singapore can be attributed to its possible side effect of adverse hypersensitivity reactions (HSRs). HLA-B*5701 genotyping is a clinically relevant procedure for avoiding abacavir-induced HSRs. As patients who do not carry the risk allele are unlikely to develop HSRs, a simple rule can be developed to allow abacavir prescription for patients who are B*5701 negative. Here, we carry out a cost-effectiveness analysis of HLA-B*5701 genotyping before abacavir prescription in the context of the Singapore healthcare system, which caters predominantly to Han Chinese, Southeast Asian Malays, and South Asian Indians. In addition, we aim to identify the most cost-effective treatment regimen for HIV patients. A decision tree model was developed in TreeAge. The model considers medical treatment and genotyping costs, genotyping test characteristics, the prevalence of the risk allele, reduction in the quality of life, and increased expenditure due to side effects and other factors, evaluating independently over early-stage and late-stage HIV patients segmented by drug contraindications. The study indicates that genotyping is not cost-effective for any ethnicity irrespective of the disease stage, except for Indian patients with early-stage HIV who are contraindicated to tenofovir. Abacavir (as first-line) without genotyping is the cheapest and most cost-effective treatment for all ethnicities except for early-stage Indian HIV patients contraindicated to tenofovir. The HLA-B*5701 frequency, the mortality rate from abacavir-induced HSRs, and genotyping costs are among the major factors influencing the cost-effectiveness.
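A stripped-down version of such a decision tree, with a perfect test and entirely hypothetical costs, shows how a rare risk allele can make test-everyone more expensive than treat-all, consistent with the abstract's finding for low-frequency populations:

```python
def cost_treat_all(p_allele, c_abacavir, c_hsr):
    """Everyone starts abacavir; allele carriers incur HSR-management cost."""
    return c_abacavir + p_allele * c_hsr

def cost_genotype_first(p_allele, c_test, c_abacavir, c_alternative):
    """Test everyone (assumed perfectly sensitive and specific);
    carriers are switched to the alternative drug, so no HSRs occur."""
    return c_test + p_allele * c_alternative + (1.0 - p_allele) * c_abacavir

p = 0.01  # hypothetical HLA-B*5701 carrier frequency (rare, as in Han Chinese)
treat_all = cost_treat_all(p, c_abacavir=1200.0, c_hsr=3000.0)
genotype = cost_genotype_first(p, c_test=120.0, c_abacavir=1200.0,
                               c_alternative=2000.0)
# treat_all < genotype here: with a rare allele, universal testing and a
# pricier alternative drug cost more than the few HSRs they prevent
```

The real model weighs QALY losses and HSR mortality as well, but the expected-cost branches combine in exactly this way.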
Psota, Marek; Psenkova, Maria Bucek; Racekova, Natalia; Ramirez de Arellano, Antonio; Vandebrouck, Tom; Hunt, Barnaby
2017-01-01
Aims: To investigate the cost-effectiveness of once-daily insulin degludec/liraglutide (IDegLira) versus basal-bolus therapy in patients with type 2 diabetes not meeting glycemic targets on basal insulin from a healthcare payer perspective in Slovakia. Methods: Long-term clinical and economic outcomes for patients receiving IDegLira and basal-bolus therapy were estimated using the IMS CORE Diabetes Model based on a published pooled analysis of patient-level data. Results: IDegLira was associated with an improvement in quality-adjusted life expectancy of 0.29 quality-adjusted life years (QALYs) compared with basal-bolus therapy. The average lifetime cost per patient in the IDegLira arm was EUR 2,449 higher than in the basal-bolus therapy arm. Increased treatment costs with IDegLira were partially offset by cost savings from avoided diabetes-related complications. IDegLira was highly cost-effective versus basal-bolus therapy with an incremental cost-effectiveness ratio of EUR 8,590 per QALY gained, which is well below the cost-effectiveness threshold set by the law in Slovakia. Conclusion: IDegLira is cost-effective in Slovakia, providing a simple option for intensification of basal insulin therapy without increasing the risk of hypoglycemia or weight gain and with fewer daily injections than a basal-bolus regimen. PMID:29276398
A Simple and Reliable Method of Design for Standalone Photovoltaic Systems
NASA Astrophysics Data System (ADS)
Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.
2017-06-01
Standalone photovoltaic (SAPV) systems are seen as a promising method of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple and reliable and yields good performance over the system's lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at the different array sizes (areas), performance curves are obtained for optimal design of the SAPV system with a high degree of reliability in terms of autonomy at a specified value of loss of load probability (LOLP). Based on the array to load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data and is more reliable when compared with conventional design using monthly average daily load and insolation.
Hadorn, Daniela C; Racloz, Vanessa; Schwermer, Heinzpeter; Stärk, Katharina D C
2009-01-01
Vector-borne diseases pose a special challenge to veterinary authorities due to complex and time-consuming surveillance programs taking into account vector habitat. Using stochastic scenario tree modelling, each possible surveillance activity of a future surveillance system can be evaluated with regard to its sensitivity and the expected cost. The overall sensitivity of various potential surveillance systems, composed of different combinations of surveillance activities, is calculated and the proposed surveillance system is optimized with respect to the considered surveillance activities, the sensitivity and the cost. The objective of this project was to use stochastic scenario tree modelling in combination with a simple cost analysis in order to develop the national surveillance system for Bluetongue in Switzerland. This surveillance system was established due to the emerging outbreak of Bluetongue virus serotype 8 (BTV-8) in Northern Europe in 2006. Based on the modelling results, it was decided to implement an improved passive clinical surveillance in cattle and sheep through campaigns in order to increase disease awareness alongside a targeted bulk milk testing strategy in 200 dairy cattle herds located in high-risk areas. The estimated median probability of detection of cases (i.e. sensitivity) of the surveillance system in this combined approach was 96.4%. The evaluation of the prospective national surveillance system predicted that passive clinical surveillance in cattle would provide the highest probability to detect BTV-8 infected animals, followed by passive clinical surveillance in sheep and bulk milk testing of 200 dairy cattle farms in high-risk areas. This approach is also applicable in other countries and to other epidemic diseases.
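Under the usual independence simplification, a scenario tree's overall system sensitivity is one minus the probability that every component misses. The component sensitivities below (clinical surveillance in cattle, clinical surveillance in sheep, bulk milk testing) are hypothetical, chosen to reproduce the reported 96.4% figure:

```python
def system_sensitivity(component_sensitivities):
    """Probability that at least one surveillance component detects an
    incursion, assuming the components operate independently."""
    p_all_miss = 1.0
    for se in component_sensitivities:
        p_all_miss *= 1.0 - se
    return 1.0 - p_all_miss

# hypothetical per-component sensitivities, in the order the abstract
# ranks them: cattle clinical > sheep clinical > bulk milk testing
overall = system_sensitivity([0.90, 0.55, 0.20])  # 0.964
```

This composition rule is also what makes the cost trade-off transparent: each candidate activity's marginal contribution to overall sensitivity can be weighed directly against its cost.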
Natural wind variability triggered drop in German redispatch volume and costs from 2015 to 2016
Reyers, Mark; Märker, Carolin; Witthaut, Dirk
2018-01-01
Avoiding dangerous climate change necessitates the decarbonization of electricity systems within the next few decades. In Germany, this decarbonization is based on an increased exploitation of variable renewable electricity sources such as wind and solar power. While system security has remained constantly high, the integration of renewables causes additional costs. In 2015, the costs of grid management reached an all-time high of about €1 billion. Despite the addition of renewable capacity, these costs dropped substantially in 2016. We thus investigate the effect of natural climate variability on grid management costs in this study. Focusing on redispatch as a main cost driver, we show that the decline was triggered by natural wind variability. In particular, we find that 2016 was a weak year in terms of wind generation averages and the occurrence of westerly circulation weather types. Moreover, we show that a simple model based on the wind generation time series is skillful in detecting redispatch events on timescales of weeks and beyond. As a consequence, alterations in annual redispatch costs in the order of hundreds of millions of euros need to be understood and communicated as a normal feature of the current system due to natural wind variability. PMID:29329349
Cost-effectiveness of surgery in low- and middle-income countries: a systematic review.
Grimes, Caris E; Henry, Jaymie Ang; Maraka, Jane; Mkandawire, Nyengo C; Cotton, Michael
2014-01-01
There is increasing interest in provision of essential surgical care as part of public health policy in low- and middle-income countries (LMIC). Relatively simple interventions have been shown to prevent death and disability. We reviewed the published literature to examine the cost-effectiveness of simple surgical interventions which could be made available at any district hospital, and compared these to standard public health interventions. PubMed and EMBASE were searched using single and combinations of the search terms "disability adjusted life year" (DALY), "quality adjusted life year," "cost-effectiveness," and "surgery." Articles were included if they detailed the cost-effectiveness of a surgical intervention of relevance to a LMIC, which could be made available at any district hospital. Suitable articles with both cost and effectiveness data were identified and, where possible, data were extrapolated to enable comparison across studies. Twenty-seven articles met our inclusion criteria, representing 64 LMIC over 16 years of study. Interventions that were found to be cost-effective included cataract surgery (cost/DALY averted range US$5.06-$106.00), elective inguinal hernia repair (cost/DALY averted range US$12.88-$78.18), male circumcision (cost/DALY averted range US$7.38-$319.29), emergency cesarean section (cost/DALY averted range US$18-$3,462.00), and cleft lip and palate repair (cost/DALY averted range US$15.44-$96.04). A small district hospital with basic surgical services was also found to be highly cost-effective (cost/DALY averted US$0.93), as were larger hospitals offering emergency and trauma surgery (cost/DALY averted US$32.78-$223.00). This compares favorably with other standard public health interventions, such as oral rehydration therapy (US$1,062.00), vitamin A supplementation (US$6.00-$12.00), breast feeding promotion (US$930.00), and highly active anti-retroviral therapy for HIV (US$922.00).
Simple surgical interventions that are life-saving and disability-preventing should be considered as part of public health policy in LMIC. We recommend an investment in surgical care and its integration with other public health measures at the district hospital level, rather than investment in single disease strategies.
Marty, Rémi; Roze, Stéphane; Kurth, Hannah
2012-01-01
Long-acting somatostatin receptor ligands (SRL) with product-specific formulation and means of administration are injected periodically in patients with acromegaly and neuroendocrine tumors. A simple decision-tree model was developed to compare cost savings with ready-to-use Somatuline Autogel(®) (lanreotide) and Sandostatin LAR(®) (octreotide) for the UK, France, and Germany. The drivers of cost savings studied were the reduction of time to administer as well as a reduced baseline risk of clogging during product administration reported for Somatuline Autogel(®). The decision-tree model assumed two settings for SRL administration, i.e., by either hospital-based or community-based nurses. In the case of clogging, the first dose was assumed to be lost and a second injection performed. Successful injection depended on the probability of clogging. Direct medical costs were included. A set of scenarios were run, varying the cost drivers, such as the baseline risk of clogging, SRL administration time, and percentage of patients injected during a hospital stay. Costs per successful injection were less for Somatuline Autogel(®)/Depot, ranging from Euros (EUR) 13-45, EUR 52-108, and EUR 127-151, respectively, for France, Germany, and the UK. The prices for both long-acting SRL were the same in France, so cost savings there came entirely from differences other than drug prices. For Germany and the UK, the proportion of savings due to less clogging and shorter administration time was estimated to be around 32% and 20%, respectively. Based on low and high country-specific patient cohort size estimations of individuals eligible for SRL treatment among the patient population with acromegaly and neuroendocrine tumors, annual savings were estimated to be up to EUR 2,000,000 for France, EUR 6,000,000 for Germany, and EUR 7,000,000 for the UK.
This model suggests that increasing usage of the Somatuline device for injection of SRL might lead to substantial savings for health care providers across Europe.
NASA Astrophysics Data System (ADS)
Reynerson, Charles Martin
This research has been performed to create concept design and economic feasibility data for space business parks. A space business park is a commercially run multi-use space station facility designed for use by a wide variety of customers. Both space hardware and crew are considered as revenue producing payloads. Examples of commercial markets may include biological and materials research, processing, and production, space tourism habitats, and satellite maintenance and resupply depots. This research develops a design methodology and an analytical tool to create feasible preliminary design information for space business parks. The design tool is validated against a number of real facility designs. Appropriate model variables are adjusted to ensure that statistical approximations are valid for subsequent analyses. The tool is used to analyze the effect of various payload requirements on the size, weight and power of the facility. The approach for the analytical tool was to input potential payloads as simple requirements, such as volume, weight, power, crew size, and endurance. In creating the theory, basic principles are used and combined with parametric estimation of data when necessary. Key system parameters are identified for overall system design. Typical ranges for these key parameters are identified based on real human spaceflight systems. To connect the economics to design, a life-cycle cost model is created based upon facility mass. This rough cost model estimates potential return on investments, initial investment requirements and number of years to return on the initial investment. Example cases are analyzed for both performance and cost driven requirements for space hotels, microgravity processing facilities, and multi-use facilities. In combining both engineering and economic models, a design-to-cost methodology is created for more accurately estimating the commercial viability for multiple space business park markets.
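The mass-based life-cycle cost idea described above reduces to a simple payback calculation; every figure in this sketch (cost per kilogram to build and launch, annual revenues, operating costs) is hypothetical:

```python
def years_to_return(facility_mass_kg, cost_per_kg, annual_revenue, annual_opex):
    """Years until cumulative net revenue repays the mass-based initial
    investment; returns infinity if the facility never breaks even."""
    initial_investment = facility_mass_kg * cost_per_kg
    net_annual = annual_revenue - annual_opex
    if net_annual <= 0.0:
        return float("inf")
    return initial_investment / net_annual

# hypothetical figures for a small space business park
years = years_to_return(facility_mass_kg=100_000.0, cost_per_kg=20_000.0,
                        annual_revenue=450e6, annual_opex=200e6)  # 8.0 years
```

Because the initial investment scales with facility mass, trimming payload-driven mass requirements feeds directly into a shorter payback period, which is the link the design-to-cost methodology exploits.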
Broadband moth-eye antireflection coatings on silicon
NASA Astrophysics Data System (ADS)
Sun, Chih-Hung; Jiang, Peng; Jiang, Bin
2008-02-01
We report a bioinspired templating technique for fabricating broadband antireflection coatings that mimic antireflective moth eyes. Wafer-scale, subwavelength-structured nipple arrays are directly patterned on silicon using spin-coated silica colloidal monolayers as etching masks. The templated gratings exhibit excellent broadband antireflection properties and the normal-incidence specular reflection matches with the theoretical prediction using a rigorous coupled-wave analysis (RCWA) model. We further demonstrate that two common simulation methods, RCWA and thin-film multilayer models, generate almost identical prediction for the templated nipple arrays. This simple bottom-up technique is compatible with standard microfabrication, promising for reducing the manufacturing cost of crystalline silicon solar cells.
7 CFR 4288.21 - Application review and scoring.
Code of Federal Regulations, 2013 CFR
2013-01-01
... projects based on the cost, cost-effectiveness, and capacity of projects to reduce fossil fuels. The cost... economically produce energy from renewable biomass to replace its dependence on fossil fuels. Projects with... projects on simple payback as well as the percentage of fossil fuel reduction. (a) Review. The Agency will...
7 CFR 4288.21 - Application review and scoring.
Code of Federal Regulations, 2014 CFR
2014-01-01
... projects based on the cost, cost-effectiveness, and capacity of projects to reduce fossil fuels. The cost... economically produce energy from renewable biomass to replace its dependence on fossil fuels. Projects with... projects on simple payback as well as the percentage of fossil fuel reduction. (a) Review. The Agency will...
7 CFR 4288.21 - Application review and scoring.
Code of Federal Regulations, 2012 CFR
2012-01-01
... projects based on the cost, cost-effectiveness, and capacity of projects to reduce fossil fuels. The cost... economically produce energy from renewable biomass to replace its dependence on fossil fuels. Projects with... projects on simple payback as well as the percentage of fossil fuel reduction. (a) Review. The Agency will...
24 CFR 200.97 - Adjustments resulting from cost certification.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Adjustments resulting from cost certification. 200.97 Section 200.97 Housing and Urban Development Regulations Relating to Housing and Urban... Adjustments resulting from cost certification. (a) Fee simple site. Upon receipt of the mortgagor's...
49 CFR 92.35 - Interest, penalties and administrative costs.
Code of Federal Regulations, 2010 CFR
2010-10-01
... accrue until payment is received. Interest shall be calculated only on the principal of the debt (simple... 49 Transportation 1 2010-10-01 2010-10-01 false Interest, penalties and administrative costs. 92... UNITED STATES BY SALARY OFFSET § 92.35 Interest, penalties and administrative costs. (a) Where a DOT...
Adapting Gel Wax into an Ultrasound-Guided Pericardiocentesis Model at Low Cost
Daly, Robert; Planas, Jason H.; Edens, Mary Ann
2017-01-01
Cardiac tamponade is a life-threatening emergency for which pericardiocentesis may be required. Real-time bedside ultrasound has obviated the need for routine blind procedures in cardiac arrest, and the number of pericardiocenteses being performed has declined. Despite this fact, pericardiocentesis remains an essential skill in emergency medicine. While commercially available training models exist, cost, durability, and lack of anatomical landmarks limit their usefulness. We sought to create a pericardiocentesis model that is realistic, simple to build, reusable, and cost-efficient. We constructed the model using a red dye-filled ping pong ball (simulating the right ventricle) and a 250cc normal saline bag (simulating the effusion) encased in an artificial rib cage and held in place by gel wax. The inner saline bag was connected to a 1L saline bag outside of the main assembly to act as a fluid reservoir for repeat uses. The entire construction process takes approximately 16–20 hours, most of which is attributed to cooling of the gel wax. Actual construction time is approximately four hours at a cost of less than $200. The model was introduced to emergency medicine residents and medical students during a procedure simulation lab and compared to a model previously described by dell'Orto [1]. The learners performed ultrasound-guided pericardiocentesis using both models. Learners who completed a survey comparing the realism of the two models felt our model was more realistic than the previously described model. On a scale of 1–9, with 9 being very realistic, the previous model was rated 4.5 and our model 7.8. There was also a marked improvement in the perceived recognition of the pericardium, the heart, and the pericardial sac. Additionally, 100% of the students were successful at performing the procedure using our model. In simulation, our model provided both palpable and ultrasound landmarks and held up to several months of repeated use.
It was less expensive than commercial models ($200 vs up to $16,500) while being more realistic in simulation than other described “do-it-yourself models.” This model can be easily replicated to teach the necessary skill of pericardiocentesis. PMID:28116020
Zhao, Jianshi; Cai, Ximing; Wang, Zhongjing
2013-07-15
Water allocation can be undertaken through administered systems (AS), market-based systems (MS), or a combination of the two. The debate on the performance of the two systems has lasted for decades but still calls for attention in both research and practice. This paper compares water users' behavior under AS and MS through a consistent agent-based modeling framework for water allocation analysis that incorporates variables particular to both MS (e.g., water trade and trading prices) and AS (water use violations and penalties/subsidies). Analogous to the economic theory of water markets under MS, the theory of rational violation justifies the exchange of entitled water under AS through the use of cross-subsidies. Under water stress conditions, a unique water allocation equilibrium can be achieved by following a simple bargaining rule that does not depend upon initial market prices under MS, or initial economic incentives under AS. The modeling analysis shows that the behavior of water users (agents) depends on transaction, or administrative, costs, as well as their autonomy. Reducing transaction costs under MS or administrative costs under AS will mitigate the effect that equity constraints (originating with primary water allocation) have on the system's total net economic benefits. Moreover, hydrologic uncertainty is shown to increase market prices under MS and penalties/subsidies under AS and, in most cases, also increases transaction, or administrative, costs. Copyright © 2013 Elsevier Ltd. All rights reserved.
Simplified Life-Cycle Cost Estimation
NASA Technical Reports Server (NTRS)
Remer, D. S.; Lorden, G.; Eisenberger, I.
1983-01-01
Simple method for life-cycle cost (LCC) estimation avoids pitfalls inherent in formulations requiring separate estimates of inflation and interest rates. The method depends for its validity on the observation that interest and inflation rates closely track each other.
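The observation above can be illustrated with a short sketch (hypothetical Python, not the NTRS method itself; the rates and costs are made-up figures): when the discount (interest) rate and the inflation rate track each other, their effects cancel term by term, and the discounted life-cycle cost collapses to a simple sum of constant-dollar annual costs.

```python
def lcc_full(annual_cost_today, years, interest, inflation):
    """Discount each year's inflated cost back to present value."""
    return sum(
        annual_cost_today * (1 + inflation) ** t / (1 + interest) ** t
        for t in range(1, years + 1)
    )

def lcc_simple(annual_cost_today, years):
    """Constant-dollar shortcut: just sum today's costs."""
    return annual_cost_today * years

# With matched rates the two estimates agree exactly; a small gap between
# the rates produces only a small error in the shortcut.
full = lcc_full(1000.0, 20, interest=0.07, inflation=0.07)
simple = lcc_simple(1000.0, 20)
```

Separate interest and inflation forecasts enter only through their gap, which is why errors in either individual forecast matter far less than the difference between them.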
Hierarchical Control Using Networks Trained with Higher-Level Forward Models
Wayne, Greg; Abbott, L.F.
2015-01-01
We propose and develop a hierarchical approach to network control of complex tasks. In this approach, a low-level controller directs the activity of a “plant,” the system that performs the task. However, the low-level controller may only be able to solve fairly simple problems involving the plant. To accomplish more complex tasks, we introduce a higher-level controller that controls the lower-level controller. We use this system to direct an articulated truck to a specified location through an environment filled with static or moving obstacles. The final system consists of networks that have memorized associations between the sensory data they receive and the commands they issue. These networks are trained on a set of optimal associations that are generated by minimizing cost functions. Cost function minimization requires predicting the consequences of sequences of commands, which is achieved by constructing forward models, including a model of the lower-level controller. The forward models and cost minimization are only used during training, allowing the trained networks to respond rapidly. In general, the hierarchical approach can be extended to larger numbers of levels, dividing complex tasks into more manageable sub-tasks. The optimization procedure and the construction of the forward models and controllers can be performed in similar ways at each level of the hierarchy, which allows the system to be modified to perform other tasks, or to be extended for more complex tasks without retraining lower levels. PMID:25058706
A simple and low-cost biofilm quantification method using LED and CMOS image sensor.
Kwak, Yeon Hwa; Lee, Junhee; Lee, Junghoon; Kwak, Soo Hwan; Oh, Sangwoo; Paek, Se-Hwan; Ha, Un-Hwan; Seo, Sungkyu
2014-12-01
A novel biofilm detection platform, which consists of a cost-effective red, green, and blue light-emitting diode (RGB LED) as a light source and a lens-free CMOS image sensor as a detector, is designed. This system can measure the diffraction patterns of cells from their shadow images, and gather light absorbance information according to the concentration of biofilms through a simple image processing procedure. Compared to a bulky and expensive commercial spectrophotometer, this platform can provide accurate and reproducible biofilm concentration detection and is simple, compact, and inexpensive. Biofilms originating from various bacterial strains, including Pseudomonas aeruginosa (P. aeruginosa), were tested to demonstrate the efficacy of this new biofilm detection approach. The results were compared with the results obtained from a commercial spectrophotometer. To utilize a cost-effective light source (i.e., an LED) for biofilm detection, the illumination conditions were optimized. For accurate and reproducible biofilm detection, a simple, custom-coded image processing algorithm was developed and applied to a five-megapixel CMOS image sensor, which is a cost-effective detector. The concentration of biofilms formed by P. aeruginosa was detected and quantified by varying the indole concentration, and the results were compared with the results obtained from a commercial spectrophotometer. The correlation value of the results from the two systems was 0.981 (N = 9, P < 0.01) and the coefficients of variation (CVs) were approximately threefold lower for the CMOS image-sensor platform. Copyright © 2014 Elsevier B.V. All rights reserved.
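The absorbance step can be sketched in a few lines (an illustration of the general Beer-Lambert idea, not the paper's custom algorithm; the intensity values are invented):

```python
import math

def absorbance(sample_intensity, blank_intensity):
    """Beer-Lambert-style absorbance A = -log10(I / I0), computed from the
    mean pixel intensity of the sample shadow image (I) and of a blank,
    illumination-only reference image (I0)."""
    return -math.log10(sample_intensity / blank_intensity)

# Denser biofilm -> darker shadow image -> higher absorbance (invented values).
a_sparse = absorbance(200.0, 240.0)
a_dense = absorbance(60.0, 240.0)
```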
NASA Astrophysics Data System (ADS)
Millar, Richard J.; Nicholls, Zebedee R.; Friedlingstein, Pierre; Allen, Myles R.
2017-06-01
Projections of the response to anthropogenic emission scenarios, evaluation of some greenhouse gas metrics, and estimates of the social cost of carbon often require a simple model that links emissions of carbon dioxide (CO2) to atmospheric concentrations and global temperature changes. An essential requirement of such a model is to reproduce typical global surface temperature and atmospheric CO2 responses displayed by more complex Earth system models (ESMs) under a range of emission scenarios, as well as an ability to sample the range of ESM response in a transparent, accessible and reproducible form. Here we adapt the simple model of the Intergovernmental Panel on Climate Change 5th Assessment Report (IPCC AR5) to explicitly represent the state dependence of the CO2 airborne fraction. Our adapted model (FAIR) reproduces the range of behaviour shown in full and intermediate complexity ESMs under several idealised carbon pulse and exponential concentration increase experiments. We find that the inclusion of a linear increase in 100-year integrated airborne fraction with cumulative carbon uptake and global temperature change substantially improves the representation of the response of the climate system to CO2 on a range of timescales and under a range of experimental designs.
Costs of postabortion care in public sector health facilities in Malawi: a cross-sectional survey.
Benson, Janie; Gebreselassie, Hailemichael; Mañibo, Maribel Amor; Raisanen, Keris; Johnston, Heidi Bart; Mhango, Chisale; Levandowski, Brooke A
2015-12-17
Health systems could obtain substantial cost savings by providing safe abortion care rather than providing expensive treatment for complications of unsafely performed abortions. This study estimates current health system costs of treating unsafe abortion complications and compares these findings with newly-projected costs for providing safe abortion in Malawi. We conducted in-depth surveys of medications, supplies, and time spent by clinical personnel dedicated to postabortion care (PAC) for three treatment categories (simple, severe non-surgical, and severe surgical complications) and three uterine evacuation (UE) procedure types (manual vacuum aspiration (MVA), dilation and curettage (D&C) and misoprostol-alone) at 15 purposively-selected public health facilities. Per-case treatment costs were calculated and applied to national, annual PAC caseload data. The median cost per D&C case ($63) was 29% higher than MVA treatment ($49). Costs to treat severe non-surgical complications ($63) were almost five times higher than those of a simple PAC case ($13). Severe surgical complications were especially costly to treat at $128. PAC treatment in public facilities cost an estimated $314,000 annually. Transition to safe, legal abortion would yield an estimated cost reduction of 20%-30%. The method of UE and severity of complications have a large impact on overall costs. With a liberalized abortion law and implementation of induced abortion services with WHO-recommended UE methods, current PAC costs to the health system could markedly decrease.
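The reported ratios follow directly from the per-case figures in the abstract (simple arithmetic on the published numbers, not the study's costing method):

```python
# Median per-case uterine-evacuation costs, US$ (figures from the abstract).
mva_cost, dc_cost = 49, 63
simple_case, severe_nonsurgical, severe_surgical = 13, 63, 128

dc_premium = dc_cost / mva_cost - 1                # ~0.29, i.e. D&C is 29% costlier
severity_ratio = severe_nonsurgical / simple_case  # ~4.8, "almost five times"

annual_pac_cost = 314_000                          # estimated annual PAC total, US$
# Projected 20%-30% cost reduction from transition to safe, legal abortion.
savings_low, savings_high = 0.20 * annual_pac_cost, 0.30 * annual_pac_cost
```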
NASA Astrophysics Data System (ADS)
Teng, Jinn-Tsair; Chang, Chun-Tao; Chern, Maw-Sheng
2012-11-01
Most researchers have studied vendor-buyer supply chain inventory policies only from the perspective of an integrated model, which provides the best cooperative solution. However, in reality, not many vendors and buyers are wholly integrated. Hence, it is necessary to study the optimal policies not only in an integrated environment but also in a non-cooperative environment. In this article, we develop a supply chain vendor-buyer inventory model with trade credit financing linked to order quantity. We then study the optimal policies for both the vendor and the buyer, first under a non-cooperative environment and then under a cooperative integrated situation. Further, we provide some numerical examples to illustrate the theoretical results, compare the differences between these two distinct solutions, and obtain some managerial insights. For example, in a cooperative environment, to reduce the total cost for both parties, the vendor should either provide a simple permissible delay without order quantity restriction or offer a long permissible delay linked to order quantity. By contrast, in a non-cooperative environment, the vendor should provide a short permissible delay to reduce its total cost.
Two-voice fundamental frequency estimation
NASA Astrophysics Data System (ADS)
de Cheveigné, Alain
2002-05-01
An algorithm is presented that estimates the fundamental frequencies of two concurrent voices or instruments. The algorithm models each voice as a periodic function of time, and jointly estimates both periods by cancellation according to a previously proposed method [de Cheveigné and Kawahara, Speech Commun. 27, 175-185 (1999)]. The new algorithm improves on the old in several respects: it allows an unrestricted search range, effectively avoids harmonic and subharmonic errors, is more accurate (it uses two-dimensional parabolic interpolation), and is computationally less costly. It remains subject to unavoidable errors when periods are in certain simple ratios and the task is inherently ambiguous. The algorithm is evaluated on a small database including speech, singing voice, and instrumental sounds. It can be extended in several ways: to decide the number of voices, to handle amplitude variations, and to estimate more than two voices (at the expense of increased processing cost and decreased reliability). It makes no use of instrument models, learned or otherwise, although it could usefully be combined with such models. [Work supported by the Cognitique programme of the French Ministry of Research and Technology.]
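The cancellation idea can be sketched with a brute-force toy version (integer lags and a grid search only; the actual algorithm adds normalization, two-dimensional parabolic interpolation, and an unrestricted search range):

```python
import numpy as np

def cancel(x, period):
    """Period cancellation comb filter: y[n] = x[n] - x[n - period]."""
    y = np.zeros_like(x)
    y[period:] = x[period:] - x[:-period]
    return y

def joint_periods(x, tmin, tmax):
    """Grid-search the period pair that minimizes residual power after
    cascading two cancellation filters."""
    start = 2 * tmax  # skip filter start-up transients
    best, best_pair = np.inf, None
    for t1 in range(tmin, tmax + 1):
        r1 = cancel(x, t1)
        for t2 in range(t1, tmax + 1):
            power = np.mean(cancel(r1, t2)[start:] ** 2)
            if power < best:
                best, best_pair = power, (t1, t2)
    return best_pair

# Two voices with periods of 25 and 40 samples (invented test signal).
n = np.arange(2000)
x = np.sin(2 * np.pi * n / 25) + 0.8 * np.sin(2 * np.pi * n / 40)
pair = joint_periods(x, 20, 45)  # -> (25, 40)
```

Note the search range deliberately excludes 50, a multiple of 25: as the abstract warns, periods in simple ratios make the task inherently ambiguous.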
Following a trend with an exponential moving average: Analytical results for a Gaussian model
NASA Astrophysics Data System (ADS)
Grebenkov, Denis S.; Serror, Jeremy
2014-01-01
We investigate how price variations of a stock are transformed into profits and losses (P&Ls) of a trend-following strategy. In the frame of a Gaussian model, we derive the probability distribution of P&Ls and analyze its moments (mean, variance, skewness and kurtosis) and asymptotic behavior (quantiles). We show that the asymmetry of the distribution (with often small losses and less frequent but significant profits) is characteristic of trend-following strategies and largely independent of the peculiarities of price variations. At short times, trend-following strategies admit larger losses than one may anticipate from standard Gaussian estimates, while smaller losses are ensured at longer times. Simple explicit formulas characterizing the distribution of P&Ls illustrate the basic mechanisms of momentum trading, while general matrix representations can be applied to arbitrary Gaussian models. We also compute explicitly the annualized risk-adjusted P&L and strategy turnover to account for transaction costs. We deduce the optimal trend-following timescale and its dependence on both the autocorrelation level and transaction costs. Theoretical results are illustrated on the Dow Jones index.
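A toy Monte Carlo conveys the setup (illustrative only: the AR(1) increments, the sign-of-EMA position rule, and all parameter values are assumptions, not the paper's Gaussian framework):

```python
import numpy as np

rng = np.random.default_rng(0)

def trend_following_pnl(n_steps, n_paths, lam=0.05, rho=0.1):
    """Simulate the P&L of a strategy whose position is the sign of an
    exponential moving average (EMA) of past price increments.  Increments
    follow an AR(1) process with weak positive autocorrelation rho."""
    pnl = np.zeros(n_paths)
    for p in range(n_paths):
        ema, total, r_prev = 0.0, 0.0, 0.0
        for _ in range(n_steps):
            r = rho * r_prev + rng.normal()   # Gaussian price increment
            total += np.sign(ema) * r         # trade on yesterday's EMA
            ema = (1 - lam) * ema + lam * r   # update the moving average
            r_prev = r
        pnl[p] = total
    return pnl

# One "year" of 250 steps across 2000 simulated paths.
pnl = trend_following_pnl(250, 2000)
```

With positive autocorrelation the strategy earns a positive mean P&L; the full distribution of `pnl` is what the paper characterizes analytically.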
Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments
NASA Astrophysics Data System (ADS)
Munsky, Brian; Shepherd, Douglas
2014-03-01
Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.
Disease-induced resource constraints can trigger explosive epidemics
NASA Astrophysics Data System (ADS)
Böttcher, L.; Woolley-Meza, O.; Araújo, N. A. M.; Herrmann, H. J.; Helbing, D.
2015-11-01
Advances in mathematical epidemiology have led to a better understanding of the risks posed by epidemic spreading and informed strategies to contain disease spread. However, a challenge that has been overlooked is that, as a disease becomes more prevalent, it can limit the availability of the capital needed to effectively treat those who have fallen ill. Here we use a simple mathematical model to gain insight into the dynamics of an epidemic when the recovery of sick individuals depends on the availability of healing resources that are generated by the healthy population. We find that epidemics spiral out of control into “explosive” spread if the cost of recovery is above a critical cost. This can occur even when the disease would die out without the resource constraint. The onset of explosive epidemics is very sudden, exhibiting a discontinuous transition under very general assumptions. We find analytical expressions for the critical cost and the size of the explosive jump in infection levels in terms of the parameters that characterize the spreading process. Our model and results apply beyond epidemics to contagion dynamics that self-induce constraints on recovery, thereby amplifying the spreading process.
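The mechanism can be caricatured in a minimal SIS-type sketch (the equations and all parameter values are illustrative assumptions, not the paper's model): healthy individuals generate resources, each recovery consumes them, and recovery stops when the budget is exhausted.

```python
def epidemic(cost, beta=0.3, gamma=0.2, T=2000.0, dt=0.1):
    """Euler-integrate an SIS model in which the recovery term is switched
    off whenever the resource budget b is depleted.  Healthy individuals
    produce resources at unit rate; each unit of recovery spends `cost`."""
    i, b = 0.01, 1.0           # infected fraction, resource budget
    for _ in range(int(T / dt)):
        recovery = gamma * i if b > 0 else 0.0
        di = beta * i * (1 - i) - recovery
        db = (1 - i) - cost * recovery
        i = min(1.0, max(0.0, i + dt * di))
        b += dt * db
    return i

low = epidemic(cost=1.0)    # cheap recovery: endemic but contained
high = epidemic(cost=50.0)  # costly recovery: budget collapses, explosive spread
```

Raising `cost` past a critical value flips the long-run infected fraction from the ordinary endemic level to near-total infection, mirroring the discontinuous "explosive" transition described above.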
Goh, Wei Jiang; Zou, Shui; Ong, Wei Yi; Torta, Federico; Alexandra, Alvarez Fernandez; Schiffelers, Raymond M; Storm, Gert; Wang, Jiong-Wei; Czarny, Bertrand; Pastorin, Giorgia
2017-10-30
Cell Derived Nanovesicles (CDNs) have been developed from the rapidly expanding field of exosomes, representing a class of bioinspired Drug Delivery Systems (DDS). However, translation to clinical applications is limited by the low yield and multi-step approach involved in isolating naturally secreted exosomes. Here, we show the first demonstration of a simple and rapid production method for CDNs using spin cups via a cell-shearing approach, which offers clear advantages in terms of yield and cost-effectiveness over both traditional exosome isolation and existing CDN fabrication techniques. The CDNs obtained had a higher protein yield and showed similarities in terms of physical characterization, protein and lipid analysis to both exosomes and CDNs previously reported in the literature. In addition, we investigated the mechanisms of cellular uptake of CDNs in vitro and their biodistribution in an in vivo mouse tumour model. Colocalization of the CDNs at the tumour site in a cancer mouse model was demonstrated, highlighting the potential of CDNs as an anti-cancer strategy. Taken together, the results suggest that CDNs could provide a cost-effective alternative to exosomes as an ideal drug nanocarrier.
Small-Scale and Low Cost Electrodes for "Standard" Reduction Potential Measurements
ERIC Educational Resources Information Center
Eggen, Per-Odd; Kvittingen, Lise
2007-01-01
The construction of three simple and inexpensive electrodes (hydrogen, chlorine, and copper) is described. This simple method encourages students to construct their own electrodes and supports a better understanding of precipitation and other electrochemistry concepts.
Laplace transform analysis of a multiplicative asset transfer model
NASA Astrophysics Data System (ADS)
Sokolov, Andrey; Melatos, Andrew; Kieu, Tien
2010-07-01
We analyze a simple asset transfer model in which the transfer amount is a fixed fraction f of the giver’s wealth. The model is analyzed in a new way by Laplace transforming the master equation, solving it analytically and numerically for the steady-state distribution, and exploring the solutions for various values of f∈(0,1). The Laplace transform analysis is superior to agent-based simulations as it does not depend on the number of agents, enabling us to study entropy and inequality in regimes that are costly to address with simulations. We demonstrate that Boltzmann entropy is not a suitable (e.g. non-monotonic) measure of disorder in a multiplicative asset transfer system and suggest an asymmetric stochastic process that is equivalent to the asset transfer model.
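The underlying dynamics are easy to state as an agent-based sketch (for intuition only; the paper's contribution is precisely that the Laplace-transform treatment avoids such simulations and their dependence on agent number):

```python
import numpy as np

rng = np.random.default_rng(1)

def transfer_model(n_agents=1000, n_steps=200_000, f=0.25):
    """At each step a randomly chosen giver hands a fixed fraction f of its
    wealth to a randomly chosen receiver; total wealth is conserved."""
    w = np.ones(n_agents)
    for _ in range(n_steps):
        giver, receiver = rng.integers(n_agents, size=2)
        amount = f * w[giver]   # transfer amount is f times giver's wealth
        w[giver] -= amount
        w[receiver] += amount
    return w

w = transfer_model()
# The steady state is strongly unequal even though all agents start equal.
```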
Value of the distant future: Model-independent results
NASA Astrophysics Data System (ADS)
Katz, Yuri A.
2017-01-01
This paper shows that a model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive an analytical expression for the long-run discount factor and provide a detailed comparison of the obtained result with the outcome of benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive a non-Markovian generalization of the Ramsey discounting formula. The analytical results obtained allow simple calibration and may augment the rigorous cost-benefit and regulatory impact analysis of long-term environmental and infrastructure projects.
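The qualitative effect, long-run discount rates falling below short-run ones when rate fluctuations are persistent, can be checked with a Monte Carlo sketch (the Ornstein-Uhlenbeck dynamics and all parameter values here are illustrative assumptions, not the paper's analytical treatment):

```python
import numpy as np

rng = np.random.default_rng(2)

def certainty_equivalent_rates(horizons, mean=0.03, sigma=0.02,
                               kappa=0.1, n_paths=20_000, dt=0.5):
    """Simulate an Ornstein-Uhlenbeck short rate (exponentially decaying
    autocorrelation) and return the certainty-equivalent discount rate
    R(t) = -ln E[exp(-integral of r ds)] / t at the requested horizons."""
    r = np.full(n_paths, mean)
    integral = np.zeros(n_paths)
    out = {}
    for k in range(1, int(max(horizons) / dt) + 1):
        r += kappa * (mean - r) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
        integral += r * dt
        t = k * dt
        if t in horizons:
            out[t] = -np.log(np.mean(np.exp(-integral))) / t
    return out

R = certainty_equivalent_rates([10.0, 100.0])
# Jensen's inequality pushes the long-horizon rate below the short-horizon one.
```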
NASA Astrophysics Data System (ADS)
Katsura, Yasufumi; Attaviriyanupap, Pathom; Kataoka, Yoshihiko
In this research, the fundamental premises for deregulation of the electric power industry are reevaluated. The authors develop a simple model representing a wholesale electricity market with a highly congested network, constructed by simplifying the power system and market of the New York ISO based on available 2004 data, with some estimation. Using the developed model and historical construction cost data, the economic impact of transmission line additions on market participants and the impact of deregulation on power plant additions in a market with transmission congestion are studied. Simulation results show that market signals may fail to facilitate proper capacity additions, resulting in an undesirable cycle of over-construction and insufficient construction of capacity.
Bovolenta, Tânia M; de Azevedo Silva, Sônia Maria Cesar; Saba, Roberta Arb; Borges, Vanderci; Ferraz, Henrique Ballalai; Felicio, Andre C
2017-01-01
Background Although Parkinson’s disease is the second most prevalent neurodegenerative disease worldwide, its cost in Brazil – South America’s largest country – is unknown. Objective The goal of this study was to calculate the average annual cost of Parkinson’s disease in the city of São Paulo (Brazil), with a focus on disease-related motor symptoms. Subjects and methods This was a retrospective, cross-sectional analysis using a bottom-up approach (ie, from the society’s perspective). Patients (N=260) at two tertiary public health centers, who were residents of the São Paulo metropolitan area, completed standardized questionnaires regarding their disease-related expenses. We used simple and multiple generalized linear models to assess the correlations between total cost and patient-related, as well as disease-related variables. Results The total average annual cost of Parkinson’s disease was estimated at US$5,853.50 per person, including US$3,172.00 in direct costs (medical and nonmedical) and US$2,681.50 in indirect costs. Costs were directly correlated with disease severity (including the degree of motor symptoms), patients’ age, and time since disease onset. Conclusion In this study, we determined the cost of Parkinson’s disease in Brazil and observed that disease-related motor symptoms are a significant component of the costs incurred on the public health system, patients, and society in general. PMID:29276379
All-Iron Redox Flow Battery Tailored for Off-Grid Portable Applications.
Tucker, Michael C; Phillips, Adam; Weber, Adam Z
2015-12-07
An all-iron redox flow battery is proposed and developed for end users without access to an electricity grid. The concept is a low-cost battery which the user assembles, discharges, and then disposes of the active materials. The design goals are: (1) minimize upfront cost, (2) maximize discharge energy, and (3) utilize non-toxic and environmentally benign materials. These are different goals than typically considered for electrochemical battery technology, which provides the opportunity for a novel solution. The selected materials are: low-carbon-steel negative electrode, paper separator, porous-carbon-paper positive electrode, and electrolyte solution containing 0.5 m Fe2(SO4)3 active material and 1.2 m NaCl supporting electrolyte. With these materials, an average power density around 20 mW cm-2 and a maximum energy density of 11.5 Wh L-1 are achieved. A simple cost model indicates the consumable materials cost US$6.45 per kWh, or only US$0.034 per mobile phone charge. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
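A back-of-envelope check of the quoted figures (assuming, as the abstract implies, that the per-charge cost follows directly from the per-kWh consumables cost):

```python
cost_per_kwh = 6.45      # US$ per kWh of delivered energy (from the abstract)
cost_per_charge = 0.034  # US$ per mobile phone charge (from the abstract)

# Implied energy per phone charge, in Wh.
energy_per_charge_wh = cost_per_charge / cost_per_kwh * 1000  # ~5.3 Wh
```

About 5.3 Wh per charge, consistent with a small phone battery of the period.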
New statistical scission-point model to predict fission fragment observables
NASA Astrophysics Data System (ADS)
Lemaître, Jean-François; Panebianco, Stefano; Sida, Jean-Luc; Hilaire, Stéphane; Heinrich, Sophie
2015-09-01
The development of high performance computing facilities makes possible a massive production of nuclear data in a full microscopic framework. Taking advantage of the individual potential calculations of more than 7000 nuclei, a new statistical scission-point model, called SPY, has been developed. It gives access to the absolute available energy at the scission point, which allows the use of a parameter-free microcanonical statistical description to calculate the distributions and the mean values of all fission observables. SPY uses the richness of microscopy in a rather simple theoretical framework, without any parameter except the scission-point definition, to draw clear answers based on perfect knowledge of the ingredients involved in the model, with very limited computing cost.
Optimal joint management of a coastal aquifer and a substitute resource
NASA Astrophysics Data System (ADS)
Moreaux, M.; Reynaud, A.
2004-06-01
This article characterizes the optimal joint management of a coastal aquifer and a costly water substitute. For this purpose we use a mathematical representation of the aquifer that incorporates the displacement of the interface between the seawater and the freshwater of the aquifer. We identify the spatial cost externalities created by users on each other and we show that the optimal water supply depends on the location of users. Users located in the coastal zone exclusively use the costly substitute. Those located in the more upstream area are supplied from the aquifer. At the optimum their withdrawal must take into account the cost externalities they generate on users located downstream. Last, users located in a median zone use the aquifer with a surface transportation cost. We show that the optimum can be implemented in a decentralized economy through a very simple Pigouvian tax. Finally, the optimal and decentralized extraction policies are simulated on a very simple example.
Broadband network selection issues
NASA Astrophysics Data System (ADS)
Leimer, Michael E.
1996-01-01
Selecting the best network for a given cable or telephone company provider is not as obvious as it appears. The cost and performance trades between Hybrid Fiber Coax (HFC), Fiber to the Curb (FTTC) and Asymmetric Digital Subscriber Line networks lead to very different choices based on the existing plant and the expected interactive subscriber usage model. This paper presents some of the issues and trades that drive network selection. The majority of the Interactive Television trials currently underway or planned are based on HFC networks. As a throwaway market trial or a short-term strategic incursion into a cable market, HFC may make sense. In the long run, if interactive services see high demand, HFC costs per node and an ever-shrinking neighborhood node size to service large numbers of subscribers make FTTC appear attractive. For example, thirty-three 64-QAM modulators are required to fill the 550 MHz to 750 MHz spectrum with compressed video streams in 6 MHz channels. This large amount of hardware at each node drives not only initial build-out costs, but operations and maintenance costs as well. FTTC, with its potential for digitally switching large amounts of bandwidth to a given home, offers the potential to grow with the interactive subscriber base with less downstream cost. Integrated telephony on these networks is an issue that appears to be an afterthought for most of the networks being selected at the present time. The major players seem to be videocentric and include telephony as a simple add-on later. This may be a reasonable viewpoint for the telephone companies that plan to leave their existing phone networks untouched. However, a phone company planning a network upgrade or a cable company jumping into the telephony business needs to carefully weigh the cost and performance issues of the various network choices. Each network type provides varying capability in both upstream and downstream bandwidth for voice channels.
The noise characteristics vary as well. Cellular quality will not be tolerated by the home or business consumer. The network choices are not simple or obvious. Careful consideration of the cost and performance trades along with cable or telephone company strategic plans is required to ensure selecting the best network.
Shimansky, Y P
2011-05-01
It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and the related decision-making do not form a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms remain unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and that an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of the parameter estimate and optimization of the control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle inapplicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of the parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and action planning under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
Incentives for solar energy in industry
NASA Astrophysics Data System (ADS)
Bergeron, K. D.
1981-05-01
Several issues are analyzed concerning the effects that government subsidies and other incentives have on the use of solar energy in industry, as well as on other capital-intensive alternative energy supplies. Discounted cash flow analysis is used to compare tax deductions for fuel expenses with tax credits for capital investments for energy. The result is a simple expression for tax equity. The effects that market penetration of solar energy has on conventional energy prices are analyzed with a free market model. It is shown that the net cost of a subsidy program to society can be significantly reduced by these price effects. Several government loan guarantee concepts are evaluated as incentives that may not require direct outlays of government funds; their relative effectiveness in achieving loan leverage through project financing, and their cost and practicality, are discussed.
Optimal design of reverse osmosis module networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maskan, F.; Wiley, D.E.; Johnston, L.P.M.
2000-05-01
The structure of individual reverse osmosis modules, the configuration of the module network, and the operating conditions were optimized for seawater and brackish water desalination. The system model included simple mathematical equations to predict the performance of the reverse osmosis modules. The optimization problem was formulated as a constrained multivariable nonlinear optimization. The objective function was the annual profit for the system, consisting of the profit obtained from the permeate, the capital cost for the process units, and the operating costs associated with energy consumption and maintenance. Optimization of several dual-stage reverse osmosis systems was investigated and compared. It was found that optimal network designs are the ones that produce the most permeate. It may be possible to achieve economic improvements by refining current membrane module designs and their operating pressures.
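The profit-maximizing formulation described above can be illustrated with a toy model: a hypothetical annual-profit function of operating pressure and recovery, maximized by brute-force search. The cost coefficients and the performance relation are invented for illustration and are not the paper's module equations.

```python
# Toy profit model for a reverse osmosis system (illustrative only; the
# paper's actual module equations and cost coefficients are not given here).
def annual_profit(pressure_bar, recovery):
    permeate = 1000.0 * recovery * (pressure_bar / 60.0)   # m3/yr, hypothetical
    revenue = 0.5 * permeate                               # $/m3, assumed price
    capital = 2000.0 + 10.0 * pressure_bar                 # assumed cost terms
    energy = 0.08 * pressure_bar * permeate / 100.0
    return revenue - capital / 20.0 - energy               # 20-yr amortization

# Brute-force search over the two decision variables
best = max(((annual_profit(p, r), p, r)
            for p in range(20, 81, 5)
            for r in (0.3, 0.4, 0.5, 0.6)),
           key=lambda t: t[0])
```

A real design study would replace the toy profit function with validated membrane transport equations and use a constrained nonlinear solver.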
Creation of a High-fidelity, Low-cost Pediatric Skull Fracture Ultrasound Phantom.
Soucy, Zachary P; Mills, Lisa; Rose, John S; Kelley, Kenneth; Ramirez, Francisco; Kuppermann, Nathan
2015-08-01
Over the past decade, point-of-care ultrasound has become a common tool used for both procedures and diagnosis. Developing high-fidelity phantoms is critical for training in new and novel point-of-care ultrasound applications. Detecting skull fractures on ultrasound imaging in the younger-than-2-year-old patient is an emerging area of point-of-care ultrasound research. Identifying a skull fracture on ultrasound imaging in this age group requires knowledge of the appearance and location of sutures to distinguish them from fractures. There are currently no commercially available pediatric skull fracture models. We outline a novel approach to building a cost-effective, simple, high-fidelity pediatric skull fracture phantom to meet a unique training requirement. © 2015 by the American Institute of Ultrasound in Medicine.
A Cost Estimation Tool for Charter Schools
ERIC Educational Resources Information Center
Hayes, Cheryl D.; Keller, Eric
2009-01-01
To align their financing strategies and fundraising efforts with their fiscal needs, charter school leaders need to know how much funding they need and what that funding will support. This cost estimation tool offers a simple set of worksheets to help start-up charter school operators identify and estimate the range of costs and timing of…
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.
1985-01-01
The Improved Price Estimation Guidelines (IPEG4) program provides a comparatively simple, yet relatively accurate, estimate of the price of a manufactured product. IPEG4 processes user-supplied input data to determine an estimate of the price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on an industry-wide or process-wide basis.
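The kind of calculation such a program performs can be sketched as annualizing each cost category and dividing by production volume; the multipliers below are placeholders, not the actual IPEG4 coefficients.

```python
# Hypothetical IPEG-style unit price estimate: annualize each cost category
# and divide by production volume.  All multipliers are illustrative
# assumptions, not the published IPEG4 values.
def price_per_unit(equipment, floor_space, labor, materials, utilities, volume):
    annualized = (0.5 * equipment       # capital recovery + overhead, assumed
                  + 1.0 * floor_space   # facility charge per year, assumed
                  + 2.1 * labor         # labor with burden, assumed
                  + 1.2 * materials
                  + 1.1 * utilities)
    return annualized / volume

price = price_per_unit(equipment=100000, floor_space=5000,
                       labor=40000, materials=20000,
                       utilities=8000, volume=10000)
```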
ABC estimation of unit costs for emergency department services.
Holmes, R L; Schroeder, R E
1996-04-01
Rapid evolution of the health care industry forces managers to make cost-effective decisions. Typical hospital cost accounting systems do not provide emergency department managers with the information needed, but emergency department settings are so complex and dynamic as to make the more accurate activity-based costing (ABC) system prohibitively expensive. Through judicious use of the available traditional cost accounting information and simple computer spreadsheets, managers may approximate the decision-guiding information that would result from the much more costly and time-consuming implementation of ABC.
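The spreadsheet approximation of ABC that the authors describe can be sketched as follows: pooled costs are assigned to visit types through activity drivers rather than a flat average. All figures below are invented.

```python
# Sketch of approximating activity-based costs with a spreadsheet-style
# calculation: allocate pooled overhead to ED visit types by activity
# drivers rather than a flat average.  All numbers are made up.
cost_pools = {"triage": 50000.0, "physician": 200000.0, "supplies": 30000.0}
drivers = {  # driver units consumed per visit type (hypothetical)
    "minor":  {"triage": 1, "physician": 0.5, "supplies": 1},
    "urgent": {"triage": 1, "physician": 2.0, "supplies": 3},
}
volumes = {"minor": 8000, "urgent": 2000}

# Rate per driver unit = pool cost / total driver units demanded
total_units = {p: sum(drivers[v][p] * volumes[v] for v in volumes)
               for p in cost_pools}
rates = {p: cost_pools[p] / total_units[p] for p in cost_pools}
unit_cost = {v: sum(drivers[v][p] * rates[p] for p in cost_pools)
             for v in volumes}
```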
Hinze, Jacob F.; Nellis, Gregory F.; Anderson, Mark H.
2017-09-21
Supercritical carbon dioxide (sCO2) power cycles have the potential to deliver high efficiency at low cost. However, in order for an sCO2 cycle to reach high efficiency, highly effective recuperators are needed. These recuperative heat exchangers must transfer heat at a rate that is substantially larger than the heat transfer to the cycle itself and can therefore represent a significant portion of the power block costs. Regenerators are proposed as a cost-saving alternative to high-cost printed circuit recuperators for this application. A regenerator is an indirect heat exchanger which periodically stores and releases heat to the working fluid. The simple design of a regenerator can be made more inexpensively compared to current options. The objective of this paper is a detailed evaluation of regenerators as a competing technology for recuperators within an sCO2 Brayton cycle. The level of the analysis presented here is sufficient to identify issues with the regenerator system in order to direct future work and also to clarify the potential advantage of pursuing this technology. A reduced order model of a regenerator is implemented into a cycle model of an sCO2 Brayton cycle. An economic analysis investigates the cost savings that are possible by switching from recuperative heat exchangers to switched-bed regenerators. The cost of the regenerators was estimated using the amount of material required if the pressure vessel is sized using ASME Boiler and Pressure Vessel Code (BPVC) requirements. The cost of the associated valves is found to be substantial for the regenerator system and is estimated in collaboration with an industrial valve supplier. The result of this analysis suggests that a 21.2% reduction in the contribution to the Levelized Cost of Electricity (LCoE) from the power block can be realized by switching to a regenerator-based system.
ERIC Educational Resources Information Center
Chen, Huai-Yi; Nieh, Hwa-Ming; Yang, Ming-Feng; Chou, Yu-Kung; Chung, Jui-Hsu; Liou, Je-Wen
2016-01-01
This study proposes a home-assembled, low-cost blue light-emitting diode (LED) photometer that uses simple and low-cost hardware and software, costing about US $150. This 425-nm wavelength photometer is controlled by an 89C51 microcontroller chip. Glucose concentration detection experiments involving enzyme coupling reactions were carried out to…
Frischer, Robert; Penhaker, Marek; Krejcar, Ondrej; Kacerovsky, Marian; Selamat, Ali
2014-01-01
Precise temperature measurement is essential in a wide range of applications in the medical environment; however, regarding the problem of temperature measurement inside a simple incubator, neither a simple nor a low-cost solution has been proposed yet. Given that standard temperature sensors do not satisfy the necessary expectations, the problem is not measuring temperature, but rather achieving the desired sensitivity. In response, this paper introduces a novel hardware design as well as an implementation that increases measurement sensitivity in defined temperature intervals at low cost. PMID:25494352
A study of two statistical methods as applied to shuttle solid rocket booster expenditures
NASA Technical Reports Server (NTRS)
Perlmutter, M.; Huang, Y.; Graves, M.
1974-01-01
The state probability technique and the Monte Carlo technique are applied to finding shuttle solid rocket booster expenditure statistics. For a given attrition rate per launch, the probable number of boosters needed for a given mission of 440 launches is calculated. Several cases are considered, including the elimination of the booster after a maximum of 20 consecutive launches. Also considered is the case where the booster is composed of replaceable components with independent attrition rates. A simple cost analysis is carried out to indicate the number of boosters to build initially, depending on booster costs. Two statistical methods were applied in the analysis: (1) state probability method which consists of defining an appropriate state space for the outcome of the random trials, and (2) model simulation method or the Monte Carlo technique. It was found that the model simulation method was easier to formulate while the state probability method required less computing time and was more accurate.
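The Monte Carlo technique described above can be sketched for the simplest case: a fixed attrition probability per launch and retirement after 20 uses. The attrition rate and replacement logic below are illustrative assumptions, not the study's actual parameters.

```python
import random

# Monte Carlo sketch of booster expenditure (simplified from the abstract):
# each launch loses the booster with probability p_loss; count how many
# boosters a 440-launch program consumes.  Retirement after 20 uses mirrors
# the paper's variant case; the 2% attrition rate is an assumption.
def boosters_needed(n_launches=440, p_loss=0.02, max_uses=20, rng=None):
    rng = rng or random.Random(0)
    boosters, uses = 1, 0
    for _ in range(n_launches):
        uses += 1
        if rng.random() < p_loss or uses >= max_uses:
            boosters += 1   # current booster lost or retired; build another
            uses = 0
    return boosters

runs = [boosters_needed(rng=random.Random(seed)) for seed in range(200)]
mean_fleet = sum(runs) / len(runs)
```

With zero losses the program still needs 23 boosters (22 retirements over 440 launches), so attrition adds only a handful to the fleet size.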
An auto-adaptive optimization approach for targeting nonpoint source pollution control practices.
Chen, Lei; Wei, Guoyuan; Shen, Zhenyao
2015-10-21
To solve computationally intensive and technically complex control of nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm would search automatically for those Pareto-optimality solutions without a complex calibration of optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved due to the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed the state-of-the-art existing algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of BMPs.
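The auto-adaptive idea, letting optimization parameters evolve with the solutions instead of being calibrated by hand, can be sketched with a one-variable genetic algorithm in which each individual carries its own mutation step. This is a schematic stand-in, not the authors' algorithm or watershed model.

```python
import random

# Minimal sketch of a genetic algorithm with an auto-adaptive mutation rate:
# each individual is a (value, sigma) pair whose mutation step sigma evolves
# along with the solution, so no hand calibration of the step is needed.
def evolve(fitness, bounds=(-5.0, 5.0), pop=30, gens=60, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    popn = [(rng.uniform(lo, hi), 1.0) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda ind: fitness(ind[0]))   # minimize fitness
        parents = popn[:pop // 2]                    # truncation selection
        children = []
        for x, sigma in parents:
            sigma2 = sigma * (1.3 if rng.random() < 0.5 else 1 / 1.3)
            x2 = min(hi, max(lo, x + rng.gauss(0, sigma2)))
            children.append((x2, sigma2))
        popn = parents + children                    # elitist replacement
    return min(popn, key=lambda ind: fitness(ind[0]))[0]

best_x = evolve(lambda x: (x - 2.0) ** 2)            # toy objective
```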
A model for the sustainable selection of building envelope assemblies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huedo, Patricia, E-mail: huedo@uji.es; Mulet, Elena, E-mail: emulet@uji.es; López-Mesa, Belinda, E-mail: belinda@unizar.es
2016-02-15
The aim of this article is to define an evaluation model for the environmental impacts of building envelopes to support planners in the early phases of materials selection. The model is intended to estimate environmental impacts for different combinations of building envelope assemblies based on scientifically recognised sustainability indicators. These indicators will increase the amount of information that existing catalogues show to support planners in the selection of building assemblies. To define the model, first the environmental indicators were selected based on the specific aims of the intended sustainability assessment. Then, a simplified LCA methodology was developed to estimate the impacts applicable to three types of dwellings considering different envelope assemblies, building orientations and climate zones. This methodology takes into account the manufacturing, installation, maintenance and use phases of the building. Finally, the model was validated and a matrix in Excel was created as an implementation of the model. - Highlights: • Method to assess the envelope impacts based on a simplified LCA • To be used at an earlier phase than the existing methods in a simple way. • It assigns a score by means of known sustainability indicators. • It estimates data about the embodied and operating environmental impacts. • It compares the investment costs with the costs of the consumed energy.
Modeling the cost and benefit of proteome regulation in a growing bacterial cell
NASA Astrophysics Data System (ADS)
Sharma, Pooja; Pratim Pandey, Parth; Jain, Sanjay
2018-07-01
Escherichia coli cells differentially regulate the production of metabolic and ribosomal proteins in order to stay close to an optimal growth rate in different environments, and exhibit the bacterial growth laws as a consequence. We present a simple mathematical model of a growing-dividing cell in which an internal dynamical mechanism regulates the allocation of proteomic resources between different protein sectors. The model allows an endogenous determination of the growth rate of the cell as a function of cellular and environmental parameters, and reproduces the bacterial growth laws. We use the model and its variants to study the balance between the cost and benefit of regulation. A cost is incurred because cellular resources are diverted to produce the regulatory apparatus. We show that there is a window of environments or a ‘niche’ in which the unregulated cell has a higher fitness than the regulated cell. Outside this niche there is a large space of constant and time varying environments in which regulation is an advantage. A knowledge of the ‘niche boundaries’ allows one to gain an intuitive understanding of the class of environments in which regulation is an advantage for the organism and which would therefore favour the evolution of regulation. The model allows us to determine the ‘niche boundaries’ as a function of cellular parameters such as the size of the burden of the regulatory apparatus. This class of models may be useful in elucidating various tradeoffs in cells and in making in-silico predictions relevant for synthetic biology.
Economic and environmental optimization of waste treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Münster, M.; Ravn, H.; Hedegaard, K.
2015-04-15
Highlights: • Optimizing waste treatment by incorporating LCA methodology. • Applying different objectives (minimizing costs or GHG emissions). • Prioritizing multiple objectives given different weights. • Optimum depends on objective and assumed displaced electricity production. - Abstract: This article presents the new systems engineering optimization model, OptiWaste, which incorporates a life cycle assessment (LCA) methodology and captures important characteristics of waste management systems. As part of the optimization, the model identifies the most attractive waste management options. The model renders it possible to apply different optimization objectives such as minimizing costs or greenhouse gas emissions or to prioritize several objectives given different weights. A simple illustrative case is analysed, covering alternative treatments of one tonne of residual household waste: incineration of the full amount or sorting out organic waste for biogas production for either combined heat and power generation or as fuel in vehicles. The case study illustrates that the optimal solution depends on the objective and assumptions regarding the background system – illustrated with different assumptions regarding displaced electricity production. The article shows that it is feasible to combine LCA methodology with optimization. Furthermore, it highlights the need for including the integrated waste and energy system into the model.
Dynamics of postmarital residence among the Hadza: a kin investment model.
Wood, Brian M; Marlowe, Frank W
2011-07-01
When we have asked Hadza whether married couples should live with the family of the wife (uxorilocally) or the family of the husband (virilocally), we are often told that young couples should spend the first years of a marriage living with the wife's family, and then later, after a few children have been born, the couple has more freedom--they can continue to reside with the wife's kin, or else they could join the husband's kin, or perhaps live in a camp where there are no close kin. In this paper, we address why shifts in kin coresidence patterns may arise in the later years of a marriage, after the birth of children. To do so, we model the inclusive fitness costs that wives might experience from leaving their own kin and joining their husband's kin as a function of the number of children in their nuclear family. Our model suggests that such shifts should become less costly to wives as their families grow. This simple model may help explain some of the dynamics of postmarital residence among the Hadza and offer insight into the dynamics of multilocal residence, the most prevalent form of postmarital residence among foragers.
Collision-based energetic comparison of rolling and hopping over obstacles
Iida, Fumiya
2018-01-01
Locomotion of machines and robots operating in rough terrain is strongly influenced by the mechanics of the ground-machine interactions. A rolling wheel in terrain with obstacles is subject to collisional energy losses, which are governed by mechanics comparable to hopping or walking locomotion. Here we investigate the energetic cost associated with overcoming an obstacle for rolling and hopping locomotion, using a simple mechanics model. The model considers collision-based interactions with the ground and the obstacle, without frictional losses, and we quantify, analyse, and compare the sources of energetic costs for three locomotion strategies. Our results show that the energetic advantages of the locomotion strategies are uniquely defined given the moment of inertia and the Froude number associated with the system. We find that hopping outperforms rolling at larger Froude numbers and vice versa. The analysis is further extended for a comparative study with animals. By applying size and inertial properties through an allometric scaling law of hopping and trotting animals to our models, we found that the conditions at which hopping becomes energetically advantageous to rolling roughly correspond to animals' preferred gait transition speeds. The energetic collision losses as predicted by the model are largely verified experimentally. PMID:29538459
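The collision-based comparison can be illustrated with a toy calculation: the rolling loss follows from conserving angular momentum about the step edge, while the hopping cost is taken as the ballistic energy to clear the step. The geometry, speeds, and inertia factor are assumptions for illustration, not the paper's full model.

```python
# Toy collision-loss comparison (hedged: not the paper's full model).
# Rolling: a wheel of radius R rolling at speed v hits a step of height h;
# conserving angular momentum about the step edge gives the post-collision
# angular speed, and the kinetic-energy drop is the collision cost.
# Hopping: the cost is taken simply as the ballistic energy to clear the
# step, m*g*h (landing losses ignored).
def rolling_loss(m, R, v, h, i_factor=0.5):    # i_factor: I_cm = i_factor*m*R^2
    I = i_factor * m * R * R
    ratio = (I + m * R * (R - h)) / (I + m * R * R)   # omega_after / omega_before
    ke = 0.5 * (I + m * R * R) * (v / R) ** 2         # KE about contact point
    return ke * (1.0 - ratio ** 2)

def hopping_loss(m, h, g=9.81):
    return m * g * h

m, R, h = 1.0, 0.3, 0.1        # assumed mass (kg), radius (m), step height (m)
slow, fast = 0.5, 4.0          # assumed speeds (m/s)
```

At the slow speed rolling loses less energy than hopping; at the fast speed the ordering reverses, echoing the abstract's Froude-number result.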
NASA Astrophysics Data System (ADS)
Riva, Fabio; Milanese, Lucio; Ricci, Paolo
2017-10-01
To reduce the computational cost of the uncertainty propagation analysis, which is used to study the impact of input parameter variations on the results of a simulation, a general and simple-to-apply methodology based on decomposing the solution to the model equations in terms of Chebyshev polynomials is discussed. This methodology, based on the work by Scheffel [Am. J. Comput. Math. 2, 173-193 (2012)], approximates the model equation solution with a semi-analytic expression that depends explicitly on time, spatial coordinates, and input parameters. By employing a weighted residual method, a set of nonlinear algebraic equations for the coefficients appearing in the Chebyshev decomposition is then obtained. The methodology is applied to a two-dimensional Braginskii model used to simulate plasma turbulence in basic plasma physics experiments and in the scrape-off layer of tokamaks, in order to study the impact on the simulation results of the input parameter that describes the parallel losses. The uncertainty that characterizes the time-averaged density gradient lengths, time-averaged densities, and fluctuation density level are evaluated. A reasonable estimate of the uncertainty of these distributions can be obtained with a single reduced-cost simulation.
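The core idea, replacing repeated expensive runs with a Chebyshev surrogate in the uncertain parameter, can be sketched with a stand-in analytic "model"; the function, interval, and polynomial degree below are arbitrary choices, not those of the Braginskii study.

```python
import numpy as np

# Sketch: fit a Chebyshev expansion to a model's dependence on an uncertain
# input at a few collocation points, then propagate uncertainty by sampling
# the cheap surrogate instead of re-running the model.  The "model" here is
# a stand-in analytic function, not a plasma simulation.
def expensive_model(alpha):            # placeholder for a full simulation
    return np.exp(-alpha) * np.sin(3 * alpha)

nodes = np.cos(np.pi * (np.arange(9) + 0.5) / 9)   # Chebyshev points on [-1, 1]
alpha_nodes = 0.5 + 0.25 * nodes                   # map to the assumed range [0.25, 0.75]
coeffs = np.polynomial.chebyshev.chebfit(nodes, expensive_model(alpha_nodes), 8)

# Propagate a uniform input uncertainty through the surrogate
samples = np.random.default_rng(0).uniform(0.25, 0.75, 10000)
surrogate = np.polynomial.chebyshev.chebval((samples - 0.5) / 0.25, coeffs)
```

Nine model evaluations build the surrogate; the ten thousand uncertainty samples then cost essentially nothing.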
NASA Astrophysics Data System (ADS)
de Vieilleville, F.; Ristorcelli, T.; Delvit, J.-M.
2016-06-01
This paper presents a method for dense DSM reconstruction from a high-resolution, mono-sensor, passive spatial panchromatic image sequence. The interest of our approach is four-fold. Firstly, we extend the core of light field approaches using an explicit BRDF model from the image synthesis community which is more realistic than the Lambertian model. The chosen model is the Cook-Torrance BRDF, which enables us to model rough surfaces with specular effects using specific material parameters. Secondly, we extend light field approaches to non-pinhole sensors and non-rectilinear motion by using a proper geometric transformation on the image sequence. Thirdly, we produce a 3D cost volume embodying all the tested candidate heights and filter it using simple methods such as cost-volume filtering or variational optimization methods. We have tested our method on a Pleiades image sequence at various locations with dense urban buildings and report encouraging results with respect to classic multi-label methods such as MicMac, or more recent pipelines such as S2P. Last but not least, our method also produces maps of material parameters on the estimated points, allowing us to simplify building classification or road extraction.
HEAVY-DUTY GREENHOUSE GAS EMISSIONS MODEL ...
Class 2b-8 vocational truck manufacturers and Class 7/8 tractor manufacturers would be subject to vehicle-based fuel economy and emission standards that would use a truck simulation model to evaluate the impact of the truck tires and/or tractor cab design on vehicle compliance with any new standards. The EPA has created a model called “GHG Emissions Model (GEM)”, which is specifically tailored to predict truck GHG emissions. As the model is designed for the express purpose of vehicle compliance demonstration, it is less configurable than similar commercial products and its only outputs are GHG emissions and fuel consumption. This approach gives a simple and compact tool for vehicle compliance without the overhead and costs of a more sophisticated model. The model evaluates both fuel consumption and CO2 emissions from heavy-duty highway vehicles through whole-vehicle operation simulation.
NASA Astrophysics Data System (ADS)
Oleksandrov, Sergiy; Kwon, Jung Ho; Lee, Ki-chang; Sujin-Ku; Paek, Mun Cheol
2014-09-01
This work introduces a novel chip to be used in the future as a simple and cost-effective method for creating DNA arrays using light-emitting diode (LED) photolithography. The DNA chip platform contains 24 independent reaction sites, which allows for the testing of a corresponding number of patient samples in hospital. An array of commercial UV LEDs and lens systems was combined with a microfluidic flow system to provide patterning of 24 individual reaction sites, each with 64 independent probes. Using the LED array instead of conventional laser exposure systems or micro-mirror systems significantly reduces the cost of equipment. The microfluidic system together with microfluidic flow cells drastically reduces the amount of used reagents, which is important due to the high cost of commercial reagents. The DNA synthesis efficiency was verified by fluorescence labeling and conventional hybridization.
Evaluation of the CEAS model for barley yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1981-01-01
The CEAS yield model is based upon multiple regression analysis at the CRD and state levels. For the historical time series, yield is regressed on a set of variables derived from monthly mean temperature and monthly precipitation. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-79) demonstrated that biases are small and performance as indicated by the root mean square errors is acceptable for the intended application; however, model response for individual years, particularly unusual years, is not very reliable and shows some large errors. The model is objective, adequate, timely, simple and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
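A CEAS-style model, yield regressed on a linear technology trend plus weather covariates, can be sketched with synthetic data; the coefficients and series below are invented, not the North Dakota/Minnesota inputs.

```python
import numpy as np

# Sketch of a CEAS-style yield model: ordinary least squares of yield on a
# linear technology trend plus weather covariates.  All data are synthetic;
# real CRD/state series would replace them.
rng = np.random.default_rng(42)
years = np.arange(1960, 1980)
temp = 18 + rng.normal(0, 1.5, years.size)      # mean growing-season temp, C (assumed)
precip = 60 + rng.normal(0, 12, years.size)     # growing-season precip, mm (assumed)
yield_obs = (20 + 0.4 * (years - 1960)          # technology trend
             - 0.8 * (temp - 18) + 0.05 * (precip - 60)
             + rng.normal(0, 1.0, years.size))  # unexplained variation

# Design matrix: intercept, trend, temperature anomaly, precipitation anomaly
X = np.column_stack([np.ones(years.size), years - 1960, temp - 18, precip - 60])
beta, *_ = np.linalg.lstsq(X, yield_obs, rcond=None)
```

A bootstrap test like the one in the abstract would refit this regression repeatedly while holding each year out in turn.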
A simple method for the extraction and identification of light density microplastics from soil.
Zhang, Shaoliang; Yang, Xiaomei; Gertsen, Hennie; Peters, Piet; Salánki, Tamás; Geissen, Violette
2018-03-01
This article introduces a simple and cost-saving method developed to extract, distinguish and quantify light density microplastics of polyethylene (PE) and polypropylene (PP) in soil. A floatation method using distilled water was used to extract the light density microplastics from soil samples. Microplastics and impurities were identified using a heating method (3-5 s at 130 °C). The number and size of particles were determined using a camera (Leica DFC 425) connected to a microscope (Leica Wild M3C, Type S, simple light, 6.4×). Quantification of the microplastics was conducted using a developed model. Results showed that the floatation method was effective in extracting microplastics from soils, with recovery rates of approximately 90%. After being exposed to heat, the microplastics in the soil samples melted and were transformed into circular transparent particles while other impurities, such as organic matter and silicates, were not changed by the heat. Regression analysis of microplastics weight and particle volume (a calculation based on ImageJ software analysis) after heating showed the best fit (y = 1.14x + 0.46, R² = 99%, p < 0.001). Recovery rates based on the empirical model method were >80%. Results from field samples collected from North-western China prove that our method of repetitive floatation and heating can be used to extract, distinguish and quantify light density polyethylene microplastics in soils. Microplastics mass can be evaluated using the empirical model. Copyright © 2017 Elsevier B.V. All rights reserved.
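Applying the reported calibration y = 1.14x + 0.46 to convert image-derived particle volumes into mass estimates is straightforward; the sample volumes below are made up, and units follow the paper's fit without being specified here.

```python
# Applying the abstract's fitted calibration (y = 1.14x + 0.46, where x is
# the particle volume from image analysis and y the microplastic weight).
# The sample volumes are hypothetical; units follow the paper's fit.
def estimated_mass(particle_volume):
    return 1.14 * particle_volume + 0.46

volumes = [0.2, 1.5, 3.1]                  # made-up image-derived volumes
masses = [estimated_mass(v) for v in volumes]
total = sum(masses)
```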
Profitable capitation requires accurate costing.
West, D A; Hicks, L L; Balas, E A; West, T D
1996-01-01
In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages.
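The contrast between the ratio-of-cost-to-treatment method and an ABC-style allocation can be sketched with made-up numbers: the aggregate method spreads total cost evenly, while the activity-based version traces nursing minutes and supplies to each treatment type.

```python
# Aggregate vs. activity-based treatment costing (all figures invented).
total_cost = 120000.0
treatments = {"simple_dressing": 900, "complex_wound": 100}   # annual volumes

# Aggregate (ratio-of-cost-to-treatment): every treatment gets the average
avg_cost = total_cost / sum(treatments.values())

# ABC-style: trace nursing minutes and supplies actually consumed
minutes = {"simple_dressing": 10, "complex_wound": 90}        # per treatment
supplies = {"simple_dressing": 5.0, "complex_wound": 80.0}    # per treatment
nursing_pool = total_cost - sum(supplies[t] * n for t, n in treatments.items())
rate = nursing_pool / sum(minutes[t] * n for t, n in treatments.items())
abc_cost = {t: minutes[t] * rate + supplies[t] for t in treatments}
```

The averaged figure overstates the cost of simple treatments and badly understates complex ones, which is exactly the distortion that matters when bidding capitation rates.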
A simple, low-cost, data logging pendulum built from a computer mouse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gintautas, Vadas; Hubler, Alfred
Lessons and homework problems involving a pendulum are often a big part of introductory physics classes and laboratory courses from high school to undergraduate levels. Although laboratory equipment for pendulum experiments is commercially available, it is often expensive and may not be affordable for teachers on fixed budgets, particularly in developing countries. We present a low-cost, easy-to-build rotary sensor pendulum using the existing hardware in a ball-type computer mouse. We demonstrate how this apparatus may be used to measure both the frequency and coefficient of damping of a simple physical pendulum. This easily constructed laboratory equipment makes it possible for all students to have hands-on experience with one of the most important simple physical systems.
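To show how angle samples from such an apparatus yield the damping coefficient, here is a sketch using synthetic damped-cosine data in place of real mouse-encoder readings; the amplitude, damping, and frequency values are assumptions:

```python
import math

def damped_angle(t, amp=0.3, gamma=0.05, omega=2 * math.pi):
    """Synthetic pendulum angle: an exponentially decaying cosine."""
    return amp * math.exp(-gamma * t) * math.cos(omega * t)

# Successive positive peaks are one period T apart; the logarithmic
# decrement delta = ln(theta_n / theta_{n+1}) then gives gamma = delta / T.
T = 1.0                                    # period in seconds
p0, p1 = damped_angle(0.0), damped_angle(T)
gamma_est = math.log(p0 / p1) / T          # recovers gamma = 0.05
frequency = 1.0 / T                        # peak spacing gives the frequency
```

With real data the peaks would be located by scanning the logged samples for local maxima, but the decrement arithmetic is unchanged.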
NASA Technical Reports Server (NTRS)
Manousiouthakis, Vasilios
1995-01-01
We developed simple mathematical models for many of the technologies constituting the water reclamation system in a space station. These models were employed for subsystem optimization and for the evaluation of the performance of individual water reclamation technologies, by quantifying their operational 'cost' as a linear function of weight, volume, and power consumption. Then we performed preliminary investigations of the performance improvements attainable by simple hybrid systems involving parallel combinations of technologies. We are developing a software tool for synthesizing a hybrid water recovery system (WRS) for long-term space missions. As a conceptual framework, we are employing the state space approach. Given a number of available technologies and the mission specifications, the state space approach would help design flowsheets featuring optimal process configurations, including those with stream connections in parallel, series, or recycles. We envision this software tool functioning as follows: given the mission duration, the crew size, water quality specifications, and the cost coefficients, the software will synthesize a water recovery system for the space station. It should require minimal user intervention. The following tasks need to be solved to achieve this goal: (1) formulate a problem statement that will be used to evaluate the advantages of a hybrid WRS over a single-technology WRS; (2) model several WRS technologies that can be employed in the space station; (3) propose a recycling network design methodology (since the WRS synthesis task is a recycling network design problem, it is essential to employ a systematic method in synthesizing this network); (4) develop a software implementation for this design methodology, design a hybrid system using this software, and compare the resulting WRS with a base-case WRS; and (5) create a user-friendly interface for this software tool.
Validation of a DICE Simulation Against a Discrete Event Simulation Implemented Entirely in Code.
Möller, Jörgen; Davis, Sarah; Stevenson, Matt; Caro, J Jaime
2017-10-01
Modeling is an essential tool for health technology assessment, and various techniques for conceptualizing and implementing such models have been described. Recently, a new method has been proposed, the discretely integrated condition event (DICE) simulation, which enables frequently employed approaches to be specified using a common, simple structure that can be entirely contained and executed within widely available spreadsheet software. To assess whether a DICE simulation provides equivalent results to an existing discrete event simulation, a comparison was undertaken. A model of osteoporosis and its management, programmed entirely in Visual Basic for Applications and made public by the National Institute for Health and Care Excellence (NICE) Decision Support Unit, was downloaded and used to guide construction of its DICE version in Microsoft Excel®. The DICE model was then run using the same inputs and settings, and the results were compared. The DICE version produced results that are nearly identical to the original ones, with differences that would not affect the decision direction of the incremental cost-effectiveness ratios (<1% discrepancy), despite the stochastic nature of the models. The main limitation of the simple DICE version is its slow execution speed. DICE simulation did not alter the results and, thus, should provide a valid way to design and implement decision-analytic models without requiring specialized software or custom programming. Additional efforts need to be made to speed up execution.
Solares, Santiago D.
2015-11-26
This study introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Lastly, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other control schemes in order to aid in the interpretation of AFM experiments.
Solares, Santiago D
2015-01-01
This paper introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Finally, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other control schemes in order to aid in the interpretation of AFM experiments.
Adaptive behaviour and multiple equilibrium states in a predator-prey model.
Pimenov, Alexander; Kelly, Thomas C; Korobeinikov, Andrei; O'Callaghan, Michael J A; Rachinskii, Dmitrii
2015-05-01
There is evidence that multiple stable equilibrium states are possible in real-life ecological systems. Phenomenological mathematical models which exhibit such properties can be constructed rather straightforwardly. For instance, for a predator-prey system this result can be achieved through the use of a non-monotonic functional response for the predator. However, while the formal formulation of such a model is not a problem, the biological justification for such functional responses and models is usually inconclusive. In this note, we explore a conjecture that a multitude of equilibrium states can be caused by an adaptation of animal behaviour to changes in environmental conditions. In order to verify this hypothesis, we consider a simple predator-prey model, which is a straightforward extension of the classic Lotka-Volterra predator-prey model. In this model, we make the intuitively transparent assumption that the prey can change its mode of behaviour in response to the pressure of predation, choosing either "safe" or "risky" ("business as usual") behaviour. In order to avoid a situation where one of the modes gives an absolute advantage, we introduce the concept of the "cost of a policy" into the model. A simple conceptual two-dimensional predator-prey model, which is minimal in exhibiting this property, and which does not rely on odd functional responses, higher dimensionality or behaviour change for the predator, exhibits two stable co-existing equilibrium states with basins of attraction separated by the separatrix of a saddle point. Copyright © 2015 Elsevier Inc. All rights reserved.
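A minimal numerical sketch of the idea, assuming a forward-Euler integration of a Lotka-Volterra system in which the prey switch to a "safe" mode above a predator-density threshold; all rate constants, the threshold, and the size of the policy's cost are invented for illustration:

```python
def simulate(x0=1.0, y0=0.5, steps=5000, dt=0.01):
    """Prey x and predator y under classic Lotka-Volterra dynamics,
    with a prey behaviour switch when predators are numerous."""
    a, b, c, d = 1.0, 1.0, 1.0, 0.5       # growth/predation/conversion/death
    x, y = x0, y0
    for _ in range(steps):
        safe = y > 0.8                      # prey switch to "safe" behaviour
        growth = 0.5 * a if safe else a     # cost of the safe policy
        predation = 0.3 * b if safe else b  # benefit of the safe policy
        dx = growth * x - predation * x * y
        dy = c * predation * x * y - d * y
        x, y = x + dt * dx, y + dt * dy
    return x, y

x_end, y_end = simulate()
```

Sweeping the initial conditions (x0, y0) over a grid and recording where trajectories settle is the natural way to map the two basins of attraction the paper describes.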
Naval Surface Forces Real-Time Reutilization Asset Management Warehouses: A Cost-Benefit Analysis
2008-12-01
shared their valuable insights. And finally, to our advisors Dr. Ken Euske and CDR Brett Wagner who kept us on track throughout the project. Nick could...of $206,368,657. SURFOR’s East and West Coast warehouses accounted for $146,975,108 of the list value. The team identified potential cost...obligating the funds. A relatively simple cost benefit analysis of the reprogramming costs versus the cost savings would justify the expense. In the
Role of design complexity in technology improvement.
McNerney, James; Farmer, J Doyne; Redner, Sidney; Trancik, Jessika E
2011-05-31
We study a simple model for the evolution of the cost (or more generally the performance) of a technology or production process. The technology can be decomposed into n components, each of which interacts with a cluster of d - 1 other components. Innovation occurs through a series of trial-and-error events, each of which consists of randomly changing the cost of each component in a cluster, and accepting the changes only if the total cost of the cluster is lowered. We show that the relationship between the cost of the whole technology and the number of innovation attempts is asymptotically a power law, matching the functional form often observed for empirical data. The exponent α of the power law depends on the intrinsic difficulty of finding better components, and on what we term the design complexity: the more complex the design, the slower the rate of improvement. Letting d as defined above be the connectivity, in the special case in which the connectivity is constant, the design complexity is simply the connectivity. When the connectivity varies, bottlenecks can arise in which a few components limit progress. In this case the design complexity depends on the details of the design. The number of bottlenecks also determines whether progress is steady, or whether there are periods of stasis punctuated by occasional large changes. Our model connects the engineering properties of a design to historical studies of technology improvement.
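A toy re-implementation of the trial-and-error dynamic described above (an illustrative sketch, not the authors' code; the values of n, d, and the number of attempts are arbitrary): each attempt redraws the costs of a random cluster of d components and keeps the draw only if the cluster's total cost falls.

```python
import random

def evolve(n=10, d=3, attempts=2000, seed=1):
    """Return (initial, final) total cost after trial-and-error innovation."""
    rng = random.Random(seed)
    cost = [rng.random() for _ in range(n)]
    initial = sum(cost)
    for _ in range(attempts):
        cluster = rng.sample(range(n), d)           # pick a cluster of size d
        proposal = [rng.random() for _ in cluster]  # redraw the cluster costs
        # Accept only if the cluster's total cost is lowered.
        if sum(proposal) < sum(cost[i] for i in cluster):
            for i, c in zip(cluster, proposal):
                cost[i] = c
    return initial, sum(cost)

initial, final = evolve()
```

Logging the total cost against the attempt count, rather than just the endpoints, is what reveals the power-law improvement the paper analyzes.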
Technology commercialization cost model and component case study
NASA Astrophysics Data System (ADS)
1991-12-01
Fuel cells seem poised to emerge as a clean, efficient, and cost-competitive source of fossil-fuel-based electric power and thermal energy. Sponsors of fuel cell technology development need to determine the validity and the attractiveness of a technology to the market in terms of meeting requirements and providing value which exceeds the total cost of ownership. Sponsors of fuel cell development have addressed this issue by requiring the developers to prepare projections of the future production cost of their fuel cells in commercial quantities. These projected costs, together with performance and life projections, provide a preliminary measure of the total value and cost of the product to the customer. Booz-Allen & Hamilton Inc. and Michael A. Cobb & Company have been retained in several assignments over the years to audit these cost projections. The audits have gone well beyond a simple review of the numbers. They have probed the underlying technical and financial assumptions and the sources of data on material and equipment costs, and explored issues such as the realistic manufacturing yields which can be expected in various processes. Based on the experience gained from these audits, DOE gave Booz-Allen and Michael A. Cobb & Company the task of developing criteria to be used in the execution of future fuel cell manufacturing cost studies. It was thought that such criteria would make it easier to execute such studies in the future, as well as make them more understandable and comparable.
Lamberts, Mark P; Özdemir, Cihan; Drenth, Joost P H; van Laarhoven, Cornelis J H M; Westert, Gert P; Kievit, Wietske
2017-06-01
The aim of this study was to determine the cost-effectiveness of a new strategy for the preoperative detection of patients likely to benefit from a cholecystectomy, using simple criteria that can be applied by surgeons. The criteria for a cholecystectomy indication are: (1) having episodic pain; (2) onset of pain 1 year or less before the outpatient clinic visit. The cost-effectiveness of the new strategy was evaluated against current practice using a decision analytic model. The incremental cost-effectiveness of applying criteria for a cholecystectomy for a patient with abdominal pain and gallstones was compared to applying no criteria. The incremental cost-effectiveness ratio (ICER) was expressed as the extra costs to be invested to gain one more patient with absence of pain. Scenarios were analyzed to assess the influence of applying different criteria. The new strategy of applying one out of two criteria resulted in a 4% higher mean proportion of patients with absence of pain compared to current practice, with similar costs. The 95% upper limit of the ICER was €4114 ($4633) per extra patient with relief of upper abdominal pain. Application of two out of two criteria resulted in a 3% lower mean proportion of patients with absence of pain, with lower costs. The new strategy of using one out of two strict selection criteria may thus be not only an effective but also a cost-effective method to reduce the proportion of patients with pain after cholecystectomy.
Reallocating attention during multiple object tracking.
Ericson, Justin M; Christensen, James C
2012-07-01
Wolfe, Place, and Horowitz (Psychonomic Bulletin & Review 14:344-349, 2007) found that participants were relatively unaffected by selecting and deselecting targets while performing a multiple object tracking task, such that tracking could be maintained for longer durations than the few seconds typically studied. Though this result was generally consistent with other findings on tracking duration (Franconeri, Jonathon, & Scimeca, Psychological Science 21:920-925, 2010), it was inconsistent with research involving cuing paradigms, specifically precues (Pylyshyn & Annan, Spatial Vision 19:485-504, 2006). In the present research, we broke down the addition and removal of targets into separate conditions and incorporated a simple performance model to evaluate the costs associated with the selection and deselection of moving targets. Across three experiments, we found no evidence of a cost associated with shifts of attention; rather, varying the type of cue used for target deselection produced no additional cost to performance, and hysteresis effects were not induced by a reduction in tracking load.
Low cost voice compression for mobile digital radios
NASA Technical Reports Server (NTRS)
Omura, J. K.
1985-01-01
A new technique for low-cost, robust voice compression at 4800 bits per second was studied. The approach was based on using a cascade of digital biquad adaptive filters with simplified multipulse excitation, followed by simple bit sequence compression.
Understanding quantum tunneling using diffusion Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Inack, E. M.; Giudici, G.; Parolini, T.; Santoro, G.; Pilati, S.
2018-03-01
In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as 1/Δ², where Δ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests that there is no quantum advantage in using QAs with respect to quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model (Andriyash and Amin, arXiv:1703.09277), where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving open the possibility for potential quantum speedup, even for stoquastic models. In this work we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as 1/Δ, i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain indicates an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.
Van Driel, Robin; Trask, Catherine; Johnson, Peter W; Callaghan, Jack P; Koehoorn, Mieke; Teschke, Kay
2013-01-01
Measuring trunk posture in the workplace commonly involves subjective observation or self-report methods or the use of costly and time-consuming motion analysis systems (current gold standard). This work compared trunk inclination measurements using a simple data-logging inclinometer with trunk flexion measurements using a motion analysis system, and evaluated adding measures of subject anthropometry to exposure prediction models to improve the agreement between the two methods. Simulated lifting tasks (n=36) were performed by eight participants, and trunk postures were simultaneously measured with each method. There were significant differences between the two methods, with the inclinometer initially explaining 47% of the variance in the motion analysis measurements. However, adding one key anthropometric parameter (lower arm length) to the inclinometer-based trunk flexion prediction model reduced the differences between the two systems and accounted for 79% of the motion analysis method's variance. Although caution must be applied when generalizing lower-arm length as a correction factor, the overall strategy of anthropometric modeling is a novel contribution. In this lifting-based study, by accounting for subject anthropometry, a single, simple data-logging inclinometer shows promise for trunk posture measurement and may have utility in larger-scale field studies where similar types of tasks are performed.
Tani, Yuji; Ogasawara, Katsuhiko
2012-01-01
This study aimed to contribute to the management of a healthcare organization by providing management information through time-series analysis of business data accumulated in the hospital information system, which had not been utilized thus far. We examined the performance of a prediction method using the auto-regressive integrated moving-average (ARIMA) model on business data obtained from the Radiology Department. We built the model using the number of radiological examinations over the past 9 years, predicted the number of radiological examinations for the final year, and then compared the actual values with the forecast values. We established that the prediction method was simple and cost-effective, requiring only free software. In addition, we were able to build the simple model by removing trend components from the data in pre-processing. The difference between predicted and actual values was 10%; however, it was more important to understand the chronological change than the individual time-series values. Furthermore, our method was highly versatile and adaptable to general time-series data. Therefore, different healthcare organizations can use our method for the analysis and forecasting of their business data.
Concurrent airline fleet allocation and aircraft design with profit modeling for multiple airlines
NASA Astrophysics Data System (ADS)
Govindaraju, Parithi
A "System of Systems" (SoS) approach is particularly beneficial in analyzing complex large-scale systems comprised of numerous independent systems -- each capable of independent operations in its own right -- that, when brought together, offer capabilities and performance beyond those of the individual constituent systems. The variable resource allocation problem is a type of SoS problem, which includes the allocation of "yet-to-be-designed" systems in addition to existing resources and systems. The methodology presented here expands upon earlier work that demonstrated a decomposition approach seeking to simultaneously design a new aircraft and allocate this new aircraft, along with existing aircraft, to meet passenger demand at minimum fleet-level operating cost for a single airline. The result of this process describes important characteristics of the new aircraft. The ticket price model developed and implemented here enables analysis of the system using profit maximization studies instead of cost minimization. A multiobjective problem formulation has been implemented to determine characteristics of a new aircraft that maximizes the profit of multiple airlines, recognizing the fact that aircraft manufacturers sell their aircraft to multiple customers and seldom design aircraft customized to a single airline's operations. The route network characteristics of two simple airlines serve as the example problem for the initial studies. The resulting problem formulation is a mixed-integer nonlinear programming problem, which is typically difficult to solve. A sequential decomposition strategy is applied as a solution methodology by segregating the allocation (integer programming) and aircraft design (non-linear programming) subspaces. After solving a simple problem considering two airlines, the decomposition approach is then applied to two larger airline route networks representing actual airline operations in the year 2005.
The decomposition strategy serves as a promising technique for future detailed analyses. Results from the profit maximization studies favor a smaller aircraft in terms of passenger capacity due to its higher yield generation capability on shorter routes while results from the cost minimization studies favor a larger aircraft due to its lower direct operating cost per seat mile.
Distribution of model uncertainty across multiple data streams
NASA Astrophysics Data System (ADS)
Wutzler, Thomas
2014-05-01
When confronting biogeochemical models with a diversity of observational data streams, we are faced with the problem of weighting the data streams. Without weighting, or with multiple blocked cost functions, model uncertainty is allocated to the sparse data streams, and possible bias in processes that are strongly constrained is exported to processes that are constrained only by sparse data streams. In this study we propose an approach that aims at making model uncertainty a factor of observation uncertainty that is constant across all data streams. Further, we propose an implementation based on Markov chain Monte Carlo sampling combined with simulated annealing that is able to determine this variance factor. The method is exemplified both with very simple models and artificial data, and with an inversion of the DALEC ecosystem carbon model against multiple observations of Howland Forest. We argue that the presented approach can help mitigate, and perhaps resolve, the problem of bias export to sparse data streams.
Direct diode-pumped Kerr-lens mode-locked Ti:sapphire laser
Durfee, Charles G.; Storz, Tristan; Garlick, Jonathan; Hill, Steven; Squier, Jeff A.; Kirchner, Matthew; Taft, Greg; Shea, Kevin; Kapteyn, Henry; Murnane, Margaret; Backus, Sterling
2012-01-01
We describe a Ti:sapphire laser pumped directly with a pair of 1.2 W, 445 nm laser diodes. With over 30 mW average power at 800 nm and a measured pulse width of 15 fs, Kerr-lens mode-locked pulses are available with dramatically decreased pump cost. We propose a simple model to explain the observed highly stable Kerr-lens mode-locking in spite of the fact that both the mode-locked and continuous-wave modes are smaller than the pump mode in the crystal. PMID:22714433
Surgical audit in the developing countries.
Bankole, J O; Lawal, O O; Adejuyigbe, O
2003-01-01
Audit assures the provision of good-quality health service at affordable cost. To be complete, therefore, surgical practice in the young developing countries, as elsewhere, must incorporate auditing. The peculiarities of the developing countries and an insufficient understanding of auditing may, however, explain why it is little practised. This article therefore reviews the objectives, the commonly evaluated aspects, and the methods of audit, and includes a simple model of the audit cycle. It is hoped that it will kindle the idea of the regular practice of quality assurance among surgeons working in the young developing nations and engender a sustainable interest.
Inverse kinematics of a dual linear actuator pitch/roll heliostat
NASA Astrophysics Data System (ADS)
Freeman, Joshua; Shankar, Balakrishnan; Sundaram, Ganesh
2017-06-01
This work presents a simple, computationally efficient inverse kinematics solution for a pitch/roll heliostat using two linear actuators. The heliostat design and kinematics have been developed, modeled and tested using computer simulation software. A physical heliostat prototype was fabricated to validate the theoretical computations and data. Pitch/roll heliostats have numerous advantages including reduced cost potential and reduced space requirements, with a primary disadvantage being the significantly more complicated kinematics, which are solved here. Novel methods are applied to simplify the inverse kinematics problem which could be applied to other similar problems.
Abdullah, Asnawi; Hort, Krishna; Abidin, Azwar Zaenal; Amin, Fadilah M
2012-01-01
Despite significant investment in improving service infrastructure and training of staff, public primary healthcare services in low-income and middle-income countries tend to perform poorly in reaching coverage targets. One of the factors identified in Aceh, Indonesia was the lack of operational funds for service provision. The objective of this study was to develop a simple and transparent costing tool that enables health planners to calculate the unit costs of providing basic health services and to estimate the additional budget required to deliver services in accordance with national targets. The tool was developed using a standard economic approach that linked the input activities to achieving six national priority programs at the primary healthcare level: health promotion; sanitation and environmental health; maternal and child health and family planning; nutrition; immunization and communicable disease control; and treatment of common illness. Costing focused on the costs of delivering the programs that need to be funded by local government budgets. The costing tool, consisting of 16 linked Microsoft Excel worksheets, was developed and tested in several districts; it enabled the calculation of the unit costs of delivering the six national priority programs per coverage target of each program (such as the unit cost of delivering the maternal and child health program per pregnant mother). This costing tool can be used by health planners to estimate the additional money required to achieve a certain level of coverage of programs, and it can be adjusted for different costs and program delivery parameters in different settings. Copyright © 2012 John Wiley & Sons, Ltd.
When Could a Stigma Program to Address Mental Illness in the Workplace Break Even?
Dewa, Carolyn S; Hoch, Jeffrey S
2014-01-01
Objective: To explore the basic requirements for a stigma program to produce sufficient savings to pay for itself (that is, break even). Methods: A simple economic model was developed to compare reductions in total short-term disability (SDIS) costs relative to a stigma program’s costs. A 2-way sensitivity analysis is used to illustrate the conditions under which this break-even scenario occurs. Results: Using estimates from the literature for SDIS costs, this analysis shows that a stigma program can provide added value even if there is no reduction in the length of an SDIS leave. To break even, a stigma program with no reduction in the length of an SDIS leave would need to prevent at least 2.5 SDIS claims in an organization of 1000 workers. Similarly, a stigma program can break even with no reduction in the number of SDIS claims if it is able to shorten SDIS episodes by at least 7 days in an organization of 1000 employees. Conclusions: Modelling results, such as those presented in our paper, provide information to help occupational health payers become prudent buyers in the mental health marketplace. While in most cases the required reductions seem modest, the real test of both the model and the program occurs once a stigma program is piloted and evaluated in a real-world setting. PMID:25565701
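The break-even arithmetic reduces to two divisions; a minimal sketch in which the program cost, per-claim cost, and per-day cost are invented figures, not the paper's estimates:

```python
def claims_to_break_even(program_cost, cost_per_claim):
    """SDIS claims that must be prevented for savings to cover the program."""
    return program_cost / cost_per_claim

def days_to_break_even(program_cost, n_claims, cost_per_day):
    """Days each SDIS episode must shorten for savings to cover the program."""
    return program_cost / (n_claims * cost_per_day)
```

For example, a hypothetical $25,000 program against a $10,000 average claim breaks even by preventing 2.5 claims, echoing the scale of the thresholds reported above.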
Joshi, Varun; Srinivasan, Manoj
2015-02-08
Understanding how humans walk on a surface that can move might provide insights into, for instance, whether walking humans prioritize energy use or stability. Here, motivated by the famous human-driven oscillations observed in the London Millennium Bridge, we introduce a minimal mathematical model of a biped, walking on a platform (bridge or treadmill) capable of lateral movement. This biped model consists of a point-mass upper body with legs that can exert force and perform mechanical work on the upper body. Using numerical optimization, we obtain energy-optimal walking motions for this biped, deriving the periodic body and platform motions that minimize a simple metabolic energy cost. When the platform has an externally imposed sinusoidal displacement of appropriate frequency and amplitude, we predict that body motion entrained to platform motion consumes less energy than walking on a fixed surface. When the platform has finite inertia, a mass-spring-damper with similar parameters to the Millennium Bridge, we show that the optimal biped walking motion sustains a large lateral platform oscillation when sufficiently many people walk on the bridge. Here, the biped model reduces walking metabolic cost by storing and recovering energy from the platform, demonstrating energy benefits for two features observed for walking on the Millennium Bridge: crowd synchrony and large lateral oscillations.
Joshi, Varun; Srinivasan, Manoj
2015-01-01
Understanding how humans walk on a surface that can move might provide insights into, for instance, whether walking humans prioritize energy use or stability. Here, motivated by the famous human-driven oscillations observed in the London Millennium Bridge, we introduce a minimal mathematical model of a biped, walking on a platform (bridge or treadmill) capable of lateral movement. This biped model consists of a point-mass upper body with legs that can exert force and perform mechanical work on the upper body. Using numerical optimization, we obtain energy-optimal walking motions for this biped, deriving the periodic body and platform motions that minimize a simple metabolic energy cost. When the platform has an externally imposed sinusoidal displacement of appropriate frequency and amplitude, we predict that body motion entrained to platform motion consumes less energy than walking on a fixed surface. When the platform has finite inertia, a mass-spring-damper with similar parameters to the Millennium Bridge, we show that the optimal biped walking motion sustains a large lateral platform oscillation when sufficiently many people walk on the bridge. Here, the biped model reduces walking metabolic cost by storing and recovering energy from the platform, demonstrating energy benefits for two features observed for walking on the Millennium Bridge: crowd synchrony and large lateral oscillations. PMID:25663810
Decision-theoretic approach to data acquisition for transit operations planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritchie, S.G.
The most costly element of transportation planning and modeling activities has usually been data acquisition. This is even truer today, when the unit costs of data collection are rising rapidly while budgets are severely limited by continuing policies of fiscal austerity in the public sector. The overall objectives of this research were to improve the decisions and decision-making capabilities of transit operators and planners in short-range transit planning, and to improve the quality and cost-effectiveness of associated route- or corridor-level data collection and service monitoring activities. A new approach was presented for sequentially updating the parameters of both simple and multiple linear regression models with stochastic regressors, and for determining the expected value of sample information and the expected net gain of sampling for associated sample designs. A new approach was also presented for estimating and updating (both spatially and temporally) the parameters of multinomial logit discrete choice models, and for determining the associated optimal sample designs for attribute-based and choice-based sampling methods. The approach provides an effective framework for addressing the question of optimal sampling method and sample size, which to date has been largely unresolved. The application of these methodologies and the feasibility of the decision-theoretic approach were illustrated with a hypothetical case study.
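The sequential parameter-updating step can be illustrated under strong simplifying assumptions: a single regressor, known noise variance, and a conjugate Normal prior on the slope. All names and the interface below are ours, not the paper's.

```python
def update_slope_posterior(mu0, var0, xs, ys, noise_var=1.0):
    """Sequentially update a Normal prior on the slope b of y = b*x + noise.

    Conjugate Normal update with known noise variance: a toy stand-in for
    the paper's sequential regression updating (names are illustrative).
    Returns the posterior mean and variance of the slope.
    """
    mu, var = mu0, var0
    for x, y in zip(xs, ys):
        # Each observation adds x^2 / noise_var to the posterior precision.
        prec = 1.0 / var + x * x / noise_var
        mu = (mu / var + x * y / noise_var) / prec
        var = 1.0 / prec
    return mu, var
```

With a near-flat prior and noiseless data y = 2x, the posterior mean converges to the least-squares slope 2, while the posterior variance shrinks with each observation, which is the quantity a decision-theoretic sample design would trade off against sampling cost.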
Successful and stable orthodontic camouflage of a mandibular asymmetry with sliding jigs.
Oliveira, Dauro Douglas; Oliveira, Bruno Franco de; Mordente, Carolina Morsani; Godoy, Gabriela Martins; Soares, Rodrigo Villamarim; Seraidarian, Paulo Isaías
2018-03-12
The purpose of this paper is to present and discuss a simple, low-cost clinical approach to correcting an asymmetric skeletal Class III malocclusion combined with an extensive dental open bite that significantly compromised the occlusal function and smile aesthetics of an adult male patient. The patient accepted neither the ideal surgical-orthodontic treatment option nor the use of temporary anchorage devices to facilitate camouflage of the asymmetric skeletal Class III/open bite. Therefore, a very simple and inexpensive biomechanical approach using sliding jigs in the mandibular arch was implemented as compensatory treatment of the malocclusion. Although only minor improvements in facial aesthetics were obtained, occlusal function and dental aesthetics improved significantly, and the patient was very satisfied with his new smile. Advantages of this treatment option included its minimal invasiveness and remarkably low cost. Moreover, the final results fulfilled all realistic treatment objectives and the patient's expectations. Results remained stable 5 years post-treatment, demonstrating that excellent outcomes can be obtained with simple, low-cost, but well-controlled mechanics.
Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach
NASA Astrophysics Data System (ADS)
Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto
2017-12-01
In this paper we consider a parameter identification problem (PIP) for data oscillating in time that can be described by the dynamics of an ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. For data with this structure, straightforward application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple 'low' minima. Since in this situation any optimization algorithm is liable to fail to approximate a good solution, we propose a Fourier regularization approach that identifies an iso-frequency manifold $S$ of codimension one in the parameter space.
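The core idea, restricting the search to parameter sets that reproduce the data's dominant frequency, starts from estimating that frequency. A minimal sketch using a naive DFT (the function name and interface are illustrative, not from the paper):

```python
import cmath
import math

def dominant_frequency(samples, dt):
    """Return the dominant frequency (in Hz) of a real signal via a naive DFT.

    Illustrates the first step of a Fourier-based regularization: extract
    the data's main oscillation frequency, which then constrains the
    parameter search to an iso-frequency set. O(n^2); fine for a sketch.
    """
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]   # drop the DC component
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):               # positive frequencies only
        coeff = sum(c * cmath.exp(-2j * math.pi * k * i / n)
                    for i, c in enumerate(centered))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k / (n * dt)                 # bin index -> frequency in Hz
```

For a 5 Hz sinusoid sampled at 100 Hz over 2 s, the peak bin maps back to exactly 5 Hz.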
Fu, Pengcheng; Johnson, Scott M.; Carrigan, Charles R.
2012-01-31
This paper documents our effort to use a fully coupled hydro-geomechanical numerical test bed to study the use of low hydraulic pressure to stimulate geothermal reservoirs with existing fracture networks. In this low-pressure stimulation strategy, the fluid pressure is lower than the minimum in situ compressive stress, so the fractures are not completely opened, but permeability improvement can still be achieved through shear dilation. We found that in this low-pressure regime, the coupling between the fluid phase and the solid rock phase becomes very simple, and the numerical model can achieve a low computational cost. Using this modified model, we study the behavior of a single fracture and of a random fracture network.
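Why shear dilation is so effective follows from the standard cubic law, under which fracture transmissivity scales with the cube of the aperture, so a small dilation-induced aperture gain yields a large permeability gain. A sketch of the textbook idealization (constants assume water at roughly 20 °C; this is not the paper's coupled model):

```python
def fracture_transmissivity(aperture_m):
    """Cubic law: transmissivity of a smooth parallel-plate fracture.

    T = rho*g * b^3 / (12 * mu), with aperture b in metres. A standard
    idealization used here to illustrate why a modest shear-dilation
    aperture increase raises permeability without fully opening the
    fracture; real fractures deviate from the smooth-plate assumption.
    """
    mu = 1.0e-3      # Pa*s, dynamic viscosity of water at ~20 C
    rho_g = 9810.0   # N/m^3, unit weight of water
    return rho_g * aperture_m ** 3 / (12.0 * mu)
```

Doubling the aperture multiplies transmissivity by eight, which is the leverage low-pressure shear stimulation exploits.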
Sebastián, Eduardo; Armiens, Carlos; Gómez-Elvira, Javier; Zorzano, María P; Martinez-Frias, Jesus; Esteban, Blanca; Ramos, Miguel
2010-01-01
We describe the parameters that drive the design and modeling of the Rover Environmental Monitoring Station (REMS) Ground Temperature Sensor (GTS), an instrument aboard NASA's Mars Science Laboratory, and report preliminary test results. The REMS GTS is a lightweight, low-power, and low-cost pyrometer for measuring the Martian surface kinematic temperature. The sensor's main feature is its innovative design, based on a simple mechanical structure with no moving parts. It includes an in-flight calibration system that permits recalibration when the sensor's sensitivity has been degraded by dust deposited on the optics. This paper provides the first results from a GTS engineering model operating in a Martian-like, extreme environment.
[Caenorhabditis elegans: a powerful tool for drug discovery].
Jia, Xi-Hua; Cao, Cheng
2009-07-01
The simple model organism Caenorhabditis elegans has contributed substantially to fundamental research in biology. In the era of functional genomics, the nematode has been developed into a multi-purpose tool that can be exploited to identify disease-causing or disease-associated genes and to validate potential drug targets. This, coupled with its genetic amenability, low-cost experimental manipulation, and compatibility with high-throughput screening in an intact physiological context, makes the organism an effective toolbox for drug discovery. This review describes the unique features of C. elegans and how it can play a valuable role in understanding the molecular mechanisms of human diseases and in finding drug leads during the drug development process.
Electrostatics of electron-hole interactions in van der Waals heterostructures
NASA Astrophysics Data System (ADS)
Cavalcante, L. S. R.; Chaves, A.; Van Duppen, B.; Peeters, F. M.; Reichman, D. R.
2018-03-01
The role of dielectric screening of electron-hole interaction in van der Waals heterostructures is theoretically investigated. A comparison between models available in the literature for describing these interactions is made and the limitations of these approaches are discussed. A simple numerical solution of Poisson's equation for a stack of dielectric slabs based on a transfer matrix method is developed, enabling the calculation of the electron-hole interaction potential at very low computational cost and with reasonable accuracy. Using different potential models, direct and indirect exciton binding energies in these systems are calculated within Wannier-Mott theory, and a comparison of theoretical results with recent experiments on excitons in two-dimensional materials is discussed.
NASA Astrophysics Data System (ADS)
Kuo, Cynthia; Walker, Jesse; Perrig, Adrian
Bluetooth Simple Pairing and Wi-Fi Protected Setup specify mechanisms for exchanging authentication credentials in wireless networks. Both Simple Pairing and Protected Setup support multiple setup mechanisms, which increases security risks and hurts the user experience. To improve the security and usability of these specifications, we suggest defining a common baseline for hardware features and a consistent, interoperable user experience across devices.
ERIC Educational Resources Information Center
Namwong, Pithakpong; Jarujamrus, Purim; Amatatongchai, Maliwan; Chairam, Sanoe
2018-01-01
In this article, we report a low-cost, simple, and rapid method for fabricating paper-based analytical devices (PADs) by wax screen-printing. An acid-base reaction is implemented in the simple PADs to demonstrate to students the chemistry concept of a limiting reagent. When a fixed concentration of base reacts with a gradually…
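The limiting-reagent concept the devices demonstrate reduces to a simple stoichiometric comparison. A sketch assuming 1:1 acid-base stoichiometry (the function name is illustrative):

```python
def limiting_reagent(moles_acid, moles_base):
    """Identify the limiting reagent for a 1:1 acid-base reaction.

    The reagent present in fewer moles is consumed first and limits the
    amount of product formed; assumes 1:1 stoichiometry for illustration
    (e.g. HCl + NaOH -> NaCl + H2O).
    """
    if moles_acid < moles_base:
        return "acid"
    if moles_base < moles_acid:
        return "base"
    return "neither"
```

With a fixed amount of base and gradually increasing acid, the limiting reagent switches from acid to base once the acid exceeds the base on a mole basis, which is exactly the transition the colorimetric PADs visualize.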
McAfee, Stephanie A.; Pederson, Gregory T.; Woodhouse, Connie A.; McCabe, Gregory
2017-01-01
Water managers are increasingly interested in better understanding and planning for projected resource impacts from climate change. In this management-guided study, we use a very large suite of synthetic climate scenarios in a statistical modeling framework to simultaneously evaluate how (1) average temperature and precipitation changes, (2) initial basin conditions, and (3) temporal characteristics of the input climate data influence water-year flow in the Upper Colorado River. The results here suggest that existing studies may underestimate the degree of uncertainty in future streamflow, particularly under moderate temperature and precipitation changes. However, we also find that the relative severity of future flow projections within a given climate scenario can be estimated with simple metrics that characterize the input climate data and basin conditions. These results suggest that simple testing, like the analyses presented in this paper, may be helpful in understanding differences between existing studies or in identifying specific conditions for physically based mechanistic modeling. Both options could reduce overall cost and improve the efficiency of conducting climate change impacts studies.
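Synthetic climate scenarios of this kind are often built with the standard delta-change method, scaling observed precipitation and shifting observed temperature. A sketch of that construction (the study's actual scenario generation may differ):

```python
def delta_change_scenario(precip, temp, dp_frac, dt_degc):
    """Build one synthetic climate scenario by the delta-change method.

    Scales every precipitation value by (1 + dp_frac) and shifts every
    temperature by dt_degc (degrees C). Sweeping dp_frac and dt_degc over
    a grid yields a large suite of scenarios like the one described above;
    this sketch omits the temporal-characteristics dimension the study
    also varies.
    """
    new_precip = [p * (1.0 + dp_frac) for p in precip]
    new_temp = [t + dt_degc for t in temp]
    return new_precip, new_temp
```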
Kagan, Ari; Rand, David G.
2017-01-01
How does cognitive sophistication impact cooperation? We explore this question using a model of the co-evolution of cooperation and cognition. In our model, agents confront social dilemmas and coordination games, and make decisions using intuition or deliberation. Intuition is automatic and effortless, but relatively (although not necessarily completely) insensitive to context. Deliberation, conversely, is costly but relatively (although not necessarily perfectly) sensitive to context. We find that regardless of the sensitivity of intuition and imperfection of deliberation, deliberating undermines cooperation in social dilemmas, whereas deliberating can increase cooperation in coordination games if intuition is sufficiently sensitive. Furthermore, when coordination games are sufficiently likely, selection favours a strategy whose intuitive response ignores the contextual cues available and cooperates across contexts. Thus, we see how simple cognition can arise from active selection for simplicity, rather than merely being forced to be simple by cognitive constraints. Finally, we find that when deliberation is imperfect, the favoured strategy increases cooperation in social dilemmas (as a result of reducing deliberation) as the benefit of cooperation to the recipient increases. PMID:28330915
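The basic trade-off can be illustrated by comparing the expected payoffs of an always-cooperate intuition against costly, context-sensitive deliberation; the payoff structure and values below are illustrative, not the paper's model:

```python
def expected_payoffs(p_coord, pay_coord, pay_dilemma_coop,
                     pay_dilemma_defect, d_cost):
    """Toy comparison: always-cooperate intuition vs. costly deliberation.

    With probability p_coord the game is a coordination game (both
    strategies cooperate and earn pay_coord); otherwise it is a social
    dilemma, where intuition cooperates (pay_dilemma_coop) while
    deliberation best-responds by defecting (pay_dilemma_defect).
    Deliberation pays d_cost per decision. Illustrative, not the paper's
    evolutionary dynamics.
    """
    intuitive = p_coord * pay_coord + (1 - p_coord) * pay_dilemma_coop
    deliberative = (p_coord * pay_coord
                    + (1 - p_coord) * pay_dilemma_defect
                    - d_cost)
    return intuitive, deliberative
```

When coordination games are frequent, the deliberation cost outweighs the occasional defection gain and the simple intuitive strategy earns more, mirroring the paper's finding that selection can actively favour simplicity.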
[Disposable nursing applicator-pocket of indwelling central venous catheter].
Wei, Congli; Ma, Chunyuan
2017-11-01
Catheter-related infection is the most common complication of central venous catheterization; the pathogens mainly originate from the pipe joint and the skin around the puncture site. How to prevent catheter infection is therefore an important issue in clinical nursing. The utility model discloses a "disposable nursing applicator-pocket for an indwelling central venous catheter", used mainly for fixation and protection. The device consists of two parts: a medical applicator that protects the skin around the puncture site, and a gauze pocket that protects the catheter's external connector. In use, the catheter connector is fitted into the pocket, and the applicator is then applied to cover the puncture point on the skin. The integrated design of the medical applicator and gauze pocket provides the dual functions of fixation and protection. The disposable applicator-pocket is made of medical absorbent gauze (outer layer) and non-woven fabric (inner layer), making it comfortable, breathable, dust-filtering, bacteria-filtering, waterproof, antiperspirant, and pollution-resistant. The utility model has the advantages of simple structure, low cost, simple operation, effective protection, and ease of adoption and popularization.
Colard, Stéphane; O’Connell, Grant; Verron, Thomas; Cahours, Xavier; Pritchard, John D.
2014-01-01
There has been rapid growth in the use of electronic cigarettes (“vaping”) in Europe, North America and elsewhere. With such increased prevalence, there is currently a debate on whether the aerosol exhaled following the use of e-cigarettes has implications for the quality of air breathed by bystanders. Conducting chemical analysis of the indoor environment can be costly and resource intensive, limiting the number of studies which can be conducted. However, this can be modelled reasonably accurately based on empirical emissions data and using some basic assumptions. Here, we present a simplified model, based on physical principles, which considers aerosol propagation, dilution and extraction to determine the potential contribution of a single puff from an e-cigarette to indoor air. From this, it was then possible to simulate the cumulative effect of vaping over time. The model was applied to a virtual, but plausible, scenario considering an e-cigarette user and a non-user working in the same office space. The model was also used to reproduce published experimental studies and showed good agreement with the published values of indoor air nicotine concentration. With some additional refinements, such an approach may be a cost-effective and rapid way of assessing the potential exposure of bystanders to exhaled e-cigarette aerosol constituents. PMID:25547398
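The accumulation-with-extraction behavior described above can be sketched as a well-mixed single-zone mass balance, dC/dt = S/V - ACH*C, where S is the emission rate, V the room volume, and ACH the air-change rate. Parameter names and values are illustrative, not the paper's:

```python
def indoor_concentration(puff_mass_ug, puffs_per_hour, room_m3, ach, hours):
    """Well-mixed single-zone model of exhaled-aerosol build-up.

    Integrates dC/dt = S/V - ACH*C with a simple Euler scheme, treating
    puffs as a continuous source S (ug/h). Returns the concentration in
    ug/m^3 after `hours` of steady vaping. A sketch of the model class the
    paper describes, not its exact formulation.
    """
    source = puff_mass_ug * puffs_per_hour   # ug emitted per hour
    dt = 1.0 / 3600.0                        # 1-second steps, in hours
    c, t = 0.0, 0.0
    while t < hours:
        c += dt * (source / room_m3 - ach * c)
        t += dt
    return c
```

The concentration approaches the steady state S / (V * ACH), so higher ventilation (ACH) or a larger room proportionally lowers bystander exposure.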
Takahashi, K; Sengoku, S; Kimura, H
2011-02-01
A fundamental management imperative of pharmaceutical companies is to contain the surging costs of developing and launching drugs globally. Clinical studies are a research and development (R&D) cost driver. The objective of this study was to develop a productivity breakdown model, or key performance indicator (KPI) tree, for an entire clinical study and to use it to compare a global clinical study with a similar Japanese study. We thereby hope to identify means of improving study productivity. We developed the new clinical study productivity breakdown model, covering operational aspects and cost factors. Elements for improving clinical study productivity were assessed from a management viewpoint by comparing empirical tracking data from a global clinical study with a Japanese study with similar protocols. The following unique and material differences, beyond simple international differences in the cost of living, that could affect the efficiency of future clinical trials were identified: (i) more frequent site visits in the Japanese study, (ii) head counts at the Japanese study sites more than double those of the global study and (iii) an enrollment time window at the Japanese study sites about a third the length of the global study's. We identified major differences in the performance of the two studies. These findings demonstrate the potential of the KPI tree for improving clinical study productivity. Trade-offs, such as those between reduction in head count at study sites and expansion of the enrollment time window, must be considered carefully. © 2010 Blackwell Publishing Ltd.
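A KPI tree ultimately rolls leaf-level drivers (site visits, head counts, enrollment windows) up into a total cost. A toy roll-up with an invented additive structure and parameter names, just to show the mechanics:

```python
def study_cost(n_sites, visits_per_site, cost_per_visit,
               staff_per_site, monthly_staff_cost, enrollment_months):
    """Toy KPI-tree roll-up of clinical-study operational cost.

    Total cost = monitoring-visit cost + site staffing cost over the
    enrollment window. The additive structure and parameter names are
    illustrative; they mirror the drivers the study compares (visit
    frequency, head count, enrollment window) but not its actual model.
    """
    monitoring = n_sites * visits_per_site * cost_per_visit
    staffing = (n_sites * staff_per_site
                * monthly_staff_cost * enrollment_months)
    return monitoring + staffing
```

In this toy structure, doubling head count while shortening the enrollment window to a third (the Japanese-study pattern above) changes staffing cost by a factor of 2/3, making the trade-off between the two drivers explicit.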
Dynamical implications of bi-directional resource exchange within a meta-ecosystem.
Messan, Marisabel Rodriguez; Kopp, Darin; Allen, Daniel C; Kang, Yun
2018-05-05
The exchange of resources across ecosystem boundaries can have large impacts on ecosystem structures and functions at local and regional scales. In this article, we develop a simple model to investigate dynamical implications of bi-directional resource exchanges between two local ecosystems in a meta-ecosystem framework. In our model, we assume that (1) Each local ecosystem acts as both a resource donor and recipient, such that one ecosystem donating resources to another results in a cost to the donating system and a benefit to the recipient; and (2) The costs and benefits of the bi-directional resource exchange between two ecosystems are correlated in a nonlinear fashion. Our model could apply to the resource interactions between terrestrial and aquatic ecosystems that are supported by the literature. Our theoretical results show that bi-directional resource exchange between two ecosystems can indeed generate complicated dynamical outcomes, including the coupled ecosystems having amensalistic, antagonistic, competitive, or mutualistic interactions, with multiple alternative stable states depending on the relative costs and benefits. In addition, if the relative cost for resource exchange for an ecosystem is decreased or the relative benefit for resource exchange for an ecosystem is increased, the production of that ecosystem would increase; however, depending on the local environment, the production of the other ecosystem may increase or decrease. We expect that our work, by evaluating the potential outcomes of resource exchange theoretically, can facilitate empirical evaluations and advance the understanding of spatial ecosystem ecology where resource exchanges occur in varied ecosystems through a complicated network. Copyright © 2018 Elsevier Inc. All rights reserved.
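The qualitative possibilities (mutualistic versus competitive outcomes depending on relative costs and benefits) can be explored with a toy two-patch simulation. The logistic-plus-linear-exchange form below is illustrative, not the paper's exact equations:

```python
def simulate_exchange(b12, b21, c12, c21, steps=20000, dt=0.01):
    """Toy two-patch model with bi-directional resource exchange.

    Each ecosystem grows logistically (carrying capacity 1), gains a
    benefit proportional to the partner's production (b21 is the benefit
    to patch 1 from patch 2's donation) and pays a cost proportional to
    its own production for donating (c12 is patch 1's cost of donating to
    patch 2). Forward-Euler integration; functional forms are
    illustrative stand-ins for the paper's nonlinear cost-benefit
    coupling.
    """
    p1, p2 = 0.5, 0.5
    for _ in range(steps):
        dp1 = p1 * (1.0 - p1) + b21 * p2 - c12 * p1
        dp2 = p2 * (1.0 - p2) + b12 * p1 - c21 * p2
        p1 += dt * dp1
        p2 += dt * dp2
    return p1, p2
```

With no exchange both patches settle at their carrying capacity; with mutual benefits and no costs both equilibrate above it, a mutualistic outcome consistent with the paper's finding that production rises as relative benefits increase.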