Doherty, Kathleen; Essajee, Shaffiq; Penazzato, Martina; Holmes, Charles; Resch, Stephen; Ciaranello, Andrea
2014-05-02
Pediatric antiretroviral therapy (ART) has been shown to substantially reduce morbidity and mortality in HIV-infected infants and children. To accurately project program costs, analysts need accurate estimations of antiretroviral drug (ARV) costs for children. However, the costing of pediatric antiretroviral therapy is complicated by weight-based dosing recommendations which change as children grow. We developed a step-by-step methodology for estimating the cost of pediatric ARV regimens for children ages 0-13 years old. The costing approach incorporates weight-based dosing recommendations to provide estimated ARV doses throughout childhood development. Published unit drug costs are then used to calculate average monthly drug costs. We compared our derived monthly ARV costs to published estimates to assess the accuracy of our methodology. The estimates of monthly ARV costs are provided for six commonly used first-line pediatric ARV regimens, considering three possible care scenarios. The costs derived in our analysis for children were fairly comparable to or slightly higher than available published ARV drug or regimen estimates. The methodology described here can be used to provide an accurate estimation of pediatric ARV regimen costs for cost-effectiveness analysts to project the optimum packages of care for HIV-infected children, as well as for program administrators and budget analysts who wish to assess the feasibility of increasing pediatric ART availability in constrained budget environments.
Cost-Consciousness of Anesthesia Physicians: An Awareness Survey.
Hakimoglu, Sedat; Hancı, Volkan; Karcıoglu, Murat; Tuzcu, Kasım; Davarcı, Isıl; Kiraz, Hasan Ali; Turhanoglu, Selim
2015-01-01
Increasing competitive pressure and performance-based health systems place hospitals under pressure to reduce the resources they consume. The aim of this study was to evaluate anesthesiology and intensive care physicians' awareness of the cost of the materials they use and to determine the factors that influence it. The survey was conducted between September 2012 and September 2013 after approval by the local ethics committee. One hundred forty-nine anesthetists, 45% female and 55% male, participated in the study. Participants were asked to estimate the cost of 30 products used in anesthesiology and intensive care units. Across the 30 questions, on average 5.8% of cost estimations were accurate, 35.13% were underestimations, and 59.16% were overestimations. When participants were grouped by institution, years in the profession, and sex, there were no statistically significant differences in accurate estimation; however, there was a statistically significant difference in underestimation. Within underestimation, there was no significant difference between the 16-20 year group and the >20 year group, but these two groups overestimated prices more than the other groups (p=0.031). Furthermore, across all participants there was no significant association between age and accurate cost estimation, or between time in the profession and accurate cost estimation. The anesthesiology and intensive care physicians in this survey had insufficient awareness of the cost of the drugs and materials they use, and institution and experience were not effective factors for accurate estimation. Programs to improve health workers' knowledge and create cost awareness should be planned in order to use resources more efficiently and cost-effectively.
Product line cost estimation: a standard cost approach.
Cooper, J C; Suver, J D
1988-04-01
Product line managers often must make decisions based on inaccurate cost information. A method is needed to determine costs more accurately. By using a standard costing model, product line managers can better estimate the cost of intermediate and end products, and hence better estimate the costs of the product line.
Public Perceptions of Regulatory Costs, Their Uncertainty and Interindividual Distribution.
Johnson, Branden B; Finkel, Adam M
2016-06-01
Public perceptions of both risks and regulatory costs shape rational regulatory choices. Despite decades of risk perception studies, this article is the first on regulatory cost perceptions. A survey of 744 U.S. residents probed: (1) How knowledgeable are laypeople about regulatory costs incurred to reduce risks? (2) Do laypeople see official estimates of cost and benefit (lives saved) as accurate? (3) (How) do preferences for hypothetical regulations change when mean-preserving spreads of uncertainty replace certain cost or benefit? and (4) (How) do preferences change when unequal interindividual distributions of hypothetical regulatory costs replace equal distributions? Respondents overestimated costs of regulatory compliance, while assuming agencies underestimate costs. Most assumed agency estimates of benefits are accurate; a third believed both cost and benefit estimates are accurate. Cost and benefit estimates presented without uncertainty were slightly preferred to those surrounded by "narrow uncertainty" (a range of costs or lives entirely within a personally-calibrated zone without clear acceptance or rejection of tradeoffs). Certain estimates were more preferred than "wide uncertainty" (a range of agency estimates extending beyond these personal bounds, thus posing a gamble between favored and unacceptable tradeoffs), particularly for costs as opposed to benefits (but even for costs a quarter of respondents preferred wide uncertainty to certainty). Agency-acknowledged uncertainty in general elicited mixed judgments of honesty and trustworthiness. People preferred egalitarian distributions of regulatory costs, despite skewed actual cost distributions, and preferred progressive cost distributions (the rich pay a greater than proportional share) to regressive ones. Efficient and socially responsive regulations require disclosure of much more information about regulatory costs and risks. © 2016 Society for Risk Analysis.
Highway Cost Index Estimator Tool
DOT National Transportation Integrated Search
2017-10-01
To plan and program highway construction projects, the Texas Department of Transportation requires accurate construction cost data. However, due to the number of, and uncertainty of, variables that affect highway construction costs, estimating future...
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.
1985-01-01
Improved Price Estimation Guidelines, IPEG4, program provides comparatively simple, yet relatively accurate estimate of price of manufactured product. IPEG4 processes user supplied input data to determine estimate of price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on industry wide or process wide basis.
Cross-Sectional HIV Incidence Estimation in HIV Prevention Research
Brookmeyer, Ron; Laeyendecker, Oliver; Donnell, Deborah; Eshleman, Susan H.
2013-01-01
Accurate methods for estimating HIV incidence from cross-sectional samples would have great utility in prevention research. This report describes recent improvements in cross-sectional methods that significantly improve their accuracy. These improvements are based on the use of multiple biomarkers to identify recent HIV infections. These multi-assay algorithms (MAAs) use assays in a hierarchical approach for testing that minimizes the effort and cost of incidence estimation. These MAAs do not require mathematical adjustments for accurate estimation of the incidence rates in study populations in the year prior to sample collection. MAAs provide a practical, accurate, and cost-effective approach for cross-sectional HIV incidence estimation that can be used for HIV prevention research and global epidemic monitoring. PMID:23764641
Stokes, Elizabeth A; Wordsworth, Sarah; Staves, Julie; Mundy, Nicola; Skelly, Jane; Radford, Kelly; Stanworth, Simon J
2018-04-01
In an environment of limited health care resources, it is crucial for health care systems which provide blood transfusion to have accurate and comprehensive information on the costs of transfusion, incorporating not only the costs of blood products, but also their administration. Unfortunately, in many countries accurate costs for administering blood are not available. Our study aimed to generate comprehensive estimates of the costs of administering transfusions for the UK National Health Service. A detailed microcosting study was used to cost two key inputs into transfusion: transfusion laboratory and nursing inputs. For each input, data collection forms were developed to capture staff time, equipment, and consumables associated with each step in the transfusion process. Costing results were combined with costs of blood product wastage to calculate the cost per unit transfused, separately for different blood products. Data were collected in 2014/15 British pounds and converted to US dollars. A total of 438 data collection forms were completed by 74 staff. The cost of administering blood was $71 (£49) per unit for red blood cells, $84 (£58) for platelets, $55 (£38) for fresh-frozen plasma, and $72 (£49) for cryoprecipitate. Blood administration costs add substantially to the costs of the blood products themselves. These are frequently incurred costs; applying estimates to the blood components supplied to UK hospitals in 2015, the annual cost of blood administration, excluding blood products, exceeds $175 (£120) million. These results provide more accurate estimates of the total costs of transfusion than those previously available. © 2018 AABB.
Development of regional stump-to-mill logging cost estimators
Chris B. LeDoux; John E. Baumgras
1989-01-01
Planning logging operations requires estimating the logging costs for the sale or tract being harvested. Decisions need to be made on equipment selection and its application to terrain. In this paper a methodology is described that has been developed and implemented to solve the problem of accurately estimating logging costs by region. The methodology blends field time...
A Framework for Automating Cost Estimates in Assembly Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calton, T.L.; Peters, R.R.
1998-12-09
When a product concept emerges, the manufacturing engineer is asked to sketch out a production strategy and estimate its cost. The engineer is given an initial product design, along with a schedule of expected production volumes. The engineer then determines the best approach to manufacturing the product, comparing a variety of alternative production strategies. The engineer must consider capital cost, operating cost, lead time, and other issues in an attempt to maximize profits. After making these basic choices and sketching the design of overall production, the engineer produces estimates of the required capital, operating costs, and production capacity. This process may iterate as the product design is refined in order to improve its performance or manufacturability. The focus of this paper is on the development of computer tools to aid manufacturing engineers in their decision-making processes. This computer software tool provides a framework in which accurate cost estimates can be seamlessly derived from design requirements at the start of any engineering project. The result is faster cycle times through first-pass success, and lower life cycle cost due to requirements-driven design and accurate cost estimates derived early in the process.
Resource costing for multinational neurologic clinical trials: methods and results.
Schulman, K; Burke, J; Drummond, M; Davies, L; Carlsson, P; Gruger, J; Harris, A; Lucioni, C; Gisbert, R; Llana, T; Tom, E; Bloom, B; Willke, R; Glick, H
1998-11-01
We present the results of a multinational resource costing study for a prospective economic evaluation of a new medical technology for treatment of subarachnoid hemorrhage within a clinical trial. The study describes a framework for the collection and analysis of international resource cost data that can contribute to a consistent and accurate intercountry estimation of cost. Of the 15 countries that participated in the clinical trial, we collected cost information in the following seven: Australia, France, Germany, the UK, Italy, Spain, and Sweden. The collection of cost data in these countries was structured through the use of worksheets to provide accurate and efficient cost reporting. We converted total average costs to average variable costs and then aggregated the data to develop study unit costs. When unit costs were unavailable, we developed an index table, based on a market-basket approach, to estimate unit costs. To estimate the cost of a given procedure, the market-basket estimation process required that cost information be available for at least one country. When cost information was unavailable in all countries for a given procedure, we estimated costs using a method based on physician-work and practice-expense resource-based relative value units. Finally, we converted study unit costs to a common currency using purchasing power parity measures. Through this costing exercise we developed a set of unit costs for patient services and per diem hospital services. We conclude by discussing the implications of our costing exercise and suggest guidelines to facilitate more effective multinational costing exercises.
Environmental Liabilities: DoD Training Range Cleanup Cost Estimates Are Likely Understated
2001-04-01
Federal accounting standards define environmental cleanup costs as...report will not be complete or accurate. Federal financial accounting standards have required that DOD report a liability for the estimated cost of...within the range is better than any other amount. SFFAS No. 6, Accounting for Property, Plant, and Equipment, further defines cleanup costs as costs for
A model for the cost of doing a cost estimate
NASA Technical Reports Server (NTRS)
Remer, D. S.; Buchanan, H. R.
1992-01-01
A model for estimating the cost required to do a cost estimate for Deep Space Network (DSN) projects that range from $0.1 to $100 million is presented. The cost of the cost estimate in thousands of dollars, C(sub E), is found to be approximately given by C(sub E) = K((C(sub p))(sup 0.35)) where C(sub p) is the cost of the project being estimated in millions of dollars and K is a constant depending on the accuracy of the estimate. For an order-of-magnitude estimate, K = 24; for a budget estimate, K = 60; and for a definitive estimate, K = 115. That is, for a specific project, the cost of doing a budget estimate is about 2.5 times as much as that for an order-of-magnitude estimate, and a definitive estimate costs about twice as much as a budget estimate. Use of this model should help provide the level of resources required for doing cost estimates and, as a result, provide insights towards more accurate estimates with less potential for cost overruns.
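The closed-form relationship quoted in this abstract can be turned into a quick calculator; the constants K and the exponent come from the abstract itself, while the function name and the $10M example are ours:

```python
def cost_of_estimate(project_cost_musd, k):
    """Cost of preparing the estimate, in thousands of dollars:
    C_E = K * C_p**0.35, where C_p is the project cost in $ millions
    and K depends on the estimate class (constants from the abstract)."""
    return k * project_cost_musd ** 0.35

K_ORDER_OF_MAGNITUDE, K_BUDGET, K_DEFINITIVE = 24, 60, 115

for label, k in [("order-of-magnitude", K_ORDER_OF_MAGNITUDE),
                 ("budget", K_BUDGET),
                 ("definitive", K_DEFINITIVE)]:
    print(f"$10M project, {label} estimate: ${cost_of_estimate(10, k):.0f}K")
```

Because project cost enters only through the common factor C_p**0.35, the 2.5x and roughly 2x cost ratios between estimate classes hold for any project size.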
Robust stereo matching with trinary cross color census and triple image-based refinements
NASA Astrophysics Data System (ADS)
Chang, Ting-An; Lu, Xiao; Yang, Jar-Ferr
2017-12-01
For future 3D TV broadcasting systems and navigation applications, it is necessary to have accurate stereo matching which could precisely estimate depth map from two distanced cameras. In this paper, we first suggest a trinary cross color (TCC) census transform, which can help to achieve accurate disparity raw matching cost with low computational cost. The two-pass cost aggregation (TPCA) is formed to compute the aggregation cost, then the disparity map can be obtained by a range winner-take-all (RWTA) process and a white hole filling procedure. To further enhance the accuracy performance, a range left-right checking (RLRC) method is proposed to classify the results as correct, mismatched, or occluded pixels. Then, the image-based refinements for the mismatched and occluded pixels are proposed to refine the classified errors. Finally, the image-based cross voting and a median filter are employed to complete the fine depth estimation. Experimental results show that the proposed semi-global stereo matching system achieves considerably accurate disparity maps with reasonable computation cost.
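As a rough illustration of the census idea underlying such matching costs, a generic ternary census over a 3x3 window (not the authors' exact trinary cross color formulation) codes each neighbor relative to the center and compares descriptors digit by digit:

```python
import numpy as np

def ternary_census(img, t=2.0):
    """Ternary census over a 3x3 window: each neighbor is coded
    0 (within +/- t of the center), 1 (brighter), or 2 (darker),
    and the eight codes are packed into one base-3 descriptor."""
    h, w = img.shape
    pad = np.pad(img.astype(float), 1, mode="edge")
    desc = np.zeros((h, w), dtype=np.int64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            nb = pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            code = np.where(nb > img + t, 1, np.where(nb < img - t, 2, 0))
            desc = desc * 3 + code
    return desc

def census_cost(a, b, digits=8):
    """Raw matching cost: number of differing base-3 digits."""
    cost = 0
    for _ in range(digits):
        cost += (a % 3) != (b % 3)
        a //= 3
        b //= 3
    return cost
```

The raw cost between two pixels is then the per-digit Hamming distance of their descriptors, which is robust to radiometric offsets because only relative orderings are encoded.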
Synthesis on construction unit cost development : technical report.
DOT National Transportation Integrated Search
2009-01-01
Availability of historical unit cost data is an important factor in developing accurate project cost estimates. : State highway agencies (SHAs) collect data on historical bids and/or production rates, crew sizes and mixes, : material costs, and equip...
Henriques, Dora; Browne, Keith A; Barnett, Mark W; Parejo, Melanie; Kryger, Per; Freeman, Tom C; Muñoz, Irene; Garnery, Lionel; Highet, Fiona; Jonhston, J Spencer; McCormack, Grace P; Pinto, M Alice
2018-06-04
The natural distribution of the honeybee (Apis mellifera L.) has been changed by humans in recent decades to such an extent that the formerly widest-spread European subspecies, Apis mellifera mellifera, is threatened by extinction through introgression from highly divergent commercial strains in large tracts of its range. Conservation efforts for A. m. mellifera are underway in multiple European countries requiring reliable and cost-efficient molecular tools to identify purebred colonies. Here, we developed four ancestry-informative SNP assays for high sample throughput genotyping using the iPLEX Mass Array system. Our customized assays were tested on DNA from individual and pooled, haploid and diploid honeybee samples extracted from different tissues using a diverse range of protocols. The assays had a high genotyping success rate and yielded accurate genotypes. Performance assessed against whole-genome data showed that individual assays behaved well, although the most accurate introgression estimates were obtained for the four assays combined (117 SNPs). The best compromise between accuracy and genotyping costs was achieved when combining two assays (62 SNPs). We provide a ready-to-use cost-effective tool for accurate molecular identification and estimation of introgression levels to more effectively monitor and manage A. m. mellifera conservatories.
Space Programs: NASA's Independent Cost Estimating Capability Needs Improvement
1992-11-01
United States General Accounting Office, Washington, D.C...advisory committee's recommendation to strengthen NASA's independent cost estimating capability. Congress and the executive branch need accurate cost estimates in deciding whether to undertake or continue space programs which often cost millions or even billions of dollars. In December 1990, the
Estimating pharmacy level prescription drug acquisition costs for third-party reimbursement.
Kreling, D H; Kirk, K W
1986-07-01
Accurate payment for the acquisition costs of drug products dispensed is an important consideration in a third-party prescription drug program. Two alternative methods of estimating these costs among pharmacies were derived and compared. First, pharmacists were surveyed to determine the purchase discounts offered to them by wholesalers. A 10.00% modal and 11.35% mean discount resulted for 73 responding pharmacists. Second, cost-plus percents derived from gross profit margins of wholesalers were calculated and applied to wholesaler product costs to estimate pharmacy level acquisition costs. Cost-plus percents derived from National Median and Southwestern Region wholesaler figures were 9.27% and 10.10%, respectively. A comparison showed the two methods of estimating acquisition costs would result in similar acquisition cost estimates. Adopting a cost-plus estimating approach is recommended because it avoids potential pricing manipulations by wholesalers and manufacturers that would negate improvements in drug product reimbursement accuracy.
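The cost-plus approach amounts to a simple markup on the wholesaler's product cost; a minimal sketch using the cost-plus percents reported in the abstract (the function name and the $100 product cost are illustrative):

```python
def pharmacy_acquisition_cost(wholesaler_cost, cost_plus_pct):
    """Pharmacy-level acquisition cost estimated as the wholesaler's
    product cost plus a markup derived from wholesaler gross profit
    margins (the cost-plus percent)."""
    return wholesaler_cost * (1 + cost_plus_pct / 100)

# Cost-plus percents from the abstract, applied to a $100 product:
print(pharmacy_acquisition_cost(100.00, 9.27))   # National Median
print(pharmacy_acquisition_cost(100.00, 10.10))  # Southwestern Region
```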
Spacecraft platform cost estimating relationships
NASA Technical Reports Server (NTRS)
Gruhl, W. M.
1972-01-01
The three main cost areas of unmanned satellite development are discussed. The areas are identified as: (1) the spacecraft platform (SCP), (2) the payload or experiments, and (3) the postlaunch ground equipment and operations. The SCP normally accounts for over half of the total project cost and accurate estimates of SCP costs are required early in project planning as a basis for determining total project budget requirements. The development of single formula SCP cost estimating relationships (CER) from readily available data by statistical linear regression analysis is described. The advantages of single formula CER are presented.
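A single-formula CER of the power-law form cost = a * x**b can be fit by ordinary least squares on the logarithms; a minimal sketch with synthetic data (the masses, costs, and resulting coefficients are illustrative, not from the paper):

```python
import math

def fit_power_cer(xs, ys):
    """Fit cost = a * x**b by least squares on log(cost) = log(a) + b*log(x)."""
    n = len(xs)
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# Illustrative platform data: dry mass (kg) vs development cost ($M),
# constructed so cost rises by 1.6x for each doubling of mass.
masses = [200, 400, 800, 1600]
costs = [30.0, 48.0, 76.8, 122.88]
a, b = fit_power_cer(masses, costs)
print(f"cost ~= {a:.3f} * mass^{b:.3f}")
```

Fitting in log space is what makes a multiplicative power law tractable with linear regression, which matches the statistical linear regression analysis the abstract describes.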
Young, David W
2003-11-01
Current computing methods impede determining the real cost of graduate medical education. However, a more accurate estimate could be obtained if policy makers would allow for the application of basic cost-accounting principles, including consideration of department-level costs, unbundling of joint costs, and other factors.
An Alternative Procedure for Estimating Unit Learning Curves,
1985-09-01
the model accurately describes the real-life situation, i.e., when the model is properly applied to the data, it can be a powerful tool for...predicting unit production costs. There are, however, some unique estimation problems inherent in the model. The usual method of generating predicted unit...production costs attempts to extend properties of least squares estimators to non-linear functions of these estimators. The result is biased estimates of
The social costs of dangerous products: an empirical investigation.
Shapiro, Sidney; Ruttenberg, Ruth; Leigh, Paul
2009-01-01
Defective consumer products impose significant costs on consumers and third parties when they cause fatalities and injuries. This Article develops a novel approach to measuring the true extent of such costs, which may not be accurately captured under current methods of estimating the cost of dangerous products. Current analysis rests on a narrowly defined set of costs, excluding certain types of costs. The cost-of-injury estimates utilized in this Article address this omission by quantifying and incorporating these costs to provide a more complete picture of the true impact of defective consumer products. The new estimates help to gauge the true value of the civil liability system.
Nixon, Richard M; Wonderling, David; Grieve, Richard D
2010-03-01
Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
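The two estimators being compared can be sketched in a few lines; this is a generic illustration on simulated right-skewed costs, not the paper's simulation design:

```python
import random
import statistics

random.seed(0)

def clt_se(sample):
    """SE of the mean via the central limit theorem: s / sqrt(n)."""
    return statistics.stdev(sample) / len(sample) ** 0.5

def bootstrap_se(sample, reps=2000):
    """SE of the mean via the non-parametric bootstrap: resample with
    replacement, and take the spread of the resampled means."""
    n = len(sample)
    means = [statistics.mean(random.choices(sample, k=n))
             for _ in range(reps)]
    return statistics.stdev(means)

# Simulated right-skewed per-patient costs (lognormal draws).
costs = [random.lognormvariate(7.0, 1.2) for _ in range(200)]
print(clt_se(costs), bootstrap_se(costs))
```

With a moderate sample like this, the two standard errors come out close, which is the paper's central finding; the CLT version is clearly the cheaper of the two to compute.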
Cost-estimating relationships for space programs
NASA Technical Reports Server (NTRS)
Mandell, Humboldt C., Jr.
1992-01-01
Cost-estimating relationships (CERs) are defined and discussed as they relate to the estimation of theoretical costs for space programs. The paper primarily addresses CERs based on analogous relationships between physical and performance parameters to estimate future costs. Analytical estimation principles are reviewed examining the sources of errors in cost models, and the use of CERs is shown to be affected by organizational culture. Two paradigms for cost estimation are set forth: (1) the Rand paradigm for single-culture single-system methods; and (2) the Price paradigms that incorporate a set of cultural variables. For space programs that are potentially subject to even small cultural changes, the Price paradigms are argued to be more effective. The derivation and use of accurate CERs is important for developing effective cost models to analyze the potential of a given space program.
1984-05-23
Because the cost accounting reports provide the historical cost information for the cost estimating reports, we also tested the reasonableness of...accounting and cost estimating reports must be based on timely and accurate information. The reports, therefore, require the continual attention of...accounting system reported less than half the value of site direct charges (labor, materials, equipment usage, and other costs) that should have been
A stopping criterion for the iterative solution of partial differential equations
NASA Astrophysics Data System (ADS)
Rao, Kaustubh; Malan, Paul; Perot, J. Blair
2018-01-01
A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating it reverts to a less accurate but more robust, low-cost singular value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
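For a linearly converging iteration, the error can be estimated from successive solution changes: with observed contraction factor sigma = |dx_k| / |dx_{k-1}|, the remaining error is roughly sigma / (1 - sigma) * |dx_k|. A minimal scalar sketch of this general idea (not the authors' estimator):

```python
import math

def solve_with_error_estimate(g, x0, tol=1e-8, max_iter=200):
    """Iterate x <- g(x), stopping when the estimated solution error
    sigma / (1 - sigma) * |dx| drops below tol, where sigma is the
    observed ratio of successive solution changes."""
    x = g(x0)
    dx_prev = abs(x - x0)
    for _ in range(max_iter):
        x_next = g(x)
        dx = abs(x_next - x)
        if 0 < dx < dx_prev:            # converging: estimate the error
            sigma = dx / dx_prev
            est_err = sigma / (1 - sigma) * dx
            if est_err < tol:
                return x_next, est_err
        x, dx_prev = x_next, dx
    return x, float("nan")

# Fixed point of cos(x), x* = 0.7390851332...
root, err = solve_with_error_estimate(math.cos, 1.0)
print(root, err)
```

The point of such a criterion is that the residual alone can badly misjudge the solution error; tracking the change-ratio costs almost nothing per iteration and gives an error estimate rather than a residual bound.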
Maximizing mitigation benefits: research to support a mitigation cost framework-final report.
DOT National Transportation Integrated Search
2016-08-01
Tracking environmental costs in the project development process has been a challenging task for state : departments of transportation (DOTs). Previous research identified the need to accurately track and : subsequently estimate project costs resultin...
An evaluation of contractor projected and actual costs
NASA Technical Reports Server (NTRS)
Kwiatkowski, K. A.; Buffalano, C.
1974-01-01
GSFC contractors with cost-plus contracts provide cost estimates for each of the next four quarters on a quarterly basis. Actual expenditures over a two-year period were compared to these estimates, and the data were sorted in different ways to answer several questions and quantify observations, such as: How much does the accuracy of estimates degrade as they are made further into the future? Are estimates made for small dollar amounts more accurate than those for large amounts? Other government agencies and private companies with cost-plus contracts may be interested in this analysis as a potential method of contract management for their organizations. It provides them with the different methods one organization is beginning to use to control costs.
Standard cost elements for technology programs
NASA Technical Reports Server (NTRS)
Christensen, Carisa B.; Wagenfuehrer, Carl
1992-01-01
The suitable structure for an effective and accurate cost estimate for general purposes is discussed in the context of a NASA technology program. Cost elements are defined for the research, management, and facility-construction portions of technology programs. Attention is given to the mechanisms for ensuring the viability of spending programs, and the need for program managers to effect timely fund disbursement is established. Formal, structured, and intuitive techniques for cost-estimate development are discussed, and cost-estimate defensibility can be improved with increased documentation. NASA policies for cash management are examined to demonstrate the importance of the ability to obligate funds and the ability to cost contracted funds. The NASA approach to consistent cost justification is set forth with a list of standard cost-element definitions. The cost elements reflect the three primary concerns of cost estimates: the identification of major assumptions, the specification of secondary analytic assumptions, and the status of program factors.
Gulati, Sanchita; During, David; Mainland, Jeff; Wong, Agnes M F
2018-01-01
One of the key challenges to healthcare organizations is the development of relevant and accurate cost information. In this paper, we used time-driven activity-based costing (TDABC) method to calculate the costs of treating individual patients with specific medical conditions over their full cycle of care. We discussed how TDABC provides a critical, systematic and data-driven approach to estimate costs accurately and dynamically, as well as its potential to enable structural and rational cost reduction to bring about a sustainable healthcare system. © 2018 Longwoods Publishing.
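TDABC prices an activity as the capacity cost rate times the time the activity consumes; a minimal sketch of those two estimates (the clinic numbers below are illustrative, not from the paper):

```python
def capacity_cost_rate(total_cost, practical_capacity_minutes):
    """TDABC estimate 1: cost per minute of supplying capacity."""
    return total_cost / practical_capacity_minutes

def activity_cost(rate_per_minute, minutes):
    """TDABC estimate 2: cost of one activity = rate x time consumed."""
    return rate_per_minute * minutes

# Illustrative clinic: $480,000/year of staff cost spread over
# 400,000 practical minutes of capacity -> $1.20 per minute.
rate = capacity_cost_rate(480_000, 400_000)
# One care episode: intake (15 min), treatment (40 min), follow-up (10 min).
episode = sum(activity_cost(rate, m) for m in (15, 40, 10))
print(rate, episode)
```

Summing activity costs over every step in a patient's full cycle of care is what yields the per-condition cost the paper describes, and updating the time estimates is how the costing stays dynamic.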
Ellison, Aaron M.; Jackson, Scott
2015-01-01
Herpetologists and conservation biologists frequently use convenient and cost-effective, but less accurate, abundance indices (e.g., number of individuals collected under artificial cover boards or during natural objects surveys) in lieu of more accurate, but costly and destructive, population size estimators to detect and monitor size, state, and trends of amphibian populations. Although there are advantages and disadvantages to each approach, reliable use of abundance indices requires that they be calibrated with accurate population estimators. Such calibrations, however, are rare. The red back salamander, Plethodon cinereus, is an ecologically useful indicator species of forest dynamics, and accurate calibration of indices of salamander abundance could increase the reliability of abundance indices used in monitoring programs. We calibrated abundance indices derived from surveys of P. cinereus under artificial cover boards or natural objects with a more accurate estimator of their population size in a New England forest. Average densities/m2 and capture probabilities of P. cinereus under natural objects or cover boards in independent, replicate sites at the Harvard Forest (Petersham, Massachusetts, USA) were similar in stands dominated by Tsuga canadensis (eastern hemlock) and deciduous hardwood species (predominantly Quercus rubra [red oak] and Acer rubrum [red maple]). The abundance index based on salamanders surveyed under natural objects was significantly associated with density estimates of P. cinereus derived from depletion (removal) surveys, but underestimated true density by 50%. In contrast, the abundance index based on cover-board surveys overestimated true density by a factor of 8 and the association between the cover-board index and the density estimates was not statistically significant. We conclude that when calibrated and used appropriately, some abundance indices may provide cost-effective and reliable measures of P. 
cinereus abundance that could be used in conservation assessments and long-term monitoring at Harvard Forest and other northeastern USA forests. PMID:26020008
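A calibration like the one described, mapping an abundance-index count onto a depletion-survey density estimate, reduces to a simple regression. The sketch below uses invented counts and densities (not the Harvard Forest data) to illustrate the idea:

```python
# Sketch: calibrating an abundance index against density estimates from
# depletion (removal) surveys. All numbers are hypothetical.

def fit_calibration(index_counts, true_densities):
    """Least-squares slope/intercept mapping index -> density."""
    n = len(index_counts)
    mx = sum(index_counts) / n
    my = sum(true_densities) / n
    sxx = sum((x - mx) ** 2 for x in index_counts)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(index_counts, true_densities))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical plots: natural-object counts vs depletion-survey density.
index_counts = [2, 4, 5, 7, 9]
densities = [0.40, 0.82, 1.05, 1.38, 1.80]   # salamanders / m^2

slope, intercept = fit_calibration(index_counts, densities)
calibrated = [intercept + slope * x for x in index_counts]
```

A raw index can then be reported on the calibrated density scale, which is what makes cross-site comparison defensible.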
Wang, Angela; Dybul, Stephanie L.; Patel, Parag J.; Tutton, Sean M.; Lee, Cheong J.; White, Sarah B.
2016-01-01
Purpose To evaluate knowledge of interventional radiologists (IRs) and vascular surgeons (VSs) on the cost of common devices and procedures and to determine factors associated with differences in understanding. Materials and Methods An online survey was administered to US faculty IRs and VSs. Demographic information and physicians’ opinions on hospital costs were elicited. Respondents were asked to estimate the average price of 15 commonly used devices and to estimate the work relative value units (wRVUs) and average Medicare reimbursements for 10 procedures. Answer estimates were deemed correct if values were within ± 25% of the actual costs. Multivariate logistic regression was used to calculate odds ratios and 95% confidence intervals. Results Of the 4,926 participants contacted, 1,090 (22.1%) completed the questionnaire. Overall, 19.8%, 22.8%, and 31.9% were accurate in price estimations of devices, Medicare reimbursement, and wRVUs for procedures. Physicians who thought themselves adequately educated about wRVUs were more accurate in predicting procedural costs in wRVUs than physicians who responded otherwise (odds ratio = 1.40, 95% confidence interval, 1.29–1.52; P < .0001). Estimation accuracies for procedures showed a positive trend in more experienced physicians (≥ 16 y), private practice physicians, and physicians who practice in rural areas. Conclusions This study suggests that IRs and VSs have limited knowledge regarding device costs. Given the current health care environment, more attention should be placed on cost education and awareness so that physicians can provide the most cost-effective care. PMID:26706189
Abou-El-Enein, Mohamed; Römhild, Andy; Kaiser, Daniel; Beier, Carola; Bauer, Gerhard; Volk, Hans-Dieter; Reinke, Petra
2013-03-01
Advanced therapy medicinal products (ATMP) have gained considerable attention in academia due to their therapeutic potential. Good Manufacturing Practice (GMP) principles ensure the quality and sterility of manufacturing these products. We developed a model for estimating the manufacturing costs of cell therapy products and optimizing the performance of academic GMP-facilities. The "Clean-Room Technology Assessment Technique" (CTAT) was tested prospectively in the GMP facility of BCRT, Berlin, Germany, then retrospectively in the GMP facility of the University of California-Davis, California, USA. CTAT is a two-level model: level one identifies operational (core) processes and measures their fixed costs; level two identifies production (supporting) processes and measures their variable costs. The model comprises several tools to measure and optimize performance of these processes. Manufacturing costs were itemized using adjusted micro-costing system. CTAT identified GMP activities with strong correlation to the manufacturing process of cell-based products. Building best practice standards allowed for performance improvement and elimination of human errors. The model also demonstrated the unidirectional dependencies that may exist among the core GMP activities. When compared to traditional business models, the CTAT assessment resulted in a more accurate allocation of annual expenses. The estimated expenses were used to set a fee structure for both GMP facilities. A mathematical equation was also developed to provide the final product cost. CTAT can be a useful tool in estimating accurate costs for the ATMPs manufactured in an optimized GMP process. These estimates are useful when analyzing the cost-effectiveness of these novel interventions. Copyright © 2013 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
Dan Loeffler; David E. Calkin; Robin P. Silverstein
2006-01-01
Utilizing timber harvest residues (biomass) for renewable energy production provides an alternative disposal method to onsite burning that may improve the economic viability of hazardous fuels treatments. Due to the relatively low value of biomass, accurate estimates of biomass volumes and costs of collection and delivery are essential if investment in renewable energy...
Forecasting Construction Cost Index based on visibility graph: A network approach
NASA Astrophysics Data System (ADS)
Zhang, Rong; Ashuri, Baabak; Shyr, Yu; Deng, Yong
2018-03-01
Engineering News-Record (ENR), a professional magazine in the field of global construction engineering, publishes the Construction Cost Index (CCI) every month. Cost estimators and contractors assess projects, arrange budgets, and prepare bids by forecasting CCI. However, fluctuations and uncertainties in CCI can lead to unreliable estimates. This paper aims at achieving more accurate predictions of CCI through a network approach in which the time series is first converted into a visibility graph and future values are forecast using link prediction. According to the experimental results, the proposed method shows satisfactory performance, with acceptable error measures. Compared with other methods, the proposed method is easier to implement and forecasts CCI with smaller errors. The results indicate that the proposed method can provide considerably accurate CCI predictions, contributing to construction engineering by assisting individuals and organizations in reducing costs and making project schedules.
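The visibility-graph step described here is straightforward to sketch: each observation becomes a node, and two observations "see" each other if no intermediate value blocks the straight line between their tops. The function below builds the natural visibility graph of a short, invented index series (not real ENR CCI data); the forecasting step, e.g. via link prediction on the graph, is omitted:

```python
# Minimal sketch of the natural visibility graph mapping of a time
# series. The index values are illustrative only.

def visibility_graph(series):
    """Return the edge set of the natural visibility graph of a series."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            # (i, j) are connected if every intermediate point lies
            # strictly below the line joining (i, y_i) and (j, y_j).
            visible = all(
                series[k] < series[j]
                + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges

# Toy monthly index values (not real CCI data).
cci = [100.0, 101.2, 100.8, 102.5, 103.1, 102.9]
edges = visibility_graph(cci)
degree = {i: sum(1 for e in edges if i in e) for i in range(len(cci))}
```

Consecutive observations are always mutually visible, so the graph contains at least the chain of adjacent nodes; link prediction then scores unobserved node pairs to extrapolate the series.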
Physician awareness of drug cost: a systematic review.
Allan, G Michael; Lexchin, Joel; Wiebe, Natasha
2007-09-01
Pharmaceutical costs are the fastest-growing health-care expense in most developed countries. Higher drug costs have been shown to negatively impact patient outcomes. Studies suggest that doctors have a poor understanding of pharmaceutical costs, but the data are variable and there is no consistent pattern in awareness. We designed this systematic review to investigate doctors' knowledge of the relative and absolute costs of medications and to determine the factors that influence awareness. Our search strategy included The Cochrane Library, EconoLit, EMBASE, and MEDLINE as well as reference lists and contact with authors who had published two or more articles on the topic or who had published within 10 y of the commencement of our review. Studies were included if: either doctors, trainees (interns or residents), or medical students were surveyed; there were more than ten survey respondents; cost of pharmaceuticals was estimated; results were expressed quantitatively; there was a clear description of how authors defined "accurate estimates"; and there was a description of how the true cost was determined. Two authors reviewed each article for eligibility and extracted data independently. Cost accuracy outcomes were summarized, but data were not combined in meta-analysis because of extensive heterogeneity. Qualitative data related to physicians and drug costs were also extracted. The final analysis included 24 articles. Cost accuracy was low; 31% of estimates were within 20% or 25% of the true cost, and fewer than 50% were accurate by any definition of cost accuracy. Methodological weaknesses were common, and studies of low methodological quality showed better cost awareness. The most important factor influencing the pattern and accuracy of estimation was the true cost of therapy. High-cost drugs were estimated more accurately than inexpensive ones (74% versus 31%, Chi-square p < 0.001). 
Doctors consistently overestimated the cost of inexpensive products and underestimated the cost of expensive ones (binomial test, 89/101, p < 0.001). When asked, doctors indicated that they want cost information and feel it would improve their prescribing but that it is not accessible. Doctors' ignorance of costs, combined with their tendency to underestimate the price of expensive drugs and overestimate the price of inexpensive ones, demonstrate a lack of appreciation of the large difference in cost between inexpensive and expensive drugs. This discrepancy in turn could have profound implications for overall drug expenditures. Much more focus is required in the education of physicians about costs and the access to cost information. Future research should focus on the accessibility and reliability of medical cost information and whether the provision of this information is used by doctors and makes a difference to physician prescribing. Additionally, future work should strive for higher methodological standards to avoid the biases we found in the current literature, including attention to the method of assessing accuracy that allows larger absolute estimation ranges for expensive drugs.
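The review's accuracy criterion, an estimate falling within a fixed percentage band (20% or 25%) of the true cost, can be made concrete in a few lines; the figures below are invented for illustration:

```python
# Sketch of the kind of accuracy definition used across the reviewed
# surveys: an estimate counts as "accurate" if it falls within a fixed
# percentage band of the true cost. Values are illustrative.

def is_accurate(estimate, true_cost, band=0.25):
    return abs(estimate - true_cost) <= band * true_cost

# (guess, true cost) pairs for three hypothetical drugs.
estimates = [(12.0, 10.0), (2.0, 10.0), (95.0, 100.0)]
accurate = [is_accurate(e, t) for e, t in estimates]
share = sum(accurate) / len(accurate)
```

Note the asymmetry the review flags: a fixed percentage band allows a much larger absolute error for expensive drugs than for cheap ones.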
NASA Technical Reports Server (NTRS)
Mizell, Carolyn; Malone, Linda
2007-01-01
It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with a high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project will be analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.
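One way to convey the level of uncertainty in an initial estimate, in the spirit of the simulation approach described (though not the paper's actual process model), is a simple Monte Carlo sketch over assumed input distributions:

```python
# Hedged sketch: sample uncertain inputs many times and report a
# distribution (median and P90) rather than a single point cost.
# The distributions, parameters, and rate are made up for illustration.
import random

random.seed(42)

def simulate_cost(n_trials=10000):
    costs = []
    for _ in range(n_trials):
        size_ksloc = random.triangular(40, 80, 55)       # uncertain size
        productivity = random.triangular(2.0, 4.0, 3.0)  # ksloc/staff-month
        rate = 15.0                                      # $k per staff-month
        costs.append(size_ksloc / productivity * rate)
    costs.sort()
    return costs[len(costs) // 2], costs[int(0.9 * len(costs))]

median, p90 = simulate_cost()  # median and 90th-percentile cost, $k
```

Reporting the spread between the median and P90 makes the early-phase uncertainty explicit instead of hiding it behind a point estimate.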
Costing the supply chain for delivery of ACT and RDTs in the public sector in Benin and Kenya.
Shretta, Rima; Johnson, Brittany; Smith, Lisa; Doumbia, Seydou; de Savigny, Don; Anupindi, Ravi; Yadav, Prashant
2015-02-05
Studies have shown that supply chain costs are a significant proportion of total programme costs. Nevertheless, the costs of delivering specific products are poorly understood, and ballpark estimates are often used, leading to inadequate planning for the budgetary implications of supply chain expenses. The purpose of this research was to estimate the country-level costs of the public sector supply chain for artemisinin-based combination therapy (ACT) and rapid diagnostic tests (RDTs) from the central to the peripheral levels in Benin and Kenya. A micro-costing approach was used, and primary data on the various cost components of the supply chain were collected at the central, intermediate, and facility levels between September and November 2013. Information sources included central warehouse databases, health facility records, transport schedules, and expenditure reports. Data from document reviews and semi-structured interviews were used to identify cost inputs and estimate actual costs. Sampling was purposive to isolate key variables of interest. Survey guides were developed and administered electronically. Data were extracted into Microsoft Excel, and the supply chain cost per unit of ACT and RDT distributed by function and level of system was calculated. In Benin, supply chain costs added USD 0.2011 to the initial acquisition cost of ACT and USD 0.3375 to RDTs (normalized to USD 1). In Kenya, they added USD 0.2443 to the acquisition cost of ACT and USD 0.1895 to RDTs (normalized to USD 1). Total supply chain costs accounted for more than 30% of the initial acquisition cost of the products in some cases, and these costs were highly sensitive to product volumes. The major cost drivers were found to be labour, transport, and utilities, with health facilities carrying the majority of the cost per unit of product. Accurate cost estimates are needed to ensure adequate resources are available for supply chain activities. 
Product volumes should be considered when costing supply chain functions rather than dollar value. Further work is needed to develop extrapolative costing models that can be applied at country level without extensive micro-costing exercises. This will allow other countries to generate more accurate estimates in the future.
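The per-unit figure described above reduces to allocating function-level costs over product volume and normalizing to a USD 1 acquisition cost. A minimal sketch with invented figures (not the Benin/Kenya results):

```python
# Sketch of the per-unit supply chain cost calculation: sum the costs
# of supply chain functions and divide by units distributed. Figures
# are illustrative only.

def supply_chain_cost_per_unit(function_costs, units_distributed):
    """Total supply chain cost allocated per unit of product."""
    return sum(function_costs.values()) / units_distributed

costs = {"labour": 120000.0, "transport": 90000.0, "utilities": 30000.0}
units = 1_000_000
per_unit = supply_chain_cost_per_unit(costs, units)  # USD per unit

acquisition = 1.0                                    # normalized to USD 1
share_of_acquisition = per_unit / acquisition
```

Because the numerator is largely fixed, the per-unit cost is highly sensitive to volume, which is exactly why the abstract recommends costing by volume rather than dollar value.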
Asymptotic Analysis Of The Total Least Squares ESPRIT Algorithm
NASA Astrophysics Data System (ADS)
Ottersten, B. E.; Viberg, M.; Kailath, T.
1989-11-01
This paper considers the problem of estimating the parameters of multiple narrowband signals arriving at an array of sensors. Modern approaches to this problem often involve costly procedures for calculating the estimates. The ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm was recently proposed as a means for obtaining accurate estimates without requiring a costly search of the parameter space. This method utilizes an array invariance to arrive at a computationally efficient multidimensional estimation procedure. Herein, the asymptotic distribution of the estimation error is derived for the Total Least Squares (TLS) version of ESPRIT. The Cramer-Rao Bound (CRB) for the ESPRIT problem formulation is also derived and found to coincide with the variance of the asymptotic distribution through numerical examples. The method is also compared to least squares ESPRIT and MUSIC as well as to the CRB for a calibrated array. Simulations indicate that the theoretic expressions can be used to accurately predict the performance of the algorithm.
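For readers unfamiliar with the algorithm, a compact numerical sketch of TLS-ESPRIT for a uniform linear array follows. The scenario (8 sensors, 2 sources, half-wavelength spacing, white noise) is invented for illustration and is not taken from the paper's simulations:

```python
# TLS-ESPRIT sketch: estimate directions of arrival from the rotational
# invariance between two overlapping subarrays of a uniform linear array.
import numpy as np

rng = np.random.default_rng(0)
m, d, N = 8, 2, 2000             # sensors, sources, snapshots
spacing = 0.5                    # element spacing in wavelengths
true_deg = np.array([-10.0, 25.0])

# Simulated narrowband snapshots with additive white noise.
k = np.arange(m)[:, None]
A = np.exp(2j * np.pi * spacing * k * np.sin(np.radians(true_deg)))
S = (rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N)))
X = A @ S + noise

# Signal subspace: eigenvectors of the d largest covariance eigenvalues.
R = X @ X.conj().T / N
_, V = np.linalg.eigh(R)         # eigh returns ascending eigenvalues
Es = V[:, -d:]

# TLS solution of E1 @ Psi ~= E2 via the SVD of [E1 E2].
E1, E2 = Es[:-1], Es[1:]
_, _, Vh = np.linalg.svd(np.hstack([E1, E2]))
Vr = Vh.conj().T                 # right singular vectors, columns by
V12 = Vr[:d, d:]                 # decreasing singular value
V22 = Vr[d:, d:]
Psi = -V12 @ np.linalg.inv(V22)

# Eigenvalues of Psi carry the inter-subarray phase shifts.
phases = np.angle(np.linalg.eigvals(Psi))
est_deg = np.sort(np.degrees(np.arcsin(phases / (2 * np.pi * spacing))))
```

Note there is no search over the parameter space: the direction estimates fall out of an eigendecomposition, which is the computational advantage the abstract highlights.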
Cost Analysis of Instructional Technology.
ERIC Educational Resources Information Center
Johnson, F. Craig; Dietrich, John E.
Although some serious limitations in the cost analysis technique do exist, the need for cost data in decision making is so great that every effort should be made to obtain accurate estimates. This paper discusses the issues that arise when an attempt is made to make quality, trade-off, or scope decisions based on cost data. Three methods…
Development of Star Tracker System for Accurate Estimation of Spacecraft Attitude
2009-12-01
For a high-cost spacecraft with accurate pointing requirements, the use of a star tracker is the preferred method for attitude determination. The...solutions, however, there are certain costs with using this algorithm. There are significantly more features a triangle can provide when compared to an...to the other. The non-rotating geocentric equatorial frame provides an inertial frame for the two-body problem of a satellite in orbit. In this
Nagata, Tomohisa; Mori, Koji; Aratake, Yutaka; Ide, Hiroshi; Ishida, Hiromi; Nobori, Junichiro; Kojima, Reiko; Odagami, Kiminori; Kato, Anna; Tsutsumi, Akizumi; Matsuda, Shinya
2014-01-01
The aim of the present study was to develop standardized cost estimation tools that provide information to employers about occupational safety and health (OSH) activities for effective and efficient decision making in Japanese companies. We interviewed OSH staff members, including full-time professional occupational physicians, to list all OSH activities. Using activity-based costing, cost data were obtained from retrospective analyses of occupational safety and health costs over a 1-year period in three manufacturing workplaces, and of occupational health services costs in four manufacturing workplaces. We additionally verified the tools in four workplaces, including service businesses. We created standardized OSH and occupational health cost estimation tools. OSH costs consisted of personnel costs, expenses, outsourcing costs and investments for 15 OSH activities. The tools provided accurate, relevant information on OSH activities and occupational health services. The standardized information obtained from our OSH and occupational health cost estimation tools can be used to manage OSH costs, make comparisons of OSH costs between companies and organizations, and help occupational health physicians and employers to determine the best course of action.
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1983-01-01
Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator which is viewed as especially promising is the errors-in-variable regression estimator. Investigations established the need for caution with this estimator when the ratio of two error variances is not precisely known.
Estimating the effectiveness of further sampling in species inventories
Keating, K.A.; Quinn, J.F.; Ivie, M.A.; Ivie, L.L.
1998-01-01
Estimators of the number of additional species expected in the next Δn samples offer a potentially important tool for improving cost-effectiveness of species inventories but are largely untested. We used Monte Carlo methods to compare 11 such estimators across a range of community structures and sampling regimes, and validated our results, where possible, using empirical data from vascular plant and beetle inventories from Glacier National Park, Montana, USA. We found that B. Efron and R. Thisted's 1976 negative binomial estimator was most robust to differences in community structure and that it was among the most accurate estimators when sampling was from model communities with structures resembling the large, heterogeneous communities that are the likely targets of major inventory efforts. Other estimators may be preferred under specific conditions, however. For example, when sampling was from model communities with highly even species-abundance distributions, estimates based on the Michaelis-Menten model were most accurate; when sampling was from moderately even model communities with S=10 species or communities with highly uneven species-abundance distributions, estimates based on Gleason's (1922) species-area model were most accurate. We suggest that use of such methods in species inventories can help improve cost-effectiveness by providing an objective basis for redirecting sampling to more-productive sites, methods, or time periods as the expectation of detecting additional species becomes unacceptably low.
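One of the simpler estimators compared, the Michaelis-Menten species-accumulation model, can be sketched with a coarse grid-search fit and then used to project species expected in further sampling. The accumulation data below are fabricated for illustration:

```python
# Sketch: fit S(n) = Smax * n / (B + n) to cumulative species counts by
# grid search, then project species expected in the next dn samples.
# The accumulation data are invented.

def mm(n, smax, b):
    return smax * n / (b + n)

def fit_mm(samples, species, smax_grid, b_grid):
    best, best_err = None, float("inf")
    for smax in smax_grid:
        for b in b_grid:
            err = sum((mm(n, smax, b) - s) ** 2
                      for n, s in zip(samples, species))
            if err < best_err:
                best, best_err = (smax, b), err
    return best

samples = [5, 10, 20, 40, 80]            # cumulative samples taken
species = [20, 33, 50, 67, 80]           # cumulative species observed

smax, b = fit_mm(samples, species,
                 [90 + i for i in range(31)],   # Smax candidates 90..120
                 [15 + i for i in range(31)])   # B candidates 15..45

# Species expected in the next 20 samples.
expected_new = mm(samples[-1] + 20, smax, b) - mm(samples[-1], smax, b)
```

When `expected_new` drops below some threshold, further sampling at that site is no longer cost-effective, which is the redirection logic the abstract describes.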
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1994-01-01
NASA is responsible for developing much of the nation's future space technology. Cost estimates for new programs are required early in the planning process so that decisions can be made accurately. Because of the long lead times required to develop space hardware, the cost estimates are frequently required 10 to 15 years before the program delivers hardware. The system design in conceptual phases of a program is usually only vaguely defined, and the technology used is often state-of-the-art or beyond. These factors combine to make cost estimating for conceptual programs very challenging. This paper describes an effort to develop parametric cost estimating methods for space systems in the conceptual design phase. The approach is to identify variables that drive cost, such as weight, quantity, development culture, design inheritance, and time. The nature of the relationships between the driver variables and cost will be discussed. In particular, the relationship between weight and cost will be examined in detail. A theoretical model of cost will be developed and tested statistically against a historical database of major research and development projects.
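A common form for the weight-cost relationship examined here is a power-law cost estimating relationship (CER), cost = a · weight^b, fit by linear regression in log space. The sketch below uses a fabricated "historical database", not NASA data:

```python
# Sketch of a power-law CER fit: regress log(cost) on log(weight),
# then predict the cost of a new system. All data are fabricated.
import math

def fit_power_law(weights, costs):
    xs = [math.log(w) for w in weights]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

weights = [100, 200, 400, 800]        # kg, fabricated history
costs = [12.0, 19.0, 30.0, 48.0]      # $M, fabricated history

a, b = fit_power_law(weights, costs)
predicted = a * 600 ** b              # cost estimate for a 600 kg system
```

The exponent b < 1 captures the economy of scale often assumed in such relationships; a real CER would also condition on the other drivers the paper lists (quantity, development culture, inheritance, time).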
Communications Support for National Flight Data Center Information System.
1980-11-01
functions: establishment and termination, message transfer, and retransmission of blocks. Establishment and Termination: the establishment procedure...relate to hardware components, transmission facilities, and cost relationships. The costs are grouped into one-time and recurring costs. L.2 HARDWARE...the NADIN switching center in Atlanta. The purchase and installation costs are estimated to be $1000. L.4 COST RELATIONSHIPS: In order to accurately
Equivalent Mass versus Life Cycle Cost for Life Support Technology Selection
NASA Technical Reports Server (NTRS)
Jones, Harry
2003-01-01
The decision to develop a particular life support technology or to select it for flight usually depends on the cost to develop and fly it. Other criteria - performance, safety, reliability, crew time, and risk - are considered, but cost is always an important factor. Because launch cost accounts for most of the cost of planetary missions, and because launch cost is directly proportional to the mass launched, equivalent mass has been used instead of cost to select life support technology. The equivalent mass of a life support system includes the estimated masses of the hardware and of the pressurized volume, power supply, and cooling system that the hardware requires. The equivalent mass is defined as the total payload launch mass needed to provide and support the system. An extension of equivalent mass, Equivalent System Mass (ESM), has been established for use in Advanced Life Support. A crew time mass-equivalent and sometimes other non-mass factors are added to equivalent mass to create ESM. Equivalent mass is an estimate of the launch cost only. For earth orbit rather than planetary missions, the launch cost is usually exceeded by the cost of Design, Development, Test, and Evaluation (DDT&E). Equivalent mass is used only in life support analysis. Life Cycle Cost (LCC) is much more commonly used. LCC includes DDT&E, launch, and operations costs. Since LCC includes launch cost, it is always a more accurate cost estimator than equivalent mass. The relative costs of development, launch, and operations vary depending on the mission design, destination, and duration. Since DDT&E or operations may cost more than launch, LCC may give a more accurate cost ranking than equivalent mass. To be sure of identifying the lowest cost technology for a particular mission, we should use LCC rather than equivalent mass.
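The two selection metrics compared above can be put side by side in a toy calculation. The equivalency factors, masses, and costs below are invented for illustration and are not Advanced Life Support values:

```python
# Toy comparison of equivalent mass and life cycle cost for a life
# support technology. All factors and figures are hypothetical.

def equivalent_system_mass(hw_kg, vol_m3, power_kw, cooling_kw, crew_hr):
    # Assumed mass-equivalency factors (kg per unit of each resource).
    VOL, PWR, COOL, CREW = 9.2, 83.0, 40.0, 0.5
    return (hw_kg + VOL * vol_m3 + PWR * power_kw
            + COOL * cooling_kw + CREW * crew_hr)

def life_cycle_cost(ddte, esm_kg, launch_cost_per_kg, operations):
    # LCC = development + launch (proportional to launched mass) + ops.
    return ddte + esm_kg * launch_cost_per_kg + operations

esm = equivalent_system_mass(hw_kg=500, vol_m3=10, power_kw=3,
                             cooling_kw=3, crew_hr=100)
lcc = life_cycle_cost(ddte=50e6, esm_kg=esm,
                      launch_cost_per_kg=20e3, operations=30e6)
```

In this toy case the launch term is a minority of LCC, illustrating the paper's point: when DDT&E or operations dominate, ranking technologies by equivalent mass alone can mislead.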
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, L.T.; Hickey, M.
This paper summarizes the progress to date by CH2M HILL and the UKAEA in development of a parametric modelling capability for estimating the costs of large nuclear decommissioning projects in the United Kingdom (UK) and Europe. The ability to successfully apply parametric cost estimating techniques will be a key factor in commercial success in the UK and European multi-billion dollar waste management, decommissioning and environmental restoration markets. The most useful parametric models will be those that incorporate individual components representing major elements of work: reactor decommissioning, fuel cycle facility decommissioning, waste management facility decommissioning and environmental restoration. Models must be sufficiently robust to estimate indirect costs and overheads, permit pricing analysis and adjustment, and accommodate the intricacies of international monetary exchange, currency fluctuations and contingency. The development of a parametric cost estimating capability is also a key component in building a forward estimating strategy. The forward estimating strategy will enable the preparation of accurate and cost-effective out-year estimates, even when work scope is poorly defined or as yet indeterminate. Preparation of cost estimates for work outside the organization's current sites, for which detailed measurement is not possible and historical cost data do not exist, will also be facilitated. (authors)
Land use, forest density, soil mapping, erosion, drainage, salinity limitations
NASA Technical Reports Server (NTRS)
Yassoglou, N. J. (Principal Investigator)
1973-01-01
The author has identified the following significant results. The results of analyses show that it is possible to obtain information of practical significance as follows: (1) A quick and accurate estimate of the proper use of the valuable land can be made on the basis of temporal and spectral characteristics of the land features. (2) A rather accurate delineation of the major forest formations in the test areas was achieved on the basis of spatial and spectral characteristics of the studied areas. The forest stands were separated into two density classes: dense forest and broken forest. On the basis of ERTS-1 data and the existing ground truth information, a rather accurate mapping of the major vegetational forms of the mountain ranges can be made. (3) Major soil formations are mappable from ERTS-1 data: recent alluvial soils; soil on Quaternary deposits; severely eroded soil and lithosol; and wet soils. (4) An estimation of cost benefits cannot be made accurately at this stage of the investigation. However, a rough estimate of the ratio of the cost of obtaining the same amount of information from ERTS-1 data and from conventional operations would be approximately 1:6 to 1:10, in favor of ERTS-1.
French, Michael T.; Popovici, Ioana; Tapsell, Lauren
2008-01-01
Federal, State, and local government agencies require current and accurate cost information for publicly funded substance abuse treatment programs to guide program assessments and reimbursement decisions. The Center for Substance Abuse Treatment (CSAT) published a list of modality-specific cost bands for this purpose in 2002. However, the upper and lower values in these ranges are so wide that they offer little practical guidance for funding agencies. Thus, the dual purpose of this investigation was to assemble the most current and comprehensive set of economic cost estimates from the readily-available literature and then use these estimates to develop updated modality-specific cost bands for more reasonable reimbursement policies. Although cost estimates were scant for some modalities, the recommended cost bands are based on the best available economic research, and we believe these new ranges will be more useful and pertinent for all stakeholders of publicly-funded substance abuse treatment. PMID:18294803
Bankert, Brian; Coberley, Carter; Pope, James E; Wells, Aaron
2015-02-01
This paper presents a new approach to estimating the indirect costs of health-related absenteeism. Productivity losses related to employee absenteeism have negative business implications for employers and these losses effectively deprive the business of an expected level of employee labor. The approach herein quantifies absenteeism cost using an output per labor hour-based method and extends employer-level results to the region. This new approach was applied to the employed population of 3 health insurance carriers. The economic cost of absenteeism was estimated to be $6.8 million, $0.8 million, and $0.7 million on average for the 3 employers; regional losses were roughly twice the magnitude of employer-specific losses. The new approach suggests that costs related to absenteeism for high output per labor hour industries exceed similar estimates derived from application of the human capital approach. The materially higher costs under the new approach emphasize the importance of accurately estimating productivity losses.
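The contrast drawn above, valuing lost hours at output per labor hour rather than at the wage (as the human capital approach does), is a one-line calculation each way; the figures in this sketch are hypothetical:

```python
# Sketch of the output-per-labor-hour valuation of absenteeism versus
# the wage-based human capital approach. All figures are hypothetical.

def absenteeism_cost(absent_hours, output_per_labor_hour):
    return absent_hours * output_per_labor_hour

def human_capital_cost(absent_hours, wage_per_hour):
    return absent_hours * wage_per_hour

hours = 50_000  # health-related absence hours per year, hypothetical

output_cost = absenteeism_cost(hours, output_per_labor_hour=120.0)
wage_cost = human_capital_cost(hours, wage_per_hour=35.0)
```

Whenever output per labor hour exceeds the wage, the first estimate exceeds the second, which is the pattern the study reports for high-output industries.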
A LEO Satellite Navigation Algorithm Based on GPS and Magnetometer Data
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Bar-Itzhack, Itzhack; Harman, Rick; Bauer, Frank H. (Technical Monitor)
2000-01-01
The Global Positioning System (GPS) has become a standard method for low cost onboard satellite orbit determination. The use of a GPS receiver as an attitude and rate sensor has also been developed in the recent past. Additionally, focus has been given to attitude and orbit estimation using the magnetometer, a low cost, reliable sensor. Combining measurements from both GPS and a magnetometer can provide a robust navigation system that takes advantage of the estimation qualities of both measurements. Ultimately a low cost, accurate navigation system can result, potentially eliminating the need for more costly sensors, including gyroscopes.
Cost Analysis In A Multi-Mission Operations Environment
NASA Technical Reports Server (NTRS)
Newhouse, M.; Felton, L.; Bornas, N.; Botts, D.; Roth, K.; Ijames, G.; Montgomery, P.
2014-01-01
Spacecraft control centers have evolved from dedicated, single-mission or single mission-type support to multi-mission, service-oriented support for operating a variety of mission types. At the same time, available money for projects is shrinking and competition for new missions is increasing. These factors drive the need for an accurate and flexible model to support estimating service costs for new or extended missions; the cost model in turn drives the need for an accurate and efficient approach to service cost analysis. The National Aeronautics and Space Administration (NASA) Huntsville Operations Support Center (HOSC) at Marshall Space Flight Center (MSFC) provides operations services to a variety of customers around the world. HOSC customers range from launch vehicle test flights; to International Space Station (ISS) payloads; to small, short duration missions; and have included long duration flagship missions. The HOSC recently completed a detailed analysis of service costs as part of the development of a complete service cost model. The cost analysis process required the team to address a number of issues. One of the primary issues involves the difficulty of reverse engineering individual mission costs in a highly efficient multi-mission environment, along with a related issue of the value of detailed metrics or data to the cost model versus the cost of obtaining accurate data. Another concern is the difficulty of balancing costs between missions of different types and sizes and extrapolating costs to different mission types. The cost analysis also had to address issues relating to providing shared, cloud-like services in a government environment, and then assigning an uncertainty or risk factor to cost estimates that are based on current technology but will be executed using future technology.
Finally, the cost analysis needed to consider how to validate the resulting cost models, taking into account the non-homogeneous nature of the available cost data and the decreasing flight rate. This paper presents the issues encountered during the HOSC cost analysis process and the associated lessons learned. These lessons can be used when planning for a new multi-mission operations center or in the transformation from a dedicated control center to multi-center operations, as an aid in defining processes that support future cost analysis and estimation. The lessons can also be used by mature service-oriented, multi-mission control centers to streamline or refine their cost analysis process.
Bi-fluorescence imaging for estimating accurately the nuclear condition of Rhizoctonia spp.
USDA-ARS?s Scientific Manuscript database
Aims: To simplify the determination of the nuclear condition of pathogenic Rhizoctonia, which currently must be performed either with two fluorescent dyes, which is more costly and time-consuming, or with a single fluorescent dye, which is less accurate. Methods and Results: A red primary ...
ERIC Educational Resources Information Center
Test Service Bulletin, 1951
1951-01-01
Before a business adopts tests in personnel selection, it should first determine whether the increased cost and bother are likely to be offset by the savings that come from this additional selection procedure. Fairly accurate estimates of the cost of testing can be made, but in educational testing it is harder than in business to measure the results in…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruth, M.; Timbario, T. A.; Timbario, T. J.
2011-01-01
Currently, several cost-per-mile calculators exist that can provide estimates of acquisition and operating costs for consumers and fleets. However, these calculators are limited in their ability to determine the difference in cost per mile for consumer versus fleet ownership, to calculate the costs beyond one ownership period, to show the sensitivity of the cost per mile to the annual vehicle miles traveled (VMT), and to estimate future increases in operating and ownership costs. Oftentimes, these tools apply a constant percentage increase over the time period of vehicle operation, or in some cases, no increase in direct costs at all over time. A more accurate cost-per-mile calculator has been developed that allows the user to analyze these costs for both consumers and fleets. The calculator was developed to allow simultaneous comparisons of conventional light-duty internal combustion engine (ICE) vehicles, mild and full hybrid electric vehicles (HEVs), and fuel cell vehicles (FCVs). This paper is a summary of the development by the authors of a more accurate cost-per-mile calculator that allows the user to analyze vehicle acquisition and operating costs for both consumers and fleets. Cost-per-mile results are reported for consumer-operated vehicles travelling 15,000 miles per year and for fleets travelling 25,000 miles per year.
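The core arithmetic such a calculator performs can be sketched in a few lines. The function and all figures below are illustrative assumptions, not values from the calculator described above; the sketch only shows how escalating operating costs and annual VMT feed into a cost-per-mile figure.

```python
def cost_per_mile(acquisition, resale, annual_operating, annual_vmt,
                  years, escalation=0.03):
    """Ownership cost per mile with annually escalating operating costs.

    acquisition      -- purchase price of the vehicle
    resale           -- resale value at the end of the ownership period
    annual_operating -- first-year operating cost (fuel, maintenance, ...)
    annual_vmt       -- vehicle miles traveled per year
    years            -- length of the ownership period in years
    escalation       -- assumed annual growth rate of operating costs
    """
    operating = sum(annual_operating * (1 + escalation) ** y
                    for y in range(years))
    total_cost = (acquisition - resale) + operating
    return total_cost / (annual_vmt * years)

# Illustrative consumer scenario: 15,000 miles/year over 5 years.
print(round(cost_per_mile(28_000, 12_000, 3_500, 15_000, 5), 3))
```

Setting `escalation` to zero reproduces the constant-cost behavior the abstract criticizes; raising it shows how sensitive the per-mile figure is to assumed cost growth.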
Jones, Terry L; Schlegel, Cara
2014-02-01
Accurate, precise, unbiased, reliable, and cost-effective estimates of nursing time use are needed to ensure safe staffing levels. Direct observation of nurses is costly, and conventional surrogate measures have limitations. To test the potential of electronic capture of time and motion through real-time location systems (RTLS), a pilot study was conducted to assess efficacy (method agreement) of RTLS time use; inter-rater reliability of RTLS time-use estimates; and associated costs. Method agreement was high (mean absolute difference = 28 seconds); inter-rater reliability was high (ICC = 0.81-0.95; mean absolute difference = 2 seconds); and costs for obtaining RTLS time-use estimates on a single nursing unit exceeded $25,000. Continued experimentation with RTLS to obtain time-use estimates for nursing staff is warranted. © 2013 Wiley Periodicals, Inc.
Genie M. Fleming; Joseph M. Wunderle; David N. Ewert; Joseph O' Brien
2014-01-01
Aim: Non-destructive methods for quantifying above-ground plant biomass are important tools in many ecological studies and management endeavours, but estimation methods can be labour intensive and particularly difficult in structurally diverse vegetation types. We aimed to develop a low-cost, but reasonably accurate, estimation technique within early-successional...
USDA-ARS?s Scientific Manuscript database
Accurate estimates of daily crop evapotranspiration (ET) are needed for efficient irrigation management, especially in arid and semi-arid irrigated regions where crop water demand exceeds rainfall. The impact of inaccurate ET estimates can be tremendous in both irrigation cost and the increased dema...
Assembly-line Simulation Program
NASA Technical Reports Server (NTRS)
Chamberlain, Robert G.; Zendejas, Silvino; Malhotra, Shan
1987-01-01
Costs and profits estimated for models based on user inputs. Standard Assembly-line Manufacturing Industry Simulation (SAMIS) program generalized to be useful for production-line manufacturing companies. Provides accurate and reliable means of comparing alternative manufacturing processes. Used to assess impact of changes in financial parameters such as cost of resources and services, inflation rates, interest rates, tax policies, and required rate of return on equity. Most important capability is ability to estimate prices manufacturer would have to receive for its products to recover all costs of production and make specified profit. Written in TURBO PASCAL.
Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod
Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.
2008-01-01
To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or limited budgets determine that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate at which units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and more practical to implement than the others. © 2008 The Society of Population Ecology and Springer.
A 6-year trend of the healthcare costs of arthritis in a population-based cohort of older women.
Lo, Tkt; Parkinson, Lynne; Cunich, Michelle; Byles, Julie
2016-06-01
To provide an accurate representation of the economic burden of arthritis by estimating the adjusted incremental healthcare cost of arthritis at multiple percentiles and reporting the cost trends across time. A healthcare cost study based on health survey and linked administrative data, where costs were estimated from the government's perspective in dollars per person per year. Quantile regression was used to estimate the adjusted incremental cost at the 25th, 50th, 75th, 90th, and 95th percentiles. Data from 4287 older Australian women were included. The median incremental healthcare cost of arthritis was, in 2012 Australian dollars, $480 (95% CI: $498-759) in 2009; however, 5% of individuals had costs five times higher than the 'average individual' with arthritis. The healthcare cost of arthritis did not increase significantly from 2003 to 2009. The healthcare cost of arthritis represents a substantial burden for governments. Future research should continue to monitor the economic burden of arthritis.
Process-based Cost Estimation for Ramjet/Scramjet Engines
NASA Technical Reports Server (NTRS)
Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John
2003-01-01
Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As the product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.
A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions
NASA Technical Reports Server (NTRS)
Foreman, Veronica; Le Moigne, Jacqueline; de Weck, Oliver
2016-01-01
Satellite constellations present unique capabilities and opportunities to Earth orbiting and near-Earth scientific and communications missions, but also present new challenges to cost estimators. An effective and adaptive cost model is essential to successful mission design and implementation, and as Distributed Spacecraft Missions (DSM) become more common, cost estimating tools must become more representative of these types of designs. Existing cost models often focus on a single spacecraft and require extensive design knowledge to produce high fidelity estimates. Previous research has examined the shortcomings of existing cost practices as they pertain to the early stages of mission formulation, for both individual satellites and small satellite constellations. Recommendations have been made for how to improve the cost models for individual satellites one-at-a-time, but much of the complexity in constellation and DSM cost modeling arises from constellation systems level considerations that have not yet been examined. This paper constitutes a survey of the current state-of-the-art in cost estimating techniques with recommendations for improvements to increase the fidelity of future constellation cost estimates. To enable our investigation, we have developed a cost estimating tool for constellation missions. The development of this tool has revealed three high-priority weaknesses within existing parametric cost estimating capabilities as they pertain to DSM architectures: design iteration, integration and test, and mission operations. Within this paper we offer illustrative examples of these discrepancies and make preliminary recommendations for addressing them. DSM and satellite constellation missions are shifting the paradigm of space-based remote sensing, showing promise in the realms of Earth science, planetary observation, and various heliophysical applications. 
To fully reap the benefits of DSM technology, accurate and relevant cost estimating capabilities must exist; this paper offers insights critical to the future development and implementation of DSM cost estimating tools.
Parametric cost estimation for space science missions
NASA Astrophysics Data System (ADS)
Lillie, Charles F.; Thompson, Bruce E.
2008-07-01
Cost estimation for space science missions is critically important in budgeting for successful missions. The process requires consideration of a number of parameters, where many of the values are only known to a limited accuracy. The results of cost estimation are not perfect, but must be calculated and compared with the estimates that the government uses for budgeting purposes. Uncertainties in the input parameters result from evolving requirements for missions that are typically the "first of a kind" with "state-of-the-art" instruments and new spacecraft and payload technologies that make it difficult to base estimates on the cost histories of previous missions. Even the cost of heritage avionics is uncertain due to parts obsolescence and the resulting redesign work. Through experience and use of industry best practices developed in participation with the Aerospace Industries Association (AIA), Northrop Grumman has developed a parametric modeling approach that can provide a reasonably accurate cost range and most probable cost for future space missions. During the initial mission phases, the approach uses mass- and power-based cost estimating relationships (CERs) developed with historical data from previous missions. In later mission phases, when the mission requirements are better defined, these estimates are updated with vendors' bids and "bottoms-up", "grass-roots" material and labor cost estimates based on detailed schedules and assigned tasks. In this paper we describe how we develop our CERs for parametric cost estimation and how they can be applied to estimate the costs for future space science missions like those presented to the Astronomy & Astrophysics Decadal Survey Study Committees.
Empirical cost models for estimating power and energy consumption in database servers
NASA Astrophysics Data System (ADS)
Valdivia Garcia, Harold Dwight
The explosive growth in the size of data centers, coupled with the widespread use of virtualization technology, has made power and energy consumption major concerns for data center administrators. Provisioning decisions must take into consideration not only target application performance but also the power demands and total energy consumption incurred by the hardware and software to be deployed at the data center. Failure to do so will result in damaged equipment, power outages, and inefficient operation. Since database servers comprise one of the most popular and important server applications deployed in such facilities, it becomes necessary to have accurate cost models that can predict the power and energy demands that each database workload will impose on the system. In this work we present an empirical methodology to estimate the power and energy cost of database operations. Our methodology uses multiple-linear regression to derive accurate cost models that depend only on readily available statistics such as selectivity factors, tuple size, number of columns, and relational cardinality. Moreover, our method does not need measurement of individual hardware components, but rather total power and energy consumption measured at a server. We have implemented our methodology and run experiments with several server configurations. Our experiments indicate that we can predict power and energy more accurately than alternative methods found in the literature.
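As a rough illustration of the methodology described above, a multiple-linear-regression cost model can be fit with ordinary least squares on the four predictors the abstract names. The data below are synthetic stand-ins for real power measurements, and every coefficient value is an assumption for demonstration only.

```python
import numpy as np

# Synthetic observations standing in for measured queries. Each row holds
# the four predictors named in the abstract: selectivity factor, tuple size
# (bytes), number of columns, and relational cardinality.
rng = np.random.default_rng(0)
X = rng.uniform([0.0, 50, 2, 1e3], [1.0, 500, 20, 1e6], size=(200, 4))
true_coef = np.array([12.0, 0.01, 0.5, 2e-5])       # assumed, for the demo
y = 80.0 + X @ true_coef + rng.normal(0, 0.5, 200)  # 80 W idle baseline

# Ordinary least squares: power = b0 + b1*sel + b2*size + b3*cols + b4*card
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_power(selectivity, tuple_size, num_columns, cardinality):
    """Predicted server power draw (watts) for one query's statistics."""
    return coef @ [1.0, selectivity, tuple_size, num_columns, cardinality]

print(round(predict_power(0.3, 120, 8, 5e5), 1))
```

The same fit applied to measured energy (joules) instead of power would give the second of the two models the abstract mentions.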
An overall estimation of losses caused by diseases in the Brazilian fish farms.
Tavares-Dias, Marcos; Martins, Maurício Laterça
2017-12-01
Parasitic and infectious diseases are common in finfish, but their economic impact on production is difficult to estimate accurately in a country of large dimensions like Brazil. The aim of this study was to estimate the economic losses caused by disease-related mortality of finfish in Brazil. A model for estimating the costs of parasitic and bacterial diseases in farmed fish is presented, together with an estimate of these economic impacts. We used official production and mortality data for finfish to derive a rough estimate of economic losses. The losses presented here comprise direct and indirect economic costs for freshwater farmed fish, estimated at US$84 million per year. This is the first overall estimate of disease-related losses in finfish production in Brazil derived from available production data. This estimate should help researchers and policy makers approximate the economic cost of disease to the fish farming industry, and should inform public policies on disease control measures as well as priority research lines.
Report #16-P-0122, March 29, 2016. Inaccurate or incomplete documentation of the EPA's cost modeling could prevent a third party from obtaining a full and accurate understanding of how the EPA arrived at its cost estimate for the Tier 3 rule.
Comparing methodologies for the allocation of overhead and capital costs to hospital services.
Tan, Siok Swan; van Ineveld, Bastianus Martinus; Redekop, William Ken; Hakkaart-van Roijen, Leona
2009-06-01
Typically, little consideration is given to the allocation of indirect costs (overheads and capital) to hospital services, compared to the allocation of direct costs. Weighted service allocation is believed to provide the most accurate indirect cost estimation, but the method is time-consuming. To determine whether hourly rate, inpatient day, and marginal mark-up allocation are reliable alternatives to weighted service allocation, the cost approaches were compared independently for appendectomy, hip replacement, cataract, and stroke in representative general hospitals in the Netherlands for 2005. Hourly rate allocation and inpatient day allocation produce estimates that are not significantly different from weighted service allocation. Hourly rate allocation may be a strong alternative to weighted service allocation for hospital services with a relatively short inpatient stay. The use of inpatient day allocation would likely most closely reflect the indirect cost estimates obtained by the weighted service method.
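The two simpler methods the study finds reliable can be sketched directly; each spreads a hospital's indirect cost total in proportion to a single volume measure. All figures below are illustrative assumptions, not values from the study.

```python
def hourly_rate_allocation(indirect_total, total_care_hours, service_hours):
    """Spread indirect costs in proportion to patient-related care time."""
    return indirect_total * service_hours / total_care_hours

def inpatient_day_allocation(indirect_total, total_days, service_days):
    """Spread indirect costs in proportion to inpatient days."""
    return indirect_total * service_days / total_days

# Illustrative hospital-level figures (not from the study):
indirect = 5_000_000   # annual overhead + capital costs

# Indirect cost attributed to one hip replacement under each method:
print(round(hourly_rate_allocation(indirect, 200_000, 36), 2))
print(round(inpatient_day_allocation(indirect, 60_000, 5), 2))
```

The choice of denominator is the whole method: care hours favor short-stay, labor-intensive services, while inpatient days weight long admissions more heavily.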
Kriging Direct and Indirect Estimates of Sulfate Deposition: A Comparison
Gregory A. Reams; Manuela M.P. Huso; Richard J. Vong; Joseph M. McCollum
1997-01-01
Due to logistical and cost constraints, acidic deposition is rarely measured at forest research or sampling locations. A crucial first step to assessing the effects of acid rain on forests is an accurate estimate of acidic deposition at forest sample sites. We examine two methods (direct and indirect) for estimating sulfate deposition at atmospherically unmonitored...
Remote estimation of a managed pine forest evapotranspiration with geospatial technology
S. Panda; D.M. Amatya; G Sun; A. Bowman
2016-01-01
Remote sensing has increasingly been used to estimate evapotranspiration (ET) and its supporting parameters in a rapid, accurate, and cost-effective manner. The goal of this study was to develop remote sensing-based models for estimating ET and the biophysical parameters canopy conductance (gc), upper-canopy temperature, and soil moisture for a mature loblolly pine...
ABC estimation of unit costs for emergency department services.
Holmes, R L; Schroeder, R E
1996-04-01
Rapid evolution of the health care industry forces managers to make cost-effective decisions. Typical hospital cost accounting systems do not provide emergency department managers with the information needed, but emergency department settings are so complex and dynamic as to make the more accurate activity-based costing (ABC) system prohibitively expensive. Through judicious use of the available traditional cost accounting information and simple computer spreadsheets, managers may approximate the decision-guiding information that would result from the much more costly and time-consuming implementation of ABC.
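A minimal sketch of the spreadsheet-style approximation the authors describe: each indirect cost pool is spread over services in proportion to an activity driver, yielding approximate ABC unit costs without a full ABC implementation. The pools, drivers, and figures are invented for illustration.

```python
def abc_unit_costs(pools, drivers, visits):
    """Approximate activity-based unit cost per visit for each service.

    pools   -- {pool: total indirect cost of the activity pool}
    drivers -- {pool: {service: activity-driver units consumed}}
    visits  -- {service: annual number of visits}
    """
    allocated = {service: 0.0 for service in visits}
    for pool, cost in pools.items():
        total_units = sum(drivers[pool].values())
        for service, units in drivers[pool].items():
            # Each service gets the pool cost in proportion to its driver use.
            allocated[service] += cost * units / total_units
    return {service: allocated[service] / visits[service] for service in visits}

# Invented cost pools and activity drivers for two ED service levels.
pools = {"triage": 120_000, "nursing": 400_000}
drivers = {
    "triage":  {"minor": 6_000, "urgent": 4_000},     # triage assessments
    "nursing": {"minor": 50_000, "urgent": 150_000},  # nursing minutes
}
visits = {"minor": 6_000, "urgent": 4_000}
print(abc_unit_costs(pools, drivers, visits))
```

This is exactly the calculation a spreadsheet column per pool would perform; the accuracy hinges on how well each chosen driver tracks actual resource consumption.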
The Psychology of Cost Estimating
NASA Technical Reports Server (NTRS)
Price, Andy
2016-01-01
Cost estimation for large (and even not so large) government programs is a challenge. The number and magnitude of cost overruns associated with large Department of Defense (DoD) and National Aeronautics and Space Administration (NASA) programs highlight the difficulties in developing and promulgating accurate cost estimates. These overruns can be the result of inadequate technology readiness or requirements definition, the whims of politicians or government bureaucrats, or even failures of the cost estimating profession itself. However, there may be another reason for cost overruns that is right in front of us, but only recently have we begun to grasp it: the fact that cost estimators and their customers are human. The last 70+ years of research into human psychology and behavioral economics have yielded amazing insights into how we humans process and use information to make judgments and decisions. What these scientists have uncovered is surprising: humans are often irrational and illogical beings, making decisions based on factors such as emotion and perception rather than facts and data. These built-in biases in our thinking directly affect how we develop our cost estimates and how those cost estimates are used. We cost estimators can use this knowledge of biases to improve our cost estimates and also to improve how we communicate and work with our customers. By understanding how our customers think, and more importantly, why they think the way they do, we can have more productive relationships and greater influence. By using psychology to our advantage, we can more effectively help the decision maker and our organizations make fact-based decisions.
The economic burden of child sexual abuse in the United States.
Letourneau, Elizabeth J; Brown, Derek S; Fang, Xiangming; Hassan, Ahmed; Mercy, James A
2018-05-01
The present study provides an estimate of the U.S. economic impact of child sexual abuse (CSA). Costs of CSA were measured from the societal perspective and include health care costs, productivity losses, child welfare costs, violence/crime costs, special education costs, and suicide death costs. We separately estimated quality-adjusted life year (QALY) losses. For each category, we used the best available secondary data to develop cost-per-case estimates. All costs were estimated in U.S. dollars and adjusted to the reference year 2015. Based on an estimated 20 new fatal and 40,387 new substantiated nonfatal cases of CSA in 2015, the lifetime economic burden of CSA is approximately $9.3 billion; the lifetime cost of fatal CSA is on average $1,128,334 per female victim and $1,482,933 per male victim, and the average lifetime cost of nonfatal CSA is $282,734 per female victim. For male victims of nonfatal CSA, there was insufficient information on productivity losses, contributing to a lower average estimated lifetime cost of $74,691 per male victim. If we included QALYs, these costs would increase by approximately $40,000 per victim. With the exception of male productivity losses, all estimates were based on robust, replicable incidence-based costing methods. The availability of accurate, up-to-date estimates should contribute to policy analysis, facilitate comparisons with other public health problems, and support future economic evaluations of CSA-specific policy and practice. In particular, we hope the availability of credible and contemporary estimates will support increased attention to primary prevention of CSA. Copyright © 2018. Published by Elsevier Ltd.
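The burden arithmetic in the abstract is lifetime cost per case multiplied by incident cases, summed over case types. In the sketch below the per-case costs are those reported above, but the female/male split of cases is an assumed allocation chosen only to illustrate the calculation.

```python
# Per-case lifetime costs reported in the abstract (2015 US$).
cost_per_case = {
    ("nonfatal", "female"): 282_734,
    ("nonfatal", "male"):   74_691,
    ("fatal", "female"):    1_128_334,
    ("fatal", "male"):      1_482_933,
}

# Incident cases in 2015. The totals (20 fatal, 40,387 nonfatal) are from
# the abstract; the split by sex is an assumption for illustration only.
cases = {
    ("nonfatal", "female"): 30_000,
    ("nonfatal", "male"):   10_387,
    ("fatal", "female"):    10,
    ("fatal", "male"):      10,
}

burden = sum(cases[k] * cost_per_case[k] for k in cases)
print(f"lifetime economic burden: US${burden / 1e9:.2f}bn")
```

With this assumed split the total lands near the $9.3 billion figure in the abstract; a different split would shift it, since female and male nonfatal costs differ by a factor of almost four.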
Menon, Purnima; McDonald, Christine M; Chakrabarti, Suman
2016-05-01
India's national nutrition and health programmes are largely designed to provide evidence-based nutrition-specific interventions, but intervention coverage is low due to a combination of implementation challenges, capacity and financing gaps. Global cost estimates for nutrition are available but national and subnational costs are not. We estimated national and subnational costs of delivering recommended nutrition-specific interventions using the Scaling Up Nutrition (SUN) costing approach. We compared costs of delivering the SUN interventions at 100% scale with those of nationally recommended interventions. Target populations (TP) for interventions were estimated using national population and nutrition data. Unit costs (UC) were derived from programmatic data. The cost of delivering an intervention at 100% coverage was calculated as (UC*projected TP). Cost estimates varied; estimates for SUN interventions were lower than estimates for nationally recommended interventions because of differences in choice of intervention, target group or unit cost. US$5.9bn/year are required to deliver a set of nationally recommended nutrition interventions at scale in India, while US$4.2bn are required for the SUN interventions. Cash transfers (49%) and food supplements (40%) contribute most to costs of nationally recommended interventions, while food supplements to prevent and treat malnutrition contribute most to the SUN costs. We conclude that although such costing is useful to generate broad estimates, there is an urgent need for further costing studies on the true unit costs of the delivery of nutrition-specific interventions in different local contexts to be able to project accurate national and subnational budgets for nutrition in India. © 2016 The Authors. Maternal & Child Nutrition published by John Wiley & Sons Ltd.
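The costing identity stated in the abstract, cost at full coverage = UC × projected TP, can be sketched directly. The interventions, unit costs, and target populations below are illustrative assumptions, not figures from the study.

```python
def scale_up_cost(unit_cost, target_population, coverage=1.0):
    """Cost of delivering one intervention: UC * projected TP * coverage."""
    return unit_cost * target_population * coverage

# Hypothetical interventions with invented unit costs (US$) and target
# populations; none of these figures come from the study above.
interventions = {
    "micronutrient_supplementation": (1.20, 110_000_000),
    "iron_folic_acid":               (2.50, 27_000_000),
    "food_supplements":              (48.00, 40_000_000),
}

total = sum(scale_up_cost(uc, tp) for uc, tp in interventions.values())
print(f"total annual cost at full coverage: US${total / 1e9:.2f}bn")
```

The structure makes the abstract's conclusion concrete: the total is dominated by whichever interventions pair a high unit cost with a large target population, which is why unit-cost accuracy matters so much for national budgets.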
Remote sensing support for national forest inventories
Ronald E. McRoberts; Erkki O. Tomppo
2007-01-01
National forest inventory programs are tasked to produce timely and accurate estimates for a wide range of forest resource variables for a variety of users and applications. Time, cost, and precision constraints cause these programs to seek technological innovations that contribute to measurement and estimation efficiencies and that facilitate the production and...
Animal board invited review: Dairy cow lameness expenditures, losses and total cost.
Dolecheck, K; Bewley, J
2018-07-01
Lameness is one of the most costly dairy cow diseases, yet adoption of lameness prevention strategies remains low. Low lameness prevention adoption might be attributable to a lack of understanding regarding total lameness costs. In this review, we evaluated the contribution of different expenditures and losses to total lameness costs. Evaluated expenditures included labor for treatment, therapeutic supplies, lameness detection and lameness control and prevention. Evaluated losses included non-saleable milk, reduced milk production, reduced reproductive performance, increased animal death, increased animal culling, disease interrelationships, lameness recurrence and reduced animal welfare. The previous literature on total lameness cost estimates was also summarized. The reviewed studies indicated that previous estimates of total lameness costs are variable and inconsistent in the expenditures and losses they include. Many of the identified expenditure and loss categories require further research to accurately include in total lameness cost estimates. Future research should focus on identifying costs associated with specific lameness conditions, differing lameness severity levels, and differing stages of lactation at onset of lameness to provide better total lameness cost estimates that can be useful for decision making at both the herd and individual cow level.
Compile-time estimation of communication costs in multicomputers
NASA Technical Reports Server (NTRS)
Gupta, Manish; Banerjee, Prithviraj
1991-01-01
An important problem facing numerous research projects on parallelizing compilers for distributed memory machines is that of automatically determining a suitable data partitioning scheme for a program. Any strategy for automatic data partitioning needs a mechanism for estimating the performance of a program under a given partitioning scheme, the most crucial part of which involves determining the communication costs incurred by the program. A methodology is described for estimating the communication costs at compile time as functions of the numbers of processors over which various arrays are distributed. A strategy is described, along with its theoretical basis, for making program transformations that expose opportunities for combining messages, leading to considerable savings in communication costs. For certain loops with regular dependences, the compiler can detect the possibility of pipelining, and thus estimate communication costs more accurately than it could otherwise. These results are of great significance to any parallelization system supporting numeric applications on multicomputers. In particular, they lay down a framework for effective synthesis of communication on multicomputers from sequential program references.
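A minimal sketch of the kind of compile-time cost model the abstract describes, under the common assumption that each message costs a fixed startup latency plus a per-element transfer cost; the constants are hypothetical and this is not the paper's actual model. It shows why combining many small messages into one yields large savings: the startup term collapses.

```python
# Hedged sketch (hypothetical constants, not the paper's model): the usual
# linear communication cost model on a multicomputer is
#   cost = num_messages * startup + total_elements * per_element,
# so message combining reduces the startup term.

STARTUP = 100.0      # hypothetical per-message startup latency (cycles)
PER_ELEMENT = 1.0    # hypothetical per-element transfer cost (cycles)

def comm_cost(num_messages, total_elements):
    return num_messages * STARTUP + total_elements * PER_ELEMENT

# 1000 elements sent as 1000 single-element messages vs. one combined message:
uncombined = comm_cost(1000, 1000)  # 1000 startups + 1000 elements
combined = comm_cost(1, 1000)       # 1 startup + 1000 elements
print(uncombined, combined)  # 101000.0 1100.0
```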
Time-driven Activity-based Costing More Accurately Reflects Costs in Arthroplasty Surgery.
Akhavan, Sina; Ward, Lorrayne; Bozic, Kevin J
2016-01-01
Cost estimates derived from traditional hospital cost accounting systems have inherent limitations that restrict their usefulness for measuring process and quality improvement. Newer approaches such as time-driven activity-based costing (TDABC) may offer more precise estimates of true cost, but to our knowledge, the differences between TDABC and more traditional approaches have not been explored systematically in arthroplasty surgery. The purposes of this study were to compare the costs associated with (1) primary total hip arthroplasty (THA); (2) primary total knee arthroplasty (TKA); and (3) three surgeons performing these total joint arthroplasties (TJAs) as measured using TDABC versus traditional hospital accounting (TA). Process maps were developed for each phase of care (preoperative, intraoperative, and postoperative) for patients undergoing primary TJA performed by one of three surgeons at a tertiary care medical center. Personnel costs for each phase of care were measured using TDABC based on fully loaded labor rates, including physician compensation. Costs associated with consumables (including implants) were calculated based on direct purchase price. Total costs for 677 primary TJAs were aggregated over 17 months (January 2012 to May 2013) and organized into cost categories (room and board, implant, operating room services, drugs, supplies, other services). Costs derived using TDABC, based on actual time and intensity of resources used, were compared with costs derived using TA techniques based on activity-based costing and indirect costs calculated as a percentage of direct costs from the hospital decision support system. Substantial differences between cost estimates using TDABC and TA were found for primary THA (USD 12,982 TDABC versus USD 23,915 TA), primary TKA (USD 13,661 TDABC versus USD 24,796 TA), and individually across all three surgeons for both (THA: TDABC = 49%-55% of TA total cost; TKA: TDABC = 53%-55% of TA total cost).
Cost categories with the most variability between TA and TDABC estimates were operating room services and room and board. Traditional hospital cost accounting systems overestimate the costs associated with many surgical procedures, including primary TJA. TDABC provides a more accurate measure of true resource use associated with TJAs and can be used to identify high-cost/high-variability processes that can be targeted for process/quality improvement. Level III, therapeutic study.
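The TDABC idea described in this record (cost each phase of care as staff time multiplied by a capacity cost rate, then add consumables at direct purchase price) can be sketched as below; all minutes, rates, and prices are hypothetical, not the study's data.

```python
# Minimal TDABC sketch with hypothetical numbers (not the study's data):
# each phase of care consumes staff time, costed at a capacity cost rate
# (fully loaded labor cost per minute); consumables enter at purchase price.

def tdabc_cost(phases, consumables):
    """phases: list of (minutes, cost_rate_per_minute); consumables: list of prices."""
    personnel = sum(minutes * rate for minutes, rate in phases)
    return personnel + sum(consumables)

# Hypothetical THA episode: preop, intraop, postop as (minutes, $/minute)
phases = [(30, 2.0), (120, 8.0), (60, 1.5)]
implant_and_supplies = [5000.0, 400.0]
print(tdabc_cost(phases, implant_and_supplies))  # 6510.0
```

Because costs are built bottom-up from measured time, high-cost or high-variability phases are directly visible, which is what makes TDABC useful for process improvement.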
Estimation of Temporal Gait Parameters Using a Human Body Electrostatic Sensing-Based Method.
Li, Mengxuan; Li, Pengfei; Tian, Shanshan; Tang, Kai; Chen, Xi
2018-05-28
Accurate estimation of gait parameters is essential for obtaining quantitative information on motor deficits in Parkinson's disease and other neurodegenerative diseases, which helps determine disease progression and therapeutic interventions. Due to the demand for high accuracy, unobtrusive measurement methods such as optical motion capture systems, foot pressure plates, and other systems have been commonly used in clinical environments. However, the high cost of existing lab-based methods greatly hinders their wider usage, especially in developing countries. In this study, we present a low-cost, noncontact, and accurate temporal gait parameter estimation method based on sensing and analyzing the electrostatic field generated by human foot stepping. The proposed method achieved an average 97% accuracy on gait phase detection and was further validated by comparison to a foot pressure system in 10 healthy subjects. The two results were compared using the Pearson coefficient r and showed excellent consistency (r = 0.99, p < 0.05). The repeatability of the proposed method was calculated between days using intraclass correlation coefficients (ICC), and showed good test-retest reliability (ICC = 0.87, p < 0.01). The proposed method could be an affordable and accurate tool to measure temporal gait parameters in hospital laboratories and in patients' home environments.
Development of a practical costing method for hospitals.
Cao, Pengyu; Toyabe, Shin-Ichi; Akazawa, Kouhei
2006-03-01
To realize effective cost control, a practical and accurate cost accounting system is indispensable in hospitals. In traditional cost accounting systems, volume-based costing (VBC) is the most popular method. In this method, the indirect costs are allocated to each cost object (services or units of a hospital) using a single indicator named a cost driver (e.g., labor hours, revenues, or the number of patients). However, this method often produces rough and inaccurate results. The activity-based costing (ABC) method, introduced in the mid-1990s, can provide more accurate results. With the ABC method, all events or transactions that cause costs are recognized as "activities", and a specific cost driver is prepared for each activity. Finally, the costs of activities are allocated to cost objects by the corresponding cost driver. However, it is much more complex and costly than other traditional cost accounting methods because the data collection for cost drivers is not always easy. In this study, we developed a simplified ABC (S-ABC) costing method to reduce the workload of ABC costing by reducing the number of cost drivers used in the ABC method. Using the S-ABC method, we estimated the cost of laboratory tests and obtained results similar in accuracy to those of the ABC method (largest difference: 2.64%). At the same time, this new method reduces the seven cost drivers used in the ABC method to four. Moreover, we performed an evaluation using other sample data from a physiological laboratory department to verify the effectiveness of the new method. In conclusion, the S-ABC method provides two advantages in comparison to the VBC and ABC methods: (1) it can obtain accurate results, and (2) it is simpler to perform. Once we reduce the number of cost drivers by applying the proposed S-ABC method to the data for the ABC method, we can easily perform the cost accounting using few cost drivers after the second round of costing.
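The core ABC allocation step, which both ABC and the proposed S-ABC method rely on (distribute each activity's cost pool across cost objects in proportion to their cost-driver consumption), might look like this; the activity, driver, and figures are hypothetical.

```python
# Sketch of activity-based cost allocation (hypothetical activity, driver,
# and figures, not the paper's data): each activity's cost pool is spread
# over cost objects in proportion to the driver units they consume.

def allocate(activity_cost, driver_units_by_object):
    """Allocate one activity's cost pool proportionally to driver consumption."""
    total_units = sum(driver_units_by_object.values())
    return {obj: activity_cost * units / total_units
            for obj, units in driver_units_by_object.items()}

# Activity "specimen handling", driver = number of specimens processed
alloc = allocate(10_000.0, {"test_A": 300, "test_B": 200})
print(alloc)  # {'test_A': 6000.0, 'test_B': 4000.0}
```

S-ABC simply performs fewer of these allocations by merging activities that can share a driver, which is where the workload saving comes from.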
The Joint Confidence Level Paradox: A History of Denial
NASA Technical Reports Server (NTRS)
Butts, Glenn; Linton, Kent
2009-01-01
This paper is intended to provide a reliable methodology for those tasked with generating price tags on construction of facilities (CoF) and research and development (R&D) activities in the NASA performance world. This document consists of a collection of cost-related engineering detail and project fulfillment information from early agency days to the present. Accurate historical detail is the first place to start when determining improved methodologies for future cost and schedule estimating. This paper contains a beneficial proposed cost estimating method for arriving at more reliable numbers for future submissions. When comparing current cost and schedule methods with earlier cost and schedule approaches, it became apparent that NASA's organizational performance paradigm has morphed. Mission fulfillment speed has slowed and cost-calculating factors have increased in 21st-century space exploration.
Ruger, Jennifer Prah; Emmons, Karen M; Kearney, Margaret H; Weinstein, Milton C
2009-01-01
Background Economic theory provides the philosophical foundation for valuing costs in judging medical and public health interventions. When evaluating smoking cessation interventions, accurate data on costs are essential for understanding resource consumption. Smoking cessation interventions, for which prior data on resource costs are typically not available, present special challenges. We develop a micro-costing methodology for estimating the real resource costs of outreach motivational interviewing (MI) for smoking cessation and relapse prevention among low-income pregnant women and report results from a randomized controlled trial (RCT) employing the methodology. Methodological standards in cost analysis are necessary for comparison and uniformity in analysis across interventions. Estimating the costs of outreach programs is critical for understanding the economics of reaching underserved and hard-to-reach populations. Methods Randomized controlled trial (1997-2000) collecting primary cost data for the intervention. A sample of 302 low-income pregnant women was recruited from multiple obstetrical sites in the Boston metropolitan area. MI delivered by outreach health nurses vs. usual care (UC), with economic costs as the main outcome measures. Results The total cost of the MI intervention for 156 participants was $48,672, or $312 per participant. The total cost of $311.80 per participant for the MI intervention compared with a cost of $4.82 per participant for usual care, a difference of $307 (CI: $289.20 to $322.80). The total fixed costs of the MI were $3,930 and the total variable costs of the MI were $44,710. The total expected program cost for delivering MI to 500 participants would be $147,430, assuming no economies of scale in program delivery. The main cost components of outreach MI were intervention delivery, travel time, scheduling, and training.
Conclusion Grounded in economic theory, this methodology systematically identifies and measures resource utilization, using a process tracking system and calculates both component-specific and total costs of outreach MI. The methodology could help improve collection of accurate data on costs and estimates of the real resource costs of interventions alongside clinical trials and improve the validity and reliability of estimates of resource costs for interventions targeted at underserved and hard-to-reach populations. PMID:19775455
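The per-participant arithmetic reported in this record can be reproduced directly. Note that projecting to 500 participants from the exact fixed/variable split yields a figure slightly below the abstract's $147,430, most likely because the published projection used a rounded variable cost per participant.

```python
# Worked arithmetic from the abstract's reported totals: per-participant
# cost, and a projection to 500 participants assuming no economies of scale
# (fixed costs stay flat; variable costs scale with participants).

total_cost = 48_672.0     # total MI intervention cost, 156 participants
participants = 156
fixed = 3_930.0           # total fixed costs
variable = 44_710.0       # total variable costs

per_participant = total_cost / participants          # 312.0
variable_per_participant = variable / participants   # ~286.60

projected_500 = fixed + variable_per_participant * 500
print(per_participant, round(projected_500))  # 312.0 147231
```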
SAMICS Validation. SAMICS Support Study, Phase 3
NASA Technical Reports Server (NTRS)
1979-01-01
SAMICS provides a consistent basis for estimating array costs and compares production technology costs. A review and a validation of the SAMICS model are reported. The review had the following purposes: (1) to test the computational validity of the computer model by comparison with preliminary hand calculations based on conventional cost estimating techniques; (2) to review and improve the accuracy of the cost relationships being used by the model; and (3) to provide an independent verification to users of the model's value in decision making for allocation of research and development funds and for investment in manufacturing capacity. It is concluded that the SAMICS model is a flexible, accurate, and useful tool for managerial decision making.
Use of the DISST Model to Estimate the HOMA and Matsuda Indexes Using Only a Basal Insulin Assay
Docherty, Paul D.; Chase, J. Geoffrey
2014-01-01
Background: It is hypothesized that early detection of reduced insulin sensitivity (SI) could prompt intervention that may reduce the considerable financial strain type 2 diabetes mellitus (T2DM) places on global health care. Reduction of the cost of already inexpensive SI metrics such as the Matsuda and HOMA indexes would enable more widespread, economically feasible use of these metrics for screening. The goal of this research was to determine a means of reducing the number of insulin samples, and therefore the cost, required to provide an accurate Matsuda Index value. Method: The Dynamic Insulin Sensitivity and Secretion Test (DISST) model was used with the glucose and basal insulin measurements from an Oral Glucose Tolerance Test (OGTT) to predict patient insulin responses. The insulin response to the OGTT was determined via population-based regression analysis that incorporated the 60-minute glucose and basal insulin values. Results: The proposed method derived accurate and precise Matsuda Indices as compared to the fully sampled Matsuda (R = .95) using only the basal assay insulin-level data and 4 glucose measurements. Using a model employing the basal insulin also allows for determination of the 1-day HOMA value. Conclusion: The DISST model was successfully modified to allow for accurate prediction of an individual’s insulin response to the OGTT. In turn, this enabled highly accurate and precise estimation of a Matsuda Index using only the glucose and basal insulin assays. As insulin assays account for the majority of the cost of the Matsuda Index, this model offers a significant reduction in assay cost. PMID:24876431
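For context, the standard Matsuda index formula that the predicted insulin values feed into can be computed as below; this is the textbook formula, not code from the study, and the OGTT values are hypothetical.

```python
import math

# The standard Matsuda index (glucose in mg/dL, insulin in uU/mL):
#   10000 / sqrt(G0 * I0 * mean(G) * mean(I))
# where G0/I0 are fasting values and the means are taken over the OGTT.
# The study's contribution is predicting the insulin samples from a model;
# here we only compute the index itself from (possibly predicted) values.

def matsuda(g0, i0, g_mean, i_mean):
    return 10_000 / math.sqrt(g0 * i0 * g_mean * i_mean)

# Hypothetical OGTT values: fasting glucose 90, fasting insulin 8,
# mean glucose 120, mean insulin 50
print(round(matsuda(90, 8, 120, 50), 2))  # 4.81
```

Since every insulin term in the formula normally requires an assay, replacing all but the basal insulin with model predictions is what drives the cost reduction the abstract reports.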
Lloyd-Smith, Patrick
2017-12-01
Decisions regarding the optimal provision of infection prevention and control resources depend on accurate estimates of the attributable costs of health care-associated infections. This is challenging given the skewed nature of health care cost data and the endogeneity of health care-associated infections. The objective of this study is to determine the hospital costs attributable to vancomycin-resistant enterococci (VRE) while accounting for endogeneity. This study builds on an attributable cost model from a retrospective cohort study including 1,292 patients admitted to an urban hospital in Vancouver, Canada. Attributable hospital costs were estimated with multivariate generalized linear models (GLMs). To account for endogeneity, a control function approach was used. The analysis sample included 217 patients with health care-associated VRE. In the standard GLM, the costs attributable to VRE are $17,949 (SEM, $2,993). However, accounting for endogeneity, the attributable costs were estimated to range from $14,706 (SEM, $7,612) to $42,101 (SEM, $15,533). Across all model specifications, attributable costs are 76% higher on average when controlling for endogeneity. VRE was independently associated with increased hospital costs, and controlling for endogeneity led to higher attributable cost estimates. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
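The control-function idea can be illustrated with a deliberately simplified linear toy model (the study itself used GLMs, and everything below is synthetic data, not the VRE cohort): stage 1 regresses the endogenous exposure on an instrument, and stage 2 adds the stage-1 residual to the cost regression so the exposure coefficient is purged of the confounded component.

```python
import numpy as np

# Toy control-function sketch on synthetic data (not the study's model or
# data). True causal effect of the exposure on cost is 2.0, but an
# unobserved confounder u inflates the naive estimate.

rng = np.random.default_rng(0)
n = 5_000
z = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # unobserved confounder
exposure = 0.8 * z + u + rng.normal(size=n)  # endogenous exposure
cost = 2.0 * exposure + 3.0 * u + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: regress exposure on instrument, keep residual
X1 = np.column_stack([np.ones(n), z])
resid = exposure - X1 @ ols(X1, exposure)

# Stage 2: include the residual as a control function
X2 = np.column_stack([np.ones(n), exposure, resid])
beta = ols(X2, cost)
print(round(beta[1], 2))  # close to the true effect of 2.0
```

Without the residual term, the coefficient on `exposure` would absorb part of the confounder's effect and be biased upward, mirroring the abstract's point that naive and endogeneity-corrected attributable costs differ.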
Spacecraft Complexity Subfactors and Implications on Future Cost Growth
NASA Technical Reports Server (NTRS)
Leising, Charles J.; Wessen, Randii; Ellyin, Ray; Rosenberg, Leigh; Leising, Adam
2013-01-01
During the last ten years the Jet Propulsion Laboratory has used a set of cost-risk subfactors to independently estimate the magnitude of development risks that may not be covered in the high level cost models employed during early concept development. Within the last several years the Laboratory has also developed a scale of Concept Maturity Levels with associated criteria to quantitatively assess a concept's maturity. This latter effort has been helpful in determining whether a concept is mature enough for accurate costing but it does not provide any quantitative estimate of cost risk. Unfortunately today's missions are significantly more complex than when the original cost-risk subfactors were first formulated. Risks associated with complex missions are not being adequately evaluated and future cost growth is being underestimated. The risk subfactor process needed to be updated.
Chambert, Thierry A.; Waddle, J. Hardin; Miller, David A.W.; Walls, Susan; Nichols, James D.
2018-01-01
The development and use of automated species-detection technologies, such as acoustic recorders, for monitoring wildlife are rapidly expanding. Automated classification algorithms provide a cost- and time-effective means to process information-rich data, but often at the cost of additional detection errors. Appropriate methods are necessary to analyse such data while dealing with the different types of detection errors. We developed a hierarchical modelling framework for estimating species occupancy from automated species-detection data. We explore design and optimization of data post-processing procedures to account for detection errors and generate accurate estimates. Our proposed method accounts for both imperfect detection and false positive errors and utilizes information about both occurrence and abundance of detections to improve estimation. Using simulations, we show that our method provides much more accurate estimates than models ignoring the abundance of detections. The same findings are reached when we apply the methods to two real datasets on North American frogs surveyed with acoustic recorders. When false positives occur, estimator accuracy can be improved when a subset of detections produced by the classification algorithm is post-validated by a human observer. We use simulations to investigate the relationship between accuracy and effort spent on post-validation, and found that very accurate occupancy estimates can be obtained with as little as 1% of data being validated. Automated monitoring of wildlife provides opportunity and challenges. Our methods for analysing automated species-detection data help to meet key challenges unique to these data and will prove useful for many wildlife monitoring programs.
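A toy simulation (not the authors' hierarchical model; all rates are hypothetical) of why false positives matter: the naive proportion of sites with at least one detection overestimates true occupancy when detectors occasionally fire at unoccupied sites.

```python
import random

# Toy simulation, hypothetical rates: with automated detectors that
# produce false positives, the naive proportion of sites with >=1
# detection overestimates true occupancy (psi).

random.seed(1)
n_sites, n_visits = 1000, 5
psi, p_det, p_fp = 0.4, 0.5, 0.05  # occupancy, true detection, false-positive rates

naive_occupied = 0
for _ in range(n_sites):
    occupied = random.random() < psi
    detected = any(
        random.random() < (p_det if occupied else p_fp) for _ in range(n_visits)
    )
    naive_occupied += detected

print(naive_occupied / n_sites)  # noticeably above the true psi = 0.4
```

Correcting this bias, ideally using the abundance of detections and a small post-validated subset, is exactly what the authors' framework addresses.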
The paper describes a methodology developed to estimate emissions factors for a variety of different area sources in a rapid, accurate, and cost effective manner. he methodology involves using an open-path Fourier transform infrared (FTIR) spectrometer to measure concentrations o...
Adverse drug reactions in Germany: direct costs of internal medicine hospitalizations.
Rottenkolber, Dominik; Schmiedl, Sven; Rottenkolber, Marietta; Farker, Katrin; Saljé, Karen; Mueller, Silke; Hippius, Marion; Thuermann, Petra A; Hasford, Joerg
2011-06-01
German hospital reimbursement modalities changed as a result of the introduction of Diagnosis Related Groups (DRG) in 2004. Therefore, no data on the direct costs of adverse drug reactions (ADRs) resulting in admissions to departments of internal medicine are available. The objective was to quantify the ADR-related economic burden (direct costs) of hospitalizations in internal medicine wards in Germany. Record-based study analyzing the patient records of about 57,000 hospitalizations between 2006 and 2007 of the Net of Regional Pharmacovigilance Centers (Germany). All ADRs were evaluated by a team of experts in pharmacovigilance for severity, causality, and preventability. The calculation of accurate person-related costs for ADRs relied on the German DRG system (G-DRG 2009). Descriptive and bootstrap statistical methods were applied for data analysis. The incidence of hospitalization due to at least 'possible' serious outpatient ADRs was estimated to be approximately 3.25%. Mean age of the 1834 patients was 71.0 years (SD 14.7). Most frequent ADRs were gastrointestinal hemorrhage (n = 336) and drug-induced hypoglycemia (n = 270). Average inpatient length-of-stay was 9.3 days (SD 7.1). Average treatment costs of a single ADR were estimated to be approximately €2250. The total costs sum to €434 million per year for Germany. Considering the proportion of preventable cases (20.1%), this equals a saving potential of €87 million per year. Preventing ADRs is advisable in order to realize significant nationwide savings potential. Our cost estimates provide a reliable benchmark as they were calculated based on an intensified ADR surveillance and an accurate person-related cost application. Copyright © 2011 John Wiley & Sons, Ltd.
Costs Climb on Materials for Schools: Construction Projects Delayed, Scrapped
ERIC Educational Resources Information Center
Sack, Joetta L.
2004-01-01
The rapidly rising cost of steel and other construction materials is forcing some districts that are building new schools to scramble for more money, delay work, or redesign projects. Nationwide, contractors and architects are finding it harder to give accurate estimates on projects, and some have even had to renegotiate contracts with districts.…
Space Station Facility government estimating
NASA Technical Reports Server (NTRS)
Brown, Joseph A.
1993-01-01
This new, unique Cost Engineering Report introduces the 800-page, C-100 government estimate for the Space Station Processing Facility (SSPF) and the Volume IV Aerospace Construction Price Book. At the January 23, 1991, bid opening for the SSPF, the government cost estimate was right on target: the low bid, from prime contractor Metric, Inc., was 1.2 percent below the government estimate. This project contains many different and complex systems. Volume IV is a summary of the costs associated with construction, activation, and Ground Support Equipment (GSE) design, estimating, fabrication, installation, testing, termination, and verification for this project. Included are 13 reasons the government estimate was so accurate; an abstract of bids for 8 bidders and the government estimate with additive alternates, special labor and materials, budget comparison, and system summaries; and comments on the energy credit from the local electrical utility. This report adds another project to our continuing study of 'How Does the Low Bidder Get Low and Make Money?', which was started in 1967 and first published in the 1973 AACE Transactions with 18 ways the low bidders get low. The accuracy of this estimate proves the benefits of our Kennedy Space Center (KSC) teamwork efforts and KSC Cost Engineer Tools, which are contributing toward our goals for the Space Station.
The economic burden of cancer care in Canada: a population-based cost study
de Oliveira, Claire; Weir, Sharada; Rangrej, Jagadish; Krahn, Murray D.; Mittmann, Nicole; Hoch, Jeffrey S.; Chan, Kelvin K.W.; Peacock, Stuart
2018-01-01
Background: Resource and cost issues are a growing concern in health care. Thus, it is important to have an accurate estimate of the economic burden of care. Previous work has estimated the economic burden of cancer care for Canada; however, there is some concern this estimate is too low. The objective of this analysis was to provide a comprehensive revised estimate of this burden. Methods: We used a case-control prevalence-based approach to estimate direct annual cancer costs from 2005 to 2012. We used patient-level administrative health care data from Ontario to correctly attribute health care costs to cancer. We employed the net cost method (cost difference between patients with cancer and control subjects without cancer) to account for costs directly and indirectly related to cancer and its sequelae. Using average patient-level cost estimates from Ontario, we applied proportions from national health expenditures data to obtain the economic burden of cancer care for Canada. All costs were adjusted to 2015 Canadian dollars. Results: Costs of cancer care rose steadily over our analysis period, from $2.9 billion in 2005 to $7.5 billion in 2012, mostly owing to the increase in costs of hospital-based care. Most expenditures for health care services increased over time, with chemotherapy and radiation therapy expenditures accounting for the largest increases over the study period. Our cost estimates were larger than those in the Economic Burden of Illness in Canada 2005-2008 report for every year except 2005 and 2006. Interpretation: The economic burden of cancer care in Canada is substantial. Further research is needed to understand how the economic burden of cancer compares to that of other diseases. PMID:29301745
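The net cost method described above reduces to a simple difference of mean costs between cases and matched controls; the per-patient figures below are hypothetical.

```python
import statistics

# Minimal sketch of the net-cost method (hypothetical figures, not the
# study's data): the cost attributable to cancer is the difference between
# mean annual costs of cancer patients and of matched controls without cancer.

cancer_costs = [42_000, 35_500, 58_000, 29_000]   # per-patient annual costs
control_costs = [9_000, 7_500, 12_000, 8_500]     # matched controls

net_cost = statistics.mean(cancer_costs) - statistics.mean(control_costs)
print(net_cost)  # 31875
```

Subtracting control costs is what lets the method capture costs indirectly related to cancer and its sequelae, rather than only services explicitly coded as cancer care.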
Zaloshnja, Eduard; Miller, Ted; Romano, Eduardo; Spicer, Rebecca
2004-05-01
This paper presents costs per US motor vehicle crash victim differentiated into many more diagnostic categories than prior estimates. These unit costs, the first keyed to the 1990 edition of the Abbreviated Injury Scale (AIS) threat-to-life severity scores, are reported by body part, whether a fracture/dislocation was involved, and the maximum AIS score among the victim's injuries. This level of detail allows for a more accurate estimation of the social costs of motor vehicle crashes. It also allows for reliable analyses of interventions targeting narrow ranges of injuries. The paper updates the medical care data underlying the US crash costs from 1979-1986 to the mid-1990s and improves on prior productivity cost estimates. In addition to presenting the latest generation of crash victim costs, this paper analyzes the effects of applying injury costs classified by AIS code from the 1985 edition to injury incidence data coded with the 1990 edition of AIS. This long-standing practice results in inaccurate cost-benefit analyses that typically overestimate benefits. This problem is more acute when old published costs adjusted for inflation are used rather than the recent costs.
Cost of chronic disease in California: estimates at the county level.
Brown, Paul M; Gonzalez, Mariaelena; Dhaul, Ritem Sandhu
2015-01-01
An estimated 39% of people in California suffer from at least one chronic condition or disease. While the increased coverage provided by the Affordable Care Act will result in greater access to primary health care, coordinated strategies are needed to prevent chronic conditions. To identify cost-effective strategies, local health departments and other agencies need accurate information on the costs of chronic conditions in their region. The objective was to present a methodology for estimating the cost of chronic conditions for counties. Estimates of the attributable cost of 6 chronic conditions (arthritis, asthma, cancer, cardiovascular disease, diabetes, and depression) from the Centers for Disease Control and Prevention's Chronic Disease Cost Calculator were combined with prevalence rates from various sources and census data for California counties to estimate the number of cases and costs of each condition. The estimates were adjusted for differences in prices using Medicare geographical adjusters. An estimated $98 billion is currently spent on treating chronic conditions in California. There is significant variation between counties in the percentage of total health care expenditure due to chronic conditions, ranging from a low of 32% to a high of 63% depending on county size. The variations between counties result from differing rates of chronic conditions across age, ethnicity, and gender. Information on the cost of chronic conditions is important for planning prevention and control efforts. This study demonstrates a method for providing local health departments with estimates of the scope of the problems in their region. Combining the cost estimates with information on current prevention strategies can identify gaps in prevention activities and the prevention measures that promise the greatest return on investment for each county.
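The county costing recipe sketched in this record (cases = prevalence × population; cost = cases × attributable cost per case, scaled by a geographic price adjuster) is simple arithmetic; the county figures below are hypothetical.

```python
# Sketch of the county-level costing approach with hypothetical inputs:
# cases = prevalence rate * population, and
# cost = cases * attributable cost per case * geographic price adjuster.

def county_cost(population, prevalence, cost_per_case, price_adjuster=1.0):
    cases = population * prevalence
    return cases * cost_per_case * price_adjuster

# Hypothetical county: 500k people, 9% diabetes prevalence, $8,000 per case,
# Medicare-style geographic adjuster of 1.05
print(f"${county_cost(500_000, 0.09, 8_000, 1.05):,.0f}")  # $378,000,000
```

Summing this over conditions and counties is what produces statewide totals such as the $98 billion figure.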
Cost estimation model for advanced planetary programs, fourth edition
NASA Technical Reports Server (NTRS)
Spadoni, D. J.
1983-01-01
The development of the planetary program cost model is discussed. The model was updated to incorporate cost data from the most recent US planetary flight projects and extensively revised to capture the information in the historical cost data base more accurately. This data base comprises the historical cost data for 13 unmanned lunar and planetary flight programs. The revision was made with a twofold objective: to increase the flexibility of the model in dealing with the broad scope of scenarios under consideration for future missions, and to maintain and possibly improve confidence in the model's capabilities, with an expected accuracy of 20%. Model development included a labor/cost proxy analysis, selection of the functional forms of the estimating relationships, and test statistics. An analysis of the model is discussed and two sample applications of the cost model are presented.
Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul
2015-01-01
In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821
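The transect (mark-re-sight) coverage estimate used in this study is, at its core, the proportion of marked (collared) dogs among all dogs sighted; the counts below are hypothetical, and per the study's finding, puppies should be included in both counts.

```python
# Sketch of the mark-re-sight coverage estimate (hypothetical counts):
# vaccinated dogs are marked (e.g., collared), and coverage is estimated
# as the proportion of marked dogs among all dogs sighted on transects.
# Per the study, puppies must be counted in both numerator and denominator.

def coverage(marked_seen, total_seen):
    return marked_seen / total_seen

print(round(coverage(130, 200), 2))  # 0.65
```

An estimate like 0.65 would fall below the 70% threshold the study identifies as necessary for rabies elimination, which is the kind of conclusion the method is used to draw.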
A Cost Analysis of Colonoscopy using Microcosting and Time-and-motion Techniques
Ness, Reid M.; Stiles, Renée A.; Shintani, Ayumi K.; Dittus, Robert S.
2007-01-01
Background The cost of an individual colonoscopy is an important determinant of the overall cost and cost-effectiveness of colorectal cancer screening. Published cost estimates vary widely and typically report institutional costs derived from gross-costing methods. Objective To perform a cost analysis of colonoscopy using micro-costing and time-and-motion techniques to determine the total societal cost of colonoscopy, which includes direct health care costs as well as direct non-health care costs and costs related to patients' time. The design was a prospective cohort. The participants were 276 contacted, eligible patients who underwent colonoscopy between July 2001 and June 2002 at either a Veterans' Affairs Medical Center or a University Hospital in the southeastern United States. Major results The median direct health care cost for colonoscopy was $379 (25th, 75th percentiles: $343, $433). The median direct non-health care and patient time costs were $226 ($187, $323) and $274 ($186, $368), respectively. The median total societal cost of colonoscopy was $923 ($805, $1047). The median direct health care, direct non-health care, patient time, and total costs at the VA were $391, $288, $274, and $958, respectively; the analogous costs at the University Hospital were $376, $189, $368, and $905. Conclusion Micro-costing techniques and time-and-motion studies can produce accurate, detailed cost estimates for complex medical interventions. Cost estimates that inform health policy decisions or cost-effectiveness analyses should use total costs from the societal perspective. Societal cost estimates, which include patient and caregiver time costs, may affect colonoscopy screening rates. PMID:17665271
2016-10-01
Dementia prevalence estimates vary among population-based studies, depending on the definitions of dementia, the methodologies and data sources, and the types of costs they use. A common approach is needed to avoid confusion and increase public and stakeholder confidence in the estimates. Since 1994, five major studies have yielded widely differing estimates of dementia prevalence and the monetary costs of dementia in Canada. These studies variously estimated the prevalence of dementia for the year 2011 as low as 340,170 and as high as 747,000. The main reason for this difference was that mild cognitive impairment (MCI) was not consistently included in the projections. The estimated monetary costs of dementia for the same year also varied, from $910 million to $33 billion. This discrepancy is largely due to three factors: (1) the lack of agreed-upon methods for estimating financial costs; (2) the unavailability of prevalence estimates for the various stages of dementia (mild, moderate and severe), which directly affect the amount of money spent; and (3) the absence of tools to measure direct, indirect and intangible costs more accurately. Given the increasing challenges of dementia in Canada and around the globe, reconciling these differences is critical for developing standards to generate reliable information for public consumption and to shape public policy and service development.
Accurate position estimation methods based on electrical impedance tomography measurements
NASA Astrophysics Data System (ADS)
Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.
2017-08-01
Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost, and operation free of radiation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work, which proposes optimization-based and data-driven approaches for estimating it. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function, and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and the SNR, than the data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones.
The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object's position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations, it is possible to use them in real-time applications without requiring high-performance computers.
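The data-driven idea above can be sketched in a few lines: fit a linear map from boundary measurements to the anomaly's position by ordinary least squares. The two-channel "voltage" features and positions below are synthetic placeholders, not the authors' experimental setup.

```python
# Minimal sketch of a data-driven linear position estimator (assumption:
# two measurement channels, one coordinate, ordinary least squares solved
# via the 2x2 normal equations on synthetic training data).

def fit_linear(X, y):
    """Solve w = (X^T X)^-1 X^T y for a two-feature model (no intercept)."""
    s00 = sum(x[0] * x[0] for x in X)
    s01 = sum(x[0] * x[1] for x in X)
    s11 = sum(x[1] * x[1] for x in X)
    b0 = sum(x[0] * yi for x, yi in zip(X, y))
    b1 = sum(x[1] * yi for x, yi in zip(X, y))
    det = s00 * s11 - s01 * s01
    w0 = (s11 * b0 - s01 * b1) / det
    w1 = (s00 * b1 - s01 * b0) / det
    return w0, w1

# Synthetic "voltage" features generated from known anomaly positions.
positions = [0.1, 0.3, 0.5, 0.7, 0.9]
X = [(p, 2.0 * p + 0.5) for p in positions]   # two correlated channels
w0, w1 = fit_linear(X, positions)

def predict(voltages):
    """Estimate the anomaly coordinate from a new measurement pair."""
    return w0 * voltages[0] + w1 * voltages[1]
```

With noise-free training data the fit is exact, so a new measurement pair generated the same way recovers the true position; real EIT data would of course require regularization and more channels.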
Juan Guerra-Hernández; Eduardo González-Ferreiro; Vicente Monleon; Sonia Faias; Margarida Tomé; Ramón Díaz-Varela
2017-01-01
High spatial resolution imagery provided by unmanned aerial vehicles (UAVs) can yield accurate and efficient estimation of tree dimensions and canopy structural variables at the local scale. We flew a low-cost, lightweight UAV over an experimental Pinus pinea L. plantation (290 trees distributed over 16 ha with different fertirrigation treatments)...
NASA Astrophysics Data System (ADS)
Abu, M. Y.; Nor, E. E. Mohd; Rahman, M. S. Abd
2018-04-01
Integration between quality and costing systems is crucial for achieving accurate product costs and profits. In current practice, most remanufacturers do not optimize the remanufacturing process, so incorrect variables are fed into the costing system. Meanwhile, traditional cost accounting distorts unit costs, leading to inaccurate product costs. The aim of this work is to identify the critical and non-critical variables in the remanufacturing process using the Mahalanobis-Taguchi System, and simultaneously to estimate cost using the Activity Based Costing method. An orthogonal array was applied to indicate the contribution of each variable in the factorial effect graph, and the critical variables were considered together with the overhead costs that actually drive the activities. This work improves quality inspection together with the costing system to produce accurate profitability information. As a result, the cost per unit of a remanufactured crankshaft is MYR609.50 for the MAN engine model with 5 critical crankpins and MYR1254.80 for the Detroit engine model with 4 critical crankpins. The significance of the output is demonstrated through promoting green practices by reducing the re-melting of damaged parts, ensuring a consistent benefit from returned cores.
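At the core of the Mahalanobis-Taguchi System is the Mahalanobis distance of a measured part from a "healthy" reference group. A minimal two-variable sketch is below; the reference measurements and test point are invented for illustration and are not the paper's crankpin data.

```python
# Hedged sketch of the Mahalanobis distance underlying the
# Mahalanobis-Taguchi System: squared distance of a test point from a
# reference group, accounting for correlation between two inspection
# variables via the inverse sample covariance matrix.

def mahalanobis_sq(point, ref):
    n = len(ref)
    mx = sum(p[0] for p in ref) / n
    my = sum(p[1] for p in ref) / n
    # Sample covariance matrix entries (denominator n - 1).
    sxx = sum((p[0] - mx) ** 2 for p in ref) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in ref) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in ref) / (n - 1)
    det = sxx * syy - sxy * sxy
    dx, dy = point[0] - mx, point[1] - my
    # d^2 = [dx dy] @ inverse(covariance) @ [dx dy]^T, expanded for 2x2.
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det

reference = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
```

In MTS, parts with a large distance from the reference space are flagged; the orthogonal-array step then ranks which variables contribute most to that separation.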
Wasim, Fatima; Mahmood, Tariq; Ayub, Khurshid
2016-07-28
Density functional theory (DFT) calculations have been performed to study the response of polypyrrole towards nitrate ions in gas and aqueous phases. First, an accurate estimate of interaction energies is obtained by methods calibrated against the gold-standard CCSD(T) method. Then, a number of low-cost DFT methods are also evaluated for their ability to accurately estimate the binding energies of polymer-nitrate complexes. The low-cost methods evaluated here include dispersion corrected potential (DCP), Grimme's D3 correction, counterpoise correction of the B3LYP method, and Minnesota functionals (M05-2X). The interaction energies calculated using the counterpoise (CP) correction and DCP methods at the B3LYP level are in better agreement with the interaction energies calculated using the calibrated methods. The interaction energies of an infinite polymer (polypyrrole) with nitrate ions are calculated by a variety of low-cost methods in order to find the associated errors. The electronic and spectroscopic properties of polypyrrole oligomers nPy (where n = 1-9) and nPy-NO3(-) complexes are calculated, and then extrapolated for an infinite polymer through a second-degree polynomial fit. Charge analysis, frontier molecular orbital (FMO) analysis and density of state studies also reveal the sensing ability of polypyrrole towards nitrate ions. Interaction energies, charge analysis and density of states analyses illustrate that the response of polypyrrole towards nitrate ions is considerably reduced in the aqueous medium (compared to the gas phase).
Economic burden of moderate to severe plaque psoriasis in Canada.
Levy, Adrian R; Davie, Alison M; Brazier, Nicole C; Jivraj, Farah; Albrecht, Lorne E; Gratton, David; Lynde, Charles W
2012-12-01
Psoriasis is a chronic debilitating disease affecting approximately one million Canadians. The objective of this study is to estimate the economic burden in $CDN (2008) of moderate to severe plaque psoriasis among Canadian adults. Using a cross-sectional design, direct resource use, costs, lost productivity, and quality of life were obtained for 90 subjects diagnosed with psoriasis in three dermatology clinics in British Columbia, Ontario, and Québec. An Excel-based economic model was developed to project the annual cost of psoriasis, from the societal perspective. The estimated mean annual cost of psoriasis was $7999/subject (95% CI: $3563-$12,434) with direct costs accounting for 57%. Mean lost productivity costs, which accounted for 43% of the mean annual costs of psoriasis, were $3442/subject (95% CI: $1293-$5590). Projecting the mean costs per patient to the afflicted population yields an estimated total annual cost of $1.7 billion (95% CI: $0.8-$2.6 billion) attributable to moderate to severe psoriasis in Canada. Understanding the interplay between direct costs, lost productivity, and quality of life is critical for accurately identifying and evaluating effective treatments for this disease. © 2012 The International Society of Dermatology.
Social cost impact assessment of pipeline infrastructure projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, John C., E-mail: matthewsj@battelle.org; Allouche, Erez N., E-mail: allouche@latech.edu; Sterling, Raymond L., E-mail: sterling@latech.edu
A key advantage of trenchless construction methods compared with traditional open-cut methods is their ability to install or rehabilitate underground utility systems with limited disruption to the surrounding built and natural environments. The equivalent monetary values of these disruptions are commonly called social costs. Social costs are often ignored by engineers or project managers during project planning and design phases, partially because they cannot be calculated using standard estimating methods. In recent years some approaches for estimating social costs have been presented. Nevertheless, the cost data needed for validation of these estimating methods is lacking. Development of such social cost databases can be accomplished by compiling relevant information reported in various case histories. This paper identifies the eight most important social cost categories, presents mathematical methods for calculating them, and summarizes the social cost impacts for two pipeline construction projects. The case histories are analyzed in order to identify trends for the various social cost categories. The effectiveness of the methods used to estimate these values is also discussed. These findings are valuable for pipeline infrastructure engineers making renewal technology selection decisions by providing a more accurate process for the assessment of social costs and impacts. - Highlights: • Identified the eight most important social cost factors for pipeline construction • Presented mathematical methods for calculating those social cost factors • Summarized social cost impacts for two pipeline construction projects • Analyzed those projects to identify trends for the social cost factors.
Jack, B Kelsey; Leimona, Beria; Ferraro, Paul J
2009-04-01
To supply ecosystem services, private landholders incur costs. Knowledge of these costs is critical for the design of conservation-payment programs. Estimating these costs accurately is difficult because the minimum acceptable payment to a potential supplier is private information. We describe how an auction of payment contracts can be designed to elicit this information during the design phase of a conservation-payment program. With an estimate of the ecosystem-service supply curve from a pilot auction, conservation planners can explore the financial, ecological, and socioeconomic consequences of alternative scaled-up programs. We demonstrate the potential of our approach in Indonesia, where soil erosion on coffee farms generates downstream ecological and economic costs. Bid data from a small-scale, uniform-price auction for soil-conservation contracts allowed estimates of the costs of a scaled-up program, the gain from integrating biophysical and economic data to target contracts, and the trade-offs between poverty alleviation and supply of ecosystem services. Our study illustrates an auction-based approach to revealing private information about the costs of supplying ecosystem services. Such information can improve the design of programs devised to protect and enhance ecosystem services.
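The uniform-price reverse-auction logic described above can be sketched simply: landholders bid the payment they require, the cheapest bids are accepted until a fixed budget is exhausted, and every winner is paid the same clearing price. The bid values and the choice of the highest accepted bid as the clearing price are illustrative assumptions, not the study's exact rules.

```python
# Sketch of a budget-constrained uniform-price reverse auction for
# conservation contracts. All winners are paid the same price (here, the
# highest accepted bid), so paying i winners at price b costs i * b.

def run_auction(bids, budget):
    """Return (winning bids, clearing price) under a fixed budget."""
    ordered = sorted(bids)
    winners = []
    for i, b in enumerate(ordered, start=1):
        if i * b <= budget:       # all i winners paid b must fit the budget
            winners = ordered[:i]
        else:
            break
    price = winners[-1] if winners else None
    return winners, price
```

The sorted bids themselves trace out the ecosystem-service supply curve: each accepted bid is one contract's minimum acceptable payment, which is exactly the private information the pilot auction is designed to reveal.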
Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests
NASA Technical Reports Server (NTRS)
Douglas, Freddie; Bourgeois, Edit Kaminsky
2005-01-01
The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained with sets of these parameters, along with the costs of past tests. Thereafter, the user feeds HACEM a simple input text file containing the parameters of a planned test or series of tests, selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).
Evaluating cost-efficiency and accuracy of hunter harvest survey designs
Lukacs, P.M.; Gude, J.A.; Russell, R.E.; Ackerman, B.B.
2011-01-01
Effective management of harvested wildlife often requires accurate estimates of the number of animals harvested annually by hunters. A variety of techniques exist to obtain harvest data, such as hunter surveys, check stations, mandatory reporting requirements, and voluntary reporting of harvest. Agencies responsible for managing harvested wildlife such as deer (Odocoileus spp.), elk (Cervus elaphus), and pronghorn (Antilocapra americana) are challenged with balancing the cost of data collection against the value of the information obtained. We compared the precision, bias, and relative cost of several common strategies, including hunter self-reporting and random sampling, for estimating hunter harvest using a realistic set of simulations. Self-reporting with a follow-up survey of hunters who did not report produces the best estimate of harvest in terms of precision and bias, but it is also, by far, the most expensive technique. Self-reporting with no follow-up survey risks very large bias in harvest estimates, and the cost increases with increased response rate. Probability-based sampling provides a substantial cost savings, though accuracy can be affected by nonresponse bias. We recommend stratified random sampling with a calibration estimator used to reweight the sample based on the proportions of hunters responding in each covariate category as the best option for balancing cost and accuracy. © 2011 The Wildlife Society.
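The recommended calibration step can be sketched as post-stratification: within each covariate category, the respondents' mean harvest is scaled up to the known number of license holders in that category. The categories, license counts, and responses below are made-up numbers, and this is a simplified stand-in for a full calibration estimator.

```python
# Sketch of a post-stratified (calibration-style) harvest estimator,
# assuming the license file gives the true hunter count per category and
# only some sampled hunters respond.

license_counts = {"resident": 8000, "nonresident": 2000}

# (category, animals harvested) for each survey respondent.
responses = ([("resident", 1)] * 120 + [("resident", 0)] * 280 +
             [("nonresident", 1)] * 30 + [("nonresident", 0)] * 20)

def calibrated_total(responses, license_counts):
    total = 0.0
    for cat, n_hunters in license_counts.items():
        cat_harvests = [h for c, h in responses if c == cat]
        mean_harvest = sum(cat_harvests) / len(cat_harvests)
        total += n_hunters * mean_harvest   # weight by known population size
    return total
```

Reweighting by category in this way corrects for the nonresponse bias that arises when, say, nonresidents respond at a different rate than residents.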
An empirical Bayes approach to analyzing recurring animal surveys
Johnson, D.H.
1989-01-01
Recurring estimates of the size of animal populations are often required by biologists or wildlife managers. Because of cost or other constraints, estimates frequently lack the accuracy desired but cannot readily be improved by additional sampling. This report proposes a statistical method employing empirical Bayes (EB) estimators as alternatives to those customarily used to estimate population size, and evaluates them by a subsampling experiment on waterfowl surveys. EB estimates, especially a simple limited-translation version, were more accurate and provided shorter confidence intervals with greater coverage probabilities than customary estimates.
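The limited-translation idea can be sketched numerically: each area's raw survey count is pulled toward the grand mean, but the pull is capped so extreme observations are not over-shrunk. The fixed shrinkage weight and cap below are illustrative assumptions; in a real empirical Bayes analysis the weight would be estimated from the between- and within-area variance components.

```python
# Illustrative sketch of a limited-translation empirical Bayes estimator:
# shrink each raw count toward the grand mean, capping the shift at `cap`.

def eb_limited(raw_counts, shrink=0.3, cap=15.0):
    grand_mean = sum(raw_counts) / len(raw_counts)
    out = []
    for y in raw_counts:
        shift = shrink * (grand_mean - y)      # pull toward the mean
        shift = max(-cap, min(cap, shift))     # limited translation
        out.append(y + shift)
    return out
```

For counts [100, 120, 80, 200] (grand mean 125), the extreme value 200 would be shrunk by 22.5 under plain shrinkage but only by the cap of 15 under limited translation, which is the version the report found most accurate.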
Levy, Adrian; Johnston, Karissa; Annemans, Lieven; Tramarin, Andrea; Montaner, Julio
2010-01-01
The global prevalence of HIV infection continues to grow, as a result of increasing incidence in some countries and improved survival where highly active antiretroviral therapy (HAART) is available. Growing healthcare expenditure and shifts in the types of medical resources used have created a greater need for accurate information on the costs of treatment. The objectives of this review were to compare published estimates of direct medical costs for treating HIV and to determine the impact of disease stage on such costs, based on CD4 cell count and plasma viral load. A literature review was conducted to identify studies meeting prespecified criteria for information content, including an original estimate of the direct medical costs of treating an HIV-infected individual, stratified based on markers of disease progression. Three unpublished cost-of-care studies were also included, which were applied in the economic analyses published in this supplement. A two-step procedure was used to convert costs into a common price year (2004) using country-specific health expenditure inflators and, to account for differences in currency, using health-specific purchasing power parities to express all cost estimates in US dollars. In all nine studies meeting the eligibility criteria, infected individuals were followed longitudinally and a 'bottom-up' approach was used to estimate costs. The same patterns were observed in all studies: the lowest CD4 categories had the highest cost; there was a sharp decrease in costs as CD4 cell counts rose towards 100 cells/mm³; and there was a more gradual decline in costs as CD4 cell counts rose above 100 cells/mm³. In the single study reporting cost according to viral load, it was shown that higher plasma viral load level (> 100,000 HIV-RNA copies/mL) was associated with higher costs of care. The results demonstrate that the cost of treating HIV disease increases with disease progression, particularly at CD4 cell counts below 100 cells/mm³. 
The suggestion that costs increase as the plasma viral load rises needs independent verification. This review of the literature further suggests that publicly available information on the cost of HAART by disease stage is inadequate. To address the information gap, multiple stakeholders (governments, pharmaceutical industry, private insurers and non-governmental organizations) have begun to establish and support an independent, high quality and standardized multicountry data collection for evaluating the cost of HIV management. An accurate, representative and relevant cost-estimate data resource would provide a valuable asset to healthcare planners that may lead to improved policy and decision-making in managing the HIV epidemic.
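The review's two-step conversion procedure is simple arithmetic: first inflate a local-currency cost to the common price year (2004) with a country-specific health expenditure inflator, then convert to US dollars with a health-specific purchasing power parity. A minimal sketch, with an invented inflator and PPP value:

```python
# Sketch of the two-step cost conversion described above. The inflator and
# PPP figures are illustrative placeholders, not values from the review.

def to_common_usd(cost_local, inflator_to_2004, ppp_local_per_usd):
    cost_2004_local = cost_local * inflator_to_2004   # step 1: price year
    return cost_2004_local / ppp_local_per_usd        # step 2: currency

# e.g. a 10,000-unit local cost, 8% cumulative health-sector inflation to
# 2004, and a PPP of 1.25 local units per US dollar:
usd_2004 = to_common_usd(10_000, 1.08, 1.25)   # about 8640 USD (2004)
```

Using health-specific rather than general-economy PPPs matters because medical prices often diverge from overall price levels, which is why the review applies them.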
Wang, Xiao Jun; Lopez, Shaun Eric; Chan, Alexandre
2015-05-01
The primary objective of this review was to identify the cost components most frequently associated with the economic burden of febrile neutropenia (FN) among patients with lymphoma. The secondary objective was to identify any parameters associated with higher FN costs. Ten cost-of-illness (COI) studies were identified. General characteristics of study design, country, perspective, and patient population were extracted and systematically reported. The majority (70%) of the studies employed the perspective of the healthcare provider, and 20% considered long-term costs. Estimated costs were adjusted to 2013 US dollars and ranged from US$5819 to US$34,756. The cost components most frequently associated with economic burden were ward and medication costs. Inpatient management, male gender, being discharged dead, and comorbidity were positively associated with higher FN costs. Future COI studies on FN should focus on the accurate estimation of ward and medication costs. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Dewey, H M; Thrift, A G; Mihalopoulos, C; Carter, R; Macdonell, R A; McNeil, J J; Donnan, G A
2001-10-01
Accurate information about resource use and costs of stroke is necessary for informed health service planning. The purpose of this study was to determine the patterns of resource use among stroke patients and to estimate the total costs (direct service use and indirect production losses) of stroke (excluding SAH) in Australia for 1997. An incidence-based cost-of-illness model was developed, incorporating data obtained from the North East Melbourne Stroke Incidence Study (NEMESIS). The costs of stroke during the first year after stroke and the present value of total lifetime costs of stroke were estimated. The total first-year costs of all first-ever-in-a-lifetime strokes (SAH excluded) that occurred in Australia during 1997 were estimated to be A$555 million (US$420 million), and the present value of lifetime costs was estimated to be A$1.3 billion (US$985 million). The average cost per case during the first 12 months and over a lifetime was A$18,956 (US$14,361) and A$44,428 (US$33,658), respectively. The most important categories of cost during the first year were acute hospitalization (A$154 million), inpatient rehabilitation (A$150 million), and nursing home care (A$63 million). The present value of lifetime indirect costs was estimated to be A$34 million. As in other studies, hospital and nursing home costs contributed most to the total cost of stroke (excluding SAH) in Australia. Inpatient rehabilitation accounts for approximately 27% of total first-year costs. Given the magnitude of these costs, investigation of the cost-effectiveness of rehabilitation services should become a priority in this community.
NASA Technical Reports Server (NTRS)
Bredt, J. H.
1974-01-01
Two types of space processing operations may be considered economically justified; they are manufacturing operations that make profits and experiment operations that provide needed applied research results at lower costs than those of alternative methods. Some examples from the Skylab experiments suggest that applied research should become cost effective soon after the space shuttle and Spacelab become operational. In space manufacturing, the total cost of space operations required to process materials must be repaid by the value added to the materials by the processing. Accurate estimates of profitability are not yet possible because shuttle operational costs are not firmly established and the markets for future products are difficult to estimate. However, approximate calculations show that semiconductor products and biological preparations may be processed on a scale consistent with market requirements and at costs that are at least compatible with profitability using the Shuttle/Spacelab system.
Costs of a work-family intervention: evidence from the work, family, and health network.
Barbosa, Carolina; Bray, Jeremy W; Brockwood, Krista; Reeves, Daniel
2014-01-01
To estimate the cost to the workplace of implementing initiatives to reduce work-family conflict. Prospective cost analysis conducted alongside a group-randomized, multisite, controlled experimental study, using a microcosting approach. An information technology firm. Employees (n = 1004) and managers (n = 141) randomized to the intervention arm. The intervention, STAR (Start. Transform. Achieve. Results.), was designed to enhance employees' control over their work time, increase supervisor support for employees to manage work and family responsibilities, and reorient the culture toward results. A taxonomy of activities related to customization, start-up, and implementation was developed. Resource use and unit costs were estimated for each activity, excluding research-related activities. An economic costing approach (accounting and opportunity costs) was used, with sensitivity analyses on intervention costs. The total cost of STAR was $709,654, of which $389,717 was labor costs and $319,937 nonlabor costs (including $313,877 for the intervention contract). The cost per employee participating in the intervention was $340 (95% confidence interval: $330-$351); $597 ($561-$634) for managers and $300 ($292-$308) for other employees (2011 prices). A detailed activity-costing approach allows for more accurate cost estimates and identifies key drivers of cost. The key cost driver was employees' time spent receiving the intervention. Ignoring this cost, as is usual in studies that cost workplace interventions, would seriously underestimate the cost of a workplace initiative.
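The microcosting arithmetic behind an activity taxonomy like this is straightforward: each activity's cost is hours spent times a loaded wage, plus any non-labor outlay, summed across activities. The activities, hours, and rates below are invented for illustration and are not the STAR study's figures.

```python
# Minimal activity-based microcosting sketch: cost each activity from its
# resource use (hours x wage) plus non-labor outlays, then sum.

activities = [
    # (activity, hours, loaded wage per hour, non-labor cost)
    ("customization",            200,  60.0,  5_000.0),
    ("start-up training",        350,  55.0,  2_500.0),
    ("implementation sessions",  900,  50.0,  1_000.0),
]

def total_cost(activities):
    return sum(hours * wage + nonlabor
               for _name, hours, wage, nonlabor in activities)
```

Because each activity is costed separately, the same table immediately shows which activity drives the total, which is how the study identifies employee time as the key cost driver.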
Reducing Contingency through Sampling at the Luckey FUSRAP Site - 13186
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frothingham, David; Barker, Michelle; Buechi, Steve
2013-07-01
Typically, the greatest risk in developing accurate cost estimates for the remediation of hazardous, toxic, and radioactive waste sites is the uncertainty in the estimated volume of contaminated media requiring remediation. Efforts to address this risk in the remediation cost estimate can result in large cost contingencies that are often considered unacceptable when budgeting for site cleanups. Such was the case for the Luckey Formerly Utilized Sites Remedial Action Program (FUSRAP) site near Luckey, Ohio, which had significant uncertainty surrounding the estimated volume of site soils contaminated with radium, uranium, thorium, beryllium, and lead. Funding provided by the American Recovery and Reinvestment Act (ARRA) allowed the U.S. Army Corps of Engineers (USACE) to conduct additional environmental sampling and analysis at the Luckey Site between November 2009 and April 2010, with the objective of further delineating the horizontal and vertical extent of contaminated soils in order to reduce the uncertainty in the soil volume estimate. Investigative work included radiological, geophysical, and topographic field surveys, subsurface borings, and soil sampling. Results from the investigative sampling were used in conjunction with Argonne National Laboratory's Bayesian Approaches for Adaptive Spatial Sampling (BAASS) software to update the contaminated soil volume estimate for the site. This updated volume estimate was then used to update the project cost-to-complete estimate using the USACE Cost and Schedule Risk Analysis process, which develops cost contingencies based on project risks. An investment of $1.1 M of ARRA funds for additional investigative work resulted in a reduction of 135,000 in-situ cubic meters (177,000 in-situ cubic yards) in the estimated contaminated soil volume. This refinement resulted in a $64.3 M reduction in the estimated project cost-to-complete, through a reduction in the uncertainty in the contaminated soil volume estimate and the associated contingency costs.
Estimating and bidding for the Space Station Processing Facility
NASA Technical Reports Server (NTRS)
Brown, Joseph A.
1993-01-01
This new, unique Cost Engineering Report introduces the 800-page C-100 government estimate for the Space Station Processing Facility (SSPF) and the Volume IV Aerospace Construction Price Book. At the January 23, 1991, bid opening for the SSPF, the government cost estimate was right on target: the low bid, from prime contractor Metric, Inc., was 1.2 percent below the government estimate. This project contains many different and complex systems. Volume IV is a summary of the costs associated with construction, activation, and Ground Support Equipment (GSE) design, estimating, fabrication, installation, testing, termination, and verification for this project. Included are 13 reasons the government estimate was so accurate; an abstract of bids for the 8 bidders and the government estimate, with additive alternates and special labor and materials; budget comparisons and system summaries; and comments on the energy credit from the local electrical utility. This report adds another project to our continuing study of 'How Does the Low Bidder Get Low and Make Money?', which was started in 1967 and first published in the 1973 AACE Transactions with 10 more ways the low bidder got low. The accuracy of this estimate demonstrates the benefits of our Kennedy Space Center (KSC) teamwork efforts and KSC cost engineering tools, which are contributing toward our Space Station goals.
Standardization in software conversion of (ROM) estimating
NASA Technical Reports Server (NTRS)
Roat, G. H.
1984-01-01
Technical problems and their solutions comprise by far the majority of the work involved in space simulation engineering. Fixed-price contracts with schedule award fees are becoming more and more prevalent. Accurate estimation of these jobs is critical to keeping costs within limits and predicting realistic contract schedule dates. Computerized estimating may hold the answer to these new problems, though up to now it has been complex, expensive, and geared to the business world, not to technical people. The objective of this effort was to provide a simple program on a desktop computer capable of producing a Rough Order of Magnitude (ROM) estimate in a short time. This program is not intended to provide a highly detailed breakdown of costs to a customer, but to provide a number that can be used as a rough estimate on short notice. With more debugging and fine tuning, a more detailed estimate can be made.
Tenon, Mathieu; Feuillère, Nicolas; Roller, Marc; Birtić, Simona
2017-04-15
Yucca GRAS-labelled saponins have been, and are increasingly being, used in the food/feed, pharmaceutical, and cosmetic industries. Existing techniques for Yucca steroidal saponin quantification are either inaccurate and misleading, or accurate but time-consuming and cost-prohibitive. The method reported here addresses all of the above challenges. The HPLC/ELSD technique is an accurate and reliable method that yields results of appropriate repeatability and reproducibility, and it does not over- or under-estimate levels of steroidal saponins. The HPLC/ELSD method does not require a pure standard of each and every saponin to quantify the group of steroidal saponins. It is a time- and cost-effective technique suitable for routine industrial analyses, and it yields saponin fingerprints specific to the plant species. As the method is capable of distinguishing saponin profiles from taxonomically distant species, it can unravel plant adulteration issues. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Gupta, Puneet; Bhowmick, Brojeshwar; Pal, Arpan
2017-07-01
Camera-equipped devices are ubiquitous and proliferating in day-to-day life. Accurate heart rate (HR) estimation from face videos acquired with low-cost cameras in a non-contact manner can be used in many real-world scenarios and hence requires rigorous exploration. This paper presents an accurate and near real-time HR estimation system using such face videos. It is based on the phenomenon that color and motion variations in the face video are closely related to the heart beat. The variations also contain noise due to facial expressions, respiration, eye blinking and environmental factors, which the proposed system handles. Neither Eulerian nor Lagrangian temporal signals can provide accurate HR in all cases. The cases where Eulerian temporal signals perform spuriously are determined using a novel poorness measure, and then both the Eulerian and Lagrangian temporal signals are employed for better HR estimation; such a fusion is referred to as serial fusion. Experimental results reveal that the error introduced by the proposed algorithm is 1.8±3.6, which is significantly lower than that of existing well-known systems.
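The band-limited spectral-peak idea behind such color-based HR estimators can be sketched as follows. This is a minimal Eulerian-style illustration, not the paper's serial-fusion algorithm; the cardiac band limits and the synthetic trace are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_hr(trace, fps, lo=0.7, hi=4.0):
    """Estimate heart rate (bpm) from a facial color trace by locating the
    dominant spectral peak inside the plausible cardiac band (lo-hi Hz)."""
    x = trace - np.mean(trace)                  # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)        # restrict to 42-240 bpm
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                          # Hz -> beats per minute

# Synthetic 72-bpm pulse sampled at 30 fps, with measurement noise
fps, hr_true = 30.0, 72.0
t = np.arange(0.0, 10.0, 1.0 / fps)
trace = np.sin(2 * np.pi * (hr_true / 60.0) * t) + 0.3 * rng.standard_normal(len(t))
print(estimate_hr(trace, fps))
```

On this synthetic trace the spectral peak falls in the 1.2 Hz bin, recovering the 72-bpm rate; a real system would first extract the skin-color trace from the video frames.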
Shift level analysis of cable yarder availability, utilization, and productive time
James R. Sherar; Chris B. LeDoux
1989-01-01
Decision makers, loggers, managers, and planners need to understand and have methods for estimating utilization and productive time of cable logging systems. In making an accurate prediction of how much area and volume a machine will log per unit time and the associated cable yarding costs, a reliable estimate of the availability, utilization, and productive time of...
Estimating the Latent Number of Types in Growing Corpora with Reduced Cost-Accuracy Trade-Off
ERIC Educational Resources Information Center
Hidaka, Shohei
2016-01-01
The number of unique words in children's speech is one of most basic statistics indicating their language development. We may, however, face difficulties when trying to accurately evaluate the number of unique words in a child's growing corpus over time with a limited sample size. This study proposes a novel technique to estimate the latent number…
A Leo Satellite Navigation Algorithm Based on GPS and Magnetometer Data
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Harman, Rick; Bar-Itzhack, Itzhack
2001-01-01
The Global Positioning System (GPS) has become a standard method for low cost onboard satellite orbit determination. The use of a GPS receiver as an attitude and rate sensor has also been developed in the recent past. Additionally, focus has been given to attitude and orbit estimation using the magnetometer, a low cost, reliable sensor. Combining measurements from both GPS and a magnetometer can provide a robust navigation system that takes advantage of the estimation qualities of both measurements. Ultimately, a low cost, accurate navigation system can result, potentially eliminating the need for more costly sensors, including gyroscopes. This work presents the development of a technique to eliminate numerical differentiation of the GPS phase measurements and also compares the use of one versus two GPS satellites.
Brennan, Victoria K; Colosia, Ann D; Copley-Merriman, Catherine; Mauskopf, Josephine; Hass, Bastian; Palencia, Roberto
2014-07-01
To identify cost estimates related to myocardial infarction (MI) or stroke in patients with type 2 diabetes mellitus (T2DM) for use in economic models. A systematic literature review was conducted. Electronic databases and conference abstracts were screened against inclusion criteria, which included studies performed in patients who had T2DM before experiencing an MI or stroke. Primary cost studies and economic models were included. Costs were converted to 2012 pounds sterling. Fifty-four studies were identified: 13 primary cost studies and 41 economic evaluations using secondary sources for complication costs. Primary studies provided costs from 10 countries. Estimates for a fatal event ranged from £2,482-£5,222 for MI and from £4,900-£6,694 for stroke. Costs for the year a non-fatal event occurred ranged from £5,071-£29,249 for MI and from £5,171-£38,732 for stroke. Annual follow-up costs ranged from £945-£1,616 for an MI and from £4,704-£12,926 for a stroke. Economic evaluations from 12 countries were identified, and costs of complications showed variability similar to the primary studies. The costs identified within primary studies varied between and within countries. Many studies used costs estimated in studies not specific to patients with T2DM. Data gaps included a detailed breakdown of resource use, which affected the ability to compare data across countries. In the development of economic models for patients with T2DM, the use of accurate estimates of costs associated with MI and stroke is important. When country-specific costs are not available, clear justification for the choice of estimates should be provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao
Surrogate models are commonly used in Bayesian approaches such as Markov chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimation of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or by implementing MCMC in a two-stage manner. Since two-stage MCMC requires extra original model evaluations, the computational cost is still high. If the measurement information is incorporated, a locally accurate approximation of the original model can be adaptively constructed at low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimate of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing estimation accuracy, the new approach achieves about a 200-fold speed-up compared to our previous work using two-stage MCMC.
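A toy version of the key idea, folding the surrogate's predictive variance into the likelihood so that parameter values where the surrogate is uncertain are not over-weighted, might look like this. It is a one-dimensional sketch with an invented forward model, not the paper's groundwater setup.

```python
import numpy as np

def gp_predict(xtr, ytr, xte, ls=1.0, jitter=1e-8):
    """Basic GP regression with an RBF kernel: predictive mean and variance."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)
    K = k(xtr, xtr) + jitter * np.eye(len(xtr))
    Ks = k(xte, xtr)
    mu = Ks @ np.linalg.solve(K, ytr)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.maximum(var, 0.0)

f = np.sin                                # "expensive" forward model (invented)
xtr = np.linspace(0.0, np.pi, 8)          # a handful of costly model runs
theta = np.linspace(0.0, np.pi, 201)      # candidate parameter values
mu, var = gp_predict(xtr, f(xtr), theta)

# The surrogate's predictive variance is added to the measurement-noise term
# of the Gaussian likelihood, guarding against over-confident posteriors.
y_obs, noise = 1.0, 0.05                  # datum generated at theta = pi/2
loglik = -0.5 * (y_obs - mu)**2 / (noise**2 + var) - 0.5 * np.log(noise**2 + var)
theta_hat = theta[np.argmax(loglik)]
print(theta_hat)
```

The maximum-likelihood parameter lands near pi/2, where the surrogate predicts f(theta) closest to the observation.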
Mauro, Francisco; Monleon, Vicente J; Temesgen, Hailemariam; Ford, Kevin R
2017-01-01
Forest inventories require estimates and measures of uncertainty for subpopulations such as management units. These units often have small sample sizes, so they should be regarded as small areas. When auxiliary information is available, different small area estimation methods have been proposed to obtain reliable estimates for small areas. Unit level empirical best linear unbiased predictors (EBLUPs) based on plot or grid unit level models have been studied more thoroughly than area level EBLUPs, where the modelling occurs at the management unit scale. Area level EBLUPs do not require precise plot positioning and allow the use of variable radius plots, thus reducing fieldwork costs. However, their performance has not been examined thoroughly. We compared unit level and area level EBLUPs, using LiDAR auxiliary information collected for inventorying a 98,104 ha coastal coniferous forest. Unit level models were consistently more accurate than area level EBLUPs, and area level EBLUPs were consistently more accurate than field estimates except for large management units that held a large sample. For stand density, volume, basal area, quadratic mean diameter, mean height and Lorey's height, root mean squared errors (RMSEs) of estimates obtained using area level EBLUPs were, on average, 1.43, 2.83, 2.09, 1.40, 1.32 and 1.64 times larger than those based on unit level estimates, respectively. Similarly, direct field estimates had RMSEs that were, on average, 1.37, 1.45, 1.17, 1.17, 1.26, and 1.38 times larger than the RMSEs of area level EBLUPs. Therefore, area level models can lead to substantial gains in accuracy compared to direct estimates, and unit level models lead to very important gains in accuracy compared to area level models, potentially justifying the additional costs of obtaining accurate field plot coordinates.
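The area-level composite estimator at the heart of such models can be illustrated with a Fay-Herriot-style shrinkage weight: the small-area estimate is a variance-weighted blend of the noisy direct field estimate and a regression-synthetic estimate from the LiDAR auxiliaries. All numbers below are invented.

```python
def eblup_area(direct, var_direct, synthetic, var_model):
    """Composite small-area estimate: shrink the direct estimate toward the
    synthetic (auxiliary-based) estimate in proportion to their variances."""
    gamma = var_model / (var_model + var_direct)   # shrinkage weight
    return gamma * direct + (1 - gamma) * synthetic

# A unit with few plots (noisy direct estimate) leans on the LiDAR-based value
print(eblup_area(direct=320.0, var_direct=900.0, synthetic=280.0, var_model=300.0))
```

With a direct-estimate variance three times the model variance, the weight on the direct estimate is only 0.25, so the composite (290.0) sits much closer to the synthetic value.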
1994-06-07
023-S-94 Military Construction Projects Budgeted January 14, 1994 and Programmed for Bases Identified for Closure or Realignment 028-C-93...deferred to this analysis as the more accurate basis for design and construction costs, rather than the gross estimates in the 1391s submitted much...solution(s), it is imperative that design and construction costs, operation/maintenance costs, the specific health care needs of the population to
Butler, Troy; Wildey, Timothy
2018-01-01
In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy of estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that using the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluating the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods, at a lower computational cost.
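The reliability argument can be demonstrated on a toy surrogate with a known pointwise error bound; the models, the bound, and the event threshold below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

f = lambda x: x**2                          # "expensive" high-fidelity model
g = lambda x: x**2 + 0.04 * np.sin(9 * x)   # cheap surrogate of f
err_bound = 0.04                            # known bound on |f - g|

c = 0.25                                    # event of interest: f(x) > c
x = rng.random(10_000)

gx = g(x)
reliable = np.abs(gx - c) > err_bound       # surrogate verdict cannot be flipped
# Fall back to the high-fidelity model only for the unreliable samples
event = np.where(reliable, gx > c, f(x) > c)

p_hybrid = event.mean()
p_full = np.mean(f(x) > c)                  # reference: evaluate f everywhere
n_hifi = int(np.count_nonzero(~reliable))
print(p_hybrid, p_full, n_hifi)
```

Because the surrogate's verdict cannot be flipped by its error wherever |g(x) - c| exceeds the bound, the hybrid estimate matches the all-high-fidelity estimate exactly, while the high-fidelity model is needed on only a small fraction of the samples.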
Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G
2012-10-01
Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires the presence of species in a sample to be assessed, while counts of the number of individuals per species are required for only a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample in which incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and comprehension of the left tail of the species abundance distribution. We show how to choose the sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.
Using optimal transport theory to estimate transition probabilities in metapopulation dynamics
Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.
2017-01-01
This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
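For sites ordered along a single dimension with a convex cost such as squared Euclidean distance, the minimum-cost transport plan is the monotone (north-west corner) coupling, which gives a dependency-free sketch of the approach. The census counts below are invented; the general multi-dimensional problem is solved by linear programming.

```python
def ot_transitions(n1, n2):
    """Minimum-cost transport plan between two censuses of sites ordered on a
    line (convex cost), converted to row-stochastic transition probabilities."""
    k = len(n1)
    T = [[0.0] * k for _ in range(k)]
    supply, demand = list(n1), list(n2)
    i = j = 0
    while i < k and j < k:
        moved = min(supply[i], demand[j])   # ship as much as possible in place
        T[i][j] += moved
        supply[i] -= moved
        demand[j] -= moved
        if supply[i] == 0:
            i += 1                          # site i's individuals all assigned
        else:
            j += 1                          # site j's incoming count satisfied
    # Normalize each row by the starting count to get transition probabilities
    return [[tij / tot for tij in row] for row, tot in zip(T, n1)]

# Three sites on a line, censused at two points in time
P = ot_transitions([100.0, 50.0, 30.0], [80.0, 70.0, 30.0])
print(P[0])   # site 1: most individuals stay, the surplus moves to the nearest site
```

The plan moves the 20 surplus individuals from site 1 to the adjacent site 2, giving first-row probabilities [0.8, 0.2, 0.0], exactly the cheapest rearrangement under a distance-increasing cost.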
Variation in the costs of delivering routine immunization services in Peru.
Walker, D; Mosqueira, N R; Penny, M E; Lanata, C F; Clark, A D; Sanderson, C F B; Fox-Rushby, J A
2004-09-01
Estimates of vaccination costs usually provide only point estimates at the national level, with no information on cost variation. In practice, however, such information is necessary for programme managers. This paper presents information on the variation in costs of delivering routine immunization services in three diverse districts of Peru: Ayacucho (a mountainous area), San Martin (a jungle area) and Lima (a coastal area). We consider the impact of variability on predictions of cost and reflect on the likely impact on expected cost-effectiveness ratios, policy decisions and future research practice. All costs are in 2002 prices in US dollars and include the costs of providing vaccination services incurred by 19 government health facilities during the January-December 2002 financial year. Vaccine wastage rates have been estimated using stock records. The cost per fully vaccinated child ranged from US$16.63 to US$24.52 in Ayacucho, from US$21.79 to US$36.69 in San Martin, and from US$9.58 to US$20.31 in Lima. The volume of vaccines administered and wastage rates are determinants of the variation in costs of delivering routine immunization services. This study shows there is considerable variation in the costs of providing vaccines across geographical regions and different types of facilities. Information on how costs vary can be used as a basis from which to generalize to other settings and to provide more accurate estimates for decision-makers who do not have disaggregated data on local costs. Future studies should include sufficiently large sample sizes and ensure that regions are carefully selected in order to maximize the interpretation of cost variation.
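The role of wastage in the cost per fully vaccinated child can be sketched with the standard adjustment in which doses purchased exceed doses administered by the wastage factor. The figures below are illustrative, not the Peruvian estimates.

```python
def cost_per_fvc(vaccine_cost_per_child, wastage_rate, delivery_cost_per_child):
    """Cost per fully vaccinated child: vaccine cost is grossed up by the
    wastage rate (doses purchased = doses administered / (1 - wastage))
    before adding per-child delivery costs."""
    effective_vaccine = vaccine_cost_per_child / (1.0 - wastage_rate)
    return effective_vaccine + delivery_cost_per_child

# Illustrative inputs: US$5 of administered vaccine, 25% wastage, US$8 delivery
print(round(cost_per_fvc(5.0, 0.25, 8.0), 2))
```

Raising the wastage rate from 25% to 50% in this sketch would push the vaccine component from US$6.67 to US$10.00 per child, which is why wastage is a key driver of the cost variation reported above.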
Hogan, Thomas J
2012-05-01
The objective was to review recent economic evaluations of influenza vaccination by injection in the US, assess their evidence, and draw conclusions from their collective findings. The literature was searched for economic evaluations of influenza vaccination by injection in healthy working adults in the US published since 1995. Ten evaluations described in nine papers were identified. These were synopsized and their results evaluated, the basic structure of all evaluations was ascertained, and the sensitivity of outcomes to changes in parameter values was explored using a decision model. Areas in which to improve economic evaluations were noted. Eight of the nine evaluations with credible economic outcomes were favourable to vaccination, a statistically significant result compared with the 50% proportion that would be expected if vaccination and no vaccination were economically equivalent. The evaluations shared a basic structure but differed considerably with respect to cost components, assumptions, methods, and parameter estimates. Sensitivity analysis indicated that changes in parameter values within the feasible range, individually or simultaneously, could reverse economic outcomes. Given the misgivings stated, the methods of estimating the influenza reduction ascribed to vaccination must be researched to confirm that they produce accurate and reliable estimates. Research is also needed to improve estimates of the costs per case of influenza illness and the costs of vaccination. Based on their assumptions, the reviewed papers collectively appear to support the economic benefits of influenza vaccination of healthy adults. Yet the underlying assumptions, methods and parameter estimates themselves warrant further research to confirm that they are accurate, reliable and appropriate for economic evaluation purposes.
Hu, Xinyao; Zhao, Jun; Peng, Dongsheng; Sun, Zhenglong; Qu, Xingda
2018-02-01
Postural control is a complex skill based on the interaction of dynamic sensorimotor processes, and can be challenging for people with deficits in sensory functions. The foot plantar center of pressure (COP) has often been used for quantitative assessment of postural control. Previously, the foot plantar COP was mainly measured by force plates or complicated and expensive insole-based measurement systems. Although some low-cost instrumented insoles have been developed, their ability to accurately estimate the foot plantar COP trajectory was not robust. In this study, a novel individual-specific nonlinear model was proposed to estimate the foot plantar COP trajectories with an instrumented insole based on low-cost force sensitive resistors (FSRs). The model coefficients were determined by a least square error approximation algorithm. Model validation was carried out by comparing the estimated COP data with the reference data in a variety of postural control assessment tasks. We also compared our data with the COP trajectories estimated by the previously well accepted weighted mean approach. Comparing with the reference measurements, the average root mean square errors of the COP trajectories of both feet were 2.23 mm (±0.64) (left foot) and 2.72 mm (±0.83) (right foot) along the medial-lateral direction, and 9.17 mm (±1.98) (left foot) and 11.19 mm (±2.98) (right foot) along the anterior-posterior direction. The results are superior to those reported in previous relevant studies, and demonstrate that our proposed approach can be used for accurate foot plantar COP trajectory estimation. This study could provide an inexpensive solution to fall risk assessment in home settings or community healthcare center for the elderly. It has the potential to help prevent future falls in the elderly.
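An individual-specific least-squares calibration of this kind can be sketched on synthetic data. The sensor layout, the square-root FSR response, and the quadratic feature set below are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-sensor insole: sensor x-positions in mm (made-up layout)
px = np.array([20.0, 60.0, 20.0, 60.0])

n = 1000
force = 0.5 + rng.random((n, 4))                 # true plantar forces (a.u.)
cop = (force * px).sum(1) / force.sum(1)         # reference COP (x-axis only)

r = np.sqrt(force)                               # assumed nonlinear FSR response

# Baseline: weighted mean of sensor positions using the raw FSR readings
cop_wm = (r * px).sum(1) / r.sum(1)

# Individual-specific model: least-squares fit on readings and their squares
X = np.hstack([r, r**2, np.ones((n, 1))])
beta, *_ = np.linalg.lstsq(X, cop, rcond=None)
cop_ls = X @ beta

rmse = lambda e: float(np.sqrt(np.mean(e**2)))
print(rmse(cop_wm - cop), rmse(cop_ls - cop))
```

Because the fitted model can compensate for the nonlinear sensor response, its RMSE on this synthetic data is well below that of the weighted-mean baseline, mirroring the comparison reported in the abstract.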
Direct costs of osteoporosis and hip fracture: an analysis for the Mexican healthcare system.
Clark, P; Carlos, F; Barrera, C; Guzman, J; Maetzel, A; Lavielle, P; Ramirez, E; Robinson, V; Rodriguez-Cabrera, R; Tamayo, J; Tugwell, P
2008-03-01
This study reports the direct costs related to osteoporosis (OP) and hip fractures paid by governmental and private institutions in the Mexican health system and estimates the impact of these conditions in Mexico. We conclude that the economic burden due to the direct costs of hip fracture justifies wide-scale prevention programs for OP. To estimate the total direct costs of OP and hip fractures in the Mexican healthcare system, a sample of governmental and private institutions was studied. Information was gathered through direct questionnaires in 275 OP patients and 218 hip fracture cases. Additionally, a chart review was conducted and experts' opinions were obtained to construct accurate protocol scenarios for the diagnosis and treatment of OP without fracture. Microcosting and activity-based costing techniques were used to yield unit costs. The total direct costs for OP and hip fracture were estimated for 2006 based on the projected annual incidence of hip fractures in Mexico. A total of 22,233 hip fracture cases were estimated for 2006, with a total cost to the healthcare system of US$97,058,159 for the acute treatment alone (US$4,365.50 per case). We found considerable differences in costs and in the way patients were treated across the different health sectors within the country. Costs of the acute treatment of hip fractures in Mexico are high and are expected to increase with the predicted rise in life expectancy and in the number of elderly people in our population.
Salemi, Jason L; Comins, Meg M; Chandler, Kristen; Mogos, Mulubrhan F; Salihu, Hamisu M
2013-08-01
Comparative effectiveness research (CER) and cost-effectiveness analysis are valuable tools for informing health policy and clinical care decisions. Despite the increased availability of rich observational databases with economic measures, few researchers have the skills needed to conduct valid and reliable cost analyses for CER. The objectives of this paper are to (i) describe a practical approach for calculating cost estimates from hospital charges in discharge data using publicly available hospital cost reports, and (ii) assess the impact of using different methods for cost estimation in maternal and child health (MCH) studies by conducting economic analyses on gestational diabetes (GDM) and pre-pregnancy overweight/obesity. In Florida, we have constructed a clinically enhanced, longitudinal, encounter-level MCH database covering over 2.3 million infants (and their mothers) born alive from 1998 to 2009. Using this as a template, we describe a detailed methodology to use publicly available data to calculate hospital-wide and department-specific cost-to-charge ratios (CCRs), link them to the master database, and convert reported hospital charges to refined cost estimates. We then conduct an economic analysis as a case study on women by GDM and pre-pregnancy body mass index (BMI) status to compare the impact of using different methods on cost estimation. Over 60% of inpatient charges for birth hospitalizations came from the nursery/labor/delivery units, which have very different cost-to-charge markups (CCR = 0.70) than the commonly substituted hospital average (CCR = 0.29). Using estimated mean, per-person maternal hospitalization costs for women with GDM as an example, unadjusted charges (US$14,696) grossly overestimated actual cost, compared with hospital-wide (US$3,498) and department-level (US$4,986) CCR adjustments. 
However, the refined cost estimation method, although more accurate, did not alter our conclusions that infant/maternal hospitalization costs were significantly higher for women with GDM than without, and for overweight/obese women than for those in a normal BMI range. Cost estimates, particularly among MCH-related services, vary considerably depending on the adjustment method. Our refined approach will be valuable to researchers interested in incorporating more valid estimates of cost into databases with linked hospital discharge files.
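The effect of department-specific versus hospital-wide CCR adjustment can be sketched as follows. The nursery CCR of 0.70 and the hospital-wide CCR of 0.29 are the values quoted above; the other departments' CCRs and the charge breakdown (chosen to total the quoted US$14,696 mean charge) are invented for illustration.

```python
# Convert billed charges to cost estimates: each line item's charge is
# multiplied by its department's cost-to-charge ratio (CCR), rather than
# applying a single hospital-wide average to the total.
charges = {"nursery": 10000.0, "pharmacy": 3000.0, "imaging": 1696.0}
dept_ccr = {"nursery": 0.70, "pharmacy": 0.25, "imaging": 0.30}   # pharmacy/imaging invented
hospital_ccr = 0.29

cost_dept = sum(charge * dept_ccr[dept] for dept, charge in charges.items())
cost_avg = sum(charges.values()) * hospital_ccr
print(cost_dept, cost_avg)
```

Because the nursery's markup is far smaller than the hospital average implies, the department-level estimate is roughly double the hospital-wide one here, the same direction of difference as the US$4,986 versus US$3,498 figures above.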
Saruta, Yuko; Puig-Junoy, Jaume
2016-06-01
Conventional intraoperative sentinel lymph node biopsy (SLNB) in breast cancer (BC) has limitations in establishing a definitive diagnosis of metastasis intraoperatively, leading to unnecessary second operations. The one-step nucleic acid amplification assay (OSNA) provides accurate intraoperative diagnosis and avoids further testing. Only five articles have examined the cost and cost effectiveness of this diagnostic tool, although many hospitals have adopted it, and economic evaluation is needed for budget holders. We aimed to measure the budget impact on Japanese BC patients after the introduction of OSNA, and to assess the certainty of the results. A budget impact analysis of OSNA on Japanese healthcare expenditure from 2015 to 2020 was performed. Local governments, society-managed health insurers, and Japan health insurance associations were the budget holders. In order to assess the cost gap between the gold standard (GS) and OSNA in intraoperative SLNB, a two-scenario comparative model structured on the clinical pathway of a BC patient group who received SLNB was applied. Clinical practice guidelines for BC were cited for cost estimation. The total estimated cost of all BC patients diagnosed by the GS was US$1,023,313,850. The budget impact of OSNA on total health expenditure was -US$24,413,153 (-US$346 per patient). Two-way sensitivity analysis between the survival rates (SR) of the GS and OSNA was performed by illustrating a cost-saving threshold: y ≅ 1.14x - 0.16 in positive patients, and y ≅ 0.96x + 0.029 in negative patients (x = SR-GS, y = SR-OSNA). Base inputs of the variables in these formulas demonstrated a cost saving. OSNA reduces healthcare costs, as confirmed by sensitivity analysis.
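The two cost-saving threshold lines reported above can be evaluated directly: for a given survival rate under the gold standard, they give the OSNA survival rate at which the two strategies break even. The 0.90 input below is illustrative, not a figure from the study.

```python
# Cost-saving thresholds from the two-way sensitivity analysis quoted above
# (x = survival rate under the gold standard, y = survival rate under OSNA)
threshold_pos = lambda x: 1.14 * x - 0.16    # positive patients
threshold_neg = lambda x: 0.96 * x + 0.029   # negative patients

x = 0.90  # illustrative gold-standard survival rate
print(round(threshold_pos(x), 3), round(threshold_neg(x), 3))
```

At a 90% gold-standard survival rate, OSNA remains cost saving in this sketch as long as its survival rate clears roughly 86.6% (positive) and 89.3% (negative).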
Fast and accurate spectral estimation for online detection of partial broken bar in induction motors
NASA Astrophysics Data System (ADS)
Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti
2018-01-01
In this paper, an online, real-time system is presented for detecting a partial broken rotor bar (BRB) in inverter-fed squirrel cage induction motors under light load conditions. With minor modifications, this system can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of the Rayleigh quotient is proposed for detecting the spectral signature of a BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared to available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components is improved by removing the high-amplitude fundamental frequency with an extended Kalman filter-based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. The complexity and cost of sensors are minimal, as only a single-phase stator current is required. The hardware implementation has been carried out on an Intel i7-based embedded target ported through Simulink Real-Time. Evaluation of the detection threshold and of fault detectability under different load conditions and fault severities is carried out with the empirical cumulative distribution function.
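One simple reading of a Rayleigh-quotient spectral estimate, evaluating e^H R e / e^H e for a steering vector e and the sample autocorrelation matrix R, amounts to a Bartlett-style power estimate at a chosen frequency. The sketch below is not the paper's estimator, and the 46 Hz sideband location and amplitudes are invented.

```python
import numpy as np

def rq_power(x, fs, f, m=256):
    """Relative signal power at frequency f via the Rayleigh quotient of the
    sample autocorrelation matrix R evaluated at the steering vector for f."""
    n = len(x) - m + 1
    snaps = np.stack([x[i:i + m] for i in range(n)])   # overlapping snapshots
    R = (snaps.T @ snaps) / n                          # sample autocorrelation matrix
    e = np.exp(2j * np.pi * f * np.arange(m) / fs)     # steering vector at f
    return float(np.real(e.conj() @ R @ e)) / m        # e^H R e / e^H e

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# Fundamental at 50 Hz plus a weak sideband at 46 Hz (slip-dependent in a real motor)
x = np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 46 * t)
print(rq_power(x, fs, 50) > rq_power(x, fs, 46) > rq_power(x, fs, 30))
```

The estimator ranks the fundamental above the weak sideband, and the sideband above an empty frequency bin, which is the kind of relative-amplitude reading the fault detector needs; a real system would first suppress the fundamental, as the paper does with its signal conditioner.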
Stephen, Dimity Maree; Barnett, Adrian Gerard
2017-12-11
The incidence of salmonellosis, a costly foodborne disease, is rising in Australia. Salmonellosis increases during periods of high temperature and rainfall, and future incidence is likely to rise under climate change. Allocating funding to preventative strategies would be best informed by accurate estimates of salmonellosis costs under climate change and by knowing which population subgroups will be most affected. We used microsimulation models to estimate the health and economic costs of salmonellosis in Central Queensland under climate change between 2016 and 2036 to inform preventative strategies. We projected the entire population of Central Queensland to 2036 by simulating births, deaths, and migration, as well as salmonellosis and two resultant conditions, reactive arthritis and postinfectious irritable bowel syndrome. We estimated salmonellosis risks and costs under baseline conditions and under climate conditions projected for Queensland under the A1FI emissions scenario, using composite projections from 6 global climate models (warmer with reduced rainfall). We estimated the resulting costs from direct medical expenditures combined with the value of lost quality-adjusted life years (QALYs) based on willingness-to-pay. Estimated costs of salmonellosis between 2016 and 2036 increased from 456.0 QALYs (95% CI: 440.3, 473.1) and AUD29,900,000 (95% CI: AUD28,900,000, AUD31,600,000), assuming no climate change, to 485.9 QALYs (95% CI: 469.6, 503.5) and AUD31,900,000 (95% CI: AUD30,800,000, AUD33,000,000) under the climate change scenario. We applied a microsimulation approach to estimate the costs of salmonellosis and its sequelae in Queensland during 2016-2036 under baseline conditions and according to climate change projections. This novel application of microsimulation models demonstrates the models' potential utility to researchers for examining complex interactions between weather and disease to estimate future costs. https://doi.org/10.1289/EHP1370.
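The core microsimulation logic, annual case counts drawn from a risk that grows under a climate scenario and then monetized per case, can be sketched as follows. Every parameter value is illustrative, not an input from the study.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(years, pop, base_risk, risk_growth, cost_per_case, qaly_per_case):
    """Toy microsimulation of annual disease burden under a risk trend."""
    total_cost = total_qaly = 0.0
    for yr in range(years):
        risk = base_risk * (1 + risk_growth) ** yr   # climate-driven risk growth
        cases = rng.binomial(pop, risk)              # stochastic annual case count
        total_cost += cases * cost_per_case
        total_qaly += cases * qaly_per_case
    return total_cost, total_qaly

# Compare a no-change baseline with a scenario where annual risk grows 2%/year
baseline = simulate(20, 100_000, 0.002, 0.00, 1500.0, 0.001)
warming = simulate(20, 100_000, 0.002, 0.02, 1500.0, 0.001)
print(baseline, warming)
```

As in the study, the climate scenario yields higher cumulative costs and QALY losses than the baseline; a full microsimulation would additionally track individuals through births, deaths, migration, and sequelae.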
Singh, Jeshika; Longworth, Louise
2017-01-01
Our study addresses the important issue of estimating treatment costs from historical data, a problem frequently faced by health technology assessment analysts. We compared four approaches used to estimate current costs when good-quality contemporary data are not available, using liver transplantation as an example. First, the total cost estimates extracted for patients from a cohort study conducted in the 1990s were inflated using a published inflation multiplier. Second, resource use estimates from the cohort study were extracted for hepatitis C patients and updated using current unit costs. Third, expert elicitation was carried out to identify changes in clinical practice over time and quantify current resource use. Fourth, routine data on resource use were obtained from National Health Service Blood and Transplant (NHSBT). The first two methods did not account for changes in clinical practice, and the first was not specific to hepatitis patients. The use of experts confirmed significant changes in clinical practice. However, quantifying resource use through experts is challenging, as clinical specialists may not have a complete overview of the clinical pathway. The NHSBT data are the most accurate reflection of the transplantation and post-transplantation phases; however, data were not available for the whole pathway of care. The best estimate of total cost, combining NHSBT data and expert elicitation, is £121,211. Observational data from routine care are potentially the most reliable reflection of current resource use. Efforts should be made to make such data readily available and accessible to researchers. Expert elicitation provided reasonable estimates.
Chen, Po-Chuan; Lee, Jenq-Chang; Wang, Jung-Der
2015-01-01
Background and aims: Life-expectancy of colon cancer patients cannot be accurately estimated due to the lack of both large datasets and long-term follow-up, which impedes accurate estimation of the lifetime cost of treating colon cancer patients. In this study, we applied a method to estimate the life-expectancy of colon cancer patients in Taiwan and calculated the lifetime costs by stage and age group. Methods: A total of 17,526 cases with pathologically verified colon adenocarcinoma between 2002 and 2009 were extracted from the Taiwan Cancer Registry database for analysis. All patients were followed up until the end of 2011. Life-expectancy (LE), expected-years-of-life-lost (EYLL) and lifetime costs were estimated using a semi-parametric survival extrapolation method, borrowing information from the life tables of vital statistics. Results: Patients with more advanced stages of colon cancer were generally younger and less co-morbid with major chronic diseases than those with stages I and II. The LE of stage I was not significantly different from that of the age- and sex-matched general population, whereas those of stages II, III, and IV colon cancer patients after diagnosis were 16.57±0.07, 13.35±0.07, and 4.05±0.05 years, respectively; the corresponding EYLL were 1.28±0.07, 5.93±0.07 and 16.42±0.06 years, significantly shorter than the general population after accounting for lead-time bias. In addition, the lifetime costs of managing stage II colon cancer patients would be US$8,416±1,939, $14,334±1,755, and $21,837±1,698, respectively, after stratification for age and sex, indicating substantial savings from early diagnosis and treatment. Conclusions: Treating colon cancer at a younger age and earlier stage saves more life-years and healthcare costs. Future studies are indicated to apply these quantitative results to the cost-effectiveness evaluation of screening programs for colon cancers. PMID:26207912
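Expected-years-of-life-lost is the gap between the life expectancy of an age- and sex-matched general population and that of the patient cohort, so patient LE plus EYLL recovers the matched-population life expectancy implied by the stage-specific figures above; a small sketch using the reported values:

```python
# Patient LE + EYLL recovers the life expectancy of the matched reference
# population implied by the stage-specific figures reported above.
def implied_general_le(le_patient, eyll):
    return round(le_patient + eyll, 2)

stage_figures = {"II": (16.57, 1.28), "III": (13.35, 5.93), "IV": (4.05, 16.42)}
implied = {s: implied_general_le(le, eyll)
           for s, (le, eyll) in stage_figures.items()}
# Later stages imply a higher matched-population LE, consistent with the
# finding that advanced-stage patients were generally younger at diagnosis.
```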
Rebuilding health systems in post-conflict countries: estimating the costs of basic services.
Newbrander, William; Yoder, Richard; Debevoise, Anne Bilby
2007-01-01
After the fall of the Taliban in 2001, the Afghan transitional government and international donors found the health system near collapse. Afghanistan had some of the worst health indicators ever recorded. To begin activities that would quickly improve the health situation, the Ministry of Health (MOH) needed both a national package of health services and reliable data on the costs of providing those services. This study details the process of determining national health priorities, creating a basic package of services, and estimating per capita and unit costs for providing those services, with an emphasis on the costing exercise. Strategies for obtaining a rapid yet reasonably accurate estimate of health service costs nationwide are discussed. In 2002 this costing exercise indicated that the basic package of services could be provided for US$4.55 per person. In 2006, the findings were validated: the four major donors who contracted with non-governmental organizations (NGOs) to provide basic health services for nearly 80% of the population found per capita costs ranging from US$4.30 to US$5.12. This study is relevant for other post-conflict countries that are re-establishing health services and seeking to develop cost-effective and equitable health systems. Copyright (c) 2007 John Wiley & Sons, Ltd.
Griffiths, Robert I; Gleeson, Michelle L; Danese, Mark D; O'Hagan, Anthony
2012-01-01
To assess the accuracy and precision of inverse probability weighted (IPW) least squares regression analysis for censored cost data. By using Surveillance, Epidemiology, and End Results-Medicare, we identified 1500 breast cancer patients who died and had complete cost information within the database. Patients were followed for up to 48 months (partitions) after diagnosis, and their actual total cost was calculated in each partition. We then simulated patterns of administrative and dropout censoring and also added censoring to patients receiving chemotherapy to simulate comparing a newer to older intervention. For each censoring simulation, we performed 1000 IPW regression analyses (bootstrap, sampling with replacement), calculated the average value of each coefficient in each partition, and summed the coefficients for each regression parameter to obtain the cumulative values from 1 to 48 months. The cumulative, 48-month, average cost was $67,796 (95% confidence interval [CI] $58,454-$78,291) with no censoring, $66,313 (95% CI $54,975-$80,074) with administrative censoring, and $66,765 (95% CI $54,510-$81,843) with administrative plus dropout censoring. In multivariate analysis, chemotherapy was associated with increased cost of $25,325 (95% CI $17,549-$32,827) compared with $28,937 (95% CI $20,510-$37,088) with administrative censoring and $29,593 ($20,564-$39,399) with administrative plus dropout censoring. Adding censoring to the chemotherapy group resulted in less accurate IPW estimates. This was ameliorated, however, by applying IPW within treatment groups. IPW is a consistent estimator of population mean costs if the weight is correctly specified. If the censoring distribution depends on some covariates, a model that accommodates this dependency must be correctly specified in IPW to obtain accurate estimates. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
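The IPW idea for censored costs can be sketched with the simple weighted estimator often attributed to Bang and Tsiatis: complete cases are weighted by the inverse Kaplan-Meier probability of remaining uncensored. This is an illustrative implementation, not the study's regression model, and the data are synthetic:

```python
import numpy as np

def ipw_mean_cost(time, died, cost):
    """Simple IPW estimator of mean cost under right censoring: complete
    cases are weighted by the inverse Kaplan-Meier survival of the
    censoring distribution, evaluated just before each follow-up time."""
    time, died, cost = (np.asarray(a, dtype=float) for a in (time, died, cost))
    n = len(time)
    order = np.argsort(time, kind="stable")
    t_sorted, d_sorted = time[order], died[order]
    K_at = {}          # left-continuous censoring survival K(t-)
    surv = 1.0
    for i in range(n):
        K_at.setdefault(t_sorted[i], surv)
        if d_sorted[i] == 0:                 # a censoring event
            surv *= (n - i - 1) / (n - i)
    K = np.array([K_at[t] for t in time])
    weights = np.where(died == 1, 1.0 / np.maximum(K, 1e-12), 0.0)
    return float(np.sum(weights * cost) / n)
```

With no censoring every weight is 1 and the estimator reduces to the sample mean, which is a useful sanity check.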
Improved management of radiotherapy departments through accurate cost data.
Kesteloot, K; Lievens, Y; van der Schueren, E
2000-06-01
Escalating health care expenses urge governments towards cost containment. More accurate data on the precise costs of health care interventions are needed. We performed an aggregate cost calculation of radiation therapy departments and treatments and discussed the different cost components. The costs of a radiotherapy department were estimated based on the accreditation norms for radiotherapy departments set forth in Belgian legislation. The major cost components of radiotherapy are the costs of buildings and facilities, equipment, medical and non-medical staff, materials and overhead. They represent around 3, 30, 50, 4 and 13% of the total costs, respectively, irrespective of department size. The average cost per patient decreases with increasing department size and optimal utilization of resources. Radiotherapy treatment costs vary in a stepwise fashion: minor variations in patient load do not affect the cost picture significantly, because of the small impact of variable costs. With larger increases in patient load, however, additional equipment and/or staff become necessary, resulting in additional semi-fixed costs and an important increase in costs. A sensitivity analysis of these two major cost inputs shows that a decrease in total costs of 12-13% can be obtained by assuming 20% less than full-time availability of personnel; that, due to evolving seniority levels, the annual increase in wage costs is estimated at more than 1%; and that changing the clinical lifetime of buildings and equipment, with the interest rate unchanged, yields a calculated 5% reduction in total costs and cost per patient. More sophisticated equipment will not have a very large impact on the cost (±4,000 BEF/patient), provided that the additional equipment is adapted to the size of the department. Replacing the recommendations we used, which are based on Belgian legislation, with the USA Blue Book recommendations shows that they are not excessive: depending on department size, costs in our model would then increase by 14-36%. We showed that cost information can be used to analyze the precise financial consequences of changes in routine clinical practice in radiotherapy. Comparing the cost data with the prevailing reimbursement may reveal inconsistencies and stimulate the development of improved financing systems.
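The stepwise, semi-fixed cost behavior described above can be sketched with a toy model in which each machine (with its staff) adds a block of cost once patient load crosses a capacity threshold; all numbers are hypothetical, chosen only to illustrate the step effect:

```python
import math

def cost_per_patient(patients, patients_per_machine=450,
                     machine_cost=2_000_000, variable_cost=200):
    """Stepwise cost model: semi-fixed costs jump at capacity thresholds,
    so small load changes barely move the unit cost, while crossing a
    threshold adds a whole machine's cost. All figures are hypothetical."""
    machines = math.ceil(patients / patients_per_machine)
    total = machines * machine_cost + patients * variable_cost
    return total / patients

# Within one machine's capacity, more patients lower the unit cost;
# just past capacity, the unit cost jumps.
low, near_capacity, over_capacity = (cost_per_patient(p) for p in (400, 440, 460))
```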
Changes in Cleanup Strategies and Long-Term Monitoring Costs for DOE FUSRAP Sites-17241
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castillo, Darina; Carpenter, Cliff; Roberts, Rebecca
2017-03-05
LM (the DOE Office of Legacy Management) is preparing for the transfer of 11 new FUSRAP (Formerly Utilized Sites Remedial Action Program) sites from the U.S. Army Corps of Engineers (USACE) within the next 10 years, many of which will have substantially greater long-term surveillance and maintenance (LTSM) requirements than the current Completed sites. LM is analyzing estimates of the level of effort required to monitor the new sites in order to make more customized and accurate predictions of the future life-cycle costs and environmental liabilities of these sites.
Variation in the costs of delivering routine immunization services in Peru.
Walker, D.; Mosqueira, N. R.; Penny, M. E.; Lanata, C. F.; Clark, A. D.; Sanderson, C. F. B.; Fox-Rushby, J. A.
2004-01-01
OBJECTIVE: Estimates of vaccination costs usually provide only point estimates at national level with no information on cost variation. In practice, however, such information is necessary for programme managers. This paper presents information on the variations in costs of delivering routine immunization services in three diverse districts of Peru: Ayacucho (a mountainous area), San Martin (a jungle area) and Lima (a coastal area). METHODS: We consider the impact of variability on predictions of cost and reflect on the likely impact on expected cost-effectiveness ratios, policy decisions and future research practice. All costs are in 2002 prices in US dollars and include the costs of providing vaccination services incurred by 19 government health facilities during the January-December 2002 financial year. Vaccine wastage rates have been estimated using stock records. FINDINGS: The cost per fully vaccinated child ranged from US$16.63 to US$24.52 in Ayacucho, US$21.79 to US$36.69 in San Martin and US$9.58 to US$20.31 in Lima. The volume of vaccines administered and wastage rates are determinants of the variation in costs of delivering routine immunization services. CONCLUSION: This study shows there is considerable variation in the costs of providing vaccines across geographical regions and different types of facilities. Information on how costs vary can be used as a basis from which to generalize to other settings and provide more accurate estimates for decision-makers who do not have disaggregated data on local costs. Future studies should include sufficiently large sample sizes and ensure that regions are carefully selected in order to maximize the interpretation of cost variation. PMID:15628205
Diagnosis-Based Risk Adjustment for Medicare Capitation Payments
Ellis, Randall P.; Pope, Gregory C.; Iezzoni, Lisa I.; Ayanian, John Z.; Bates, David W.; Burstin, Helen; Ash, Arlene S.
1996-01-01
Using 1991-92 data for a 5-percent Medicare sample, we develop, estimate, and evaluate risk-adjustment models that utilize diagnostic information from both inpatient and ambulatory claims to adjust payments for aged and disabled Medicare enrollees. Hierarchical coexisting conditions (HCC) models achieve greater explanatory power than diagnostic cost group (DCG) models by taking account of multiple coexisting medical conditions. Prospective models predict average costs of individuals with chronic conditions nearly as well as concurrent models. All models predict medical costs far more accurately than the current health maintenance organization (HMO) payment formula. PMID:10172666
Davis, Jennifer C; Verhagen, Evert; Bryan, Stirling; Liu-Ambrose, Teresa; Borland, Jeff; Buchner, David; Hendriks, Marike R C; Weiler, Richard; Morrow, James R; van Mechelen, Willem; Blair, Steven N; Pratt, Mike; Windt, Johann; al-Tunaiji, Hashel; Macri, Erin; Khan, Karim M
2014-06-01
This article describes major topics discussed from the 'Economics of Physical Inactivity Consensus Workshop' (EPIC), held in Vancouver, Canada, in April 2011. Specifically, we (1) detail existing evidence on effective physical inactivity prevention strategies; (2) introduce economic evaluation and its role in health policy decisions; (3) discuss key challenges in establishing and building health economic evaluation evidence (including accurate and reliable costs and clinical outcome measurement) and (4) provide insight into interpretation of economic evaluations in this critically important field. We found that most methodological challenges are related to (1) accurately and objectively valuing outcomes; (2) determining meaningful clinically important differences in objective measures of physical inactivity; (3) estimating investment and disinvestment costs and (4) addressing barriers to implementation. We propose that guidelines specific for economic evaluations of physical inactivity intervention studies are developed to ensure that related costs and effects are robustly, consistently and accurately measured. This will also facilitate comparisons among future economic evidence. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1985-01-01
Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
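The cost-function minimization at the heart of the estimator can be sketched for a hypothetical first-order decay system with Gaussian measurement noise, where maximum likelihood reduces to minimizing a sum-of-squares output-error cost; the system, noise level, and grid search are illustrative assumptions, not the report's aircraft example:

```python
import numpy as np

# Output-error ML for a hypothetical first-order system x' = -a*x with
# Gaussian measurement noise: maximizing the likelihood is equivalent to
# minimizing the sum-of-squares cost J(a), located here by a grid search.
rng = np.random.default_rng(1)
a_true, dt, n = 0.8, 0.1, 200
t = np.arange(n) * dt
z = np.exp(-a_true * t) + 0.01 * rng.standard_normal(n)   # noisy measurements

def cost(a):
    """Sum-of-squares output-error cost, proportional to -log likelihood."""
    return float(np.sum((z - np.exp(-a * t)) ** 2))

grid = np.linspace(0.1, 2.0, 1901)            # 0.001 spacing
a_hat = float(grid[np.argmin([cost(a) for a in grid])])
# a_hat lands close to the true decay rate of 0.8
```

Plotting `cost` over the grid gives exactly the kind of cost-function picture the paper uses to illustrate the minimization process.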
Fast Conceptual Cost Estimating of Aerospace Projects Using Historical Information
NASA Technical Reports Server (NTRS)
Butts, Glenn
2007-01-01
Accurate estimates can be created in less than a minute by applying powerful techniques and algorithms to create an Excel-based parametric cost model. In five easy steps you will learn how to normalize your company's historical cost data to the new project parameters. This paper provides a complete, easy-to-understand, step-by-step how-to guide. Such a guide does not seem to currently exist. Over 2,000 hours of research, data collection, and trial and error, and thousands of lines of Excel Visual Basic Application (VBA) code were invested in developing these methods. While VBA is not required to use this information, it increases the power and aesthetics of the model. Implementing all of the steps described, while not required, will increase the accuracy of the results.
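The normalize-then-fit idea behind such parametric models can be sketched outside Excel: historical costs are escalated to a common base year, then a power-law cost estimating relationship is fit by log-log least squares. This is not the paper's VBA model; the project data, the 3% escalation rate, and the 750 kg target are all hypothetical:

```python
import numpy as np

def normalize(cost, year, base_year=2007, rate=0.03):
    """Escalate a historical cost to base-year dollars (assumed flat rate)."""
    return cost * (1 + rate) ** (base_year - year)

history = [  # (mass_kg, cost_in_millions, year) for hypothetical past projects
    (100, 12.0, 2000), (250, 22.0, 2002), (500, 35.0, 2004), (1000, 60.0, 2006),
]
mass = np.array([m for m, _, _ in history])
cost = np.array([normalize(c, y) for _, c, y in history])

# Fit cost = a * mass^b in log space; b < 1 indicates economies of scale.
b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
estimate = float(np.exp(log_a) * 750 ** b)   # estimate for a 750 kg project
```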
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-06-01
Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation for differential equation models is a challenging problem because of the high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider discretization methods of three different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
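The trapezoidal estimating-equation idea can be sketched for the simplest linear ODE, where the discretized equation is linear in the parameter and reduces to ordinary least squares; the penalized-spline smoothing step is omitted here and lightly noisy states are used directly:

```python
import numpy as np

# Trapezoidal estimating equation for x'(t) = -theta * x(t):
#   x_{k+1} - x_k ≈ -(h/2) * theta * (x_k + x_{k+1}),
# which is linear in theta, so theta follows from least squares.
rng = np.random.default_rng(0)
theta_true, h = 0.5, 0.05
t = np.arange(0, 5, h)
x = np.exp(-theta_true * t) * (1 + 0.001 * rng.standard_normal(t.size))

dx = x[1:] - x[:-1]                    # left-hand side of the estimating eq.
g = -(h / 2.0) * (x[:-1] + x[1:])      # regressor multiplying theta
theta_hat = float(np.sum(g * dx) / np.sum(g * g))
# theta_hat recovers a value close to the true 0.5
```

Swapping `g` for the Euler regressor `-h * x[:-1]` shows the higher discretization error of the lower-order scheme, which is the trade-off the paper studies.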
Standish, Katherine; Kuan, Guillermina; Avilés, William; Balmaseda, Angel; Harris, Eva
2010-01-01
Dengue is a major public health problem in tropical and subtropical regions; however, under-reporting of cases to national surveillance systems hinders accurate knowledge of disease burden and costs. Laboratory-confirmed dengue cases identified through the Nicaraguan Pediatric Dengue Cohort Study (PDCS) were compared to those reported from other health facilities in Managua to the National Epidemiologic Surveillance (NES) program of the Nicaraguan Ministry of Health. Compared to reporting among similar pediatric populations in Managua, the PDCS identified 14 to 28 (average 21.3) times more dengue cases each year per 100,000 persons than were reported to the NES. Applying these annual expansion factors to national-level data, we estimate that the incidence of confirmed pediatric dengue throughout Nicaragua ranged from 300 to 1000 cases per 100,000 persons. We have estimated a much higher incidence of dengue than reported by the Ministry of Health. A country-specific expansion factor for dengue that allows for a more accurate estimate of incidence may aid governments and other institutions calculating disease burden, costs, resource needs for prevention and treatment, and the economic benefits of drug and vaccine development. PMID:20300515
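The expansion-factor correction is simple arithmetic; a sketch using the average factor reported above and a hypothetical surveillance figure:

```python
# Under-reporting correction: a cohort-derived expansion factor scales
# surveillance-reported incidence to an estimated true incidence.
def adjusted_incidence(reported_per_100k, expansion_factor):
    return reported_per_100k * expansion_factor

# Average expansion factor of 21.3 from the study; a reported rate of
# 20 cases per 100,000 is a hypothetical illustration, not a study figure.
adjusted = adjusted_incidence(20, 21.3)   # about 426 cases per 100,000
```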
Adaptive error covariances estimation methods for ensemble Kalman filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhen, Yicun, E-mail: zhen@math.psu.edu; Harlim, John, E-mail: jharlim@psu.edu
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method to avoid an expensive computational cost in inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to a recently proposed method by Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and Berry-Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates compared to the Berry-Sauer method on the L-96 example.
Riu, Marta; Chiarello, Pietro; Terradas, Roser; Sala, Maria; Garcia-Alzorriz, Enric; Castells, Xavier; Grau, Santiago; Cots, Francesc
2017-04-01
To estimate the incremental cost of nosocomial bacteremia according to the causative focus, classified by the antibiotic sensitivity of the microorganism. Patients admitted to Hospital del Mar in Barcelona from 2005 to 2012 were included. We analyzed the total hospital costs of patients with nosocomial bacteremia caused by microorganisms with a high prevalence and, often, with multidrug resistance. A control group was defined by selecting patients without bacteremia in the same diagnosis-related group. Our hospital has a cost accounting system (full costing) that uses activity-based criteria to estimate per-patient costs. A logistic regression was fitted to estimate the probability of developing bacteremia (propensity score) and was used for propensity-score matching adjustment. This propensity score was included in an econometric model to adjust the incremental cost of patients with bacteremia, with differentiation of the causative focus and antibiotic sensitivity. The mean incremental cost was estimated at €15,526. The lowest incremental cost corresponded to bacteremia caused by multidrug-sensitive urinary infection (€6,786) and the highest to primary or unknown sources of bacteremia caused by multidrug-resistant microorganisms (€29,186). This is one of the first analyses to include all episodes of bacteremia produced during hospital stays in a single study. The study included accurate information about the focus and antibiotic sensitivity of the causative organism and actual hospital costs. It provides information that could be useful to improve, establish, and prioritize prevention strategies for nosocomial infections.
An economic model of the benefits of professional doula labor support in Wisconsin births.
Chapple, Will; Gilliland, Amy; Li, Dongmei; Shier, Emily; Wright, Emily
2013-04-01
The purpose of this study is to estimate the immediate cost savings per delivery with in-hospital professional doula labor support in Wisconsin. This is the first study that calculates the estimated cost savings of professional doula labor support specific to Wisconsin. This analysis used results presented in and derived from the Cochrane Review of continuous labor support to estimate procedure reduction and cost savings in Wisconsin using birth statistics from 2010. The delivery outcomes included were cesarean deliveries, instrumental deliveries, and regional analgesia use. To accurately reflect published studies on labor support, only low-risk deliveries were used for intervention reduction calculations. For 2010 data, estimated savings of $28,997,754.80 could have been achieved if every low-risk birth had been attended in-hospital by a professional doula. A professional doula providing only in-hospital labor support would yield an estimated cost savings of $424.14 per delivery or $530.89 per low-risk delivery. A system-based change in how laboring mothers are supported would be an innovative step that would put Wisconsin at the forefront of cost-effective health care, reducing interventions while improving outcomes. It is recommended that Wisconsin insurers consider reimbursing for professional doula labor support. It is also recommended that pilot programs be implemented in Wisconsin that can better assess the implementation of professional doula labor support services.
Comparison of methods for estimating the cost of human immunodeficiency virus-testing interventions.
Shrestha, Ram K; Sansom, Stephanie L; Farnham, Paul G
2012-01-01
The Centers for Disease Control and Prevention (CDC), Division of HIV/AIDS Prevention, spends approximately 50% of its $325 million annual human immunodeficiency virus (HIV) prevention funds on HIV-testing services. An accurate estimate of the costs of HIV testing in various settings is essential for efficient allocation of HIV prevention resources. To assess the costs of HIV-testing interventions using different costing methods, we used the microcosting-direct measurement method to assess the costs of HIV-testing interventions in nonclinical settings, and we compared these results with those from 3 other costing methods: microcosting-staff allocation, where the labor cost was derived from the proportion of each staff person's time allocated to HIV-testing interventions; gross costing, where the New York State Medicaid payment for HIV testing was used to estimate program costs; and program budget, where the program cost was assumed to be the total funding provided by the Centers for Disease Control and Prevention. Outcome measures were total program cost, cost per person tested, and cost per person notified of a new HIV diagnosis. The median costs per person notified of a new HIV diagnosis were $12,475, $15,018, $2,697, and $20,144 based on the microcosting-direct measurement, microcosting-staff allocation, gross costing, and program budget methods, respectively. Compared with the microcosting-direct measurement method, the cost was 78% lower with gross costing, and 20% and 61% higher using the microcosting-staff allocation and program budget methods, respectively. Our analysis showed that HIV-testing program cost estimates vary widely by costing method. However, the choice of a particular costing method may depend on the research question being addressed.
Although program budget and gross-costing methods may be attractive because of their simplicity, only the microcosting-direct measurement method can identify important determinants of the program costs and provide guidance to improve efficiency.
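The percentage comparisons follow directly from the reported medians; a quick check:

```python
# Relative difference of each costing method against the
# microcosting-direct-measurement benchmark, using the study's medians.
def pct_diff(estimate, benchmark):
    return round(100 * (estimate - benchmark) / benchmark)

benchmark = 12475  # $ per person notified, microcosting-direct measurement
methods = {
    "microcosting-staff allocation": 15018,
    "gross costing": 2697,
    "program budget": 20144,
}
diffs = {name: pct_diff(v, benchmark) for name, v in methods.items()}
# matches the abstract: gross costing 78% lower, the others 20% and 61% higher
```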
The cost of clinical mastitis in the first 30 days of lactation: An economic modeling tool.
Rollin, E; Dhuyvetter, K C; Overton, M W
2015-12-01
Clinical mastitis results in considerable economic losses for dairy producers and is most commonly diagnosed in early lactation. The objective of this research was to estimate the economic impact of clinical mastitis occurring during the first 30 days of lactation for a representative US dairy. A deterministic partial budget model was created to estimate direct and indirect costs per case of clinical mastitis occurring during the first 30 days of lactation. Model inputs were selected from the available literature, or when none were available, from herd data. The average case of clinical mastitis resulted in a total economic cost of $444, including $128 in direct costs and $316 in indirect costs. Direct costs included diagnostics ($10), therapeutics ($36), non-saleable milk ($25), veterinary service ($4), labor ($21), and death loss ($32). Indirect costs included future milk production loss ($125), premature culling and replacement loss ($182), and future reproductive loss ($9). Accurate decision making regarding mastitis control relies on understanding the economic impacts of clinical mastitis, especially the longer term indirect costs that represent 71% of the total cost per case of mastitis. Future milk production loss represents 28% of total cost, and future culling and replacement loss represents 41% of the total cost of a case of clinical mastitis. In contrast to older estimates, these values represent the current dairy economic climate, including milk price ($0.461/kg), feed price ($0.279/kg DM (dry matter)), and replacement costs ($2,094/head), along with the latest published estimates on the production and culling effects of clinical mastitis. This economic model is designed to be customized for specific dairy producers and their herd characteristics to better aid them in developing mastitis control strategies. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
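The partial-budget total can be reconstructed from the itemized figures reported above:

```python
# Partial-budget reconstruction of the average case cost using the
# itemized direct and indirect components reported above (US$ per case).
direct = {"diagnostics": 10, "therapeutics": 36, "non-saleable milk": 25,
          "veterinary service": 4, "labor": 21, "death loss": 32}
indirect = {"future milk production loss": 125,
            "premature culling and replacement loss": 182,
            "future reproductive loss": 9}

total_direct = sum(direct.values())        # $128
total_indirect = sum(indirect.values())    # $316
total = total_direct + total_indirect      # $444
indirect_share = round(100 * total_indirect / total)   # 71% of the total
```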
A Tactical Database for the Low Cost Combat Direction System
1990-12-01
another object. Track is a representation of some environmental phenomena converted into accurate estimates of geographical position with respect to... by the method CALCULATE RELATIVE POSITION. In order to obtain a better similarity of methods, the methods OWNSHIP DISTANCE TO PIM, ESTIMATED TIME OF... this mechanism entails the risk that the user will lose all of the work that was done if conflicts are detected and the transaction cannot be committed
Use of Multiple Data Sources to Estimate the Economic Cost of Dengue Illness in Malaysia
Shepard, Donald S.; Undurraga, Eduardo A.; Lees, Rosemary Susan; Halasa, Yara; Lum, Lucy Chai See; Ng, Chiu Wan
2012-01-01
Dengue represents a substantial burden in many tropical and sub-tropical regions of the world. We estimated the economic burden of dengue illness in Malaysia. Information about economic burden is needed for setting health policy priorities, but accurate estimation is difficult because of incomplete data. We overcame this limitation by merging multiple data sources to refine our estimates, including an extensive literature review, discussion with experts, review of data from health and surveillance systems, and implementation of a Delphi process. Because Malaysia has a passive surveillance system, the number of dengue cases is under-reported. Using an adjusted estimate of total dengue cases, we estimated an economic burden of dengue illness of US$56 million (Malaysian Ringgit MYR196 million) per year, which is approximately US$2.03 (Malaysian Ringgit 7.14) per capita. The overall economic burden of dengue would be even higher if we included costs associated with dengue prevention and control, dengue surveillance, and long-term sequelae of dengue. PMID:23033404
Use of multiple data sources to estimate the economic cost of dengue illness in Malaysia.
Shepard, Donald S; Undurraga, Eduardo A; Lees, Rosemary Susan; Halasa, Yara; Lum, Lucy Chai See; Ng, Chiu Wan
2012-11-01
Dengue represents a substantial burden in many tropical and sub-tropical regions of the world. We estimated the economic burden of dengue illness in Malaysia. Information about economic burden is needed for setting health policy priorities, but accurate estimation is difficult because of incomplete data. We overcame this limitation by merging multiple data sources to refine our estimates, including an extensive literature review, discussion with experts, review of data from health and surveillance systems, and implementation of a Delphi process. Because Malaysia has a passive surveillance system, the number of dengue cases is under-reported. Using an adjusted estimate of total dengue cases, we estimated an economic burden of dengue illness of US$56 million (Malaysian Ringgit MYR196 million) per year, which is approximately US$2.03 (Malaysian Ringgit 7.14) per capita. The overall economic burden of dengue would be even higher if we included costs associated with dengue prevention and control, dengue surveillance, and long-term sequelae of dengue.
An In-Depth Cost Analysis for New Light-Duty Vehicle ...
Within the transportation sector, light-duty vehicles are the predominant source of greenhouse gas (GHG) emissions, principally exhaust CO2 and refrigerant leakage from vehicle air conditioners. EPA has contracted with FEV to estimate the costs of technologies that may be employed to reduce these emissions. The purpose of this work is to determine accurate costs for GHG-reducing technologies. This is of paramount importance in setting the appropriate GHG standards. EPA has contracted with FEV to perform this cost analysis through tearing down vehicles, engines and components, both with and without these technologies, and evaluating, part by part, the observed differences in size, weight, materials, machining steps, and other cost-affecting parameters.
A geostatistical approach to identify and mitigate agricultural nitrous oxide emission hotspots
USDA-ARS?s Scientific Manuscript database
Anthropogenic emissions of nitrous oxide (N2O), a trace gas with severe environmental costs, are greatest from agricultural soils amended with nitrogen (N) fertilizer. However, accurate N2O emission estimates at fine spatial scales are made difficult by their high variability, which represents a cr...
What Are the Metacognitive Costs of Young Children's Overconfidence?
ERIC Educational Resources Information Center
Destan, Nesrin; Roebers, Claudia M.
2015-01-01
Children typically hold very optimistic views of their own skills but so far, only a few studies have investigated possible correlates of the ability to predict performance accurately. Therefore, this study examined the role of individual differences in performance estimation accuracy as a global metacognitive index for different monitoring and…
A modified utilization gauge for western range grasses
Earl F. Aldon; Richard E. Francis
1984-01-01
Accurate, low cost measurements of forage utilization by livestock are essential in range management and the evaluation of grazing systems. However, because of difficulty in making these measurements, visual estimates often are substituted for measured values. To help land managers better determine use, range utilization calculating charts (Crafts 1938, NRCAB 1962)...
A time-driven activity-based costing model to improve health-care resource use in Mirebalais, Haiti.
Mandigo, Morgan; O'Neill, Kathleen; Mistry, Bipin; Mundy, Bryan; Millien, Christophe; Nazaire, Yolande; Damuse, Ruth; Pierre, Claire; Mugunga, Jean Claude; Gillies, Rowan; Lucien, Franciscka; Bertrand, Karla; Luo, Eva; Costas, Ainhoa; Greenberg, Sarah L M; Meara, John G; Kaplan, Robert
2015-04-27
In resource-limited settings, efficiency is crucial to maximise resources available for patient care. Time-driven activity-based costing (TDABC) estimates costs directly from clinical and administrative processes used in patient care, thereby providing valuable information for process improvements. TDABC is more accurate and simpler than traditional activity-based costing because it assigns resource costs to patients based on the amount of time clinical and staff resources are used in patient encounters. Other costing approaches use somewhat arbitrary allocations that provide little transparency into the actual clinical processes used to treat medical conditions. TDABC has been successfully applied in European and US health-care settings to facilitate process improvements and new reimbursement approaches, but it has not been used in resource-limited settings. We aimed to optimise TDABC for use in a resource-limited setting to provide accurate procedure and service costs, reliably predict financing needs, inform quality improvement initiatives, and maximise efficiency. A multidisciplinary team used TDABC to map clinical processes for obstetric care (vaginal and caesarean deliveries, from triage to post-partum discharge) and breast cancer care (diagnosis, chemotherapy, surgery, and support services, such as pharmacy, radiology, laboratory, and counselling) at Hôpital Universitaire de Mirebalais (HUM) in Haiti. The team estimated the direct costs of personnel, equipment, and facilities used in patient care based on the amount of time each of these resources was used. We calculated inpatient personnel costs by allocating provider costs per staffed bed, and assigned indirect costs (administration, facility maintenance and operations, education, procurement and warehouse, bloodbank, and morgue) to various subgroups of the patient population. This study was approved by the Partners in Health/Zanmi Lasante Research Committee.
The direct cost of an uncomplicated vaginal delivery at HUM was US$62 and the direct cost of a caesarean delivery was US$249. The direct costs of breast cancer care (including diagnostics, chemotherapy, and mastectomy) totalled US$1393. A mastectomy, including post-anaesthesia recovery and inpatient stay, totalled US$282 in direct costs. Indirect costs comprised 26-38% of total costs, and salaries were the largest percentage of total costs (51-72%). Accurate costing of health services is vital for financial officers and funders. TDABC showed opportunities at HUM to optimise use of resources and reduce costs-for instance, by streamlining sterilisation procedures and redistributing certain tasks to improve teamwork. TDABC has also improved budget forecasting and informed financing decisions. HUM leadership recognised its value to improve health-care delivery and expand access in low-resource settings. Boston Children's Hospital, Harvard Business School, and Partners in Health. Copyright © 2015 Elsevier Ltd. All rights reserved.
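The core TDABC calculation applied in this study is a sum over process steps of (time used) × (capacity cost rate). A minimal sketch, with entirely hypothetical rates and times (not HUM's actual figures):

```python
# Minimal TDABC sketch: the cost of a patient encounter is the sum over
# resources of (minutes used) x (capacity cost rate per minute).
# All rates and times below are illustrative assumptions, not study data.
def tdabc_cost(process_steps):
    return sum(minutes * rate_per_min for minutes, rate_per_min in process_steps)

# Hypothetical process map for one encounter: (minutes, $/minute)
steps = [(30, 0.40),   # triage nurse
         (120, 0.25),  # labor ward midwife
         (45, 0.15)]   # post-partum monitoring
print(f"${tdabc_cost(steps):.2f}")  # $48.75
```

Mapping each clinical process this way is what lets TDABC expose where time (and therefore cost) is spent, in contrast to allocation-based methods.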
Productivity losses from road traffic deaths in Turkey.
Naci, Huseyin; Baker, Timothy D
2008-03-01
The importance of road traffic injuries in Turkey is not generally appreciated, in part due to lack of knowledge of its economic burden and in part due to major underestimation in official statistics. The total years of potential life lost and potentially productive years of life lost from mortality were calculated in order to estimate the cost of productivity losses from road traffic deaths in Turkey. More years of potentially productive life are lost due to road traffic deaths than to respiratory tract illnesses or diabetes mellitus, two other serious health problems in Turkey. Road traffic deaths cost Turkey an estimated USD 2.6 billion every year in productivity losses alone, more than the World Bank estimate of the indirect costs from the 1999 Marmara earthquake (USD 1.2-2 billion), Turkey's worst earthquake since 1939 (World Bank Turkey Country Office, 1999). This study highlights the importance of accurate information in ameliorating the burden of road traffic injuries in Turkey. Turkey has great opportunities to implement cost-effective interventions to reduce the economic burden of fatal and non-fatal road traffic injuries.
Palsis, John A; Brehmer, Thomas S; Pellegrini, Vincent D; Drew, Jacob M; Sachs, Barton L
2018-02-21
In an era of mandatory bundled payments for total joint replacement, accurate analysis of the cost of procedures is essential for orthopaedic surgeons and their institutions to maintain viable practices. The purpose of this study was to compare traditional accounting and time-driven activity-based costing (TDABC) methods for estimating the total costs of total hip and knee arthroplasty care cycles. We calculated the overall costs of elective primary total hip and total knee replacement care cycles at our academic medical center using traditional and TDABC accounting methods. We compared the methods with respect to the overall costs of hip and knee replacement and the costs for each major cost category. The traditional accounting method resulted in higher cost estimates. The total cost per hip replacement was $22,076 (2014 USD) using traditional accounting and was $12,957 using TDABC. The total cost per knee replacement was $29,488 using traditional accounting and was $16,981 using TDABC. With respect to cost categories, estimates using traditional accounting were greater for hip and knee replacement, respectively, by $3,432 and $5,486 for personnel, by $3,398 and $3,664 for space and equipment, and by $2,289 and $3,357 for indirect costs. Implants and consumables were derived from the actual hospital purchase price; accordingly, both methods produced equivalent results. Substantial cost differences exist between accounting methods. The focus of TDABC only on resources used directly by the patient contrasts with the allocation of all operating costs, including all indirect costs and unused capacity, with traditional accounting. We expect that the true costs of hip and knee replacement care cycles are likely somewhere between estimates derived from traditional accounting methods and TDABC. 
TDABC offers patient-level granular cost information that better serves in the redesign of care pathways and may lead to more strategic resource-allocation decisions to optimize actual operating margins.
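The per-category differences reported in this abstract are internally consistent: they sum exactly to the overall gaps between the two accounting methods, which can be verified directly:

```python
# Consistency check on the figures above (2014 USD): the per-category
# differences between traditional accounting and TDABC should sum to the
# overall gap between the two total cost estimates.
hip_gap = 22076 - 12957    # total hip replacement gap
knee_gap = 29488 - 16981   # total knee replacement gap

assert 3432 + 3398 + 2289 == hip_gap    # personnel + space/equipment + indirect
assert 5486 + 3664 + 3357 == knee_gap
print(hip_gap, knee_gap)  # 9119 12507
```

Implants and consumables contribute nothing to either gap because both methods use the actual hospital purchase price.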
Valuing Insect Pollination Services with Cost of Replacement
Allsopp, Mike H.; de Lange, Willem J.; Veldtman, Ruan
2008-01-01
Value estimates of ecosystem goods and services are useful to justify the allocation of resources towards conservation, but inconclusive estimates risk unsustainable resource allocations. Here we present replacement costs as a more accurate value estimate of insect pollination as an ecosystem service, although this method could also be applied to other services. The importance of insect pollination to agriculture is unequivocal. However, whether this service is largely provided by wild pollinators (genuine ecosystem service) or managed pollinators (commercial service), and which of these requires immediate action amidst reports of pollinator decline, remains contested. If crop pollination is used to argue for biodiversity conservation, clear distinction should be made between values of managed- and wild pollination services. Current methods either under-estimate or over-estimate the pollination service value, and make use of criticised general insect and managed pollinator dependence factors. We apply the theoretical concept of ascribing a value to a service by calculating the cost to replace it, as a novel way of valuing wild and managed pollination services. Adjusted insect and managed pollinator dependence factors were used to estimate the cost of replacing insect- and managed pollination services for the Western Cape deciduous fruit industry of South Africa. Using pollen dusting and hand pollination as suitable replacements, we value pollination services significantly higher than current market prices for commercial pollination, although lower than traditional proportional estimates. The complexity associated with inclusive value estimation of pollination services required several defendable assumptions, but made estimates more inclusive than previous attempts. Consequently this study provides the basis for continued improvement in context specific pollination service value estimates. PMID:18781196
A probabilistic method for the estimation of residual risk in donated blood.
Bish, Ebru K; Ragavan, Prasanna K; Bish, Douglas R; Slonim, Anthony D; Stramer, Susan L
2014-10-01
The residual risk (RR) of transfusion-transmitted infections, including the human immunodeficiency virus and hepatitis B and C viruses, is typically estimated by the incidence × window-period model, which relies on the following restrictive assumptions: Each screening test, with probability 1, (1) detects an infected unit outside of the test's window period; (2) fails to detect an infected unit within the window period; and (3) correctly identifies an infection-free unit. These assumptions need not hold in practice due to random or systemic errors and individual variations in the window period. We develop a probability model that accurately estimates the RR by relaxing these assumptions, and quantify their impact using a published cost-effectiveness study and also within an optimization model. These assumptions lead to inaccurate estimates in cost-effectiveness studies and to sub-optimal solutions in the optimization model. The testing solution generated by the optimization model translates into fewer expected infections without an increase in the testing cost. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
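The difference between the classical incidence-times-window-period estimate and a model that relaxes the perfect-test assumptions can be illustrated with a toy calculation (all numbers are assumed for illustration and are not taken from the paper):

```python
# Toy residual-risk comparison. Classical model: RR = incidence x window
# period. Relaxed model: additionally allow the test to miss an infected
# unit even outside the window (sensitivity < 1). All values are assumed.
incidence = 2.0e-5              # new infections per donor-year (assumed)
window_years = 10 / 365         # window period of the screening test (assumed)
prevalence_detectable = 1.0e-4  # detectable infected units per donation (assumed)
miss_rate = 0.005               # 1 - sensitivity outside the window (assumed)

rr_classical = incidence * window_years
rr_relaxed = rr_classical + prevalence_detectable * miss_rate
print(rr_classical, rr_relaxed)
```

Even a small miss rate outside the window can dominate the classical term, which is why the restrictive assumptions matter for cost-effectiveness results.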
Building of an Experimental Cline With Arabidopsis thaliana to Estimate Herbicide Fitness Cost
Roux, Fabrice; Giancola, Sandra; Durand, Stéphanie; Reboud, Xavier
2006-01-01
Various management strategies aim at maintaining pesticide resistance frequency under a threshold value by taking advantage of the benefit of the fitness penalty (the cost) expressed by the resistance allele outside the treated area or during the pesticide selection “off years.” One method to estimate a fitness cost is to analyze the resistance allele frequency along transects across treated and untreated areas. On the basis of the shape of the cline, this method gives the relative contributions of both gene flow and the fitness difference between genotypes in the treated and untreated areas. Taking advantage of the properties of such migration–selection balance, an artificial cline was built up to optimize the conditions where the fitness cost of two herbicide-resistant mutants (acetolactate synthase and auxin-induced target genes) in the model species Arabidopsis thaliana could be more accurately measured. The analysis of the microevolutionary dynamics in these experimental populations indicated mean fitness costs of ∼15 and 92% for the csr1-1 and axr2-1 resistances, respectively. In addition, negative frequency dependence for the fitness cost was also detected for the axr2-1 resistance. The advantages and disadvantages of the cline approach are discussed in regard to other methods of cost estimation. This comparison highlights the powerful ability of an experimental cline to measure low fitness costs and detect sensitivity to frequency-dependent variation. PMID:16582450
NASA Astrophysics Data System (ADS)
Bellili, Faouzi; Amor, Souheib Ben; Affes, Sofiène; Ghrayeb, Ali
2017-12-01
This paper addresses the problem of DOA estimation using uniform linear array (ULA) antenna configurations. We propose a new low-cost method of multiple DOA estimation from very short data snapshots. The new estimator is based on the annihilating filter (AF) technique. It is non-data-aided (NDA) and therefore does not impinge on the overall throughput of the system. The noise components are assumed temporally and spatially white across the receiving antenna elements. The transmitted signals are also assumed temporally and spatially white across the transmitting sources. The new method is compared in performance to the Cramér-Rao lower bound (CRLB), the root-MUSIC algorithm, the deterministic maximum likelihood estimator, and another Bayesian method developed precisely for the single-snapshot case. Simulations show that the new estimator performs well over a wide SNR range. Prominently, the main advantage of the new AF-based method is that it succeeds in accurately estimating the DOAs from short data snapshots, and even from a single snapshot, outperforming by far the state-of-the-art techniques in both DOA estimation accuracy and computational cost.
Alatise, Mary B; Hancke, Gerhard P
2017-09-21
Using a single sensor to determine the pose of a device cannot give accurate results. This paper presents a fusion of a six-degrees-of-freedom (6-DoF) inertial sensor, comprising a 3-axis accelerometer and a 3-axis gyroscope, with vision to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular object detection algorithm combining speeded-up robust features (SURF) and random sample consensus (RANSAC) was used to recognize a sample object in several captured images. In contrast to conventional methods that depend on point tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data that contains outliers. With SURF and RANSAC, improved accuracy is expected because of their ability to find interest points (features) under different viewing conditions using a Hessian matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All sensors were mounted on the mobile robot to obtain accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable, and robust, and can be considered for practical applications. The performance of the experiments was verified against ground-truth data using root mean square errors (RMSEs).
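The EKF fusion described here can be illustrated with a toy one-dimensional Kalman filter that predicts from an accelerometer and corrects with an absolute position fix (a sketch, not the paper's implementation; all noise values and measurements below are assumed):

```python
import numpy as np

# Toy 1-D Kalman filter in the spirit of the abstract's EKF fusion:
# predict (position, velocity) from an accelerometer reading, then
# correct with an absolute position measurement (standing in for the
# vision-based fix). All numbers are illustrative assumptions.
dt = 0.1
F = np.array([[1, dt], [0, 1]])   # state transition for (pos, vel)
B = np.array([0.5 * dt**2, dt])   # control input from acceleration
H = np.array([[1.0, 0.0]])        # we observe position only
Q = np.eye(2) * 1e-3              # process noise covariance (assumed)
R = np.array([[0.05]])            # measurement noise covariance (assumed)

x = np.zeros(2)                   # initial state estimate
P = np.eye(2)                     # initial covariance
for accel, z in [(1.0, 0.005), (1.0, 0.02), (0.0, 0.035)]:
    # predict step: integrate the accelerometer
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    # update step: correct with the position measurement z
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
```

The same predict/correct structure generalises to the full 6-DoF case, where the state includes orientation and the EKF linearises the nonlinear measurement model at each step.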
Hancke, Gerhard P.
2017-01-01
Using a single sensor to determine the pose of a device cannot give accurate results. This paper presents a fusion of a six-degrees-of-freedom (6-DoF) inertial sensor, comprising a 3-axis accelerometer and a 3-axis gyroscope, with vision to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular object detection algorithm combining speeded-up robust features (SURF) and random sample consensus (RANSAC) was used to recognize a sample object in several captured images. In contrast to conventional methods that depend on point tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data that contains outliers. With SURF and RANSAC, improved accuracy is expected because of their ability to find interest points (features) under different viewing conditions using a Hessian matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All sensors were mounted on the mobile robot to obtain accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable, and robust, and can be considered for practical applications. The performance of the experiments was verified against ground-truth data using root mean square errors (RMSEs). PMID:28934102
Helb, Danica A.; Tetteh, Kevin K. A.; Felgner, Philip L.; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R.; Beeson, James G.; Tappero, Jordan; Smith, David L.; Crompton, Peter D.; Rosenthal, Philip J.; Dorsey, Grant; Drakeley, Christopher J.; Greenhouse, Bryan
2015-01-01
Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual’s recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86–0.93), whereas responses to six antigens accurately estimated an individual’s malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs. PMID:26216993
Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan
2015-08-11
Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.
[Costing nuclear medicine diagnostic procedures].
Markou, Pavlos
2005-01-01
To the Editor: Referring to a recent special report on the cost analysis of twenty-nine nuclear medicine procedures, I would like to clarify some basic aspects of determining the costs of nuclear medicine procedures with various costing methodologies. The Activity-Based Costing (ABC) method is a new approach to costing imaging services that can provide the most accurate cost data, but it is difficult to apply to nuclear medicine diagnostic procedures, because ABC requires determining and analyzing all direct and indirect costs of each procedure according to all of its activities. Traditional costing methods, such as estimating incomes and expenses per procedure or fixed and variable costs per procedure (widely used in break-even point analysis), and the ratio-of-costs-to-charges method per procedure, may be performed easily in nuclear medicine departments to evaluate the variability of and differences between costs and reimbursement charges.
The costs of introducing new technologies into space systems
NASA Technical Reports Server (NTRS)
Dodson, E. N.; Partma, H.; Ruhland, W.
1992-01-01
A review is conducted of cost-research studies intended to provide guidelines for cost estimates of integrating new technologies into existing satellite systems. Quantitative methods are described for determining the technological state of the art so that proposed programs can be evaluated accurately in terms of their contribution to technological development. The R&D costs associated with the proposed programs are then assessed, with attention given to the technological advances. Any reductions in the costs of production, operations, and support afforded by the advanced technologies are also incorporated quantitatively. The proposed model is employed in relation to a satellite sizing and cost study in which a tradeoff between increased R&D costs and reduced production costs is examined. The technology/cost model provides a consistent yardstick for assessing the true relative economic impact of introducing novel techniques and technologies.
Lightweight, Miniature Inertial Measurement System
NASA Technical Reports Server (NTRS)
Tang, Liang; Crassidis, Agamemnon
2012-01-01
A miniature, lightweight, and highly accurate inertial navigation system (INS) is coupled with GPS receivers to provide stable and highly accurate positioning, attitude, and inertial measurements while being subjected to highly dynamic maneuvers. In contrast to conventional methods that use extensive, ground-based, real-time tracking and control units that are expensive, large, and require excessive amounts of power to operate, this method focuses on the development of an estimator that makes use of a low-cost, miniature accelerometer array fused with traditional measurement systems and GPS. Through the use of a position-tracking estimation algorithm, onboard accelerometer readings are numerically integrated and transformed using attitude information to obtain an estimate of position in the inertial frame. Position and velocity estimates are subject to drift due to accelerometer sensor bias and high vibration over time, and so require integration with GPS information using a Kalman filter to provide highly accurate and reliable inertial tracking estimates. The method implemented here uses the local gravitational field vector. Upon determining the location of the local gravitational field vector relative to two consecutive sensors, the orientation of the device may be estimated and the attitude determined. Improved attitude estimates further enhance the inertial position estimates. The device can be powered either by batteries or by the power source onboard its target platforms. A DB9 port provides the I/O to external systems, and the device is designed to be mounted in a waterproof case for all-weather conditions.
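The gravity-vector attitude step described in this abstract corresponds to the standard accelerometer tilt equations. The sketch below assumes a quasi-static sensor measuring specific force (note that yaw is unobservable from gravity alone, and the two-sensor differencing used by the device is not modeled here):

```python
import math

# Standard tilt-from-gravity sketch: with the accelerometer quasi-static,
# the direction of the measured specific force gives roll and pitch.
# (ax, ay, az) are body-frame accelerometer readings in m/s^2.
def roll_pitch_from_accel(ax, ay, az):
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

# A level sensor reads gravity on the z-axis only: roll = pitch = 0.
r, p = roll_pitch_from_accel(0.0, 0.0, 9.81)
```

These attitude angles are what allow the integrated accelerometer outputs to be rotated into the inertial frame before the Kalman filter fuses them with GPS.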
Sá, Luísa; Costa-Santos, Cristina; Teixeira, Andreia; Couto, Luciana; Costa-Pereira, Altamiro; Hespanhol, Alberto; Santos, Paulo; Martins, Carlos
2015-01-01
Background Physicians’ ability to make cost-effective decisions has been shown to be affected by their knowledge of health care costs. This study assessed whether Portuguese family physicians are aware of the costs of the most frequently prescribed diagnostic and laboratory tests. Methods A cross-sectional study was conducted in a representative sample of Portuguese family physicians, using computer-assisted telephone interviews for data collection. A Likert scale was used to assess physicians’ level of agreement with four statements about health care costs. Family physicians were also asked to estimate the costs of diagnostic and laboratory tests. Each physician’s cost estimate was compared with the true cost and the absolute error was calculated. Results One-quarter (24%; 95% confidence interval [CI]: 23%–25%) of all cost estimates were accurate to within 25% of the true cost, with 55% (95% CI: 53%–56%) overestimating and 21% (95% CI: 20%–22%) underestimating the true cost. The majority (76%) of family physicians thought they did not have, or were uncertain whether they had, adequate knowledge of diagnostic and laboratory test costs, and only 7% reported receiving adequate education. The majority of family physicians (82%) said that they had adequate access to information about diagnostic and laboratory test costs. Thirty-three percent thought that costs did not influence their decision to order tests, while 27% were uncertain. Conclusions Portuguese family physicians have limited awareness of diagnostic and laboratory test costs, and our results demonstrate a need for improved education in this area. Further research should focus on identifying whether interventions in cost knowledge actually change ordering behavior, on identifying optimal methods to disseminate cost information, and on improving the cost-effectiveness of care. PMID:26356625
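The accuracy rule applied in the Results (accurate if within 25% of the true cost, otherwise an over- or under-estimate) can be written as a small classifier; the values below are illustrative:

```python
# The within-25% accuracy rule used in the study above: an estimate is
# "accurate" if its absolute error is at most 25% of the true cost,
# otherwise it is classified as an over- or under-estimate.
def classify(estimate, true_cost, tolerance=0.25):
    if abs(estimate - true_cost) <= tolerance * true_cost:
        return "accurate"
    return "overestimate" if estimate > true_cost else "underestimate"

print(classify(12.0, 10.0))  # accurate (within 25% of 10)
print(classify(20.0, 10.0))  # overestimate
print(classify(5.0, 10.0))   # underestimate
```

Applying this rule per test and averaging across physicians yields the 24%/55%/21% breakdown reported.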
Inferring invasive species abundance using removal data from management actions
Davis, Amy J.; Hooten, Mevin B.; Miller, Ryan S.; Farnsworth, Matthew L.; Lewis, Jesse S.; Moxcey, Michael; Pepin, Kim M.
2016-01-01
Evaluation of the progress of management programs for invasive species is crucial for demonstrating impacts to stakeholders and strategic planning of resource allocation. Estimates of abundance before and after management activities can serve as a useful metric for evaluating population management programs. However, many methods of estimating population size are too labor-intensive and costly to implement, posing restrictive levels of burden on operational programs. Removal models are a reliable method for estimating abundance before and after management using data from the removal activities exclusively, thus requiring no work in addition to management. We developed a Bayesian hierarchical model to estimate abundance from removal data accounting for varying levels of effort, and used simulations to assess the conditions under which reliable population estimates are obtained. We applied this model to estimate site-specific abundance of an invasive species, feral swine (Sus scrofa), using removal data from aerial gunning in 59 site/time-frame combinations (480–19,600 acres) throughout Oklahoma and Texas, USA. Simulations showed that abundance estimates were generally accurate when effective removal rates (removal rate accounting for total effort) were above 0.40. However, when abundances were small (<50), the effective removal rate needed to accurately estimate abundance was considerably higher (0.70). Based on our post-validation method, 78% of our site/time-frame estimates were accurate. To use this modeling framework it is important to have multiple removals (more than three) within a time frame during which demographic changes are minimized (i.e., a closed population; ≤3 months for feral swine). Our results show that the probability of accurately estimating abundance from this model improves with increased sampling effort (8+ flight hours across the 3-month window is best) and increased removal rate.
Based on the inverse relationship between inaccurate abundance estimates and inaccurate removal rates, we suggest auxiliary information that could be collected and included in the model as covariates (e.g., habitat effects, differences between pilots) to improve the accuracy of removal rates and hence of abundance estimates.
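The removal-model idea above can be illustrated with a much simpler constant-probability (Zippin-type) estimator fit by maximum likelihood; this toy sketch ignores the varying-effort and hierarchical structure of the authors' Bayesian model, and the catch counts below are invented.

```python
import math

def removal_loglik(N, p, catches):
    # Successive-removal likelihood: each pass is Binomial(remaining, p).
    remaining, ll = N, 0.0
    for c in catches:
        if c > remaining:
            return -math.inf
        ll += (math.lgamma(remaining + 1) - math.lgamma(c + 1)
               - math.lgamma(remaining - c + 1)
               + c * math.log(p) + (remaining - c) * math.log(1.0 - p))
        remaining -= c
    return ll

def estimate_abundance(catches, N_max=500):
    # Grid-search MLE over abundance N and per-pass removal probability p.
    total = sum(catches)
    best_N, best_p, best_ll = total, 0.5, -math.inf
    for N in range(total, N_max + 1):
        for p in (i / 100.0 for i in range(1, 100)):
            ll = removal_loglik(N, p, catches)
            if ll > best_ll:
                best_N, best_p, best_ll = N, p, ll
    return best_N, best_p

catches = [50, 25, 12]          # hypothetical counts from three passes
N_hat, p_hat = estimate_abundance(catches)
```

With these invented counts the fitted abundance lands near 100 with a per-pass removal probability near 0.5, consistent with the paper's observation that high effective removal rates make the estimate reliable.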
2011-01-01
Background The few studies that have attempted to estimate the future cost of caring for people with dementia in Australia are typically based on total prevalence and the cost per patient over the average duration of illness. However, costs associated with dementia care also vary according to the length of the disease, severity of symptoms and type of care provided. This study aimed to determine more accurately the future costs of dementia management by taking these factors into consideration. Methods The current study estimated the prevalence of dementia in Australia (2010-2040). Data from a variety of sources were recalculated to distribute this prevalence according to the location (home/institution), care requirements (informal/formal), and dementia severity. The cost of care was attributed to the redistributed prevalences and used to predict the future costs of dementia. Results Our computer modeling indicates that the ratio between the prevalence of people with mild/moderate/severe dementia will change over the three decades from 2010 to 2040 from 50/30/20 to 44/32/24. Taking into account the severity of symptoms, location of care and cost of care per hour, the current study estimates that the informal cost of care in 2010 is AU$3.2 billion and formal care at AU$5.0 billion per annum. By 2040 informal care is estimated to cost AU$11.6 billion and formal care AU$16.7 billion per annum. Interventions to slow disease progression will result in relative savings of 5% (AU$1.5 billion) per annum and interventions to delay disease onset will result in relative savings of 14% (AU$4 billion) of the cost per annum. With no intervention, the projected combined annual cost of formal and informal care for a person with dementia in 2040 will be around AU$38,000 (in 2010 dollars). An intervention to delay progression by 2 years will see this reduced to AU$35,000.
Conclusions These findings highlight the need to account for more than total prevalence when estimating the costs of dementia care. While the absolute values of cost of care estimates are subject to the validity and reliability of currently available data, dynamic systems modeling allows for future trends to be estimated. PMID:21988908
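The severity-weighted arithmetic behind such projections can be sketched as follows; the 2010 and 2040 severity splits are those quoted above, while the per-person unit costs are invented for illustration.

```python
def annual_cost(prevalence, severity_shares, cost_per_person):
    # Total cost = prevalence, split by severity, times per-person cost.
    assert abs(sum(severity_shares.values()) - 1.0) < 1e-9
    return sum(prevalence * share * cost_per_person[sev]
               for sev, share in severity_shares.items())

# Severity splits quoted in the abstract; unit costs are assumed AU$/year.
shares_2010 = {"mild": 0.50, "moderate": 0.30, "severe": 0.20}
shares_2040 = {"mild": 0.44, "moderate": 0.32, "severe": 0.24}
unit_cost = {"mild": 20_000.0, "moderate": 40_000.0, "severe": 60_000.0}

# Per-person average annual cost (prevalence normalized to 1):
cost_2010 = annual_cost(1.0, shares_2010, unit_cost)
cost_2040 = annual_cost(1.0, shares_2040, unit_cost)
```

Note that the shift toward moderate and severe cases raises the per-person average cost even before prevalence growth is applied.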
A LiDAR data-based camera self-calibration method
NASA Astrophysics Data System (ADS)
Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun
2018-07-01
To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. Parameters were estimated using particle swarm optimization (PSO) to find the optimal solution of a multivariate cost function. The main procedure of camera intrinsic parameter estimation has three parts: extraction and fine matching of interest points in the images; establishment of a cost function based on the Kruppa equations; and PSO optimization using LiDAR data as the initialization input. To improve the precision of matching pairs, a new method combining the maximal information coefficient (MIC) and maximum asymmetry score (MAS) was used to remove false matching pairs based on the RANSAC algorithm. Highly precise matching pairs were used to calculate the fundamental matrix so that the new cost function (deduced from the Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function involving four intrinsic parameters was minimized by PSO to obtain the optimal solution. To overcome the issue of the optimization being pushed to a local optimum, LiDAR data were used to determine the scope of initialization, based on the solution to the P4P problem for camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was smaller than 1.0 cm. Experimental and simulated results demonstrated that the proposed method was highly accurate and robust.
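The optimization step can be sketched with a generic global-best PSO minimizer. The Kruppa-equation cost itself is involved, so a simple quadratic stand-in over a hypothetical four-parameter intrinsic vector (focal lengths and principal point, with assumed bounds) is minimized here; only the swarm mechanics are the point of the sketch.

```python
import random

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Standard global-best particle swarm minimization.
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Quadratic stand-in for the Kruppa cost over (fx, fy, cx, cy), in pixels.
true_params = [800.0, 810.0, 320.0, 240.0]   # hypothetical intrinsics

def quad_cost(p):
    return sum((a - b) ** 2 for a, b in zip(p, true_params))

bounds = [(500, 1200), (500, 1200), (200, 500), (100, 400)]
best, val = pso(quad_cost, bounds)
```

On this convex stand-in the swarm converges close to the true parameter vector; for the real cost, the paper's LiDAR-informed initialization bounds play the role of `bounds` here.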
Cost Model Comparison: A Study of Internally and Commercially Developed Cost Models in Use by NASA
NASA Technical Reports Server (NTRS)
Gupta, Garima
2011-01-01
NASA makes use of numerous cost models to accurately estimate the cost of various components of a mission - hardware, software, mission/ground operations - during the different stages of a mission's lifecycle. The purpose of this project was to survey these models and determine in which respects they are similar and in which they differ. The initial survey covered the cost drivers for each model, the form of each model (linear/exponential/other CER, range/point output, capability for risk/sensitivity analysis), and the types of missions and lifecycle phases for which each model can estimate cost. The models considered included both those developed internally by NASA and those developed commercially: GSECT, NAFCOM, SCAT, QuickCost, PRICE, and SEER. Once the initial survey was completed, the next step was to compare the cost models' capabilities in terms of Work Breakdown Structure (WBS) elements. This final comparison was then portrayed visually with Venn diagrams. All of the materials produced in the course of this study were then posted on the Ground Segment Team (GST) Wiki.
NASA Astrophysics Data System (ADS)
Schwabe, O.; Shehab, E.; Erkoyuncu, J.
2015-08-01
The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates based on a literature review, an evaluation of publicly funded projects such as those in the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of theoretically grounded guidance for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem.
An innovative uncertainty quantification framework consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis for future work in this field.
USDA-ARS?s Scientific Manuscript database
Irrigation scheduling is one of the most cost effective means of conserving limited groundwater resources, particularly in semi-arid regions. Effective precipitation, or the net amount of water from precipitation that can be used in field water balance equations, is essential to accurate and effecti...
USDA-ARS?s Scientific Manuscript database
To improve climate change impact estimates, multi-model ensembles (MMEs) have been suggested. MMEs enable quantifying model uncertainty, and their medians are more accurate than that of any single model when compared with observations. However, multi-model ensembles are costly to execute, so model i...
In the Right Ballpark? Assessing the Accuracy of Net Price Calculators
ERIC Educational Resources Information Center
Anthony, Aaron M.; Page, Lindsay C.; Seldin, Abigail
2016-01-01
Large differences often exist between a college's sticker price and net price after accounting for financial aid. Net price calculators (NPCs) were designed to help students more accurately estimate their actual costs to attend a given college. This study assesses the accuracy of information provided by net price calculators. Specifically, we…
Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir
2018-04-10
We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol -1 .
Improved rapid magnitude estimation for a community-based, low-cost MEMS accelerometer network
Chung, Angela I.; Cochran, Elizabeth S.; Kaiser, Anna E.; Christensen, Carl M.; Yildirim, Battalgazi; Lawrence, Jesse F.
2015-01-01
Immediately following the Mw 7.2 Darfield, New Zealand, earthquake, over 180 Quake‐Catcher Network (QCN) low‐cost micro‐electro‐mechanical systems accelerometers were deployed in the Canterbury region. Using data recorded by this dense network from 2010 to 2013, we significantly improved the QCN rapid magnitude estimation relationship. The previous scaling relationship (Lawrence et al., 2014) did not accurately estimate the magnitudes of nearby (<35 km) events. The new scaling relationship estimates earthquake magnitudes within 1 magnitude unit of the GNS Science GeoNet earthquake catalog magnitudes for 99% of the events tested, within 0.5 magnitude units for 90% of the events, and within 0.25 magnitude units for 57% of the events. These magnitudes are reliably estimated within 3 s of the initial trigger recorded on at least seven stations. In this report, we present the methods used to calculate a new scaling relationship and demonstrate the accuracy of the revised magnitude estimates using a program that is able to retrospectively estimate event magnitudes using archived data.
Costing interventions in primary care.
Kernick, D
2000-02-01
Against a background of increasing demands on limited resources, studies that relate benefits of health interventions to the resources they consume will be an important part of any decision-making process in primary care, and an accurate assessment of costs will be an important part of any economic evaluation. Although there is no such thing as a gold standard cost estimate, there are a number of basic costing concepts that underlie any costing study. How costs are derived and combined will depend on the assumptions that have been made in their derivation. It is important to be clear what assumptions have been made and why in order to maintain consistency across comparative studies and prevent inappropriate conclusions being drawn. This paper outlines some costing concepts and principles to enable primary care practitioners and researchers to have a basic understanding of costing exercises and their pitfalls.
Belger, Mark; Haro, Josep Maria; Reed, Catherine; Happich, Michael; Kahle-Wrobleski, Kristin; Argimon, Josep Maria; Bruno, Giuseppe; Dodel, Richard; Jones, Roy W; Vellas, Bruno; Wimo, Anders
2016-07-18
Missing data are a common problem in prospective studies with a long follow-up, and the volume, pattern and reasons for missing data may be relevant when estimating the cost of illness. We aimed to evaluate the effects of different methods for dealing with missing longitudinal cost data and for costing caregiver time on total societal costs in Alzheimer's disease (AD). GERAS is an 18-month observational study of costs associated with AD. Total societal costs included patient health and social care costs, and caregiver health and informal care costs. Missing data were classified as missing completely at random (MCAR), missing at random (MAR) or missing not at random (MNAR). Simulation datasets were generated from baseline data with 10-40 % missing total cost data for each missing data mechanism. Datasets were also simulated to reflect the missing cost data pattern at 18 months using MAR and MNAR assumptions. Naïve and multiple imputation (MI) methods were applied to each dataset and results compared with complete GERAS 18-month cost data. Opportunity and replacement cost approaches were used for caregiver time, which was costed with and without supervision included and with time for working caregivers only being costed. Total costs were available for 99.4 % of 1497 patients at baseline. For MCAR datasets, naïve methods performed as well as MI methods. For MAR, MI methods performed better than naïve methods. All imputation approaches were poor for MNAR data. For all approaches, percentage bias increased with missing data volume. For datasets reflecting 18-month patterns, a combination of imputation methods provided more accurate cost estimates (e.g. bias: -1 % vs -6 % for single MI method), although different approaches to costing caregiver time had a greater impact on estimated costs (29-43 % increase over base case estimate). 
Methods used to impute missing cost data in AD will impact on accuracy of cost estimates although varying approaches to costing informal caregiver time has the greatest impact on total costs. Tailoring imputation methods to the reason for missing data will further our understanding of the best analytical approach for studies involving cost outcomes.
Productivity costs in economic evaluations: past, present, future.
Krol, Marieke; Brouwer, Werner; Rutten, Frans
2013-07-01
Productivity costs occur when the productivity of individuals is affected by illness, treatment, disability or premature death. The objective of this paper was to review past and current developments related to the inclusion, identification, measurement and valuation of productivity costs in economic evaluations. The main debates in the theory and practice of economic evaluations of health technologies described in this review have centred on the questions of whether and how to include productivity costs, especially productivity costs related to paid work. The past few decades have seen important progress in this area. There are important sources of productivity costs other than absenteeism (e.g. presenteeism and multiplier effects in co-workers), but their exact influence on costs remains unclear. Different measurement instruments have been developed over the years, but which instrument provides the most accurate estimates has not been established. Several valuation approaches have been proposed. While empirical research suggests that productivity costs are best included in the cost side of the cost-effectiveness ratio, the jury is still out regarding whether the human capital approach or the friction cost approach is the most appropriate valuation method to do so. Despite the progress and the substantial amount of scientific research, a consensus has not been reached on either the inclusion of productivity costs in economic evaluations or the methods used to produce productivity cost estimates. Such a lack of consensus has likely contributed to ignoring productivity costs in actual economic evaluations and is reflected in variations in national health economic guidelines. Further research is needed to lessen the controversy regarding the estimation of health-related productivity costs. More standardization would increase the comparability and credibility of economic evaluations taking a societal perspective.
Brennan, Aline; Jackson, Arthur; Horgan, Mary; Bergin, Colm J; Browne, John P
2015-04-03
It is anticipated that demands on ambulatory HIV services will increase in coming years as a consequence of the increased life expectancy of HIV patients on highly active anti-retroviral therapy (HAART). Accurate cost data are needed to enable evidence-based policy decisions to be made about new models of service delivery, new technologies and new medications. A micro-costing study was carried out in an HIV outpatient clinic in a single regional centre in the south of Ireland. The costs of individual appointment types were estimated based on staff grade and time. Hospital resources used by HIV patients who attended the ambulatory care service in 2012 were identified and extracted from existing hospital systems. Associations between patient characteristics and costs per patient month, in 2012 euros, were examined using univariate and multivariate analyses. The average cost of providing ambulatory HIV care was found to be €973 (95% confidence interval €938-€1008) per patient month in 2012. Sensitivity analysis, varying the base-case staff time estimates by 20% and diagnostic testing costs by 60%, estimated the average cost to vary from a low of €927 per patient month to a high of €1019 per patient month. The vast majority of costs were due to the cost of HAART. Women were found to have significantly higher HAART costs per patient month while patients over 50 years of age had significantly lower HAART costs using multivariate analysis. This study provides the estimated cost of ambulatory care in a regional HIV centre in Ireland. These data are valuable for planning services at a local level, and the identification of patient factors, such as age and gender, associated with resource use is of interest both nationally and internationally for the long-term planning of HIV care provision.
The economic burden of skin disease in the United States.
Dehkharghani, Seena; Bible, Jason; Chen, John G; Feldman, Steven R; Fleischer, Alan B
2003-04-01
Skin diseases and their complications are a significant burden on the nation, both in terms of acute and chronic morbidities and their related expenditures for care. Because accurately calculating the cost of skin disease has proven difficult in the past, we present here multiple comparative techniques allowing a more expanded approach to estimating the overall economic burden. Our aims were to (1) determine the economic burden of primary diseases falling within the realm of skin disease, as defined by modern clinical disease classification schemes and (2) identify the specific contribution of each component of costs to the overall expense. Costs were taken as the sum of several factors, divided into direct and indirect health care costs. The direct costs included inpatient hospital costs, ambulatory visit costs (further divided into physician's office visits, outpatient department visits, and emergency department visits), prescription drug costs, and self-care/over-the-counter drug costs. Indirect costs were calculated as the outlay of days of work lost because of skin diseases. The economic burden of skin disease in the United States is large, estimated at approximately $35.9 billion for 1997, including $19.8 billion (54%) in ambulatory care costs; $7.2 billion (20.2%) in hospital inpatient charges; $3.0 billion (8.2%) in prescription drug costs; $4.3 billion (11.7%) in over-the-counter preparations; and $1.6 billion (6.0%) in indirect costs attributable to lost workdays. Our determination of the economic burden of skin care in the United States surpasses past estimates several-fold, and the model presented for calculating cost of illness allows for tracking changes in national expenses for skin care in future studies. 
The amount of estimated resources devoted to skin disease management is far more than that required to treat conditions such as urinary incontinence ($16 billion) and hypertension ($23 billion), but far less than that required to treat musculoskeletal conditions ($193 billion).
Jones, Reese E; Mandadapu, Kranthi K
2012-04-21
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
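The core Green-Kubo recipe (integrate the flux autocorrelation function to obtain a transport coefficient) can be illustrated on a synthetic flux with a known answer; this sketches only the estimator, not the paper's replica-averaging or error-bound machinery, and the AR(1) "flux" is an invented surrogate.

```python
import random

def autocorr(x, max_lag):
    # Normalized autocorrelation estimates for lags 0..max_lag-1.
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    return [sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k))
            / ((n - k) * var)
            for k in range(max_lag)]

# AR(1) surrogate flux: its normalized autocorrelation is a**k, so the
# discrete Green-Kubo integral (with dt = 1) should approach 1 / (1 - a).
rng = random.Random(1)
a, n_steps = 0.8, 50_000
flux, v = [], 0.0
for _ in range(n_steps):
    v = a * v + rng.gauss(0.0, 1.0)
    flux.append(v)

acf = autocorr(flux, 60)
gk_integral = sum(acf)        # estimated integral of the correlation
expected = 1.0 / (1.0 - a)    # analytic value for this surrogate: 5.0
```

The statistical error of the integral shrinks with trajectory length, which is the same convergence issue the paper analyzes via Zwanzig-Ailawadi-style bounds.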
Loganathan, Tharani; Ng, Chiu-Wan; Lee, Way-Seah; Jit, Mark
2016-06-01
Rotavirus gastroenteritis (RVGE) results in substantial mortality and morbidity worldwide. However, an accurate estimation of the health and economic burden of RVGE in Malaysia covering public, private and home treatment is lacking. Data from multiple sources were used to estimate diarrheal mortality and morbidity according to health service utilization. The proportion of this burden attributable to rotavirus was estimated from a community-based study and a meta-analysis we conducted of primary hospital-based studies. Rotavirus incidence was determined by multiplying acute gastroenteritis incidence with estimates of the proportion of gastroenteritis attributable to rotavirus. The economic burden of rotavirus disease was estimated from the health systems and societal perspective. Annually, rotavirus results in 27 deaths, 31,000 hospitalizations, 41,000 outpatient visits and 145,000 episodes of home-treated gastroenteritis in Malaysia. We estimate an annual rotavirus incidence of 1 death per 100,000 children and 12 hospitalizations, 16 outpatient clinic visits and 57 home-treated episodes per 1000 children under 5 years of age. Annually, RVGE is estimated to cost US$ 34 million to the healthcare provider and US$ 50 million to society. Productivity loss contributes almost a third of the costs to society. Publicly, privately and home-treated episodes account for 52%, 27% and 21% of the total societal costs, respectively. RVGE represents a considerable health and economic burden in Malaysia. Much of the burden lies in privately or home-treated episodes and is poorly captured in previous studies. This study provides vital information for future evaluations of cost-effectiveness, which are necessary for policy-making regarding universal vaccination.
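The attribution arithmetic described here (acute gastroenteritis episodes scaled by the rotavirus-attributable fraction, then costed per care setting) can be sketched as below; every number is invented for illustration and none is from the study.

```python
def rotavirus_burden(age_episodes, rv_fraction, unit_cost):
    # Episodes of acute gastroenteritis scaled by the rotavirus-attributable
    # fraction for each care setting, then multiplied by a unit cost.
    episodes = {k: n * rv_fraction[k] for k, n in age_episodes.items()}
    total_cost = sum(episodes[k] * unit_cost[k] for k in episodes)
    return episodes, total_cost

# Hypothetical annual acute gastroenteritis episodes, attributable
# fractions, and unit costs (US$) by care setting:
age_episodes = {"hospital": 70_000, "outpatient": 100_000, "home": 350_000}
rv_fraction = {"hospital": 0.45, "outpatient": 0.40, "home": 0.40}
unit_cost = {"hospital": 500.0, "outpatient": 60.0, "home": 10.0}

episodes, total_cost = rotavirus_burden(age_episodes, rv_fraction, unit_cost)
```

Even in this toy version, the hospital setting dominates costs while home-treated episodes dominate counts, mirroring why home treatment is easy to miss in facility-based studies.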
Kate, Rohit J.; Swartz, Ann M.; Welch, Whitney A.; Strath, Scott J.
2016-01-01
Wearable accelerometers can be used to objectively assess physical activity. However, the accuracy of this assessment depends on the underlying method used to process the time series data obtained from accelerometers. Several methods have been proposed that use these data to identify the type of physical activity and estimate its energy cost. Most of the newer methods employ some machine learning technique along with suitable features to represent the time series data. This paper experimentally compares several of these techniques and features on a large dataset of 146 subjects doing eight different physical activities while wearing an accelerometer on the hip. Besides features based on statistics, distance-based features and simple discrete features taken straight from the time series were also evaluated. On the physical activity type identification task, the results show that using more features significantly improves results. Choice of machine learning technique was also found to be important. However, on the energy cost estimation task, choice of features and machine learning technique were found to be less influential. On that task, separate energy cost estimation models trained specifically for each type of physical activity were found to be more accurate than a single model trained for all types of physical activities. PMID:26862679
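Window-level feature extraction of the kind such pipelines feed into a classifier can be sketched minimally; this uses a single synthetic axis and three illustrative features, whereas real systems use three axes and far richer feature sets.

```python
import math

def window_features(x):
    # Three illustrative features computed from one window of samples.
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    # Mean absolute successive difference: a crude "roughness" measure
    # that helps separate sedentary from ambulatory windows.
    masd = sum(abs(x[i + 1] - x[i]) for i in range(n - 1)) / (n - 1)
    return {"mean": mean, "std": math.sqrt(var), "masd": masd}

features = window_features([0.0, 1.0, 0.0, -1.0] * 25)   # synthetic window
```

A feature vector like this, computed per fixed-length window, is what the compared machine learning techniques would consume.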
Rapidly falling costs of battery packs for electric vehicles
NASA Astrophysics Data System (ADS)
Nykvist, Björn; Nilsson, Måns
2015-04-01
To properly evaluate the prospects for commercially competitive battery electric vehicles (BEV) one must have accurate information on the current and predicted cost of battery packs. The literature reveals that costs are coming down, but with large uncertainties on past, current and future costs of the dominating Li-ion technology. This paper presents an original systematic review, analysing over 80 different estimates reported between 2007 and 2014 to systematically trace the costs of Li-ion battery packs for BEV manufacturers. We show that industry-wide cost estimates declined by approximately 14% annually between 2007 and 2014, from above US$1,000 per kWh to around US$410 per kWh, and that the cost of battery packs used by market-leading BEV manufacturers is even lower, at US$300 per kWh, and has declined by 8% annually. Learning rate, the cost reduction following a cumulative doubling of production, is found to be between 6 and 9%, in line with earlier studies on vehicle battery technology. We reveal that the costs of Li-ion battery packs continue to decline and that the costs among market leaders are much lower than previously reported. This has significant implications for the assumptions used when modelling future energy and transport systems and permits an optimistic outlook for BEVs contributing to low-carbon transport.
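The learning-rate arithmetic can be made explicit: with learning rate LR, cost falls by a factor (1 - LR) per cumulative doubling of production. The numbers below are illustrative, not the paper's fitted values.

```python
import math

def learning_curve_cost(c0, n0, n, lr):
    # Cost falls by factor (1 - lr) per cumulative doubling of production:
    # cost(n) = c0 * (n / n0) ** log2(1 - lr)
    return c0 * (n / n0) ** math.log2(1.0 - lr)

# Illustrative: an 8% learning rate starting from US$1,000/kWh, after
# three cumulative doublings of production (n/n0 = 8).
cost_three_doublings = learning_curve_cost(1000.0, 1.0, 8.0, 0.08)
```

One doubling at an 8% learning rate takes US$1,000/kWh to US$920/kWh; three doublings take it to about US$779/kWh.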
Ezenduka, Charles C; Falleiros, Daniel Resende; Godman, Brian B
2017-09-01
Accurate information on the facility costs of treatment is essential to enhance decision making and funding for malaria control. The objective of this study was to estimate the costs of providing treatment for uncomplicated malaria through a public health facility in Nigeria. Hospital costs were estimated from a provider perspective, applying a standard costing procedure. Capital and recurrent expenditures were estimated using an ingredient approach combined with a step-down methodology. Costs attributable to malaria treatment were calculated based on the proportion of malaria cases to total outpatient visits. The costs were calculated in local currency [Naira (N)] and converted to US dollars at the 2013 exchange rate. Total annual costs of N28.723 million (US$182,953.65) were spent by the facility on the treatment of uncomplicated malaria, at a rate of US$31.49 per case, representing approximately 25% of the hospital's total expenditure in the study year. Personnel accounted for over 82.5% of total expenditure, followed by antimalarial medicines at 6.6%. More than 45% of outpatient visits were for uncomplicated malaria. Changes in personnel costs, drug prices and malaria prevalence significantly impacted the study results, indicating the need for improved efficiency in the use of hospital resources. Malaria treatment currently consumes a considerable amount of resources in the facility, driven mainly by personnel cost and a high proportion of malaria cases. There is scope for enhanced efficiency to prevent waste and reduce costs to the provider and ultimately the consumer.
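The attribution step described above, apportioning facility expenditure by malaria's share of outpatient visits and then dividing by caseload, reduces to two lines of arithmetic. The function names and the figures in the test are illustrative, not the study's:

```python
def attributable_cost(total_facility_cost, malaria_visits, total_visits):
    """Step-down attribution: share total facility expenditure in
    proportion to malaria's share of outpatient visits."""
    return total_facility_cost * malaria_visits / total_visits

def cost_per_case(attributed_cost, n_cases):
    """Average treatment cost per uncomplicated malaria case."""
    return attributed_cost / n_cases
```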
Tracking of electrochemical impedance of batteries
NASA Astrophysics Data System (ADS)
Piret, H.; Granjon, P.; Guillet, N.; Cattin, V.
2016-04-01
This paper presents an evolutionary battery impedance estimation method that can easily be embedded in vehicles or nomadic devices. The proposed method not only allows accurate frequency-domain impedance estimation, but also tracks its temporal evolution, unlike classical electrochemical impedance spectroscopy methods. Taking into account constraints of cost and complexity, we propose to use the existing current-control electronics to perform a frequency-domain, evolutionary estimation of the electrochemical impedance. The developed method uses a simple wideband input signal and relies on a recursive local average of Fourier transforms. The averaging is controlled by a single parameter, managing a trade-off between tracking and estimation performance. This normalized parameter allows the behavior of the proposed estimator to be correctly adapted to variations of the impedance. The advantage of the proposed method is twofold: it is easy to embed in a simple electronic circuit, and the battery impedance estimator is evolutionary. The ability of the method to monitor the impedance over time is demonstrated on a simulator and on a real lithium-ion battery, on which a repeatability study is carried out. The experiments reveal good tracking results, and estimation performance as accurate as the usual laboratory approaches.
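A minimal sketch of a recursive local average of Fourier transforms, with a single parameter `alpha` playing the role of the paper's tracking/estimation trade-off knob. The cross-spectrum/auto-spectrum formulation and all names here are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

class ImpedanceTracker:
    """Recursive exponentially weighted average of cross- and auto-spectra,
    estimating Z(f) = S_vi(f) / S_ii(f) from blocks of current and voltage.
    alpha in (0, 1]: small alpha smooths the estimate, large alpha follows
    impedance drift quickly."""
    def __init__(self, alpha, block_len):
        self.alpha = alpha
        nbins = block_len // 2 + 1          # rfft bin count for real signals
        self.S_vi = np.zeros(nbins, dtype=complex)
        self.S_ii = np.zeros(nbins)

    def update(self, current_block, voltage_block):
        I = np.fft.rfft(current_block)
        V = np.fft.rfft(voltage_block)
        a = self.alpha
        # recursive local average: new spectra blended into the running ones
        self.S_vi = (1 - a) * self.S_vi + a * V * np.conj(I)
        self.S_ii = (1 - a) * self.S_ii + a * np.abs(I) ** 2
        return self.S_vi / np.maximum(self.S_ii, 1e-12)
```

Feeding the tracker a wideband (white-noise) current and the voltage of a purely resistive cell recovers the resistance at every frequency bin, which is the sanity check used below.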
Designing Control System Application Software for Change
NASA Technical Reports Server (NTRS)
Boulanger, Richard
2001-01-01
The Unified Modeling Language (UML) was used to design the Environmental Systems Test Stand (ESTS) control system software. The UML was chosen for its ability to facilitate a clear dialog between software designer and customer, from which requirements are discovered and documented in a manner that transposes directly to program objects. Applying the UML to control system software design has resulted in a baseline set of documents from which change, and the effort of that change, can be accurately measured. As the Environmental Systems Test Stand evolves, accurate estimates of the time and effort required to change the control system software will be made. Accurate quantification of the cost of software change can be performed before implementation, improving schedule and budget accuracy.
The Burden of Exposure-Related Diffuse Lung Disease
Goldyn, Sheryl R.; Condos, Rany; Rom, William N.
2013-01-01
Estimating the burden of exposure-related diffuse lung disease in terms of health effects and economic burden remains challenging. Labor statistics are inadequate to define the scope of the problem, and few studies have analyzed the prevalence of exposure-related illnesses and the subsequent health care cost. Well-defined exposures, such as those associated with coal mines, asbestos mines, and stonecutting, have led to more accurate assessment of prevalence and cost. As governmental regulation of workplace exposure has increased, the prevalence of diseases such as silicosis and coal workers’ pneumoconiosis has diminished. However, the health and economic effects of diseases with long latency periods, such as asbestosis and mesothelioma, continue to increase in the short term. Newer exposures, such as those related to air pollution, nylon flock, and the World Trade Center collapse, have added to these costs. As a result, estimates of cost for occupational diseases, including respiratory illnesses, exceed $26 billion annually, and the true economic burden is likely much higher. PMID:19221957
Ion production cost of a gridded helicon ion thruster
NASA Astrophysics Data System (ADS)
Williams, Logan T.; Walker, Mitchell L. R.
2013-10-01
Helicon plasma sources are capable of efficiently ionizing propellants and have been considered for application in electric propulsion. However, studies that estimate the ion production cost of the helicon plasma source are limited and rely on estimates of the extracted ion current. The ion production cost of a helicon plasma source is determined using a gridded ion thruster configuration that allows accurate measurement of the ion beam current. These measurements are used in conjunction with previous characterization of the helicon plasma to create a model of the discharge plasma within the gridded thruster. The device is tested across a range of operating conditions: 343-600 W radio frequency power at 13.56 MHz, 50-250 G, and 1.5 mg s⁻¹ of argon at a pressure of 1.6 × 10⁻⁵ Torr-Ar. The ion production cost is 132-212 ± 28-46 eV/ion, driven primarily by ion loss to the walls and anode, as well as energy loss in the anode and grid sheaths.
A measurement system for large, complex software programs
NASA Technical Reports Server (NTRS)
Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.
1994-01-01
This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.
Robust guaranteed-cost adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
Hendriks, Marleen E.; Kundu, Piyali; Boers, Alexander C.; Bolarinwa, Oladimeji A.; te Pas, Mark J.; Akande, Tanimola M.; Agbede, Kayode; Gomez, Gabriella B.; Redekop, William K.; Schultsz, Constance; Tan, Siok Swan
2014-01-01
Background Disease-specific costing studies can be used as input into cost-effectiveness analyses and provide important information for efficient resource allocation. However, limited data availability and limited expertise constrain such studies in low- and middle-income countries (LMICs). Objective To describe a step-by-step guideline for conducting disease-specific costing studies in LMICs where data availability is limited and to illustrate how the guideline was applied in a costing study of cardiovascular disease prevention care in rural Nigeria. Design The step-by-step guideline provides practical recommendations on methods and data requirements for six sequential steps: 1) definition of the study perspective, 2) characterization of the unit of analysis, 3) identification of cost items, 4) measurement of cost items, 5) valuation of cost items, and 6) uncertainty analyses. Results We discuss the necessary tradeoffs between the accuracy of estimates and data availability constraints at each step and illustrate how a mixed methodology of accurate bottom-up micro-costing and more feasible approaches can be used to make optimal use of all available data. An illustrative example from Nigeria is provided. Conclusions An innovative, user-friendly guideline for disease-specific costing in LMICs is presented, using a mixed methodology to account for limited data availability. The illustrative example showed that the step-by-step guideline can be used by healthcare professionals in LMICs to conduct feasible and accurate disease-specific cost analyses. PMID:24685170
The Launch Systems Operations Cost Model
NASA Technical Reports Server (NTRS)
Prince, Frank A.; Hamaker, Joseph W. (Technical Monitor)
2001-01-01
One of NASA's primary missions is to reduce the cost of access to space while simultaneously increasing safety. A key component, and one of the least understood, is the recurring operations and support cost for reusable launch systems. In order to predict these costs, NASA, under the leadership of the Independent Program Assessment Office (IPAO), has commissioned the development of a Launch Systems Operations Cost Model (LSOCM). LSOCM is a tool to predict the operations & support (O&S) cost of new and modified reusable (and partially reusable) launch systems. The requirements are to predict the non-recurring cost for the ground infrastructure and the recurring cost of maintaining that infrastructure, performing vehicle logistics, and performing the O&S actions to return the vehicle to flight. In addition, the model must estimate the time required to cycle the vehicle through all of the ground processing activities. The current version of LSOCM is an amalgamation of existing tools, leveraging our understanding of shuttle operations cost with a means of predicting how the maintenance burden will change as the vehicle becomes more aircraft-like. The use of the Conceptual Operations Manpower Estimating Tool/Operations Cost Model (COMET/OCM) provides a solid point of departure based on shuttle and expendable launch vehicle (ELV) experience. The incorporation of the Reliability and Maintainability Analysis Tool (RMAT), as expressed by a set of response surface model equations, gives a method for estimating how changing launch system characteristics affects cost and cycle time as compared to today's shuttle system. Plans are being made to improve the model. The development team will be spending the next few months devising a structured methodology that will enable verified and validated algorithms to give accurate cost estimates. To assist in this endeavor, the LSOCM team is part of an Agency-wide effort to combine resources with other cost and operations professionals to support models, databases, and operations assessments.
An approach to estimate body dimensions through constant body ratio benchmarks.
Chao, Wei-Cheng; Wang, Eric Min-Yang
2010-12-01
Building a new anthropometric database is a difficult and costly job that requires considerable manpower and time. However, most designers and engineers do not know how to convert old anthropometric data into applicable new data with minimal errors and costs (Wang et al., 1999). To simplify the process of converting old anthropometric data into useful new data, this study analyzed the available data in paired body dimensions in an attempt to determine constant body ratio (CBR) benchmarks that are independent of gender and age. In total, 483 CBR benchmarks were identified and verified from 35,245 ratios analyzed. Additionally, 197 estimation formulae, taking as inputs 19 easily measured body dimensions, were built using 483 CBR benchmarks. Based on the results for 30 recruited participants, this study determined that the described approach is more accurate and cost-effective than alternative techniques. Copyright © 2010 Elsevier Ltd. All rights reserved.
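The core idea above, a ratio between two body dimensions that stays constant across gender and age groups, can be sketched in a few lines. The selection criterion (a low coefficient of variation of the ratio) and all names are assumptions for illustration, not the paper's exact procedure:

```python
def fit_cbr(source_dims, target_dims):
    """Candidate constant body ratio (CBR) between two paired dimensions:
    returns the mean ratio and its coefficient of variation. A ratio that
    keeps a low CV across groups is what qualifies as a benchmark."""
    ratios = [t / s for s, t in zip(source_dims, target_dims)]
    mean = sum(ratios) / len(ratios)
    var = sum((r - mean) ** 2 for r in ratios) / len(ratios)
    return mean, (var ** 0.5) / mean

def estimate_dimension(measured_value, cbr):
    """Estimate an unmeasured body dimension from an easily measured one."""
    return cbr * measured_value
```

A CBR fitted on old paired data can then convert a single easily measured dimension into an estimate of a harder-to-measure one, which is the cost saving the study quantifies.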
A low-cost three-dimensional laser surface scanning approach for defining body segment parameters.
Pandis, Petros; Bull, Anthony Mj
2017-11-01
Body segment parameters are used in many different applications in ergonomics as well as in dynamic modelling of the musculoskeletal system. Body segment parameters can be defined using different methods, including techniques that involve time-consuming manual measurements of the human body, used in conjunction with models or equations. In this study, a scanning technique for measuring subject-specific body segment parameters in an easy, fast, accurate and low-cost way was developed and validated. The scanner can obtain the body segment parameters in a single scanning operation, which takes between 8 and 10 s. The results obtained with the system show a standard deviation of 2.5% in volumetric measurements of the upper limb of a mannequin and 3.1% difference between scanning volume and actual volume. Finally, the maximum mean error for the moment of inertia by scanning a standard-sized homogeneous object was 2.2%. This study shows that a low-cost system can provide quick and accurate subject-specific body segment parameter estimates.
Simplifying silicon burning: Application of quasi-equilibrium to (alpha) network nucleosynthesis
NASA Technical Reports Server (NTRS)
Hix, W. R.; Thielemann, F.-K.; Khokhlov, A. M.; Wheeler, J. C.
1997-01-01
While the need for accurate calculation of nucleosynthesis and the resulting rate of thermonuclear energy release within hydrodynamic models of stars and supernovae is clear, the computational expense of these nucleosynthesis calculations often forces a compromise in accuracy to reduce the computational cost. To redress this trade-off of accuracy for speed, the authors present an improved nuclear network which takes advantage of quasi-equilibrium in order to reduce the number of independent nuclei, and hence the computational cost of nucleosynthesis, without significant reduction in accuracy. In this paper they discuss the first application of this method, the further reduction in size of the minimal alpha network. The resultant QSE-reduced alpha network is twice as fast as the conventional alpha network it replaces and requires the tracking of half as many abundance variables, while accurately estimating the rate of energy generation. Such a reduction in cost is particularly necessary for future generations of multi-dimensional models of supernovae.
Estimating cost ratio distribution between fatal and non-fatal road accidents in Malaysia
NASA Astrophysics Data System (ADS)
Hamdan, Nurhidayah; Daud, Noorizam
2014-07-01
Road traffic crashes are a major global problem and should be treated as a shared responsibility. In Malaysia, road accidents killed 6,917 people and injured or disabled 17,522 in 2012, and the government spent about RM9.3 billion in 2009; the resulting losses, reported annually, amount to approximately 1 to 2 percent of gross domestic product (GDP). The current cost ratio for fatal and non-fatal accidents used by the Ministry of Works Malaysia is simply based on the arbitrary value of 6:4 (equivalently 1.5:1), reflecting the fact that six factors are involved in the accident cost calculation for fatal accidents while four are used for non-fatal accidents. This simple indication used by the authority is questionable, since there is a lack of mathematical and conceptual evidence to explain how the ratio was determined. The main aim of this study is to determine a new accident cost ratio for fatal and non-fatal accidents in Malaysia based on a quantitative statistical approach. The cost ratio distributions are estimated based on the Weibull distribution. Due to the unavailability of official accident cost data, insurance claim data for both fatal and non-fatal accidents have been used as proxy information for the actual accident cost. Two types of parameter estimates are used in this study: maximum likelihood estimation (MLE) and robust estimation. The findings reveal that the accident cost ratio for fatal to non-fatal claims is 1.33 when using MLE, while for robust estimates the ratio is slightly higher at 1.51. This study will help the authority to determine a more accurate cost ratio between fatal and non-fatal accidents than the official ratio set by the government, since the cost ratio is an important element used as a weighting in modelling road accident data. Therefore, this study also provides guidance for revising the insurance claim ratios set by the Malaysian road authority, so that the method most suitable for Malaysia can be identified.
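Once Weibull parameters are fitted to the fatal and non-fatal claim data, the cost ratio follows from the closed-form Weibull mean. The sketch below shows only that final step, with hypothetical parameter values; the fitting itself (MLE or robust) is not reproduced here:

```python
import math

def weibull_mean(shape_k, scale_lam):
    """Mean of a Weibull(k, lambda) distribution: lambda * Gamma(1 + 1/k)."""
    return scale_lam * math.gamma(1.0 + 1.0 / shape_k)

def claim_cost_ratio(fatal, nonfatal):
    """Fatal-to-non-fatal cost ratio from fitted (shape, scale) pairs."""
    return weibull_mean(*fatal) / weibull_mean(*nonfatal)
```

Note that when the two fitted shapes coincide, the gamma factors cancel and the ratio reduces to the ratio of scale parameters.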
NASA Astrophysics Data System (ADS)
Bateni, S. M.; Xu, T.
2015-12-01
Accurate estimation of water and heat fluxes is required for irrigation scheduling, weather prediction, and water resources planning and management. A weak-constraint variational data assimilation (WC-VDA) scheme is developed to estimate water and heat fluxes by assimilating sequences of land surface temperature (LST) observations. The commonly used strong-constraint VDA systems adversely affect the accuracy of water and heat flux estimates because they assume the model is perfect. The WC-VDA approach accounts for structural and model errors and generates more accurate results by adding a model error term to the surface energy balance equation. The two key unknown parameters of the WC-VDA system (CHN, the bulk heat transfer coefficient, and EF, the evaporative fraction) and the model error term are optimized by minimizing the cost function. The WC-VDA model was tested at two sites with contrasting hydrological and vegetative conditions: the Daman site (a wet site located in an oasis area and covered by seeded corn) and the Huazhaizi site (a dry site located in a desert area and covered by sparse grass), both in the middle reaches of the Heihe River basin, northwest China. Compared to the strong-constraint VDA system, the WC-VDA method generates more accurate estimates of water and energy fluxes over both the dry desert site and the wet oasis site.
Improving Car Navigation with a Vision-Based System
NASA Astrophysics Data System (ADS)
Kim, H.; Choi, K.; Lee, I.
2015-08-01
The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS with in-vehicle sensors. We employed a single photo resection process to derive the position and attitude of the camera, and thus those of the car. These image georeferencing results are combined with the other sensory data under a sensor fusion framework, using an extended Kalman filter for more accurate estimation of the positions. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
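The fusion idea above can be illustrated with a deliberately simplified one-dimensional Kalman filter: dead-reckon with in-vehicle speed in the predict step and correct with position fixes when they are available. This toy stands in for the paper's extended Kalman filter, which additionally fuses camera-based single-photo-resection poses; all names and noise values are assumptions:

```python
def kalman_fuse(z_pos, u_speed, dt, q=0.5, r=25.0):
    """1-D constant-velocity Kalman filter. z_pos holds noisy position
    fixes (None where the signal is blocked); u_speed holds in-vehicle
    speed readings. q and r are assumed process/measurement variances."""
    x, P = 0.0, 100.0            # assumed initial state and uncertainty
    estimates = []
    for z, u in zip(z_pos, u_speed):
        x = x + u * dt           # predict: advance position with speed
        P = P + q
        if z is not None:        # update only when a fix is available
            K = P / (P + r)
            x = x + K * (z - x)
            P = (1 - K) * P
        estimates.append(x)
    return estimates
```

During an outage (consecutive `None` fixes) the filter coasts on the speed sensor, mimicking how the paper's system bridges GPS-denied stretches with vision and in-vehicle data.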
NASA Astrophysics Data System (ADS)
Audet, J.; Martinsen, L.; Hasler, B.; de Jonge, H.; Karydi, E.; Ovesen, N. B.; Kronvang, B.
2014-07-01
Eutrophication of aquatic ecosystems caused by excess concentrations of nitrogen and phosphorus may have harmful consequences for biodiversity and poses a health risk to humans via the water supplies. Reduction of nitrogen and phosphorus losses to aquatic ecosystems involves implementation of costly measures, and reliable monitoring methods are therefore essential to select appropriate mitigation strategies and to evaluate their effects. Here, we compare the performances and costs of three methodologies for the monitoring of nutrients in rivers: grab sampling, time-proportional sampling and passive sampling using flow proportional samplers. Assuming time-proportional sampling to be the best estimate of the "true" nutrient load, our results showed that the risk of obtaining wrong total nutrient load estimates by passive samplers is high despite similar costs as the time-proportional sampling. Our conclusion is that for passive samplers to provide a reliable monitoring alternative, further development is needed. Grab sampling was the cheapest of the three methods and was more precise and accurate than passive sampling. We conclude that although monitoring employing time-proportional sampling is costly, its reliability precludes unnecessarily high implementation expenses.
NASA Astrophysics Data System (ADS)
Audet, J.; Martinsen, L.; Hasler, B.; de Jonge, H.; Karydi, E.; Ovesen, N. B.; Kronvang, B.
2014-11-01
Eutrophication of aquatic ecosystems caused by excess concentrations of nitrogen and phosphorus may have harmful consequences for biodiversity and poses a health risk to humans via water supplies. Reduction of nitrogen and phosphorus losses to aquatic ecosystems involves implementation of costly measures, and reliable monitoring methods are therefore essential to select appropriate mitigation strategies and to evaluate their effects. Here, we compare the performances and costs of three methodologies for the monitoring of nutrients in rivers: grab sampling; time-proportional sampling; and passive sampling using flow-proportional samplers. Assuming hourly time-proportional sampling to be the best estimate of the "true" nutrient load, our results showed that the risk of obtaining wrong total nutrient load estimates by passive samplers is high despite similar costs as the time-proportional sampling. Our conclusion is that for passive samplers to provide a reliable monitoring alternative, further development is needed. Grab sampling was the cheapest of the three methods and was more precise and accurate than passive sampling. We conclude that although monitoring employing time-proportional sampling is costly, its reliability precludes unnecessarily high implementation expenses.
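The risk the two studies above quantify, that sparse sampling misestimates the total load, can be shown with a toy calculation. Names and numbers are illustrative, not the studies' data:

```python
def true_load(conc, flow, dt):
    """Reference nutrient load: integrate concentration x discharge over
    time (what dense time-proportional sampling approximates)."""
    return sum(c * q * dt for c, q in zip(conc, flow))

def grab_estimate(grab_concs, flow, dt):
    """Grab-sampling estimate: mean of a few spot concentrations
    multiplied by the total discharged volume."""
    mean_c = sum(grab_concs) / len(grab_concs)
    return mean_c * sum(q * dt for q in flow)
```

When concentration rises with flow (e.g. storm-driven nutrient pulses), grabs taken at baseflow miss the high-load events and bias the annual total low, which is why the reference method in these studies is the densely sampled time-proportional load.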
Fast Quaternion Attitude Estimation from Two Vector Measurements
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Bauer, Frank H. (Technical Monitor)
2001-01-01
Many spacecraft attitude determination methods use exactly two vector measurements. The two vectors are typically the unit vector to the Sun and the Earth's magnetic field vector for coarse "sun-mag" attitude determination or unit vectors to two stars tracked by two star trackers for fine attitude determination. Existing closed-form attitude estimates based on Wahba's optimality criterion for two arbitrarily weighted observations are somewhat slow to evaluate. This paper presents two new fast quaternion attitude estimation algorithms using two vector observations, one optimal and one suboptimal. The suboptimal method gives the same estimate as the TRIAD algorithm, at reduced computational cost. Simulations show that the TRIAD estimate is almost as accurate as the optimal estimate in representative test scenarios.
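The TRIAD algorithm referenced above is classical and fits in a few lines. This sketch follows the standard textbook construction (a body-frame triad matched to a reference-frame triad), not the paper's new optimal algorithm:

```python
import numpy as np

def triad(b1, b2, r1, r2):
    """TRIAD attitude solution: rotation matrix A with A @ r ≈ b, from two
    body-frame observations (b1, b2) and their reference vectors (r1, r2).
    The (b1, r1) pair should be the more accurate measurement, since TRIAD
    fits it exactly and uses the second pair only to fix the remaining
    rotation about b1."""
    def tri(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 = t2 / np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack([t1, t2, t3])
    return tri(b1, b2) @ tri(r1, r2).T
```

With noiseless measurements TRIAD recovers the true rotation exactly, which is the check below (a 90° rotation about z).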
An improved set of standards for finding cost for cost-effectiveness analysis.
Barnett, Paul G
2009-07-01
Guidelines have helped standardize methods of cost-effectiveness analysis, allowing different interventions to be compared and enhancing the generalizability of study findings. There is agreement that all relevant services be valued from the societal perspective using a long-term time horizon and that more exact methods be used to cost services most affected by the study intervention. Guidelines are not specific enough with respect to costing methods, however. The literature was reviewed to identify the problems associated with the 4 principal methods of cost determination. Microcosting requires direct measurement and is ordinarily reserved for costing novel interventions. Analysts should include nonwage labor cost, person-level and institutional overhead, and the cost of development, set-up activities, supplies, space, and screening. Activity-based cost systems promise to find accurate costs of all services provided, but are not widely adopted. Quality must be evaluated and the generalizability of cost estimates to other settings must be considered. Administrative cost estimates, chiefly cost-adjusted charges, are widely used, but the analyst must consider items excluded from the available system. Gross costing methods determine the quantity of services used and apply a unit cost. If the intervention will affect the characteristics of a service, the method should not assume that the service is homogeneous. Questions are posed for future reviews of the quality of costing methods. The analyst must avoid inappropriate assumptions, especially those that bias the analysis by excluding costs that are affected by the intervention under study.
ERIC Educational Resources Information Center
Hamilton, Darrick; Goldsmith, Arthur H.; Darity, William, Jr.
2008-01-01
Scholars have found that poor English proficiency is negatively associated with wages using self-reported measures. However, these estimates may suffer from misclassification bias. Interviewer ratings are likely to more accurately proxy employer assessment of worker language ability. Using self-reported and interviewer ratings from the Multi-City…
Low-Cost Direct Detect Spaceborne Lidar
2014-06-01
…of-flight for a set of photons and determine a very accurate range estimate. With light being a much shorter wavelength than the radio frequencies… [remainder of this abstract lost to report-form and table-of-contents residue; surviving section headings: A. Apollo Laser Altimeter; B. Clementine]
The feasibility of remotely sensed data to estimate urban tree dimensions and biomass
Jun-Hak Lee; Yekang Ko; E. Gregory McPherson
2016-01-01
Accurately measuring the biophysical dimensions of urban trees, such as crown diameter, stem diameter, height, and biomass, is essential for quantifying their collective benefits as an urban forest. However, the cost of directly measuring thousands or millions of individual trees through field surveys can be prohibitive. Supplementing field surveys with remotely sensed...
NASA Astrophysics Data System (ADS)
Rakotomanga, Prisca; Soussen, Charles; Blondel, Walter C. P. M.
2017-03-01
Diffuse reflectance spectroscopy (DRS) has been acknowledged as a valuable optical biopsy tool for characterizing pathological modifications, such as cancer, in epithelial tissues in vivo. In spatially resolved DRS, accurate and robust estimation of the optical parameters (OP) of biological tissues is a major challenge due to the complexity of the physical models. Solving this inverse problem requires consideration of three components: the forward model, the cost function, and the optimization algorithm. This paper presents a comparative numerical study of the performance in estimating OP depending on the choice made for each of these components. Mono- and bi-layer tissue models are considered. Monowavelength (scalar) absorption and scattering coefficients are estimated. As a forward model, diffusion approximation analytical solutions with and without noise are implemented. Several cost functions are evaluated, possibly including normalized data terms. Two local optimization methods, Levenberg-Marquardt and Trust-Region-Reflective, are considered. Because these may be sensitive to the initial setting, a global optimization approach is proposed to improve the estimation accuracy. This algorithm is based on repeated calls to the above-mentioned local methods, with initial parameters randomly sampled. Two global optimization methods, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are also implemented. Estimation performance is evaluated in terms of relative errors between the ground truth and the estimated values for each set of unknown OP. The combination of the number of variables to be estimated, the nature of the forward model, the cost function to be minimized, and the optimization method is discussed.
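The proposed global strategy, repeated local optimization from randomly sampled initial parameters, can be sketched as follows. The crude numerical-gradient descent below merely stands in for Levenberg-Marquardt or Trust-Region-Reflective, and the test cost function is a toy with one global and one local minimum, not a DRS forward model:

```python
import numpy as np

def local_descent(cost, x0, lr=0.05, iters=400, eps=1e-6):
    """Crude numerical-gradient descent, a stand-in for the paper's
    local solvers; returns the final point and its cost."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        grad = np.array([(cost(x + eps * e) - cost(x - eps * e)) / (2 * eps)
                         for e in np.eye(x.size)])
        x = x - lr * grad
    return x, cost(x)

def multistart(local_solver, lower, upper, n_starts=20, seed=0):
    """Global strategy: rerun a local optimizer from randomly sampled
    initial points within the bounds and keep the lowest-cost solution."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lower, float), np.asarray(upper, float)
    best_x, best_f = None, np.inf
    for _ in range(n_starts):
        x, f = local_solver(lo + rng.random(lo.size) * (hi - lo))
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f
```

Starts that fall into the local basin converge to the inferior minimum, but keeping the best over many random initializations recovers the global one, which is exactly the sensitivity-to-initialization problem the paper's approach addresses.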
Design of Low-Cost Vehicle Roll Angle Estimator Based on Kalman Filters and an Iot Architecture.
Garcia Guzman, Javier; Prieto Gonzalez, Lisardo; Pajares Redondo, Jonatan; Sanz Sanchez, Susana; Boada, Beatriz L
2018-06-03
In recent years, there have been many advances in vehicle technologies based on the efficient use of real-time data provided by embedded sensors. Some of these technologies, such as Roll Stability Control (RSC) systems for commercial vehicles, can help drivers avoid a crash or reduce its severity. In RSC, several critical variables, such as sideslip or roll angle, can only be directly measured using expensive equipment, and devices of this kind would increase the price of commercial vehicles. Nevertheless, sideslip or roll angle values can be estimated using MEMS sensors in combination with data fusion algorithms. The objectives of this research work are to integrate roll angle estimators based on Linear and Unscented Kalman filters, to evaluate the precision of the results obtained, and to determine whether the hard real-time processing constraints are fulfilled so that estimators of this kind can be embedded in IoT architectures based on low-cost equipment deployable in commercial vehicles. An experimental testbed was set up, composed of a van with two low-cost kits: the first including a Raspberry Pi 3 Model B, and the other an Intel Edison System on Chip. This experimental environment was tested under different conditions for comparison. The results obtained from the low-cost experimental kits, based on IoT architectures and including estimators based on Kalman filters, provide accurate roll angle estimation. These results also show that the processing time needed to acquire the data and execute the Kalman-filter-based estimations fulfills hard real-time constraints.
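As a minimal illustration of the kind of low-cost estimator discussed, here is a two-state linear Kalman filter (roll angle plus gyro bias) fusing a rate gyro with an accelerometer-derived roll measurement. It is a generic textbook sketch, not the estimator implemented on the Raspberry Pi or Intel Edison kits, and all tuning values are assumptions.

```python
import math, random

class RollKalman:
    # Minimal linear Kalman filter. State: [roll (rad), gyro bias (rad/s)].
    def __init__(self, q_angle=1e-4, q_bias=1e-6, r_meas=0.05):
        self.x = [0.0, 0.0]
        self.P = [[1.0, 0.0], [0.0, 1.0]]
        self.q = (q_angle, q_bias)
        self.r = r_meas

    def predict(self, gyro_rate, dt):
        roll, bias = self.x
        self.x = [roll + (gyro_rate - bias) * dt, bias]
        p = self.P
        # P <- F P F^T + Q with F = [[1, -dt], [0, 1]]
        p00 = p[0][0] - dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + self.q[0]
        p01 = p[0][1] - dt * p[1][1]
        p10 = p[1][0] - dt * p[1][1]
        p11 = p[1][1] + self.q[1]
        self.P = [[p00, p01], [p10, p11]]

    def update(self, roll_meas):
        # Measurement model H = [1, 0]: roll derived from accelerometer geometry.
        y = roll_meas - self.x[0]
        s = self.P[0][0] + self.r
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p = self.P
        self.P = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                  [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]

# Simulated check: sinusoidal roll, constant gyro bias of 0.03 rad/s.
DT, BIAS_TRUE = 0.01, 0.03
rng = random.Random(1)
kf = RollKalman()
for k in range(3000):
    t = k * DT
    true_rate = 0.1 * math.cos(0.5 * t)
    true_roll = 0.2 * math.sin(0.5 * t)
    kf.predict(true_rate + BIAS_TRUE + rng.gauss(0.0, 0.01), DT)
    kf.update(true_roll + rng.gauss(0.0, 0.05))
FINAL_ROLL_TRUE = 0.2 * math.sin(0.5 * 2999 * DT)
```

After 30 simulated seconds the bias state should have converged close to the injected 0.03 rad/s, which is what makes the gyro integration usable over long horizons.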
A Robust Nonlinear Observer for Real-Time Attitude Estimation Using Low-Cost MEMS Inertial Sensors
Guerrero-Castellanos, José Fermi; Madrigal-Sastre, Heberto; Durand, Sylvain; Torres, Lizeth; Muñoz-Hernández, German Ardul
2013-01-01
This paper deals with the attitude estimation of a rigid body equipped with angular velocity sensors and reference vector sensors. A quaternion-based nonlinear observer is proposed in order to fuse all information sources and to obtain an accurate estimate of the attitude. It is shown that the observer error dynamics can be separated into two passive subsystems connected in “feedback”. Then, this property is used to show that the error dynamics is input-to-state stable when the measurement disturbance is seen as an input and the error as the state. These results allow one to affirm that the observer is “robustly stable”. The proposed observer is evaluated in real time with the design and implementation of an Attitude and Heading Reference System (AHRS) based on a low-cost MEMS (Micro-Electro-Mechanical Systems) Inertial Measurement Unit (IMU), magnetic sensors, and a 16-bit microcontroller. The resulting estimates are compared with those of a high-precision motion system to demonstrate its performance. PMID:24201316
An Unscented Kalman-Particle Hybrid Filter for Space Object Tracking
NASA Astrophysics Data System (ADS)
Raihan A. V, Dilshad; Chakravorty, Suman
2018-03-01
Optimal and consistent estimation of the state of space objects is pivotal to surveillance and tracking applications. However, probabilistic estimation of space objects is made difficult by the non-Gaussianity and nonlinearity associated with orbital mechanics. In this paper, we present an unscented Kalman-particle hybrid filtering framework for recursive Bayesian estimation of space objects. The hybrid filtering scheme is designed to provide accurate and consistent estimates when measurements are sparse without incurring a large computational cost. It employs an unscented Kalman filter (UKF) for estimation when measurements are available. When the target is outside the field of view (FOV) of the sensor, it updates the state probability density function (PDF) via a sequential Monte Carlo method. The hybrid filter addresses the problem of particle depletion through a suitably designed filter transition scheme. To assess the performance of the hybrid filtering approach, we consider two test cases of space objects that are assumed to undergo full three-dimensional orbital motion under the effects of J2 and atmospheric drag perturbations. It is demonstrated that the hybrid filters can furnish fast, accurate and consistent estimates, outperforming standard UKF and particle filter (PF) implementations.
Lave, Matthew; Stein, Joshua; Smith, Ryan
2016-07-28
To address the lack of knowledge of local solar variability, we have developed and deployed a low-cost solar variability datalogger (SVD). While most currently used solar irradiance sensors are expensive pyranometers with high accuracy (relevant for annual energy estimates), low-cost sensors display similar precision (relevant for solar variability) to high-cost pyranometers, even if they are not as accurate. In this work, we present an evaluation of various low-cost irradiance sensor types, describe the SVD, and present validation and comparison of the SVD-collected data. In conclusion, the low cost and ease of use of the SVD will enable a greater understanding of local solar variability, which will reduce developer and utility uncertainty about the impact of solar photovoltaic (PV) installations and thus will encourage greater penetration of solar energy.
Assays for estimating HIV incidence: updated global market assessment and estimated economic value.
Morrison, Charles S; Homan, Rick; Mack, Natasha; Seepolmuang, Pairin; Averill, Megan; Taylor, Jamilah; Osborn, Jennifer; Dailey, Peter; Parkin, Neil; Ongarello, Stefano; Mastro, Timothy D
2017-11-01
Accurate incidence estimates are needed to characterize the HIV epidemic and guide prevention efforts. HIV incidence assays are cost-effective laboratory assays that provide incidence estimates from cross-sectional surveys. We conducted a global market assessment of HIV incidence assays under three market scenarios and estimated the economic value of improved incidence assays. We interviewed 27 stakeholders, and reviewed journal articles, working group proceedings, and manufacturers' sales figures. We determined HIV incidence assay use in 2014, and estimated use in 2015 to 2017 and in 5 to 10 years under three market scenarios, as well as the cost of conducting national and key population surveys using an HIV incidence assay with improved performance. Global 2014 HIV incidence assay use was 308,900 tests, highest in Asia and mostly for case- and population-based surveillance. Estimated 2015 to 2017 use was 94,475 annually, with declines due to China and the United States discontinuing incidence assay use for domestic surveillance. Projected annual use in 5 to 10 years under scenario 1 (no change in technology) was 94,475. For scenario 2 (a moderately improved incidence assay), projected annual use was 286,031. Projected annual use for scenario 3 (game-changing technologies with an HIV incidence assay as part of (a) standard confirmatory testing and (b) standard rapid testing) was 500,000 and 180 million, respectively. As HIV incidence assay precision increases, the decreased sample sizes required for incidence estimation resulted in $5 million to $23 million in annual reductions in survey costs, easily offsetting the approximately $3 million required to develop a new assay. Improved HIV incidence assays could substantially reduce HIV incidence estimation costs. Continued development of HIV incidence assays with improved performance is required to realize these cost benefits. © 2017 The Authors.
Journal of the International AIDS Society published by John Wiley & Sons Ltd on behalf of the International AIDS Society.
NASA Technical Reports Server (NTRS)
1981-01-01
Econ, Inc.'s agricultural aerial application, "ag-air," involves more than 10,000 aircraft spreading insecticides, herbicides, fertilizer, seed, and other materials over millions of acres of farmland. It is difficult for an operator to estimate costs accurately and to decide what to charge or which airplane can handle which assignment most efficiently. A computerized service was designed to improve business efficiency in the choice of aircraft and the determination of charge rates based on realistic operating-cost data. Each subscriber fills out a detailed form pertaining to his needs and then receives a custom-tailored computer printout best suited to his particular business mix.
Meyer-Rath, Gesine; Over, Mead
2012-01-01
Policy discussions about the feasibility of massively scaling up antiretroviral therapy (ART) to reduce HIV transmission and incidence hinge on accurately projecting the cost of such scale-up in comparison to the benefits from reduced HIV incidence and mortality. We review the available literature on modelled estimates of the cost of providing ART to different populations around the world, and suggest alternative methods of characterising cost when modelling several decades into the future. In past economic analyses of ART provision, costs were often assumed to vary by disease stage and treatment regimen, but for treatment as prevention, in particular, most analyses assume a uniform cost per patient. This approach disregards variables that can affect unit cost, such as differences in factor prices (i.e., the prices of supplies and services) and the scale and scope of operations (i.e., the sizes and types of facilities providing ART). We discuss several of these variables, and then present a worked example of a flexible cost function used to determine the effect of scale on the cost of a proposed scale-up of treatment as prevention in South Africa. Adjusting previously estimated costs of universal testing and treatment in South Africa for diseconomies of small scale, i.e., more patients being treated in smaller facilities, adds 42% to the expected future cost of the intervention. PMID:22802731
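A flexible cost function of the kind argued for above can be sketched in a few lines: average cost per patient-year varies with facility scale instead of being uniform. The functional form and every parameter value below are hypothetical placeholders, not the estimates from the South African analysis.

```python
def unit_cost(patients, c_ref=300.0, q_ref=1000.0, gamma=-0.15):
    # Flexible average-cost function: cost per patient-year falls with facility
    # scale when gamma < 0 (economies of scale). All values are hypothetical.
    return c_ref * (patients / q_ref) ** gamma

def total_cost(facility_sizes):
    # Total programme cost, summing facility-level cost = size * unit cost.
    return sum(n * unit_cost(n) for n in facility_sizes)

# The same 10,000 patients treated in one large facility vs. 100 small ones:
CONCENTRATED = total_cost([10000])
DISPERSED = total_cost([100] * 100)
SMALL_SCALE_PREMIUM = DISPERSED / CONCENTRATED - 1.0
```

With these illustrative parameters, serving the caseload through many small facilities roughly doubles total cost, which is the diseconomies-of-small-scale effect the abstract quantifies (there, a 42% addition with the actual estimated parameters).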
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Woohyun; Braun, J.
Refrigerant mass flow rate is an important measurement for monitoring equipment performance and enabling fault detection and diagnostics. However, a traditional mass flow meter is expensive to purchase and install. A virtual refrigerant mass flow sensor (VRMF) uses a mathematical model to estimate flow rate from low-cost measurements and can potentially be implemented at low cost. This study evaluates three VRMFs for estimating refrigerant mass flow rate. The first model uses a compressor map that relates refrigerant flow rate to measurements of inlet and outlet pressure and inlet temperature. The second model uses an energy-balance method on the compressor that uses a compressor map for power consumption, which is relatively independent of compressor faults that influence mass flow rate. The third model is developed using an empirical correlation for an electronic expansion valve (EEV) based on an orifice equation. The three VRMFs are shown to work well in estimating refrigerant mass flow rate for various systems under fault-free conditions with less than 5% RMS error. Each of the three mass flow rate estimates can be utilized to diagnose and track the following faults: 1) loss of compressor performance, 2) fouled condenser or evaporator filter, and 3) faulty expansion device, respectively. For example, a compressor refrigerant flow map model only provides an accurate estimate when the compressor operates normally. When a compressor is not delivering the expected flow due to a leaky suction or discharge valve or other internal fault, the energy-balance or EEV model can provide accurate flow estimates. In this paper, the flow differences provide an indication of loss of compressor performance and can be used for fault detection and diagnostics.
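The third VRMF model, an orifice-equation correlation for the EEV, has a particularly compact form, and comparing two independent flow estimates yields the fault indication described. A hedged sketch follows; the discharge coefficient, geometry, and refrigerant properties are invented for illustration, not taken from the study.

```python
import math

def eev_mass_flow(cd, area_m2, rho_kg_m3, dp_pa):
    # Orifice-equation EEV model: m_dot = Cd * A * sqrt(2 * rho * dP).
    # In practice Cd is fit empirically (often as a function of valve opening).
    return cd * area_m2 * math.sqrt(2.0 * rho_kg_m3 * dp_pa)

def flows_disagree(m_a, m_b, tol=0.05):
    # Cross-checking two virtual sensors flags a possible fault, e.g. a leaky
    # compressor valve when the map-based estimate departs from the EEV-based one.
    return abs(m_a - m_b) / max(m_a, m_b) > tol

# Hypothetical operating point: Cd = 0.7, 2 mm^2 orifice, liquid density
# 1100 kg/m^3, 1.2 MPa pressure drop.
M_EEV = eev_mass_flow(0.7, 2.0e-6, 1100.0, 1.2e6)  # kg/s
```

The disagreement check mirrors the paper's diagnostic idea: each model is accurate only while its component is healthy, so a persistent gap between estimates points at the faulty component.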
McGee, Benton D.; Tollett, Roland W.; Goree, Burl B.
2007-01-01
Pressure transducers (sensors) are accurate, reliable, and cost-effective tools to measure and record the magnitude, extent, and timing of hurricane storm surge. Sensors record storm-surge peaks more accurately and reliably than do high-water marks. Data collected by sensors may be used to estimate when, where, and to what degree storm-surge flooding will occur during future storm-surge events, and to calibrate and verify storm-surge models, resulting in a better understanding of the dynamics of storm surge.
A technique for estimating dry deposition velocities based on similarity with latent heat flux
NASA Astrophysics Data System (ADS)
Pleim, Jonathan E.; Finkelstein, Peter L.; Clarke, John F.; Ellestad, Thomas G.
Field measurements of chemical dry deposition are needed to assess impacts and trends of airborne contaminants on the exposure of crops and unmanaged ecosystems, as well as for the development and evaluation of air quality models. However, accurate measurements of dry deposition velocities require expensive eddy correlation measurements and can only practically be made for a few chemical species such as O3 and CO2. On the other hand, operational dry deposition measurements such as those used in large area networks involve relatively inexpensive standard meteorological and chemical measurements but rely on less accurate deposition velocity models. This paper describes an intermediate technique which can give accurate estimates of dry deposition velocity for chemical species whose deposition is dominated by stomatal uptake, such as O3 and SO2. This method can give results that are nearly the quality of eddy correlation measurements of trace gas fluxes at much lower cost. The concept is that bulk stomatal conductance can be accurately estimated from measurements of latent heat flux combined with standard meteorological measurements of humidity, temperature, and wind speed. The technique is tested using data from a field experiment where high-quality eddy correlation measurements were made over soybeans. Over a four-month period, which covered the entire growth cycle, this technique showed very good agreement with eddy correlation measurements of O3 deposition velocity.
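The core inversion, recovering bulk stomatal resistance from latent heat flux and then forming the deposition velocity from series resistances, can be sketched as below. The Magnus saturation formula, the diffusivity ratio, and all input values are textbook approximations chosen for illustration, not the paper's exact formulation.

```python
import math

LV = 2.45e6            # latent heat of vaporization, J/kg
RHO = 1.2              # air density, kg/m^3
D_H2O_OVER_O3 = 1.65   # approximate ratio of molecular diffusivities

def sat_specific_humidity(t_c, p_pa=101325.0):
    # Saturation specific humidity via the Magnus formula (approximate).
    es = 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))
    return 0.622 * es / (p_pa - 0.378 * es)

def bulk_stomatal_resistance(le_w_m2, t_surf_c, q_air, r_a):
    # Invert LE = RHO * LV * (q_sat(Ts) - q_air) / (r_a + r_s) for r_s (s/m).
    dq = sat_specific_humidity(t_surf_c) - q_air
    return RHO * LV * dq / le_w_m2 - r_a

def o3_deposition_velocity(le_w_m2, t_surf_c, q_air, r_a, r_b):
    # Series-resistance form: vd = 1 / (r_a + r_b + r_c), with the canopy
    # resistance scaled from water vapour to O3 by the diffusivity ratio.
    r_s = bulk_stomatal_resistance(le_w_m2, t_surf_c, q_air, r_a)
    r_c = r_s * D_H2O_OVER_O3
    return 1.0 / (r_a + r_b + r_c)

# Hypothetical midday conditions over a crop canopy:
RS = bulk_stomatal_resistance(300.0, 25.0, 0.010, 30.0)
VD = o3_deposition_velocity(300.0, 25.0, 0.010, 30.0, 15.0)  # m/s
```

With these inputs the sketch yields an O3 deposition velocity of roughly 0.6-0.7 cm/s, a plausible magnitude for a transpiring canopy, illustrating why the latent-heat route tracks eddy correlation values so closely when stomatal uptake dominates.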
Use of the Magnetic Field for Improving Gyroscopes’ Biases Estimation
Munoz Diaz, Estefania; de Ponte Müller, Fabian; García Domínguez, Juan Jesús
2017-01-01
An accurate orientation is crucial to a satisfactory position estimate in pedestrian navigation. The orientation estimate, however, is greatly affected by errors such as the biases of gyroscopes. In order to minimize the error in the orientation, the biases of the gyroscopes must be estimated and subtracted. In the state of the art it has been proposed, but not proved, that this bias estimation can be accomplished using magnetic field measurements. The objective of this work is to evaluate the effectiveness of using magnetic field measurements to estimate the biases of medium-cost micro-electro-mechanical systems (MEMS) gyroscopes. We carry out the evaluation with experiments that cover both quasi-error-free and medium-cost MEMS turn-rate and magnetic measurements. The impact of different homogeneous magnetic field distributions and magnetically perturbed environments is analyzed. Additionally, the effect of successful bias subtraction on the orientation and the estimated trajectory is detailed. Our results show that the use of magnetic field measurements is beneficial to correct bias estimation. Further, we show that different magnetic field distributions affect the bias estimation process differently. Moreover, the biases are likewise correctly estimated under perturbed magnetic fields. However, for indoor and urban scenarios the bias estimation process is very slow. PMID:28398232
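The idea of estimating a gyro bias from magnetic field measurements can be reduced to a very simple sketch: compare the gyro turn rate with the turn rate implied by successive magnetometer-derived headings. This is an illustrative simplification (it assumes a homogeneous field and level motion), not the estimator evaluated in the paper.

```python
def estimate_gyro_bias(gyro_rates, mag_headings, dt):
    # Bias ~ average difference between the gyro turn rate and the turn rate
    # implied by successive magnetometer headings. Assumes an approximately
    # homogeneous magnetic field over the window, as the experiments require.
    mag_rates = [(mag_headings[i + 1] - mag_headings[i]) / dt
                 for i in range(len(mag_headings) - 1)]
    return sum(g - m for g, m in zip(gyro_rates, mag_rates)) / len(mag_rates)

# Synthetic check: true turn rate 0.1 rad/s, gyro bias 0.02 rad/s.
DT = 0.01
HEADINGS = [0.1 * DT * k for k in range(101)]   # magnetometer-derived headings
GYRO = [0.1 + 0.02 for _ in range(100)]         # biased gyro readings
BIAS_HAT = estimate_gyro_bias(GYRO, HEADINGS, DT)
```

In a perturbed field the magnetometer-implied rates become noisy or biased themselves, which is why the paper finds the process slow indoors: much longer averaging windows (or an explicit filter) are needed before the bias estimate settles.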
Inferring invasive species abundance using removal data from management actions.
Davis, Amy J; Hooten, Mevin B; Miller, Ryan S; Farnsworth, Matthew L; Lewis, Jesse; Moxcey, Michael; Pepin, Kim M
2016-10-01
Evaluation of the progress of management programs for invasive species is crucial for demonstrating impacts to stakeholders and for strategic planning of resource allocation. Estimates of abundance before and after management activities can serve as a useful metric of population management programs. However, many methods of estimating population size are too labor intensive and costly to implement, posing restrictive levels of burden on operational programs. Removal models are a reliable method for estimating abundance before and after management using data from the removal activities exclusively, thus requiring no work in addition to management. We developed a Bayesian hierarchical model to estimate abundance from removal data accounting for varying levels of effort, and used simulations to assess the conditions under which reliable population estimates are obtained. We applied this model to estimate site-specific abundance of an invasive species, feral swine (Sus scrofa), using removal data from aerial gunning in 59 site/time-frame combinations (480-19,600 acres) throughout Oklahoma and Texas, USA. Simulations showed that abundance estimates were generally accurate when effective removal rates (removal rate accounting for total effort) were above 0.40. However, when abundances were small (<50), the effective removal rate needed to accurately estimate abundance was considerably higher (0.70). Based on our post-validation method, 78% of our site/time-frame estimates were accurate. To use this modeling framework it is important to have multiple removals (more than three) within a time frame during which demographic changes are minimized (i.e., a closed population; ≤3 months for feral swine). Our results show that the probability of accurately estimating abundance from this model improves with increased sampling effort (8+ flight hours across the 3-month window is best) and increased removal rate.
Based on the inverse relationship between inaccurate abundances and inaccurate removal rates, we suggest auxiliary information that could be collected and included in the model as covariates (e.g., habitat effects, differences between pilots) to improve accuracy of removal rates and hence abundance estimates. © 2016 by the Ecological Society of America.
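For intuition, the removal-model idea, estimating abundance from the declining catch across successive passes, can be sketched with a simple maximum-likelihood version; the Bayesian hierarchical model in the study additionally accounts for varying effort. The catch numbers below are invented.

```python
import math

def removal_loglik(n_total, p, catches):
    # Log-likelihood of sequential removals: each pass removes a
    # Binomial(N_remaining, p) draw from the remaining population.
    remaining, ll = n_total, 0.0
    for c in catches:
        if c > remaining:
            return float("-inf")
        ll += (math.lgamma(remaining + 1) - math.lgamma(c + 1)
               - math.lgamma(remaining - c + 1)
               + c * math.log(p) + (remaining - c) * math.log(1.0 - p))
        remaining -= c
    return ll

def removal_mle(catches, n_max=1000):
    # Grid-search maximum likelihood over (N, p): a simple stand-in for the
    # Bayesian hierarchical model described above.
    best = (None, None, float("-inf"))
    for n in range(sum(catches), n_max + 1):
        for p in [i / 100.0 for i in range(1, 100)]:
            ll = removal_loglik(n, p, catches)
            if ll > best[2]:
                best = (n, p, ll)
    return best[:2]

# Three aerial-gunning passes with geometrically declining catches
# (consistent with N ~ 300 animals and removal rate ~ 0.4):
N_HAT, P_HAT = removal_mle([120, 72, 43])
```

The geometric decline in catch is what identifies both the removal rate and the starting abundance, and it also shows why low removal rates are problematic: a flat catch sequence is consistent with a wide range of abundances.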
Effect of plot and sample size on timing and precision of urban forest assessments
David J. Nowak; Jeffrey T. Walton; Jack C. Stevens; Daniel E. Crane; Robert E. Hoehn
2008-01-01
Accurate field data can be used to assess ecosystem services from trees and to improve urban forest management, yet little is known about the optimization of field data collection in the urban environment. Various field and Geographic Information System (GIS) tests were performed to help understand how time costs and precision of tree population estimates change with...
UWB Wind Turbine Blade Deflection Sensing for Wind Energy Cost Reduction.
Zhang, Shuai; Jensen, Tobias Lindstrøm; Franek, Ondrej; Eggers, Patrick C F; Olesen, Kim; Byskov, Claus; Pedersen, Gert Frølund
2015-08-12
A new application of ultra-wideband (UWB) technology to sense wind turbine blade deflections is introduced in this paper for wind energy cost reduction. The lower UWB band of 3.1-5.3 GHz is applied. On each blade there will be one UWB blade deflection sensing system, which consists of two UWB antennas at the blade root and one UWB antenna at the blade tip. The detailed topology and challenges of this deflection sensing system are addressed. Due to the complexity of the problem, this paper first realizes the on-blade UWB radio link in the simplest case, where the tip antenna is situated outside (and on the surface of) the blade tip. To investigate this case, full-blade time-domain measurements are designed and conducted under different deflections. The detailed measurement setups and results are provided. If the root and tip antenna locations are properly selected, the first pulse is always of sufficient quality for accurate estimation under different deflections. The measured results reveal that the blade tip-root distance and blade deflection can be accurately estimated in the complicated and lossy wireless channels around a wind turbine blade. Finally, some future research topics on this application are listed.
An Economic Evaluation of Food Safety Education Interventions: Estimates and Critical Data Gaps.
Zan, Hua; Lambea, Maria; McDowell, Joyce; Scharff, Robert L
2017-08-01
The economic evaluation of food safety interventions is an important tool that practitioners and policy makers use to assess the efficacy of their efforts. These evaluations are built on models that are dependent on accurate estimation of numerous input variables. In many cases, however, there is no data available to determine input values and expert opinion is used to generate estimates. This study uses a benefit-cost analysis of the food safety component of the adult Expanded Food and Nutrition Education Program (EFNEP) in Ohio as a vehicle for demonstrating how results based on variable values that are not objectively determined may be sensitive to alternative assumptions. In particular, the focus here is on how reported behavioral change is translated into economic benefits. Current gaps in the literature make it impossible to know with certainty how many people are protected by the education (what are the spillover effects?), the length of time education remains effective, and the level of risk reduction from change in behavior. Based on EFNEP survey data, food safety education led 37.4% of participants to improve their food safety behaviors. Under reasonable default assumptions, benefits from this improvement significantly outweigh costs, yielding a benefit-cost ratio of between 6.2 and 10.0. Incorporation of a sensitivity analysis using alternative estimates yields a greater range of estimates (0.2 to 56.3), which highlights the importance of future research aimed at filling these research gaps. Nevertheless, most reasonable assumptions lead to estimates of benefits that justify their costs.
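The arithmetic behind such a benefit-cost ratio, and its sensitivity to the unmeasured inputs, fits in a few lines. Apart from the 37.4% behavior-change rate quoted above, every input below is a hypothetical stand-in for the assumption-driven variables the study discusses (spillover, duration of effectiveness, value of risk reduction).

```python
def benefit_cost_ratio(participants, change_rate, household_spillover,
                       years_effective, annual_benefit_per_person, program_cost):
    # Benefits = people protected * years the education stays effective
    #            * annual value of the risk reduction.
    protected = participants * change_rate * household_spillover
    benefits = protected * years_effective * annual_benefit_per_person
    return benefits / program_cost

# Base case plus a crude sensitivity sweep over the uncertain inputs:
BASE = benefit_cost_ratio(1000, 0.374, 2.5, 3, 50.0, 20000.0)
LOW  = benefit_cost_ratio(1000, 0.374, 1.0, 1, 50.0, 20000.0)
HIGH = benefit_cost_ratio(1000, 0.374, 4.0, 5, 50.0, 20000.0)
```

Even this toy sweep reproduces the paper's qualitative point: holding the measured behavior change fixed, plausible alternative assumptions about spillover and duration move the ratio across an order of magnitude.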
Location estimation in wireless sensor networks using spring-relaxation technique.
Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M
2010-01-01
Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN). Due to the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a certain location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.
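A minimal sketch of the spring-relaxation technique: each non-anchor node is iteratively pulled along its links in proportion to the range error, which amounts to gradient descent on the squared range residuals. Node names, geometry, and step size are illustrative assumptions, not the paper's algorithm parameters.

```python
import math

def spring_relaxation(positions, anchors, ranges, step=0.1, iters=500):
    # positions: dict node -> [x, y] initial guesses; anchors: fixed nodes;
    # ranges: dict (i, j) -> measured distance (RSSI-derived in practice).
    pos = {k: list(v) for k, v in positions.items()}
    for _ in range(iters):
        for i in pos:
            if i in anchors:
                continue
            fx = fy = 0.0
            for (a, b), d_meas in ranges.items():
                if i not in (a, b):
                    continue
                j = b if a == i else a
                dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
                dist = math.hypot(dx, dy) or 1e-9
                # Spring force proportional to the range error, along the link:
                # pulls the node in when too far, pushes out when too close.
                f = (dist - d_meas) / dist
                fx += f * dx
                fy += f * dy
            pos[i][0] += step * fx
            pos[i][1] += step * fy
    return pos

# Three anchors and one unknown node whose true position is (1, 1):
ANCHORS = {"A", "B", "C"}
POS0 = {"A": [0.0, 0.0], "B": [2.0, 0.0], "C": [0.0, 2.0], "U": [0.8, 0.6]}
SQRT2 = math.sqrt(2.0)
RANGES = {("U", "A"): SQRT2, ("U", "B"): SQRT2, ("U", "C"): SQRT2}
POS = spring_relaxation(POS0, ANCHORS, RANGES)
```

In the cooperative setting the same update also runs over links between unknown nodes, which is what lets a low number of anchors localize the whole network.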
Resource Consumption of a Diffusion Model for Prevention Programs: The PROSPER Delivery System
Crowley, Daniel M.; Jones, Damon E.; Greenberg, Mark T.; Feinberg, Mark E.; Spoth, Richard L.
2012-01-01
Purpose To prepare public systems to implement evidence-based prevention programs for adolescents, it is necessary to have accurate estimates of programs’ resource consumption. When evidence-based programs are implemented through a specialized prevention delivery system, additional costs may be incurred during cultivation of the delivery infrastructure. Currently, there is limited research on the resource consumption of such delivery systems and programs. In this article, we describe the resource consumption of implementing the PROSPER (PROmoting School–Community–University Partnerships to Enhance Resilience) delivery system for a period of 5 years in one state, and how the financial and economic costs of its implementation affect local communities as well as the Cooperative Extension and University systems. Methods We used a six-step framework for conducting cost analysis, using a Cost–Procedure–Process–Outcome Analysis model (Yates, Analyzing costs, procedures, processes, and outcomes in human services: An introduction, 1996; Yates, 2009). This method entails defining the delivery system; bounding cost parameters; identifying, quantifying, and valuing systemic resource consumption; and conducting sensitivity analysis of the cost estimates. Results Our analyses estimated both the financial and economic costs of the PROSPER delivery system. Evaluation of PROSPER illustrated how costs vary over time depending on the primacy of certain activities (e.g., team development, facilitator training, program implementation). Additionally, this work describes how the PROSPER model cultivates a complex resource infrastructure and provides preliminary evidence of systemic efficiencies. Conclusions This work highlights the need to study the costs of diffusion across time and broadens definitions of what is essential for successful implementation. In particular, cost analyses offer innovative methodologies for analyzing the resource needs of prevention systems. PMID:22325131
Probabilistic/Fracture-Mechanics Model For Service Life
NASA Technical Reports Server (NTRS)
Watkins, T., Jr.; Annis, C. G., Jr.
1991-01-01
Computer program makes probabilistic estimates of lifetime of engine and components thereof. Developed to fill the need for a more accurate life-assessment technique that avoids errors in estimated lives and provides for statistical assessment of the levels of risk created by engineering decisions in designing the system. Implements mathematical model combining techniques of statistics, fatigue, fracture mechanics, nondestructive analysis, life-cycle cost analysis, and management of engine parts. Used to investigate effects of such engine-component life-controlling parameters as return-to-service intervals, stresses, capabilities for nondestructive evaluation, and qualities of materials.
Gilliland, Jason; Clark, Andrew F; Kobrzynski, Marta; Filler, Guido
2015-07-01
Childhood obesity is a critical public health matter associated with numerous pediatric comorbidities. Local-level data are required to monitor obesity and to help administer prevention efforts when and where they are most needed. Using data from 2007 to 2013, we hypothesized that samples of children visiting hospital clinics could provide representative local population estimates of childhood obesity. Such data might provide more accurate, timely, and cost-effective obesity estimates than national surveys. Results revealed that our hospital-based sample could not serve as a population surrogate. Further research is needed to confirm this finding.
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1985-01-01
Research activities conducted under the auspices of National Aeronautics and Space Administration Cooperative Agreement NCC 9-9 are discussed. During this contract period, research efforts were concentrated in two primary areas. The first area is an investigation of the use of measurement error models as alternatives to least squares regression estimators of crop production or timber biomass. The second primary area of investigation is the estimation of the mixing proportion of two-component mixture models. This report lists publications, technical reports, submitted manuscripts, and oral presentations generated by these research efforts. Possible areas of future research are mentioned.
Graphical Models for Ordinal Data
Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji
2014-01-01
A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267
Johnston, Stephen S; Salkever, David S; Ialongo, Nicholas S; Slade, Eric P; Stuart, Elizabeth A
2017-11-01
When candidates for school-based preventive interventions are heterogeneous in their risk of poor outcomes, an intervention's expected economic net benefits may be maximized by targeting candidates for whom the intervention is most likely to yield benefits, such as those at high risk of poor outcomes. Although increasing amounts of information about candidates may facilitate more accurate targeting, collecting information can be costly. We present an illustrative example to show how cost-benefit analysis results from effective intervention demonstrations can help us to assess whether improved targeting accuracy justifies the cost of collecting additional information needed to make this improvement.
Methods for estimating magnitude and frequency of peak flows for natural streams in Utah
Kenney, Terry A.; Wilkowske, Chris D.; Wright, Shane J.
2007-01-01
Estimates of the magnitude and frequency of peak streamflows are critical for the safe and cost-effective design of hydraulic structures and stream crossings, and for accurate delineation of flood plains. Engineers, planners, resource managers, and scientists need accurate estimates of peak-flow return frequencies for locations on streams with and without streamflow-gaging stations. The 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were estimated for 344 unregulated U.S. Geological Survey streamflow-gaging stations in Utah and nearby in bordering states. These data, along with 23 basin and climatic characteristics computed for each station, were used to develop regional peak-flow frequency and magnitude regression equations for 7 geohydrologic regions of Utah. These regression equations can be used to estimate the magnitude and frequency of peak flows for natural streams in Utah within the presented range of predictor variables. Uncertainty, presented as the average standard error of prediction, was computed for each developed equation. Equations developed using data from more than 35 gaging stations had standard errors of prediction that ranged from 35 to 108 percent, and errors for equations developed using data from fewer than 35 gaging stations ranged from 50 to 357 percent.
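Applying such a regional regression equation is mechanical once the coefficients are published. A sketch with invented coefficients (NOT the published Utah values) shows the multiplicative form these equations typically take and how the average standard error of prediction brackets the estimate.

```python
def peak_flow_100yr(area_mi2, precip_in, a=85.0, b=0.62, c=0.35):
    # Hypothetical regression of the usual multiplicative form:
    # Q100 = a * A^b * P^c (cfs), with A = drainage area (mi^2) and
    # P = mean annual precipitation (in). Coefficients are placeholders.
    return a * area_mi2 ** b * precip_in ** c

def prediction_band(q_cfs, se_percent):
    # Crude one-sigma band implied by the average standard error of prediction.
    return q_cfs * (1.0 - se_percent / 100.0), q_cfs * (1.0 + se_percent / 100.0)

# Hypothetical ungaged basin: 120 mi^2, 25 in mean annual precipitation,
# in a region whose equation carries a 50 percent standard error.
Q100 = peak_flow_100yr(120.0, 25.0)
BAND = prediction_band(Q100, 50.0)
```

The wide band is the practical message of the reported 35 to 357 percent errors: the point estimate alone is not enough for design, and the equations are only valid within the range of predictor variables used to fit them.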
IVF cycle cost estimation using Activity Based Costing and Monte Carlo simulation.
Cassettari, Lucia; Mosca, Marco; Mosca, Roberto; Rolando, Fabio; Costa, Mauro; Pisaturo, Valerio
2016-03-01
The authors present a new methodological approach, in a stochastic regime, to determine the actual costs of a healthcare process. The paper specifically shows the application of the methodology to the determination of the cost of an Assisted Reproductive Technology (ART) treatment in Italy. The motivation for this research is that a deterministic approach is inadequate for an accurate estimate of the cost of this particular treatment: the durations of the different activities involved are not fixed and are described by frequency distributions. Hence the need to determine, in addition to the mean value of the cost, the interval within which it can be expected to vary with a known confidence level. The cost obtained for each type of cycle investigated (in vitro fertilization and embryo transfer, with or without intracytoplasmic sperm injection) consequently shows tolerance intervals around the mean value that are sufficiently narrow to make the data statistically robust and therefore usable as a reference for benchmarking with other countries. Methodologically, the approach is rigorous: Activity Based Costing was used to determine the cost of the individual activities of the process, and Monte Carlo simulation, with control of experimental error, was used to construct the tolerance intervals on the final result.
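The ABC-plus-Monte-Carlo idea can be sketched compactly: each activity gets a duration distribution and an hourly rate, and simulation yields both the mean cycle cost and a tolerance interval. The activity list, triangular distributions, and rates below are invented for illustration, not the study's Italian cost data.

```python
import random, statistics

# Hypothetical ART-cycle activities: (min, mode, max) duration in hours and
# an hourly cost rate in euros. All numbers are illustrative.
ACTIVITIES = {
    "first visit":         ((0.5, 1.0, 2.0), 120.0),
    "hormonal monitoring": ((2.0, 3.0, 5.0), 90.0),
    "oocyte retrieval":    ((0.5, 0.8, 1.5), 400.0),
    "lab fertilization":   ((1.0, 1.5, 3.0), 250.0),
    "embryo transfer":     ((0.3, 0.5, 1.0), 300.0),
}

def simulate_cycle_cost(rng):
    # Activity Based Costing step: cost = sum of (sampled duration * rate).
    return sum(rng.triangular(lo, hi, mode) * rate
               for (lo, mode, hi), rate in ACTIVITIES.values())

def monte_carlo(n=20000, seed=42):
    # Monte Carlo step: simulate many cycles, report mean and a 95% interval.
    rng = random.Random(seed)
    costs = sorted(simulate_cycle_cost(rng) for _ in range(n))
    mean = statistics.fmean(costs)
    lo, hi = costs[int(0.025 * n)], costs[int(0.975 * n)]
    return mean, (lo, hi)

MEAN, INTERVAL = monte_carlo()
```

Reporting the interval alongside the mean is exactly the paper's point: with stochastic activity durations, a single deterministic cost figure hides the spread that matters for benchmarking.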
Hawkins, H; Langer, J; Padua, E; Reaves, J
2001-06-01
Activity-based costing (ABC) is a process that enables the estimation of the cost of producing a product or service. More accurate than traditional charge-based approaches, it emphasizes analysis of processes, and more specific identification of both direct and indirect costs. This accuracy is essential in today's healthcare environment, in which managed care organizations necessitate responsible and accountable costing. However, to be successfully utilized, it requires time, effort, expertise, and support. Data collection can be tedious and expensive. By integrating ABC with information management (IM) and systems (IS), organizations can take advantage of the process orientation of both, extend and improve ABC, and decrease resource utilization for ABC projects. In our case study, we have examined the process of a multidisciplinary breast center. We have mapped the constituent activities and established cost drivers. This information has been structured and included in our information system database for subsequent analysis.
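The core ABC roll-up (cost-driver rates multiplied by the driver volumes consumed per unit of output) reduces to a simple sum. A toy sketch with hypothetical activities and rates for a breast-center visit:

```python
def abc_cost(activity_rates, driver_volumes):
    """Roll up an activity-based cost: each activity has a cost-driver
    rate (cost per driver unit); multiply by the driver volume consumed
    by one unit of output and sum. All figures are hypothetical."""
    return sum(activity_rates[a] * driver_volumes[a] for a in activity_rates)

rates = {"mammogram_read": 45.0, "biopsy": 220.0, "consult": 90.0}
volumes = {"mammogram_read": 1, "biopsy": 0.3, "consult": 1}  # per patient
cost_per_patient = abc_cost(rates, volumes)  # ~201.0
```

Integrating this with an information system, as the abstract describes, amounts to sourcing the rates and volumes from the process database rather than collecting them by hand.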
NASA Astrophysics Data System (ADS)
O'Shaughnessy, Richard; Blackman, Jonathan; Field, Scott E.
2017-07-01
The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to expense in waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. Our study is the first time these models have been used for parameter estimation and one of the first ever parameter estimation calculations with multi-modal numerical relativity waveforms, which include all ℓ ≤ 4 modes. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or the computation of the coalescing-compact-binary detection statistic.
NASA Astrophysics Data System (ADS)
Dutta, D.; Drewry, D.; Johnson, W. R.
2017-12-01
The surface temperature of plant canopies is an important indicator of the stomatal regulation of plant water use and the associated water flux from plants to atmosphere (evapotranspiration (ET)). Remotely sensed thermal observations using compact, low-cost, lightweight sensors from small unmanned aerial systems (sUAS) have the potential to provide surface temperature (ST) and ET estimates at unprecedented spatial and temporal resolutions, allowing us to characterize the intra-field diurnal variations in canopy ST and ET for a variety of vegetation systems. However, major challenges exist for obtaining accurate surface temperature estimates from low-cost uncooled microbolometer-type sensors. Here we describe the development of calibration methods using thermal chamber experiments, taking into account the ambient optics and sensor temperatures, and applying simple models of spatial non-uniformity correction to the sensor focal-plane-array. We present a framework that can be used to derive accurate surface temperatures using radiometric observations from low-cost sensors, and demonstrate this framework using a sUAS-mounted sensor across a diverse set of calibration and vegetation targets. Further, we demonstrate the use of the Surface Temperature Initiated Closure (STIC) model for computing spatially explicit, high spatial resolution ET estimates across several well-monitored agricultural systems, as driven by sUAS acquired surface temperatures. STIC provides a physically-based surface energy balance framework for the simultaneous retrieval of the surface and atmospheric vapor conductances and surface energy fluxes, by physically integrating radiometric surface temperature information into the Penman-Monteith equation. Results of our analysis over agricultural systems in Ames, IA and Davis, CA demonstrate the power of this approach for quantifying the intra-field spatial variability in the diurnal cycle of plant water use at sub-meter resolutions.
Global mortality consequences of climate change accounting for adaptation costs and benefits
NASA Astrophysics Data System (ADS)
Rising, J. A.; Jina, A.; Carleton, T.; Hsiang, S. M.; Greenstone, M.
2017-12-01
Empirically-based and plausibly causal estimates of the damages of climate change are greatly needed to inform rapidly developing global and local climate policies. To accurately reflect the costs of climate change, it is essential to estimate how much populations will adapt to a changing climate, yet adaptation remains one of the least understood aspects of social responses to climate. In this paper, we develop and implement a novel methodology to estimate climate impacts on mortality rates. We assemble comprehensive sub-national panel data in 41 countries that account for 56% of the world's population, and combine them with high resolution daily climate data to flexibly estimate the causal effect of temperature on mortality. We find the impacts of temperature on mortality have a U-shaped response; both hot days and cold days cause excess mortality. However, this average response obscures substantial heterogeneity, as populations are differentially adapted to extreme temperatures. Our empirical model allows us to extrapolate response functions across the entire globe, as well as across time, using a range of economic, population, and climate change scenarios. We also develop a methodology to capture not only the benefits of adaptation, but also its costs. We combine these innovations to produce the first causal, micro-founded, global, empirically-derived climate damage function for human health. We project that by 2100, business-as-usual climate change is likely to incur mortality-only costs that amount to approximately 5% of global GDP for 5°C of warming above pre-industrial levels. On average across model runs, we estimate that the upper bound on adaptation costs amounts to 55% of the total damages.
Remote sensing for grassland management in the arid Southwest
Marsett, R.C.; Qi, J.; Heilman, P.; Biedenbender, S.H.; Watson, M.C.; Amer, S.; Weltz, M.; Goodrich, D.; Marsett, R.
2006-01-01
We surveyed a group of rangeland managers in the Southwest about vegetation monitoring needs on grassland. Based on their responses, the objective of the RANGES (Rangeland Analysis Utilizing Geospatial Information Science) project was defined to be the accurate conversion of remotely sensed data (satellite imagery) to quantitative estimates of total (green and senescent) standing cover and biomass on grasslands and semidesert grasslands. Although remote sensing has been used to estimate green vegetation cover, in arid grasslands herbaceous vegetation is senescent much of the year and is not detected by current remote sensing techniques. We developed a ground truth protocol compatible with both range management requirements and Landsat's 30 m resolution imagery. The resulting ground-truth data were then used to develop image processing algorithms that quantified total herbaceous vegetation cover, height, and biomass. Cover was calculated based on a newly developed Soil Adjusted Total Vegetation Index (SATVI), and height and biomass were estimated based on reflectance in the near infrared (NIR) band. Comparison of the remotely sensed estimates with independent ground measurements produced r2 values of 0.80, 0.85, and 0.77 and Nash Sutcliffe values of 0.78, 0.70, and 0.77 for the cover, plant height, and biomass, respectively. The approach for estimating plant height and biomass did not work for sites where forbs comprised more than 30% of total vegetative cover. The ground reconnaissance protocol and image processing techniques together offer land managers accurate and timely methods for monitoring extensive grasslands. The time-consuming requirement to collect concurrent data in the field for each image implies a need to share the high fixed costs of processing an image across multiple users to reduce the costs for individual rangeland managers.
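For reference, the SATVI combines a soil-adjusted SWIR1/red ratio with a SWIR2 correction term. The sketch below uses the functional form reported by Marsett et al. (2006) with the common soil-adjustment factor L = 0.5; the reflectance values are illustrative:

```python
def satvi(red, swir1, swir2, L=0.5):
    """Soil Adjusted Total Vegetation Index (form reported by Marsett
    et al. 2006): a soil-adjusted SWIR1/red ratio minus half the SWIR2
    reflectance. Inputs are surface reflectances in [0, 1]."""
    return ((swir1 - red) / (swir1 + red + L)) * (1.0 + L) - swir2 / 2.0

# Illustrative band reflectances for a senescent-grass pixel
value = satvi(red=0.12, swir1=0.25, swir2=0.18)
```

The use of SWIR bands is what lets the index respond to senescent as well as green herbaceous cover, which NDVI-style red/NIR indices miss.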
NASA Astrophysics Data System (ADS)
Watkinson, Catherine A.; Majumdar, Suman; Pritchard, Jonathan R.; Mondal, Rajesh
2017-12-01
In this paper, we establish the accuracy and robustness of a fast estimator for the bispectrum - the 'FFT-bispectrum estimator'. The implementation of the estimator presented here offers speed and simplicity benefits over a direct-measurement approach. We also generalize the derivation so it may easily be applied to any order of polyspectra, such as the trispectrum, at the cost of only a handful of Fast Fourier Transforms (FFTs). All lower-order statistics can also be calculated simultaneously for little extra cost. To test the estimator, we make use of a non-linear density field, and for a more strongly non-Gaussian test case, we use a toy model of reionization in which ionized bubbles at a given redshift are all of equal size and are randomly distributed. Our tests find that the FFT-estimator remains accurate over a wide range of k, and so should be extremely useful for analysis of 21-cm observations. The speed of the FFT-bispectrum estimator makes it suitable for sampling applications, such as Bayesian inference. The algorithm we describe should prove valuable in the analysis of simulations and observations, and, whilst we apply it within the field of cosmology, the estimator is useful in any field that deals with non-Gaussian data.
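The shell-filtering idea behind an FFT-based bispectrum estimator can be illustrated in one dimension: band-filter the field into k-bins in Fourier space, inverse-transform each bin, and sum the real-space triple product. A minimal numpy sketch of the approach (not the authors' implementation, and with the overall normalisation constant omitted):

```python
import numpy as np

def fft_bispectrum(delta, bins):
    """FFT-based bispectrum estimate, 1D sketch. Each entry of `bins`
    is a (kmin, kmax) range of integer wavenumbers; returns the estimate
    for the triangle (bins[0], bins[1], bins[2]) up to normalisation."""
    n = delta.size
    dk = np.fft.fft(delta)
    k = np.abs(np.fft.fftfreq(n, d=1.0 / n))  # integer wavenumbers
    shells = []
    for kmin, kmax in bins:
        mask = (k >= kmin) & (k < kmax)        # keep only this k-shell
        shells.append(np.fft.ifft(dk * mask).real)
    # Real-space sum of the triple product replaces the direct sum
    # over closed Fourier triangles, at the cost of a few FFTs.
    return np.sum(shells[0] * shells[1] * shells[2]) / n

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
b = fft_bispectrum(x, [(2, 6), (6, 10), (8, 14)])
```

The cost is three FFTs per triangle configuration regardless of the number of modes, which is the source of the speed advantage over direct measurement.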
Woolcott, J C; Khan, K M; Mitrovic, S; Anis, A H; Marra, C A
2012-05-01
We prospectively collected data on elderly fallers to estimate the total cost of a fall requiring an Emergency Department presentation. Using the data collected, we found that the average cost per fall causing an Emergency Department presentation was $11,408. When hospitalization was required, the average cost per fall was $29,363. For elderly persons, falls are a major source of mortality, morbidity, and disability. Previous Canadian cost estimates of seniors' falls were based upon administrative data that have been shown to underestimate the incidence of falls. Our objective was to use a labor-intensive, direct-observation patient-tracking method to accurately estimate the total cost of falls among seniors who presented to a major urban Emergency Department (ED) in Canada. We prospectively collected data from seniors (>70 years) presenting to the Vancouver General Hospital ED after a fall. We excluded individuals who were cognitively impaired or unable to read/write English. Data were collected on the care provided, including physician assessments/consultations, radiology and laboratory tests, ED/hospital time, rehabilitation facility time, and in-hospital procedures. Unit costs of health resources were taken from a fully allocated hospital cost model. Data were collected on 101 fall-related ED presentations. The most common diagnoses were fractures (n = 33) and lacerations (n = 11). The mean cost of a fall causing ED presentation was $11,408 (SD: $19,655). Thirty-eight fallers had injuries requiring hospital admission, with an average total cost of $29,363 (SD: $22,661). Hip fractures cost $39,507 (SD: $17,932). Among the 62 individuals not admitted to the hospital, the average cost of their ED visit was $674 (SD: $429). Among the growing population of Canadian seniors, falls have substantial costs. With the cost of a fall-related hospitalization approaching $30,000, there is an increased need for fall prevention programs.
Humans, 'things' and space: costing hospital infection control interventions.
Page, K; Graves, N; Halton, K; Barnett, A G
2013-07-01
Previous attempts at costing infection control programmes have tended to focus on accounting costs rather than economic costs. For studies using economic costs, estimates tend to be quite crude and probably underestimate the true cost. One of the largest costs of any intervention is staff time, but this cost is difficult to quantify and has been largely ignored in previous attempts. To design and evaluate the costs of hospital-based infection control interventions or programmes. This article also discusses several issues to consider when costing interventions, and suggests strategies for overcoming these issues. Previous literature and techniques in both health economics and psychology are reviewed and synthesized. This article provides a set of generic, transferable costing guidelines. Key principles such as definition of study scope and focus on large costs, as well as pitfalls (e.g. overconfidence and uncertainty), are discussed. These new guidelines can be used by hospital staff and other researchers to cost their infection control programmes and interventions more accurately. Copyright © 2013 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jolly, A. D.; Chardot, L.; Sherburn, S.; Cole-Baker, J.; Scott, B. J.; Fournier, N.; Neuberg, J. N.
2012-04-01
Obtaining estimates of the seismic velocity and attenuation structure of volcanic systems is considered valuable from a monitoring perspective but can be extremely costly and time consuming due to the potential environmental impacts, safety issues and the permitting process. Here, we present an easy, low cost and environmentally benign alternative whereby the shallow velocity and attenuation structure can be obtained via high impact sandbag drops from helicopter. We conducted such a sandbag drop experiment at White Island volcano on 23 September 2011, during the final stage of a 6 month deployment of 14 broadband seismometers. Three drops were attempted, two at either end of a 5 station linear array within the crater floor, and the third within the volcano's shallow active acid crater lake. The bags were dropped from ~400 m height and contained ~700 kg of fine beach sand held within nylon sacks having a volume capacity of ~2.0 m^3. The impact velocity was estimated at ~70 m/s yielding a kinetic energy of about 10^6 to 10^7 N·m. The source position was established by GPS on the resulting impact crater and was accurate to within ~5 m. The lake drop position was estimated from video footage relative to known ground features and was accurate to ~30 m. Impact timing was achieved by drop placement close to, but not on, the nearby seismometer recording systems. For the crater floor drops the timing was constrained to within ~0.05 s based on distance from the closest stations. The low kinetic energy and strong attenuation of the crater floor meant that strong first-P arrival times were limited to an area within ~1 km of the impact position. We obtained a rough velocity estimate of about 1.0-1.5 km/s for the unconsolidated crater floor and a velocity of ~1.5-2.0 km/s for rays traversing mostly through the consolidated rocks comprising the crater walls. Attenuation was found to be generally very strong (Q < 10) for both consolidated and unconsolidated parts of the volcano.
Results show that low-cost sand bag drops can be viably used to determine shallow near surface velocity and attenuation structure in volcanic environments where use of other active source methods may be problematic due to environmental, permitting or cost issues.
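The quoted impact energy follows directly from the drop parameters via E = mv^2/2; a quick check, which also shows that air resistance reduced the impact speed well below the drag-free value:

```python
# Impact kinetic energy of the sandbag source described above:
# m ≈ 700 kg, estimated impact velocity v ≈ 70 m/s.
m, v = 700.0, 70.0
energy = 0.5 * m * v ** 2      # ≈ 1.7e6 J, within the quoted 10^6-10^7 range

# Drag-free fall from the ~400 m drop height would give
# v_free = sqrt(2 g h) ≈ 89 m/s, so drag on the sack is significant.
g, h = 9.81, 400.0
v_free = (2 * g * h) ** 0.5
```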
FOCIS: A forest classification and inventory system using LANDSAT and digital terrain data
NASA Technical Reports Server (NTRS)
Strahler, A. H.; Franklin, J.; Woodcook, C. E.; Logan, T. L.
1981-01-01
Accurate, cost-effective stratification of forest vegetation and timber inventory is the primary goal of a Forest Classification and Inventory System (FOCIS). Conventional timber stratification using photointerpretation can be time-consuming, costly, and inconsistent from analyst to analyst. FOCIS was designed to overcome these problems by using machine processing techniques to extract and process tonal, textural, and terrain information from registered LANDSAT multispectral and digital terrain data. Comparison of samples from timber strata identified by FOCIS and by conventional procedures showed that both have about the same potential to reduce the variance of timber volume estimates over simple random sampling.
Pre-Packaged Commercial PACE Financing Solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallander, Michael
The objective of this project was to demonstrate a more streamlined method for facilitating commercial property assessed clean energy (PACE) retrofits. The Recipient aimed to prove that energy efficiency performance of simple, pre-packaged technologies (e.g., lighting and heating, ventilation and air conditioning (HVAC)) can be accurately estimated without the need for a detailed energy audit. A successful project would inspire consumer confidence in undertaking cost-effective retrofits.
Accurate Orientation Estimation Using AHRS under Conditions of Magnetic Distortion
Yadav, Nagesh; Bleakley, Chris
2014-01-01
Low cost, compact attitude heading reference systems (AHRS) are now being used to track human body movements in indoor environments by estimation of the 3D orientation of body segments. In many of these systems, heading estimation is achieved by monitoring the strength of the Earth's magnetic field. However, the Earth's magnetic field can be locally distorted due to the proximity of ferrous and/or magnetic objects. Herein, we propose a novel method for accurate 3D orientation estimation using an AHRS, comprised of an accelerometer, gyroscope and magnetometer, under conditions of magnetic field distortion. The system performs online detection and compensation for magnetic disturbances, due to, for example, the presence of ferrous objects. The magnetic distortions are detected by exploiting variations in magnetic dip angle, relative to the gravity vector, and in magnetic strength. We investigate and show the advantages of using both magnetic strength and magnetic dip angle for detecting the presence of magnetic distortions. The correction method is based on a particle filter, which performs the correction using an adaptive cost function and by adapting the variance during particle resampling, so as to place more emphasis on the results of dead reckoning of the gyroscope measurements and less on the magnetometer readings. The proposed method was tested in an indoor environment in the presence of various magnetic distortions and under various accelerations (up to 3 g). In the experiments, the proposed algorithm achieves <2° static peak-to-peak error and <5° dynamic peak-to-peak error, significantly outperforming previous methods. PMID:25347584
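The detection step described above, flagging magnetometer samples whose field magnitude or dip angle deviates from reference values, can be sketched as follows. The thresholds and reference values are illustrative, and the dip-angle convention is one of several in use:

```python
import math

def magnetic_disturbance(mag, accel, ref_norm, ref_dip_deg,
                         norm_tol=0.1, dip_tol_deg=5.0):
    """Flag a magnetometer sample as distorted when either the field
    magnitude or the dip angle (measured against the gravity direction
    reported by the accelerometer) deviates from reference values."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(a): return math.sqrt(dot(a, a))

    m_norm = norm(mag)
    cos_angle = dot(mag, accel) / (m_norm * norm(accel))
    angle = math.acos(max(-1.0, min(1.0, cos_angle)))
    dip_deg = 90.0 - math.degrees(angle)          # one common convention
    bad_norm = abs(m_norm - ref_norm) / ref_norm > norm_tol
    bad_dip = abs(dip_deg - ref_dip_deg) > dip_tol_deg
    return bad_norm or bad_dip

# Undisturbed sample: field of roughly reference strength at the
# reference dip angle relative to gravity.
clean = magnetic_disturbance(mag=(0.0, 35.3, -35.3),
                             accel=(0.0, 0.0, -9.81),
                             ref_norm=50.0, ref_dip_deg=45.0)
```

Using both tests matters because a ferrous object can distort the field's direction while leaving its magnitude nearly unchanged, and vice versa.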
Fraga, Rafael de; Stow, Adam J; Magnusson, William E; Lima, Albertina P
2014-01-01
Studies leading to decision-making for environmental licensing often fail to provide accurate estimates of diversity. Measures of snake diversity are regularly obtained to assess development impacts in the rainforests of the Amazon Basin, but this taxonomic group may be subject to poor detection probabilities. Recently, the Brazilian government tried to standardize sampling designs by the implementation of a system (RAPELD) to quantify biological diversity using spatially-standardized sampling units. Consistency in sampling design allows the detection probabilities to be compared among taxa, and sampling effort and associated cost to be evaluated. The cost effectiveness of detecting snakes has received no attention in Amazonia. Here we tested the effects of reducing sampling effort on estimates of species densities and assemblage composition. We identified snakes in seven plot systems, each standardised with 14 plots. The 250 m long centre line of each plot followed an altitudinal contour. Surveys were repeated four times in each plot and detection probabilities were estimated for the 41 species encountered. Reducing the number of observations, or the size of the sampling modules, caused significant loss of information on species densities and local patterns of variation in assemblage composition. We estimated the cost to find a snake as US$120, but general linear models indicated the possibility of identifying differences in assemblage composition for half the overall survey costs. Decisions to reduce sampling effort depend on the importance of lost information to the target issues, and may not be the preferred option if there is the potential for identifying individual snake species requiring specific conservation actions. However, in most studies of human disturbance on species assemblages, it is likely to be more cost-effective to focus on other groups of organisms with higher detection probabilities.
A cascaded two-step Kalman filter for estimation of human body segment orientation using MEMS-IMU.
Zihajehzadeh, S; Loh, D; Lee, M; Hoskinson, R; Park, E J
2014-01-01
Orientation of human body segments is an important quantity in many biomechanical analyses. To get robust and drift-free 3-D orientation, raw data from miniature body worn MEMS-based inertial measurement units (IMU) should be blended in a Kalman filter. Aiming at less computational cost, this work presents a novel cascaded two-step Kalman filter orientation estimation algorithm. Tilt angles are estimated in the first step of the proposed cascaded Kalman filter. The estimated tilt angles are passed to the second step of the filter for yaw angle calculation. The orientation results are benchmarked against the ones from a highly accurate tactical grade IMU. Experimental results reveal that the proposed algorithm provides robust orientation estimation in both kinematically and magnetically disturbed conditions.
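A heavily simplified sketch of the cascade: one scalar Kalman filter per angle, with the yaw filter consuming the step-1 tilt estimates for magnetometer tilt compensation. This illustrates the two-step structure only, not the authors' filter; the heading formula uses one common sign convention:

```python
import math

class ScalarKalman:
    """Minimal 1-state Kalman filter for an angle: predict with a gyro
    rate, correct with an absolute angle measurement."""
    def __init__(self, q, r, angle=0.0, p=1.0):
        self.q, self.r, self.x, self.p = q, r, angle, p
    def step(self, gyro_rate, meas, dt):
        self.x += gyro_rate * dt           # predict from gyro
        self.p += self.q
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (meas - self.x)      # correct with measurement
        self.p *= (1.0 - k)
        return self.x

def accel_tilt(ax, ay, az):
    """Roll and pitch (rad) from the measured gravity direction."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

def mag_heading(mx, my, mz, roll, pitch):
    """Tilt-compensated heading (one common sign convention)."""
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return math.atan2(-yh, xh)

# Cascade: step 1 estimates tilt; step 2 uses that tilt to form the
# magnetometer heading measurement for the yaw filter.
roll_kf, pitch_kf, yaw_kf = (ScalarKalman(1e-4, 1e-2) for _ in range(3))
roll_m, pitch_m = accel_tilt(0.0, 0.0, 9.81)          # level IMU
roll = roll_kf.step(0.0, roll_m, 0.01)
pitch = pitch_kf.step(0.0, pitch_m, 0.01)
yaw = yaw_kf.step(0.0, mag_heading(20.0, 0.0, -40.0, roll, pitch), 0.01)
```

Splitting tilt and yaw into two small filters, rather than one coupled quaternion filter, is what reduces the computational cost.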
Improved False Discovery Rate Estimation Procedure for Shotgun Proteomics.
Keich, Uri; Kertesz-Farkas, Attila; Noble, William Stafford
2015-08-07
Interpreting the potentially vast number of hypotheses generated by a shotgun proteomics experiment requires a valid and accurate procedure for assigning statistical confidence estimates to identified tandem mass spectra. Despite the crucial role such procedures play in most high-throughput proteomics experiments, the scientific literature has not reached a consensus about the best confidence estimation methodology. In this work, we evaluate, using theoretical and empirical analysis, four previously proposed protocols for estimating the false discovery rate (FDR) associated with a set of identified tandem mass spectra: two variants of the target-decoy competition protocol (TDC) of Elias and Gygi and two variants of the separate target-decoy search protocol of Käll et al. Our analysis reveals significant biases in the two separate target-decoy search protocols. Moreover, the one TDC protocol that provides an unbiased FDR estimate among the target PSMs does so at the cost of forfeiting a random subset of high-scoring spectrum identifications. We therefore propose the mix-max procedure to provide unbiased, accurate FDR estimates in the presence of well-calibrated scores. The method avoids biases associated with the two separate target-decoy search protocols and also avoids the propensity for target-decoy competition to discard a random subset of high-scoring target identifications.
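For context, the standard target-decoy competition FDR estimate (the baseline the mix-max procedure improves on) is a simple count ratio: decoy matches above the score threshold estimate the number of false target matches. A minimal sketch with made-up scores:

```python
def tdc_fdr(target_scores, decoy_scores, threshold):
    """Target-decoy competition FDR estimate at a score threshold.
    Decoys passing the threshold stand in for false targets; the +1
    in the numerator makes the estimate conservative."""
    decoys = sum(s >= threshold for s in decoy_scores)
    targets = sum(s >= threshold for s in target_scores)
    return min(1.0, (decoys + 1) / max(targets, 1))

fdr = tdc_fdr(target_scores=[9.1, 8.4, 7.7, 6.2, 5.0, 3.3],
              decoy_scores=[4.0, 3.1, 2.2],
              threshold=5.0)
```

The biases analyzed in the paper concern how target and decoy searches are combined, not this counting step itself.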
The burden of prenatal exposure to alcohol: revised measurement of cost.
Stade, Brenda; Ali, Alaa; Bennett, Dainel; Campbell, Douglas; Johnston, Mary; Lens, Cynthia; Tran, Sofia; Koren, Gideon
2009-01-01
In Canada the incidence of Fetal Alcohol Spectrum Disorder (FASD) is estimated to be 1 in 100 live births. FASD is the leading cause of developmental and cognitive disabilities in Canada. Only one study has examined the cost of FASD in Canada; in that study we did not include prospective data for infants under the age of one year, costs for adults beyond 21 years, or costs for individuals living in institutions. Our objective was to calculate a revised estimate of the direct and indirect costs associated with FASD at the patient level. A cross-sectional study design was used. Two hundred and fifty (250) participants completed the study tool: caregivers of children, youth, and adults with FASD, from day of birth to 53 years of age, living in urban and rural communities throughout Canada. Participants completed the Health Services Utilization Inventory (HSUI). Key cost components were elicited: direct costs (medical, education, social services, out-of-pocket costs) and indirect costs (productivity losses). Total average costs per individual with FASD were calculated by summing the costs for each participant in each cost component and dividing by the sample size. Costs were extrapolated to one year. A stepwise multiple regression analysis was used to identify significant determinants of costs and to calculate the adjusted annual costs associated with FASD. The total adjusted annual cost associated with FASD at the individual level was $21,642 (95% CI: $19,842, $24,041), compared to $14,342 (95% CI: $12,986, $15,698) in the first study. Severity of the individual's condition, age, and relationship of the individual to the caregiver (biological, adoptive, foster) were significant determinants of costs (p < 0.001). The annual cost of FASD to Canada, for individuals from day of birth to 53 years of age, was $5.3 billion (95% CI: $4.12 billion, $6.4 billion). The results demonstrate that the cost burden of FASD in Canada is profound. Inclusion of infants aged 0 to 1 year, adults beyond the age of 21 years, and costs associated with residing in institutions provided a more accurate estimate of the costs of FASD. Implications for practice, policy, and research are discussed. Key words: alcohol, pregnancy, cost, economic burden, fetal alcohol spectrum disorder.
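A back-of-envelope check of the headline figures; the implied affected population is inferred from the reported numbers, not stated in the study:

```python
# Adjusted annual cost per individual with FASD, as reported above.
per_person = 21_642

# Population size implied by the $5.3 billion national total.
implied_population = 5.3e9 / per_person   # roughly 245,000 individuals

# Multiplying back reproduces the reported national burden.
total = per_person * 245_000              # ≈ $5.3 billion
```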
Konikoff, Jacob; Brookmeyer, Ron; Longosz, Andrew F.; Cousins, Matthew M.; Celum, Connie; Buchbinder, Susan P.; Seage, George R.; Kirk, Gregory D.; Moore, Richard D.; Mehta, Shruti H.; Margolick, Joseph B.; Brown, Joelle; Mayer, Kenneth H.; Koblin, Beryl A.; Justman, Jessica E.; Hodder, Sally L.; Quinn, Thomas C.; Eshleman, Susan H.; Laeyendecker, Oliver
2013-01-01
Background A limiting antigen avidity enzyme immunoassay (HIV-1 LAg-Avidity assay) was recently developed for cross-sectional HIV incidence estimation. We evaluated the performance of the LAg-Avidity assay alone and in multi-assay algorithms (MAAs) that included other biomarkers. Methods and Findings Performance of testing algorithms was evaluated using 2,282 samples from individuals in the United States collected 1 month to >8 years after HIV seroconversion. The capacity of selected testing algorithms to accurately estimate incidence was evaluated in three longitudinal cohorts. When used in a single-assay format, the LAg-Avidity assay classified some individuals infected >5 years as assay positive and failed to provide reliable incidence estimates in cohorts that included individuals with long-term infections. We evaluated >500,000 testing algorithms, that included the LAg-Avidity assay alone and MAAs with other biomarkers (BED capture immunoassay [BED-CEIA], BioRad-Avidity assay, HIV viral load, CD4 cell count), varying the assays and assay cutoffs. We identified an optimized 2-assay MAA that included the LAg-Avidity and BioRad-Avidity assays, and an optimized 4-assay MAA that included those assays, as well as HIV viral load and CD4 cell count. The two optimized MAAs classified all 845 samples from individuals infected >5 years as MAA negative and estimated incidence within a year of sample collection. These two MAAs produced incidence estimates that were consistent with those from longitudinal follow-up of cohorts. A comparison of the laboratory assay costs of the MAAs was also performed, and we found that the costs associated with the optimal two assay MAA were substantially less than with the four assay MAA. Conclusions The LAg-Avidity assay did not perform well in a single-assay format, regardless of the assay cutoff. 
MAAs that include the LAg-Avidity and BioRad-Avidity assays, with or without viral load and CD4 cell count, provide accurate incidence estimates. PMID:24386116
Two-voice fundamental frequency estimation
NASA Astrophysics Data System (ADS)
de Cheveigné, Alain
2002-05-01
An algorithm is presented that estimates the fundamental frequencies of two concurrent voices or instruments. The algorithm models each voice as a periodic function of time, and jointly estimates both periods by cancellation according to a previously proposed method [de Cheveigné and Kawahara, Speech Commun. 27, 175-185 (1999)]. The new algorithm improves on the old in several respects: it allows an unrestricted search range, effectively avoids harmonic and subharmonic errors, is more accurate (it uses two-dimensional parabolic interpolation), and is computationally less costly. It remains subject to unavoidable errors when the periods are in certain simple ratios and the task is inherently ambiguous. The algorithm is evaluated on a small database including speech, singing voice, and instrumental sounds. It can be extended in several ways: to decide the number of voices, to handle amplitude variations, and to estimate more than two voices (at the expense of increased processing cost and decreased reliability). It makes no use of instrument models, learned or otherwise, although it could usefully be combined with such models. [Work supported by the Cognitique programme of the French Ministry of Research and Technology.]
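The joint-cancellation idea can be illustrated with a brute-force sketch: subtracting a signal delayed by one period cancels that periodic component, so the residual power after two cascaded cancellations is minimal near the true period pair. This toy version is not de Cheveigné's algorithm (which avoids exhaustive search and adds parabolic interpolation); it assumes integer sample periods:

```python
import numpy as np

def joint_cancellation_f0(x, fs, min_period, max_period):
    """Brute-force joint estimate of two periods by cascaded
    cancellation filters: y[n] = x[n] - x[n-T1] removes the component
    with period T1, and a second cancellation at T2 removes the other,
    so residual power is minimal at the true period pair."""
    best = (None, None, np.inf)
    for t1 in range(min_period, max_period + 1):
        y = x[t1:] - x[:-t1]                 # cancel component with period t1
        for t2 in range(t1, max_period + 1):
            z = y[t2:] - y[:-t2]             # cancel component with period t2
            power = np.mean(z ** 2)
            if power < best[2]:
                best = (t1, t2, power)
    t1, t2, _ = best
    return fs / t1, fs / t2

fs = 8000
n = np.arange(4000)
# Two concurrent "voices": 125 Hz (period 64) and 100 Hz (period 80).
x = np.sin(2 * np.pi * 100 * n / fs) + 0.8 * np.sin(2 * np.pi * 125 * n / fs)
print(joint_cancellation_f0(x, fs, 50, 100))  # → (125.0, 100.0)
```

The exhaustive (T1, T2) scan is what the published method replaces with a far cheaper search.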
Terminal Area Productivity Airport Wind Analysis and Chicago O'Hare Model Description
NASA Technical Reports Server (NTRS)
Hemm, Robert; Shapiro, Gerald
1998-01-01
This paper describes two results from a continuing effort to provide accurate cost-benefit analyses of the NASA Terminal Area Productivity (TAP) program technologies. Previous tasks have developed airport capacity and delay models and completed preliminary cost benefit estimates for TAP technologies at 10 U.S. airports. This task covers two improvements to the capacity and delay models. The first improvement is the completion of a detailed model set for the Chicago O'Hare (ORD) airport. Previous analyses used a more general model to estimate the benefits for ORD. This paper contains a description of the model details with results corresponding to current conditions. The second improvement is the development of specific wind speed and direction criteria for use in the delay models to predict when the Aircraft Vortex Spacing System (AVOSS) will allow use of reduced landing separations. This paper includes a description of the criteria and an estimate of AVOSS utility for 10 airports based on analysis of 35 years of weather data.
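Wind criteria of the kind described reduce, at their simplest, to decomposing a reported wind into components relative to the runway heading. The sketch below is a generic illustration, not the AVOSS criteria themselves; the 5-knot crosswind threshold is an assumed placeholder:

```python
import math

def wind_components(wind_speed_kt, wind_dir_deg, runway_heading_deg):
    """Decompose a reported wind into headwind and crosswind components
    relative to a runway heading (degrees)."""
    angle = math.radians(wind_dir_deg - runway_heading_deg)
    headwind = wind_speed_kt * math.cos(angle)
    crosswind = wind_speed_kt * math.sin(angle)
    return headwind, crosswind

# Hypothetical AVOSS-style screening rule: allow reduced separations
# only when the crosswind is strong enough to transport wake vortices
# out of the approach corridor. The threshold is illustrative.
def reduced_separation_ok(wind_speed_kt, wind_dir_deg, runway_heading_deg,
                          crosswind_threshold_kt=5.0):
    _, crosswind = wind_components(wind_speed_kt, wind_dir_deg,
                                   runway_heading_deg)
    return abs(crosswind) >= crosswind_threshold_kt

print(wind_components(10.0, 90.0, 360.0))
```

Applying such a rule to decades of surface observations, as the paper does, yields the fraction of time reduced separations would be available at each airport.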
Tuning support vector machines for minimax and Neyman-Pearson classification.
Davenport, Mark A; Baraniuk, Richard G; Scott, Clayton D
2010-10-01
This paper studies the training of support vector machine (SVM) classifiers with respect to the minimax and Neyman-Pearson criteria. In principle, these criteria can be optimized in a straightforward way using a cost-sensitive SVM. In practice, however, because these criteria require especially accurate error estimation, standard techniques for tuning SVM parameters, such as cross-validation, can lead to poor classifier performance. To address this issue, we first prove that the usual cost-sensitive SVM, here called the 2C-SVM, is equivalent to another formulation called the 2nu-SVM. We then exploit a characterization of the 2nu-SVM parameter space to develop a simple yet powerful approach to error estimation based on smoothing. In an extensive experimental study, we demonstrate that smoothing significantly improves the accuracy of cross-validation error estimates, leading to dramatic performance gains. Furthermore, we propose coordinate descent strategies that offer significant gains in computational efficiency, with little to no loss in performance.
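The cost-sensitive idea can be sketched with a weighted hinge loss: penalizing errors on one class more heavily moves the decision boundary and trades false alarms against misses, which is the Neyman-Pearson flavor. This is a crude subgradient sketch on synthetic data, not the paper's 2C-SVM/2nu-SVM machinery or its smoothed error estimation:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_cost_sensitive(X, y, c_pos, c_neg, epochs=300, lr=0.05, lam=0.01):
    """Linear classifier with a cost-sensitive hinge loss: margin
    violations on positives are weighted c_pos, on negatives c_neg.
    Plain subgradient descent -- a sketch, not a real SVM solver."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    cost = np.where(y > 0, c_pos, c_neg)
    for _ in range(epochs):
        margins = y * (Xb @ w)
        active = margins < 1                    # margin violators
        grad = lam * w
        if active.any():
            grad -= (cost[active, None] * y[active, None] * Xb[active]).mean(axis=0)
        w -= lr * grad
    return w

# Toy 1-D, two-class data with overlap.
X = np.vstack([rng.normal(-1, 0.5, (200, 1)), rng.normal(1, 0.5, (200, 1))])
y = np.hstack([-np.ones(200), np.ones(200)])
Xb = np.hstack([X, np.ones((len(X), 1))])

w_bal = train_cost_sensitive(X, y, 1.0, 1.0)   # symmetric costs
w_np = train_cost_sensitive(X, y, 1.0, 5.0)    # penalize false alarms 5x
false_alarm_bal = np.mean((Xb @ w_bal > 0)[y < 0])
false_alarm_np = np.mean((Xb @ w_np > 0)[y < 0])
print(false_alarm_bal, false_alarm_np)
```

Sweeping the cost ratio and estimating the resulting false-alarm rate (ideally with the smoothing the paper proposes, rather than raw cross-validation) traces out the operating points available to the designer.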
Assessing the Cost of Global Biodiversity and Conservation Knowledge.
Juffe-Bignoli, Diego; Brooks, Thomas M; Butchart, Stuart H M; Jenkins, Richard B; Boe, Kaia; Hoffmann, Michael; Angulo, Ariadne; Bachman, Steve; Böhm, Monika; Brummitt, Neil; Carpenter, Kent E; Comer, Pat J; Cox, Neil; Cuttelod, Annabelle; Darwall, William R T; Di Marco, Moreno; Fishpool, Lincoln D C; Goettsch, Bárbara; Heath, Melanie; Hilton-Taylor, Craig; Hutton, Jon; Johnson, Tim; Joolia, Ackbar; Keith, David A; Langhammer, Penny F; Luedtke, Jennifer; Nic Lughadha, Eimear; Lutz, Maiko; May, Ian; Miller, Rebecca M; Oliveira-Miranda, María A; Parr, Mike; Pollock, Caroline M; Ralph, Gina; Rodríguez, Jon Paul; Rondinini, Carlo; Smart, Jane; Stuart, Simon; Symes, Andy; Tordoff, Andrew W; Woodley, Stephen; Young, Bruce; Kingston, Naomi
2016-01-01
Knowledge products comprise assessments of authoritative information supported by standards, governance, quality control, data, tools, and capacity building mechanisms. Considerable resources are dedicated to developing and maintaining knowledge products for biodiversity conservation, and they are widely used to inform policy and advise decision makers and practitioners. However, the financial cost of delivering this information is largely undocumented. We evaluated the costs and funding sources for developing and maintaining four global biodiversity and conservation knowledge products: The IUCN Red List of Threatened Species, the IUCN Red List of Ecosystems, Protected Planet, and the World Database of Key Biodiversity Areas. These are secondary data sets, built on primary data collected by extensive networks of expert contributors worldwide. We estimate that US$160 million (range: US$116-204 million), plus 293 person-years of volunteer time (range: 278-308 person-years) valued at US$14 million (range US$12-16 million), were invested in these four knowledge products between 1979 and 2013. More than half of this financing was provided through philanthropy, and nearly three-quarters was spent on personnel costs. The estimated annual cost of maintaining data and platforms for three of these knowledge products (excluding the IUCN Red List of Ecosystems, for which annual costs were not possible to estimate for 2013) is US$6.5 million in total (range: US$6.2-6.7 million). We estimated that an additional US$114 million will be needed to reach pre-defined baselines of data coverage for all four knowledge products, and that once achieved, annual maintenance costs will be approximately US$12 million. These costs are much lower than those to maintain many other, similarly important, global knowledge products.
Ensuring that biodiversity and conservation knowledge products are sufficiently up to date, comprehensive and accurate is fundamental to inform decision-making for biodiversity conservation and sustainable development. Thus, the development and implementation of plans for sustainable long-term financing for them is critical. PMID:27529491
Healthcare Costs Attributable to Hypertension: Canadian Population-Based Cohort Study.
Weaver, Colin G; Clement, Fiona M; Campbell, Norm R C; James, Matthew T; Klarenbach, Scott W; Hemmelgarn, Brenda R; Tonelli, Marcello; McBrien, Kerry A
2015-09-01
Accurately documenting the current and future costs of hypertension is required to fully understand the potential economic impact of currently available and future interventions to prevent and treat hypertension. The objective of this work was to calculate the healthcare costs attributable to hypertension in Canada and to project these costs to 2020. Using population-based administrative data for the province of Alberta, Canada (>3 million residents) from 2002 to 2010, we identified individuals with and without diagnosed hypertension. We calculated their total healthcare costs and estimated costs attributable to hypertension using a regression model adjusting for comorbidities and sociodemographic factors. We then extrapolated hypertension-attributable costs to the rest of Canada and projected costs to the year 2020. Twenty-one percent of adults in Alberta had diagnosed hypertension in 2010, with a projected increase to 27% by 2020. The average individual with hypertension had annual healthcare costs of $5768, of which $2341 (41%) were attributed to hypertension. In Alberta, the healthcare costs attributable to hypertension were $1.4 billion in 2010. In Canada, the hypertension-attributable costs were estimated to be $13.9 billion in 2010, rising to $20.5 billion by 2020. The increase was ascribed to demographic changes (52%), increasing prevalence (16%), and increasing per-patient costs (32%). Hypertension accounts for a significant proportion of healthcare spending (10.2% of the Canadian healthcare budget) and is projected to rise even further. Interventions to prevent and treat hypertension may play a role in limiting this cost growth. © 2015 American Heart Association, Inc.
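The attributable-cost calculation described (a regression of total cost on a hypertension indicator, adjusting for comorbidity and demographics, with the indicator's coefficient read off as the per-person attributable cost) can be sketched on simulated data. All coefficients below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Simulated cohort: hypertension indicator plus confounders (age and a
# comorbidity count). The generating coefficients are invented.
htn = rng.binomial(1, 0.21, n)
age = rng.normal(50, 12, n)
comorb = rng.poisson(1.0, n)
cost = 1200 + 2300 * htn + 40 * age + 600 * comorb + rng.normal(0, 500, n)

# OLS of cost on hypertension with covariate adjustment: the coefficient
# on the hypertension indicator estimates the attributable cost.
X = np.column_stack([np.ones(n), htn, age, comorb])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
attributable = beta[1]
print(round(attributable))
```

Multiplying the per-person attributable cost by projected prevalence and population is then what drives the extrapolation to 2020.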
Voigt, Jeff; Sasha John, M; Taylor, Andrew; Krucoff, Mitchell; Reynolds, Matthew R; Michael Gibson, C
2014-05-01
The annual cost of heart failure (HF) is estimated at $39.2 billion. This has been acknowledged to underestimate the true costs for care. The objective of this analysis is to more accurately assess these costs. Publicly available data sources were used. Cost calculations incorporated relevant factors such as Medicare hospital cost-to-charge ratios, reimbursement from both government and private insurance, and out-of-pocket expenditures. A recently published Atherosclerosis Risk in Communities (ARIC) HF scheme was used to adjust the HF classification scheme. Costs were calculated with HF as the primary diagnosis (HF in isolation, or HFI) or HF as one of the diagnoses/part of a disease milieu (HF syndrome, or HFS). Total direct costs for HF were calculated at $60.2 billion (HFI) and $115.4 billion (HFS). Indirect costs were $10.6 billion for both. Costs attributable to HF may represent a much larger burden to US health care than what is commonly referenced. These revised and increased costs have implications for policy makers.
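One of the adjustments mentioned, converting billed hospital charges to costs with a Medicare cost-to-charge ratio, is a single multiplication; the figures below are illustrative, not from the analysis:

```python
def cost_from_charges(charges, cost_to_charge_ratio):
    """Hospital charges are list prices; multiplying by the Medicare
    cost-to-charge ratio (CCR) approximates the actual resource cost."""
    return charges * cost_to_charge_ratio

# Illustrative numbers only: $60,000 billed charges and an assumed CCR of 0.35.
print(cost_from_charges(60_000, 0.35))
```

Summing such adjusted costs across government reimbursement, private insurance, and out-of-pocket spending is what produces the HFI and HFS totals.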
Using counterfactuals to evaluate the cost-effectiveness of controlling biological invasions.
McConnachie, Matthew M; van Wilgen, Brian W; Ferraro, Paul J; Forsyth, Aurelia T; Richardson, David M; Gaertner, Mirijam; Cowling, Richard M
2016-03-01
Prioritizing limited conservation funds for controlling biological invasions requires accurate estimates of the effectiveness of interventions to remove invasive species and their cost-effectiveness (cost per unit area or individual). Despite billions of dollars spent controlling biological invasions worldwide, it is unclear whether those efforts are effective, and cost-effective. The paucity of evidence results from the difficulty in measuring the effect of invasive species removal: a researcher must estimate the difference in outcomes (e.g. invasive species cover) between where the removal program intervened and what might have been observed if the program had not intervened. In the program evaluation literature, this is called a counterfactual analysis, which formally compares what actually happened and what would have happened in the absence of an intervention. When program implementation is not randomized, estimating counterfactual outcomes is especially difficult. We show how a thorough understanding of program implementation, combined with a matching empirical design can improve the way counterfactual outcomes are estimated in nonexperimental contexts. As a practical demonstration, we estimated the cost-effectiveness of South Africa's Working for Water program, arguably the world's most ambitious invasive species control program, in removing invasive alien trees from different land use types, across a large area in the Cape Floristic Region. We estimated that the proportion of the treatment area covered by invasive trees would have been 49% higher (5.5% instead of 2.7% of the grid cells occupied) had the program not intervened. Our estimates of cost per hectare to remove invasive species, however, are three to five times higher than the predictions made when the program was initiated. 
Had there been no control (counterfactual), invasive trees would have spread on untransformed land, but not on land parcels containing plantations or land transformed by agriculture or human settlements. This implies that the program might have prevented a larger area from being invaded if it had focused all of its clearing effort on untransformed land. Our results show that, with appropriate empirical designs, it is possible to better evaluate the impacts of invasive species removal and therefore to learn from past experiences.
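The counterfactual logic can be illustrated with the simplest matching estimator: each treated unit's unobserved no-treatment outcome is proxied by the most similar untreated unit. The toy data below are invented to show why a naive treated-versus-untreated comparison misleads under confounding; this is not the paper's empirical design:

```python
import numpy as np

rng = np.random.default_rng(2)

def att_nearest_neighbor(x_t, y_t, x_c, y_c):
    """Average treatment effect on the treated via 1-nearest-neighbour
    covariate matching: each treated unit's counterfactual outcome is
    taken from the most similar untreated unit."""
    idx = np.abs(x_t[:, None] - x_c[None, :]).argmin(axis=1)
    return np.mean(y_t - y_c[idx])

# Toy data: clearing effort (treatment) lowers invasive cover by 3 units,
# but treated sites also started more heavily invaded (confounding).
n = 300
x_t = rng.uniform(0.5, 1.0, n)                  # covariate, e.g. initial invasion
x_c = rng.uniform(0.0, 1.0, n)
y_t = 10 * x_t - 3.0 + rng.normal(0, 0.2, n)    # true treatment effect: -3
y_c = 10 * x_c + rng.normal(0, 0.2, n)
naive = y_t.mean() - y_c.mean()                 # confounded comparison
matched = att_nearest_neighbor(x_t, y_t, x_c, y_c)
print(round(naive, 2), round(matched, 2))
```

The naive difference badly understates the effect because treated sites started worse; matching on the covariate recovers it.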
Heating, Ventilation, and Air Conditioning Design Strategy for a Hot-Humid Production Builder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerrigan, P.
2014-03-01
Building Science Corporation (BSC) worked directly with the David Weekley Homes - Houston division to develop a cost-effective design for moving the HVAC system into conditioned space. In addition, BSC conducted energy analysis to calculate the most economical strategy for increasing the energy performance of future production houses in preparation for the upcoming code changes in 2015. This research project addressed the following questions: 1. What is the most cost-effective, best-performing, and most easily replicable method of locating ducts inside conditioned space for a hot-humid production home builder that constructs one- and two-story single-family detached residences? 2. What is a cost-effective and practical method of achieving 50% source energy savings vs. the 2006 International Energy Conservation Code for a hot-humid production builder? 3. How accurate are the pre-construction whole-house cost estimates compared to confirmed post-construction actual costs?
Frenning, Göran
2015-01-01
When the discrete element method (DEM) is used to simulate confined compression of granular materials, the need arises to estimate the void space surrounding each particle with Voronoi polyhedra. This entails recurring Voronoi tessellation with small changes in the geometry, resulting in a considerable computational overhead. To overcome this limitation, we propose a method with the following features:
• A local determination of the polyhedron volume is used, which considerably simplifies implementation of the method.
• A linear approximation of the polyhedron volume is utilised, with intermittent exact volume calculations when needed.
• The method allows highly accurate volume estimates to be obtained at a considerably reduced computational cost. PMID:26150975
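The second feature (linear volume updates with intermittent exact recalculation) can be sketched generically. Here `exact_fn` is a stand-in for the expensive Voronoi polyhedron volume computation, and the 5% drift tolerance is an arbitrary assumption:

```python
class LazyVolume:
    """Sketch of intermittent exact updating: advance a cell volume with
    a cheap linear (rate-based) approximation and fall back to the exact,
    expensive calculation only when the accumulated change exceeds a
    tolerance relative to the current volume."""
    def __init__(self, exact_fn, tol=0.05):
        self.exact_fn = exact_fn
        self.tol = tol
        self.volume = exact_fn()
        self.drift = 0.0
        self.exact_calls = 1
    def step(self, dv_estimate):
        self.drift += abs(dv_estimate)
        if self.drift > self.tol * self.volume:
            self.volume = self.exact_fn()    # intermittent exact update
            self.exact_calls += 1
            self.drift = 0.0
        else:
            self.volume += dv_estimate       # cheap linear update
        return self.volume

# Toy "exact" function: a slowly shrinking cell.
state = {"v": 1.0}
def exact():
    return state["v"]

lv = LazyVolume(exact, tol=0.05)
for _ in range(100):
    state["v"] *= 0.999                      # true volume evolves
    lv.step(-0.001)                          # rate estimate fed to the tracker
print(lv.exact_calls)
```

Only a handful of exact evaluations are triggered over the 100 steps, while the tracked volume stays close to the true one.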
NASA Technical Reports Server (NTRS)
Tanner, C. J.; Kruse, G. S.; Oman, B. H.
1975-01-01
A preliminary design analysis tool for rapidly performing trade-off studies involving fatigue, fracture, static strength, weight, and cost is presented. Analysis subprograms were developed for fatigue life, crack growth life, and residual strength; and linked to a structural synthesis module which in turn was integrated into a computer program. The part definition module of a cost and weight analysis program was expanded to be compatible with the upgraded structural synthesis capability. The resultant vehicle design and evaluation program is named VDEP-2. It is an accurate and useful tool for estimating purposes at the preliminary design stage of airframe development. A sample case along with an explanation of program applications and input preparation is presented.
Sabharwal, S; Carter, A W; Rashid, A; Darzi, A; Reilly, P; Gupte, C M
2016-02-01
The aims of this study were to estimate the cost of surgical treatment of fractures of the proximal humerus using a micro-costing methodology, contrast this cost with the national reimbursement tariff and establish the major determinants of cost. A detailed inpatient treatment pathway was constructed using semi-structured interviews with 32 members of hospital staff. Its content validity was established through a Delphi panel evaluation. Costs were calculated using time-driven activity-based costing (TDABC) and sensitivity analysis was performed to evaluate the determinants of cost. The mean cost of the different surgical treatments was estimated to be £3282. Although this represented a profit of £1138 against the national tariff, hemiarthroplasty as a treatment choice resulted in a net loss of £952. Choice of implant and theatre staffing were the largest cost drivers. Operating theatre delays of more than one hour resulted in a loss of income. Our findings indicate that the national tariff does not accurately represent the cost of treatment for this condition. Effective use of the operating theatre and implant discounting are likely to be more effective cost containment approaches than control of bed-day costs. This cost analysis of fractures of the proximal humerus reinforces the limitations of the national tariff within the English National Health Service, and underlines the importance of effective use of the operating theatre, as well as appropriate implant procurement, where controlling the costs of treatment is concerned. ©2016 The British Editorial Society of Bone & Joint Surgery.
Crott, Ralph; Lawson, Georges; Nollevaux, Marie-Cécile; Castiaux, Annick; Krug, Bruno
2016-09-01
Head and neck cancer (HNC) is predominantly a locoregional disease. Sentinel lymph node (SLN) biopsy offers a minimally invasive means of accurately staging the neck. Value in healthcare is determined by both outcomes and the costs associated with achieving them. Time-driven activity-based costing (TDABC) may offer more precise estimates of the true cost. Process maps were developed for the nuclear medicine, operating room and pathology care phases. TDABC estimates costs by combining information about the process with the unit cost of each resource used. Resource utilization is based on observation of care and staff interviews. Unit costs are calculated as a capacity cost rate, measured in euros per minute (2014), for each resource consumed. Multiplying the unit costs by the resource quantities and summing across all resources used produces the average cost for each phase of care. Three time equations with six different scenarios were modeled based on the type of camera, the number of SLNs and the type of staining used. Total times for the different SLN scenarios vary between 284 and 307 min, with a total cost between 2794 and 3541€. The unit costs vary between 788€/h for the intraoperative evaluation with a gamma-probe and 889€/h for preoperative imaging with SPECT/CT. The unit costs for the lymphadenectomy and the pathological examination are, respectively, 560 and 713€/h. A 10% increase of time per individual activity generates only a 1% change in the total cost. TDABC evaluates the cost of SLN biopsy in HNC; the total cost across all phases varied between 2761 and 3744€ per standard case.
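The TDABC arithmetic in the abstract, a capacity cost rate multiplied by minutes and summed over resources, fits in a few lines. The hourly rates below are those quoted (889, 560 and 713€/h); the phase durations are invented placeholders, so the total is illustrative only:

```python
def capacity_cost_rate(total_cost_per_year, available_minutes_per_year):
    """TDABC unit cost: the cost of supplying a resource divided by its
    practical capacity, giving currency per minute."""
    return total_cost_per_year / available_minutes_per_year

def phase_cost(resources):
    """Cost of one care phase: sum over resources of rate x minutes."""
    return sum(rate * minutes for rate, minutes in resources)

# Hourly rates from the abstract, converted to EUR/min; the minute
# figures (60, 120, 60) are assumed for illustration.
imaging_rate = 889 / 60    # preoperative SPECT/CT
surgery_rate = 560 / 60    # lymphadenectomy
path_rate = 713 / 60       # pathological examination
total = phase_cost([(imaging_rate, 60), (surgery_rate, 120), (path_rate, 60)])
print(round(total))
```

Time equations simply make the minute arguments functions of the scenario (camera type, number of SLNs, staining).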
Profitable capitation requires accurate costing.
West, D A; Hicks, L L; Balas, E A; West, T D
1996-01-01
In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, when more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages.
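The contrast between aggregate (average) costing and activity-based costing can be shown in a few lines; all figures below are hypothetical:

```python
# Two treatments that consume very different nursing time and supplies.
treatments = {
    "routine": {"nurse_min": 15, "supplies": 5.0},
    "complex": {"nurse_min": 90, "supplies": 60.0},
}
nurse_rate = 0.75               # assumed $/min of nursing time
overhead_per_treatment = 20.0   # assumed overhead share

def abc_cost(t):
    """Activity-based cost: trace the nursing time and supplies actually
    consumed, plus the overhead share."""
    return t["nurse_min"] * nurse_rate + t["supplies"] + overhead_per_treatment

total_cost = sum(abc_cost(t) for t in treatments.values())
average_cost = total_cost / len(treatments)   # aggregate costing

print(average_cost, abc_cost(treatments["routine"]), abc_cost(treatments["complex"]))
```

The average sits between the two ABC figures, overstating the routine treatment's cost and understating the complex one's, which is exactly the distortion that can sink a capitation bid.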
Registration of surface structures using airborne focused ultrasound.
Sundström, N; Börjesson, P O; Holmer, N G; Olsson, L; Persson, H W
1991-01-01
A low-cost measuring system, based on a personal computer combined with standard equipment for complex measurements and signal processing, has been assembled. Such a system increases the possibilities for small hospitals and clinics to finance advanced measuring equipment. A description of equipment developed for airborne ultrasound together with a personal computer-based system for fast data acquisition and processing is given. Two air-adapted ultrasound transducers with high lateral resolution have been developed. Furthermore, a few results for fast and accurate estimation of signal arrival time are presented. The theoretical estimation models developed are applied to skin surface profile registrations.
Optical rangefinding applications using communications modulation technique
NASA Astrophysics Data System (ADS)
Caplan, William D.; Morcom, Christopher John
2010-10-01
A novel range detection technique combines optical pulse modulation patterns with signal cross-correlation to produce an accurate range estimate from low power signals. The cross-correlation peak is analyzed by a post-processing algorithm such that the phase delay is proportional to the range to target. This technique produces a stable range estimate from noisy signals. The advantage is higher accuracy obtained with relatively low optical power transmitted. The technique is useful for low cost, low power and low mass sensors suitable for tactical use. The signal coding technique allows applications including IFF and battlefield identification systems.
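The modulation-plus-cross-correlation idea can be sketched directly: correlate the received samples with the transmitted code, and convert the peak lag to a round-trip delay and hence a range. The code length, sample rate, and noise levels below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
C = 3.0e8       # speed of light, m/s
FS = 1.0e9      # sample rate: 1 ns resolution

# Pseudo-random on/off modulation pattern (the transmitted code).
code = rng.integers(0, 2, 512).astype(float)

def range_estimate(received, code, fs=FS):
    """Cross-correlate the received signal with the transmitted code;
    the lag of the correlation peak is the round-trip delay, and half
    of delay x c gives the one-way range."""
    corr = np.correlate(received, code, mode="valid")
    lag = int(np.argmax(corr))
    return lag / fs * C / 2.0

# Simulate a 200-sample delay (30 m target) with heavy attenuation and noise.
delay = 200
rx = np.zeros(2048)
rx[delay:delay + len(code)] = 0.05 * code      # weak echo
rx += rng.normal(0, 0.02, rx.size)             # receiver noise
print(range_estimate(rx, code))
```

Because the correlation integrates over the whole code, the peak stands well above the noise even though the echo amplitude is only a few times the per-sample noise, which is the low-transmit-power advantage the abstract describes.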
Anaphylaxis: a payor's perspective on epinephrine autoinjectors.
Dunn, Jeffrey D; Sclar, David A
2014-01-01
The scope of expenditures due to anaphylaxis likely is underestimated by health care payors because anaphylaxis is underdiagnosed and, when reported, most costs of anaphylaxis borne by payors relate to direct medical expenses. Direct costs of anaphylaxis have been estimated at $1.2 billion per year, with direct expenditures of $294 million for epinephrine, and indirect costs of $609 million. More accurate diagnostic coding will allow payors to improve their understanding of the full impact of anaphylaxis on health care plans, employers, patients, and their families. Similarly, more accurate diagnosis and treatment of anaphylaxis should have a direct effect on overall cost savings achieved in this disease state. This includes savings in both direct costs, such as emergency department visits, and indirect costs, such as lost productivity of patients and caregivers. Educating medical personnel on treatment guidelines regarding the specific use of appropriate epinephrine autoinjectors will contribute to cost savings. Even though the cost of autoinjectors has been increasing, evidence indicates that the cost of improper response to, and treatment of, anaphylaxis outweighs that increase. At this time, there are several branded epinephrine autoinjectors and one generic equivalent for one of these branded products available on the US market; the branded autoinjectors are not considered equivalents for substitution. Barriers to coverage and access, such as managed care organization tier classification, medication copay, and socioeconomic status of specific patients, need to be examined more closely and addressed. Education in the proper use of epinephrine autoinjectors, including regular checking of medication expiration dates, is critical for proper management of anaphylaxis and minimizing the costs of anaphylactic events. Managed care organizations can play a role in educational initiatives. Copyright © 2014 Elsevier Inc. All rights reserved.
Real-time POD-CFD Wind-Load Calculator for PV Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huayamave, Victor; Divo, Eduardo; Ceballos, Andres
The primary objective of this project is to create an accurate web-based real-time wind-load calculator. This is of paramount importance for (1) the rapid and accurate assessment of the uplift and downforce loads on a PV mounting system, and (2) identifying viable solutions from available mounting systems, and therefore helping reduce the cost of mounting hardware and installation. Wind loading calculations for structures are currently performed according to the American Society of Civil Engineers/Structural Engineering Institute Standard ASCE/SEI 7; the values in this standard were calculated from simplified models that do not necessarily take into account relevant characteristics such as those from full 3D effects, end effects, turbulence generation and dissipation, as well as minor effects derived from shear forces on installation brackets and other accessories. This standard does not include provisions that address the special requirements of rooftop PV systems, and attempts to apply this standard may lead to significant design errors as wind loads are incorrectly estimated. Therefore, an accurate calculator would be of paramount importance for the preliminary assessment of the uplift and downforce loads on a PV mounting system, identifying viable solutions from available mounting systems, and therefore helping reduce the cost of the mounting system and installation. The challenge is that although a full-fledged three-dimensional computational fluid dynamics (CFD) analysis would properly and accurately capture the complete physical effects of air flow over PV systems, it would be impractical for this tool, which is intended to be a real-time web-based calculator. CFD routinely requires enormous computation times to arrive at solutions that can be deemed accurate and grid-independent, even on powerful and massively parallel computer platforms.
This work is expected not only to accelerate solar deployment nationwide, but also to help reach the SunShot Initiative goals of reducing the total installed cost of solar energy systems by 75%. The largest percentage of the total installed cost of a solar energy system is associated with balance-of-system cost, with up to 40% going to “soft” costs, which include customer acquisition, financing, contracting, permitting, interconnection, inspection, installation, performance, operations, and maintenance. The calculator that is being developed will provide wind loads in real time for any solar system design and suggest the proper installation configuration and hardware; it is therefore anticipated to reduce system design, installation and permitting costs.
Analytical Model For Fluid Dynamics In A Microgravity Environment
NASA Technical Reports Server (NTRS)
Naumann, Robert J.
1995-01-01
Report presents analytical approximation methodology for providing coupled fluid-flow, heat, and mass-transfer equations in microgravity environment. Experimental engineering estimates accurate to within factor of 2 made quickly and easily, eliminating need for time-consuming and costly numerical modeling. Any proposed experiment reviewed to see how it would perform in microgravity environment. Model applied in commercial setting for preliminary design of low-Grashof/Rayleigh-number experiments.
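Back-of-envelope screening of this kind typically starts from dimensionless groups. A sketch for the Grashof number, where the fluid properties are rough values for room-temperature water and the 1-µg residual acceleration is an assumption:

```python
def grashof(g, beta, delta_t, length, nu):
    """Grashof number Gr = g*beta*dT*L^3/nu^2 -- the ratio of buoyancy
    to viscous forces. Comparing Gr in microgravity with its ground
    value is a quick screen for whether buoyant convection matters."""
    return g * beta * delta_t * length**3 / nu**2

# Illustrative fluid properties (roughly water at room temperature).
beta, nu = 2.1e-4, 1.0e-6       # 1/K, m^2/s
dt, length = 10.0, 0.01         # K, m
gr_ground = grashof(9.81, beta, dt, length, nu)
gr_micro = grashof(9.81e-6, beta, dt, length, nu)   # ~1 ug residual accel.
print(f"{gr_ground:.2e} {gr_micro:.2e}")
```

Because Gr is linear in g, a microgravity platform buys six orders of magnitude here, which is the kind of factor-of-2-accurate estimate the report targets.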
UWB Wind Turbine Blade Deflection Sensing for Wind Energy Cost Reduction
Zhang, Shuai; Jensen, Tobias Lindstrøm; Franek, Ondrej; Eggers, Patrick C. F.; Olesen, Kim; Byskov, Claus; Pedersen, Gert Frølund
2015-01-01
A new application of ultra-wideband (UWB) technology to sensing wind turbine blade deflections is introduced in this paper for wind energy cost reduction. The lower UWB band of 3.1–5.3 GHz is applied. On each blade, there will be one UWB blade deflection sensing system, which consists of two UWB antennas at the blade root and one UWB antenna at the blade tip. The detailed topology and challenges of this deflection sensing system are addressed. Due to the complexity of the problem, this paper first realizes the on-blade UWB radio link in the simplest case, where the tip antenna is situated outside (and on the surface of) a blade tip. To investigate this case, full-blade time-domain measurements are designed and conducted under different deflections. The detailed measurement setups and results are provided. If the root and tip antenna locations are properly selected, the first pulse is always of sufficient quality for accurate estimation under different deflections. The measured results reveal that the blade tip-root distance and blade deflection can be accurately estimated in the complicated and lossy wireless channels around a wind turbine blade. Finally, some future research topics on this application are listed. PMID:26274964
Gerwin, Philip M; Norinsky, Rada M; Tolwani, Ravi J
2018-03-01
Laboratory animal programs and core laboratories often set service rates based on cost estimates. However, actual costs may be unknown, and service rates may not reflect the actual cost of services. Accurately evaluating the actual costs of services can be challenging and time-consuming. We used a time-driven activity-based costing (ABC) model to determine the cost of services provided by a resource laboratory at our institution. The time-driven approach is a more efficient approach to calculating costs than using a traditional ABC model. We calculated only 2 parameters: the time required to perform an activity and the unit cost of the activity based on employee cost. This method allowed us to rapidly and accurately calculate the actual cost of services provided, including microinjection of a DNA construct, microinjection of embryonic stem cells, embryo transfer, and in vitro fertilization. We successfully implemented a time-driven ABC model to evaluate the cost of these services and the capacity of labor used to deliver them. We determined how actual costs compared with current service rates. In addition, we determined that the labor supplied to conduct all services (10,645 min/wk) exceeded the practical labor capacity (8400 min/wk), indicating that the laboratory team was highly efficient and that additional labor capacity was needed to prevent overloading of the current team. Importantly, this time-driven ABC approach allowed us to establish a baseline model that can easily be updated to reflect operational changes or changes in labor costs. We demonstrated that a time-driven ABC model is a powerful management tool that can be applied to other core facilities as well as to entire animal programs, providing valuable information that can be used to set rates based on the actual cost of services and to improve operating efficiency.
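The capacity comparison in the abstract is simple arithmetic; using the quoted figures (10,645 min/wk of labor supplied against 8,400 min/wk of practical capacity):

```python
def capacity_report(supplied_min_per_week, practical_min_per_week):
    """Compare labor supplied with practical capacity; utilization above
    1.0 means the team is overloaded and extra capacity is needed."""
    utilization = supplied_min_per_week / practical_min_per_week
    shortfall = max(0, supplied_min_per_week - practical_min_per_week)
    return utilization, shortfall

# Figures from the abstract.
u, short = capacity_report(10_645, 8_400)
print(round(u, 2), short)  # → 1.27 2245
```

A utilization of about 127% is what justified the conclusion that additional labor capacity was needed.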
NASA Astrophysics Data System (ADS)
Wang, Jianhua; Yang, Yanxi
2018-05-01
We present a new wavelet ridge extraction method employing a novel cost function in two-dimensional wavelet transform profilometry (2-D WTP). First, the maximum value point is extracted from the two-dimensional wavelet transform coefficient modulus, and the local extreme value points exceeding 90% of the maximum value are also obtained; together these constitute the wavelet ridge candidates. Then, the gradient of the rotation factor is introduced into Abid's cost function, and the logarithmic Logistic model is used to adjust and improve the cost function weights so as to obtain a more reasonable value estimation. Finally, dynamic programming is used to accurately find the optimal wavelet ridge, and the wrapped phase is obtained by extracting the phase at the ridge. The advantage is that fringe patterns with low signal-to-noise ratio can be demodulated accurately, with better noise immunity. Meanwhile, only one fringe pattern needs to be projected onto the measured object, so dynamic three-dimensional (3-D) measurement in harsh environments can be realized. Computer simulation and experimental results show that, for fringe patterns with noise pollution, the proposed algorithm increases the accuracy of 3-D surface recovery. In addition, the demodulation phase accuracy of the Morlet, Fan, and Cauchy mother wavelets is compared.
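The dynamic-programming ridge search described above can be sketched as follows: at each position, the chosen wavelet scale maximizes accumulated coefficient modulus minus a penalty on scale jumps. This is a generic DP ridge extractor with a simple linear jump penalty, not the authors' exact cost function (which additionally incorporates the rotate-factor gradient and Logistic-model weights).

```python
import numpy as np

def extract_ridge(modulus, jump_penalty=1.0):
    """Find the scale index at each position that maximizes total coefficient
    modulus minus a penalty on scale jumps, via dynamic programming.
    modulus: 2-D array of shape (n_scales, n_positions)."""
    n_s, n_p = modulus.shape
    score = np.empty((n_s, n_p))
    back = np.zeros((n_s, n_p), dtype=int)
    score[:, 0] = modulus[:, 0]
    rows = np.arange(n_s)
    for t in range(1, n_p):
        # trans[i, j]: score of arriving at scale i from scale j of column t-1
        trans = score[:, t - 1][None, :] - jump_penalty * np.abs(rows[:, None] - rows[None, :])
        back[:, t] = np.argmax(trans, axis=1)
        score[:, t] = modulus[:, t] + trans[rows, back[:, t]]
    # backtrack from the best final score to recover the ridge path
    ridge = np.empty(n_p, dtype=int)
    ridge[-1] = int(np.argmax(score[:, -1]))
    for t in range(n_p - 1, 0, -1):
        ridge[t - 1] = back[ridge[t], t]
    return ridge
```

The wrapped phase would then be read off the transform's phase at `modulus[ridge[t], t]` for each position `t`.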
A Comprehensive Study of Costs Associated With Recurrent Clostridium difficile Infection.
Rodrigues, Rodrigo; Barber, Grant E; Ananthakrishnan, Ashwin N
2017-02-01
BACKGROUND Clostridium difficile infection (CDI) is the most common healthcare-associated infection and is associated with considerable morbidity. Recurrent CDI is a key contributing factor to this morbidity. Despite an estimated 83,000 recurrences annually in the United States, there are few accurate estimates of costs associated with recurrent CDI. OBJECTIVE We performed this study (1) to identify the health consequences of recurrent CDI, including need for repeat hospitalization, intensive care unit (ICU) stay, and surgery; (2) to determine costs associated with recurrent CDI and identify determinants of such costs; and (3) to compare the outcomes and costs of recurrent CDI to those of patients who develop reinfection. METHODS We identified all patients with confirmed recurrent CDI between January and December 2013 at a single referral center. Healthcare burden associated with recurrence, including diagnostic testing, pharmacologic treatment, and inpatient and outpatient healthcare visits, was identified in the 12 months following the first recurrence. Total healthcare costs were calculated, and the predictors of high healthcare utilization were identified. RESULTS Our study population included 98 patients with recurrent CDI. The median interval between the initial infection and recurrence was 37 days. The mean age of the cohort was 67 years, two-thirds were women (62%), and the mean Charlson index was 8.6. During the year following the first recurrence of CDI, each patient underwent a mean of 4.4 stool C. difficile toxin tests and received a mean of 2.5 prescriptions for oral vancomycin (range, 0-6). Most patients (84%) with recurrence had a CDI-related hospitalization, and 6% underwent colectomy. The mean total CDI-associated cost was $34,104 per patient, with hospitalization costs accounting for 68%, surgery 20%, and drug treatment 8% of this cost. Extrapolating to the United States overall, we estimate an annual cost of $2.8 billion related to recurrent CDI.
CONCLUSION Recurrent CDI is associated with considerable morbidity and cost. Infect Control Hosp Epidemiol 2017;38:196-202.
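The national extrapolation above is simple arithmetic, and is worth making explicit; all figures are taken directly from the abstract.

```python
# Back-of-envelope check of the national extrapolation:
# ~83,000 recurrences/year x mean CDI-associated cost per patient.
recurrences_per_year = 83_000
mean_cost_per_patient = 34_104  # USD, from this single-center cohort
national_cost = recurrences_per_year * mean_cost_per_patient  # ~2.83e9 USD

# Component breakdown, using the cost shares reported in the abstract
hospitalization = 0.68 * mean_cost_per_patient
surgery = 0.20 * mean_cost_per_patient
drugs = 0.08 * mean_cost_per_patient
```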
Fast algorithm for spectral processing with application to on-line welding quality assurance
NASA Astrophysics Data System (ADS)
Mirapeix, J.; Cobo, A.; Jaúregui, C.; López-Higuera, J. M.
2006-10-01
A new technique is presented in this paper for the analysis of welding process emission spectra to accurately estimate in real-time the plasma electronic temperature. The estimation of the electronic temperature of the plasma, through the analysis of the emission lines from multiple atomic species, may be used to monitor possible perturbations during the welding process. Unlike traditional techniques, which usually involve peak fitting to Voigt functions using the Levenberg-Marquardt recursive method, sub-pixel algorithms are used to more accurately estimate the central wavelength of the peaks. Three different sub-pixel algorithms will be analysed and compared, and it will be shown that the LPO (linear phase operator) sub-pixel algorithm is a better solution within the proposed system. Experimental tests during TIG-welding using a fibre optic to capture the arc light, together with a low cost CCD-based spectrometer, show that some typical defects associated with perturbations in the electron temperature can be easily detected and identified with this technique. A typical processing time for multiple peak analysis is less than 20 ms running on a conventional PC.
Estimating cardiac fiber orientations in pig hearts using registered ultrasound and MR image volumes
NASA Astrophysics Data System (ADS)
Dormer, James D.; Meng, Yuguang; Zhang, Xiaodong; Jiang, Rong; Wagner, Mary B.; Fei, Baowei
2017-03-01
Heart fiber mechanics can be important predictors of current and future cardiac function. Accurate knowledge of these mechanics could enable cardiologists to provide a diagnosis before conditions progress. Magnetic resonance diffusion tensor imaging (MR-DTI) has been used to determine cardiac fiber orientations. Ultrasound is capable of providing anatomical information in real time, enabling a physician to quickly adjust parameters to optimize image scans. If known fiber orientations from a template heart measured using DTI can be accurately deformed onto a cardiac ultrasound volume, fiber orientations could be estimated for the patient without the need for a costly MR scan, while still providing cardiologists valuable information about the heart mechanics. In this study, we apply this registration-based method to pig hearts, which are a close representation of human heart anatomy. Experiments on pig hearts show that the registration method achieved an average Dice similarity coefficient (DSC) of 0.819 +/- 0.050 between the ultrasound and deformed MR volumes and that the proposed ultrasound-based method is able to estimate the cardiac fiber orientation in pig hearts.
What is the cost of palliative care in the UK? A systematic review.
Gardiner, Clare; Ryan, Tony; Gott, Merryn
2018-04-13
Little is known about the cost of a palliative care approach in the UK, and there is an absence of robust activity and unit cost data. The aim of this study was to review evidence on the costs of specialist and generalist palliative care in the UK, and to explore different approaches used for capturing activity and unit cost data. A systematic review with narrative synthesis. Four electronic databases were searched for empirical literature on the costs of a palliative care approach in the UK, and a narrative method was used to synthesise the data. Ten papers met our inclusion criteria. The studies displayed significant variation in their estimates of the cost of palliative care; it was therefore not possible to present an accurate aggregate cost of palliative care in the UK. The majority of studies explored costs from a National Health Service perspective, and only two studies included informal care costs. Approaches to estimating activity and costs varied. Particular challenges were noted with capturing activity and cost data for hospice and informal care. The data are limited, and the heterogeneity is such that it is not possible to provide an aggregate cost of palliative care in the UK. It is notable that the costs of hospice care and informal care are often neglected in economic studies. Further work is needed to address methodological and practical challenges in order to gain a more complete understanding of the costs of palliative care. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.
Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca
2018-02-01
The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators, and it accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation, so it can be expensive in models with a large computational cost.
Nefdt, Rory; Ribaira, Eric; Diallo, Khassoum
2014-10-01
To ensure correct and appropriate funding is available, there is a need to estimate resource needs for improved planning and implementation of integrated Community Case Management (iCCM). To compare and estimate costs for commodity and human resource needs for iCCM, based on treatment coverage rates, bottlenecks and national targets in Ethiopia, Kenya and Zambia from 2014 to 2016. Resource needs were estimated using Ministry of Health (MoH) targets from 2014 to 2016 for implementation of case management of pneumonia, diarrhea and malaria through iCCM based on epidemiological, demographic, economic, intervention coverage and other health system parameters. Bottleneck analysis adjusted cost estimates against system barriers. Ethiopia, Kenya and Zambia were chosen to compare differences in iCCM costs in different programmatic implementation landscapes. Treatment coverage rates through iCCM are lowest in Ethiopia, followed by Kenya and Zambia, but Ethiopia had the greatest increases between 2009 and 2012. Deployment of health extension workers (HEWs) in Ethiopia is more advanced compared to Kenya and Zambia, which have fewer equivalent cadres (called community health workers (CHWs)) covering a smaller proportion of the population. Between 2014 and 2016, the proportion of treatments through iCCM compared to health centres is set to increase from 30% to 81% in Ethiopia, 1% to 18% in Kenya and 3% to 22% in Zambia. The total estimated cost of iCCM for these three years is USD 75,531,376 for Ethiopia, USD 19,839,780 for Kenya and USD 33,667,742 for Zambia. Projected per capita expenditure for 2016 is USD 0.28 for Ethiopia, USD 0.20 in Kenya and USD 0.98 in Zambia. Commodity costs for pneumonia and diarrhea were a small fraction of the total iCCM budget for all three countries (less than 3%), while around 80% of the costs related to human resources. Analysis of coverage, demography and epidemiology data improves estimates of funding requirements for iCCM.
Bottleneck analysis adjusts cost estimates by including system barriers, thus reflecting a more accurate estimate of potential resource utilization. Adding pneumonia and diarrhea interventions to existing large scale community-based malaria case management programs is likely to require relatively small and nationally affordable investments. iCCM can be implemented for USD 0.09 to 0.98 per capita per annum, depending on the stage of scale-up and targets set by the MoH.
Rivera-Rodriguez, Claudia L; Resch, Stephen; Haneuse, Sebastien
2018-01-01
In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. 
This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty.
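The expansion-estimator-plus-bootstrap approach described above can be sketched for the simplest design, a simple random sample of facilities. The facility count, sample costs, and replicate count below are illustrative; a real complex survey would additionally account for strata, clustering, and calibration weights.

```python
import numpy as np

def bootstrap_total_cost(sample_costs, n_facilities, n_boot=2000, seed=0):
    """Estimate a program's total cost from a simple random subsample of
    facilities, with a bootstrap standard error and 95% percentile CI.
    sample_costs: observed per-facility costs; n_facilities: program size."""
    rng = np.random.default_rng(seed)
    costs = np.asarray(sample_costs, dtype=float)
    total_hat = n_facilities * costs.mean()      # expansion estimator
    boot = np.empty(n_boot)
    for b in range(n_boot):
        # resample facilities with replacement and re-estimate the total
        resample = rng.choice(costs, size=costs.size, replace=True)
        boot[b] = n_facilities * resample.mean()
    se = boot.std(ddof=1)
    ci = np.percentile(boot, [2.5, 97.5])
    return total_hat, se, ci
```

Reporting `se` and `ci` alongside `total_hat` is exactly the kind of statistical-uncertainty disclosure the authors argue is often missing from costing studies.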
A review of models and micrometeorological methods used to estimate wetland evapotranspiration
Drexler, J.Z.; Snyder, R.L.; Spano, D.; Paw, U.K.T.
2004-01-01
Within the past decade or so, the accuracy of evapotranspiration (ET) estimates has improved due to new and increasingly sophisticated methods. Yet despite a plethora of choices concerning methods, estimation of wetland ET remains insufficiently characterized due to the complexity of surface characteristics and the diversity of wetland types. In this review, we present models and micrometeorological methods that have been used to estimate wetland ET and discuss their suitability for particular wetland types. Hydrological, soil monitoring and lysimetric methods to determine ET are not discussed. Our review shows that, due to the variability and complexity of wetlands, there is no single approach that is the best for estimating wetland ET. Furthermore, there is no single foolproof method to obtain an accurate, independent measure of wetland ET. Because all of the methods reviewed, with the exception of eddy covariance and LIDAR, require measurements of net radiation (Rn) and soil heat flux (G), highly accurate measurements of these energy components are key to improving measurements of wetland ET. Many of the major methods used to determine ET can be applied successfully to wetlands of uniform vegetation and adequate fetch; however, certain caveats apply. For example, with accurate Rn and G data and small Bowen ratio (β) values, the Bowen ratio energy balance method can give accurate estimates of wetland ET. However, large errors in latent heat flux density can occur near sunrise and sunset when the Bowen ratio β ≈ −1.0. The eddy covariance method provides a direct measurement of latent heat flux density (λE) and sensible heat flux density (H), yet this method requires considerable expertise and expensive instrumentation to implement. A clear advantage of using the eddy covariance method is that λE can be compared with Rn − G − H, thereby allowing for an independent test of accuracy.
The surface renewal method is inexpensive to replicate and, therefore, shows particular promise for characterizing variability in ET as a result of spatial heterogeneity. LIDAR is another method that has special utility in a heterogeneous wetland environment, because it provides an integrated value for ET from a surface. The main drawback of LIDAR is the high cost of equipment and the need for an independent ET measure to assess accuracy. If Rn and G are measured accurately, the Priestley-Taylor equation can be used successfully with site-specific calibration factors to estimate wetland ET. The 'crop' cover coefficient (Kc) method can provide accurate wetland ET estimates if calibrated for the environmental and climatic characteristics of a particular area. More complicated equations such as the Penman and Penman-Monteith equations can also be used to estimate wetland ET, but surface variability and lack of information on aerodynamic and surface resistances make the use of such equations somewhat questionable. © 2004 John Wiley and Sons, Ltd.
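Of the methods reviewed, the Priestley-Taylor equation is simple enough to show directly: λE = α · Δ/(Δ + γ) · (Rn − G). A sketch with illustrative inputs follows; α = 1.26 is the commonly cited open-water default (the review stresses this should be calibrated per site), and the Δ and γ values are rough figures for ~20 °C at sea level, not measurements from any particular wetland.

```python
# Priestley-Taylor estimate of latent heat flux (the energy form of ET).
# alpha: site-specific calibration factor; Delta: slope of the saturation
# vapor pressure curve; gamma: psychrometric constant.

def priestley_taylor(rn, g, delta, gamma=0.066, alpha=1.26):
    """rn, g in W/m^2; delta, gamma in kPa/degC. Returns lambda_E in W/m^2."""
    return alpha * (delta / (delta + gamma)) * (rn - g)

# Example: Rn = 400 W/m^2, G = 50 W/m^2, Delta ~ 0.145 kPa/degC at ~20 degC
le = priestley_taylor(rn=400, g=50, delta=0.145)
```

This also makes the review's point concrete: the estimate is only as good as the Rn and G measurements it consumes, and the choice of α.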
Orientation estimation algorithm applied to high-spin projectiles
NASA Astrophysics Data System (ADS)
Long, D. F.; Lin, J.; Zhang, X. M.; Li, J.
2014-06-01
High-spin projectiles are low-cost military weapons. Accurate orientation information is critical to the performance of a high-spin projectile's control system. However, orientation estimators have not translated well from flight vehicles, since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application-specific. This paper presents an orientation estimation algorithm specific to these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates the roll angular rate of the projectile. The algorithm also incorporates online sensor error parameter estimation performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm.
Estimating Physical Activity Energy Expenditure with the Kinect Sensor in an Exergaming Environment
Nathan, David; Huynh, Du Q.; Rubenson, Jonas; Rosenberg, Michael
2015-01-01
Active video games that require physical exertion during game play have been shown to confer health benefits. Typically, energy expended during game play is measured using devices attached to players, such as accelerometers, or portable gas analyzers. Since 2010, active video gaming technology has incorporated marker-less motion capture devices to simulate human movement in game play. Using the Kinect Sensor and the Microsoft SDK, this research aimed to estimate the mechanical work performed by the human body and the subsequent metabolic energy using predictive algorithmic models. Nineteen university students participated in a repeated-measures experiment performing four fundamental movements (arm swings, standing jumps, body-weight squats, and jumping jacks). Metabolic energy was captured using a Cortex Metamax 3B automated gas analysis system, with mechanical movement captured by the combined motion data from two Kinect cameras. Estimates of the body segment properties, such as segment mass, length, centre of mass position, and radius of gyration, were calculated from de Leva's adjustment of the Zatsiorsky-Seluyanov parameters, with an adjustment made for posture cost. The GPML toolbox implementation of Gaussian Process Regression, a locally weighted k-Nearest Neighbour Regression, and a linear regression technique were evaluated for their performance in predicting the metabolic cost from new feature vectors. The experimental results show that Gaussian Process Regression outperformed the other two techniques by a small margin. This study demonstrated that physical activity energy expenditure during exercise, using the Kinect camera as a motion capture system, can be estimated from segmental mechanical work. Estimates for high-energy activities, such as standing jumps and jumping jacks, can be made accurately, but for low-energy activities, such as squatting, the posture of static poses should be considered as a contributing factor.
When translated into the active video gaming environment, the results could be incorporated into game play to more accurately control the energy expenditure requirements. PMID:26000460
What is a hospital bed day worth? A contingent valuation study of hospital Chief Executive Officers.
Page, Katie; Barnett, Adrain G; Graves, Nicholas
2017-02-14
Decreasing hospital length of stay, and so freeing up hospital beds, represents an important cost saving which is often used in economic evaluations. The savings need to be accurately quantified in order to make optimal health care resource allocation decisions. Traditionally the accounting cost of a bed is used. We argue instead that the economic cost of a bed day is the better value for making resource decisions, and we describe our valuation method and estimations for costing this important resource. We performed a contingent valuation using 37 Australian Chief Executive Officers' (CEOs) willingness to pay (WTP) to release bed days in their hospitals, both generally and using specific cases. We provide a succinct thematic analysis from qualitative interviews post survey completion, which provide insight into the decision making process. On average CEOs are willing to pay a marginal rate of $216 for a ward bed day and $436 for an Intensive Care Unit (ICU) bed day, with estimates of uncertainty being greater for ICU beds. These estimates are significantly lower (four times for ward beds and seven times for ICU beds) than the traditional accounting costs often used. Key themes to emerge from the interviews include the importance of national funding and targets, and their associated incentive structures, as well as the aversion to discuss bed days as an economic resource. This study highlights the importance for valuing bed days as an economic resource to inform cost effectiveness models and thus improve hospital decision making and resource allocation. Significantly under or over valuing the resource is very likely to result in sub-optimal decision making. We discuss the importance of recognising the opportunity costs of this resource and highlight areas for future research.
Activity-based costing and its application in a Turkish university hospital.
Yereli, Ayşe Necef
2009-03-01
Resource management in hospitals is of increasing importance in today's global economy. Traditional accounting systems have become inadequate for managing hospital resources and accurately determining service costs. Conversely, the activity-based costing approach to hospital accounting is an effective cost management model that determines costs and evaluates financial performance across departments. Obtaining costs that are more accurate can enable hospitals to analyze and interpret costing decisions and make more accurate budgeting decisions. Traditional and activity-based costing approaches were compared using a cost analysis of gall bladder surgeries in the general surgery department of one university hospital in Manisa, Turkey. Copyright (c) AORN, Inc, 2009.
Review of the NURE assessment of the U.S. Gulf Coast Uranium Province
Hall, Susan M.
2013-01-01
Historic exploration and development were used to evaluate the reliability of domestic uranium reserves and potential resources estimated by the U.S. Department of Energy national uranium resource evaluation (NURE) program in the U.S. Gulf Coast Uranium Province. NURE estimated 87 million pounds of reserves in the $30/lb U3O8 cost category in the Coast Plain uranium resource region, most in the Gulf Coast Uranium Province. Since NURE, 40 million pounds of reserves have been mined, and 38 million pounds are estimated to remain in place as of 2012, accounting for all but 9 million pounds of U3O8 in the reserve or production categories in the NURE estimate. Considering the complexities and uncertainties of the analysis, this study indicates that the NURE reserve estimates for the province were accurate. An unconditional potential resource of 1.4 billion pounds of U3O8, 600 million pounds of U3O8 in the forward cost category of $30/lb U3O8 (1980 prices), was estimated in 106 favorable areas by the NURE program in the province. Removing potential resources from the non-productive Houston embayment, and those reserves estimated below historic and current mining depths reduces the unconditional potential resource 33% to about 930 million pounds of U3O8, and that in the $30/lb cost category 34% to 399 million pounds of U3O8. Based on production records and reserve estimates tabulated for the region, most of the production since 1980 is likely from the reserves identified by NURE. The potential resource predicted by NURE has not been developed, likely due to a variety of factors related to the low uranium prices that have prevailed since 1980.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timbario, Thomas A.; Timbario, Thomas J.; Laffen, Melissa J.
2011-04-12
Currently, several cost-per-mile calculators exist that can provide estimates of acquisition and operating costs for consumers and fleets. However, these calculators are limited in their ability to determine the difference in cost per mile for consumer versus fleet ownership, to calculate the costs beyond one ownership period, to show the sensitivity of the cost per mile to the annual vehicle miles traveled (VMT), and to estimate future increases in operating and ownership costs. Oftentimes, these tools apply a constant percentage increase over the time period of vehicle operation, or in some cases, no increase in direct costs at all over time. A more accurate cost-per-mile calculator has been developed that allows the user to analyze these costs for both consumers and fleets. Operating costs included in the calculation tool include fuel, maintenance, tires, and repairs; ownership costs include insurance, registration, taxes and fees, depreciation, financing, and tax credits. The calculator was developed to allow simultaneous comparisons of conventional light-duty internal combustion engine (ICE) vehicles, mild and full hybrid electric vehicles (HEVs), and fuel cell vehicles (FCVs). Additionally, multiple periods of operation, as well as three different annual VMT values for both the consumer case and fleets, can be investigated to the year 2024. These capabilities were included since today's "cost to own" calculators typically include the ability to evaluate only one VMT value and are limited to current model year vehicles. The calculator allows the user to select between default values or user-defined values for certain inputs including fuel cost, vehicle fuel economy, manufacturer's suggested retail price (MSRP) or invoice price, depreciation and financing rates.
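The core levelized calculation such a tool performs can be sketched as follows. All dollar figures and the VMT value are placeholders, not the calculator's defaults, and a full tool would additionally escalate each cost category year by year rather than holding them constant.

```python
# Levelized cost per mile over an ownership period: total ownership plus
# operating cost divided by total miles driven.

def cost_per_mile(annual_ownership, annual_operating, annual_vmt, years):
    """All costs in USD/year; annual_vmt in miles/year."""
    total_cost = years * (annual_ownership + annual_operating)
    total_miles = years * annual_vmt
    return total_cost / total_miles

cpm = cost_per_mile(annual_ownership=4_500,   # insurance, depreciation, fees
                    annual_operating=2_250,   # fuel, maintenance, tires, repairs
                    annual_vmt=13_500, years=5)
# 5 * 6750 / (5 * 13500) = 0.50 USD/mile
```

Varying `annual_vmt` and `years` is exactly the sensitivity the abstract says existing calculators lack.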
Liquid electrolyte informatics using an exhaustive search with linear regression.
Sodeyama, Keitaro; Igarashi, Yasuhiko; Nakayama, Tomofumi; Tateyama, Yoshitaka; Okada, Masato
2018-06-14
Exploring new liquid electrolyte materials is a fundamental target for developing new high-performance lithium-ion batteries. In contrast to solid materials, the properties of disordered liquid solutions have been less studied by data-driven information techniques. Here, we examined the estimation accuracy and efficiency of three information techniques, multiple linear regression (MLR), least absolute shrinkage and selection operator (LASSO), and exhaustive search with linear regression (ES-LiR), using coordination energy and melting point as test liquid properties. We then confirmed that ES-LiR gives the most accurate estimation among the techniques. We also found that ES-LiR can provide the relationship between the "prediction accuracy" and "calculation cost" of the properties via a weight diagram of descriptors. This technique makes it possible to balance "accuracy" against "cost" when searching a huge number of new materials.
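The ES-LiR idea, fitting an ordinary least-squares model on every descriptor subset and ranking the subsets, can be sketched as follows. This is a generic sketch, not the authors' implementation: a real application would rank subsets by cross-validation error rather than training RMSE, and the descriptor names are placeholders.

```python
import itertools
import numpy as np

def es_lir(X, y, descriptor_names):
    """Exhaustive search with linear regression: fit OLS on every non-empty
    subset of descriptor columns and rank subsets by training RMSE.
    X: (n_samples, n_descriptors) array; y: (n_samples,) target property."""
    n, d = X.shape
    results = []
    for k in range(1, d + 1):
        for subset in itertools.combinations(range(d), k):
            # design matrix: intercept column plus the chosen descriptors
            Xs = np.column_stack([np.ones(n), X[:, subset]])
            coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rmse = np.sqrt(np.mean((y - Xs @ coef) ** 2))
            results.append((rmse, [descriptor_names[j] for j in subset]))
    return sorted(results, key=lambda r: r[0])
```

Because every subset is scored, the full ranking also yields the accuracy-versus-number-of-descriptors trade-off that the abstract's weight diagram summarizes. Note the 2^d − 1 subsets limit this to modest descriptor counts.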
Effects of well spacing on geological storage site distribution costs and surface footprint.
Eccles, Jordan; Pratson, Lincoln F; Chandel, Munish Kumar
2012-04-17
Geological storage studies thus far have not evaluated the scale and cost of the network of distribution pipelines that will be needed to move CO(2) from a central receiving point at a storage site to injection wells distributed about the site. Using possible injection rates for deep-saline sandstone aquifers, we estimate that the footprint of a sequestration site could range from <100 km(2) to >100,000 km(2), and that distribution costs could be <$0.10/tonne to >$10/tonne. Our findings are based on two models for determining well spacing: one which minimizes spacing in order to maximize use of the volumetric capacity of the reservoir, and a second that determines spacing to minimize subsurface pressure interference between injection wells. The interference model, which we believe more accurately reflects reservoir dynamics, produces wider well spacings and a counterintuitive relationship whereby total injection site footprint and thus distribution cost declines with decreasing permeability for a given reservoir thickness. This implies that volumetric capacity estimates should be reexamined to include well spacing constraints, since wells will need to be spaced further apart than void space calculations might suggest. We conclude that site-selection criteria should include thick, low-permeability reservoirs to minimize distribution costs and site footprint.
Precise Target Geolocation Based on Integration of Thermal Video Imagery and RTK GPS in UAVs
NASA Astrophysics Data System (ADS)
Hosseinpoor, H. R.; Samadzadegan, F.; Dadras Javan, F.
2015-12-01
There is an increasingly large number of uses for Unmanned Aerial Vehicles (UAVs), from surveillance and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors such as C/A-code GPS and a low-cost IMU on board, allowing a positioning accuracy of 5 to 10 meters. This low accuracy means they cannot be used in applications that require cm-level precision. This paper presents a precise process for geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data are filtered using a linear Kalman filter, which provides a smoothed estimate of target location and target velocity. Accurate geolocation of targets during image acquisition is achieved via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters obtained from the on-board IMU and RTK GPS sensors with Kalman filtering, and interior orientation parameters of the thermal camera from a pre-flight laboratory calibration process.
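The smoothing step can be illustrated with a generic constant-velocity linear Kalman filter of the kind the abstract mentions; the motion model, noise levels, and 2-D simplification are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-3, r=4.0):
    """Constant-velocity Kalman filter over 2-D position fixes.
    State x = [px, py, vx, vy]; measurements are noisy positions.
    Returns the filtered state at every step."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt          # state transition
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # observe position only
    Q = q * np.eye(4)                               # process noise
    R = r * np.eye(2)                               # measurement noise
    x = np.array([zs[0][0], zs[0][1], 0.0, 0.0])
    P = np.eye(4) * 10.0
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new position fix
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)

# target moving at (1.0, 0.5) m/s, observed with ~2 m measurement noise
rng = np.random.default_rng(1)
truth = np.array([[t * 1.0, t * 0.5] for t in range(60)])
zs = truth + rng.normal(scale=2.0, size=truth.shape)
est = kalman_track(zs)
```

Because velocity is part of the state, the filter yields the smoothed target-velocity estimate the abstract refers to, even though only positions are measured.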
Can, Ahmet Selçuk
2009-05-16
The aim of this study is to perform a cost-effectiveness comparison between palpation-guided thyroid fine-needle aspiration biopsies (P-FNA) and ultrasound-guided thyroid FNA biopsies (USG-FNA). Each nodule was considered as a case. Diagnostic steps were history and physical examination, TSH measurement, Tc99m thyroid scintigraphy for nodules with a low TSH level, initial P-FNA versus initial USG-FNA, repeat USG-FNA for nodules with initial inadequate P-FNA or USG-FNA, and hemithyroidectomy for inadequate repeat USG-FNA. American Thyroid Association thyroid nodule management guidelines were simulated in estimating the cost of the P-FNA strategy. American Association of Clinical Endocrinologists guidelines were simulated for the USG-FNA strategy. Total costs were estimated by adding the cost of each diagnostic step to reach a diagnosis for 100 nodules. Strategy cost was found by dividing the total cost by 100. The incremental cost-effectiveness ratio (ICER) was calculated by dividing the difference between the strategy costs of USG-FNA and P-FNA by the difference between the accuracies of USG-FNA and P-FNA. A positive ICER indicates more and a negative ICER indicates less expense to achieve one additional accurate diagnosis of thyroid cancer with USG-FNA. Seventy-eight P-FNAs and 190 USG-FNAs were performed between April 2003 and May 2008. There were no differences in age, gender, thyroid function, frequency of multinodular goiter, nodule location and diameter (median nodule diameter: 18.4 mm in P-FNA and 17.0 mm in USG-FNA) between groups. Cytology results in P-FNA versus USG-FNA groups were as follows: benign 49% versus 62% (p = 0.04), inadequate 42% versus 29% (p = 0.03), malignant 3% (p = 1.00) and indeterminate 6% (p = 0.78) for both. Eleven nodules from the P-FNA and 18 from the USG-FNA group underwent surgery. The accuracy of P-FNA was 0.64 and of USG-FNA 0.72. The unit cost of P-FNA was 148 Euros and of USG-FNA 226 Euros. The cost of the P-FNA strategy was 534 Euros and of the USG-FNA strategy 523 Euros.
Strategy cost includes the expense of repeat USG-FNA for initial inadequate FNAs and surgery for repeat inadequate USG-FNAs. ICER was -138 Euros. Universal application of USG-FNA for all thyroid nodules is cost-effective and saves 138 Euros per additional accurate diagnosis of benign versus malignant thyroid nodular disease. ClinicalTrials.gov, NCT00571090.
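The ICER arithmetic reported above can be reproduced directly from the figures in the abstract (strategy costs of 523 and 534 Euros; accuracies of 0.72 and 0.64):

```python
def icer(cost_new, eff_new, cost_old, eff_old):
    """Incremental cost-effectiveness ratio: incremental cost divided by
    incremental effectiveness. A negative value means the new strategy is
    both cheaper and more effective (dominant)."""
    return (cost_new - cost_old) / (eff_new - eff_old)

# USG-FNA versus P-FNA, using the strategy costs and accuracies above
value = icer(523, 0.72, 534, 0.64)
print(round(value))  # → -138 Euros per additional accurate diagnosis
```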
Accurate characterisation of hole size and location by projected fringe profilometry
NASA Astrophysics Data System (ADS)
Wu, Yuxiang; Dantanarayana, Harshana G.; Yue, Huimin; Huntley, Jonathan M.
2018-06-01
The ability to accurately estimate the location and geometry of holes is often required in the field of quality control and automated assembly. Projected fringe profilometry is a potentially attractive technique on account of being non-contacting, of lower cost, and orders of magnitude faster than the traditional coordinate measuring machine. However, we demonstrate in this paper that fringe projection is susceptible to significant (hundreds of µm) measurement artefacts in the neighbourhood of hole edges, which give rise to errors of a similar magnitude in the estimated hole geometry. A mechanism for the phenomenon is identified based on the finite size of the imaging system’s point spread function and the resulting bias produced near to sample discontinuities in geometry and reflectivity. A mathematical model is proposed, from which a post-processing compensation algorithm is developed to suppress such errors around the holes. The algorithm includes a robust and accurate sub-pixel edge detection method based on a Fourier descriptor of the hole contour. The proposed algorithm was found to reduce significantly the measurement artefacts near the hole edges. As a result, the errors in estimated hole radius were reduced by up to one order of magnitude, to a few tens of µm for hole radii in the range 2–15 mm, compared to those from the uncompensated measurements.
Estimation of Land Surface Fluxes and Their Uncertainty via Variational Data Assimilation Approach
NASA Astrophysics Data System (ADS)
Abdolghafoorian, A.; Farhadi, L.
2016-12-01
Accurate estimation of land surface heat and moisture fluxes, as well as root zone soil moisture, is crucial in various hydrological, meteorological, and agricultural applications. "In situ" measurements of these fluxes are costly and cannot be readily scaled to the large areas relevant to weather and climate studies. Therefore, there is a need for techniques that make quantitative estimates of heat and moisture fluxes using land surface state variables. In this work, we applied a novel approach based on the variational data assimilation (VDA) methodology to estimate land surface fluxes and the soil moisture profile from land surface states. This study accounts for the strong linkage between the terrestrial water and energy cycles by coupling the dual-source energy balance equation with the water balance equation through the mass flux of evapotranspiration (ET). Heat diffusion and moisture diffusion into the soil column are adjoined to the cost function as constraints. This coupling results in more accurate prediction of land surface heat and moisture fluxes and, consequently, of soil moisture at multiple depths with the high temporal frequency required in many hydrological, environmental, and agricultural applications. One key limitation of the VDA technique is its tendency to be ill-posed, meaning that a continuum of different parameter values can produce essentially identical measurement-model misfit errors. Moreover, the value of heat and moisture flux estimates to decision-making processes is limited if reasonable estimates of the corresponding uncertainty are not provided. To address these issues, we perform an uncertainty analysis to quantify the uncertainty of the retrieved fluxes and root zone soil moisture. The assimilation algorithm is tested with a series of experiments using a synthetic data set generated by the simultaneous heat and water (SHAW) model.
We demonstrate the VDA performance by comparing the (synthetic) true measurements (including profile of soil moisture and temperature, land surface water and heat fluxes, and root water uptake) with VDA estimates. In addition, the feasibility of extending the proposed approach to use remote sensing observations is tested by limiting the number of LST observations and soil moisture observations.
Fishkel, Vanina S; Monge, Fernando C; von Petery, Felicitas M; Tapper, Karen E; Peña, Teresa M; Torres, Florencia; Poletta, Fernando A; Elgart, Jorge F; Avagnina, Alejandra; Denninghoff, Valeria
2018-05-04
The detection of high-grade intraepithelial lesions requires highly sensitive and specific methods that allow more accurate diagnoses. This contributes to a proper management of preneoplastic lesions, thus avoiding overtreatment. The purpose of this study was to analyze the value of immunostaining for p16 in the morphologic assessment of cervical intraepithelial neoplasia 2 lesions, to help differentiate between low-grade (p16-negative) and high-grade (p16-positive) squamous intraepithelial lesions. The direct medical cost of the treatment of cervical intraepithelial neoplasia 2 morphologic lesions was estimated. A retrospective observational cross-sectional study was carried out. This study analyzed 46 patients treated with excisional procedures because of cervical intraepithelial neoplasia 2 lesions, using loop electrosurgical excision procedures. Immunostaining for the biomarker was performed. For the estimation of overtreatment, percentages (%) and their 95% confidence interval were calculated. Of the 41 patients analyzed, 32 (78%) showed overexpression of p16 and 9 (22%) were negative (95% confidence interval, 11%-38%). Mean follow-up was 2.9 years, using cervical cytology testing (Pap) and colposcopy. High-risk human papillomavirus DNA tests were performed in 83% of patients. These retrospective results reveal the need for larger biopsy samples, which would allow a more accurate prediction of lesion risk. Considering the cost of p16 staining, and assuming the proper management of the low-grade lesion, an average of US$919 could be saved for each patient with a p16-negative result, which represents a global direct cost reduction of 10%.
Offshore oil in the Alaskan Arctic
NASA Technical Reports Server (NTRS)
Weeks, W. F.; Weller, G.
1984-01-01
Oil and gas deposits in the Alaskan Arctic are estimated to contain up to 40 percent of the remaining undiscovered crude oil and oil-equivalent natural gas within U.S. jurisdiction. Most (65 to 70 percent) of these estimated reserves are believed to occur offshore beneath the shallow, ice-covered seas of the Alaskan continental shelf. Offshore recovery operations for such areas are far from routine, with the primary problems associated with the presence of ice. Some problems that must be resolved if efficient, cost-effective, environmentally safe, year-round offshore production is to be achieved include the accurate estimation of ice forces on offshore structures, the proper placement of pipelines beneath ice-produced gouges in the sea floor, and the cleanup of oil spills in pack ice areas.
Novel Method for Incorporating Model Uncertainties into Gravitational Wave Parameter Estimates
NASA Astrophysics Data System (ADS)
Moore, Christopher J.; Gair, Jonathan R.
2014-12-01
Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications, these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this Letter, a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalized over using a prior distribution constructed by applying Gaussian process regression to interpolate the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform extremely well when applied to a toy problem. While we use the application to gravitational wave data analysis to motivate and illustrate the technique, it can be applied in any context where model uncertainties exist.
Sacks, Naomi C; Burgess, James F; Cabral, Howard J; McDonnell, Marie E; Pizer, Steven D
2015-08-01
Accurate estimates of the effects of cost sharing on adherence to medications prescribed for use together, also called concurrent adherence, are important for researchers, payers, and policymakers who want to reduce barriers to adherence for chronic condition patients prescribed multiple medications concurrently. But measure definition consensus is lacking, and the effects of different definitions on estimates of cost-related nonadherence are unevaluated. To (a) compare estimates of cost-related nonadherence using different measure definitions and (b) provide guidance for analyses of the effects of cost sharing on concurrent adherence. This is a retrospective cohort study of Medicare Part D beneficiaries aged 65 years and older who used multiple oral antidiabetics concurrently in 2008 and 2009. We compared patients with standard coverage, which contains cost-sharing requirements in deductible (100%), initial (25%), and coverage gap (100%) phases, to patients with a low-income subsidy (LIS) and minimal cost-sharing requirements. Data source was the IMS Health Longitudinal Prescription Database. Patients with standard coverage were propensity matched to controls with LIS coverage. Propensity score was developed using logistic regression to model likelihood of Part D standard enrollment, controlling for sociodemographic and health status characteristics. For analysis, 3 definitions were used for unadjusted and adjusted estimates of adherence: (1) patients adherent to All medications; (2) patients adherent on Average; and (3) patients adherent to Any medication. Analyses were conducted using the full study sample and then repeated in analytic subgroups where patients used (a) 1 or more costly branded oral antidiabetics or (b) inexpensive generics only. We identified 12,771 propensity matched patients with Medicare Part D standard (N = 6,298) or LIS (N = 6,473) coverage who used oral antidiabetics in 2 or more of the same classes in 2008 and 2009. 
In this sample, estimates of the effects of cost sharing on concurrent adherence varied by measure definition, coverage type, and proportion of patients using more costly branded drugs. Adherence rates ranged from 37% (All: standard patients using 1+ branded) to 97% (Any: LIS using generics only). In adjusted estimates, standard patients using branded drugs had 0.63 (95% CI = 0.57-0.70) and 0.70 (95% CI = 0.63-0.77) times the odds of concurrent adherence using All and Average definitions, respectively. The Any subgroup was not significant (OR = 0.89, 95% CI = 0.87-1.17). Estimates also varied in the full-study sample (All: OR = 0.79, 95% CI = 0.74-0.85; Average: OR = 0.83, 95% CI = 0.77-0.89) and generics-only subgroup, although cost-sharing effects were smaller. The Any subgroup generated no significant estimates. Different concurrent adherence measure definitions lead to markedly different findings of the effects of cost sharing on concurrent adherence, with All and Average subgroups sensitive to these effects. However, when more study patients use inexpensive generics, estimates of these effects on adherence to branded medications with higher cost-sharing requirements may be diluted. When selecting a measure definition, researchers, payers, and policy analysts should consider the range of medication prices patients face, use a measure sensitive to the effects of cost sharing on adherence, and perform subgroup analyses for patients prescribed more medications for which they must pay more, since these patients are most vulnerable to cost-related nonadherence.
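The three measure definitions compared above can be sketched from per-medication proportion-of-days-covered (PDC) values; the 0.8 threshold and the drug names are assumptions for illustration (a 0.8 PDC cutoff is a common convention, not stated in the abstract):

```python
def concurrent_adherence(pdc_by_drug, threshold=0.8):
    """Classify a patient's concurrent adherence from per-medication
    proportion-of-days-covered (PDC) values under the three definitions
    compared in the study: All, Average, and Any."""
    vals = list(pdc_by_drug.values())
    return {
        "all": all(p >= threshold for p in vals),       # adherent to every drug
        "average": sum(vals) / len(vals) >= threshold,  # mean PDC across drugs
        "any": any(p >= threshold for p in vals),       # adherent to at least one
    }

patient = {"metformin": 0.9, "glipizide": 0.6}
print(concurrent_adherence(patient))
# → {'all': False, 'average': False, 'any': True}
```

The example makes the study's point concrete: the same patient is nonadherent under the All and Average definitions but adherent under Any, which is why Any is the least sensitive to cost-sharing effects.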
Hiligsmann, Mickaël; Ethgen, Olivier; Bruyère, Olivier; Richy, Florent; Gathon, Henry-Jean; Reginster, Jean-Yves
2009-01-01
Markov models are increasingly used in economic evaluations of treatments for osteoporosis. Most of the existing evaluations are cohort-based Markov models lacking comprehensive memory management and versatility. In this article, we describe and validate an original Markov microsimulation model to accurately assess the cost-effectiveness of prevention and treatment of osteoporosis. We developed a Markov microsimulation model with a lifetime horizon and a direct health-care cost perspective. The patient history was recorded and was used in calculations of transition probabilities, utilities, and costs. To test the internal consistency of the model, we carried out an example calculation for alendronate therapy. Then, external consistency was investigated by comparing absolute lifetime risk of fracture estimates with epidemiologic data. For women at age 70 years with a twofold increase in the fracture risk of the average population, the costs per quality-adjusted life-year gained for alendronate therapy versus no treatment were estimated at €9105 and €15,325 under full and realistic adherence assumptions, respectively. All the sensitivity analyses in terms of model parameters and modeling assumptions were coherent with expected conclusions, and absolute lifetime risk of fracture estimates were within the range of previous estimates, which confirmed both the internal and external consistency of the model. Microsimulation models present some major advantages over cohort-based models, increasing the reliability of the results and being largely compatible with the existing state-of-the-art, evidence-based literature. The developed model appears to be a valid model for use in economic evaluations in osteoporosis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shonder, J.A.
Geothermal heat pumps (GHPs) have been shown to have a number of benefits over other technologies used to heat and cool buildings and provide hot water, combining high levels of occupant comfort with low operating and maintenance costs. Public facilities represent an increasingly important market for GHPs, and schools are a particularly good application, given the large land area that normally surrounds them. Nevertheless, some barriers remain to the increased use of GHPs in institutional and commercial applications. First, because GHPs are perceived as having higher installation costs than other space conditioning technologies, they are sometimes not considered as an option in feasibility studies. When they are considered, it can be difficult to compile the information required to compare them with other technologies. For example, a life cycle cost analysis requires estimates of installation costs and annually recurring energy and maintenance costs. But most cost estimators are unfamiliar with GHP technology, and no published GHP construction cost estimating guide is available. For this reason, estimates of installed costs tend to be very conservative, furthering the perception that GHPs are more costly than other technologies. Because GHP systems are not widely represented in the various software packages used by engineers to predict building energy use, it is also difficult to estimate the annual energy use of a building having GHP systems. Very little published data is available on expected maintenance costs either. Because of this lack of information, developing an accurate estimate of the life cycle cost of a GHP system requires experience and expertise that are not available in all institutions or in all areas of the country.
In 1998, Oak Ridge National Laboratory (ORNL) entered into an agreement with the Lincoln, Nebraska, Public School District and Lincoln Electric Service, the local electric utility in the Lincoln area, to study four new, identical elementary schools built in the district that are served by GHPs. ORNL was provided with complete as-built construction plans for the schools and associated equipment, access to original design calculations and cost estimates, extensive equipment operating data [both from the buildings' energy management systems (EMSs) and from utility meters], and access to the school district's complete maintenance record database, not only for the four GHP schools, but for the other schools in the district using conventional space conditioning equipment. Using this information, we were able to reproduce the process used by the Lincoln school district and the consulting engineering firm to select GHPs over other options to provide space conditioning for the four schools. The objective was to determine whether this decision was the correct one, or whether some other technology would have been more cost-effective. An additional objective was to identify all of the factors that make it difficult for building owners and their engineers to consider GHPs in their projects so that ongoing programs can remove these impediments over time.
Costs and cost-effectiveness of periviable care.
Caughey, Aaron B; Burchfield, David J
2014-02-01
With increasing concerns regarding rapidly expanding healthcare costs, cost-effectiveness analysis allows assessment of whether marginal gains from new technology are worth the increased costs. Particular methodologic issues related to cost and cost-effectiveness analysis in the area of neonatal and periviable care include how costs are estimated, such as the use of charges and whether long-term costs are included; the challenges of measuring utilities; and whether to use a maternal, neonatal, or dual perspective in such analyses. A number of studies over the past three decades have examined the costs and the cost-effectiveness of neonatal and periviable care. Broadly, while neonatal care is costly, it is also cost effective as it produces both life-years and quality-adjusted life-years (QALYs). However, as the gestational age of the neonate decreases, the costs increase and the cost-effectiveness threshold is harder to achieve. In the periviable range of gestational age (22-24 weeks of gestation), whether the care is cost effective is questionable and is dependent on the perspective. Understanding the methodology and salient issues of cost-effectiveness analysis is critical for researchers, editors, and clinicians to accurately interpret results of the growing body of cost-effectiveness studies related to the care of periviable pregnancies and neonates. Copyright © 2014 Elsevier Inc. All rights reserved.
Subramanian, Sujha; Tangka, Florence; Edwards, Patrick; Hoover, Sonja; Cole-Beebe, Maggie
2016-12-01
This article reports on the methods and framework we have developed to guide economic evaluation of noncommunicable disease registries. We developed a cost data collection instrument, the Centers for Disease Control and Prevention's (CDC's) International Registry Costing Tool (IntRegCosting Tool), based on established economics methods. We performed in-depth case studies, site visit interviews, and pilot testing in 11 registries from multiple countries, including India, Kenya, Uganda, Colombia, and Barbados, to assess the overall quality of the data collected from cancer and cardiovascular registries. Overall, the registries were able to use the IntRegCosting Tool to assign operating expenditures to specific activities. We verified that registries were able to provide accurate estimation of labor costs, which is the largest expenditure incurred by registries. We also identified several factors that can influence the cost of registry operations, including size of the geographic area served, data collection approach, local cost of living, presence of rural areas, volume of cases, extent of consolidation of records to cases, and continuity of funding. Internal and external registry factors reveal that a single estimate for the cost of registry operations is not feasible; costs will vary on the basis of factors that may be beyond the control of the registries. Some factors, such as data collection approach, can be modified to improve the efficiency of registry operations. These findings will inform both future economic data collection using a web-based tool and cost and cost-effectiveness analyses of registry operations in low- and middle-income countries (LMICs) and other locations with similar characteristics. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lessons learned in deploying software estimation technology and tools
NASA Technical Reports Server (NTRS)
Panlilio-Yap, Nikki; Ho, Danny
1994-01-01
Developing a software product involves estimating various project parameters. This is typically done in the planning stages of the project when there is much uncertainty and very little information. Coming up with accurate estimates of effort, cost, schedule, and reliability is a critical problem faced by all software project managers. The use of estimation models and commercially available tools in conjunction with the best bottom-up estimates of software-development experts enhances the ability of a product development group to derive reasonable estimates of important project parameters. This paper describes the experience of the IBM Software Solutions (SWS) Toronto Laboratory in selecting software estimation models and tools and deploying their use to the laboratory's product development groups. It introduces the SLIM and COSTAR products, the software estimation tools selected for deployment to the product areas, and discusses the rationale for their selection. The paper also describes the mechanisms used for technology injection and tool deployment, and concludes with a discussion of important lessons learned in the technology and tool insertion process.
NASA Astrophysics Data System (ADS)
Tiwari, Vaibhav
2018-07-01
The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently, the injections, which are simulated gravitational waveforms, are searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. Sensitive volume is estimated, by using Monte-Carlo (MC) integration, from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR, and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analyses broadens in terms of the methodologies and source properties considered, owing to the increasing number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, to the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate sensitive volume using single-detector sensitivity, the method is accurate within statistical errors, comes at no added cost, and requires minimal computational resources.
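A minimal sketch of the reweighting scheme, under toy assumptions (a one-parameter mass population, a linear detection-probability model, and unit astrophysical volume, none of which come from the letter):

```python
import numpy as np

def weighted_sensitive_volume(params, detected, p_pop, p_inj, v_total):
    """Monte-Carlo sensitive-volume estimate in which each injection,
    drawn from p_inj, is re-weighted by p_pop/p_inj so a single generic
    injection set can serve many population models."""
    w = p_pop(params) / p_inj(params)
    return v_total * w[detected].sum() / w.sum()

# toy setup: masses drawn uniformly on [5, 50] with detection probability
# rising linearly with mass (both are illustrative assumptions)
rng = np.random.default_rng(2)
m = rng.uniform(5.0, 50.0, size=200_000)
detected = rng.random(m.size) < m / 50.0

def p_uniform(x):          # the injection distribution itself
    return np.full_like(x, 1.0 / 45.0)

def p_inverse_mass(x):     # an alternative population model, p(m) ∝ 1/m
    return (1.0 / x) / np.log(50.0 / 5.0)

v_flat = weighted_sensitive_volume(m, detected, p_uniform, p_uniform, 1.0)
v_inv = weighted_sensitive_volume(m, detected, p_inverse_mass, p_uniform, 1.0)
# the 1/m population favours lighter, harder-to-detect systems, so v_inv < v_flat
```

Swapping in a new population model only changes `p_pop`; no new injections are searched, which is the computational saving the letter describes.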
Global Impact Estimation of ISO 50001 Energy Management System for Industrial and Service Sectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aghajanzadeh, Arian; Therkelsen, Peter L.; Rao, Prakash
A methodology has been developed to determine the impacts of ISO 50001 Energy Management System (EnMS) at a region or country level. The impacts of ISO 50001 EnMS include energy, CO2 emissions, and cost savings. This internationally recognized and transparent methodology has been embodied in a user friendly Microsoft Excel® based tool called ISO 50001 Impact Estimator Tool (IET 50001). However, the tool inputs are critical in order to get accurate and defensible results. This report is intended to document the data sources used and assumptions made to calculate the global impact of ISO 50001 EnMS.
Scribbans, T D; Berg, K; Narazaki, K; Janssen, I; Gurd, B J
2015-09-01
There is currently little information regarding the ability of metabolic prediction equations to accurately predict oxygen uptake and exercise intensity from heart rate (HR) during intermittent sport. The purpose of the present study was to develop and cross-validate equations for accurately predicting oxygen cost (VO2) and energy expenditure from HR during intermittent sport participation. Eleven healthy adult males (19.9±1.1 yrs) were recruited to establish the relationship between %VO2peak and %HRmax during low-intensity steady-state endurance (END), moderate-intensity interval (MOD), and high-intensity interval (HI) exercise performed on a cycle ergometer. Three equations (END, MOD, and HI) for predicting %VO2peak from %HRmax were developed. HR and VO2 were directly measured during basketball games (6 males, 20.8±1.0 yrs; 6 females, 20.0±1.3 yrs) and volleyball drills (12 females, 20.8±1.0 yrs). Comparisons were made between measured and predicted VO2 and energy expenditure using the 3 newly developed equations and 2 previously published equations. The END and MOD equations accurately predicted VO2 and energy expenditure, while the HI equation underestimated, and the previously published equations systematically overestimated, VO2 and energy expenditure. Intermittent sport VO2 and energy expenditure can therefore be accurately predicted from heart rate data using either the END (%VO2peak = %HRmax x 1.008 - 17.17) or MOD (%VO2peak = %HRmax x 1.2 - 32) equation. These 2 simple equations provide an accessible and cost-effective method for accurately estimating exercise intensity and energy expenditure during intermittent sport.
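The two reported regression equations are simple enough to apply directly. The sketch below encodes them; the equations come from the abstract, while the helper name and the example heart rate are ours.

```python
def predict_vo2peak_pct(hr_pct, mode):
    """Predict %VO2peak from %HRmax using the study's END or MOD equation."""
    if mode == "END":              # low-intensity steady-state endurance
        return hr_pct * 1.008 - 17.17
    if mode == "MOD":              # moderate-intensity interval
        return hr_pct * 1.2 - 32.0
    raise ValueError("mode must be 'END' or 'MOD'")

# At 80% of HRmax the two equations nearly agree:
end = predict_vo2peak_pct(80, "END")   # 80 * 1.008 - 17.17 ≈ 63.5
mod = predict_vo2peak_pct(80, "MOD")   # 80 * 1.2 - 32 = 64.0
```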
A new method for examining the cost savings of reducing COPD exacerbations.
Mapel, Douglas W; Schum, Michael; Lydick, Eva; Marton, Jeno P
2010-01-01
Some treatments for chronic obstructive pulmonary disease (COPD) can reduce exacerbations, and thus could have a favourable impact on overall healthcare costs. To evaluate a new method for assessing the potential cost savings of COPD controller medications based on the incidence of exacerbations and their related resource utilization in the general population. Patients with COPD (n = 1074) enrolled in a regional managed care system in the US were identified using administrative data and divided by their medication use into three groups (salbutamol, ipratropium and salmeterol). Exacerbations were captured using International Classification of Diseases, Ninth Edition (ICD-9) and current procedural terminology (CPT) codes, then logistic regression models were created that described the risk of exacerbations for each comparator group and exacerbation type over a 6-month period. A Monte Carlo simulation was then applied 1000 times to provide the range of potential exacerbation reductions and cost consequences in response to a range of hypothetical examples of COPD controller medications. Exacerbation events for each group could be modelled such that the events predicted by the Monte Carlo estimates were very close to the actual prevalences. The estimated cost per exacerbation avoided depended on the incidence of exacerbation in the various subpopulations, the assumed relative risk reduction, the projected daily cost for new therapy, and the costs of exacerbation treatment. COPD exacerbation events can be accurately modelled from the healthcare utilization data of a defined cohort with sufficient accuracy for cost-effectiveness analysis. Treatments that reduce the risk or severity of exacerbations are likely to be cost effective among those patients who have frequent exacerbations and hospitalizations.
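The core of such a simulation is small. The sketch below is a hypothetical re-creation, not the paper's model: the baseline exacerbation probability, relative risk reduction, drug cost, and cohort size are all invented, and a Bernoulli draw per patient stands in for the fitted logistic regressions.

```python
import random

random.seed(1)

p_base = 0.30              # assumed 6-month exacerbation probability
rrr = 0.25                 # hypothetical relative risk reduction of new drug
drug_cost = 2.0 * 182      # hypothetical daily drug cost over ~6 months
n_patients, n_sims = 1000, 1000

def simulate(p):
    """One MC replicate: exacerbation count in a cohort of n_patients."""
    return sum(random.random() < p for _ in range(n_patients))

# Repeat the cohort simulation to get the range of avoided exacerbations.
avoided = [simulate(p_base) - simulate(p_base * (1 - rrr))
           for _ in range(n_sims)]
mean_avoided = sum(avoided) / n_sims

# Cost per exacerbation avoided = incremental drug spend / events avoided.
cost_per_avoided = drug_cost * n_patients / mean_avoided
```

Replacing the fixed `p_base` with per-patient probabilities from a fitted logistic model, as the paper does, changes only the `simulate` step.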
The effect of misclassification errors on case mix measurement.
Sutherland, Jason M; Botz, Chas K
2006-12-01
Case mix systems have been implemented for hospital reimbursement and performance measurement across Europe and North America. Case mix categorizes patients into discrete groups based on clinical information obtained from patient charts, in an attempt to identify clinical or cost differences among these groups. The diagnosis related group (DRG) case mix system is the most common methodology, with variants adopted in many countries. External validation studies of coding quality have confirmed that widespread variability exists between originally recorded diagnoses and re-abstracted clinical information. DRG assignment errors in hospitals that share patient-level cost data for the purpose of establishing cost weights affect cost weight accuracy. The purpose of this study is to estimate bias in cost weights due to measurement error in reported clinical information. DRG assignment error rates are simulated based on recent clinical re-abstraction study results. Our simulation study estimates that 47% of cost weights representing the least severe cases are overweighted by 10%, while 32% of cost weights representing the most severe cases are underweighted by 10%. Applying the simulated weights to a cross-section of hospitals, we find that teaching hospitals tend to be underweighted. Since inaccurate cost weights challenge the ability of case mix systems to accurately reflect patient mix and may distort hospital funding, bias in hospital case mix measurement highlights the role that clinical data quality plays in hospital funding in countries that use DRG-type case mix systems. The quality of clinical information from hospitals that contribute financial data for establishing cost weights should therefore be carefully considered.
Developing a lower-cost atmospheric CO2 monitoring system using commercial NDIR sensor
NASA Astrophysics Data System (ADS)
Arzoumanian, E.; Bastos, A.; Gaynullin, B.; Laurent, O.; Vogel, F. R.
2017-12-01
Cities release about 44% of global energy-related CO2 to the atmosphere. Accurate estimates of the magnitude of anthropogenic and natural urban emissions are clearly needed to assess their influence on the carbon balance. A dense ground-based CO2 monitoring network in cities would potentially allow sector-specific CO2 emission estimates to be retrieved when combined with an atmospheric inversion framework using reasonably accurate observations (ca. 1 ppm for hourly means). One major barrier to denser observation networks is the high cost of high-precision instruments, or the high calibration cost of cheaper, unstable instruments. We have developed and tested novel inexpensive NDIR sensors for CO2 measurements that fulfil the cost and typical parameter requirements (i.e., signal stability, efficient handling, and connectivity) necessary for this task. Such sensors are essential to the market for city emission estimates from continuous monitoring networks, as well as for leak detection in MRV (monitoring, reporting, and verification) services for industrial sites. We conducted extensive laboratory tests (short- and long-term repeatability, cross-sensitivities, etc.) on a series of prototypes, and the final versions were also tested in a climatic chamber. On four final HPP prototypes, the sensitivities to pressure and temperature were precisely quantified and correction and calibration strategies developed. Furthermore, we fully integrated these HPP sensors into a Raspberry Pi platform containing the CO2 sensor, additional pressure, temperature, and humidity sensors, a gas supply pump, and a fully automated data acquisition unit. This platform was deployed alongside Picarro G2401 instruments at the peri-urban site Saclay, near Paris, and at the urban site Jussieu, in Paris, France.
These measurements were conducted over several months in order to characterize the long-term drift of our HPP instruments and the ability of the correction and calibration scheme to provide bias-free observations. From the lessons learned in the laboratory tests and field measurements, we developed a specific correction and calibration strategy for our NDIR sensors. The latest results and calibration strategies will be shown.
Leacock, William B.; Eby, Lisa A.; Stanford, Jack A.
2016-01-01
Accurately estimating population sizes is often a critical component of fisheries research and management. Although there is growing appreciation of the importance of small-scale salmon population dynamics to the stability of salmon stock complexes, our understanding of these populations is constrained by a lack of efficient and cost-effective monitoring tools for streams. Weirs are expensive, labor intensive, and can disrupt natural fish movements. While conventional video systems avoid some of these shortcomings, they are expensive and require excessive amounts of labor to review footage for data collection. Here, we present a novel method for quantifying salmon in small streams (<15 m wide, <1 m deep) that uses both time-lapse photography and video in a model-based double sampling scheme. This method produces an escapement estimate nearly as accurate as a video-only approach, but with substantially less labor, money, and effort. It requires servicing only every 14 days, detects salmon 24 h/day, is inexpensive, and produces escapement estimates with confidence intervals. In addition to escapement estimation, we present a method for estimating in-stream salmon abundance over time, data needed by researchers interested in predator-prey interactions or nutrient subsidies. We combined daily salmon passage estimates with stream-specific estimates of daily mortality developed using previously published data. To demonstrate proof of concept for these methods, we present results from two streams on southwest Kodiak Island, Alaska, in which high densities of sockeye salmon spawn. PMID:27326378
Winzer, Eva; Luger, Maria; Schindler, Karin
2018-06-01
Regular monitoring of food intake is rarely integrated into clinical routine. The aim was therefore to examine the validity, accuracy, and applicability of an appropriate, quick, and easy-to-use tool for recording food intake in a clinical setting. Two digital photography methods, the postMeal method with a picture after the meal and the pre-postMeal method with a picture before and after the meal, and the visual estimation method (plate diagram; PD) were compared against the reference method (weighed food records; WFR). A total of 420 dishes from lunch (7 weeks) were estimated with both photography methods and the visual method. Validity, applicability, accuracy, and precision of the estimation methods, and additionally food waste, macronutrient composition, and energy content, were examined. Tests of validity revealed stronger correlations for the photography methods (postMeal: r = 0.971, p < 0.001; pre-postMeal: r = 0.995, p < 0.001) than for the visual estimation method (r = 0.810; p < 0.001). The pre-postMeal method showed smaller variability (bias < 1 g) as well as smaller overestimation and underestimation, accurately and precisely estimating portion sizes in all food items. Furthermore, total food waste was 22% for lunch over the study period; food waste was highest in salads and lowest in desserts. The pre-postMeal digital photography method is valid, accurate, and applicable for monitoring food intake in a clinical setting, enabling quantitative and qualitative dietary assessment; thus, nutritional care might be initiated earlier. This method might also be advantageous for the quantitative and qualitative evaluation of food waste, with a resultant reduction in costs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jie; Draxl, Caroline; Hopson, Thomas
Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low-resolution NWP model in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper compares three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse-resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine-resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and the WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble's computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on combining the analog ensemble with intermediate-resolution (e.g., 10-15 km) NWP estimates to considerably reduce the computational burden while providing accurate deterministic estimates and reliable probabilistic assessments.
Hoffmann, Stefan A; Wohltat, Christian; Müller, Kristian M; Arndt, Katja M
2017-01-01
For various experimental applications, microbial cultures at defined, constant densities are highly advantageous over simple batch cultures. Due to high costs, however, devices for continuous culture at freely defined densities are still in limited use. We have developed a small-scale turbidostat for research purposes, manufactured from inexpensive components and 3D-printed parts. A high degree of spatial system integration and a graphical user interface provide user-friendly operability. The optical density feedback control employed allows constant continuous culture across a wide range of densities and permits culture volume and dilution rate to be varied without additional parametrization. Further, a recursive algorithm for on-line growth rate estimation has been implemented. The Kalman filtering approach employed, based on a very general state model, retains the flexibility of the control type used and can be easily adapted to other bioreactor designs. Within several minutes it can converge to robust, accurate growth rate estimates. This is particularly useful for directed evolution experiments or studies on metabolic challenges, as it allows direct monitoring of population fitness.
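A recursive growth-rate estimator of this general kind can be sketched with a two-state Kalman filter tracking log optical density and growth rate, assuming exponential growth (d log OD / dt = mu). This is our minimal illustration, not the authors' implementation; the noise settings, sampling interval, and synthetic data are invented.

```python
import numpy as np

dt = 1.0 / 60.0                        # measurement interval (h)
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition for x = [log OD, mu]
H = np.array([[1.0, 0.0]])             # only log OD is observed
Q = np.diag([1e-7, 1e-7])              # process noise (illustrative)
R = np.array([[1e-4]])                 # measurement noise (std 0.01)

def kalman_growth_rate(log_od):
    """Run the filter over a log-OD trace; return the final growth-rate estimate."""
    x = np.array([log_od[0], 0.0])     # start with no growth assumed
    P = np.eye(2)
    for z in log_od[1:]:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x[1]                        # estimated growth rate (1/h)

# Synthetic culture growing at mu = 0.5/h, sampled each minute for 3 h:
t = np.arange(0, 3, dt)
rng = np.random.default_rng(0)
log_od = np.log(0.05) + 0.5 * t + rng.normal(0, 0.01, t.size)
mu_hat = kalman_growth_rate(log_od)
```

The filter converges toward the true rate within the trace, mirroring the few-minute convergence the authors report for their own (more general) state model.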
Zhao, Lin; Guan, Dongxue; Landry, René Jr.; Cheng, Jianhua; Sydorenko, Kostyantyn
2015-01-01
Target positioning systems based on MEMS gyros and laser rangefinders (LRs) have broad prospects due to their low cost, small size, and easy realization. The target positioning accuracy is mainly determined by the LR's attitude, which is derived from the gyros. However, the attitude error is large due to the inherent noise of isolated MEMS gyros. In this paper, both accelerometer/magnetometer and LR attitude-aiding systems are introduced to aid the MEMS gyros. A no-reset Federated Kalman Filter (FKF) is employed, which consists of two local Kalman Filters (KFs) and a Master Filter (MF). The local KFs are designed using Direction Cosine Matrix (DCM)-based dynamic equations and the measurements from the two aiding systems. The KFs estimate the attitude simultaneously to limit the attitude errors resulting from the gyros. The MF then fuses the redundant attitude estimates to yield globally optimal estimates. Simulation and experimental results demonstrate that the FKF-based system effectively improves target positioning accuracy and offers good fault tolerance. PMID:26512672
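The master-filter fusion step amounts to information (inverse-covariance) averaging of the local estimates. The sketch below shows only that step; the attitude estimates and covariances are invented, and this is not the paper's full DCM-based filter.

```python
import numpy as np

def fuse_estimates(states, covs):
    """No-reset federated fusion: combine local estimates by information averaging."""
    info = sum(np.linalg.inv(P) for P in covs)          # total information
    P_g = np.linalg.inv(info)                            # global covariance
    x_g = P_g @ sum(np.linalg.inv(P) @ x for x, P in zip(states, covs))
    return x_g, P_g

# Hypothetical roll/pitch/yaw estimates (deg) from the two local filters:
x_am = np.array([10.2, -4.9, 30.5])   # accelerometer/magnetometer-aided KF
P_am = np.diag([0.5, 0.5, 2.0])       # poor yaw from this aiding source
x_lr = np.array([9.8, -5.1, 29.9])    # laser-rangefinder-aided KF
P_lr = np.diag([1.0, 1.0, 0.2])       # good yaw from this aiding source

x_g, P_g = fuse_estimates([x_am, x_lr], [P_am, P_lr])
# The fused yaw leans toward the LR estimate, whose yaw variance is smaller.
```

Fault tolerance follows naturally: a failed local filter can simply be dropped from `states`/`covs` before fusing.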
Polinski, Jennifer M.; Maclure, Malcolm; Marshall, Blair; Cassels, Alan; Agnew-Blais, Jessica; Patrick, Amanda R.; Schneeweiss, Sebastian
2010-01-01
Background British Columbia implemented generic substitution (GS) and a Reference Drug Program (RDP) to contain drug expenditures without negatively affecting health outcomes. Years after implementation, these policies remain controversial among physicians. Objective To assess British Columbia general practitioners' (GPs) opinions of RDP and GS, stratified by knowledge of drug costs. Methods In telephone interviews, GPs ranked the economic and clinical appropriateness of drug policy options on a 5-point Likert scale. Responses to economic questions were stratified and compared according to the accuracy (±$10 of the actual cost) of GPs' cost estimates for a 30-day supply of atorvastatin and omeprazole. Results The majority of 210 interviewed GPs rated the economic appropriateness of GS and RDP positively (79% and 65%), but fewer rated them clinically appropriate (60% and 43%). Ratings for GS were more favorable than for RDP, both economically (mean=4.3 v. 3.8, p=0.0005) and clinically (mean=3.7 v. 3.1, p=0.006). GPs' assessments of the therapeutic equivalence among ACE inhibitors and among CCBs correlated with their ratings of the respective RDPs (ρ=0.3, p=0.03, and ρ=0.4, p=0.02). GPs underestimated the price of omeprazole by C$28 (33%) and of atorvastatin by C$28 (34%). GPs with accurate cost estimates were as likely to favorably rank the economic appropriateness of RDP as those with inaccurate estimates (mean = 3.7 v. 4.0, p=0.0847). GS was assessed similarly (mean = 4.2 v. 4.5, p=0.0712). Conclusions In British Columbia, the majority of GPs hold favorable opinions of GS and RDP, but simply educating physicians about drug prices will not make them more supportive of cost-containment policies. PMID:18641423
Coplanar electrode microfluidic chip enabling accurate sheathless impedance cytometry.
De Ninno, Adele; Errico, Vito; Bertani, Francesca Romana; Businaro, Luca; Bisegna, Paolo; Caselli, Federica
2017-03-14
Microfluidic impedance cytometry offers a simple non-invasive method for single-cell analysis. Coplanar electrode chips are especially attractive due to their ease of fabrication, yielding miniaturized, reproducible, and ultimately low-cost devices. However, their accuracy is challenged by the dependence of the measured signal on the particle trajectory within the interrogation volume, which manifests itself as an error in the estimated particle size unless a focusing system is used. In this paper, we present an original five-electrode coplanar chip enabling accurate particle sizing without the need for focusing. The chip layout is designed to provide a distinctive signal shape from which a new metric correlating with particle trajectory can be extracted. This metric is exploited to correct the estimated size of polystyrene beads of 5.2, 6, and 7 μm nominal diameter, reaching coefficients of variation lower than the manufacturers' quoted values. The potential impact of the proposed device in the field of the life sciences is demonstrated with an application to Saccharomyces cerevisiae yeast.
Estimating zero-g flow rates in open channels having capillary pumping vanes
NASA Astrophysics Data System (ADS)
Srinivasan, Radhakrishnan
2003-02-01
In vane-type surface tension propellant management devices (PMDs), commonly used in satellite fuel tanks, the propellant is transported along guiding vanes from a reservoir at the inlet of the device to a sump at the outlet, from where it is pumped to the satellite engine. The pressure gradient driving this free-surface flow under zero-gravity (zero-g) conditions is generated by surface tension and is related to the differential curvatures of the propellant-gas interface at the inlet and outlet of the PMD. A new semi-analytical procedure is presented for accurately calculating the extremely small fuel flow rates under reasonably idealized conditions. Convergence of the algorithm is demonstrated by detailed numerical calculations. Owing to the substantial cost and technical hurdles involved in accurately estimating these minuscule flow rates, whether by direct numerical simulation or by experimental methods that simulate zero-g conditions in the lab, the proposed method is expected to be an indispensable tool in the design and operation of satellite fuel tanks.
Young, David W
2015-11-01
Historically, hospital departments have computed the costs of individual tests or procedures using the ratio of cost to charges (RCC) method, which can produce inaccurate results. To determine a more accurate cost of a test or procedure, the activity-based costing (ABC) method must be used. Accurate cost calculations will ensure reliable information about the profitability of a hospital's DRGs.
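The contrast between the two costing methods is easy to see with toy numbers (all invented): RCC scales a test's charge by a department-wide cost-to-charge ratio, while ABC sums the costs of the activities the test actually consumes.

```python
# RCC: scale the charge by the department-wide cost-to-charge ratio.
dept_costs, dept_charges = 800_000.0, 2_000_000.0
rcc = dept_costs / dept_charges           # 0.40
charge_for_test = 150.0
rcc_cost = rcc * charge_for_test          # 60.0

# ABC: trace the resources the test actually consumes.
activities = {
    "technician time (0.5 h @ $40/h)": 0.5 * 40,
    "reagents": 12.0,
    "equipment time (0.2 h @ $25/h)": 0.2 * 25,
    "overhead allocation": 8.0,
}
abc_cost = sum(activities.values())       # 45.0
```

A test whose charge is high relative to its true resource use looks more expensive under RCC (here $60) than under ABC (here $45), which is exactly the kind of distortion that can misstate a DRG's profitability.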
Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
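The two main estimator families compared above can be sketched as follows, with unit area as the auxiliary variable. The clustered counts and every parameter below are synthetic, for illustration only; this is not the study's data.

```python
import random

random.seed(42)

# A clustered population over N sampling units of varying area:
N = 100
areas = [random.uniform(1, 5) for _ in range(N)]
# most units hold no pronghorn; a few hold large, area-related groups
counts = [0 if random.random() < 0.8 else int(20 * a) for a in areas]
total_true = sum(counts)
total_area = sum(areas)

# Simple random sample without replacement at ~33% intensity:
n = 33
sample = random.sample(range(N), n)
y = [counts[i] for i in sample]   # counts observed in sampled units
x = [areas[i] for i in sample]    # auxiliary variable (unit area)

simple_est = N * sum(y) / n                 # expand the sample mean count
ratio_est = total_area * sum(y) / sum(x)    # expand the count-per-area ratio
```

With counts this clumped, both estimators are unbiased but noisy, and the area-based ratio estimator gains little, which mirrors the study's finding for aggregated populations.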
On the value of the phenotypes in the genomic era.
Gonzalez-Recio, O; Coffey, M P; Pryce, J E
2014-12-01
Genetic improvement programs around the world rely on the collection of accurate phenotypic data. These phenotypes have an inherent value that can be estimated as the contribution of an additional record to genetic gain. Here, the contribution of phenotypes to genetic gain was calculated using traditional progeny testing (PT) and 2 genomic selection (GS) strategies that, for simplicity, included either males or females in the reference population. A procedure to estimate the theoretical economic contribution of a phenotype to a breeding program is described for both GS and PT breeding programs, through the increment in genetic gain per unit of increase in estimated breeding value reliability obtained when an additional phenotypic record is added. The main factors affecting the value of a phenotype were the economic value of the trait, the number of phenotypic records already available for the trait, and its heritability. The value of a phenotype was also affected by several other factors, including the cost of establishing the breeding program and the costs of phenotyping and genotyping. The cost of achieving a reliability of 0.60 was assessed for different GS reference populations. Genomic reference populations built from more sires with small progeny group sizes (e.g., 20 equivalent daughters) had a lower cost than reference populations with large progeny group sizes for fewer genotyped sires, or than female reference populations, unless the heritability was large and the cost of phenotyping exceeded a few hundred dollars; in that case, female reference populations were preferable from an economic perspective.
Cravotta, Charles A.; Means, Brent P; Arthur, Willam; McKenzie, Robert M; Parkhurst, David L.
2015-01-01
Alkaline chemicals are commonly added to discharges from coal mines to increase pH and decrease concentrations of acidity and dissolved aluminum, iron, manganese, and associated metals. The annual cost of chemical treatment depends on the type and quantities of chemicals added and sludge produced. The AMDTreat computer program, initially developed in 2003, is widely used to compute such costs on the basis of the user-specified flow rate and water quality data for the untreated AMD. Although AMDTreat can use results of empirical titration of net-acidic or net-alkaline effluent with caustic chemicals to accurately estimate costs for treatment, such empirical data are rarely available. A titration simulation module using the geochemical program PHREEQC has been incorporated with AMDTreat 5.0+ to improve the capability of AMDTreat to estimate: (1) the quantity and cost of caustic chemicals to attain a target pH, (2) the chemical composition of the treated effluent, and (3) the volume of sludge produced by the treatment. The simulated titration results for selected caustic chemicals (NaOH, CaO, Ca(OH)2, Na2CO3, or NH3) without aeration or with pre-aeration can be compared with or used in place of empirical titration data to estimate chemical quantities, treated effluent composition, sludge volume (precipitated metals plus unreacted chemical), and associated treatment costs. This paper describes the development, evaluation, and potential utilization of the PHREEQC titration module with the new AMDTreat 5.0+ computer program available at http://www.amd.osmre.gov/.
Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok
2016-01-01
This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293
Economic impact of medication non-adherence by disease groups: a systematic review
Fernandez-Llimos, Fernando; Frommer, Michael; Benrimoj, Charlie; Garcia-Cardenas, Victoria
2018-01-01
Objective To determine the economic impact of medication non-adherence across multiple disease groups. Design Systematic review. Evidence review A comprehensive literature search was conducted in PubMed and Scopus in September 2017. Studies quantifying the cost of medication non-adherence in relation to economic impact were included. Relevant information was extracted and quality was assessed using the Drummond checklist. Results Seventy-nine individual studies assessing the cost of medication non-adherence across 14 disease groups were included. Wide-ranging cost variations were reported, with lower levels of adherence generally associated with higher total costs. The annual adjusted disease-specific economic cost of non-adherence per person ranged from $949 to $44 190 (in 2015 US$). Costs attributed to ‘all causes’ non-adherence ranged from $5271 to $52 341. The medication possession ratio was the metric most used to calculate patient adherence, with varying cut-off points defining non-adherence. The main indicators used to measure the cost of non-adherence were total cost or total healthcare cost (83% of studies), pharmacy costs (70%), inpatient costs (46%), outpatient costs (50%), emergency department visit costs (27%), medical costs (29%), and hospitalisation costs (18%). The Drummond quality assessment yielded 10 studies of high quality, with all studies performing partial economic evaluations to varying extents. Conclusion Medication non-adherence places a significant cost burden on healthcare systems. Current research assessing the economic impact of medication non-adherence is limited and of varying quality, failing to provide adaptable data to influence health policy. The correlation between increased non-adherence and higher disease prevalence should be used to inform policymakers to help circumvent avoidable costs to the healthcare system.
Differences in methods make the comparison among studies challenging and an accurate estimation of true magnitude of the cost impossible. Standardisation of the metric measures used to estimate medication non-adherence and development of a streamlined approach to quantify costs is required. PROSPERO registration number CRD42015027338. PMID:29358417
Testing Software Development Project Productivity Model
NASA Astrophysics Data System (ADS)
Lipkin, Ilya
Software development is an increasingly influential factor in today's business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted. No accurate model or measure is available to guide an organization's software development estimates, and existing estimation models often underestimate software development effort by as much as 500 to 600 percent. Attempts to address this issue typically calibrate existing models using local data with small sample sizes, and the resulting estimates do not offer improved cost analysis. This study presents a conceptual model for accurately estimating software development effort, grounded in an extensive literature review and a theoretical analysis drawing on Sociotechnical Systems (STS) theory. The conceptual model serves as a bridge between organizational and technological factors and is validated using an empirical dataset provided by the DoD. Practical implications of this study allow practitioners to concentrate on the specific constructs that provide the best value for the least amount of time. This study outlines the key contributing constructs for Software Size (E-SLOC), Man-hours Spent, and Quality of the Product, these being the constructs with the largest contribution to project productivity. This study also discusses customer characteristics and provides a framework for simplified project analysis in source selection evaluation and audit task reviews for customers and suppliers.
Theoretical contributions of this study include an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains, such as IT, command and control, and simulation. This research validates findings from previous work on software project productivity and builds on those results. The hypothesized project productivity model provides statistical support and validation for expert opinions used by practitioners in the field of software project estimation.
Williams, R B
1999-08-01
A compartmentalised model is presented for the estimation of the monetary losses suffered by the world's poultry industry resulting from coccidiosis of chickens and the costs of its control. The model is designed so that the major elements of loss may be separately quantified for any chicken-producing entity, e.g., a farm, a poultry company, a country, etc. Examples are presented, and the sources, reliability and geographical relevance of the data used for each parameter are provided. Loss elements for specific geographical areas should be recalculated at appropriate intervals to take into account local and international fluctuations in the costs of chicks, feed and labour, financial inflation, and world currency exchange rates. Equations are given for relationships among numbers of chickens, liveweights, weights of carcasses, feed consumption, feed conversion ratio (FCR), prices of feeds, prices of anticoccidial therapeutic and prophylactic drugs, values of chickens, chicken rearing costs, and the effects of coccidiosis on mortality, weight gain and FCR. Using these equations, it is theoretically possible for an international team of representatives, each using reliable local data, to calculate simultaneously each relevant loss element for their respective countries. Addition of these elements could give, for the first time, an accurate global estimate of the losses due to chicken coccidiosis. The total cost of coccidiosis in chickens in the United Kingdom in 1995 is estimated to have been at least £38,588,795, of which 98.1% involved broilers (80.6% due to effects on mortality, weight gain and feed conversion, and 17.5% due to the cost of chemoprophylaxis and therapy). The costs of poor performance due to coccidiosis and its chemical control totalled 4.54% of the gross revenue from UK sales of live broilers. This model includes a new method for comparing the profitabilities of different treatments in commercial trials.
The method provides actual costs rather than the arbitrary numerical scores of other methods. Although originally designed for the study of coccidiosis, the model is equally applicable to any disease. It should be of value to agricultural economists, the animal feed and poultry industries, animal health companies, and research scientists (particularly when preparing grant applications).
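The model's major loss elements can be sketched along the following lines; this is a simplified illustration with assumed parameter names and invented values, not the paper's exact equations:

```python
def coccidiosis_losses(n_birds, mortality_frac, value_per_bird,
                       weight_loss_kg, price_per_kg,
                       extra_feed_kg, feed_price_per_kg,
                       prophylaxis_cost, therapy_cost):
    """Sum the major loss elements of a compartmentalised model:
    deaths, reduced weight gain, worsened feed conversion (extra feed
    per surviving bird), and chemoprophylaxis/therapy costs."""
    mortality_loss = n_birds * mortality_frac * value_per_bird
    survivors = n_birds * (1 - mortality_frac)
    gain_loss = survivors * weight_loss_kg * price_per_kg
    fcr_loss = survivors * extra_feed_kg * feed_price_per_kg
    control_cost = prophylaxis_cost + therapy_cost
    return mortality_loss + gain_loss + fcr_loss + control_cost

# Invented figures for a hypothetical 100,000-bird broiler operation
total = coccidiosis_losses(
    n_birds=100_000, mortality_frac=0.01, value_per_bird=1.50,
    weight_loss_kg=0.05, price_per_kg=0.80,
    extra_feed_kg=0.10, feed_price_per_kg=0.20,
    prophylaxis_cost=4_000.0, therapy_cost=500.0)
print(round(total, 2))   # 11940.0
```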
Department of Defense Progress in Financial Management Reform
2000-05-09
financial reporting, incomplete documentation, and weak internal controls, including computer controls, continue to prevent the government from accurately reporting a significant portion of its assets, liabilities, and costs. Material financial management deficiencies identified at DOD, taken together, continue to represent the single largest obstacle that must be effectively addressed to achieve an unqualified opinion on the U.S. government’s consolidated financial statements. DOD’s vast operations--with an estimated $1 trillion in assets, nearly $1
Velayudhan, D. E.; Kim, I. H.; Nyachoti, C. M.
2015-01-01
Feed is the single most expensive input in commercial pork production, representing more than 50% of the total cost of production. The greatest proportion of this cost is associated with the energy component, thus making energy the most important dietary component in terms of cost. For efficient pork production, it is imperative that diets are formulated to accurately match dietary energy supply to requirements for maintenance and productive functions. To achieve this goal, it is critical that the energy value of feeds is precisely determined and that the energy system that best meets the energy needs of a pig is used. Therefore, the present review focuses on dietary energy supply and needs for pigs and the available energy systems for formulating swine diets, with particular emphasis on the net energy system. In addition to providing a more accurate estimate of the energy available to the animal in an ingredient and the subsequent diet, diets formulated using this system are typically lower in crude protein, which leads to additional benefits in terms of reduced nitrogen excretion and consequent environmental pollution. Furthermore, using the net energy system may reduce diet cost, as it allows for increased use of feedstuffs containing fibre in place of feedstuffs containing starch. A brief review of the use of distillers dried grains with solubles in swine diets as an energy source is included. PMID:25557670
Giorda, C B; Rossi, M C; Ozzello, O; Gentile, S; Aglialoro, A; Chiambretti, A; Baccetti, F; Gentile, F M; Romeo, F; Lucisano, G; Nicolucci, A
2017-03-01
To obtain an accurate picture of the total costs of hypoglycemia, including the indirect costs and comparing the differences between type 1 (T1DM) and type 2 diabetes mellitus (T2DM). HYPOS-1 was a multicenter, retrospective cohort study which analyzed the data of 2229 consecutive patients seen at 18 diabetes clinics. Data on healthcare resource use and indirect costs by diabetes type were collected via a questionnaire. The domains of inpatient admission and hospital stay, work days lost, and third-party assistance were also explored. Resource utilization was reported as estimated incidence rates (IRs) of hypoglycemic episodes per 100 person-years and estimated costs as IRs per person-years. For every 100 patients with T1DM, 9 emergency room (ER) visits and 6 emergency medical service calls for hypoglycemia were required per year; for every 100 patients with T2DM, 3 ER visits and 1 inpatient admission were required, with over 3 nights spent in hospital. Hypoglycemia led to 58 work days per 100 person-years lost by the patient or a family member in T1DM versus 19 in T2DM. The costs in T1DM totaled €90.99 per person-year and €62.04 in T2DM. Direct and indirect costs making up the total differed by type of diabetes (60% indirect costs in T1DM versus 43% in T2DM). The total cost associated with hypoglycemia in Italy is estimated to be €107 million per year. Indirect costs meaningfully contribute to the total costs associated with hypoglycemia. As compared with T1DM, T2DM requires fewer ER visits and incurs lower indirect costs but more frequent hospital use. Copyright © 2016 The Italian Society of Diabetology, the Italian Society for the Study of Atherosclerosis, the Italian Society of Human Nutrition, and the Department of Clinical Medicine and Surgery, Federico II University. Published by Elsevier B.V. All rights reserved.
Vannecke, T P W; Lampens, D R A; Ekama, G A; Volcke, E I P
2015-01-01
Simple titration methods certainly deserve consideration for on-site routine monitoring of volatile fatty acid (VFA) concentration and alkalinity during anaerobic digestion (AD), because of their simplicity, speed and cost-effectiveness. In this study, the 5 and 8 pH point titration methods for measuring the VFA concentration and carbonate system alkalinity (H2CO3*-alkalinity) were assessed and compared. For this purpose, synthetic solutions with known H2CO3*-alkalinity and VFA concentration as well as samples from anaerobic digesters treating three different kind of solid wastes were analysed. The results of these two related titration methods were verified with photometric and high-pressure liquid chromatography measurements. It was shown that photometric measurements lead to overestimations of the VFA concentration in the case of coloured samples. In contrast, the 5 pH point titration method provides an accurate estimation of the VFA concentration, clearly corresponding with the true value. Concerning the H2CO3*-alkalinity, the most accurate and precise estimations, showing very similar results for repeated measurements, were obtained using the 8 pH point titration. Overall, it was concluded that the 5 pH point titration method is the preferred method for the practical monitoring of AD of solid wastes due to its robustness, cost efficiency and user-friendliness.
Cost effectiveness of the stream-gaging program in northeastern California
Hoffard, S.H.; Pearce, V.F.; Tasker, Gary D.; Doyle, W.H.
1984-01-01
The results of a study of the cost effectiveness of the stream-gaging program in northeastern California are documented. Data uses and funding sources were identified for the 127 continuous stream gages currently operated in the study area. One stream gage was found to have insufficient data use to warrant cooperative Federal funding. Flow-routing and multiple-regression models were used to simulate flows at selected gaging stations. The models may be sufficiently accurate to replace two of the stations. The average standard error of estimate of streamflow records is 12.9 percent. This standard error could be reduced to 12.0 percent by using computer-recommended service routes and visit frequencies. (USGS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennink, Ryan S.; Ferragut, Erik M.; Humble, Travis S.
Modeling and simulation are essential for predicting and verifying the behavior of fabricated quantum circuits, but existing simulation methods are either impractically costly or require an unrealistic simplification of error processes. In this paper, we present a method of simulating noisy Clifford circuits that is both accurate and practical in experimentally relevant regimes. In particular, the cost is weakly exponential in the size and the degree of non-Cliffordness of the circuit. Our approach is based on the construction of exact representations of quantum channels as quasiprobability distributions over stabilizer operations, which are then sampled, simulated, and weighted to yield unbiased statistical estimates of circuit outputs and other observables. As a demonstration of these techniques, we simulate a Steane [[7,1,3]] code.
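The quasiprobability sampling idea can be shown with a toy sketch: a target quantity written as a signed combination of easy-to-simulate operations is estimated without bias by drawing operation i with probability |q_i|/||q||_1 and weighting each outcome by sign(q_i)·||q||_1. The two-element decomposition and values below are invented for illustration and are not the paper's construction:

```python
import random

# Toy quasiprobability decomposition: a target quantity written as a
# signed mixture of two easy-to-simulate operations (values invented).
q = [1.3, -0.3]   # quasiprobabilities; sum to 1 but one is negative
v = [1.0, 0.5]    # observable value produced by each operation

one_norm = sum(abs(x) for x in q)   # sampling overhead ||q||_1
p0 = abs(q[0]) / one_norm           # probability of drawing operation 0

def estimate(n_samples, rng):
    """Unbiased Monte Carlo estimate of sum_i q_i * v_i: draw op i with
    probability |q_i|/||q||_1, weight the outcome by sign(q_i)*||q||_1."""
    total = 0.0
    for _ in range(n_samples):
        i = 0 if rng.random() < p0 else 1
        sign = 1.0 if q[i] >= 0 else -1.0
        total += sign * one_norm * v[i]
    return total / n_samples

exact = sum(qi * vi for qi, vi in zip(q, v))   # 1.3*1.0 - 0.3*0.5 = 1.15
print(abs(estimate(100_000, random.Random(0)) - exact) < 0.05)   # True
```

The negative quasiprobability inflates the variance by the factor ||q||_1, which is why the paper's cost grows with the degree of non-Cliffordness.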
Results of an exercise to estimate the costs of interpersonal violence in Jamaica.
Ward, E; McCartney, T; Brown, D W; Grant, A; Butchart, A; Taylor, M; Bhoorasingh, P; Wong, H; Morris, C; Deans-Clarke, A M; East, J; Valentine, C; Dundas, S; Pinnock, C
2009-11-01
This report describes the application of a draft version of the World Health Organization (WHO)/United States Centers for Disease Control and Prevention (CDC) Manual for estimating the economic costs of injuries due to interpersonal and self-directed violence to measure the costs of injuries from interpersonal violence. Fatal incidence data were obtained from the Jamaica Constabulary Force. The incidence of nonfatal violence-related injuries that required hospitalization was estimated using data obtained from patients treated at and/or admitted to three Type A government hospitals in 2006. During 2006, the direct medical cost (J$2.1 billion) of injuries due to interpersonal violence accounted for about 12% of Jamaica's total health budget, while productivity losses due to violence-related injuries accounted for approximately J$27.5 billion, or 160% of Jamaica's total health expenditure and 4% of Jamaica's Gross Domestic Product. The availability of accurate and reliable data of the highest quality from health-related information systems is critical for providing useful data on the burden of violence and injury to decision-makers. As Ministries of Health take a leading role in violence and injury prevention, data collection and information systems must have a central role. This study describes the results of one approach to examining the economic burden of interpersonal violence in developing countries, where the burden of violence is heaviest. The WHO-CDC manual, also tested in Thailand and Brazil, is a first step towards generating a reference point for resource allocation, priority setting and prevention advocacy.
Qian, Zhi-Ming; Wang, Shuo Hong; Cheng, Xi En; Chen, Yan Qiu
2016-06-23
Fish tracking is an important step in video-based analysis of fish behavior. Due to severe body deformation and mutual occlusion of multiple swimming fish, accurate and robust fish tracking from video image sequences is a highly challenging problem. Current tracking methods based on motion information are not accurate and robust enough to track the deforming body and handle occlusion. To better overcome these problems, we propose a multiple fish tracking method based on fish head detection. The shape and gray-scale characteristics of the fish image are employed to locate the fish head position. For each detected fish head, we utilize the gray distribution of the head region to estimate the fish head direction. The position and direction information from fish detection are then combined to build a cost function of fish swimming. Based on the cost function, a global optimization method can be applied to associate targets between consecutive frames. Results show that our method can accurately detect the position and direction of fish heads and achieves good tracking performance for dozens of fish, successfully obtaining their motion trajectories and thereby providing more precise data for systematic analysis of fish behavior.
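A hedged sketch of the frame-to-frame association step: the cost of matching a detection to a track combines head-position distance and heading difference, and a globally optimal one-to-one assignment minimizes the total cost. The weights and cost form here are illustrative assumptions, not the paper's exact formulation:

```python
import math
from itertools import permutations

def pair_cost(det, trk, w_pos=1.0, w_dir=0.5):
    """Cost of matching a detection to a track: Euclidean head-position
    distance plus a weighted angular difference of heading (radians)."""
    dx = det["pos"][0] - trk["pos"][0]
    dy = det["pos"][1] - trk["pos"][1]
    dtheta = abs(det["dir"] - trk["dir"])
    dtheta = min(dtheta, 2 * math.pi - dtheta)   # wrap the angle
    return w_pos * math.hypot(dx, dy) + w_dir * dtheta

def associate(dets, trks):
    """Globally optimal one-to-one assignment by brute force; for larger
    groups a Hungarian-algorithm solver would replace this loop."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(trks))):
        c = sum(pair_cost(dets[i], trks[j]) for i, j in enumerate(perm))
        if c < best_cost:
            best, best_cost = perm, c
    return list(best)

tracks = [{"pos": (0, 0), "dir": 0.0}, {"pos": (10, 0), "dir": math.pi}]
detections = [{"pos": (9, 1), "dir": math.pi}, {"pos": (1, 0), "dir": 0.1}]
print(associate(detections, tracks))   # [1, 0]
```

Here detection 0 is matched to track 1 and detection 1 to track 0, since each is near a track with a similar heading.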
Metabolic costs of daily activity in older adults (Chores XL) study: design and methods.
Corbett, Duane B; Wanigatunga, Amal A; Valiani, Vincenzo; Handberg, Eileen M; Buford, Thomas W; Brumback, Babette; Casanova, Ramon; Janelle, Christopher M; Manini, Todd M
2017-06-01
For over 20 years, normative data have guided the prescription of physical activity. These data have since been applied in research and used to plan interventions. While they seemingly provide accurate estimates of the metabolic cost of daily activities in young adults, their accuracy for older adults is less clear. As such, a thorough evaluation of the metabolic cost of daily activities in community-dwelling adults across the lifespan is needed. The Metabolic Costs of Daily Activity in Older Adults Study is a cross-sectional study designed to compare the metabolic cost of daily activities in 250 community-dwelling adults across the lifespan. Participants (20+ years) performed 38 common daily activities while expiratory gases were measured using a portable indirect calorimeter (Cosmed K4b2). The metabolic cost was examined as a metabolic equivalent value (O2 uptake relative to 3.5 mL·kg-1·min-1), as a function of work rate (metabolic economy), and as a value relative to resting and peak oxygen uptake. The primary objective is to determine age-related differences in the metabolic cost of common lifestyle and exercise activities. Secondary objectives include (a) investigating the effect of functional impairment on the metabolic cost of daily activities, (b) evaluating the validity of perception-based measurement of exertion across the lifespan, and (c) validating activity sensors for estimating the type and intensity of physical activity. Results of this study are expected to improve the effectiveness with which physical activity and nutrition are recommended for adults across the lifespan.
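The metabolic equivalent convention used in the study can be sketched as follows (a minimal illustration; the example O2 uptake values are invented):

```python
def mets(vo2_ml_per_kg_min):
    """Metabolic equivalents: measured O2 uptake divided by the resting
    reference value of 3.5 mL·kg^-1·min^-1."""
    return vo2_ml_per_kg_min / 3.5

def fraction_of_reserve(vo2, vo2_rest, vo2_peak):
    """Express an activity's O2 cost relative to the span between an
    individual's resting and peak uptake (illustrative values below)."""
    return (vo2 - vo2_rest) / (vo2_peak - vo2_rest)

print(round(mets(12.25), 2))                            # 3.5
print(round(fraction_of_reserve(12.25, 3.5, 35.0), 3))  # 0.278
```

The same activity can therefore look "moderate" in absolute METs yet demand a much larger fraction of reserve for someone with a low peak uptake, which is one reason young-adult normative data may misrepresent costs in older adults.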
Parametric modelling of cost data in medical studies.
Nixon, R M; Thompson, S G
2004-04-30
The cost of medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BC(a) bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution, and should in principle be preferred for estimating the population mean cost. However for some data sets, we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to choice of statistical model, which itself can remain uncertain unless there is enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to choice of model should thus be an essential component of analysing cost data in practice. Copyright 2004 John Wiley & Sons, Ltd.
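To illustrate the kind of comparison involved, the following sketch fits a log-normal distribution by maximum likelihood to synthetic right-skewed cost data and compares the implied population mean with the plain sample mean. The data and parameters are invented; the paper's analyses used real study data and several candidate distributions:

```python
import math
import random
import statistics

rng = random.Random(1)
# Synthetic right-skewed costs; true population mean = exp(6 + 0.5)
costs = [math.exp(rng.gauss(6.0, 1.0)) for _ in range(500)]

# Log-normal maximum likelihood: fit mu and sigma on the log scale
logs = [math.log(c) for c in costs]
mu = statistics.fmean(logs)
sigma = statistics.pstdev(logs)                 # MLE uses population sd

mean_lognormal = math.exp(mu + sigma ** 2 / 2)  # implied population mean
mean_sample = statistics.fmean(costs)           # nonparametric estimate

print(round(mean_sample, 1), round(mean_lognormal, 1))
```

The two estimates agree here because the model is correct; with real data whose tail is poorly determined, the parametric and sample means can diverge, which is the paper's central caution.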
Accounting for unsearched areas in estimating wind turbine-caused fatality
Huso, Manuela M.P.; Dalthorp, Dan
2014-01-01
With wind energy production expanding rapidly, concerns about turbine-induced bird and bat fatality have grown and the demand for accurate estimation of fatality is increasing. Estimation typically involves counting carcasses observed below turbines and adjusting counts by estimated detection probabilities. Three primary sources of imperfect detection are 1) carcasses fall into unsearched areas, 2) carcasses are removed or destroyed before sampling, and 3) carcasses present in the searched area are missed by observers. Search plots large enough to comprise 100% of turbine-induced fatality are expensive to search and may nonetheless contain areas unsearchable because of dangerous terrain or impenetrable brush. We evaluated models relating carcass density to distance from the turbine to estimate the proportion of carcasses expected to fall in searched areas and evaluated the statistical cost of restricting searches to areas near turbines where carcass density is highest and search conditions optimal. We compared 5 estimators differing in assumptions about the relationship of carcass density to distance from the turbine. We tested them on 6 different carcass dispersion scenarios at each of 3 sites under 2 different search regimes. We found that even simple distance-based carcass-density models were more effective at reducing bias than was a 5-fold expansion of the search area. Estimators incorporating fitted rather than assumed models were least biased, even under restricted searches. Accurate estimates of fatality at wind-power facilities will allow critical comparisons of rates among turbines, sites, and regions and contribute to our understanding of the potential environmental impact of this technology.
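The count-adjustment logic common to such fatality estimators can be sketched as a simple Horvitz-Thompson-style calculation. Treating the three detection components as independent is an illustrative simplification, and the numbers are invented:

```python
def estimated_fatalities(carcass_count, p_area, p_persist, p_observe):
    """Adjust the observed carcass count by the probability that a
    carcass (1) fell in the searched area, (2) persisted until the
    search, and (3) was seen by the observer. Components are assumed
    independent here, which is an illustrative simplification."""
    detection_prob = p_area * p_persist * p_observe
    return carcass_count / detection_prob

# 12 carcasses found; 60% of carcasses expected to land in the searched
# area, 80% persist to the search, observers find 70% of those present.
print(round(estimated_fatalities(12, 0.60, 0.80, 0.70), 1))   # 35.7
```

The paper's contribution concerns the first factor: replacing an assumed p_area with one derived from a fitted carcass-density-by-distance model reduces bias more than enlarging the search plot does.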
The problem of estimating recent genetic connectivity in a changing world.
Samarasin, Pasan; Shuter, Brian J; Wright, Stephen I; Rodd, F Helen
2017-02-01
Accurate understanding of population connectivity is important to conservation because dispersal can play an important role in population dynamics, microevolution, and assessments of extirpation risk and population rescue. Genetic methods are increasingly used to infer population connectivity because advances in technology have made them more advantageous (e.g., cost effective) relative to ecological methods. Given the reductions in wildlife population connectivity since the Industrial Revolution, and more recent drastic reductions from habitat loss, it is important to know the accuracy of and biases in genetic connectivity estimators when connectivity has declined recently. Using simulated data, we investigated the accuracy and bias of 2 common estimators of migration rate (movement of individuals among populations). We focused on the effects of the timing of the connectivity change and the magnitude of that change on estimates of migration obtained by a coalescent-based method (Migrate-n) and a disequilibrium-based method (BayesAss). Contrary to expectations, when historically high connectivity had declined recently: (i) both methods overestimated recent migration rates; (ii) the coalescent-based method (Migrate-n) provided better estimates of recent migration rate than the disequilibrium-based method (BayesAss); (iii) the coalescent-based method did not accurately reflect long-term genetic connectivity. Overall, our results highlight the problems with comparing coalescent and disequilibrium estimates to make inferences about the effects of recent landscape change on genetic connectivity among populations. We found that contrasting these 2 estimates to make inferences about genetic-connectivity changes over time could lead to inaccurate conclusions. © 2016 Society for Conservation Biology.
Real-time yield estimation based on deep learning
NASA Astrophysics Data System (ADS)
Rahnemoonfar, Maryam; Sheppard, Clay
2017-05-01
Crop yield estimation is an important task in product management and marketing. Accurate yield prediction helps farmers make better decisions on cultivation practices, plant disease prevention, and the size of the harvest labor force. The current practice of yield estimation, based on the manual counting of fruits, is a very time-consuming and expensive process, and it is not practical for big fields. Robotic systems, including Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs), provide an efficient, cost-effective, flexible, and scalable solution for product management and yield prediction. Recently, huge amounts of data have been gathered from agricultural fields; however, efficient analysis of those data is still a challenging task. Computer vision approaches currently face different challenges in the automatic counting of fruits or flowers, including occlusion caused by leaves, branches or other fruits, variance in natural illumination, and scale. In this paper, a novel deep convolutional network algorithm was developed to facilitate accurate yield prediction and automatic counting of fruits and vegetables in images. Our method is robust to occlusion, shadow, uneven illumination and scale. Experimental results, in comparison to the state of the art, show the effectiveness of our algorithm.
NASA Astrophysics Data System (ADS)
Omer, Galal; Mutanga, Onisimo; Abdel-Rahman, Elfatih M.; Peerbhay, Kabir; Adam, Elhadi
2017-09-01
Forest nitrogen (N) and carbon (C) are among the most important biochemical components of tree organic matter, and the estimation of their concentrations can help to monitor the nutrient uptake processes and health of forest trees. Traditionally, these tree biochemical components are estimated using costly, labour intensive, time-consuming and subjective analytical protocols. The use of very high spatial resolution multispectral data and advanced machine learning regression algorithms such as support vector machines (SVM) and artificial neural networks (ANN) provide an opportunity to accurately estimate foliar N and C concentrations over intact and fragmented forest ecosystems. In the present study, the utility of spectral vegetation indices calculated from WorldView-2 (WV-2) imagery for mapping leaf N and C concentrations of fragmented and intact indigenous forest ecosystems was explored. We collected leaf samples from six tree species in the fragmented as well as intact Dukuduku indigenous forest ecosystems. Leaf samples (n = 85 for each of the fragmented and intact forests) were subjected to chemical analysis for estimating the concentrations of N and C. We used 70% of samples for training our models and 30% for validating the accuracy of our predictive empirical models. The study showed that the N concentration was significantly higher (p = 0.03) in the intact forests than in the fragmented forest. There was no significant difference (p = 0.55) in the C concentration between the intact and fragmented forest strata. The results further showed that the foliar N and C concentrations could be more accurately estimated using the fragmented stratum data compared with the intact stratum data. 
Further, SVM achieved relatively more accurate N (maximum R2val = 0.78, minimum RMSEval = 1.07% of the mean) and C (maximum R2val = 0.67, minimum RMSEval = 1.64% of the mean) estimates compared with ANN (maximum R2val = 0.70 for N and 0.51 for C; minimum RMSEval = 5.40% of the mean for N and 2.21% of the mean for C). Overall, SVM regressions achieved more accurate models for estimating forest foliar N and C concentrations in the fragmented and intact indigenous forests compared to the ANN regression method. It is concluded that the successful application of the WV-2 data integrated with SVM can provide an accurate framework for mapping the concentrations of biochemical elements in two indigenous forest ecosystems.
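The 70/30 train-validation workflow with R2 and RMSE expressed as a percentage of the mean can be sketched as below. Ordinary least squares on a single synthetic index stands in for the SVM regressor and the WorldView-2 vegetation indices, purely to keep the illustration dependency-free; the relationship and noise level are invented:

```python
import random
import statistics

rng = random.Random(42)

# Synthetic samples: a vegetation-index value loosely related to leaf N (%)
data = [(x, 1.0 + 2.5 * x + rng.gauss(0, 0.1))
        for x in (rng.uniform(0.2, 0.8) for _ in range(85))]

split = int(0.7 * len(data))             # 70% training, 30% validation
train, val = data[:split], data[split:]

# Ordinary least squares y = a + b*x on the training split
xs = [x for x, _ in train]
mx = statistics.fmean(xs)
my = statistics.fmean([y for _, y in train])
b = sum((x - mx) * (y - my) for x, y in train) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# Validation metrics: R^2 and RMSE as a percentage of the mean
y_true = [y for _, y in val]
y_pred = [a + b * x for x, _ in val]
mean_val = statistics.fmean(y_true)
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
ss_tot = sum((t - mean_val) ** 2 for t in y_true)
r2 = 1 - ss_res / ss_tot
rmse_pct = (ss_res / len(val)) ** 0.5 / mean_val * 100
print(round(r2, 2), round(rmse_pct, 1))
```

Reporting RMSE as a percentage of the mean, as the paper does, makes errors comparable across variables (N and C) with very different absolute concentrations.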
Evaluation of Amino Acid and Energy Utilization in Feedstuff for Swine and Poultry Diets
Kong, C.; Adeola, O.
2014-01-01
An accurate feed formulation is essential for optimizing feed efficiency and minimizing feed cost in swine and poultry production. Because energy and amino acids (AA) account for the major cost of swine and poultry diets, a precise determination of the availability of energy and AA in feedstuffs is essential for accurate diet formulations. Therefore, the methodology for determining the availability of energy and AA should be carefully selected. The total collection and index methods are the 2 major procedures for estimating the availability of energy and AA in feedstuffs for swine and poultry diets. The total collection method is based on the laborious production of quantitative records of feed intake and output, whereas the index method avoids this laborious work but relies heavily on accurate chemical analysis of the index compound. The direct method, in which the test feedstuff in a diet is the sole source of the component of interest, is widely used to determine the digestibility of nutritional components in feedstuffs. In some cases, however, it may be necessary to formulate a basal diet and a test diet in which a portion of the basal diet is replaced by the feed ingredient to be tested, because of poor palatability or low levels of the component of interest in the test ingredient. For the digestibility of AA, due to the confounding effect of hindgut microorganisms on the AA composition of protein in feces, ileal digestibility rather than fecal digestibility has been preferred as the reliable method for estimating AA digestibility. Depending on the contribution of ileal endogenous AA losses in the ileal digestibility calculation, ileal digestibility estimates can be expressed as apparent, standardized, and true ileal digestibility, and are usually determined using the ileal cannulation method for pigs and the slaughter method for poultry.
Among these digestibility estimates, the standardized ileal AA digestibility that corrects apparent ileal digestibility for basal endogenous AA losses, provides appropriate information for the formulation of swine and poultry diets. The total quantity of energy in feedstuffs can be partitioned into different components including gross energy (GE), digestible energy (DE), metabolizable energy (ME), and net energy based on the consideration of sequential energy losses during digestion and metabolism from GE in feeds. For swine, the total collection method is suggested for determining DE and ME in feedstuffs whereas for poultry the classical ME assay and the precision-fed method are applicable. Further investigation for the utilization of ME may be conducted by measuring either heat production or energy retention using indirect calorimetry or comparative slaughter method, respectively. This review provides information on the methodology used to determine accurate estimates of AA and energy availability for formulating swine and poultry diets. PMID:25050031
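The sequential energy partition described above (GE → DE → ME → NE) amounts to subtracting each loss in turn; the values below are illustrative, not tabulated feed data:

```python
def energy_partition(gross_energy, fecal_loss, urinary_gaseous_loss,
                     heat_increment):
    """Partition feed energy (MJ/kg) by subtracting sequential losses:
    GE - fecal = DE; DE - urinary/gaseous = ME; ME - heat increment = NE."""
    de = gross_energy - fecal_loss
    me = de - urinary_gaseous_loss
    ne = me - heat_increment
    return de, me, ne

# Illustrative values for a feed with a gross energy of 18.0 MJ/kg
de, me, ne = energy_partition(18.0, 3.6, 0.7, 3.2)
print(round(de, 1), round(me, 1), round(ne, 1))   # 14.4 13.7 10.5
```

Because the heat increment differs between fibrous and starchy feedstuffs, two ingredients with equal ME can have different NE, which is the rationale for the net energy system discussed in the review above.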
Evaluation of amino Acid and energy utilization in feedstuff for Swine and poultry diets.
Kong, C; Adeola, O
2014-07-01
An accurate feed formulation is essential for optimizing feed efficiency and minimizing feed cost for swine and poultry production. Because energy and amino acid (AA) account for the major cost of swine and poultry diets, a precise determination of the availability of energy and AA in feedstuffs is essential for accurate diet formulations. Therefore, the methodology for determining the availability of energy and AA should be carefully selected. The total collection and index methods are 2 major procedures for estimating the availability of energy and AA in feedstuffs for swine and poultry diets. The total collection method is based on the laborious production of quantitative records of feed intake and output, whereas the index method can avoid the laborious work, but greatly relies on accurate chemical analysis of index compound. The direct method, in which the test feedstuff in a diet is the sole source of the component of interest, is widely used to determine the digestibility of nutritional components in feedstuffs. In some cases, however, it may be necessary to formulate a basal diet and a test diet in which a portion of the basal diet is replaced by the feed ingredient to be tested because of poor palatability and low level of the interested component in the test ingredients. For the digestibility of AA, due to the confounding effect on AA composition of protein in feces by microorganisms in the hind gut, ileal digestibility rather than fecal digestibility has been preferred as the reliable method for estimating AA digestibility. Depending on the contribution of ileal endogenous AA losses in the ileal digestibility calculation, ileal digestibility estimates can be expressed as apparent, standardized, and true ileal digestibility, and are usually determined using the ileal cannulation method for pigs and the slaughter method for poultry. 
Among these digestibility estimates, the standardized ileal AA digestibility that corrects apparent ileal digestibility for basal endogenous AA losses, provides appropriate information for the formulation of swine and poultry diets. The total quantity of energy in feedstuffs can be partitioned into different components including gross energy (GE), digestible energy (DE), metabolizable energy (ME), and net energy based on the consideration of sequential energy losses during digestion and metabolism from GE in feeds. For swine, the total collection method is suggested for determining DE and ME in feedstuffs whereas for poultry the classical ME assay and the precision-fed method are applicable. Further investigation for the utilization of ME may be conducted by measuring either heat production or energy retention using indirect calorimetry or comparative slaughter method, respectively. This review provides information on the methodology used to determine accurate estimates of AA and energy availability for formulating swine and poultry diets.
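The endogenous-loss correction and the sequential energy partition described above reduce to simple arithmetic. A minimal sketch in Python; the function names, units, and example values are illustrative assumptions, not the review's own code:

```python
def standardized_ileal_digestibility(aid_pct, basal_endog_loss, aa_intake):
    """Correct apparent ileal digestibility (AID, %) for basal endogenous
    AA losses; loss and intake must share units (e.g. g/kg DM intake)."""
    return aid_pct + 100.0 * basal_endog_loss / aa_intake

def energy_partition(ge, fecal_loss, urinary_gas_loss, heat_increment):
    """Sequential energy losses partition gross energy: GE -> DE -> ME -> NE."""
    de = ge - fecal_loss              # digestible energy
    me = de - urinary_gas_loss        # metabolizable energy
    ne = me - heat_increment          # net energy
    return de, me, ne
```

For example, an AA with 80% apparent ileal digestibility, 0.4 g/kg basal endogenous loss, and 10 g/kg intake has a standardized ileal digestibility of 84%.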
Cole, Stephen R.; Hudgens, Michael G.; Tien, Phyllis C.; Anastos, Kathryn; Kingsley, Lawrence; Chmiel, Joan S.; Jacobson, Lisa P.
2012-01-01
To estimate the association of antiretroviral therapy initiation with incident acquired immunodeficiency syndrome (AIDS) or death while accounting for time-varying confounding in a cost-efficient manner, the authors combined a case-cohort study design with inverse probability-weighted estimation of a marginal structural Cox proportional hazards model. A total of 950 adults who were positive for human immunodeficiency virus type 1 were followed in 2 US cohort studies between 1995 and 2007. In the full cohort, 211 AIDS cases or deaths occurred during 4,456 person-years. In an illustrative 20% random subcohort of 190 participants, 41 AIDS cases or deaths occurred during 861 person-years. Accounting for measured confounders and determinants of dropout by inverse probability weighting, the full cohort hazard ratio was 0.41 (95% confidence interval: 0.26, 0.65) and the case-cohort hazard ratio was 0.47 (95% confidence interval: 0.26, 0.83). Standard multivariable-adjusted hazard ratios were closer to the null, regardless of study design. The precision lost with the case-cohort design was modest given the cost savings. Results from Monte Carlo simulations demonstrated that the proposed approach yields approximately unbiased estimates of the hazard ratio with appropriate confidence interval coverage. Marginal structural model analysis of case-cohort study designs provides a cost-efficient design coupled with an accurate analytic method for research settings in which there is time-varying confounding. PMID:22302074
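The inverse probability weighting used above can be sketched as a generic stabilized inverse-probability-of-treatment weight; this is not the authors' code, and in practice the marginal and conditional treatment probabilities would come from fitted (time-updated) treatment models:

```python
import numpy as np

def stabilized_ipw(treated, p_marginal, p_conditional):
    """Stabilized inverse-probability-of-treatment weights: the marginal
    treatment probability goes in the numerator, the probability
    conditional on measured confounders in the denominator."""
    treated = np.asarray(treated, dtype=bool)
    p_conditional = np.asarray(p_conditional, dtype=float)
    num = np.where(treated, p_marginal, 1.0 - p_marginal)
    den = np.where(treated, p_conditional, 1.0 - p_conditional)
    return num / den
```

Weights near 1 indicate little confounding by the measured covariates; the weighted pseudo-population feeds a marginal structural Cox model.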
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agrawal, P.D.; Butt, N.M.; Sarma, K.R.
This report covers a preliminary conceptual design and economic evaluation of a commercial-scale plant capable of converting high-sulfur bituminous caking coal to high-Btu pipeline-quality SNG. The plant, which has a rated capacity of 250 billion Btu per day SNG, is based on Cities Service/Rockwell hydrogasification technology. Two cases of plant design were examined to produce cost estimates accurate to ±25% in 1979 dollars. The base case, designed for moderate production of liquids (5.8% conversion of carbon to liquid product), has a cost of SNG of $4.43/MMBtu using the utility financing method (UFM) and $6.42/MMBtu using the discounted cash flow method (DCFM) of financing. The alternate case, zero liquids production, has gas costs of $5.00 (UFM) and $6.96 (DCFM). Further tests by Rockwell have indicated that 11.4% carbon conversion to liquid products (99% benzene) is possible. If the plant is scaled up to produce the same amount of SNG with this increased yield of liquid, and if the value of the benzene produced is estimated to be $0.90 per gallon, the costs of gas for this case are $4.38/MMBtu (UFM) and $6.48/MMBtu (DCFM). If the value of benzene is taken as $2.00 per gallon, these costs become $3.14/MMBtu (UFM) and $5.23/MMBtu (DCFM). The economic assumptions involved in these calculations are detailed.
Horizon Based Orientation Estimation for Planetary Surface Navigation
NASA Technical Reports Server (NTRS)
Bouyssounouse, X.; Nefian, A. V.; Deans, M.; Thomas, A.; Edwards, L.; Fong, T.
2016-01-01
Planetary rovers navigate in extreme environments for which a Global Positioning System (GPS) is unavailable, maps are restricted to the relatively low resolution provided by orbital imagery, and compass information is often lacking due to weak or nonexistent magnetic fields. However, accurate rover localization is particularly important for achieving mission success: reaching science targets, avoiding negative obstacles visible only in orbital maps, and maintaining good communication links with the ground. This paper describes a horizon-based solution for precise rover orientation estimation. The horizon detected in imagery from the onboard navigation cameras is matched with the horizon rendered from the existing terrain model. The set of rotation parameters (roll, pitch, yaw) that minimizes the cost function between the two horizon curves corresponds to the estimated rover pose.
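The orientation search just described (minimize a cost between detected and rendered horizon curves over roll, pitch, yaw) can be sketched with a toy renderer standing in for the terrain model. Everything here except the cost-minimization structure is an illustrative assumption:

```python
import numpy as np
from scipy.optimize import minimize

def horizon_cost(rpy, az, detected_elev, render):
    # Sum of squared differences between the detected horizon curve and
    # the horizon rendered under a candidate (roll, pitch, yaw).
    return np.sum((render(rpy, az) - detected_elev) ** 2)

def toy_render(rpy, az):
    # Stand-in for the terrain-model renderer: yaw shifts the curve in
    # azimuth, pitch offsets elevation, roll tilts it across azimuth.
    roll, pitch, yaw = rpy
    return np.sin(az - yaw) + pitch + roll * np.cos(az)

az = np.linspace(0.0, 2.0 * np.pi, 180)   # azimuth samples (rad)
truth = (0.02, -0.05, 0.3)                # hidden roll, pitch, yaw
detected = toy_render(truth, az)          # simulated detected horizon
fit = minimize(horizon_cost, x0=np.zeros(3), args=(az, detected, toy_render))
```

With a smooth cost surface, a generic quasi-Newton minimizer recovers the pose; the real system substitutes camera-detected and terrain-rendered horizon curves.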
Estimating the age-specific duration of herpes zoster vaccine protection: a matter of model choice?
Bilcke, Joke; Ogunjimi, Benson; Hulstaert, Frank; Van Damme, Pierre; Hens, Niel; Beutels, Philippe
2012-04-05
The estimation of herpes zoster (HZ) vaccine efficacy by time since vaccination and age at vaccination is crucial to assess the effectiveness and cost-effectiveness of HZ vaccination. Published estimates for the duration of protection from the vaccine diverge substantially, although based on data from the same trial for a follow-up period of 5 years. Different models were used to obtain these estimates, but it is unclear which of these models is most appropriate (if any). Only one study estimated vaccine efficacy by age at vaccination and time since vaccination combined. Recently, data became available from the same trial for a follow-up period of 7 years. We aim to elaborate on estimating HZ vaccine efficacy (1) by estimating it as a function of time since vaccination and age at vaccination, (2) by comparing the fits of a range of models, and (3) by fitting these models on data for follow-up periods of 5 and 7 years. Although the models' fits to the data are very comparable, they differ substantially in their estimates of how vaccine efficacy changes as a function of time since vaccination and age at vaccination. An accurate estimation of HZ vaccine efficacy by time since vaccination and age at vaccination is hampered by the lack of insight into the biological processes underlying HZ vaccine protection, and by the fact that such data are currently not available in sufficient detail. Uncertainty about the choice of model to estimate this important parameter should be acknowledged in cost-effectiveness analyses. Copyright © 2011 Elsevier Ltd. All rights reserved.
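The paper's core point, that waning models agreeing within the follow-up period can diverge badly on extrapolation, is easy to illustrate with two hypothetical waning forms; the parameter values below are chosen purely for illustration:

```python
import numpy as np

def ve_exponential(t, ve0, rate):
    # Efficacy waning exponentially with time since vaccination (years).
    return ve0 * np.exp(-rate * t)

def ve_linear(t, ve0, slope):
    # Linear waning, truncated at zero.
    return np.maximum(ve0 - slope * t, 0.0)
```

With ve0 = 0.6, an exponential half-life of 5 years and a linear slope of 0.06/year give identical efficacy at year 5 (0.30), yet at year 15 the linear model predicts zero protection while the exponential model still predicts about 7.5%.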
NASA Astrophysics Data System (ADS)
Farhadi, L.; Abdolghafoorian, A.
2015-12-01
The land surface is a key component of the climate system. It controls the partitioning of available energy at the surface between sensible and latent heat, and the partitioning of available water between evaporation and runoff. The water and energy cycles are intrinsically coupled through evaporation, which represents a heat exchange as latent heat flux. Accurate estimation of fluxes of heat and moisture is of significant importance in many fields such as hydrology, climatology, and meteorology. In this study we develop and apply a Bayesian framework for estimating the key unknown parameters of the terrestrial water and energy balance equations (i.e., moisture and heat diffusion) and their uncertainty in land surface models. These equations are coupled through the flux of evaporation. The estimation system is based on the adjoint method for solving a least-squares optimization problem. The cost function consists of aggregated errors on the states (i.e., moisture and temperature) with respect to observations, and on the parameter estimates with respect to prior values, over the entire assimilation period. This cost function is minimized with respect to the parameters to identify models of sensible heat, latent heat/evaporation, and drainage and runoff. The inverse of the Hessian of the cost function approximates the posterior uncertainty of the parameter estimates. Uncertainty of the estimated fluxes is obtained by propagating parameter uncertainty through linear and nonlinear functions of the key parameters using the First Order Second Moment (FOSM) method. Uncertainty analysis is used in this method to guide the formulation of a well-posed estimation problem. Accuracy of the method is assessed at point scale using surface energy and water fluxes generated by the Simultaneous Heat and Water (SHAW) model at the selected AmeriFlux stations. This method can be applied to diverse climates and land surface conditions with different spatial scales, using remotely sensed measurements of surface moisture and temperature states
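The FOSM propagation mentioned above amounts to a Jacobian sandwich: the output variance is approximated by J Σ Jᵀ, with J the Jacobian of the flux with respect to the parameters and Σ the parameter covariance. A generic sketch, not the authors' implementation:

```python
import numpy as np

def fosm_variance(jacobian, param_cov):
    """First Order Second Moment: approximate the covariance of a model
    output f(theta) by J Sigma J^T, with J the Jacobian of f evaluated
    at the parameter estimate and Sigma the parameter covariance."""
    J = np.atleast_2d(np.asarray(jacobian, float))
    return J @ np.asarray(param_cov, float) @ J.T
```

For a linear flux f = 2a + b with independent parameter variances 0.04 and 0.01, the propagated variance is 4(0.04) + 0.01 = 0.17.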
A medical cost estimation with fuzzy neural network of acute hepatitis patients in emergency room.
Kuo, R J; Cheng, W C; Lien, W C; Yang, T J
2015-10-01
Taiwan is an area where chronic hepatitis is endemic. Liver cancer is so common that it has ranked first among cancer mortality rates since the early 1980s in Taiwan. In addition, liver cirrhosis and chronic liver diseases are the sixth or seventh leading causes of death. Therefore, as shown by the active research on hepatitis, it is not only a health threat but also a huge medical cost for the government. The estimated total number of hepatitis B carriers in the general population aged more than 20 years is 3,067,307. A case record review was conducted of all patients with a diagnosis of acute hepatitis admitted to the Emergency Department (ED) of a well-known teaching hospital in Taipei. The cost of medical resource utilization is defined as the total medical fee. In this study, a fuzzy neural network (FNN) is employed to develop the cost forecasting model. A total of 110 patients met the inclusion criteria. The computational results indicate that the FNN model provides more accurate forecasts than support vector regression (SVR) or an artificial neural network (ANN). In addition, unlike SVR and ANN, the FNN can also provide fuzzy IF-THEN rules for interpretation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Estimating the Cost-Effectiveness of Implementation: Is Sufficient Evidence Available?
Whyte, Sophie; Dixon, Simon; Faria, Rita; Walker, Simon; Palmer, Stephen; Sculpher, Mark; Radford, Stefanie
2016-01-01
Timely implementation of recommended interventions can provide health benefits to patients and cost savings to the health service provider. Effective approaches to increase the implementation of guidance are needed. Since investment in activities that improve implementation competes for funding with other health-generating interventions, it should be assessed in terms of its costs and benefits. In 2010, the National Institute for Health and Care Excellence released a clinical guideline recommending natriuretic peptide (NP) testing in patients with suspected heart failure. However, its implementation in practice was variable across the National Health Service in England. This study demonstrates the use of multi-period analysis together with diffusion curves to estimate the value of investing in implementation activities to increase uptake of NP testing. Diffusion curves were estimated based on historic data to produce predictions of future utilization. The value of an implementation activity (given its expected costs and effectiveness) was estimated. Both a static-population and a multi-period analysis were undertaken. The value of implementation interventions encouraging the utilization of NP testing is shown to decrease over time as natural diffusion occurs. Sensitivity analyses indicated that the value of the implementation activity depends on its efficacy and on the population size. Value of implementation can help inform policy decisions about how to invest in implementation activities even in situations in which data are sparse. Multi-period analysis is essential to accurately quantify the time profile of the value of implementation given the natural diffusion of the intervention and the incidence of the disease. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
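The multi-period logic above (an implementation activity buys the gap between lifted and natural uptake, a gap that shrinks as diffusion proceeds) can be sketched as follows. The logistic diffusion form and all parameter names are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

def logistic_uptake(t, ceiling, rate, midpoint):
    # Diffusion curve: predicted utilization at time t (years).
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

def value_of_implementation(years, pop_per_year, nb_per_patient,
                            boost, ceiling=1.0, rate=1.0, midpoint=3.0):
    """Multi-period value of an activity that lifts uptake by `boost`
    (capped at the ceiling) over natural diffusion. Illustrative only."""
    t = np.arange(years, dtype=float)
    base = logistic_uptake(t, ceiling, rate, midpoint)
    lifted = np.minimum(base + boost, ceiling)
    return float(np.sum((lifted - base) * pop_per_year * nb_per_patient))
```

Faster natural diffusion leaves less value for the implementation activity to capture, reproducing the paper's observation that the value decays over time.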
Yilmaz, Banu; Aras, Egemen; Nacar, Sinan; Kankal, Murat
2018-05-23
The functional life of a dam is often determined by the rate of sediment delivery to its reservoir. Therefore, an accurate estimate of the sediment load in rivers with dams is essential for designing and predicting a dam's useful lifespan. The most credible method is direct measurement of sediment input, but this can be very costly and cannot always be implemented at all gauging stations. In this study, we tested various regression models to estimate suspended sediment load (SSL) at two gauging stations on the Çoruh River in Turkey, including the artificial bee colony (ABC) algorithm, the teaching-learning-based optimization (TLBO) algorithm, and multivariate adaptive regression splines (MARS). These models were also compared with one another and with classical regression analyses (CRA). Streamflow values and previously collected SSL data were used as model inputs, with predicted SSL data as output. Two different training and testing dataset configurations were used to reinforce model accuracy. For the MARS method, the root mean square error value was found to range between 35% and 39% for the two test gauging stations, which was lower than the errors for the other models. Error values were even lower (7% to 15%) using the second dataset configuration. Our results indicate that simultaneous measurements of streamflow with SSL provide the most effective parameter for obtaining accurate predictive models and that MARS is the most accurate model for predicting SSL. Copyright © 2017 Elsevier B.V. All rights reserved.
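One plausible reading of the percentage error figures above is RMSE expressed relative to the mean observed SSL; the sketch below encodes that measure, but the exact definition used by the authors is an assumption here:

```python
import numpy as np

def relative_rmse(observed, predicted):
    """Root mean square error as a percentage of the mean observed value,
    one common way to report SSL model error across stations."""
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return 100.0 * rmse / np.mean(observed)
```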
Cost analysis of PET and comprehensive lifestyle modification for the reversal of atherosclerosis.
Delgado, Rigoberto I; Swint, J Michael; Lairson, David R; Johnson, Nils P; Gould, K Lance; Sdringola, Stefano
2014-01-01
We present a preliminary cost analysis of a combination intervention using PET and comprehensive lifestyle modification to reverse atherosclerosis. With a sensitivity of 92%-95% and specificity of 85%-95%, PET is an essential tool for high-precision diagnosis of coronary artery disease, accurately guiding optimal treatment for both symptomatic and asymptomatic patients. PET imaging provides a powerful visual and educational aid for helping patients identify and adopt appropriate treatments. However, little is known about the operational cost of using the technology for this purpose. The analysis was done in the context of the Century Health Study for Cardiovascular Medicine (Century Trial), a 1,300-patient, randomized study combining PET imaging with lifestyle changes. Our methodology included a microcosting and time study focusing on estimating average direct and indirect costs. The total cost of the Century Trial in present-value terms is $9.2 million, which is equal to $7,058 per patient. Sensitivity analysis indicates that the present value of total costs is likely to range between $8.8 and $9.7 million, which is equivalent to $6,655-$7,606 per patient. The clinical relevance of the Century Trial is significant since it is, to our knowledge, the first randomized controlled trial to combine high-precision imaging with lifestyle strategies. The Century Trial is in its second year of a 5-y protocol, and we present preliminary findings. The results of this cost study, however, provide policy makers with an early estimate of the costs of implementing, at large scale, a combined intervention such as the Century Trial. Further, we believe that imaging-guided lifestyle management may have considerable potential for improving outcomes and reducing health-care costs by eliminating unnecessary invasive procedures.
A Highly Reliable and Cost-Efficient Multi-Sensor System for Land Vehicle Positioning.
Li, Xu; Xu, Qimin; Li, Bin; Song, Xianghui
2016-05-25
In this paper, we propose a novel positioning solution for land vehicles which is highly reliable and cost-efficient. The proposed positioning system fuses information from the MEMS-based reduced inertial sensor system (RISS) which consists of one vertical gyroscope and two horizontal accelerometers, low-cost GPS, and supplementary sensors and sources. First, pitch and roll angle are accurately estimated based on a vehicle kinematic model. Meanwhile, the negative effect of the uncertain nonlinear drift of MEMS inertial sensors is eliminated by an H∞ filter. Further, a distributed-dual-H∞ filtering (DDHF) mechanism is adopted to address the uncertain nonlinear drift of the MEMS-RISS and make full use of the supplementary sensors and sources. The DDHF is composed of a main H∞ filter (MHF) and an auxiliary H∞ filter (AHF). Finally, a generalized regression neural network (GRNN) module with good approximation capability is specially designed for the MEMS-RISS. A hybrid methodology which combines the GRNN module and the AHF is utilized to compensate for RISS position errors during GPS outages. To verify the effectiveness of the proposed solution, road-test experiments with various scenarios were performed. The experimental results illustrate that the proposed system can achieve accurate and reliable positioning for land vehicles.
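The GRNN used above to predict RISS position error is essentially a Gaussian-kernel weighted average of training targets (the Nadaraya-Watson form); a minimal generic sketch, not the authors' error model:

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma):
    """Generalized regression neural network prediction: a Gaussian-kernel
    weighted average of the training targets, with bandwidth sigma."""
    x_train = np.atleast_2d(np.asarray(x_train, float))
    d2 = np.sum((x_train - np.asarray(x_query, float)) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(np.sum(w * np.asarray(y_train, float)) / np.sum(w))
```

Queries near a training sample reproduce its target; queries between samples interpolate smoothly, which is why the GRNN approximates the nonlinear RISS error surface well during GPS outages.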
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamada, Yuki; Grippo, Mark A.
2015-01-01
A monitoring plan that incorporates regional datasets and integrates cost-effective data collection methods is necessary to sustain the long-term environmental monitoring of utility-scale solar energy development in expansive, environmentally sensitive desert environments. Using very high spatial resolution (VHSR; 15 cm) multispectral imagery collected in November 2012 and January 2014, an image processing routine was developed to characterize ephemeral streams, vegetation, and land surface in the southwestern United States where increased utility-scale solar development is anticipated. In addition to knowledge about desert landscapes, the methodology integrates existing spectral indices and transformations (e.g., the visible atmospherically resistant index and principal components); a newly developed index, the erosion resistance index (ERI); and digital terrain and surface models, all of which were derived from a common VHSR image. The methodology identified fine-scale ephemeral streams with greater detail than the National Hydrography Dataset and accurately estimated vegetation distribution and fractional cover of various surface types. The ERI classified surface types that have a range of erosive potentials. The remote-sensing methodology could ultimately reduce uncertainty and monitoring costs for all stakeholders by providing a cost-effective monitoring approach that accurately characterizes the land resources at potential development sites.
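The visible atmospherically resistant index mentioned above has a standard RGB-band form, (G − R)/(G + R − B); a minimal sketch (the reflectance scaling of the inputs is an assumption):

```python
import numpy as np

def vari(red, green, blue):
    """Visible Atmospherically Resistant Index from RGB reflectance
    bands: (G - R) / (G + R - B). Values rise with green vegetation."""
    red, green, blue = (np.asarray(b, float) for b in (red, green, blue))
    return (green - red) / (green + red - blue)
```

Because it uses only visible bands, VARI suits VHSR aerial imagery that lacks a near-infrared channel.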
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwabe, P.; Lensink, S.; Hand, M.
2011-03-01
The lifetime cost of wind energy is comprised of a number of components including the investment cost, operation and maintenance costs, financing costs, and annual energy production. Accurate representation of these cost streams is critical in estimating a wind plant's cost of energy. Some of these cost streams will vary over the life of a given project. From the outset of project development, investors in wind energy have relatively certain knowledge of the plant's lifetime cost of wind energy. This is because a wind energy project's installed costs and mean wind speed are known early on, and wind generation generally has low variable operation and maintenance costs, zero fuel cost, and no carbon emissions cost. Despite these inherent characteristics, there are wide variations in the cost of wind energy internationally, which is the focus of this report. Using a multinational case-study approach, this work seeks to understand the sources of wind energy cost differences among seven countries under International Energy Agency (IEA) Wind Task 26 - Cost of Wind Energy. The participating countries in this study include Denmark, Germany, the Netherlands, Spain, Sweden, Switzerland, and the United States. Due to data availability, onshore wind energy is the primary focus of this study, though a small sample of reported offshore cost data is also included.
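The cost streams listed above combine into a levelized cost of energy. A simplified single-number sketch (the report's actual methodology is richer; the capital-recovery-factor annualization shown is a standard textbook form, assumed here rather than taken from the report):

```python
def lcoe(capex, fixed_om, aep_mwh, discount_rate, lifetime_years):
    """Levelized cost of energy: annualized investment plus annual O&M,
    divided by annual energy production (MWh)."""
    r, n = discount_rate, lifetime_years
    # Capital recovery factor spreads the upfront investment over n years.
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)
    return (capex * crf + fixed_om) / aep_mwh
```

Cross-country LCOE differences then decompose into differences in installed cost, O&M, financing rate, and energy capture.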
Process Cost Modeling for Multi-Disciplinary Design Optimization
NASA Technical Reports Server (NTRS)
Bao, Han P.; Freeman, William (Technical Monitor)
2002-01-01
For early design concepts, the conventional approach to cost is normally some kind of parametric weight-based cost model. There is now ample evidence that this approach can be misleading and inaccurate. By the nature of its development, a parametric cost model requires historical data and is valid only if the new design is analogous to those for which the model was derived. Advanced aerospace vehicles have no historical production data and are nowhere near the vehicles of the past. Using an existing weight-based cost model would only lead to errors and distortions of the true production cost. This report outlines the development of a process-based cost model in which the physical elements of the vehicle are costed according to a first-order dynamics model. This theoretical cost model, first advocated by early work at MIT, has been expanded to cover the basic structures of an advanced aerospace vehicle. Elemental costs based on the geometry of the design can be summed up to provide an overall estimation of the total production cost for a design configuration. This capability to directly link any design configuration to realistic cost estimation is a key requirement for high payoff MDO problems. Another important consideration in this report is the handling of part or product complexity. Here the concept of cost modulus is introduced to take into account variability due to different materials, sizes, shapes, precision of fabrication, and equipment requirements. The most important implication of the development of the proposed process-based cost model is that different design configurations can now be quickly related to their cost estimates in a seamless calculation process easily implemented on any spreadsheet tool. In successive sections, the report addresses the issues of cost modeling as follows. First, an introduction is presented to provide the background for the research work. 
Next, a quick review of cost estimation techniques is made with the intention to highlight their inappropriateness for what is really needed at the conceptual phase of the design process. The First-Order Process Velocity Cost Model (FOPV) is discussed at length in the next section. This is followed by an application of the FOPV cost model to a generic wing. For designs that have no precedence as far as acquisition costs are concerned, cost data derived from the FOPV cost model may not be accurate enough because of new requirements for shape complexity, material, equipment and precision/tolerance. The concept of Cost Modulus is introduced at this point to compensate for these new burdens on the basic processes. This is treated in section 5. The cost of a design must be conveniently linked to its CAD representation. The interfacing of CAD models and spreadsheets containing the cost equations is the subject of the next section, section 6. The last section of the report is a summary of the progress made so far, and the anticipated research work to be achieved in the future.
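Under the first-order process velocity idea, a per-element cost follows from a processing time governed by first-order dynamics, scaled by a cost modulus for material, shape, and precision. The sketch below uses the large-time approximation t ≈ m/v₀ + τ for a process whose speed approaches steady state v₀ with time constant τ; this reading, and all names, are my assumptions rather than the report's stated equations:

```python
def elemental_cost(mass_kg, v0_kg_per_hr, tau_hr, rate_per_hour, modulus=1.0):
    """Illustrative first-order process velocity cost: processing time is
    approximated as mass / steady-state rate plus the lag time constant,
    then priced at a shop rate and scaled by a cost modulus."""
    hours = mass_kg / v0_kg_per_hr + tau_hr
    return modulus * rate_per_hour * hours

def total_cost(elements):
    # Sum elemental costs over a design configuration's parts.
    return sum(elemental_cost(*e) for e in elements)
```

Summing elemental costs over the geometry of a configuration gives the spreadsheet-ready total the report describes.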
Clinical trials finance and operations.
O'Brien, Jennifer A
2007-01-01
The National Coverage Decision of 2000 was designed to enhance the participation in clinical trials for both patients and physicians by mandating the governmental coverage for services in a clinical trial that are considered "routine" regardless of the trial. Participation in clinical trials can be a practice builder as well as a contribution to the betterment of medical science. Without proper coverage analysis, study budgeting, accurate time estimates, and effective negotiation prior to signing the contract, participation in clinical trials can cost a practice rather than benefit it.
2004-03-01
Breusch-Pagan test for constant variance of the residuals. Using Microsoft Excel® we calculate a p-value of 0.841237. This high p-value, which is above... our alpha of 0.05, indicates that our residuals indeed pass the Breusch-Pagan test for constant variance. In addition to the assumption tests, we... Wilk Test for Normality – Support (Reduced) Model (OLS). Finally, we perform a Breusch-Pagan test for constant variance of the residuals. Using
The Accuracy of Pedometers in Measuring Walking Steps on a Treadmill in College Students.
Husted, Hannah M; Llewellyn, Tamra L
2017-01-01
Pedometers are a popular way for people to track whether they have reached the recommended 10,000 daily steps. Therefore, the purpose of this study was to determine the accuracy of four brands of pedometers at measuring steps, and to determine whether a relationship exists between pedometer cost and accuracy. The hypothesis was that the more expensive brands of pedometers (the Fitbit Charge™ and Omron HJ-303™) would yield more accurate step counts than the less expensive brands (the SmartHealth - Walking FIT™ and Sportline™). While wearing all pedometers at once, one male and eleven female college students (mean ± SD; age = 20.8 ± 0.94 years) walked 400 meters on a treadmill for 5 minutes at 3.5 miles per hour. The pedometer step counts were recorded at the end. Video analysis of the participants' feet was later completed to count the number of steps actually taken (actual steps). The Sportline™ brand (-3.83 ± 22.05) was the only pedometer whose count did not differ significantly from the actual steps. The other three brands significantly underestimated steps (Fitbit™ 55.00 ± 42.58, SmartHealth™ 43.50 ± 49.71, and Omron™ 28.58 ± 33.86), with the Fitbit being the least accurate. These results suggest an inverse relationship between cost and accuracy for the four specific brands tested, and that waist pedometers are more accurate than wrist pedometers. The results concerning the Fitbit are striking considering its high cost and popularity among consumers today. Further research should be conducted to improve the accuracy of pedometers.
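The reported mean ± SD values can be reproduced from per-participant differences. The sketch below assumes error is defined as actual (video-counted) minus pedometer steps, so positive values mean undercounting; that sign convention is an inference from the reported figures, not stated in the abstract:

```python
import numpy as np

def step_error_stats(counted, actual):
    """Per-participant step-count error (actual minus pedometer count;
    positive = undercounting), summarized as (mean, sample SD)."""
    err = np.asarray(actual, float) - np.asarray(counted, float)
    return err.mean(), err.std(ddof=1)
```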
Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration
NASA Astrophysics Data System (ADS)
Zhou, Jian; Qi, Jinyi
2011-03-01
Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
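The factored forward projection above replaces one large system matrix with three sparse factors applied in sequence, y = B_sino (G (B_img x)). A toy-sized sketch; in the paper the factors come from the detector response, geometric line integrals, and a fitted image-domain blur, whereas here they are trivial stand-ins:

```python
import numpy as np
from scipy import sparse

n_pix, n_lor = 4, 3                                   # toy problem sizes
B_img = sparse.identity(n_pix, format="csr")          # image blur stand-in
G = sparse.csr_matrix(np.full((n_lor, n_pix), 1.0 / n_pix))  # line integrals
B_sino = sparse.identity(n_lor, format="csr")         # sinogram blur stand-in

def forward_project(x):
    # Apply the sparse factors in sequence rather than forming the full
    # system matrix, saving storage and enabling GPU-sized working sets.
    return B_sino @ (G @ (B_img @ x))
```

The back projector is the transpose chain applied in reverse order.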
NASA Technical Reports Server (NTRS)
1979-01-01
Satellites provide an excellent platform from which to observe crops on the scale and frequency required to provide accurate crop production estimates on a worldwide basis. Multispectral imaging sensors aboard these platforms are capable of providing data from which to derive acreage and production estimates. The issue of sensor swath width was examined. The quantitative trade study necessary to resolve the combined issue of sensor swath width, number of platforms, and their orbits was generated and is included. Problems with different swath width sensors were analyzed, and an assessment of the system trade-offs of swath width versus number of satellites was made for achieving Global Crop Production Forecasting.
Precise Target Geolocation and Tracking Based on UAV Video Imagery
NASA Astrophysics Data System (ADS)
Hosseinpoor, H. R.; Samadzadegan, F.; Dadrasjavan, F.
2016-06-01
There is an increasingly large number of applications for Unmanned Aerial Vehicles (UAVs), from monitoring and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors such as a C/A-code GPS receiver and a low-cost IMU on board, allowing a positioning accuracy of 5 to 10 meters. This accuracy is insufficient for applications that require cm-level precision. This paper presents a precise process for geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data are filtered using an extended Kalman filter, which provides a smoothed estimate of target location and target velocity. Accurate geolocation of targets during image acquisition is conducted via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters from the onboard IMU and RTK GPS sensors, Kalman filtering, and interior orientation parameters of the thermal camera from a pre-flight laboratory calibration process. The results of this study, compared with code-based ordinary GPS, indicate that RTK observation with the proposed method yields more than a 10-fold improvement in target geolocation accuracy.
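The Kalman smoothing of target location and velocity described above can be sketched with a linear constant-velocity filter for a single coordinate; this is a simplification of the extended Kalman filter actually used, and all tuning values are illustrative:

```python
import numpy as np

def kalman_cv_step(x, P, z, dt, q, r):
    """One predict/update cycle of a constant-velocity Kalman filter for
    a single target coordinate; state x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])             # state transition
    Q = q * np.array([[dt**3 / 3.0, dt**2 / 2.0],
                      [dt**2 / 2.0, dt]])             # process noise
    H = np.array([[1.0, 0.0]])                        # observe position only
    x = F @ x                                         # predict state
    P = F @ P @ F.T + Q                               # predict covariance
    S = H @ P @ H.T + r                               # innovation covariance
    K = P @ H.T / S                                   # Kalman gain (2x1)
    x = x + (K * (z - H @ x)).ravel()                 # measurement update
    P = (np.eye(2) - K @ H) @ P                       # covariance update
    return x, P
```

Fed a stream of geolocated positions, the filter converges to a smoothed position and velocity estimate for the tracked target.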
DNA copy number, including telomeres and mitochondria, assayed using next-generation sequencing.
Castle, John C; Biery, Matthew; Bouzek, Heather; Xie, Tao; Chen, Ronghua; Misura, Kira; Jackson, Stuart; Armour, Christopher D; Johnson, Jason M; Rohl, Carol A; Raymond, Christopher K
2010-04-16
DNA copy number variations occur within populations and aberrations can cause disease. We sought to develop an improved lab-automatable, cost-efficient, accurate platform to profile DNA copy number. We developed a sequencing-based assay of nuclear, mitochondrial, and telomeric DNA copy number that draws on the unbiased nature of next-generation sequencing and incorporates techniques developed for RNA expression profiling. To demonstrate this platform, we assayed UMC-11 cells using 5 million 33 nt reads and found tremendous copy number variation, including regions of single and homogeneous deletions and amplifications to 29 copies; 5 times more mitochondria and 4 times less telomeric sequence than a pool of non-diseased, blood-derived DNA; and that UMC-11 was derived from a male individual. The described assay outputs absolute copy number, outputs an error estimate (p-value), and is more accurate than array-based platforms at high copy number. The platform enables profiling of mitochondrial levels and telomeric length. The assay is lab-automatable and has a genomic resolution and cost that are tunable based on the number of sequence reads.
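The assay's core idea, counting reads per genomic bin and normalizing against a pooled diploid reference, can be sketched as follows (the bin granularity, names, and the pseudocount are illustrative assumptions, not the published pipeline):

```python
def copy_number_per_bin(sample_counts, reference_counts, reference_ploidy=2):
    """Estimate absolute copy number per genomic bin from read counts.

    sample_counts / reference_counts: reads falling in each bin for the
    assayed sample and for a pooled normal (assumed diploid) reference.
    """
    s_total = sum(sample_counts)
    r_total = sum(reference_counts)
    copies = []
    for s, r in zip(sample_counts, reference_counts):
        # Normalize each bin by library size, then scale by reference ploidy.
        ratio = (s / s_total) / ((r + 0.5) / r_total)  # pseudocount guards /0
        copies.append(reference_ploidy * ratio)
    return copies
```

A bin with zero sample reads against a well-covered reference comes out as a homozygous deletion (0 copies), while amplified regions scale linearly with read depth, which is why the platform stays accurate at high copy number.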
Steen, Aaron J; Mann, Julianne A; Carlberg, Valerie M; Kimball, Alexa B; Musty, Michael J; Simpson, Eric L
2017-04-01
The American Academy of Dermatology recommends dermatologists understand the costs of dermatologic care. This study sought to measure dermatology providers' understanding of the cost of dermatologic care and how those costs are communicated to patients. We also aimed to understand the perspectives of patients and dermatological trainees on how cost information enters into the care they receive or provide. Surveys were systematically developed and distributed to 3 study populations: dermatology providers, residents, and patients. Response rates were over 95% in all 3 populations. Dermatology providers and residents consistently underestimated the costs of commonly recommended dermatologic medications but accurately predicted the cost of common dermatologic procedures. Dermatology patients preferred to know the cost of procedures and medications, even when covered by insurance. In this population, the costs of dermatologic medications frequently interfered with patients' ability to properly adhere to prescribed regimens. The surveyed population was limited to the northwestern United States and findings may not be generalizable. Cost estimations were based on average reimbursement rates, which vary by insurer. Improving dermatology providers' awareness and communication of the costs of dermatologic care might enhance medical decision-making, improve adherence and outcomes, and potentially reduce overall health care expenditures. Copyright © 2016 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
Commuting by bike in Belgium, the costs of minor accidents.
Aertsens, Joris; de Geus, Bas; Vandenbulcke, Grégory; Degraeuwe, Bart; Broekx, Steven; De Nocker, Leo; Liekens, Inge; Mayeres, Inge; Meeusen, Romain; Thomas, Isabelle; Torfs, Rudi; Willems, Hanny; Int Panis, Luc
2010-11-01
Minor bicycle accidents are defined as "bicycle accidents not involving death or heavily injured persons, implying that possible hospital visits last less than 24 hours". Statistics about these accidents and related injuries are very poor, because they are mostly not reported to police, hospitals or insurance companies. Yet, they form a major share of all bicycle accidents. Official registrations underestimate the number of minor accidents and provide neither cost data nor the distance cycled. Related policies are therefore hampered by a lack of accurate data. This paper provides more insight into the importance of minor bicycle accidents and reports their frequency, risk and resulting costs. Direct costs, including damage to the bike and clothing as well as medical costs, and indirect costs, such as productivity loss and leisure time lost, are calculated. We also estimate intangible costs of pain and psychological suffering and costs for other parties involved in the accident. Data were collected during the SHAPES project using several electronic surveys. The weekly prospective registration, which lasted a year, covered 1187 persons who cycled 1,474,978 km; 219 minor bicycle accidents were reported, a frequency of 148 minor bicycle accidents per million kilometres cycled. We analyzed the economic costs related to 118 minor bicycle accidents in detail. The average total cost of these accidents is estimated at 841 euro (95% CI: 579-1205) per accident, or 0.125 euro per kilometre cycled. Overall, productivity loss is the most important component, accounting for 48% of the total cost. Intangible costs, which in past research were mostly neglected, are an important burden related to minor bicycle accidents (27% of the total cost). Even among minor accidents there are important differences in the total cost depending on the severity of the injury. 2010 Elsevier Ltd. All rights reserved.
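The headline figures are internally consistent, as a quick check of the abstract's own numbers shows:

```python
km_cycled = 1_474_978
accidents = 219
avg_cost_eur = 841           # mean total cost per accident

freq_per_million_km = accidents / km_cycled * 1_000_000
cost_per_km = avg_cost_eur * freq_per_million_km / 1_000_000

print(round(freq_per_million_km))  # -> 148 accidents per million km
print(round(cost_per_km, 3))       # -> 0.125 euro per km cycled
```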
NASA Astrophysics Data System (ADS)
Hufkens, K.; Richardson, A. D.; Migliavacca, M.; Frolking, S. E.; Braswell, B. H.; Milliman, T.; Friedl, M. A.
2010-12-01
In recent years several studies have used digital cameras and webcams to monitor green leaf phenology. Such "near-surface" remote sensing has been shown to be a cost-effective means of accurately capturing phenology. Specifically, it allows accurate tracking of intra- and inter-annual phenological dynamics at high temporal frequency and over broad spatial scales compared to visual observations or tower-based fAPAR and broadband NDVI measurements. Near-surface remote sensing measurements therefore show promise for bridging the gap between traditional in-situ measurements of phenology and satellite remote sensing data. For this work, we examined the relationship between phenophase estimates derived from satellite remote sensing (MODIS) and near-earth remote sensing derived from webcams for a select set of sites with high-quality webcam data. A logistic model was used to characterize phenophases for both the webcam and MODIS data. We documented model fit accuracy, phenophase estimates, and model biases for both data sources. Our results show that different vegetation indices (VIs) derived from MODIS produce significantly different phenophase estimates compared to corresponding estimates derived from webcam data. Different VIs showed markedly different radiometric properties and, as a result, influenced phenophase estimates. The study shows that phenophase estimates are highly dependent not only on the algorithm used but also on the VI used by the phenology retrieval algorithm. These results highlight the need for a better understanding of how near-earth and satellite remote sensing data relate to eco-physiological and canopy changes during different parts of the growing season.
Buderman, Frances E; Diefenbach, Duane R; Casalena, Mary Jo; Rosenberry, Christopher S; Wallingford, Bret D
2014-04-01
The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50-100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallopavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, whether caused by hunter selectivity or by changes in a marked animal's behavior.
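The bias the joint model corrects can be shown with a small simulation: if a fraction of tagged animals dies before the season opens, the naive tag-recovery estimate of harvest rate is deflated by exactly that pre-season survival rate, and a telemetry-based estimate of that survival restores it (an illustrative sketch of the mechanism, not the authors' full likelihood):

```python
import random

def simulate_harvest(n_tags, pre_season_survival, harvest_rate, seed=1):
    """Return naive and survival-corrected harvest-rate estimates."""
    rng = random.Random(seed)
    recovered = 0
    for _ in range(n_tags):
        if rng.random() < pre_season_survival:   # survives to the season
            if rng.random() < harvest_rate:      # ... and is then harvested
                recovered += 1
    naive = recovered / n_tags                   # biased low by survival
    corrected = naive / pre_season_survival      # telemetry-based correction
    return naive, corrected

naive, corrected = simulate_harvest(100_000, 0.8, 0.25)
```

With 80% tagging-to-harvest survival and a true 25% harvest rate, the naive estimator converges to 0.20 while the corrected one recovers 0.25.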
Optimising cluster survey design for planning schistosomiasis preventive chemotherapy.
Knowles, Sarah C L; Sturrock, Hugh J W; Turner, Hugo; Whitton, Jane M; Gower, Charlotte M; Jemu, Samuel; Phillips, Anna E; Meite, Aboulaye; Thomas, Brent; Kollie, Karsor; Thomas, Catherine; Rebollo, Maria P; Styles, Ben; Clements, Michelle; Fenwick, Alan; Harrison, Wendy E; Fleming, Fiona M
2017-05-01
The cornerstone of current schistosomiasis control programmes is delivery of praziquantel to at-risk populations. Such preventive chemotherapy requires accurate information on the geographic distribution of infection, yet the performance of alternative survey designs for estimating prevalence and converting it into treatment decisions has not been thoroughly evaluated. We used baseline schistosomiasis mapping surveys from three countries (Malawi, Côte d'Ivoire and Liberia) to generate spatially realistic gold standard datasets, against which we tested alternative two-stage cluster survey designs. We assessed how sampling different numbers of schools per district (2-20) and children per school (10-50) influences the accuracy of prevalence estimates and treatment class assignment, and we compared survey cost-efficiency using data from Malawi. Due to the focal nature of schistosomiasis, up to 53% of simulated surveys involving 2-5 schools per district failed to detect schistosomiasis in low-endemicity areas (1-10% prevalence). Increasing the number of schools surveyed per district improved treatment class assignment far more than increasing the number of children sampled per school. For Malawi, surveys of 15 schools per district and 20-30 children per school reliably detected endemic schistosomiasis and maximised cost-efficiency. In sensitivity analyses in which treatment costs and the country considered were varied, optimal survey size was remarkably consistent, with cost-efficiency maximised at 15-20 schools per district. Among two-stage cluster surveys for schistosomiasis, our simulations indicated that surveying 15-20 schools per district and 20-30 children per school optimised cost-efficiency and minimised the risk of under-treatment, with surveys involving more schools becoming increasingly cost-efficient as treatment costs rose.
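The core of such a simulation, drawing schools and then children and checking whether a focal infection is detected at all, can be sketched like this (the district's prevalence distribution and all names are illustrative assumptions, not the study's datasets):

```python
import random

def survey_detects(school_prevs, n_schools, n_children, rng):
    """Simulate one two-stage cluster survey; True if any case is found."""
    sampled = rng.sample(school_prevs, n_schools)
    for prev in sampled:
        # Positive children among those sampled at this school.
        positives = sum(rng.random() < prev for _ in range(n_children))
        if positives > 0:
            return True
    return False

def detection_rate(school_prevs, n_schools, n_children, sims=2000, seed=7):
    rng = random.Random(seed)
    hits = sum(survey_detects(school_prevs, n_schools, n_children, rng)
               for _ in range(sims))
    return hits / sims

# A focal, low-endemicity district: infection concentrated in 2 of 20 schools.
district = [0.30, 0.20] + [0.0] * 18
```

Running `detection_rate(district, 2, 30)` versus `detection_rate(district, 15, 30)` reproduces the qualitative finding: with only 2 schools sampled, most surveys miss the infected schools entirely, while 15 schools detect endemicity almost every time, and adding more children per school cannot fix the first design.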
Laboratory Workflow Analysis of Culture of Periprosthetic Tissues in Blood Culture Bottles.
Peel, Trisha N; Sedarski, John A; Dylla, Brenda L; Shannon, Samantha K; Amirahmadi, Fazlollaah; Hughes, John G; Cheng, Allen C; Patel, Robin
2017-09-01
Culture of periprosthetic tissue specimens in blood culture bottles is more sensitive than conventional techniques, but the impact on laboratory workflow has yet to be addressed. Herein, we examined the impact of culture of periprosthetic tissues in blood culture bottles on laboratory workflow and cost. The workflow was process mapped, decision tree models were constructed using probabilities of positive and negative cultures drawn from our published study (T. N. Peel, B. L. Dylla, J. G. Hughes, D. T. Lynch, K. E. Greenwood-Quaintance, A. C. Cheng, J. N. Mandrekar, and R. Patel, mBio 7:e01776-15, 2016, https://doi.org/10.1128/mBio.01776-15), and the processing times and resource costs from the laboratory staff time viewpoint were used to compare periprosthetic tissue culture processes using conventional techniques with culture in blood culture bottles. Sensitivity analysis was performed using various rates of positive cultures. Annualized labor savings were estimated based on salary costs for laboratory staff from the U.S. Bureau of Labor Statistics. The model demonstrated a 60.1% reduction in mean total staff time with the adoption of tissue inoculation into blood culture bottles compared to conventional techniques (mean ± standard deviation, 30.7 ± 27.6 versus 77.0 ± 35.3 h per month, respectively; P < 0.001). The estimated annualized labor cost savings of culture using blood culture bottles was $10,876.83 (±$337.16). Sensitivity analysis was performed using various rates of culture positivity (5 to 50%). Culture in blood culture bottles was cost-effective, based on the estimated labor cost savings of $2,132.71 for each percent increase in test accuracy. In conclusion, culture of periprosthetic tissue in blood culture bottles is not only more accurate than but is also cost-saving compared to conventional culture methods. Copyright © 2017 American Society for Microbiology.
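The reported 60.1% reduction follows directly from the monthly staff-time means quoted above:

```python
conventional_h = 77.0   # mean staff hours per month, conventional culture
bottles_h = 30.7        # mean staff hours per month, blood culture bottles

reduction = (conventional_h - bottles_h) / conventional_h
print(f"{reduction:.1%}")  # -> 60.1%
```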
A practical tool for modeling biospecimen user fees.
Matzke, Lise; Dee, Simon; Bartlett, John; Damaraju, Sambasivarao; Graham, Kathryn; Johnston, Randal; Mes-Masson, Anne-Marie; Murphy, Leigh; Shepherd, Lois; Schacter, Brent; Watson, Peter H
2014-08-01
The question of how best to attribute the unit costs of the annotated biospecimen product that is provided to a research user is a common issue for many biobanks. Some of the factors influencing user fees are capital and operating costs, internal and external demand and market competition, and moral standards that dictate that fees must have an ethical basis. It is therefore important to establish a transparent and accurate costing tool that can be utilized by biobanks and aid them in establishing biospecimen user fees. To address this issue, we built a biospecimen user fee calculator tool, accessible online at www.biobanking.org . The tool was built to allow input of: i) annual operating and capital costs; ii) costs categorized by the major core biobanking operations; iii) specimen products requested by a biobank user; and iv) services provided by the biobank beyond core operations (e.g., histology, tissue micro-array); as well as v) several user defined variables to allow the calculator to be adapted to different biobank operational designs. To establish default values for variables within the calculator, we first surveyed the members of the Canadian Tumour Repository Network (CTRNet) management committee. We then enrolled four different participants from CTRNet biobanks to test the hypothesis that the calculator tool could change approaches to user fees. Participants were first asked to estimate user fee pricing for three hypothetical user scenarios based on their biobanking experience (estimated pricing) and then to calculate fees for the same scenarios using the calculator tool (calculated pricing). Results demonstrated significant variation in estimated pricing that was reduced by calculated pricing, and that higher user fees are consistently derived when using the calculator. We conclude that adoption of this online calculator for user fee determination is an important first step towards harmonization and realistic user fees.
Favato, Giampiero; Easton, Tania; Vecchiato, Riccardo; Noikokyris, Emmanouil
2017-05-09
The protective (herd) effect of the selective vaccination of pubertal girls against human papillomavirus (HPV) implies a high probability that one of the two partners involved in intercourse is immunised, hence preventing the other from acquiring this sexually transmitted infection. The dynamic transmission models used to inform immunisation policy should include consideration of sexual behaviours and population mixing in order to demonstrate ecological validity, whereby the scenarios modelled remain faithful to the real-life social and cultural context. The primary aim of this review is to test the ecological validity of the universal HPV vaccination cost-effectiveness modelling available in the published literature. The research protocol related to this systematic review has been registered in the International Prospective Register of Systematic Reviews (PROSPERO: CRD42016034145). Eight published economic evaluations were reviewed. None of the studies showed due consideration of the complexities of human sexual behaviour and the impact these may have on the transmission of HPV. Our findings indicate that all the included models might be affected by a different degree of ecological bias, which implies an inability to reflect natural demographic and behavioural trends in their outcomes and, consequently, to accurately inform public healthcare policy. In particular, ecological bias has the effect of over-estimating the preference-based outcomes of selective immunisation. A relatively small (15-20%) over-estimation of the quality-adjusted life years (QALYs) gained with selective immunisation programmes could induce a significant error in the estimate of the cost-effectiveness of universal immunisation, by inflating its incremental cost-effectiveness ratio (ICER) beyond the acceptability threshold.
The results modelled here demonstrate the limitations of the cost-effectiveness studies for HPV vaccination, and highlight the concern that public healthcare policy might have been built upon incomplete studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
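The review's central quantitative point, that over-estimating the comparator's QALY gain inflates the ICER of universal vaccination, reduces to simple arithmetic (all figures below are hypothetical, chosen only to illustrate the mechanism):

```python
def icer(extra_cost, qaly_new, qaly_comparator):
    """Incremental cost-effectiveness ratio vs. the comparator strategy."""
    return extra_cost / (qaly_new - qaly_comparator)

extra_cost = 1_000_000        # hypothetical added cost of universal programme
qaly_universal = 150.0        # hypothetical QALYs gained, universal
qaly_selective_true = 100.0   # hypothetical QALYs gained, girls-only

unbiased = icer(extra_cost, qaly_universal, qaly_selective_true)
# A 20% ecological over-estimate of the selective programme's QALYs:
biased = icer(extra_cost, qaly_universal, qaly_selective_true * 1.20)

print(round(unbiased))  # -> 20000 per QALY
print(round(biased))    # -> 33333 per QALY, pushed toward the threshold
```

The incremental QALY denominator shrinks from 50 to 30, so a modest bias in the comparator produces a large jump in the ratio, which is exactly how a cost-effective universal programme can be made to look unaffordable.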
Hybrid-Based Dense Stereo Matching
NASA Astrophysics Data System (ADS)
Chuang, T. Y.; Ting, H. W.; Jaw, J. J.
2016-06-01
Stereo matching that generates accurate and dense disparity maps is an indispensable technique for 3D exploitation of imagery in the fields of computer vision and photogrammetry. Although numerous solutions and advances have been proposed in the literature, occlusions, disparity discontinuities, sparse texture, image distortion, and illumination changes still cause problems and await better treatment. In this paper, a hybrid method based on semi-global matching (SGM) is presented to tackle the challenges of dense stereo matching. To ease the sensitivity of SGM cost aggregation to its penalty parameters, a formal way of providing proper penalty estimates is proposed. To this end, the study employs shape-adaptive cross-based matching with an edge constraint to generate an initial disparity map for penalty estimation. Image edges, indicating the potential locations of occlusions as well as disparity discontinuities, are detected by the edge drawing algorithm to ensure that local support regions do not cover significant disparity changes. In addition, an extra penalty parameter Pe is imposed on the energy function of the SGM cost aggregation to specifically handle edge pixels. Furthermore, the final disparities of edge pixels are found by weighting values derived from both the SGM cost aggregation and U-SURF matching, providing more reliable estimates in disparity discontinuity areas. Evaluations on the Middlebury stereo benchmarks demonstrate satisfactory performance and reveal the potency of the hybrid dense stereo matching method.
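The role of the penalties the paper tunes can be seen in a minimal one-directional SGM cost aggregation along a scanline (a textbook sketch with small fixed penalties, not the paper's adaptive, edge-aware scheme):

```python
import numpy as np

def sgm_aggregate_scanline(cost, p1=1.0, p2=8.0):
    """Aggregate matching cost along one scanline, left to right.

    cost: (width, n_disparities) pixelwise matching cost.
    p1 penalizes disparity changes of +-1; p2 penalizes larger jumps.
    """
    w, d = cost.shape
    agg = np.zeros_like(cost, dtype=float)
    agg[0] = cost[0]
    for x in range(1, w):
        prev = agg[x - 1]
        prev_min = prev.min()
        for disp in range(d):
            candidates = [prev[disp]]                   # keep same disparity
            if disp > 0:
                candidates.append(prev[disp - 1] + p1)  # shift by -1
            if disp < d - 1:
                candidates.append(prev[disp + 1] + p1)  # shift by +1
            candidates.append(prev_min + p2)            # any larger jump
            # Subtracting prev_min keeps values bounded (standard SGM trick).
            agg[x, disp] = cost[x, disp] + min(candidates) - prev_min
    return agg
```

A winner-take-all disparity map is then `np.argmin(agg, axis=-1)`; raising p2 smooths over discontinuities, which is exactly why edge pixels benefit from a separate penalty in the paper's formulation.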
Review of the NURE Assessment of the U.S. Gulf Coast Uranium Province
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Susan M., E-mail: SusanHall@usgs.gov
2013-09-15
Historic exploration and development were used to evaluate the reliability of domestic uranium reserves and potential resources estimated by the U.S. Department of Energy's national uranium resource evaluation (NURE) program in the U.S. Gulf Coast Uranium Province. NURE estimated 87 million pounds of reserves in the $30/lb U3O8 cost category in the Coastal Plain uranium resource region, most in the Gulf Coast Uranium Province. Since NURE, 40 million pounds of reserves have been mined, and 38 million pounds are estimated to remain in place as of 2012, accounting for all but 9 million pounds of U3O8 in the reserve or production categories of the NURE estimate. Considering the complexities and uncertainties of the analysis, this study indicates that the NURE reserve estimates for the province were accurate. An unconditional potential resource of 1.4 billion pounds of U3O8, of which 600 million pounds fell in the forward cost category of $30/lb U3O8 (1980 prices), was estimated in 106 favorable areas by the NURE program in the province. Removing potential resources from the non-productive Houston embayment, and those reserves estimated below historic and current mining depths, reduces the unconditional potential resource by 33% to about 930 million pounds of U3O8, and that in the $30/lb cost category by 34% to 399 million pounds of U3O8. Based on production records and reserve estimates tabulated for the region, most of the production since 1980 is likely from the reserves identified by NURE. The potential resource predicted by NURE has not been developed, likely due to a variety of factors related to the low uranium prices that have prevailed since 1980.
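The reserve bookkeeping in the abstract can be checked directly:

```python
nure_reserves = 87    # million lb U3O8, $30/lb category, NURE estimate
mined_since = 40      # million lb produced since NURE
remaining_2012 = 38   # million lb estimated in place as of 2012

unaccounted = nure_reserves - mined_since - remaining_2012
print(unaccounted)  # -> 9 million lb not in the reserve/production categories
```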
A multiscale forecasting method for power plant fleet management
NASA Astrophysics Data System (ADS)
Chen, Hongmei
In recent years the electric power industry has been challenged by a high level of uncertainty and volatility brought on by deregulation and globalization. A power producer must minimize life cycle cost while meeting stringent safety and regulatory requirements and fulfilling customer demand for high reliability. Therefore, to achieve true system excellence, this work develops a more sophisticated system-level decision-making process, supported by a more accurate forecasting system, to manage diverse and often widely dispersed generation units as a single, easily scaled and deployed fleet and to fully utilize the critical assets of a power producer. The process takes into account the time horizon for each of the major decision actions taken in a power plant and develops methods for information sharing between them. These decisions are highly interrelated, and no optimal operation can be achieved without sharing information across the overall process. The process includes a forecasting system to provide information for planning under uncertainty. A new forecasting method is proposed, which combines several modeling techniques at the different time-scales of the forecasting objects. It can not only take advantage of abundant historical data but also account for the impact of pertinent driving forces from the external business environment, achieving more accurate forecasts. Block bootstrap is then used to measure the bias in the estimate of the expected life cycle cost that will actually be needed to drive the business of a power plant in the long run. Finally, scenario analysis is used to provide a composite picture of future developments for decision making and strategic planning. The decision-making process is applied to a typical power producer chosen to represent challenging customer demand during high-demand periods.
The process enhances system excellence by providing more accurate market information, evaluating the impact of external business environment, and considering cross-scale interactions between decision actions. Along with this process, system operation strategies, maintenance schedules, and capacity expansion plans that guide the operation of the power plant are optimally identified, and the total life cycle costs are estimated.
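The block bootstrap used to gauge bias in the life-cycle cost estimate resamples contiguous blocks so that serial dependence in the cost series is preserved. A generic moving-block sketch (not the dissertation's exact procedure; the block length and estimator are illustrative):

```python
import random

def block_bootstrap_bias(series, estimator, block_len=12, n_boot=500, seed=3):
    """Estimate the bias of `estimator` on a serially dependent series.

    Resamples fixed-length contiguous blocks (moving-block bootstrap) and
    returns mean(bootstrap estimates) - estimator(series).
    """
    rng = random.Random(seed)
    n = len(series)
    starts = range(n - block_len + 1)
    point = estimator(series)
    boots = []
    for _ in range(n_boot):
        sample = []
        while len(sample) < n:
            s = rng.choice(starts)
            sample.extend(series[s:s + block_len])
        boots.append(estimator(sample[:n]))
    return sum(boots) / n_boot - point
```

For a linear estimator like the sample mean the moving-block bias is essentially zero; for nonlinear life-cycle cost functionals it need not be, which is the point of the exercise.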
Burden of osteoporosis and fractures.
Keen, Richard W
2003-09-01
Osteoporosis currently affects up to one in three women and one in 12 men. In 1990, there were 1.6 million hip fractures per annum worldwide and this number is estimated to reach 6 million by 2050. This increase in the number of fractures is due to an increase in the number of elderly people in the population, improved survival, and an increase in the age-specific fracture rates of unknown etiology. The rising number of osteoporotic fractures and their associated morbidity will place a heavy burden on future health care resources. In the United States, the cost for the management of osteoporosis has been estimated at $17 billion. The majority of this cost is spent on the acute surgical and medical management following hip fracture, and the subsequent rehabilitation. Currently, only minimal costs are utilized for treatment and prevention of osteoporosis. Hopefully, however, an accurate assessment of the burden of osteoporosis on the individual and the health care system will enable the targeting of resources to tackle this growing problem. With an increasing number of effective pharmaceutical interventions, it is critical that these agents are targeted to those at greatest risk for future fracture. This will ultimately reduce the burden of osteoporosis in future years.
Normalization of energy-dependent gamma survey data.
Whicker, Randy; Chambers, Douglas
2015-05-01
Instruments and methods for normalization of energy-dependent gamma radiation survey data to a less energy-dependent basis of measurement are evaluated based on relevant field data collected at 15 different sites across the western United States along with a site in Mongolia. Normalization performance is assessed relative to measurements with a high-pressure ionization chamber (HPIC) due to its "flat" energy response and accurate measurement of the true exposure rate from both cosmic and terrestrial radiation. While analytically ideal for normalization applications, cost and practicality disadvantages have increased demand for alternatives to the HPIC. Regression analysis on paired measurements between energy-dependent sodium iodide (NaI) scintillation detectors (5-cm by 5-cm crystal dimensions) and the HPIC revealed highly consistent relationships among sites not previously impacted by radiological contamination (natural sites). A resulting generalized data normalization factor based on the average sensitivity of NaI detectors to naturally occurring terrestrial radiation (0.56 nGy/h HPIC per nGy/h NaI), combined with a calculated site-specific estimate of cosmic radiation, produced reasonably accurate predictions of HPIC readings at natural sites. Normalization against two potential alternative instruments (a tissue-equivalent plastic scintillator and an energy-compensated NaI detector) did not perform better than the sensitivity-adjustment approach at natural sites. Each approach produced unreliable estimates of HPIC readings at radiologically impacted sites, though normalization against the plastic scintillator or energy-compensated NaI detector can address incompatibilities between different energy-dependent instruments with respect to estimation of soil radionuclide levels. The appropriate data normalization method depends on the nature of the site, the expected duration of the project, survey objectives, and considerations of cost and practicality.
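One plausible reading of the abstract's recipe, scaling the NaI terrestrial reading by the 0.56 sensitivity factor and then adding back the site-specific cosmic component, is a one-liner (the function name and argument split are my assumptions, not the paper's notation):

```python
def predict_hpic(nai_terrestrial, cosmic_component, factor=0.56):
    """Predict the HPIC exposure-rate reading (nGy/h) from a NaI reading.

    nai_terrestrial: terrestrial reading from a 5x5 cm NaI detector, nGy/h.
    cosmic_component: site-specific cosmic dose rate, nGy/h, added back
    because NaI detectors of this size respond poorly to cosmic radiation.
    factor: generalized NaI-to-HPIC sensitivity factor from the study.
    """
    return factor * nai_terrestrial + cosmic_component
```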
Budiman, Erwin S; Samant, Navendu; Resch, Ansgar
2013-03-01
Despite accuracy standards, there are performance differences among commercially available blood glucose monitoring (BGM) systems. The objective of this analysis was to assess the potential clinical and economic impact of accuracy differences of various BGM systems using a modeling approach. We simulated additional risk of hypoglycemia due to blood glucose (BG) measurement errors of five different BGM systems based on results of a real-world accuracy study, while retaining other sources of glycemic variability. Using data from published literature, we estimated an annual additional number of required medical interventions as a result of hypoglycemia. We based our calculations on patients with type 1 diabetes mellitus (T1DM) and T2DM requiring multiple daily injections (MDIs) of insulin in a U.S. health care system. We estimated additional costs attributable to treatment of severe hypoglycemic episodes resulting from BG measurement errors. Results from our model predict an annual difference of approximately 296,000 severe hypoglycemic episodes from BG measurement errors for T1DM (105,000 for T2DM MDI) patients for the estimated U.S. population of 958,800 T1DM and 1,353,600 T2DM MDI patients, comparing use of the least accurate BGM system with use of the most accurate system. This resulted in additional direct costs of approximately $339 million for T1DM and approximately $121 million for T2DM MDI patients per year. Our analysis shows that error patterns over the operating range of a BGM meter may lead to relevant clinical and economic outcome differences that may not be reflected in a common accuracy metric or standard. Further research is necessary to validate the findings of this model-based approach. © 2013 Diabetes Technology Society.
Parker, David; Belaud-Rotureau, Marc-Antoine
2014-01-01
Break-apart fluorescence in situ hybridization (FISH) is the gold standard test for anaplastic lymphoma kinase (ALK) gene rearrangement. However, this methodology often is assumed to be expensive and potentially cost-prohibitive given the low prevalence of ALK-positive non-small cell lung cancer (NSCLC) cases. To more accurately estimate the cost of ALK testing by FISH, we developed a micro-cost model that accounts for all cost elements of the assay, including laboratory reagents, supplies, capital equipment, technical and pathologist labor, and the acquisition cost of the commercial test and associated reagent kits and controls. By applying a set of real-world base-case parameter values, we determined that the cost of a single ALK break-apart FISH test result is $278.01. Sensitivity analysis on the parameters of batch size, testing efficiency, and the cost of the commercial diagnostic testing products revealed that the cost per result is highly sensitive to batch size, but much less so to efficiency or product cost. This implies that ALK testing by FISH will be most cost effective when performed in high-volume centers. Our results indicate that testing cost may not be the primary determinant of crizotinib (Xalkori(®)) treatment cost effectiveness, and suggest that testing cost is an insufficient reason to limit the use of FISH testing for ALK rearrangement.
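The batch-size sensitivity described above follows from spreading fixed per-batch costs (controls, setup labor) across more samples; a minimal sketch with hypothetical cost figures, not the paper's full micro-cost model:

```python
def cost_per_result(per_batch_fixed_usd, per_sample_usd, batch_size):
    """Per-result cost in a batched assay: fixed per-batch costs are
    amortized over the batch, so cost per result falls as batch size grows,
    while per-sample costs (reagents, probes) are unaffected."""
    if batch_size < 1:
        raise ValueError("batch size must be at least 1")
    return per_batch_fixed_usd / batch_size + per_sample_usd
```

With an assumed $200 fixed cost per batch and $100 per sample, a single-sample run costs $300 per result while a batch of eight costs $125, illustrating why high-volume centers are most cost effective.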
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun Wei; Huang, Guo H., E-mail: huang@iseis.org; Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan, S4S 0A2
2012-06-15
Highlights: • Inexact piecewise-linearization-based fuzzy flexible programming is proposed. • It is the first application to waste management under multiple complexities. • It tackles nonlinear economies-of-scale effects in interval-parameter constraints. • It estimates costs more accurately than the linear-regression-based model. • Uncertainties are decreased and more satisfactory interval solutions are obtained. - Abstract: To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from the objective function to constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP can estimate the costs more accurately.
The comparison results between IPFP and IPFP3 indicate that their solutions would be significantly different. The decreased system uncertainties in IPFP's solutions demonstrate its effectiveness for providing more satisfactory interval solutions than IPFP3. Following its first application to waste management, the IPFP can be potentially applied to other environmental problems under multiple complexities.
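The piecewise-linearization step can be illustrated on a single concave economies-of-scale cost curve; the breakpoints and the 0.6 exponent below are illustrative assumptions, not the paper's formulation:

```python
def piecewise_linearize(cost_fn, x_breaks):
    """Approximate a nonlinear cost function by chords between successive
    breakpoints, as when embedding economies-of-scale effects in a
    linear/interval programming model. Returns (x0, x1, slope, intercept)."""
    segments = []
    for x0, x1 in zip(x_breaks[:-1], x_breaks[1:]):
        slope = (cost_fn(x1) - cost_fn(x0)) / (x1 - x0)
        intercept = cost_fn(x0) - slope * x0
        segments.append((x0, x1, slope, intercept))
    return segments

def evaluate(segments, x):
    """Evaluate the piecewise-linear approximation at x."""
    for x0, x1, slope, intercept in segments:
        if x0 <= x <= x1:
            return slope * x + intercept
    raise ValueError("x outside linearized range")
```

The approximation is exact at the breakpoints; between them, chords under a concave EOS curve lie below the true cost, which is one source of the underestimation behavior discussed above.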
Accurate Estimation of Target Amounts Using an Expanded Bass Model for Demand-Side Management
NASA Astrophysics Data System (ADS)
Kim, Hyun-Woong; Park, Jong-Jin; Kim, Jin-O.
2008-10-01
The electricity demand in Korea has increased rapidly along with steady economic growth since the 1970s. Korea has therefore actively pursued not only SSM (Supply-Side Management) but also DSM (Demand-Side Management) activities, both to reduce the investment cost of generating units and to lower electricity supply costs by improving national energy utilization efficiency. However, research on rebates, which strongly influence the success or failure of DSM programs, remains insufficient. This paper develops a mathematical formulation of an expanded Bass model that incorporates rebates, which affect penetration amounts in DSM programs. To reflect the rebate effect more precisely, the pricing function used in the expanded Bass model directly reflects the response of potential participants to the rebate level.
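The classical Bass diffusion dynamics that the paper extends can be sketched in discrete time; the parameter values below are illustrative, and the rebate-dependent pricing function of the expanded model is not shown:

```python
def bass_adoption(m, p, q, periods):
    """Discrete-time Bass diffusion of cumulative adopters N.

    m: market potential (e.g., eligible DSM participants)
    p: coefficient of innovation (external influence)
    q: coefficient of imitation (word of mouth)
    Each period, new adoptions are (p + q*N/m) * (m - N).
    """
    N = 0.0
    path = []
    for _ in range(periods):
        N += (p + q * N / m) * (m - N)
        path.append(N)
    return path
```

With typical values such as p = 0.03 and q = 0.38, cumulative adoption follows the familiar S-curve and saturates toward the market potential m.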
Unbiased simulation of near-Clifford quantum circuits
Bennink, Ryan S.; Ferragut, Erik M.; Humble, Travis S.; ...
2017-06-28
Modeling and simulation are essential for predicting and verifying the behavior of fabricated quantum circuits, but existing simulation methods are either impractically costly or require an unrealistic simplification of error processes. In this paper, we present a method of simulating noisy Clifford circuits that is both accurate and practical in experimentally relevant regimes. In particular, the cost is weakly exponential in the size and the degree of non-Cliffordness of the circuit. Our approach is based on the construction of exact representations of quantum channels as quasiprobability distributions over stabilizer operations, which are then sampled, simulated, and weighted to yield unbiased statistical estimates of circuit outputs and other observables. As a demonstration of these techniques, we simulate a Steane [[7,1,3]] code.
McLawhorn, Alexander S; Carroll, Kaitlin M; Blevins, Jason L; DeNegre, Scott T; Mayman, David J; Jerabek, Seth A
2015-10-01
Template-directed instrumentation (TDI) for total knee arthroplasty (TKA) may streamline operating room (OR) workflow and reduce costs by preselecting implants and minimizing instrument tray burden. A decision model simulated the economics of TDI. Sensitivity analyses determined thresholds for model variables to ensure TDI success. A clinical pilot was reviewed. The accuracy of preoperative templates was validated, and 20 consecutive primary TKAs were performed using TDI. The model determined that preoperative component size estimation should be accurate to ±1 implant size for 50% of TKAs to implement TDI. The pilot showed that preoperative template accuracy exceeded 97%. There were statistically significant improvements in OR turnover time and in-room time for TDI compared to an historical cohort of TKAs. TDI reduces costs and improves OR efficiency. Copyright © 2015 Elsevier Inc. All rights reserved.
Estimating Local Chlamydia Incidence and Prevalence Using Surveillance Data
White, Peter J.
2017-01-01
Background: Understanding patterns of chlamydia prevalence is important for addressing inequalities and planning cost-effective control programs. Population-based surveys are costly; the best data for England come from the Natsal national surveys, which are only available once per decade, and are nationally representative but not powered to compare prevalence in different localities. Prevalence estimates at finer spatial and temporal scales are required. Methods: We present a method for estimating local prevalence by modeling the infection, testing, and treatment processes. Prior probability distributions for parameters describing natural history and treatment-seeking behavior are informed by the literature or calibrated using national prevalence estimates. By combining them with surveillance data on numbers of chlamydia tests and diagnoses, we obtain estimates of local screening rates, incidence, and prevalence. We illustrate the method by application to data from England. Results: Our estimates of national prevalence by age group agree with the Natsal-3 survey. They could be improved by additional information on the number of diagnosed cases that were asymptomatic. There is substantial local-level variation in prevalence, with more infection in deprived areas. Incidence in each sex is strongly correlated with prevalence in the other. Importantly, we find that positivity (the proportion of tests which were positive) does not provide a reliable proxy for prevalence. Conclusion: This approach provides local chlamydia prevalence estimates from surveillance data, which could inform analyses to identify and understand local prevalence patterns and assess local programs. Estimates could be more accurate if surveillance systems recorded additional information, including on symptoms. See video abstract at http://links.lww.com/EDE/B211. PMID:28306613
A long-term evaluation of biopsy darts and DNA to estimate cougar density
Beausoleil, Richard A.; Clark, Joseph D.; Maletzke, Benjamin T.
2016-01-01
Accurately estimating cougar (Puma concolor) density is usually based on long-term research consisting of intensive capture and Global Positioning System collaring efforts and may cost hundreds of thousands of dollars annually. Because wildlife agency budgets rarely accommodate this approach, most agencies infer cougar density from published literature, rely on short-term studies, or use hunter harvest data as a surrogate in their jurisdictions; all of which may limit accuracy and increase the risk of management actions. In an effort to develop a more cost-effective long-term strategy, we evaluated a research approach using citizen scientists with trained hounds to tree cougars and collect tissue samples with biopsy darts. We then used the DNA to individually identify cougars and employed spatially explicit capture–recapture models to estimate cougar densities. Overall, 240 tissue samples were collected in northeastern Washington, USA, producing 166 genotypes (including recaptures and excluding dependent kittens) of 133 different cougars (8-25/yr) from 2003 to 2011. Mark–recapture analyses revealed a mean density of 2.2 cougars/100 km² (95% CI=1.1-4.3) and stable to decreasing population trends (β=-0.048, 95% CI=-0.106 to 0.011) over the 9 years of study, with an average annual harvest rate of 14% (range=7-21%). The average annual cost for field sampling and genotyping was US$11,265 ($422.24/sample or $610.73/successfully genotyped sample). Our results demonstrated that long-term biopsy sampling using citizen scientists can increase capture success and provide reliable cougar-density information at a reasonable cost.
Predicting hospital accounting costs
Newhouse, Joseph P.; Cretin, Shan; Witsberger, Christina J.
1989-01-01
Two alternative methods to Medicare Cost Reports that provide information about hospital costs more promptly but less accurately are investigated. Both employ utilization data from current-year bills. The first attaches costs to utilization data using cost-to-charge ratios from the previous year's cost report; the second uses charges from the current year's bills. The first method is the more accurate of the two, but even using it, only 40 percent of hospitals had predicted costs within plus or minus 5 percent of actual costs. The feasibility and cost of obtaining cost reports from a small, fast-track sample of hospitals should be investigated. PMID:10313352
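The first method above can be sketched as applying prior-year cost-to-charge ratios to current-year charges; the per-department breakdown and the figures are illustrative assumptions:

```python
def estimate_costs_from_charges(current_charges, prior_cost_to_charge):
    """Estimate hospital costs by attaching prior-year cost-to-charge
    ratios to current-year billed charges, department by department."""
    return {dept: charges * prior_cost_to_charge[dept]
            for dept, charges in current_charges.items()}
```

For example, $10,000 of current-year radiology charges with a prior-year ratio of 0.45 would be costed at $4,500, available as soon as the bills are, rather than after the cost report is filed.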
Time-varying SMART design and data analysis methods for evaluating adaptive intervention effects.
Dai, Tianjiao; Shete, Sanjay
2016-08-30
In a standard two-stage SMART design, the intermediate response to the first-stage intervention is measured at a fixed time point for all participants. Subsequently, responders and non-responders are re-randomized and the final outcome of interest is measured at the end of the study. To reduce the side effects and costs associated with first-stage interventions in a SMART design, we proposed a novel time-varying SMART design in which individuals are re-randomized to the second-stage interventions as soon as a pre-fixed intermediate response is observed. With this strategy, the duration of the first-stage intervention will vary. We developed a time-varying mixed effects model and a joint model that allows for modeling the outcomes of interest (intermediate and final) and the random durations of the first-stage interventions simultaneously. The joint model borrows strength from the survival sub-model in which the duration of the first-stage intervention (i.e., time to response to the first-stage intervention) is modeled. We performed a simulation study to evaluate the statistical properties of these models. Our simulation results showed that the two modeling approaches were both able to provide good estimations of the means of the final outcomes of all the embedded interventions in a SMART. However, the joint modeling approach was more accurate for estimating the coefficients of first-stage interventions and time of the intervention. We conclude that the joint modeling approach provides more accurate parameter estimates and a higher estimated coverage probability than the single time-varying mixed effects model, and we recommend the joint model for analyzing data generated from time-varying SMART designs. In addition, we showed that the proposed time-varying SMART design is cost-efficient and equally effective in selecting the optimal embedded adaptive intervention as the standard SMART design.
NASA Astrophysics Data System (ADS)
Dobriyal, Pariva; Qureshi, Ashi; Badola, Ruchi; Hussain, Syed Ainul
2012-08-01
The maintenance of elevated soil moisture is an important ecosystem service of natural ecosystems. Understanding the patterns of soil moisture distribution is useful to a wide range of agencies concerned with weather and climate, soil conservation, agricultural production and landscape management. However, the great heterogeneity in the spatial and temporal distribution of soil moisture and the lack of standard methods to estimate this property limit its quantification and use in research. This literature-based review aims to (i) compile the available knowledge on the methods used to estimate soil moisture at the landscape level, (ii) compare and evaluate the available methods on the basis of common parameters such as resource efficiency, accuracy of results and spatial coverage and (iii) identify the method that will be most useful for forested landscapes in developing countries. On the basis of the strengths and weaknesses of each of the methods reviewed, we conclude that the direct (gravimetric) method is accurate and inexpensive but is destructive, slow and time-consuming and does not allow replications, thereby having limited spatial coverage. The suitability of indirect methods depends on the cost, accuracy, response time, effort involved in installation, management and durability of the equipment. Our review concludes that measurements of soil moisture using the Time Domain Reflectometry (TDR) and Ground Penetrating Radar (GPR) methods are instantaneously obtained and accurate. GPR may be used over larger areas (up to 500 × 500 m a day) but is less cost-effective and more difficult to use in forested landscapes than TDR.
This review will be helpful to researchers, foresters, natural resource managers and agricultural scientists in selecting the appropriate method for estimating soil moisture, keeping in view the time and resources available to them, and in generating information for the efficient allocation of water resources and the maintenance of the soil moisture regime.
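The direct (gravimetric) method mentioned above reduces to a single ratio of weighed masses; the sample masses below are assumed for illustration:

```python
def gravimetric_water_content(wet_mass_g, dry_mass_g):
    """Gravimetric soil moisture: mass of water per unit mass of oven-dry
    soil, from the sample mass before and after oven drying."""
    if dry_mass_g <= 0:
        raise ValueError("dry mass must be positive")
    return (wet_mass_g - dry_mass_g) / dry_mass_g
```

A field sample weighing 120 g that dries to 100 g has a gravimetric water content of 0.20 g water per g dry soil. The destructiveness noted in the review is inherent: the sample is removed and oven-dried, so it cannot be remeasured.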
Rapid Detection of Small Movements with GNSS Doppler Observables
NASA Astrophysics Data System (ADS)
Hohensinn, Roland; Geiger, Alain
2017-04-01
High-alpine terrain reacts very sensitively to varying environmental conditions. For example, increasing temperatures cause thawing of permafrost areas. This, in turn, increases the threat from natural hazards such as debris flows (e.g., from rock glaciers) or rockfalls. The Institute of Geodesy and Photogrammetry is contributing to alpine mass-movement monitoring systems in different project areas in the Swiss Alps. A main focus lies on providing geodetic mass-movement information derived from GNSS static solutions on a daily and sub-daily basis, obtained with low-cost and autonomous GNSS stations. Another focus is set on rapidly providing reliable geodetic information in real time, e.g., for integration into early warning systems. One way to achieve this is the estimation of accurate station velocities from observations of range rates, which can be obtained as Doppler observables from time derivatives of carrier phase measurements. The key to this method lies in precise modeling of the prominent effects contributing to the observed range rates, namely satellite velocity, atmospheric delay rates and relativistic effects. A suitable observation model is then devised, which accounts for these predictions. The observation model, combined with a simple kinematic movement model, forms the basis for the parameter estimation. Based on the estimated station velocities, movements are then detected using a statistical test. To improve the reliability of the estimated parameters, an on-line quality control procedure is also employed. We will present the basic algorithms as well as results from first tests which were carried out with a low-cost GPS L1 phase receiver. With a u-blox module and a sampling rate of 5 Hz, accuracies at the mm/s level can be obtained and velocities down to 1 cm/s can be detected. Reliable and accurate station velocities and movement information can be provided within seconds.
Quantifying the life-history response to increased male exposure in female Drosophila melanogaster.
Edward, Dominic A; Fricke, Claudia; Gerrard, Dave T; Chapman, Tracey
2011-02-01
Precise estimates of costs and benefits, the fitness economics, of mating are of key importance in understanding how selection shapes the coevolution of male and female mating traits. However, fitness is difficult to define and quantify. Here, we used a novel application of an established analytical technique to calculate individual- and population-based estimates of fitness, including those sensitive to the timing of reproduction, to measure the effects on females of increased exposure to males. Drosophila melanogaster females were exposed to high and low frequencies of contact with males, and life-history traits for each individual female were recorded. We then compared different fitness estimates to determine which of them best described the changes in life histories. We predicted that rate-sensitive estimates would be more accurate, as mating influences the rate of offspring production in this species. The results supported this prediction. Increased exposure to males led to significantly decreased fitness within declining but not stable or increasing populations. There was a net benefit of increased male exposure in expanding populations, despite a significant decrease in lifespan. The study shows how a more accurate description of fitness, and new insights, can be achieved by considering individual life-history strategies within the context of population growth. © 2010 The Author(s). Evolution © 2010 The Society for the Study of Evolution.
Time-dependent classification accuracy curve under marker-dependent sampling.
Zhu, Zhaoyin; Wang, Xiaofei; Saha-Chaudhuri, Paramita; Kosinski, Andrzej S; George, Stephen L
2016-07-01
Evaluating the classification accuracy of a candidate biomarker signaling the onset of disease or disease status is essential for medical decision making. A good biomarker would accurately identify the patients who are likely to progress or die at a particular time in the future or who are in urgent need of active treatment. To assess the performance of a candidate biomarker, the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) are commonly used. In many cases, the standard simple random sampling (SRS) design used for biomarker validation studies is costly and inefficient. In order to improve the efficiency and reduce the cost of biomarker validation, marker-dependent sampling (MDS) may be used. In a MDS design, the selection of patients to assess true survival time is dependent on the result of a biomarker assay. In this article, we introduce a nonparametric estimator for time-dependent AUC under a MDS design. The consistency and the asymptotic normality of the proposed estimator are established. Simulation shows the unbiasedness of the proposed estimator and a significant efficiency gain of the MDS design over the SRS design. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
An adaptive Gaussian process-based iterative ensemble smoother for data assimilation
NASA Astrophysics Data System (ADS)
Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao
2018-05-01
Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated by the saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
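The Kalman-type update at the core of an ensemble smoother can be sketched as follows; surrogate construction, iteration, and perturbed observations are omitted for brevity, so this is a bare single-step illustration rather than the GPIES algorithm itself:

```python
import numpy as np

def ensemble_smoother_update(X, Y, obs, obs_err_var):
    """One ensemble-smoother update step.

    X: (n_par, n_ens) parameter realizations
    Y: (n_obs, n_ens) predicted data for each realization (in GPIES these
       would come cheaply from the GP surrogate rather than the full model)
    obs: (n_obs,) observation vector
    The gain is built from ensemble cross- and auto-covariances.
    """
    Xa = X - X.mean(axis=1, keepdims=True)
    Ya = Y - Y.mean(axis=1, keepdims=True)
    n = X.shape[1] - 1
    Cxy = Xa @ Ya.T / n                                  # parameter-data covariance
    Cyy = Ya @ Ya.T / n + obs_err_var * np.eye(Y.shape[0])  # data covariance + noise
    K = Cxy @ np.linalg.inv(Cyy)                         # Kalman-type gain
    return X + K @ (obs[:, None] - Y)                    # shift each realization
```

On a linear toy model y = 2x with a near-noiseless observation y = 6, a prior ensemble centered at 0 is pulled to a posterior mean near x = 3.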
NASA Astrophysics Data System (ADS)
Qayyum, Abdul; Saad, Naufal M.; Kamel, Nidal; Malik, Aamir Saeed
2018-01-01
The monitoring of vegetation near high-voltage transmission power lines and poles is tedious. Blackouts present a huge challenge to power distribution companies and often occur due to tree growth in hilly and rural areas. There are numerous methods of monitoring hazardous overgrowth that are expensive and time-consuming. Accurate estimation of tree and vegetation heights near power poles can prevent the disruption of power transmission in vulnerable zones. This paper presents a cost-effective approach based on a convolutional neural network (CNN) algorithm to compute the height (depth maps) of objects proximal to power poles and transmission lines. The proposed CNN extracts and classifies features by employing convolutional pooling inputs to fully connected data layers that capture prominent features from stereo image patches. Unmanned aerial vehicle or satellite stereo image datasets can thus provide a feasible and cost-effective approach that identifies threat levels based on height and distance estimations of hazardous vegetation and other objects. Results were compared with extant disparity map estimation techniques, such as graph cut, dynamic programming, belief propagation, and area-based methods. The proposed method achieved an accuracy rate of 90%.
Measuring food intake with digital photography
Martin, Corby K.; Nicklas, Theresa; Gunturk, Bahadir; Correa, John B.; Allen, H. Raymond; Champagne, Catherine
2014-01-01
The Digital Photography of Foods Method accurately estimates the food intake of adults and children in cafeterias. When using this method, images of food selection and leftovers are quickly captured in the cafeteria. These images are later compared to images of “standard” portions of food using a computer application. The amount of food selected and discarded is estimated based upon this comparison, and the application automatically calculates energy and nutrient intake. Herein, we describe this method, as well as a related method called the Remote Food Photography Method (RFPM), which relies on smartphones to estimate food intake in near real-time in free-living conditions. When using the RFPM, participants capture images of food selection and leftovers using a smartphone and these images are wirelessly transmitted in near real-time to a server for analysis. Because data are transferred and analyzed in near real-time, the RFPM provides a platform for participants to quickly receive feedback about their food intake behavior and to receive dietary recommendations to achieve weight loss and health promotion goals. The reliability and validity of measuring food intake with the RFPM in adults and children will also be reviewed. The body of research reviewed herein demonstrates that digital imaging accurately estimates food intake in many environments and it has many advantages over other methods, including reduced participant burden, elimination of the need for participants to estimate portion size, and incorporation of computer automation to improve the accuracy, efficiency, and the cost-effectiveness of the method. PMID:23848588
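The selection-minus-leftovers comparison underlying the method can be sketched as follows; the standard-portion dictionary, field names, and numbers are illustrative assumptions:

```python
def estimated_intake(selected_fraction, leftover_fraction, standard_portion):
    """Estimate intake from image comparisons against a standard portion.

    selected_fraction / leftover_fraction: portion sizes (as fractions of the
    standard portion) judged from the selection and leftover images.
    standard_portion: known weight and energy of the standard portion.
    """
    eaten = selected_fraction - leftover_fraction
    return {"grams": eaten * standard_portion["grams"],
            "kcal": eaten * standard_portion["kcal"]}
```

For a 100 g, 250 kcal standard portion, selecting one full portion and leaving a quarter yields an estimated intake of 75 g and 187.5 kcal.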
Accurate Heart Rate Monitoring During Physical Exercises Using PPG.
Temko, Andriy
2017-09-01
The challenging task of heart rate (HR) estimation from the photoplethysmographic (PPG) signal, during intensive physical exercises, is tackled in this paper. The study presents a detailed analysis of a novel algorithm (WFPV) that exploits a Wiener filter to attenuate the motion artifacts, a phase vocoder to refine the HR estimate and user-adaptive post-processing to track the subject physiology. Additionally, an offline version of the HR estimation algorithm that uses Viterbi decoding is designed for scenarios that do not require online HR monitoring (WFPV+VD). The performance of the HR estimation systems is rigorously compared with existing algorithms on the publicly available database of 23 PPG recordings. On the whole dataset of 23 PPG recordings, the algorithms result in average absolute errors of 1.97 and 1.37 BPM in the online and offline modes, respectively. On the test dataset of 10 PPG recordings which were most corrupted with motion artifacts, WFPV has an error of 2.95 BPM on its own and 2.32 BPM in an ensemble with two existing algorithms. The error rate is significantly reduced when compared with the state-of-the-art PPG-based HR estimation methods. The proposed system is shown to be accurate in the presence of strong motion artifacts and in contrast to existing alternatives has very few free parameters to tune. The algorithm has a low computational cost and can be used for fitness tracking and health monitoring in wearable devices. The MATLAB implementation of the algorithm is provided online.
Kärkkäinen, Hanni P; Sillanpää, Mikko J
2013-09-04
Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with a censored Gaussian data, while with a binary or an ordinal data the superiority of the threshold model could not be confirmed.
NASA Astrophysics Data System (ADS)
Palatella, Luigi; Trevisan, Anna; Rambaldi, Sandro
2013-08-01
Valuable information for estimating the traffic flow is obtained with current GPS technology by monitoring position and velocity of vehicles. In this paper, we present a proof of concept study that shows how the traffic state can be estimated using only partial and noisy data by assimilating them into a dynamical model. Our approach is based on a data assimilation algorithm, developed by the authors for chaotic geophysical models, designed to be equivalent but computationally much less demanding than the traditional extended Kalman filter. Here we show that the algorithm is even more efficient if the system is not chaotic and demonstrate by numerical experiments that an accurate reconstruction of the complete traffic state can be obtained at a very low computational cost by monitoring only a small percentage of vehicles.
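The variance-weighted blend at the heart of any such assimilation scheme can be shown in a scalar sketch. The numbers are toy values and this is a plain Kalman-style analysis step, not the authors' reduced-rank algorithm:

```python
def analysis(forecast, var_f, obs, var_o):
    """Blend a model forecast with a noisy observation, weighting each
    by the inverse of its variance (scalar Kalman analysis step)."""
    gain = var_f / (var_f + var_o)          # Kalman gain in [0, 1]
    state = forecast + gain * (obs - forecast)
    var = (1 - gain) * var_f                # analysis variance shrinks
    return state, var

# Assumed toy values: model forecast of traffic speed vs. one GPS probe
state, var = analysis(forecast=60.0, var_f=25.0, obs=48.0, var_o=5.0)
```

With the observation five times more certain than the forecast, the analysis lands close to the observation (50 km/h) and its variance drops below the observation variance; assimilating many sparse probes repeats this step across the model state.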
NASA Astrophysics Data System (ADS)
Fattoruso, Grazia; Longobardi, Antonia; Pizzuti, Alfredo; Molinara, Mario; Marocco, Claudio; De Vito, Saverio; Tortorella, Francesco; Di Francia, Girolamo
2017-06-01
Rainfall data collected continuously by a distributed rain gauge network are instrumental to more effective hydro-geological risk forecasting and management services, though the estimated rainfall fields used as input suffer from prediction uncertainty. Optimal rain gauge networks can generate accurate estimated rainfall fields. In this research work, a methodology has been investigated for evaluating an optimal rain gauge network aimed at robust hydrogeological hazard investigations. The rain gauge network of the Sarno River basin (Southern Italy) has been evaluated by optimizing a two-objective function that maximizes the estimation accuracy and minimizes the total metering cost, through the variance reduction algorithm along with the climatological (time-invariant) variogram. This problem has been solved using an enumerative search algorithm that evaluates the exact Pareto front in efficient computational time.
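The two-objective enumerative search can be sketched as follows. The per-gauge variance values and the accuracy model are invented stand-ins for the variogram-based variance reduction used in the paper; only the enumerate-then-extract-Pareto-front structure is the point:

```python
from itertools import combinations

# Illustrative per-gauge estimation-variance proxies (lower is better)
variance = {'A': 9.0, 'B': 7.5, 'C': 6.0, 'D': 5.5}

candidates = []
for r in range(1, len(variance) + 1):
    for subset in combinations(variance, r):
        # toy accuracy model: best gauge's variance, reduced by network size
        v = min(variance[g] for g in subset) / len(subset) ** 0.5
        candidates.append((subset, v, float(r)))   # (network, variance, cost)

def dominated(p, pts):
    """p is dominated if some network is at least as good on both
    objectives and strictly better on one."""
    return any(q[1] <= p[1] and q[2] <= p[2] and (q[1] < p[1] or q[2] < p[2])
               for q in pts)

front = [p for p in candidates if not dominated(p, candidates)]
```

Because every candidate is scored, the Pareto front is exact; here every non-dominated network contains the best gauge 'D', from the single-gauge network up to the full set.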
Bridging the gap between finance and clinical operations with activity-based cost management.
Storfjell, J L; Jessup, S
1996-12-01
Activity-based cost management (ABCM) is an exciting management tool that links financial information with operations. By determining the costs of specific activities and processes, nurse managers can determine the true costs of services more accurately than with traditional cost accounting methods, and can then target processes for improvement and monitor them for change. The authors describe the ABCM process applied to nursing management situations.
Remote sensing in Iowa agriculture. [cropland inventory, soils, forestland, and crop diseases
NASA Technical Reports Server (NTRS)
Mahlstede, J. P. (Principal Investigator); Carlson, R. E.
1973-01-01
The author has identified the following significant results. Results include the estimation of forested and crop vegetation acreages using the ERTS-1 imagery. The methods used to achieve these estimates still require refinement, but the results appear promising. Practical applications would be directed toward achieving current land use inventories of these natural resources. These data are presently collected by sampling-type surveys. If ERTS-1 can observe this and area estimates can be determined accurately, then a step forward has been achieved. The cost-benefit relationship will have to be favorable. Problems still exist in these estimation techniques due to the diversity of the scene observed in the ERTS-1 imagery covering other parts of Iowa. This is due to the influence of topography and soils upon the adaptability of the vegetation to specific areas of the state. The state mosaic produced from ERTS-1 imagery shows these patterns very well. Research directed to acreage estimates is continuing.
Estimation of Population Number via Light Activities on Night-Time Satellite Images
NASA Astrophysics Data System (ADS)
Turan, M. K.; Yücer, E.; Sehirli, E.; Karaş, İ. R.
2017-11-01
Estimation and accurate assessment of population is becoming harder day by day due to the fast growth of the world population. Estimating settlement tendencies in cities and countries, socio-cultural development and population numbers is quite difficult, as is the selection and analysis of parameters such as time, work-force and cost. In this study, population number is estimated by evaluating light activities in İstanbul via night-time images of Turkey. By evaluating light activities between 2000 and 2010, the average population per pixel is obtained. It is then used to estimate population numbers in 2011, 2012 and 2013. Mean errors are 4.14 % for 2011, 3.74 % for 2012 and 3.04 % for 2013. With the developed thresholding method, a mean error of 3.64 % is obtained in estimating the population number of İstanbul for the next three years.
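The per-pixel calibration step described above reduces to simple arithmetic. The pixel counts and census figures below are invented for illustration, not the study's data:

```python
# year -> (lit pixels in night-time image, census population) — illustrative
history = {
    2008: (120_000, 12_600_000),
    2009: (124_000, 12_900_000),
    2010: (128_000, 13_100_000),
}

# Calibrate: average population represented by one lit pixel
pop_per_pixel = sum(p / n for n, p in history.values()) / len(history)

# Predict a later year from its lit-pixel count alone, then score it
predicted_2011 = pop_per_pixel * 131_000        # assumed 2011 pixel count
census_2011 = 13_400_000                        # assumed ground truth
error_pct = abs(predicted_2011 - census_2011) / census_2011 * 100
```

With these made-up numbers the prediction lands within about 1.5 % of the census figure, the same order of error the study reports for İstanbul.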
Predictive value of 3-month lumbar discectomy outcomes in the NeuroPoint-SD Registry.
Whitmore, Robert G; Curran, Jill N; Ali, Zarina S; Mummaneni, Praveen V; Shaffrey, Christopher I; Heary, Robert F; Kaiser, Michael G; Asher, Anthony L; Malhotra, Neil R; Cheng, Joseph S; Hurlbert, John; Smith, Justin S; Magge, Subu N; Steinmetz, Michael P; Resnick, Daniel K; Ghogawala, Zoher
2015-10-01
The authors have established a multicenter registry to assess the efficacy and costs of common lumbar spinal procedures using prospectively collected outcomes. Collection of these data requires an extensive commitment of resources from each site. The aim of this study was to determine whether outcomes data from shorter-interval follow-up could be used to accurately estimate long-term outcome following lumbar discectomy. An observational prospective cohort study was completed at 13 academic and community sites. Patients undergoing single-level lumbar discectomy for treatment of disc herniation were included. SF-36 and Oswestry Disability Index (ODI) data were obtained preoperatively and at 1, 3, 6, and 12 months postoperatively. Quality-adjusted life year (QALY) data were calculated using SF-6D utility scores. Correlations among outcomes at each follow-up time point were tested using the Spearman rank correlation test. One hundred forty-eight patients were enrolled over 1 year. Their mean age was 46 years (49% female). Eleven patients (7.4%) required a reoperation by 1 year postoperatively. The overall 1-year follow-up rate was 80.4%. Lumbar discectomy was associated with significant improvements in ODI and SF-36 scores (p < 0.0001) and with a gain of 0.246 QALYs over the 1-year study period. The greatest gain occurred between baseline and 3-month follow-up and was significantly greater than improvements obtained between 3 and 6 months or 6 months and 1 year (p < 0.001). Correlations between 3-month, 6-month, and 1-year outcomes were similar, suggesting that 3-month data may be used to accurately estimate 1-year outcomes for patients who do not require a reoperation. Patients who underwent reoperation had worse outcomes scores and nonsignificant correlations at all time points. This national spine registry demonstrated successful collection of high-quality outcomes data for spinal procedures in actual practice.
Three-month outcome data may be used to accurately estimate outcome at future time points and may lower costs associated with registry data collection. This registry effort provides a practical foundation for the acquisition of outcome data following lumbar discectomy.
Type 2 diabetes in Vietnam: a cross-sectional, prevalence-based cost-of-illness study.
Le, Nguyen Tu Dang; Dinh Pham, Luyen; Quang Vo, Trung
2017-01-01
According to the International Diabetes Federation, total global health care expenditures for diabetes tripled between 2003 and 2013 because of increases in the number of people with diabetes as well as in the average expenditures per patient. This study aims to provide accurate and timely information about the economic impacts of type 2 diabetes mellitus (T2DM) in Vietnam. The cost-of-illness estimates followed a prospective, prevalence-based approach from the societal perspective of T2DM with 392 selected diabetic patients who received treatment from a public hospital in Ho Chi Minh City, Vietnam, during the 2016 fiscal year. In this study, the annual cost per patient estimate was US $246.10 (95% CI 228.3, 267.2) for 392 patients, which accounted for about 12% (95% CI 11, 13) of the gross domestic product per capita in 2017. That includes US $127.30, US $34.40 and US $84.40 for direct medical costs, direct nonmedical expenditures, and indirect costs, respectively. The cost of pharmaceuticals accounted for the bulk of total expenditures in our study (27.5% of total costs and 53.2% of direct medical costs). A bootstrap analysis showed that female patients had a higher cost of treatment than men at US $48.90 (95% CI 3.1, 95.0); those who received insulin and oral antidiabetics (OAD) also had a statistically significant higher cost of treatment compared to those receiving OAD, US $445.90 (95% CI 181.2, 690.6). The Gradient Boosting Regression (Ensemble method) and Lasso Regression (Generalized Linear Models) were determined to be the best models to predict the cost of T2DM (R² = 65.3, mean square error [MSE] = 0.94; and R² = 64.75, MSE = 0.96, respectively). The findings of this study serve as a reference for policy decision making in diabetes management as well as adjustment of costs for patients in order to reduce the economic impact of the disease.
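A bootstrap comparison like the one reported above can be sketched as follows, using synthetic gamma-distributed costs rather than the study's patient records:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic right-skewed annual costs for 392 "patients" (US$, illustrative)
costs = rng.gamma(shape=2.0, scale=120.0, size=392)

# Resample with replacement and collect the mean of each resample
boot_means = [rng.choice(costs, size=costs.size, replace=True).mean()
              for _ in range(2000)]

# Percentile bootstrap 95% confidence interval for the mean annual cost
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
```

The same resampling applied to the difference between two subgroups (e.g. female minus male mean cost) yields the kind of interval the study reports; an interval excluding zero indicates a statistically significant cost difference.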
Low-Cost MEMS Sensors and Vision System for Motion and Position Estimation of a Scooter
Guarnieri, Alberto; Pirotti, Francesco; Vettore, Antonio
2013-01-01
The possibility to identify with significant accuracy the position of a vehicle in a mapping reference frame for driving directions and best-route analysis is a topic which is attracting a lot of interest from the research and development sector. To reach the objective of accurate vehicle positioning and integrate response events, it is necessary to estimate position, orientation and velocity of the system with high measurement rates. In this work we test a system which uses low-cost sensors, based on Micro Electro-Mechanical Systems (MEMS) technology, coupled with information derived from a video camera placed on a two-wheel motor vehicle (scooter). In comparison to a four-wheel vehicle, the dynamics of a two-wheel vehicle feature a higher level of complexity given that more degrees of freedom must be taken into account. For example a motorcycle can twist sideways, thus generating a roll angle. A slight pitch angle has to be considered as well, since wheel suspensions have a higher degree of motion compared to four-wheel motor vehicles. In this paper we present a method for the accurate reconstruction of the trajectory of a “Vespa” scooter, which can be used as alternative to the “classical” approach based on GPS/INS sensor integration. Position and orientation of the scooter are obtained by integrating MEMS-based orientation sensor data with digital images through a cascade of a Kalman filter and a Bayesian particle filter. PMID:23348036
NASA Astrophysics Data System (ADS)
Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried
2017-09-01
Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth by only using monocular images. In this paper we combine the best of the two worlds, focusing on a combination of monocular images and low-cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low-cost camera and LiDAR setup.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wild, M.; Rouhani, S.
1995-02-01
A typical site investigation entails extensive sampling and monitoring. In the past, sampling plans have been designed on purely ad hoc bases, leading to significant expenditures and, in some cases, collection of redundant information. In many instances, sampling costs exceed the true worth of the collected data. The US Environmental Protection Agency (EPA) therefore has advocated the use of geostatistics to provide a logical framework for sampling and analysis of environmental data. Geostatistical methodology uses statistical techniques for the spatial analysis of a variety of earth-related data. The use of geostatistics was developed by the mining industry to estimate ore concentrations. The same procedure is effective in quantifying environmental contaminants in soils for risk assessments. Unlike classical statistical techniques, geostatistics offers procedures to incorporate the underlying spatial structure of the investigated field. Sample points spaced close together tend to be more similar than samples spaced further apart. This can guide sampling strategies and determine complex contaminant distributions. Geostatistical techniques can be used to evaluate site conditions on the basis of regular, irregular, random and even spatially biased samples. In most environmental investigations, it is desirable to concentrate sampling in areas of known or suspected contamination. The rigorous mathematical procedures of geostatistics allow for accurate estimates at unsampled locations, potentially reducing sampling requirements. The use of geostatistics serves as a decision-aiding and planning tool and can significantly reduce short-term site assessment costs, long-term sampling and monitoring needs, as well as lead to more accurate and realistic remedial design criteria.
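The observation that close samples are more similar than distant ones is quantified by the empirical semivariogram, the basic geostatistical tool alluded to above. A minimal sketch on a synthetic 1-D transect (all data simulated, not site measurements):

```python
import numpy as np

rng = np.random.default_rng(3)
coords = np.arange(0.0, 50.0, 1.0)
# Spatially correlated field: smooth trend plus small measurement noise
values = np.sin(coords / 8.0) + rng.normal(0.0, 0.05, coords.size)

def semivariogram(h, tol=0.5):
    """Empirical semivariance: half the mean squared difference between
    all sample pairs whose separation is within tol of lag h."""
    pairs = [(values[i] - values[j]) ** 2
             for i in range(coords.size)
             for j in range(i + 1, coords.size)
             if abs(abs(coords[i] - coords[j]) - h) <= tol]
    return 0.5 * float(np.mean(pairs))

gamma_short = semivariogram(1.0)    # nearby pairs: similar values
gamma_long = semivariogram(20.0)    # distant pairs: dissimilar values
```

The semivariance grows with lag distance; fitting a model to this curve is what lets kriging interpolate at unsampled locations with quantified uncertainty, which is the basis for trimming redundant sampling.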
NASA Astrophysics Data System (ADS)
Verhoeven, G.; Wieser, M.; Briese, C.; Doneus, M.
2013-07-01
Since manned, airborne aerial reconnaissance for archaeological purposes is often characterised by more-or-less random photographing of archaeological features on the Earth, the exact position and orientation of the camera during image acquisition becomes very important in an effective inventorying and interpretation workflow of these aerial photographs. Although the positioning is generally achieved by simultaneously logging the flight path or directly recording the camera's position with a GNSS receiver, this approach does not allow to record the necessary roll, pitch and yaw angles of the camera. The latter are essential elements for the complete exterior orientation of the camera, which allows - together with the inner orientation of the camera - to accurately define the portion of the Earth recorded in the photograph. This paper proposes a cost-effective, accurate and precise GNSS/IMU solution (image position: 2.5 m and orientation: 2°, both at 1σ) to record all essential exterior orientation parameters for the direct georeferencing of the images. After the introduction of the utilised hardware, this paper presents the developed software that allows recording and estimating these parameters. Furthermore, this direct georeferencing information can be embedded into the image's metadata. Subsequently, the first results of the estimation of the mounting calibration (i.e. the misalignment between the camera and GNSS/IMU coordinate frame) are provided. Furthermore, a comparison with a dedicated commercial photographic GNSS/IMU solution will prove the superiority of the introduced solution. Finally, an outlook on future tests and improvements finalises this article.
Tong, Qiaoling; Chen, Chen; Zhang, Qiao; Zou, Xuecheng
2015-01-01
To realize accurate current control for a boost converter, a precise measurement of the inductor current is required to achieve high resolution current regulating. Current sensors are widely used to measure the inductor current. However, the current sensors and their processing circuits significantly contribute extra hardware cost, delay and noise to the system. They can also harm the system reliability. Therefore, current sensorless control techniques can bring cost effective and reliable solutions for various boost converter applications. According to the derived accurate model, which contains a number of parasitics, the boost converter is a nonlinear system. An Extended Kalman Filter (EKF) is proposed for inductor current estimation and output voltage filtering. With this approach, the system can have the same advantages as sensored current control mode. To implement EKF, the load value is necessary. However, the load may vary from time to time. This can lead to errors of current estimation and filtered output voltage. To solve this issue, a load variation effect elimination (LVEE) module is added. In addition, a predictive average current controller is used to regulate the current. Compared with conventional voltage controlled system, the transient response is greatly improved since it only takes two switching cycles for the current to reach its reference. Finally, experimental results are presented to verify the stable operation and output tracking capability for large-signal transients of the proposed algorithm. PMID:25928061
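The EKF predict/correct cycle can be sketched generically. The scalar model below (near-constant hidden state, nonlinear measurement z = x²) is purely illustrative and is not the boost-converter model with parasitics derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = 2.0                 # hidden quantity to estimate (illustrative)
x_est, P = 0.0, 10.0         # initial estimate and its variance
Q, R = 1e-4, 0.05            # process and measurement noise variances

for _ in range(300):
    # Predict: state modelled as nearly constant, so only variance grows
    P += Q
    # Nonlinear measurement h(x) = x**2, corrupted by noise
    z = x_true ** 2 + rng.normal(0.0, R ** 0.5)
    # EKF linearization: Jacobian of h at the current estimate
    H = 2.0 * x_est if x_est != 0.0 else 1e-3   # avoid a zero Jacobian
    K = P * H / (H * P * H + R)                 # Kalman gain
    x_est += K * (z - x_est ** 2)               # correct with innovation
    P *= (1.0 - K * H)                          # update estimate variance
```

After a few hundred updates the estimate settles close to the true value even though only the squared state is observed, which is the essence of estimating an unmeasured current from other measurable signals.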
Unsupervised Indoor Localization Based on Smartphone Sensors, iBeacon and Wi-Fi.
Chen, Jing; Zhang, Yi; Xue, Wei
2018-04-28
In this paper, we propose UILoc, an unsupervised indoor localization scheme that uses a combination of smartphone sensors, iBeacons and Wi-Fi fingerprints for reliable and accurate indoor localization with zero labor cost. Firstly, compared with the fingerprint-based method, the UILoc system can build a fingerprint database automatically without any site survey and the database will be applied in the fingerprint localization algorithm. Secondly, since the initial position is vital to the system, UILoc will provide the basic location estimation through the pedestrian dead reckoning (PDR) method. To provide accurate initial localization, this paper proposes an initial localization module, a weighted fusion algorithm combined with a k-nearest neighbors (KNN) algorithm and a least squares algorithm. In UILoc, we have also designed a reliable model to reduce the landmark correction error. Experimental results show that the UILoc can provide accurate positioning, the average localization error is about 1.1 m in the steady state, and the maximum error is 2.77 m.
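A weighted k-nearest-neighbours fingerprint lookup, in the spirit of the initial-localization step described above, might look like the sketch below. The three-beacon RSSI database is a toy assumption; UILoc's actual module additionally fuses a least-squares estimate and PDR:

```python
import math

# Toy fingerprint database: (x, y) position -> RSSI from three beacons (dBm)
db = [
    ((0.0, 0.0), (-40, -70, -80)),
    ((5.0, 0.0), (-70, -42, -78)),
    ((0.0, 5.0), (-72, -75, -41)),
    ((5.0, 5.0), (-68, -60, -55)),
]

def locate(reading, k=3):
    """Inverse-distance-weighted average of the k closest fingerprints
    in signal space."""
    nearest = sorted((math.dist(reading, fp), pos) for pos, fp in db)[:k]
    weights = [1.0 / (d + 1e-6) for d, _ in nearest]
    total = sum(weights)
    x = sum(w * pos[0] for w, (_, pos) in zip(weights, nearest)) / total
    y = sum(w * pos[1] for w, (_, pos) in zip(weights, nearest)) / total
    return x, y

est = locate((-41, -69, -79))   # reading taken close to the first position
```

Because the weights fall off with signal-space distance, the estimate is pulled strongly toward the fingerprint that nearly matches the live reading.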
NASA Astrophysics Data System (ADS)
Beijen, Michiel A.; Voorhoeve, Robbert; Heertjes, Marcel F.; Oomen, Tom
2018-07-01
Vibration isolation is essential for industrial high-precision systems to suppress external disturbances. The aim of this paper is to develop a general identification approach to estimate the frequency response function (FRF) of the transmissibility matrix, which is a key performance indicator for vibration isolation systems. The major challenge lies in obtaining a good signal-to-noise ratio in view of a large system weight. A non-parametric system identification method is proposed that combines floor and shaker excitations. Furthermore, a method is presented to analyze the input power spectrum of the floor excitations, both in terms of magnitude and direction. In turn, the input design of the shaker excitation signals is investigated to obtain sufficient excitation power in all directions with minimum experiment cost. The proposed methods are shown to provide an accurate FRF of the transmissibility matrix in three relevant directions on an industrial active vibration isolation system over a large frequency range. This demonstrates that, despite their heavy weight, industrial vibration isolation systems can be accurately identified using this approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Zhang, Yingchen
This paper proposes an approach for distribution system state forecasting, which aims to provide an accurate and high speed state forecasting with an optimal synchrophasor sensor placement (OSSP) based state estimator and an extreme learning machine (ELM) based forecaster. Specifically, considering the sensor installation cost and measurement error, an OSSP algorithm is proposed to reduce the number of synchrophasor sensors and keep the whole distribution system numerically and topologically observable. Then, the weighted least square (WLS) based system state estimator is used to produce the training data for the proposed forecaster. Traditionally, the artificial neural network (ANN) and support vector regression (SVR) are widely used in forecasting due to their nonlinear modeling capabilities. However, the ANN contains heavy computation load and the best parameters for SVR are difficult to obtain. In this paper, the ELM, which overcomes these drawbacks, is used to forecast the future system states with the historical system states. The proposed approach is effective and accurate based on the testing results.
Economic impact of medication non-adherence by disease groups: a systematic review.
Cutler, Rachelle Louise; Fernandez-Llimos, Fernando; Frommer, Michael; Benrimoj, Charlie; Garcia-Cardenas, Victoria
2018-01-21
To determine the economic impact of medication non-adherence across multiple disease groups. Systematic review. A comprehensive literature search was conducted in PubMed and Scopus in September 2017. Studies quantifying the cost of medication non-adherence in relation to economic impact were included. Relevant information was extracted and quality assessed using the Drummond checklist. Seventy-nine individual studies assessing the cost of medication non-adherence across 14 disease groups were included. Wide-ranging cost variations were reported, with lower levels of adherence generally associated with higher total costs. The annual adjusted disease-specific economic cost of non-adherence per person ranged from $949 to $44 190 (in 2015 US$). Costs attributed to 'all causes' non-adherence ranged from $5271 to $52 341. Medication possession ratio was the metric most used to calculate patient adherence, with varying cut-off points defining non-adherence. The main indicators used to measure the cost of non-adherence were total cost or total healthcare cost (83% of studies), pharmacy costs (70%), inpatient costs (46%), outpatient costs (50%), emergency department visit costs (27%), medical costs (29%) and hospitalisation costs (18%). Drummond quality assessment yielded 10 studies of high quality, with all studies performing partial economic evaluations to varying extents. Medication non-adherence places a significant cost burden on healthcare systems. Current research assessing the economic impact of medication non-adherence is limited and of varying quality, failing to provide adaptable data to influence health policy. The correlation between increased non-adherence and higher disease prevalence should be used to inform policymakers to help circumvent avoidable costs to the healthcare system. Differences in methods make the comparison among studies challenging and an accurate estimation of the true magnitude of the cost impossible.
Standardisation of the metric measures used to estimate medication non-adherence and development of a streamlined approach to quantify costs are required. CRD42015027338.
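The medication possession ratio flagged above as the most-used adherence metric is simply days of medication supplied divided by days in the observation period, with a study-dependent cut-off; 0.8 is a common choice but, as the review notes, not a universal one. A sketch with invented fill data:

```python
def mpr(days_supplied, days_in_period):
    """Medication possession ratio, commonly capped at 1.0 so that
    early refills do not count as >100% adherence."""
    return min(days_supplied / days_in_period, 1.0)

# Illustrative claims history: eight 30-day fills over a 365-day year
fills = [30, 30, 30, 30, 30, 30, 30, 30]
ratio = mpr(sum(fills), 365)
non_adherent = ratio < 0.8     # assumed cut-off; varies across studies
```

With 240 days covered out of 365, the ratio is about 0.66 and this patient would be classed as non-adherent under the 0.8 cut-off, but adherent under a 0.6 cut-off, which is exactly the standardisation problem the review raises.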
Equivalent Mass versus Life Cycle Cost for Life Support Technology Selection
NASA Technical Reports Server (NTRS)
Jones, Harry
2003-01-01
The decision to develop a particular life support technology or to select it for flight usually depends on the cost to develop and fly it. Other criteria such as performance, safety, reliability, crew time, and technical and schedule risk are considered, but cost is always an important factor. Because launch cost would account for much of the cost of a future planetary mission, and because launch cost is directly proportional to the mass launched, equivalent mass has been used instead of cost to select advanced life support technology. The equivalent mass of a life support system includes the estimated mass of the hardware and of the spacecraft pressurized volume, power supply, and cooling system that the hardware requires. The equivalent mass of a system is defined as the total payload launch mass needed to provide and support the system. An extension of equivalent mass, Equivalent System Mass (ESM), has been established for use in the Advanced Life Support project. ESM adds a mass-equivalent of crew time and possibly other cost factors to equivalent mass. Traditional equivalent mass is strictly based on flown mass and reflects only the launch cost. ESM includes other important cost factors, but it complicates the simple flown mass definition of equivalent mass by adding a non-physical mass penalty for crew time that may exceed the actual flown mass. Equivalent mass is used only in life support analysis. Life Cycle Cost (LCC) is much more commonly used. LCC includes DDT&E, launch, and operations costs. For Earth orbit rather than planetary missions, the launch cost is less than the cost of Design, Development, Test, and Evaluation (DDT&E). LCC is a more inclusive cost estimator than equivalent mass. The relative costs of development, launch, and operations vary depending on the mission destination and duration. Since DDT&E or operations may cost more than launch, LCC gives a more accurate relative cost ranking than equivalent mass.
To select the lowest cost technology for a particular application we should use LCC rather than equivalent mass.
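The ESM bookkeeping described above can be written as a one-line sum of mass-equivalents. The equivalency factors and hardware numbers below are placeholders for illustration, not the Advanced Life Support project's published values:

```python
def esm(mass_kg, volume_m3, power_kw, cooling_kw, crew_hr_per_yr,
        v_eq=66.7, p_eq=237.0, c_eq=60.0, ct_eq=0.5):
    """Equivalent System Mass: hardware mass plus mass-equivalents of the
    infrastructure and crew time it requires (all factors illustrative)."""
    return (mass_kg
            + volume_m3 * v_eq         # kg per m^3 of pressurized volume
            + power_kw * p_eq          # kg per kW of power supply
            + cooling_kw * c_eq        # kg per kW of heat rejection
            + crew_hr_per_yr * ct_eq)  # kg per crew-hour per year

# Hypothetical life support subsystem
total = esm(mass_kg=150, volume_m3=2.0, power_kw=1.5,
            cooling_kw=1.5, crew_hr_per_yr=100)
```

Note how the non-hardware terms (here about 630 kg-equivalent) can dwarf the 150 kg of flown hardware, which is the article's point about ESM departing from a literal flown-mass measure, and why LCC, which also prices DDT&E and operations, can rank options differently.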
Bromage, Erin S; Vadas, George G; Harvey, Ellen; Unger, Michael A; Kaattari, Stephen L
2007-10-15
Nitroaromatics are common pollutants of soil and groundwater at military installations because of their manufacture, storage, and use at these sites. Long-term monitoring of these pollutants comprises a significant percentage of restoration costs. Further, remediation activities often have to be delayed while the samples are processed via traditional chemical assessment protocols. Here we describe a rapid (<5 min), cost-effective, accurate method using a KinExA Inline Biosensor for monitoring of 2,4,6-trinitrotoluene (TNT) in field water samples. The biosensor, which is based on KinExA technology, accurately estimated the concentration of TNT in double-blind comparisons with similar accuracy to traditional high-performance liquid chromatography (HPLC). In the assessment of field samples, the biosensor accurately predicted the concentration of TNT over the range of 1-30,000 microg/L when compared to either HPLC or quantitative gas chromatography-mass spectrometry (GC-MS). Various pre-assessment techniques were explored to examine whether field samples could be assessed untreated, without the removal of particulates or the use of solvents. In most cases, the KinExA Inline Biosensor gave a uniform assessment of TNT concentration independent of pretreatment method. This indicates that this sensor possesses significant promise for rapid, on-site assessment of TNT pollution in environmental water samples.
The Cost of Accumulating Evidence in Perceptual Decision Making
Drugowitsch, Jan; Moreno-Bote, Rubén; Churchland, Anne K.; Shadlen, Michael N.; Pouget, Alexandre
2012-01-01
Decision making often involves the accumulation of information over time, but acquiring information typically comes at a cost. Little is known about the cost incurred by animals and humans for acquiring additional information from sensory variables, due, for instance, to attentional efforts. Through a novel integration of diffusion models and dynamic programming, we were able to estimate the cost of making additional observations per unit of time from two monkeys and six humans in a reaction time random dot motion discrimination task. Surprisingly, we find that the cost is neither zero nor constant over time; for both the animals and the humans it is constant over a brief initial period and increases thereafter. In addition, we show that our theory accurately matches the observed reaction time distributions for each stimulus condition, the time-dependent choice accuracy both conditional on stimulus strength and independent of it, and choice accuracy and mean reaction times as a function of stimulus strength. The theory also correctly predicts that urgency signals in the brain should be independent of the difficulty, or stimulus strength, at each trial. PMID:22423085
Basu, Sanjay; Landon, Bruce E; Song, Zirui; Bitton, Asaf; Phillips, Russell S
2015-02-01
Primary care practice transformations require tools for policymakers and practice managers to understand the financial implications of workforce and reimbursement changes. To create a simulation model to understand how practice utilization, revenues, and expenses may change in the context of workforce and financing changes. We created a simulation model estimating clinic-level utilization, revenues, and expenses using user-specified or public input data detailing practice staffing levels, salaries and overhead expenditures, patient characteristics, clinic workload, and reimbursements. We assessed whether the model could accurately estimate clinic utilization, revenues, and expenses across the nation using labor compensation, medical expenditure, and reimbursement databases, as well as cost and revenue data from independent practices of varying size. We demonstrated the model's utility in a simulation of how utilization, revenue, and expenses would change after hiring a nurse practitioner (NP) compared with hiring a part-time physician. Modeled practice utilization and revenue closely matched independent national utilization and reimbursement data, disaggregated by patient age, sex, race/ethnicity, insurance status, and ICD diagnostic group; the model was able to reproduce independent revenue and cost estimates, with highest accuracy among larger practices. A demonstration analysis revealed that hiring an NP to work independently with a subset of patients diagnosed with diabetes or hypertension could increase net revenues, if NP visits involve limited MD consultation or if NP reimbursement rates increase. A model of utilization, revenue, and expenses in primary care practices may help policymakers and managers understand the implications of workforce and financing changes.
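A toy version of the clinic-level bookkeeping such a model performs might look like the following. All figures are invented illustrative inputs, not the model's calibrated values, and the function is hypothetical:

```python
# Hypothetical sketch: the net revenue change from adding a clinician is the
# added visit revenue minus compensation and overhead. Every number below is
# an invented illustration, not a calibrated input from the study's model.

def net_revenue_change(extra_visits, reimbursement, salary, overhead_rate):
    revenue = extra_visits * reimbursement          # added annual visit revenue
    expense = salary * (1 + overhead_rate)          # compensation plus overhead
    return revenue - expense

np_case = net_revenue_change(3000, 80, 110_000, 0.3)   # nurse practitioner
md_case = net_revenue_change(2200, 80, 120_000, 0.3)   # part-time physician
print(np_case, md_case)
```

The real model disaggregates visits by patient characteristics and payer, but the accounting identity above is the core comparison.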
Improved arrival-date estimates of Arctic-breeding Dunlin (Calidris alpina arcticola)
Doll, Andrew C.; Lanctot, Richard B.; Stricker, Craig A.; Yezerinac, Stephen M.; Wunder, Michael B.
2015-01-01
The use of stable isotopes in animal ecology depends on accurate descriptions of isotope dynamics within individuals. The prevailing assumption that laboratory-derived isotopic parameters apply to free-living animals is largely untested. We used stable carbon isotopes (δ13C) in whole blood from migratory Dunlin (Calidris alpina arcticola) to estimate an in situ turnover rate and individual diet-switch dates. Our in situ results indicated that turnover rates were higher in free-living birds, in comparison to the results of an experimental study on captive Dunlin and estimates derived from a theoretical allometric model. Diet-switch dates from all 3 methods were then used to estimate arrival dates to the Arctic; arrival dates calculated with the in situ turnover rate were later than those with the other turnover-rate estimates, substantially so in some cases. These later arrival dates matched dates when local snow conditions would have allowed Dunlin to settle, and agreed with anticipated arrival dates of Dunlin tracked with light-level geolocators. Our study presents a novel method for accurately estimating arrival dates for individuals of migratory species in which return dates are difficult to document. This may be particularly appropriate for species in which extrinsic tracking devices cannot easily be employed because of cost, body size, or behavioral constraints, and in habitats that do not allow individuals to be detected easily upon first arrival. Thus, this isotopic method offers an exciting alternative approach to better understand how species may be altering their arrival dates in response to changing climatic conditions.
Real-time optical flow estimation on a GPU for a skid-steered mobile robot
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2016-04-01
Accurate egomotion estimation is required for mobile robot navigation, and egomotion is often estimated using optical flow algorithms. For accurate estimation of optical flow, however, most modern algorithms require substantial memory and processor speed, resources that the simple single-board computers controlling a robot's motion usually do not provide. On the other hand, most modern single-board computers are equipped with an embedded GPU that can be used in parallel with the CPU to improve the performance of optical flow estimation. This paper presents a new Z-flow algorithm for efficient computation of optical flow using an embedded GPU. The algorithm is based on phase correlation optical flow estimation and provides real-time performance on a low-cost embedded GPU. A layered optical flow model is used, with layer segmentation performed by a graph-cut algorithm using a time-derivative-based energy function. This approach makes the algorithm both fast and robust in low-light and low-texture conditions. An implementation of the algorithm for a Raspberry Pi Model B computer is discussed. For evaluation, the computer was mounted on a Hercules skid-steered mobile robot equipped with a monocular camera, and the evaluation was performed using hardware-in-the-loop simulation and experiments with the robot. The algorithm was also evaluated on the KITTI Optical Flow 2015 dataset. The resulting endpoint error of the optical flow calculated with the developed algorithm was low enough for navigation of the robot along the desired trajectory.
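The phase-correlation core that such an algorithm builds on can be sketched in a few lines. This is a generic NumPy illustration of the technique, not the Z-flow GPU implementation:

```python
import numpy as np

np.random.seed(0)

def phase_correlation(a, b):
    """Estimate the integer (dy, dx) circular shift that maps frame b onto a."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.real(np.fft.ifft2(cross))      # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices to signed shifts
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return int(dy), int(dx)

frame = np.random.rand(64, 64)
shifted = np.roll(frame, (3, -5), axis=(0, 1))
print(phase_correlation(shifted, frame))     # → (3, -5)
```

Per-layer application of this estimator over image blocks, plus graph-cut segmentation, is what turns the idea into a dense layered flow field.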
Investing in breastfeeding - the world breastfeeding costing initiative.
Holla-Bhar, Radha; Iellamo, Alessandro; Gupta, Arun; Smith, Julie P; Dadhich, Jai Prakash
2015-01-01
Despite scientific evidence substantiating the importance of breastfeeding in child survival and development and its economic benefits, assessments show gaps in many countries' implementation of the 2003 WHO and UNICEF Global Strategy for Infant and Young Child Feeding (Global Strategy). Optimal breastfeeding is a particular example: initiation of breastfeeding within the first hour of birth; exclusive breastfeeding for the first six months; and continued breastfeeding for two years or more, together with safe, adequate, appropriate, responsive complementary feeding starting in the sixth month. While the understanding of "optimal" may vary among countries, there is a need for governments to facilitate an enabling environment for women to achieve optimal breastfeeding. Lack of financial resources for key programs is a major impediment, making economic perspectives important for implementation. Globally, while achieving optimal breastfeeding could prevent more than 800,000 under-five deaths annually, in 2013 US$58 billion was spent on commercial baby food including milk formula. Support for improved breastfeeding is inadequately prioritized by policy and practice internationally. The World Breastfeeding Costing Initiative (WBCi), launched in 2013, attempts to determine the financial investment that is necessary to implement the Global Strategy, and to introduce a tool to estimate the costs for individual countries. The article presents detailed cost estimates for implementing the Global Strategy, and outlines the WBCi Financial Planning Tool. Estimates use demographic data from UNICEF's State of the World's Children 2013. The WBCi takes a programmatic approach to scaling up interventions, including policy and planning, health and nutrition care systems, community services and mother support, media promotion, maternity protection, WHO International Code of Marketing of Breastmilk Substitutes implementation, monitoring and research, for optimal breastfeeding practices.
The financial cost of a program to implement the Global Strategy in 214 countries is estimated at US $17.5 billion ($130 per live birth). The major recurring cost is maternity entitlements. WBCi is a policy advocacy initiative to encourage integrated actions that enable breastfeeding. WBCi will help countries plan and prioritize actions and budget them accurately. International agencies and donors can also use the tool to calculate or track investments in breastfeeding.
Shen, Bo; Shermock, Kenneth M; Fazio, Victor W; Achkar, Jean-Paul; Brzezinski, Aaron; Bevins, Charles L; Bambrick, Marlene L; Remzi, Feza H; Lashner, Bret A
2003-11-01
Pouchitis is often diagnosed based on symptoms and empirically treated with antibiotics (treat-first strategy). However, symptom assessment alone is not reliable for diagnosis, and an initial evaluation with pouch endoscopy (test-first strategy) has been shown to be more accurate. Cost-effectiveness of these strategies has not been compared. The aim of this study was to compare cost-effectiveness of different clinical approaches for patients with symptoms suggestive of pouchitis. Pouchitis was defined as pouchitis disease activity index scores > or =7. The frequency of pouchitis in symptomatic patients with ileal pouch was estimated to be 51%; the efficacy for initial therapy with metronidazole (MTZ) and ciprofloxacin (CIP) was 75% and 85%, respectively. Cost estimates were obtained from Medicare reimbursement data. Six competing strategies (MTZ trial, CIP trial, MTZ-then-CIP trial, CIP-then-MTZ trial, pouch endoscopy with biopsy, and pouch endoscopy without biopsy) were modeled in a decision tree. Costs per correct diagnosis with appropriate treatment were $194 for MTZ trial, $279 for CIP trial, $208 for MTZ-then-CIP trial, $261 for CIP-then-MTZ trial, $352 for pouch endoscopy with biopsy, and $243 for pouch endoscopy without biopsy. Of the two strategies with the lowest cost, the pouch endoscopy without biopsy strategy costs $50 more per patient than the MTZ trial strategy but results in an additional 15 days for early diagnosis and thus initiation of appropriate treatment (incremental cost-effectiveness ratio $3 per additional day gained). The results of base-case analysis were robust in sensitivity analyses. Although the MTZ-trial strategy had the lowest cost, the pouch endoscopy without biopsy strategy was most cost-effective. 
Therefore, based on its relatively low cost and the avoidance of both diagnostic delay and adverse effects associated with unnecessary antibiotics, pouch endoscopy without biopsy is the recommended strategy among those tested for the diagnosis of pouchitis.
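The incremental comparison reported above can be reproduced arithmetically from the costs given in the abstract (small rounding differences from the published figures are expected):

```python
# Cost per correct diagnosis with appropriate treatment, from the abstract
cost = {"MTZ trial": 194, "CIP trial": 279, "MTZ-then-CIP": 208,
        "CIP-then-MTZ": 261, "endoscopy + biopsy": 352,
        "endoscopy, no biopsy": 243}

# Endoscopy without biopsy vs the cheapest strategy (MTZ trial):
extra_cost = cost["endoscopy, no biopsy"] - cost["MTZ trial"]  # ≈ $50
days_gained = 15                    # earlier diagnosis, per the abstract
icer = extra_cost / days_gained     # ≈ $3 per additional day gained
print(extra_cost, round(icer, 2))
```

This is the same incremental cost-effectiveness ratio logic the decision-tree model applies across all six strategies.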
Ye, Yalan; He, Wenwen; Cheng, Yunfei; Huang, Wenxia; Zhang, Zhilin
2017-02-16
The estimation of heart rate (HR) with wearable devices is of interest in fitness applications. Photoplethysmography (PPG) is a promising approach to estimating HR because of its low cost; however, the signal is easily corrupted by motion artifacts (MA). In this work, a robust two-stage approach based on random forests is proposed for accurately estimating HR from PPG signals contaminated by intense motion artifacts. Stage 1 is a hybrid method that effectively removes MA with low computational complexity, combining two MA removal algorithms through an accurate binary decision algorithm that decides whether or not to apply the second MA removal algorithm. Stage 2 is a random forest-based spectral peak-tracking algorithm that locates the spectral peak corresponding to HR, formulating spectral peak tracking as a pattern classification problem. Experiments on the 22-subject PPG datasets used in the 2015 IEEE Signal Processing Cup showed that the proposed approach achieved an average absolute error of 1.65 beats per minute (BPM). Compared to state-of-the-art approaches, the proposed approach has better accuracy and robustness to intense motion artifacts, indicating its potential use in wearable sensors for health monitoring and fitness tracking.
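The spectral peak-tracking idea in Stage 2 can be illustrated with a simplified nearest-peak rule on synthetic data. The paper uses a random forest classifier for this step; the sampling rate, signal, and selection rule here are simplified stand-ins:

```python
import numpy as np

np.random.seed(0)
fs = 125.0                                  # assumed PPG sampling rate (Hz)
t = np.arange(0, 8, 1 / fs)                 # one 8 s analysis window
ppg = np.sin(2 * np.pi * 1.8 * t) + 0.3 * np.random.randn(t.size)  # ~108 BPM

spec = np.abs(np.fft.rfft(ppg)) ** 2        # periodogram of the window
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.7) & (freqs < 3.5)        # plausible HR range, 42-210 BPM

prev_bpm = 105.0                            # previous window's HR estimate
peaks = freqs[band][np.argsort(spec[band])[-5:]]   # 5 strongest candidates
hr = 60 * peaks[np.argmin(np.abs(60 * peaks - prev_bpm))]
print(round(float(hr), 1))
```

The classifier in the paper replaces the "closest to previous estimate" rule with learned features, which is what makes tracking robust when MA peaks outpower the cardiac peak.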
Xue, Xiaonan; Kim, Mimi Y; Castle, Philip E; Strickler, Howard D
2014-03-01
Studies to evaluate clinical screening tests often face the problem that the "gold standard" diagnostic approach is costly and/or invasive. It is therefore common to verify only a subset of negative screening tests using the gold standard method. However, undersampling the screen negatives can lead to substantial overestimation of the sensitivity and underestimation of the specificity of the diagnostic test. Our objective was to develop a simple and accurate statistical method to address this "verification bias." We developed a weighted generalized estimating equation approach to estimate, in a single model, the accuracy (eg, sensitivity/specificity) of multiple assays and simultaneously compare results between assays while addressing verification bias. This approach can be implemented using standard statistical software. Simulations were conducted to assess the proposed method. An example is provided using a cervical cancer screening trial that compared the accuracy of human papillomavirus and Pap tests, with histologic data as the gold standard. The proposed approach performed well in estimating and comparing the accuracy of multiple assays in the presence of verification bias. The proposed approach is an easy to apply and accurate method for addressing verification bias in studies of multiple screening methods. Copyright © 2014 Elsevier Inc. All rights reserved.
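The weighting principle behind the verification-bias correction can be demonstrated with a standalone inverse-probability-weighted estimator on simulated data. The paper embeds such weights in a generalized estimating equation; this sketch only shows why unweighted sensitivity estimates are biased when screen-negatives are under-verified:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
disease = rng.random(n) < 0.10
test_pos = np.where(disease, rng.random(n) < 0.90,   # true sensitivity 0.90
                             rng.random(n) < 0.05)   # false-positive rate 0.05

# Verify all screen-positives but only 20% of screen-negatives
p_verify = np.where(test_pos, 1.0, 0.2)
verified = rng.random(n) < p_verify
w = 1.0 / p_verify                                   # inverse-probability weight

mask = verified & disease
naive_sens = test_pos[mask].mean()                   # biased upward
ipw_sens = (w[mask] * test_pos[mask]).sum() / w[mask].sum()
print(round(naive_sens, 3), round(ipw_sens, 3))
```

Weighting each verified subject by the inverse of its verification probability restores the sensitivity toward the true 0.90, while the naive estimate overshoots.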
Major trauma: the unseen financial burden to trauma centres, a descriptive multicentre analysis.
Curtis, Kate; Lam, Mary; Mitchell, Rebecca; Dickson, Cara; McDonnell, Karon
2014-02-01
This research examines the existing funding model for in-hospital trauma patient episodes in New South Wales (NSW), Australia and identifies factors that cause above-average treatment costs. Accurate information on the treatment costs of injury is needed to guide health-funding strategy and prevent inadvertent underfunding of specialist trauma centres, which treat a high trauma casemix. Admitted trauma patient data provided by 12 trauma centres were linked with financial data for 2008-09. Actual costs incurred by each hospital were compared with state-wide Australian Refined Diagnostic Related Groups (AR-DRG) average costs. Patient episodes where actual cost was higher than the AR-DRG cost allocation were examined. There were 16,693 patients at a total cost of AU$178.7 million. The total costs incurred by trauma centres were $14.7 million above the NSW peer-group average cost estimates. There were 10 AR-DRGs where the total cost variance was greater than $500,000. The AR-DRGs with the largest proportion of patients were the upper limb injury categories, many of whom had multiple body regions injured and/or a traumatic brain injury (P<0.001). AR-DRG classifications do not adequately describe the trauma patient episode and are not commensurate with the expense of trauma treatment. A revision of the AR-DRGs used for trauma is needed. WHAT IS KNOWN ABOUT THIS TOPIC? Severely injured trauma patients often have multiple injuries, in more than one body region, and the determination of appropriate AR-DRGs can be difficult. Pilot research suggests that the AR-DRGs do not accurately represent the care that is required for these patients. WHAT DOES THIS PAPER ADD? This is the first multicentre analysis of treatment costs and coding variance for major trauma in Australia. This research identifies the limitations of the current AR-DRGs and those that are particularly problematic. The value of linking trauma registry and financial data within each trauma centre is demonstrated.
WHAT ARE THE IMPLICATIONS FOR PRACTITIONERS? Further work should be conducted between trauma services, clinical coding and finance departments to improve the accuracy of clinical coding, review funding models and ensure that AR-DRG allocation is commensurate with the expense of trauma treatment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michaelsen, Kelly; Krishnaswamy, Venkat; Pogue, Brian W.
2012-07-15
Purpose: Design optimization and phantom validation of an integrated digital breast tomosynthesis (DBT) and near-infrared spectral tomography (NIRST) system targeting improvement in sensitivity and specificity of breast cancer detection is presented. Factors affecting instrumentation design include minimization of cost, complexity, and examination time while maintaining high-fidelity NIRST measurements with sufficient information to recover accurate optical property maps. Methods: Reconstructed DBT slices from eight patients with abnormal mammograms provided anatomical information for the NIRST simulations. A limited frequency domain (FD) and extensive continuous wave (CW) NIRST system was modeled. The FD components provided tissue scattering estimations used in the reconstruction of the CW data. Scattering estimates were perturbed to study the effects on hemoglobin recovery. Breast-mimicking agar phantoms with inclusions were imaged using the combined DBT/NIRST system for comparison with simulation results. Results: Patient simulations derived from DBT images show successful reconstruction of both normal and malignant lesions in the breast. They also demonstrate the importance of accurately quantifying tissue scattering. Specifically, 20% errors in optical scattering resulted in 22.6% or 35.1% error in quantification of total hemoglobin concentrations, depending on whether scattering was over- or underestimated, respectively. Limited frequency-domain optical signal sampling provided scattering estimates for two regions (fat and fibroglandular tissue), which reduced the error in hemoglobin concentration in the tumor region by 31% relative to when a single estimate of optical scattering was used throughout the breast volume of interest. Acquiring frequency-domain data with six wavelengths instead of three did not significantly improve the hemoglobin concentration estimates.
Simulation results were confirmed through experiments in two-region breast-mimicking gelatin phantoms. Conclusions: Accurate characterization of scattering is necessary for quantification of hemoglobin. Based on this study, a system design is described to optimally combine breast tomosynthesis with NIRST.
NASA Technical Reports Server (NTRS)
Khorram, S.
1977-01-01
Results are presented of a study intended to develop a general, location-specific remote-sensing procedure for watershed-wide estimation of water loss to the atmosphere by evaporation and transpiration. The general approach involves a stepwise sequence of required information definition (input data), appropriate sample design, mathematical modeling, and evaluation of results. More specifically, the remote sensing-aided system developed to evaluate evapotranspiration employs a basic two-stage, two-phase sample of three information resolution levels. Based on the design, documentation, and feasibility analysis discussed, which indicate that the system can yield timely, relatively accurate, and cost-effective evapotranspiration estimates on a watershed or subwatershed basis, work is now proceeding to implement it.
Estimation of surface temperature in remote pollution measurement experiments
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1978-01-01
A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.
Seng, Piseth; Drancourt, Michel; Gouriet, Frédérique; La Scola, Bernard; Fournier, Pierre-Edouard; Rolain, Jean Marc; Raoult, Didier
2009-08-15
Matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometry accurately identifies both selected bacteria and bacteria in select clinical situations. It has not been evaluated for routine use in the clinic. We prospectively analyzed routine MALDI-TOF mass spectrometry identification in parallel with conventional phenotypic identification of bacteria regardless of phylum or source of isolation. Discrepancies were resolved by 16S ribosomal RNA and rpoB gene sequence-based molecular identification. Colonies (4 spots per isolate directly deposited on the MALDI-TOF plate) were analyzed using an Autoflex II Bruker Daltonik mass spectrometer. Peptidic spectra were compared with the Bruker BioTyper database, version 2.0, and the identification score was noted. Delays and costs of identification were measured. Of 1660 bacterial isolates analyzed, 95.4% were correctly identified by MALDI-TOF mass spectrometry; 84.1% were identified at the species level, and 11.3% were identified at the genus level. In most cases, absence of identification (2.8% of isolates) and erroneous identification (1.7% of isolates) were due to improper database entries. Accurate MALDI-TOF mass spectrometry identification was significantly correlated with having 10 reference spectra in the database (P=.01). The mean time required for MALDI-TOF mass spectrometry identification of 1 isolate was 6 minutes, at an estimated 22%-32% of the cost of current methods of identification. MALDI-TOF mass spectrometry is a cost-effective, accurate method for routine identification of bacterial isolates in <1 h using a database comprising ≥10 reference spectra per bacterial species and a 1.9 identification score (Bruker system). It may replace Gram staining and biochemical identification in the near future.
Koumbaris, George; Kypri, Elena; Tsangaras, Kyriakos; Achilleos, Achilleas; Mina, Petros; Neofytou, Maria; Velissariou, Voula; Christopoulou, Georgia; Kallikas, Ioannis; González-Liñán, Alicia; Benusiene, Egle; Latos-Bielenska, Anna; Marek, Pietryga; Santana, Alfredo; Nagy, Nikoletta; Széll, Márta; Laudanski, Piotr; Papageorgiou, Elisavet A; Ioannides, Marios; Patsalis, Philippos C
2016-06-01
There is great need for the development of highly accurate cost effective technologies that could facilitate the widespread adoption of noninvasive prenatal testing (NIPT). We developed an assay based on the targeted analysis of cell-free DNA for the detection of fetal aneuploidies of chromosomes 21, 18, and 13. This method enabled the capture and analysis of selected genomic regions of interest. An advanced fetal fraction estimation and aneuploidy determination algorithm was also developed. This assay allowed for accurate counting and assessment of chromosomal regions of interest. The analytical performance of the assay was evaluated in a blind study of 631 samples derived from pregnancies of at least 10 weeks of gestation that had also undergone invasive testing. Our blind study exhibited 100% diagnostic sensitivity and specificity and correctly classified 52/52 (95% CI, 93.2%-100%) cases of trisomy 21, 16/16 (95% CI, 79.4%-100%) cases of trisomy 18, 5/5 (95% CI, 47.8%-100%) cases of trisomy 13, and 538/538 (95% CI, 99.3%-100%) normal cases. The test also correctly identified fetal sex in all cases (95% CI, 99.4%-100%). One sample failed prespecified assay quality control criteria, and 19 samples were nonreportable because of low fetal fraction. The extent to which free fetal DNA testing can be applied as a universal screening tool for trisomy 21, 18, and 13 depends mainly on assay accuracy and cost. Cell-free DNA analysis of targeted genomic regions in maternal plasma enables accurate and cost-effective noninvasive fetal aneuploidy detection, which is critical for widespread adoption of NIPT. © 2016 American Association for Clinical Chemistry.
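The counting principle behind such cell-free DNA tests can be sketched as a z-score on the target-chromosome read fraction. All numbers below are illustrative assumptions; the assay's actual pipeline (targeted capture, fetal fraction estimation, classification algorithm) is far more involved:

```python
import numpy as np

rng = np.random.default_rng(7)
# Fraction of reads mapping to the target region (e.g. chr21) in a panel of
# euploid reference pregnancies -- illustrative mean and spread
ref = rng.normal(0.0130, 0.0002, 200)

fetal_fraction = 0.12                 # assumed estimate from the assay
# An extra fetal chr21 copy inflates the read fraction by fetal_fraction / 2
trisomy_case = 0.0130 * (1 + fetal_fraction / 2)

z = (trisomy_case - ref.mean()) / ref.std(ddof=1)
print(round(float(z), 1))             # above a typical z > 3 call threshold
```

The dependence on fetal fraction is why low-fetal-fraction samples (19 in this study) are nonreportable: the trisomy signal shrinks toward the reference distribution.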
Optimising cluster survey design for planning schistosomiasis preventive chemotherapy
Sturrock, Hugh J. W.; Turner, Hugo; Whitton, Jane M.; Gower, Charlotte M.; Jemu, Samuel; Phillips, Anna E.; Meite, Aboulaye; Thomas, Brent; Kollie, Karsor; Thomas, Catherine; Rebollo, Maria P.; Styles, Ben; Clements, Michelle; Fenwick, Alan; Harrison, Wendy E.; Fleming, Fiona M.
2017-01-01
Background The cornerstone of current schistosomiasis control programmes is delivery of praziquantel to at-risk populations. Such preventive chemotherapy requires accurate information on the geographic distribution of infection, yet the performance of alternative survey designs for estimating prevalence and converting this into treatment decisions has not been thoroughly evaluated. Methodology/Principal findings We used baseline schistosomiasis mapping surveys from three countries (Malawi, Côte d’Ivoire and Liberia) to generate spatially realistic gold standard datasets, against which we tested alternative two-stage cluster survey designs. We assessed how sampling different numbers of schools per district (2–20) and children per school (10–50) influences the accuracy of prevalence estimates and treatment class assignment, and we compared survey cost-efficiency using data from Malawi. Due to the focal nature of schistosomiasis, up to 53% simulated surveys involving 2–5 schools per district failed to detect schistosomiasis in low endemicity areas (1–10% prevalence). Increasing the number of schools surveyed per district improved treatment class assignment far more than increasing the number of children sampled per school. For Malawi, surveys of 15 schools per district and 20–30 children per school reliably detected endemic schistosomiasis and maximised cost-efficiency. In sensitivity analyses where treatment costs and the country considered were varied, optimal survey size was remarkably consistent, with cost-efficiency maximised at 15–20 schools per district. Conclusions/Significance Among two-stage cluster surveys for schistosomiasis, our simulations indicated that surveying 15–20 schools per district and 20–30 children per school optimised cost-efficiency and minimised the risk of under-treatment, with surveys involving more schools of greater cost-efficiency as treatment costs rose. PMID:28552961
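The two-stage simulation logic can be sketched as follows. Beta-distributed school prevalences and the simplified treatment thresholds are assumptions for illustration, not the study's gold-standard datasets or WHO categories:

```python
import numpy as np

rng = np.random.default_rng(42)

def treatment_class(prev):
    # Simplified thresholds: <10% low, 10-50% moderate, >=50% high
    return 0 if prev < 0.10 else (1 if prev < 0.50 else 2)

def simulate_survey(school_prevs, n_schools, n_children):
    # Stage 1: sample schools; Stage 2: sample children within each school
    schools = rng.choice(school_prevs, n_schools, replace=False)
    positives = rng.binomial(n_children, schools)
    return positives.sum() / (n_schools * n_children)

# "Gold standard" district: 100 schools with focal infection, mean prev ~6%
school_prevs = rng.beta(0.3, 4.7, 100)
true_prev = school_prevs.mean()

# How often does a 15-school x 25-child survey assign the right class?
correct = sum(
    treatment_class(simulate_survey(school_prevs, 15, 25))
    == treatment_class(true_prev)
    for _ in range(500)
) / 500
print(round(float(true_prev), 3), correct)
```

Repeating this over grids of school and child counts, with per-survey costs attached, is the kind of computation used to locate the cost-efficiency optimum at 15-20 schools per district.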
Meng, Yu; Li, Gang; Gao, Yaozong; Lin, Weili; Shen, Dinggang
2016-11-01
Longitudinal neuroimaging analysis of the dynamic brain development in infants has received increasing attention recently. Many studies expect a complete longitudinal dataset in order to accurately chart the brain developmental trajectories. However, in practice, a large portion of subjects in longitudinal studies often have missing data at certain time points, due to various reasons such as the absence of scan or poor image quality. To make better use of these incomplete longitudinal data, in this paper, we propose a novel machine learning-based method to estimate the subject-specific, vertex-wise cortical morphological attributes at the missing time points in longitudinal infant studies. Specifically, we develop a customized regression forest, named dynamically assembled regression forest (DARF), as the core regression tool. DARF ensures the spatial smoothness of the estimated maps for vertex-wise cortical morphological attributes and also greatly reduces the computational cost. By employing a pairwise estimation followed by a joint refinement, our method is able to fully exploit the available information from both subjects with complete scans and subjects with missing scans for estimation of the missing cortical attribute maps. The proposed method has been applied to estimating the dynamic cortical thickness maps at missing time points in an incomplete longitudinal infant dataset, which includes 31 healthy infant subjects, each having up to five time points in the first postnatal year. The experimental results indicate that our proposed framework can accurately estimate the subject-specific vertex-wise cortical thickness maps at missing time points, with the average error less than 0.23 mm. Hum Brain Mapp 37:4129-4147, 2016. © 2016 Wiley Periodicals, Inc.
Validation of the work and health interview.
Stewart, Walter F; Ricci, Judith A; Leotta, Carol; Chee, Elsbeth
2004-01-01
Instruments that measure the impact of illness on work do not usually provide a measure that can be directly translated into lost hours or costs. We describe the validation of the Work and Health Interview (WHI), a questionnaire that provides a measure of lost productive time (LPT) from work absence and reduced performance at work. A sample (n = 67) of inbound phone call agents was recruited for the study. Validity of the WHI was assessed over a 2-week period in reference to workplace data (i.e. absence time, time away from call station and electronic continuous performance) and repeated electronic diary data (n = 48) obtained approximately eight times a day to estimate time not working (i.e. a component of reduced performance). The mean (median) missed work time estimate for any reason was 11 (8.0) and 12.9 (8.0) hours in a 2-week period from the WHI and workplace data, respectively, with a Pearson's (Spearman's) correlation of 0.84 (0.76). The diary-based mean (median) estimate of time not working while at work was 3.9 (2.8) hours compared with the WHI estimate of 5.7 (3.2) hours with a Pearson's (Spearman's) correlation of 0.19 (0.33). The 2-week estimate of total productive time from the diary was 67.2 hours compared with 67.8 hours from the WHI, with a Pearson's (Spearman's) correlation of 0.50 (0.46). At a population level, the WHI provides an accurate estimate of missed time from work and total productive time when compared with workplace and diary estimates. At an individual level, the WHI measure of total missed time, but not reduced performance time, is moderately accurate.
Zhang, Zhijun; Ashraf, Muhammad; Sahn, David J; Song, Xubo
2014-05-01
Quantitative analysis of cardiac motion is important for evaluation of heart function. Three-dimensional (3D) echocardiography is among the most frequently used imaging modalities for motion estimation because it is convenient, real-time, low-cost, and nonionizing. However, motion estimation from 3D echocardiographic sequences is still a challenging problem due to low image quality and image corruption by noise and artifacts. The authors have developed a temporally diffeomorphic motion estimation approach in which the velocity field, instead of the displacement field, is optimized. The optimal velocity field optimizes a novel similarity function, called the intensity consistency error, defined by evolving multiple consecutive frames to each time point. The optimization problem is solved by using the steepest descent method. Experiments with simulated datasets, images of an ex vivo rabbit phantom, images of in vivo open-chest pig hearts, and healthy human images were used to validate the authors' method. Tests on simulated and real cardiac sequences showed that the authors' method is more accurate than other competing temporal diffeomorphic methods. Tests with sonomicrometry showed that the tracked crystal positions have good agreement with ground truth and that the authors' method has higher accuracy than the temporal diffeomorphic free-form deformation (TDFFD) method. Validation with an open-access human cardiac dataset showed that the authors' method has smaller feature tracking errors than both TDFFD and frame-to-frame methods. The authors proposed a diffeomorphic motion estimation method with temporal smoothness achieved by constraining the velocity field to have maximum local intensity consistency within multiple consecutive frames. The estimated motion using the authors' method has good temporal consistency and is more accurate than other temporally diffeomorphic motion estimation methods.
Binnendijk, Erika; Gautham, Meenakshi; Koren, Ruth; Dror, David M
2012-10-09
Most healthcare spending in developing countries is private out-of-pocket. One explanation for the low penetration of health insurance is that poorer individuals doubt their ability to enforce insurance contracts. Community-based health insurance schemes (CBHI) are a solution, but launching CBHI requires obtaining accurate local data on morbidity, healthcare utilization and other details to inform package design and pricing. We developed the "Illness Mapping" method (IM) for data collection (faster and cheaper than household surveys). IM is a modification of two non-interactive consensus group methods (Delphi and Nominal Group Technique) to operate as interactive methods. We elicited estimates from "Experts" in the target community on morbidity and healthcare utilization. Interaction between facilitator and experts became essential to bridge literacy constraints and to reach consensus. The study was conducted in Gaya District, Bihar (India) during April-June 2010. The intervention included the IM and a household survey (HHS). IM included 18 women's and 17 men's groups. The HHS was conducted in 50 villages with 1,000 randomly selected households (6,656 individuals). We found good agreement between the two methods on overall prevalence of illness (IM: 25.9% ±3.6; HHS: 31.4%) and on prevalence of acute (IM: 76.9%; HHS: 69.2%) and chronic illnesses (IM: 20.1%; HHS: 16.6%). We also found good agreement on incidence of deliveries (IM: 3.9% ±0.4; HHS: 3.9%) and on hospital deliveries (IM: 61.0% ±5.4; HHS: 51.4%). For hospitalizations, we obtained a lower estimate from the IM (1.1%) than from the HHS (2.6%). The IM required less time and less person-power than a household survey, which translates into reduced costs. We have shown that our Illness Mapping method can be carried out at lower financial and human cost for sourcing essential local data, at acceptably accurate levels. In view of the good fit of the results obtained, we assume that the method could work elsewhere as well.
Tropical forest plantation biomass estimation using RADARSAT-SAR and TM data of South China
NASA Astrophysics Data System (ADS)
Wang, Chenli; Niu, Zheng; Gu, Xiaoping; Guo, Zhixing; Cong, Pifu
2005-10-01
Forest biomass is one of the most important parameters for global carbon stock models, yet it can only be estimated with great uncertainty. Remote sensing, especially SAR data, offers the possibility of providing relatively accurate forest biomass estimates at a lower cost than inventory when studying tropical forests. The goal of this research was to compare the sensitivity of forest biomass to Landsat TM and RADARSAT-SAR data and to assess the efficiency of NDVI, EVI and other vegetation indices in estimating forest biomass, based on field survey data and GIS in South China. Based on vegetation indices and factor analysis, multiple regression models and neural networks were developed for biomass estimation for each species of the plantation. For each species, the best relationship between predicted biomass and that measured from the field survey was obtained with a neural network developed for that species. The relationship between predicted and measured biomass derived from vegetation indices differed between species. This study concludes that single bands and many vegetation indices are weakly correlated with the selected forest biomass. The RADARSAT-SAR backscatter coefficient has a relatively good logarithmic correlation with forest biomass, but neither TM spectral bands nor vegetation indices alone are sufficient to establish an efficient model for biomass estimation, due to the saturation of bands and vegetation indices; multiple regression models that combine spectral and environmental variables improve biomass estimation performance. Compared with TM, relatively good estimation results can be achieved with RADARSAT-SAR, but both had limitations in tropical forest biomass estimation. The estimation results obtained are not accurate enough for forest management purposes at the forest stand level. However, the approximate volume estimates derived by the method can be useful in areas where no other forest information is available.
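The multiple-regression step described above can be sketched as follows. The data here are synthetic stand-ins for the paper's field plots (the NDVI values, backscatter relation, and coefficients are all illustrative assumptions, not the study's measurements): biomass is regressed on a vegetation index and a SAR backscatter coefficient by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Synthetic plots: NDVI-like index and a SAR backscatter coefficient (dB)
# with an assumed logarithmic relation to biomass (t/ha).
ndvi = rng.uniform(0.2, 0.9, n)
biomass = 150 * ndvi + rng.normal(0, 5, n)
sigma0 = 5 * np.log10(biomass) - 20 + rng.normal(0, 0.5, n)

# Multiple linear regression of biomass on the two predictors.
X = np.column_stack([np.ones(n), ndvi, sigma0])
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((biomass - pred) ** 2) / np.sum((biomass - biomass.mean()) ** 2)
```

On real data, the saturation the paper reports would show up as a poor fit at high biomass; a neural network, as used in the study, can capture that nonlinearity where this linear model cannot.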
Therefore, this paper provides a better understanding of relationships of remote sensing data and forest stand parameters used in forest parameter estimation models.
Measurement of Antenna Bore-Sight Gain
NASA Technical Reports Server (NTRS)
Fortinberry, Jarrod; Shumpert, Thomas
2016-01-01
The absolute or free-field gain of a simple antenna can be approximated using standard antenna theory formulae or, for a more accurate prediction, numerical methods may be employed to solve for antenna parameters, including gain. Both of these methods result in reasonable estimates, but in practice antenna gain is usually verified and documented via measurements and calibration. In this paper, a relatively simple, low-cost, yet effective means of determining the bore-sight free-field gain of a VHF/UHF antenna is proposed, using the Brewster angle relationship.
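The measurement rests on the Friis transmission equation; for two identical antennas, Pr/Pt = G^2 (lambda/(4*pi*d))^2, so the gain follows directly from a power-ratio measurement. A minimal sketch (the function name and setup are illustrative; the paper's contribution is using the Brewster angle to suppress the ground reflection so that these free-field conditions are approximated):

```python
import math

def boresight_gain_db(pr_watts, pt_watts, distance_m, freq_hz):
    """Gain (dBi) of each of two identical antennas from a Friis measurement."""
    lam = 299792458.0 / freq_hz
    # Friis for identical antennas: Pr/Pt = G^2 * (lam / (4*pi*d))^2
    g = math.sqrt(pr_watts / pt_watts) * (4 * math.pi * distance_m / lam)
    return 10 * math.log10(g)
```

For example, a received-to-transmitted power ratio equal to the free-space path loss factor alone corresponds to 0 dBi antennas.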
Adaptive finite element method for turbulent flow near a propeller
NASA Astrophysics Data System (ADS)
Pelletier, Dominique; Ilinca, Florin; Hetu, Jean-Francois
1994-11-01
This paper presents an adaptive finite element method based on remeshing to solve incompressible turbulent free shear flow near a propeller. Solutions are obtained in primitive variables using a highly accurate finite element approximation on unstructured grids. Turbulence is modeled by a mixing length formulation. Two general purpose error estimators, which take into account swirl and the variation of the eddy viscosity, are presented and applied to the turbulent wake of a propeller. Predictions compare well with experimental measurements. The proposed adaptive scheme is robust, reliable and cost effective.
ASTRAL, DRAGON and SEDAN scores predict stroke outcome more accurately than physicians.
Ntaios, G; Gioulekas, F; Papavasileiou, V; Strbian, D; Michel, P
2016-11-01
ASTRAL, SEDAN and DRAGON scores are three well-validated scores for stroke outcome prediction. We investigated whether these scores predict stroke outcome more accurately than physicians interested in stroke. Physicians interested in stroke were invited to an online anonymous survey to provide outcome estimates in randomly allocated structured scenarios of recent real-life stroke patients. Their estimates were compared to the scores' predictions in the same scenarios. An estimate was considered accurate if it was within the 95% confidence interval of the actual outcome. In all, 244 participants from 32 different countries responded, assessing 720 real scenarios and 2636 outcomes. The majority of physicians' estimates were inaccurate (1422/2636, 53.9%). 400 (56.8%) of physicians' estimates about the percentage probability of 3-month modified Rankin score (mRS) > 2 were accurate, compared with 609 (86.5%) of ASTRAL score estimates (P < 0.0001). 394 (61.2%) of physicians' estimates about the percentage probability of post-thrombolysis symptomatic intracranial haemorrhage were accurate, compared with 583 (90.5%) of SEDAN score estimates (P < 0.0001). 160 (24.8%) of physicians' estimates about the post-thrombolysis 3-month percentage probability of mRS 0-2 were accurate, compared with 240 (37.3%) of DRAGON score estimates (P < 0.0001). 260 (40.4%) of physicians' estimates about the percentage probability of post-thrombolysis mRS 5-6 were accurate, compared with 518 (80.4%) of DRAGON score estimates (P < 0.0001). ASTRAL, DRAGON and SEDAN scores predict the outcome of acute ischaemic stroke patients with higher accuracy than physicians interested in stroke. © 2016 EAN.
Fast and low-cost method for VBES bathymetry generation in coastal areas
NASA Astrophysics Data System (ADS)
Sánchez-Carnero, N.; Aceña, S.; Rodríguez-Pérez, D.; Couñago, E.; Fraile, P.; Freire, J.
2012-12-01
Sea floor topography is key information in coastal area management. Nowadays, LiDAR and multibeam technologies provide accurate bathymetries in those areas; however, these methodologies are still too expensive for small customers (fishermen's associations, small research groups) wishing to keep periodic surveillance of environmental resources. In this paper, we analyse a simple methodology for vertical beam echosounder (VBES) bathymetric data acquisition and postprocessing, using low-cost means and free customizable tools such as ECOSONS and gvSIG (which is compared with the industry-standard ArcGIS). Echosounder data were filtered, resampled, and interpolated (using kriging or radial basis functions). Moreover, the presented methodology includes two data correction processes: Monte Carlo simulation, used to reduce GPS errors, and manually applied bathymetric line transformations, both of which improve the obtained results. As an example, we present the bathymetry of the Ría de Cedeira (Galicia, NW Spain), a good testbed for coastal bathymetry methodologies given its extent and rich topography. The statistical analysis, performed by direct ground-truthing, rendered an upper bound of 1.7 m error at the 95% confidence level and 0.7 m r.m.s. (cross-validation provided 30 cm and 25 cm, respectively). The methodology presented is fast and easy to implement, accurate outside transects (accuracy can be estimated), and can be used as a low-cost periodic monitoring method.
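The gridding step can be sketched with a radial basis function interpolator, one of the two interpolation options the paper mentions (kriging being the other). The soundings below are synthetic, standing in for filtered echosounder transect data, not the Ría de Cedeira survey:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Hypothetical scattered soundings: (x, y) positions in metres and depths (m)
# on a synthetic, smoothly sloping sea floor.
xy = rng.uniform(0, 100, size=(50, 2))
depth = 0.1 * xy[:, 0] + 0.05 * xy[:, 1]

# Thin-plate-spline RBF interpolation of the soundings onto a regular grid.
rbf = RBFInterpolator(xy, depth, kernel="thin_plate_spline")
gx, gy = np.meshgrid(np.linspace(0, 100, 11), np.linspace(0, 100, 11))
grid = rbf(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
```

Because the thin-plate-spline kernel includes a first-degree polynomial term, a planar sea floor like the synthetic one above is reproduced exactly; real soundings would of course also carry the noise that the paper's Monte Carlo GPS correction addresses.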
Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie
2012-06-01
Soil loss prediction models such as the universal soil loss equation (USLE) and the revised universal soil loss equation (RUSLE) are useful tools for risk assessment of soil erosion and planning of soil conservation at the regional scale. Rational estimation of the vegetation cover and management factor, one of the most important parameters in USLE or RUSLE, is particularly important for accurate prediction of soil erosion. Traditional estimation based on field survey and measurement is time-consuming, laborious, and costly, and cannot rapidly extract the vegetation cover and management factor at the macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for the estimation of the vegetation cover and management factor over broad geographic areas. This paper summarizes research findings on the quantitative estimation of the vegetation cover and management factor using remote sensing data, and analyzes the advantages and disadvantages of various methods, with the aim of providing a reference for further research and quantitative estimation of the vegetation cover and management factor at large scales.
Alves, Gelio; Yu, Yi-Kuo
2016-09-01
There is a growing trend for biomedical researchers to extract evidence and draw conclusions from mass spectrometry based proteomics experiments, the cornerstone of which is peptide identification. Inaccurate assignments of peptide identification confidence thus may have far-reaching and adverse consequences. Although some peptide identification methods report accurate statistics, they have been limited to certain types of scoring function. The extreme value statistics based method, while more general in the scoring functions it allows, demands accurate parameter estimates and requires, at least in its original design, excessive computational resources. Improving the parameter estimate accuracy and reducing the computational cost for this method has two advantages: it provides another feasible route to accurate significance assessment, and it could provide reliable statistics for scoring functions yet to be developed. We have formulated and implemented an efficient algorithm for calculating the extreme value statistics for peptide identification applicable to various scoring functions, bypassing the need for searching large random databases. The source code, implemented in C++ on Linux, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit (contact: yyu@ncbi.nlm.nih.gov). Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
NASA Astrophysics Data System (ADS)
Rybynok, V. O.; Kyriacou, P. A.
2007-10-01
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite the many attempts. This paper tackles one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.
Van De Gucht, Tim; Saeys, Wouter; Van Meensel, Jef; Van Nuffel, Annelies; Vangeyte, Jurgen; Lauwers, Ludwig
2018-01-01
Although prototypes of automatic lameness detection systems for dairy cattle exist, information about their economic value is lacking. In this paper, a conceptual and operational framework for simulating the farm-specific economic value of automatic lameness detection systems was developed and tested on 4 system types: walkover pressure plates, walkover pressure mats, camera systems, and accelerometers. The conceptual framework maps essential factors that determine economic value (e.g., lameness prevalence, incidence and duration, lameness costs, detection performance, and their relationships). The operational simulation model links treatment costs and avoided losses with detection results and farm-specific information, such as herd size and lameness status. Results show that detection performance, herd size, discount rate, and system lifespan have a large influence on economic value. In addition, lameness prevalence influences the economic value, stressing the importance of an adequate prior estimation of the on-farm prevalence. The simulations provide first estimates of the upper limits for purchase prices of automatic detection systems. The framework allowed for identification of knowledge gaps obstructing more accurate economic value estimation. These include insights into cost reductions due to early detection and treatment, and into links between specific lameness causes and their related losses. Because this model provides insight into the trade-offs between automatic detection systems' performance and investment price, it is a valuable tool to guide future research and development. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
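The influence of discount rate, lifespan, and purchase price on economic value follows the usual net-present-value logic; a minimal sketch of that core calculation (the framework itself is far more detailed, and all numbers here are hypothetical placeholders, not the study's inputs):

```python
def npv(annual_benefit, purchase_price, lifespan_years, discount_rate):
    """Net present value of a detection system over its lifespan.

    annual_benefit -- avoided lameness losses minus treatment costs, per year
    """
    pv = sum(annual_benefit / (1 + discount_rate) ** t
             for t in range(1, lifespan_years + 1))
    return pv - purchase_price

# Hypothetical example: $1,000/yr net benefit, $5,000 system, 10 yr, 5% rate.
value = npv(1000.0, 5000.0, 10, 0.05)
```

Setting `npv` to zero and solving for `purchase_price` gives the kind of upper-limit purchase price the simulations estimate.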
Measuring PM and related air pollutants using low-cost ...
Emerging air quality sensors may play a key role in better characterizing levels of air pollution in a variety of settings. There is a wide range of low-cost (< $500 US) sensors on the market, but few have been characterized. If accurate, this new generation of inexpensive sensors can potentially allow larger fleets of monitors to be deployed to better study the spatial and temporal variability of pollutants. The small size and light weight of these sensors also allow for the possibility of wearable or drone applications. Sensor networks will very likely play a key role in future estimates of the human health impacts of pollutants, in particular particulate matter (PM), and will allow for better characterization of pollutant sources and source regions. We will present measurements from an assortment of sensors, costing $20-$700, that have been used to measure air pollution in the US, India, and China, with a focus on estimating PM concentrations. Their performance has been evaluated in these very different settings, with low concentrations seen in the US (up to approximately 20 μg m-3) and much higher concentrations measured in India and China (up to approximately 300 μg m-3). Based on these studies, the optimal concentration ranges of these sensors have been determined. Used in conjunction with data from a carbon dioxide sensor, emission factors were estimated in some of the locations. In addition, temperature and humidity sensors can be used to calculate c
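The CO2-aided emission factor estimate can be sketched with the carbon-balance method, which is assumed here as the approach (the abstract does not name it); it further assumes all emitted fuel carbon appears as CO2, and the constants and fuel carbon fraction below are illustrative:

```python
def pm_emission_factor(delta_pm, delta_co2_ppm, carbon_fraction=0.5):
    """Grams of PM emitted per kg of fuel burned (carbon-balance method).

    delta_pm        -- PM enhancement above background, ug/m3
    delta_co2_ppm   -- CO2 enhancement above background, ppm
    carbon_fraction -- assumed carbon mass fraction of the fuel (hypothetical)
    """
    MW_C, MOLAR_VOL = 12.01, 24.45                        # g/mol; L/mol at 25 C, 1 atm
    delta_c = delta_co2_ppm * MW_C / MOLAR_VOL * 1000.0   # ug of carbon per m3
    return delta_pm / delta_c * carbon_fraction * 1000.0  # g PM per kg fuel
```

For instance, a 100 μg m-3 PM enhancement paired with a 10 ppm CO2 enhancement corresponds to roughly 10 g PM per kg of fuel under these assumptions.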
Predicting poverty and wealth from mobile phone metadata.
Blumenstock, Joshua; Cadamuro, Gabriel; On, Robert
2015-11-27
Accurate and timely estimates of population characteristics are a critical input to social and economic research and policy. In industrialized economies, novel sources of data are enabling new approaches to demographic profiling, but in developing countries, fewer sources of big data exist. We show that an individual's past history of mobile phone use can be used to infer his or her socioeconomic status. Furthermore, we demonstrate that the predicted attributes of millions of individuals can, in turn, accurately reconstruct the distribution of wealth of an entire nation or infer the asset distribution of microregions composed of just a few households. In resource-constrained environments where censuses and household surveys are rare, this approach creates an option for gathering localized and timely information at a fraction of the cost of traditional methods. Copyright © 2015, American Association for the Advancement of Science.
Applying Cost Imposition Strategies against China
2015-01-01
…decision makers will find that cost imposition is not a panacea. They should understand the concept beyond its current level of misuse, both for the… Chinese responses and accurate accounting for the monetary and other security costs involved. In the air domain, competition involving China's ballistic and cruise…
Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn
2013-03-06
Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. 
Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
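The stepping-stone estimator itself can be illustrated on a toy conjugate model where the marginal likelihood is known in closed form. This stands in for, and is far simpler than, the phylogenetic models of the paper: here each power posterior is Gaussian and can be sampled exactly, whereas in practice MCMC is required at each rung.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: y_i ~ N(theta, 1), prior theta ~ N(0, 1).
y = rng.normal(0.5, 1.0, size=10)
n, s = len(y), y.sum()

def loglik(theta):
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1))

# Stepping-stone: log Z = sum_k log E_{beta_k}[ L(theta)^(beta_{k+1}-beta_k) ],
# with rungs concentrated near the prior (cubic spacing).
betas = np.linspace(0.0, 1.0, 33) ** 3
log_z = 0.0
for b0, b1 in zip(betas[:-1], betas[1:]):
    prec = 1.0 + b0 * n                        # power posterior at b0 is Gaussian here
    theta = rng.normal(b0 * s / prec, np.sqrt(1.0 / prec), size=20000)
    ll = loglik(theta)
    m = ll.max()                               # log-sum-exp stabilization
    log_z += np.log(np.mean(np.exp((b1 - b0) * (ll - m)))) + (b1 - b0) * m

# Analytic log marginal likelihood for comparison: y ~ N(0, I + 11^T).
cov = np.eye(n) + np.ones((n, n))
analytic = -0.5 * (n * np.log(2 * np.pi)
                   + np.linalg.slogdet(cov)[1]
                   + y @ np.linalg.solve(cov, y))
```

Running two such estimates, one per competing model, and differencing them reproduces the error-compounding problem the paper describes; the paper's model-switch variants instead estimate the (log) Bayes factor along a single path between the two models.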
NASA Astrophysics Data System (ADS)
Tao, Laifa; Lu, Chen; Noktehdan, Azadeh
2015-10-01
Battery capacity estimation is a significant recent challenge given the complex physical and chemical processes that occur within batteries and the restrictions on the accessibility of capacity degradation data. In this study, we describe an approach called dynamic spatial time warping, which is used to determine the similarity of two arbitrary curves. Unlike classical dynamic time warping methods, this approach maintains the invariance of curve similarity to rotations and translations of the curves, which is vital in curve similarity search. Moreover, it utilizes online charging or discharging data that are easily collected and do not require special assumptions. The accuracy of this approach is verified using NASA battery datasets. Results suggest that the proposed approach provides a highly accurate means of estimating battery capacity, at a lower time cost than traditional dynamic time warping methods, for different individuals and under various operating conditions.
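For reference, the classical dynamic time warping baseline that the proposed dynamic spatial time warping extends can be sketched as follows. Note that plain DTW, as implemented here, lacks the rotation and translation invariance the paper adds:

```python
import numpy as np

def dtw_distance(a, b):
    """Classical dynamic time warping distance between two 1-D curves."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

For example, a curve and a time-stretched copy of it warp to distance zero, which is the property that makes DTW-style measures attractive for comparing charging curves of different durations.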
NASA Astrophysics Data System (ADS)
Fuchs, Erica R. H.; Bruce, E. J.; Ram, R. J.; Kirchain, Randolph E.
2006-08-01
The monolithic integration of components holds promise to increase network functionality and reduce packaging expense. Integration also drives down yield due to manufacturing complexity and the compounding of failures across devices. Consensus is lacking on the economically preferred extent of integration. Previous studies on the cost feasibility of integration have used high-level estimation methods. This study instead focuses on accurate-to-industry detail, basing a process-based cost model of device manufacture on data collected from 20 firms across the optoelectronics supply chain. The model presented allows for the definition of process organization, including testing, as well as processing conditions, operational characteristics, and level of automation at each step. This study focuses on the cost implications of integration of a 1550-nm DFB laser with an electroabsorptive modulator on an InP platform. Results show the monolithically integrated design to be more cost-competitive than discrete component options regardless of production scale. Dominant cost drivers are packaging, testing, and assembly. Leveraging the technical detail underlying model projections, component alignment, bonding, and metal-organic chemical vapor deposition (MOCVD) are identified as processes where technical improvements are most critical to lowering costs. Such results should encourage exploration of the cost advantages of further integration and focus cost-driven technology development.
Cost and Economics for Advanced Launch Vehicles
NASA Technical Reports Server (NTRS)
Whitfield, Jeff
1998-01-01
Market sensitivity and weight-based cost estimating relationships are key drivers in determining the financial viability of advanced space launch vehicle designs. Due to decreasing space transportation budgets and increasing foreign competition, it has become essential for financial assessments of prospective launch vehicles to be performed during the conceptual design phase. As part of this financial assessment, it is imperative to understand the relationship between market volatility, the uncertainty of weight estimates, and the economic viability of an advanced space launch vehicle program. This paper reports the results of a study that evaluated the economic risk inherent in market variability and the uncertainty of developing weight estimates for an advanced space launch vehicle program. The purpose of this study was to determine the sensitivity of a business case for advanced space flight design with respect to the changing nature of market conditions and the complexity of determining accurate weight estimations during the conceptual design phase. The expected uncertainty associated with these two factors drives the economic risk of the overall program. The study incorporates Monte Carlo simulation techniques to determine the probability of attaining specific levels of economic performance when the market and weight parameters are allowed to vary. This structured approach toward uncertainties allows for the assessment of risks associated with a launch vehicle program's economic performance. This results in the determination of the value of the additional risk placed on the project by these two factors.
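The Monte Carlo approach described can be sketched as follows; the demand and weight distributions, launch price, and weight-based cost-estimating relationship below are illustrative placeholders, not the study's inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo trials

# Hypothetical input distributions: market demand (launches/year) and
# vehicle dry weight (kg), whose uncertainty drives cost uncertainty.
demand = rng.normal(20, 5, N).clip(min=0)
weight = rng.normal(50_000, 5_000, N)

price_per_launch = 80e6        # assumed revenue per launch, $
cost_per_launch = 1_500 * weight   # assumed weight-based CER, $/kg

profit = demand * (price_per_launch - cost_per_launch)

# Probability of attaining a target level of economic performance.
p_profitable = np.mean(profit > 0)
```

Sweeping the target (e.g. profit above some hurdle) over many such runs yields the kind of probability-of-attainment curves the study uses to quantify the risk the market and weight uncertainties place on the program.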
Automated wind load characterization of wind turbine structures by embedded model updating
NASA Astrophysics Data System (ADS)
Swartz, R. Andrew; Zimmerman, Andrew T.; Lynch, Jerome P.
2010-04-01
The continued development of renewable energy resources is essential for the nation to limit its carbon footprint and to enjoy independence in energy production. Key to that effort are reliable generators of renewable energy that are economically competitive with legacy sources. In the area of wind energy, a major contributor to the cost of implementation is the large uncertainty regarding the condition of wind turbines in the field, due to a lack of information about loading, dynamic response, and the fatigue life of the structure already expended. Under favorable circumstances, this uncertainty leads to overly conservative designs and maintenance schedules. Under unfavorable circumstances, it leads to inadequate maintenance schedules, damage to electrical systems, or even structural failure. Low-cost wireless sensors can provide more certainty for stakeholders by measuring the dynamic response of the structure to loading, estimating the fatigue state of the structure, and extracting loading information from the structural response without the need for an upwind instrumentation tower. This study presents a method for using wireless sensor networks to estimate the spectral properties of the wind loading on a turbine tower based on its measured response and some rudimentary knowledge of its structure. Structural parameters are estimated via model updating in the frequency domain to produce an identification of the system. The updated structural model and the measured output spectra are then used to estimate the input spectra. Laboratory results are presented indicating accurate load characterization.
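The final step, estimating the input spectrum from the measured output spectrum and the updated model, follows from the linear random-vibration relation S_yy(w) = |H(w)|^2 S_uu(w). A single-degree-of-freedom sketch, with an assumed identified natural frequency and damping ratio standing in for the updated model:

```python
import numpy as np

w = np.linspace(0.1, 10, 500)          # frequency axis, rad/s
wn, zeta = 3.0, 0.05                   # identified natural freq & damping (assumed)

# SDOF receptance (unit modal mass): H(w) = 1 / (wn^2 - w^2 + 2j*zeta*wn*w)
H = 1.0 / (wn**2 - w**2 + 2j * zeta * wn * w)

S_uu_true = np.ones_like(w)            # synthetic white-noise load spectrum
S_yy = np.abs(H) ** 2 * S_uu_true      # "measured" response spectrum

# Invert the relation to recover the load spectrum from the response.
S_uu_est = S_yy / np.abs(H) ** 2
```

In practice the inversion is ill-conditioned wherever |H| is small (far from resonance), which is part of why an accurate model update matters before the load spectrum is back-calculated.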
Sampling considerations for modal analysis with damping
NASA Astrophysics Data System (ADS)
Park, Jae Young; Wakin, Michael B.; Gilbert, Anna C.
2015-03-01
Structural health monitoring (SHM) systems are critical for monitoring aging infrastructure (such as buildings or bridges) in a cost-effective manner. Wireless sensor networks that sample vibration data over time are particularly appealing for SHM applications due to their flexibility and low cost. However, in order to extend the battery life of wireless sensor nodes, it is essential to minimize the amount of vibration data these sensors must collect and transmit. In recent work, we have studied the performance of the Singular Value Decomposition (SVD) applied to the collection of data and provided a new finite-sample analysis characterizing conditions under which this simple technique, also known as the Proper Orthogonal Decomposition (POD), can correctly estimate the mode shapes of the structure. Specifically, we provided theoretical guarantees on the number and duration of samples required in order to estimate a structure's mode shapes to a desired level of accuracy. In that previous work, however, we considered simplified Multiple-Degree-Of-Freedom (MDOF) systems with no damping. In this paper we consider MDOF systems with proportional damping and show that, with sufficiently light damping, the POD can continue to provide accurate estimates of a structure's mode shapes. We support our discussion with new analytical insight and experimental demonstrations. In particular, we study the tradeoffs between the level of damping, the sampling rate and duration, and the accuracy to which the structure's mode shapes can be estimated.
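The POD idea is compact enough to sketch: stack the sensor time histories into a matrix and take its leading left singular vectors as mode-shape estimates. Below is a toy two-degree-of-freedom example with invented modal data and light damping (the paper analyzes when and how well this works):

```python
import numpy as np

def pod_mode_shapes(Y, n_modes):
    """Estimate mode shapes as the leading left singular vectors of the
    (sensors x time) response matrix Y; valid when the modal amplitudes
    are (nearly) uncorrelated and have distinct energies."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U[:, :n_modes]

# synthetic 2-DOF free response with light damping
t = np.linspace(0, 20, 2000)
phi1 = np.array([1.0, 1.0]) / np.sqrt(2)     # true mode shapes (orthonormal)
phi2 = np.array([1.0, -1.0]) / np.sqrt(2)
q1 = np.exp(-0.01 * t) * np.cos(2 * np.pi * 1.0 * t)        # modal responses
q2 = 0.5 * np.exp(-0.02 * t) * np.cos(2 * np.pi * 3.0 * t)
Y = np.outer(phi1, q1) + np.outer(phi2, q2)

Phi = pod_mode_shapes(Y, 2)
# modal assurance: estimated shapes match the true ones up to sign
print(abs(Phi[:, 0] @ phi1), abs(Phi[:, 1] @ phi2))
```

Heavier damping correlates the modal amplitudes over short records, which is exactly the regime the paper's analysis addresses.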
2012-01-01
Background Efficient, robust, and accurate genotype imputation algorithms make large-scale application of genomic selection cost effective. An algorithm that imputes alleles or allele probabilities for all animals in the pedigree and for all genotyped single nucleotide polymorphisms (SNP) provides a framework to combine all pedigree, genomic, and phenotypic information into a single-stage genomic evaluation. Methods An algorithm was developed for imputation of genotypes in pedigreed populations that allows imputation for completely ungenotyped animals and for low-density genotyped animals, accommodates a wide variety of pedigree structures for genotyped animals, imputes unmapped SNP, and works for large datasets. The method involves simple phasing rules, long-range phasing, haplotype library imputation, and segregation analysis. Results Imputation accuracy was high and computational cost was feasible for datasets with pedigrees of up to 25 000 animals. The resulting single-stage genomic evaluation increased the accuracy of estimated genomic breeding values compared to a scenario in which phenotypes on relatives that were not genotyped were ignored. Conclusions The developed imputation algorithm and software and the resulting single-stage genomic evaluation method provide powerful new ways to exploit imputation and to obtain more accurate genetic evaluations. PMID:22462519
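A toy illustration of the kind of simple phasing/imputation rule the Methods mention, shown for a single biallelic SNP. The function name and genotype coding (0/1/2 copies of allele B) are hypothetical, not from the paper's software:

```python
def impute_from_parents(sire, dam):
    """Expected genotype dosage of an ungenotyped offspring at one SNP,
    from parental genotypes alone: a homozygous parent transmits its
    allele with certainty, a heterozygote transmits each allele with
    probability 0.5 (no linkage information used in this sketch)."""
    def transmitted_b_prob(g):
        # probability that a parent with genotype g transmits a B allele
        return {0: 0.0, 1: 0.5, 2: 1.0}[g]
    return transmitted_b_prob(sire) + transmitted_b_prob(dam)

print(impute_from_parents(2, 0))  # homozygous parents -> offspring dosage 1.0
print(impute_from_parents(1, 1))  # two heterozygotes -> expected dosage 1.0
```

The real algorithm sharpens these probabilities with long-range phasing, a haplotype library, and segregation analysis across the whole pedigree.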
NASA Astrophysics Data System (ADS)
Wei, Zhongbao; Tseng, King Jet; Wai, Nyunt; Lim, Tuti Mariana; Skyllas-Kazacos, Maria
2016-11-01
Reliable state estimation depends largely on an accurate battery model. However, the parameters of a battery model are time-varying with operating conditions and battery aging. Existing co-estimation methods address model uncertainty by integrating online model identification with state estimation and have shown improved accuracy. However, cross interference may arise from the integrated framework and compromise numerical stability and accuracy. This paper therefore proposes decoupling model identification from state estimation to eliminate the possibility of cross interference. The model parameters are adapted online with the recursive least squares (RLS) method, based on which a novel joint estimator built on the extended Kalman filter (EKF) is formulated to estimate the state of charge (SOC) and capacity concurrently. The proposed joint estimator effectively compresses the filter order, which leads to substantial improvements in computational efficiency and numerical stability. Lab-scale experiments on a vanadium redox flow battery show that the proposed method is accurate and robust to varying operating conditions and battery aging. The proposed method is further compared with some existing methods and shown to be superior in terms of accuracy, convergence speed, and computational cost.
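The decoupled identification step can be illustrated with a plain recursive-least-squares update. The toy battery model (v = OCV - R0*i) and all numbers are invented for illustration; the paper pairs the RLS-identified model with an EKF for SOC and capacity rather than the bare fit shown here:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.99):
    """One recursive-least-squares step with forgetting factor lam,
    adapting the parameter vector theta of the model y = x' theta."""
    x = x.reshape(-1, 1)
    K = P @ x / (lam + (x.T @ P @ x).item())   # gain vector
    err = y - (x.T @ theta).item()             # prediction error
    theta = theta + K * err
    P = (P - K @ x.T @ P) / lam                # covariance update
    return theta, P

# identify OCV and R0 of a toy cell: v = ocv - R0 * i, with measurement noise
rng = np.random.default_rng(3)
theta = np.zeros((2, 1))
P = 1e3 * np.eye(2)
for _ in range(500):
    i = rng.uniform(-2, 2)                     # applied current (A)
    v = 3.7 - 0.05 * i + rng.normal(0, 1e-3)   # measured voltage (V)
    theta, P = rls_update(theta, P, np.array([1.0, -i]), v)

print(theta.ravel())   # approaches [3.7, 0.05]
```

The forgetting factor lets the estimates track slow parameter drift from aging, which is the point of running identification online.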
NASA Astrophysics Data System (ADS)
Reynerson, Charles Martin
This research has been performed to create concept design and economic feasibility data for space business parks. A space business park is a commercially run multi-use space station facility designed for use by a wide variety of customers. Both space hardware and crew are considered as revenue-producing payloads. Examples of commercial markets may include biological and materials research, processing, and production; space tourism habitats; and satellite maintenance and resupply depots. This research develops a design methodology and an analytical tool to create feasible preliminary design information for space business parks. The design tool is validated against a number of real facility designs. Appropriate model variables are adjusted to ensure that statistical approximations are valid for subsequent analyses. The tool is used to analyze the effect of various payload requirements on the size, weight and power of the facility. The approach for the analytical tool was to input potential payloads as simple requirements, such as volume, weight, power, crew size, and endurance. In creating the theory, basic principles are used and combined with parametric estimation of data when necessary. Key system parameters are identified for overall system design. Typical ranges for these key parameters are identified based on real human spaceflight systems. To connect the economics to design, a life-cycle cost model is created based upon facility mass. This rough cost model estimates the potential return on investment, the initial investment required, and the number of years needed to recoup that investment. Example cases are analyzed for both performance- and cost-driven requirements for space hotels, microgravity processing facilities, and multi-use facilities. In combining both engineering and economic models, a design-to-cost methodology is created for more accurately estimating the commercial viability of multiple space business park markets.
Can global navigation satellite system signals reveal the ecological attributes of forests?
NASA Astrophysics Data System (ADS)
Liu, Jingbin; Hyyppä, Juha; Yu, Xiaowei; Jaakkola, Anttoni; Liang, Xinlian; Kaartinen, Harri; Kukko, Antero; Zhu, Lingli; Wang, Yunsheng; Hyyppä, Hannu
2016-08-01
Forests have important impacts on the global carbon cycle and climate, and they are also related to a wide range of industrial sectors. Currently, one of the biggest challenges in forestry research is effectively and accurately measuring and monitoring forest variables, as the exploitation potential of forest inventory products largely depends on the accuracy of estimates and on the cost of data collection. A low-cost crowdsourcing solution is needed for forest inventory to collect forest variables. Here, we propose global navigation satellite system (GNSS) signals as a novel type of observables for predicting forest attributes and show the feasibility of utilizing GNSS signals for estimating important attributes of forest plots, including mean tree height, mean diameter at breast height, basal area, stem volume and tree biomass. The prediction accuracies of the proposed technique were better in boreal forest conditions than those of the conventional techniques of 2D remote sensing. More importantly, this technique provides a novel, cost-effective way of collecting large-scale forest measurements in the crowdsourcing context. This technique can be applied by, for example, harvesters or persons hiking or working in forests because GNSS devices are widely used, and the field operation of this technique is simple and does not require professional forestry skills.
Open-wheel race car driving: energy cost for pilots.
Beaune, Bruno; Durand, Sylvain; Mariot, Jean-Pierre
2010-11-01
The aim of this study was to evaluate the energy cost of speedway open-wheel race car driving using actimetry. Eight pilot students participated in a training session consisting of 5 successive bouts of around 30 minutes of driving at steady speed on the Bugatti speedway of Le Mans (France). Energy expenditure (EE, kcal) was determined continuously by the actimetric method using the standard equation. Energy cost was estimated through calculation of the physical activity ratio (PAR = EE/BMR, Mets) after estimation of the basal metabolic rate (BMR, kcal·min-1). A 1-Met PAR value was attributed to the individual BMR of each volunteer. Bout durations and EE were not significantly different between driving bouts. Mean speed was 139.94 ± 2.96 km·h-1. Physical activity ratio values ranged from 4.92 ± 0.50 to 5.43 ± 0.47 Mets, corresponding to a mean PAR of 5.27 ± 0.47 Mets and a mean BMR of 1.21 ± 0.41 kcal·min-1. These results suggest that actimetry is a simple and efficient method for EE and PAR measurements in motor sports. However, further studies are needed to accurately evaluate the relationships between PAR and driving intensity or between PAR and race car type.
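The PAR arithmetic reduces to one line. The 191.4 kcal bout value below is invented so that the study's mean BMR (1.21 kcal/min) reproduces its reported mean PAR:

```python
def physical_activity_ratio(ee_kcal, minutes, bmr_kcal_per_min):
    """Energy cost as a physical activity ratio (Mets):
    PAR = (EE per minute) / BMR over the same period."""
    return (ee_kcal / minutes) / bmr_kcal_per_min

# a 30-min driving bout at the study's mean BMR of 1.21 kcal/min
par = physical_activity_ratio(ee_kcal=191.4, minutes=30, bmr_kcal_per_min=1.21)
print(round(par, 2))  # 5.27, matching the study's mean PAR
```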
Eberhart, Michael G; Share, Amanda M; Shpaner, Mark; Brady, Kathleen A
2015-02-01
Travel distance to medical care has been assessed using a variety of geographic methods. Network analyses are less common, but may generate more accurate estimates of travel costs. We compared straight-line distances and driving distance, as well as average drive time and travel time on a public transit network for 1789 persons diagnosed with HIV between 2010 and 2012 to identify differences overall, and by distinct geographic areas of Philadelphia. Paired t-tests were used to assess differences across methods, and analysis of variance was used to assess between-group differences. Driving distances were significantly longer than straight-line distances (p<0.001) and transit times were significantly longer than driving times (p<0.001). Persons living in the northeast section of the city traveled greater distances, and at greater cost of time and effort, than persons in all other areas of the city (p<0.001). Persons living in the northwest section of the city traveled farther and longer than all other areas except the northeast (p<0.0001). Network analyses that include public transit will likely produce a more realistic estimate of the travel costs, and may improve models to predict medical care outcomes. Copyright © 2014 Elsevier Inc. All rights reserved.
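The straight-line baseline in comparisons like this is typically the great-circle (haversine) distance, which driving and transit network analyses then exceed. A sketch with illustrative Philadelphia-area coordinates (not locations from the study):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle ('straight-line') distance in km between two
    latitude/longitude points, on a spherical Earth of radius 6371 km."""
    R = 6371.0
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# center city to a point in the northeast section (illustrative coordinates)
d = haversine_km(39.9526, -75.1652, 40.0500, -75.0600)
print(round(d, 1))
```

Driving and transit distances for the same pair come from a routed network (e.g., a GIS network dataset) and are what the paper's paired t-tests compare against this baseline.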
NASA Land Information System (LIS) Water Availability to Support Reclamation ET Estimation
NASA Technical Reports Server (NTRS)
Toll, David; Arsenault, Kristi; Pinheiro, Ana; Peters-Lidard, Christa; Houser, Paul; Kumar, Sujay; Engman, Ted; Nigro, Joe; Triggs, Jonathan
2005-01-01
The U.S. Bureau of Reclamation identified the remote sensing of evapotranspiration (ET) as an important water flux for study and designated a test site in the Lower Colorado River basin. A consortium of groups will work together with the goal to develop more accurate and cost effective techniques using the enhanced spatial and temporal coverage afforded by remote sensing. ET is a critical water loss flux where improved estimation should lead to better management of Reclamation responsibilities. There are several areas where NASA satellite and modeling data may be useful to meet Reclamation's objectives for improved ET estimation. In this paper we outline one possible contribution to use NASA's data integration capability of the Land Information System (LIS) to provide a merger of observational (in situ and satellite) with physical process models to provide estimates of ET and other water availability outputs (e.g., runoff, soil moisture) retrospectively, in near real-time, and also providing short-term predictions.
Estimating TCP Packet Loss Ratio from Sampled ACK Packets
NASA Astrophysics Data System (ADS)
Yamasaki, Yasuhiro; Shimonishi, Hideyuki; Murase, Tutomu
The advent of various quality-sensitive applications has greatly changed the requirements for IP network management and made the monitoring of individual traffic flows more important. Since the processing costs of per-flow quality monitoring are high, especially in high-speed backbone links, packet sampling techniques have been attracting considerable attention. Existing sampling techniques, such as those used in Sampled NetFlow and sFlow, however, focus on the monitoring of traffic volume, and there has been little discussion of the monitoring of such quality indexes as packet loss ratio. In this paper we propose a method for estimating, from sampled packets, packet loss ratios in individual TCP sessions. It detects packet loss events by monitoring duplicate ACK events raised by each TCP receiver. Because sampling reveals only a portion of the actual packet loss, the actual packet loss ratio is estimated statistically. Simulation results show that the proposed method can estimate the TCP packet loss ratio accurately from a 10% sampling of packets.
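The statistical scale-up can be sketched as follows. This is a simplified version of the paper's idea: each loss event surfaces as a burst of duplicate ACKs, sampling at rate p sees only some of them, and both the loss-event count and the packet count are corrected for the sampling. The burst-length model and numbers are illustrative assumptions:

```python
def estimate_loss_ratio(dupack_events_seen, packets_seen, p, burst_len=3):
    """Estimate the TCP packet loss ratio from sampled traffic.
    A loss event generating `burst_len` duplicate ACKs is detected with
    probability 1 - (1-p)^burst_len under independent packet sampling at
    rate p, while the total packet count scales as packets_seen / p.
    (Simplified; the paper models TCP receiver behaviour in more detail.)"""
    detect_prob = 1 - (1 - p) ** burst_len
    est_loss_events = dupack_events_seen / detect_prob
    est_total_packets = packets_seen / p
    return est_loss_events / est_total_packets

# 10% sampling: 27 duplicate-ACK events among 10,000 sampled packets
ratio = estimate_loss_ratio(27, 10_000, 0.10)
print(round(ratio, 4))
```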
NASA Astrophysics Data System (ADS)
Rougé, Charles; Harou, Julien J.; Pulido-Velazquez, Manuel; Matrosov, Evgenii S.
2017-04-01
The marginal opportunity cost of water refers to the benefits forgone by not allocating an additional unit of water to its most economically productive use at a specific location in a river basin at a specific moment in time. Estimating the opportunity cost of water is an important contribution to water management, as it can be used for better water allocation or better system operation and can suggest where future water infrastructure could be most beneficial. Opportunity costs can be estimated using 'shadow values' provided by hydro-economic optimization models. Yet such models' reliance on optimization means they have difficulty accurately representing the impact of operating rules and regulatory and institutional mechanisms on actual water allocation. In this work we use more widely available river basin simulation models to estimate opportunity costs. This has been done before by adding a small quantity of water to the model at the place and time where the opportunity cost should be computed, then running a simulation and comparing the difference in system benefits. The added system benefits per unit of water added to the system then provide an approximation of the opportunity cost. This approximation can then be used to design efficient pricing policies that provide incentives for users to reduce their water consumption. Yet this method requires one simulation run per node and per time step, which is computationally demanding for large-scale systems and short time steps (e.g., a day or a week). Moreover, opportunity cost estimates are supposed to reflect the most productive use of an additional unit of water, yet the simulation rules do not necessarily use water that way. In this work, we propose an alternative approach, which computes the opportunity cost through a double backward induction, first recursively from outlet to headwaters within the river network at each time step, then recursively backwards in time.
Both backward inductions require only linear operations, and the resulting algorithm tracks the maximal benefit that can be obtained by having an additional unit of water at any node in the network and at any date in time. Results 1) can be obtained from the results of a rule-based simulation using a single post-processing run, and 2) are exactly the (gross) benefit forgone by not allocating an additional unit of water to its most productive use. The proposed method is applied to London's water resource system to track the value of storage in the city's water supply reservoirs on the Thames River throughout a weekly 85-year simulation. Results, obtained in 0.4 seconds on a single processor, reflect the environmental cost of water shortage. This fast computation allows visualizing the seasonal variations of the opportunity cost depending on reservoir levels, demonstrating the potential of this approach for exploring water values and their variations using simulation models with multiple runs (e.g., of stochastically generated plausible future river inflows).
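The spatial recursion (outlet to headwaters) can be sketched in a few lines; the network and marginal benefits below are invented, and the paper adds a second recursion backward in time for storage:

```python
def opportunity_cost(local_benefit, downstream):
    """Backward induction over a river network: the marginal value of
    one extra unit of water at a node is the best of using it locally or
    passing it to the most valuable use downstream.
    local_benefit[n]: marginal benefit of one extra unit at node n;
    downstream[n]: the next node downstream (None at the outlet)."""
    value = {}
    def v(n):
        if n not in value:
            down = v(downstream[n]) if downstream[n] is not None else 0.0
            value[n] = max(local_benefit[n], down)
        return value[n]
    for n in local_benefit:
        v(n)
    return value

# headwater -> irrigation district -> city intake -> outlet
benefit = {"head": 0.0, "irrigation": 2.0, "city": 5.0, "outlet": 0.0}
down = {"head": "irrigation", "irrigation": "city", "city": "outlet", "outlet": None}
values = opportunity_cost(benefit, down)
print(values["head"])  # 5.0: water at the headwater is worth its best downstream use
```

Each node is visited once, which is what makes a single post-processing pass over a rule-based simulation sufficient.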
48 CFR 1615.406-2 - Certificate of accurate cost or pricing data for community-rated carriers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... cost or pricing data for community-rated carriers. 1615.406-2 Section 1615.406-2 Federal Acquisition... CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Contract Pricing 1615.406-2 Certificate of accurate cost or pricing data for community-rated carriers. The contracting officer will require a carrier...
Meehan, Sue-Ann; Beyers, Nulda; Burger, Ronelle
2017-12-02
In South Africa, the financing and sustainability of HIV services is a priority. Community-based HIV testing services (CB-HTS) play a vital role in diagnosis and linkage to HIV care for those least likely to utilise government health services. With insufficient estimates of the costs associated with CB-HTS provided by NGOs in South Africa, this cost analysis explored the cost to implement and provide services at two NGO-led CB-HTS modalities and calculated the costs associated with realizing key HIV outputs for each CB-HTS modality. The study took place in a peri-urban area where CB-HTS were provided from a stand-alone centre and mobile service. Using a service provider (NGO) perspective, all inputs were allocated by HTS modality with shared costs apportioned according to client volume or personnel time. We calculated the total cost of each HTS modality and the cost categories (personnel, capital and recurring goods/services) across each HTS modality. Costs were divided into seven pre-determined project components, used to examine cost drivers. HIV outputs were analysed for each HTS modality and the mean cost for each HIV output was calculated per HTS modality. The annual cost of the stand-alone and mobile modalities was $96,616 and $77,764 respectively, with personnel costs accounting for 54% of the total costs at the stand-alone. For project components, overheads and service provision made up the majority of the costs. The mean cost per person tested at stand-alone ($51) was higher than at the mobile ($25). Linkage to care cost at the stand-alone ($1039) was lower than the mobile ($2102). This study provides insight into the cost of an NGO led CB-HTS project providing HIV testing and linkage to care through two CB-HIV testing modalities. 
The study highlights: (1) the importance of including all applicable costs (including overheads) to ensure an accurate cost estimate that is representative of the full service implementation cost; (2) the direct link between test uptake and mean cost per person tested; and (3) the need for effective linkage-to-care strategies to increase linkage and thereby reduce the mean cost per person linked to HIV care.
Waterlander, Wilma E; Blakely, Tony; Nghiem, Nhung; Cleghorn, Christine L; Eyles, Helen; Genc, Murat; Wilson, Nick; Jiang, Yannan; Swinburn, Boyd; Jacobi, Liana; Michie, Jo; Ni Mhurchu, Cliona
2016-07-19
There is a need for accurate and precise food price elasticities (PE, change in consumer demand in response to change in price) to better inform policy on health-related food taxes and subsidies. The Price Experiment and Modelling (Price ExaM) study aims to: I) derive accurate and precise food PE values; II) quantify the impact of price changes on the quantity and quality of discrete food group purchases; and III) model the potential health and disease impacts of a range of food taxes and subsidies. To achieve this, we will use a novel method that includes a randomised Virtual Supermarket experiment and econometric methods. Findings will be applied in simulation models to estimate population health impact (quality-adjusted life-years [QALYs]) using a multi-state life-table model. The study will consist of four sequential steps: 1. We generate 5000 price sets with random price variation for all 1412 Virtual Supermarket food and beverage products. Then we add systematic price variation for foods to simulate five taxes and subsidies: a fruit and vegetable subsidy and taxes on sugar, saturated fat, salt, and sugar-sweetened beverages. 2. Using an experimental design, 1000 adult New Zealand shoppers complete five household grocery shops in the Virtual Supermarket, where they are randomly assigned to one of the 5000 price sets each time. 3. Output data (i.e., multiple observations of price configurations and purchased amounts) are used as inputs to econometric models (using Bayesian methods) to estimate accurate PE values. 4. A disease simulation model will be run with the new PE values as inputs to estimate QALYs gained and health costs saved for the five policy interventions. The Price ExaM study has the potential to enhance public health and economic disciplines by introducing internationally novel scientific methods to estimate accurate and precise food PE values.
These values will be used to model the potential health and disease impacts of various food pricing policy options. Findings will inform policy on health-related food taxes and subsidies. Australian New Zealand Clinical Trials Registry ACTRN12616000122459 (registered 3 February 2016).
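The core elasticity calculation behind such a protocol can be sketched as a log-log regression. This simplified OLS version on synthetic data (all parameters invented) stands in for the Bayesian econometric models the study specifies:

```python
import numpy as np

def price_elasticity(prices, quantities):
    """Own-price elasticity estimated by OLS on the log-log demand curve
    ln(q) = a + e * ln(p); the slope e is the elasticity."""
    X = np.column_stack([np.ones(len(prices)), np.log(prices)])
    coef, *_ = np.linalg.lstsq(X, np.log(quantities), rcond=None)
    return coef[1]

# synthetic purchases generated with a true elasticity of -0.8
rng = np.random.default_rng(7)
p = rng.uniform(1.0, 3.0, 500)                       # randomised prices
q = 10.0 * p ** -0.8 * np.exp(rng.normal(0, 0.05, 500))  # noisy demand
e = price_elasticity(p, q)
print(round(e, 2))
```

The randomised price sets in the Virtual Supermarket play the role of the exogenous price variation that makes this regression identify demand rather than supply.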
Khavjou, Olga A; Honeycutt, Amanda A; Hoerger, Thomas J; Trogdon, Justin G; Cash, Amanda J
2014-08-01
Community-based programs require substantial investments of resources; however, evaluations of these programs usually lack analyses of program costs. Costs of community-based programs reported in previous literature are limited and have been estimated retrospectively. To describe a prospective cost data collection approach developed for the Communities Putting Prevention to Work (CPPW) program capturing costs for community-based tobacco use and obesity prevention strategies. A web-based cost data collection instrument was developed using an activity-based costing approach. Respondents reported quarterly expenditures on labor; consultants; materials, travel, and services; overhead; partner efforts; and in-kind contributions. Costs were allocated across CPPW objectives and strategies organized around five categories: media, access, point of decision/promotion, price, and social support and services. The instrument was developed in 2010, quarterly data collections took place in 2011-2013, and preliminary analysis was conducted in 2013. Preliminary descriptive statistics are presented for the cost data collected from 51 respondents. More than 50% of program costs were for partner organizations, and over 20% of costs were for labor hours. Tobacco communities devoted the majority of their efforts to media strategies. Obesity communities spent more than half of their resources on access strategies. Collecting accurate cost information on health promotion and disease prevention programs presents many challenges. The approach presented in this paper is one of the first efforts successfully collecting these types of data and can be replicated for collecting costs from other programs. Copyright © 2014 American Journal of Preventive Medicine. All rights reserved.
Feasibility study for automatic reduction of phase change imagery
NASA Technical Reports Server (NTRS)
Nossaman, G. O.
1971-01-01
The feasibility of automatically reducing a form of pictorial aerodynamic heating data is discussed. The imagery, depicting the melting history of a thin coat of fusible temperature indicator painted on an aerodynamically heated model, was previously reduced by manual methods. Careful examination of various lighting theories and approaches led to an experimentally verified illumination concept capable of yielding high-quality imagery. Both digital and video image processing techniques were applied to reduction of the data, and it was demonstrated that either method can be used to develop superimposed contours. Mathematical techniques were developed to find the model-to-image and the inverse image-to-model transformation using six conjugate points, and methods were developed using these transformations to determine heating rates on the model surface. A video system was designed which is able to reduce the imagery rapidly, economically and accurately. Costs for this system were estimated. A study plan was outlined whereby the mathematical transformation techniques developed to produce model coordinate heating data could be applied to operational software, and methods were discussed and costs estimated for obtaining the digital information necessary for this software.
Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates
NASA Astrophysics Data System (ADS)
Carbogno, Christian; Scheffler, Matthias
In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We have developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD and the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Finally, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
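For reference, the Green-Kubo expression that such simulations evaluate relates κ to the equilibrium heat-flux autocorrelation function (standard form, with Cartesian indices α, β, cell volume V, and temperature T):

```latex
\kappa_{\alpha\beta} \;=\; \frac{V}{k_{\mathrm{B}} T^{2}} \int_{0}^{\infty} \left\langle J_{\alpha}(t)\, J_{\beta}(0) \right\rangle \,\mathrm{d}t
```

The slow convergence of this time integral with simulation length is what makes direct ab initio GK evaluations costly and motivates the accelerated formalism.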
Fast and accurate genotype imputation in genome-wide association studies through pre-phasing
Howie, Bryan; Fuchsberger, Christian; Stephens, Matthew; Marchini, Jonathan; Abecasis, Gonçalo R.
2013-01-01
Sequencing efforts, including the 1000 Genomes Project and disease-specific efforts, are producing large collections of haplotypes that can be used for genotype imputation in genome-wide association studies (GWAS). Imputing from these reference panels can help identify new risk alleles, but the use of large panels with existing methods imposes a high computational burden. To keep imputation broadly accessible, we introduce a strategy called “pre-phasing” that maintains the accuracy of leading methods while cutting computational costs by orders of magnitude. In brief, we first statistically estimate the haplotypes for each GWAS individual (“pre-phasing”) and then impute missing genotypes into these estimated haplotypes. This reduces the computational cost because: (i) the GWAS samples must be phased only once, whereas standard methods would implicitly re-phase with each reference panel update; (ii) it is much faster to match a phased GWAS haplotype to one reference haplotype than to match unphased GWAS genotypes to a pair of reference haplotypes. This strategy will be particularly valuable for repeated imputation as reference panels evolve. PMID:22820512
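The computational argument can be made concrete with a toy imputation step: once a GWAS haplotype is phased, each missing allele can be filled from the single best-matching reference haplotype, an O(H) scan over H references, versus the O(H^2) pairs needed for unphased genotypes. The function and data below are illustrative only; real imputation uses a hidden Markov model over the panel, not nearest-haplotype copying:

```python
def impute_prephased(hap, ref_panel):
    """Fill missing alleles (None) in a phased haplotype by copying from
    the reference haplotype with the fewest mismatches at observed sites
    (hypothetical toy stand-in for HMM-based imputation)."""
    def mismatches(ref):
        return sum(1 for a, r in zip(hap, ref) if a is not None and a != r)
    best = min(ref_panel, key=mismatches)      # O(H) scan of the panel
    return [r if a is None else a for a, r in zip(hap, best)]

ref_panel = [[0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1]]
imputed = impute_prephased([0, 1, None, 0], ref_panel)
print(imputed)  # [0, 1, 1, 0]
```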
Measuring food intake with digital photography.
Martin, C K; Nicklas, T; Gunturk, B; Correa, J B; Allen, H R; Champagne, C
2014-01-01
The digital photography of foods method accurately estimates the food intake of adults and children in cafeterias. When using this method, images of food selection and leftovers are quickly captured in the cafeteria. These images are later compared with images of 'standard' portions of food using computer software. The amount of food selected and discarded is estimated based upon this comparison, and the application automatically calculates energy and nutrient intake. In the present review, we describe this method, as well as a related method called the Remote Food Photography Method (RFPM), which relies on smartphones to estimate food intake in near real-time in free-living conditions. When using the RFPM, participants capture images of food selection and leftovers using a smartphone and these images are wirelessly transmitted in near real-time to a server for analysis. Because data are transferred and analysed in near real-time, the RFPM provides a platform for participants to quickly receive feedback about their food intake behaviour and to receive dietary recommendations for achieving weight loss and health promotion goals. The reliability and validity of measuring food intake with the RFPM in adults and children is also reviewed. In sum, the body of research reviewed demonstrates that digital imaging accurately estimates food intake in many environments and it has many advantages over other methods, including reduced participant burden, elimination of the need for participants to estimate portion size, and the incorporation of computer automation to improve the accuracy, efficiency and cost-effectiveness of the method. © 2013 The British Dietetic Association Ltd.