Sample records for cost estimating computer

  1. Estimating costs and performance of systems for machine processing of remotely sensed data

    NASA Technical Reports Server (NTRS)

    Ballard, R. J.; Eastwood, L. F., Jr.

    1977-01-01

    This paper outlines a method for estimating computer processing times and costs incurred in producing information products from digital remotely sensed data. The method accounts for both computation and overhead, and may be applied to any serial computer. The method is applied to estimate the cost and computer time involved in producing Level II Land Use and Vegetative Cover Maps for a five-state midwestern region. The results show that the amount of data to be processed overloads some example computer systems, but that the processing is feasible on others.
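
    To make the estimation idea concrete, the sketch below prices a classification job as computation time plus an overhead allowance on a serial machine; the pixel counts, operation counts, throughput, overhead fraction, and hourly rate are hypothetical placeholders rather than figures from the paper.

    ```python
    # Hypothetical sketch of a serial-computer time/cost estimate for processing
    # remotely sensed imagery: computation time plus overhead, priced at an hourly rate.

    def processing_cost(n_pixels, ops_per_pixel, ops_per_second,
                        overhead_fraction=0.3, dollars_per_cpu_hour=200.0):
        """Return (hours, dollars) to classify n_pixels on a serial machine."""
        compute_seconds = n_pixels * ops_per_pixel / ops_per_second
        total_seconds = compute_seconds * (1.0 + overhead_fraction)  # I/O, OS, tape handling
        hours = total_seconds / 3600.0
        return hours, hours * dollars_per_cpu_hour

    if __name__ == "__main__":
        # Example: a multi-state region covered by ~2e9 pixels, with an assumed
        # 500 operations per pixel for a per-pixel classifier.
        hours, dollars = processing_cost(2e9, 500, ops_per_second=5e6)
        print(f"~{hours:,.0f} CPU-hours, ~${dollars:,.0f}")
    ```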

  2. Manual of phosphoric acid fuel cell power plant cost model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    Cost analysis of a phosphoric acid fuel cell power plant includes two parts: a method for estimation of system capital costs, and an economic analysis which determines the levelized annual cost of operating the system used in the capital cost estimation. A FORTRAN computer program has been developed for this cost analysis.

  3. Price and cost estimation

    NASA Technical Reports Server (NTRS)

    Stewart, R. D.

    1979-01-01

    The Price and Cost Estimating Program (PACE II) was developed to prepare man-hour and material cost estimates. This versatile and flexible tool significantly reduces computation time and errors, and reduces the typing and reproduction time involved in preparing cost estimates.

  4. PACE 2: Pricing and Cost Estimating Handbook

    NASA Technical Reports Server (NTRS)

    Stewart, R. D.; Shepherd, T.

    1977-01-01

    An automatic data processing system to be used for the preparation of industrial engineering type manhour and material cost estimates has been established. This computer system has evolved into a highly versatile and highly flexible tool which significantly reduces computation time, eliminates computational errors, and reduces typing and reproduction time for estimators and pricers since all mathematical and clerical functions are automatic once basic inputs are derived.

  5. Automated Estimation Of Software-Development Costs

    NASA Technical Reports Server (NTRS)

    Roush, George B.; Reini, William

    1993-01-01

    COSTMODL is an automated software-development estimation tool that yields a significant reduction in the risk of cost overruns and failed projects. It accepts a description of the software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of a defined set of development life-cycle phases. Written for IBM PC(R)-compatible computers.

  6. Do Clouds Compute? A Framework for Estimating the Value of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Klems, Markus; Nimis, Jens; Tai, Stefan

    On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has been an objective in distributed computing research and industry for some time. Cloud Computing promises to deliver on this objective: consumers are able to rent infrastructure in the Cloud as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis. The acceptance of Cloud Computing, however, depends on the ability of Cloud Computing providers and consumers to implement a model for business value co-creation. Therefore, a systematic approach to measuring the costs and benefits of Cloud Computing is needed. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and in comparing these costs to those of conventional IT solutions. We demonstrate by means of representative use cases how our framework can be applied to real-world scenarios.

  7. Space Station Furnace Facility. Volume 3: Program cost estimate

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The approach used to estimate costs for the Space Station Furnace Facility (SSFF) is based on a computer program developed internally at Teledyne Brown Engineering (TBE). The program produces time-phased estimates of cost elements for each hardware component, based on experience with similar components. Engineering estimates of the degree of similarity or difference between the current project and the historical data are then used to adjust the computer-produced cost estimate and to fit it to the current project Work Breakdown Structure (WBS). The SSFF Concept as presented at the Requirements Definition Review (RDR) was used as the base configuration for the cost estimate. This program incorporates data on costs of previous projects and the allocation of those costs to the components of one of three time-phased, generic WBSs. Input consists of a list of similar components for which cost data exist, the number of interfaces with their type and complexity, identification of the extent to which previous designs are applicable, and programmatic data concerning schedules and miscellaneous data (travel, off-site assignments). Output is program cost in labor hours and material dollars for each component, broken down by generic WBS task and program schedule phase.
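
    A minimal sketch, under assumed numbers, of the analogy-based adjustment described above: historical component costs are scaled by an engineering similarity factor and spread across time-phased WBS elements. The components, factors, and phase split are invented for illustration and are not TBE's actual model.

    ```python
    # Hypothetical analogy-based cost estimate: scale historical component costs by a
    # similarity factor and spread them over time-phased WBS elements.

    historical_costs = {          # labor hours from "similar" past components (assumed)
        "furnace core": 12_000,
        "power conditioner": 6_500,
        "control electronics": 9_000,
    }
    similarity_factor = {         # >1.0 means the new design is harder than the analogy
        "furnace core": 1.3,
        "power conditioner": 0.9,
        "control electronics": 1.1,
    }
    phase_split = {"design": 0.45, "fabrication": 0.35, "test": 0.20}  # assumed WBS phasing

    estimate = {}
    for comp, hours in historical_costs.items():
        adjusted = hours * similarity_factor[comp]
        estimate[comp] = {phase: adjusted * frac for phase, frac in phase_split.items()}

    for comp, phases in estimate.items():
        total = sum(phases.values())
        print(f"{comp:22s} total {total:8.0f} h  " +
              "  ".join(f"{p}: {h:7.0f}" for p, h in phases.items()))
    ```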

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Wie, N.H.

    An overview of the UCC-ND system for computer-aided cost estimating is provided. The program is generally utilized in the preparation of construction cost estimates for projects costing $25,000,000 or more. The advantages of the system to the manager and the estimator are discussed, and examples of the product are provided. 19 figures, 1 table.

  9. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    PubMed

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation for differential equation models is a challenging problem because of the high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
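
    As a concrete illustration of the trapezoidal variant, the sketch below smooths noisy observations of a one-parameter ODE dx/dt = -θx with a penalized spline, plugs the smoothed states into the trapezoidal discretization, and recovers θ by least squares. The toy model, noise level, and smoothing parameter are illustrative assumptions, not the HIV model from the article.

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    # Toy ODE: dx/dt = -theta * x with theta = 0.5, observed with noise.
    rng = np.random.default_rng(0)
    theta_true = 0.5
    t = np.linspace(0.0, 10.0, 101)
    x_true = np.exp(-theta_true * t)
    y = x_true + rng.normal(scale=0.02, size=t.size)

    # Step 1: penalized-spline smoothing of the state trajectory.
    x_hat = UnivariateSpline(t, y, k=3, s=len(t) * 0.02**2)(t)

    # Step 2: trapezoidal discretization  x_{i+1} - x_i = (h/2) [f(x_i) + f(x_{i+1})]
    #         = -(theta * h / 2) (x_i + x_{i+1})  ->  linear regression for theta.
    h = np.diff(t)
    response = x_hat[1:] - x_hat[:-1]
    design = -(h / 2.0) * (x_hat[:-1] + x_hat[1:])
    theta_est = np.sum(design * response) / np.sum(design * design)

    print(f"true theta = {theta_true}, trapezoidal estimate = {theta_est:.4f}")
    ```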

  10. Computer software to estimate timber harvesting system production, cost, and revenue

    Treesearch

    Dr. John E. Baumgras; Dr. Chris B. LeDoux

    1992-01-01

    Large variations in timber harvesting cost and revenue can result from the differences between harvesting systems, the variable attributes of harvesting sites and timber stands, or changing product markets. Consequently, system and site specific estimates of production rates and costs are required to improve estimates of harvesting revenue. This paper describes...

  11. Weight and cost estimating relationships for heavy lift airships

    NASA Technical Reports Server (NTRS)

    Gray, D. W.

    1979-01-01

    Weight and cost estimating relationships, including additional parameters that influence the cost and performance of heavy-lift airships (HLA), are discussed. Inputs to a closed-loop computer program, consisting of useful load, forward speed, lift module positive or negative thrust, and rotors and propellers, are examined. Detail is given to the HLA cost and weight program (HLACW), which computes component weights, vehicle size, buoyancy lift, rotor and propeller thrust, and engine horsepower. This program solves the problem of interrelating the different aerostat, rotor, engine, and propeller sizes. Six sets of 'default parameters' are left for the operator to change during each computer run, enabling slight data manipulation without altering the program.

  12. The Cost of CAI: A Matter of Assumptions.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    Cost estimates for Computer Assisted Instruction (CAI) depend crucially upon the particular assumptions made about the components of the system to be included in the costs, the expected lifetime of the system and courseware, and the anticipated student utilization of the system/courseware. The cost estimates of three currently operational systems…

  13. A stump-to-truck cost estimating program for cable logging young-growth Douglas-fir

    Treesearch

    Chris B. LeDoux

    1989-01-01

    WCOST is a computer program designed to estimate the stump-to-truck logging cost of cable logging young-growth Douglas-fir. The program uses data from stand inventory, cruise data, and the logging plan for the tract in question to produce detailed stump-to-truck cost estimates for specific proposed timber sales. These estimates are then used, in combination with...

  14. Harvesting systems and costs for southern pine in the 1980s

    Treesearch

    Frederick W. Cubbage; James E. Granskog

    1981-01-01

    Timber harvesting systems and their costs are a major concern for the forest products industries. In this paper, harvest costs per cord are estimated, using computer simulation, for current southern pine harvesting systems. The estimations represent a range of mechanization levels. The sensitivity of systems to factors affecting harvest costs - machine costs, fuel...

  15. Computer programs for estimating civil aircraft economics

    NASA Technical Reports Server (NTRS)

    Maddalon, D. V.; Molloy, J. K.; Neubawer, M. J.

    1980-01-01

    Computer programs for calculating airline direct operating cost, indirect operating cost, and return on investment were developed to provide a means for determining commercial aircraft life cycle cost and economic performance. A representative wide body subsonic jet aircraft was evaluated to illustrate use of the programs.

  16. A Framework for Automating Cost Estimates in Assembly Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calton, T.L.; Peters, R.R.

    1998-12-09

    When a product concept emerges, the manufacturing engineer is asked to sketch out a production strategy and estimate its cost. The engineer is given an initial product design, along with a schedule of expected production volumes. The engineer then determines the best approach to manufacturing the product, comparing a variety of alternative production strategies. The engineer must consider capital cost, operating cost, lead time, and other issues in an attempt to maximize profits. After making these basic choices and sketching the design of overall production, the engineer produces estimates of the required capital, operating costs, and production capacity. This process may iterate as the product design is refined in order to improve its performance or manufacturability. The focus of this paper is on the development of computer tools to aid manufacturing engineers in their decision-making processes. This computer software tool provides a framework in which accurate cost estimates can be seamlessly derived from design requirements at the start of any engineering project. The result is faster cycle times through first-pass success, and lower life cycle cost due to requirements-driven design and accurate cost estimates derived early in the process.

  17. The Development of a Methodology for Estimating the Cost of Air Force On-the-Job Training.

    ERIC Educational Resources Information Center

    Samers, Bernard N.; And Others

    The Air Force uses a standardized costing methodology for resident technical training schools (TTS); no comparable methodology exists for computing the cost of on-the-job training (OJT). This study evaluates three alternative survey methodologies and a number of cost models for estimating the cost of OJT for airmen training in the Administrative…

  18. Measuring costs of data collection at village clinics by village doctors for a syndromic surveillance system-a cross sectional survey from China.

    PubMed

    Ding, Yan; Fei, Yang; Xu, Biao; Yang, Jun; Yan, Weirong; Diwan, Vinod K; Sauerborn, Rainer; Dong, Hengjin

    2015-07-25

    Studies into the costs of syndromic surveillance systems are rare, especially for estimating the direct costs involved in implementing and maintaining these systems. An Integrated Surveillance System in rural China (ISSC project), with the aim of providing an early warning system for outbreaks, was implemented; village clinics were the main surveillance units. Village doctors expressed their willingness to join in the surveillance if a proper subsidy was provided. This study aims to measure the costs of data collection by village clinics to provide a reference regarding the subsidy level required for village clinics to participate in data collection. We conducted a cross-sectional survey with a village clinic questionnaire and a staff questionnaire using a purposive sampling strategy. We tracked reported events using the ISSC internal database. Cost data included staff time, and the annual depreciation and opportunity costs of computers. We measured the village doctors' time costs for data collection by multiplying the number of full-time employment equivalents devoted to the surveillance by the village doctors' annual salaries and benefits, which equaled their net incomes. We estimated the depreciation and opportunity costs of computers by calculating the equivalent annual computer cost and then allocating this to the surveillance based on the percentage usage. The estimated total annual cost of collecting data was 1,423 Chinese Renminbi (RMB) in 2012 (P25 = 857, P75 = 3,284), including 1,250 RMB (P25 = 656, P75 = 3,000) in staff time costs and 134 RMB (P25 = 101, P75 = 335) in depreciation and opportunity costs of computers. The total costs of collecting data from the village clinics for the syndromic surveillance system were calculated to be low compared with the individual net income in County A.
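
    The cost structure described above reduces to simple arithmetic: staff time cost is an FTE share times the doctor's annual net income, and the computer cost is the equivalent annual cost of the machine allocated by its surveillance usage share. The sketch below reproduces that structure with assumed inputs; the actual study computed these quantities from questionnaire data for each clinic.

    ```python
    # Hypothetical per-clinic annual cost of surveillance data collection, following the
    # structure described in the abstract: staff time cost plus allocated computer cost.

    def annual_collection_cost(fte_share, annual_net_income_rmb,
                               computer_price_rmb, useful_life_years,
                               discount_rate, surveillance_usage_share):
        # Staff time cost: fraction of a full-time equivalent times the doctor's net income.
        staff_cost = fte_share * annual_net_income_rmb
        # Equivalent annual cost of the computer (depreciation + opportunity cost),
        # allocated by the share of computer use devoted to surveillance.
        annuity_factor = (1 - (1 + discount_rate) ** -useful_life_years) / discount_rate
        computer_cost = (computer_price_rmb / annuity_factor) * surveillance_usage_share
        return staff_cost + computer_cost, staff_cost, computer_cost

    total, staff, computer = annual_collection_cost(
        fte_share=0.05, annual_net_income_rmb=25_000,
        computer_price_rmb=3_000, useful_life_years=5,
        discount_rate=0.03, surveillance_usage_share=0.20)
    print(f"staff {staff:.0f} RMB + computer {computer:.0f} RMB = {total:.0f} RMB/year")
    ```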

  19. System technology analysis of aeroassisted orbital transfer vehicles: Moderate lift/drag (0.75-1.5). Volume 3: Cost estimates and work breakdown structure/dictionary, phase 1 and 2

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Technology payoffs of representative ground based (Phase 1) and space based (Phase 2) mid lift/drag ratio aeroassisted orbit transfer vehicles (AOTV) were assessed and prioritized. A narrative summary of the cost estimates and work breakdown structure/dictionary for both study phases is presented. Costs were estimated using the Grumman Space Programs Algorithm for Cost Estimating (SPACE) computer program and results are given for four AOTV configurations. The work breakdown structure follows the standard of the joint government/industry Space Systems Cost Analysis Group (SSCAG). A table is provided which shows cost estimates for each work breakdown structure element.

  20. The Pilot Training Study: A Cost-Estimating Model for Advanced Pilot Training (APT).

    ERIC Educational Resources Information Center

    Knollmeyer, L. E.

    The Advanced Pilot Training Cost Model is a statement of relationships that may be used, given the necessary inputs, for estimating the resources required and the costs to train pilots in the Air Force formal flying training schools. Resources and costs are computed by weapon system on an annual basis for use in long-range planning or sensitivity…

  1. Index cost estimate based BIM method - Computational example for sports fields

    NASA Astrophysics Data System (ADS)

    Zima, Krzysztof

    2017-07-01

    The paper presents an example of cost estimation in the early phase of a project. A fragment of a relational database containing solutions, descriptions, construction object geometry, and unit costs of sports facilities is shown. Calculations with the Index Cost Estimate Based BIM method, using Case-Based Reasoning, are also presented. The article covers local and global similarity measurement and an example of a BIM-based quantity takeoff process. The outcome of cost calculations based on the CBR method is presented as the final result.
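
    A minimal Case-Based Reasoning sketch of the local/global similarity idea mentioned above: each attribute receives a local similarity in [0, 1], a weighted sum gives the global similarity, and the unit cost of the most similar stored case seeds the early estimate. The attributes, weights, and costs are invented for illustration and are not taken from the paper's database.

    ```python
    # Hypothetical CBR cost estimate for a sports field: local similarities per attribute,
    # weighted global similarity, unit cost taken from the most similar stored case.

    cases = [  # (attributes, unit cost per m2) -- illustrative historical cases
        ({"area_m2": 7500, "surface": "natural grass", "drainage": 1}, 95.0),
        ({"area_m2": 6200, "surface": "artificial turf", "drainage": 1}, 140.0),
        ({"area_m2": 8000, "surface": "artificial turf", "drainage": 0}, 120.0),
    ]
    weights = {"area_m2": 0.5, "surface": 0.3, "drainage": 0.2}

    def local_sim(attr, a, b):
        if attr == "area_m2":                      # numeric: 1 - normalized distance
            return max(0.0, 1.0 - abs(a - b) / 10_000.0)
        return 1.0 if a == b else 0.0              # symbolic: exact match

    def global_sim(query, case_attrs):
        return sum(w * local_sim(k, query[k], case_attrs[k]) for k, w in weights.items())

    query = {"area_m2": 7000, "surface": "artificial turf", "drainage": 1}
    best_attrs, best_cost = max(cases, key=lambda c: global_sim(query, c[0]))
    print(f"most similar case costs {best_cost} per m2 "
          f"-> index estimate ~{best_cost * query['area_m2']:,.0f}")
    ```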

  2. Highway infrastructure : FHWA's model for estimating highway needs is generally reasonable, despite limitations

    DOT National Transportation Integrated Search

    2000-06-01

    The Highway Economic Requirements System (HERS) computer model estimates investment requirements for the nation's highways by adding together the costs of highway improvements that the model's benefit-cost analyses indicate are warranted. In making i...

  3. 39 CFR 3050.24 - Documentation supporting estimates of costs avoided by worksharing and other mail characteristics...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...

  4. 39 CFR 3050.24 - Documentation supporting estimates of costs avoided by worksharing and other mail characteristics...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...

  5. 39 CFR 3050.24 - Documentation supporting estimates of costs avoided by worksharing and other mail characteristics...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...

  6. 39 CFR 3050.24 - Documentation supporting estimates of costs avoided by worksharing and other mail characteristics...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...

  7. 39 CFR 3050.24 - Documentation supporting estimates of costs avoided by worksharing and other mail characteristics...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...

  8. SAMICS Validation. SAMICS Support Study, Phase 3

    NASA Technical Reports Server (NTRS)

    1979-01-01

    SAMICS provides a consistent basis for estimating array costs and compares production technology costs. A review and a validation of the SAMICS model are reported. The review had the following purposes: (1) to test the computational validity of the computer model by comparison with preliminary hand calculations based on conventional cost estimating techniques; (2) to review and improve the accuracy of the cost relationships being used by the model; and (3) to provide an independent verification to users of the model's value in decision making for allocation of research and development funds and for investment in manufacturing capacity. It is concluded that the SAMICS model is a flexible, accurate, and useful tool for managerial decision making.

  9. GME: at what cost?

    PubMed

    Young, David W

    2003-11-01

    Current computing methods impede determining the real cost of graduate medical education. However, a more accurate estimate could be obtained if policy makers would allow for the application of basic cost-accounting principles, including consideration of department-level costs, unbundling of joint costs, and other factors.

  10. Estimation of the laser cutting operating cost by support vector regression methodology

    NASA Astrophysics Data System (ADS)

    Jović, Srđan; Radović, Aleksandar; Šarkoćević, Živče; Petković, Dalibor; Alizamir, Meysam

    2016-09-01

    Laser cutting is a popular manufacturing process utilized to cut various types of materials economically. The operating cost is affected by laser power, cutting speed, assist gas pressure, nozzle diameter and focus point position as well as the workpiece material. In this article, the process factors investigated were: laser power, cutting speed, air pressure and focal point position. The aim of this work is to relate the operating cost to the process parameters mentioned above. CO2 laser cutting of stainless steel of medical grade AISI316L has been investigated. The main goal was to analyze the operating cost through the laser power, cutting speed, air pressure, focal point position and material thickness. Since estimating the laser operating cost is a complex, non-linear task, soft computing optimization algorithms can be used. An intelligent soft computing scheme, support vector regression (SVR), was implemented. The performance of the proposed estimator was confirmed with the simulation results. The SVR results are then compared with artificial neural networks and genetic programming. According to the results, a greater improvement in estimation accuracy can be achieved through the SVR compared to other soft computing methodologies. The new optimization methods benefit from the soft computing capabilities of global optimization and multiobjective optimization rather than choosing a starting point by trial and error and combining multiple criteria into a single criterion.
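
    As a rough illustration of the SVR step, the snippet below fits a support vector regressor to synthetic (laser power, cutting speed, air pressure, focal position, thickness) to cost data; the data generator and hyperparameters are placeholders, not the experimental values from the study.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the experimental data: 5 process factors -> operating cost.
    rng = np.random.default_rng(1)
    X = rng.uniform([1.0, 0.5, 6.0, -2.0, 1.0],     # power kW, speed m/min, bar, mm, mm
                    [4.0, 3.0, 14.0, 2.0, 6.0], size=(200, 5))
    cost = (8 * X[:, 0] + 12 * X[:, 4] / X[:, 1] + 0.4 * X[:, 2]
            + 0.5 * X[:, 3] ** 2 + rng.normal(scale=0.5, size=200))

    X_tr, X_te, y_tr, y_te = train_test_split(X, cost, test_size=0.25, random_state=0)
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=50.0, epsilon=0.2))
    model.fit(X_tr, y_tr)
    print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
    ```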

  11. Users guide for STHARVEST: software to estimate the cost of harvesting small timber.

    Treesearch

    Roger D. Fight; Xiaoshan Zhang; Bruce R. Hartsough

    2003-01-01

    The STHARVEST computer application is Windows-based, public-domain software used to estimate costs for harvesting small-diameter stands or the small-diameter component of a mixed-sized stand. The equipment production rates were developed from existing studies. Equipment operating cost rates were based on November 1998 prices for new equipment and wage rates for the...

  12. Robust stereo matching with trinary cross color census and triple image-based refinements

    NASA Astrophysics Data System (ADS)

    Chang, Ting-An; Lu, Xiao; Yang, Jar-Ferr

    2017-12-01

    For future 3D TV broadcasting systems and navigation applications, it is necessary to have accurate stereo matching that can precisely estimate the depth map from two separated cameras. In this paper, we first suggest a trinary cross color (TCC) census transform, which helps to achieve an accurate raw disparity matching cost with low computational cost. The two-pass cost aggregation (TPCA) is formed to compute the aggregation cost, and the disparity map can then be obtained by a range winner-take-all (RWTA) process and a white hole filling procedure. To further enhance the accuracy performance, a range left-right checking (RLRC) method is proposed to classify the results as correct, mismatched, or occluded pixels. Then, image-based refinements for the mismatched and occluded pixels are proposed to refine the classified errors. Finally, image-based cross voting and a median filter are employed to complete the fine depth estimation. Experimental results show that the proposed semi-global stereo matching system achieves accurate disparity maps with reasonable computation cost.
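
    For orientation, here is a toy census-style matching cost: each pixel is encoded by comparing cross-shaped neighbors against the center with a threshold (a trinary code), and the raw cost at a candidate disparity is the number of code mismatches. This grayscale, single-channel sketch omits the color handling, aggregation details, and refinement stages of the paper's method.

    ```python
    import numpy as np

    def trinary_census(img, tau=4):
        """Trinary codes (-1/0/+1) for the 4 cross neighbors of every pixel."""
        pad = np.pad(img.astype(np.int32), 1, mode="edge")
        center = pad[1:-1, 1:-1]
        neighbors = [pad[:-2, 1:-1], pad[2:, 1:-1], pad[1:-1, :-2], pad[1:-1, 2:]]
        codes = [np.sign(n - center) * (np.abs(n - center) > tau) for n in neighbors]
        return np.stack(codes, axis=-1)          # H x W x 4

    def matching_cost(left, right, max_disp, tau=4):
        """Cost volume: number of differing trinary codes at each candidate disparity."""
        cl, cr = trinary_census(left, tau), trinary_census(right, tau)
        h, w, _ = cl.shape
        volume = np.full((h, w, max_disp + 1), 4, dtype=np.int32)  # 4 = worst possible cost
        for d in range(max_disp + 1):
            diff = (cl[:, d:, :] != cr[:, : w - d, :]).sum(axis=-1)
            volume[:, d:, d] = diff
        return volume

    # Tiny random example; a real pipeline would aggregate costs and refine the disparity.
    rng = np.random.default_rng(0)
    left = rng.integers(0, 256, size=(32, 48))
    right = np.roll(left, -3, axis=1)            # left pixels appear 3 columns left in right view
    disparity = matching_cost(left, right, max_disp=8).argmin(axis=-1)
    print("median estimated disparity:", int(np.median(disparity[:, 8:])))
    ```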

  13. An efficient Bayesian data-worth analysis using a multilevel Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Lu, Dan; Ricciuto, Daniel; Evans, Katherine

    2018-03-01

    Improving the understanding of subsurface systems and thus reducing prediction uncertainty requires collection of data. As the collection of subsurface data is costly, it is important that the data collection scheme is cost-effective. Design of a cost-effective data collection scheme, i.e., data-worth analysis, requires quantifying model parameter, prediction, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface hydrological model simulations using standard Monte Carlo (MC) sampling or surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, the MLMC can substantially reduce computational costs using multifidelity approximations. Since the Bayesian data-worth analysis involves a great deal of expectation estimation, the cost savings from the MLMC can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we use it for a highly heterogeneous two-phase subsurface flow simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data and are consistent with the standard MC estimation. Compared to the standard MC, however, the MLMC greatly reduces the computational costs.
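
    The core MLMC idea, the telescoping sum E[P_L] = E[P_0] + sum over levels of E[P_l - P_{l-1}] estimated with coupled coarse/fine samples, can be shown on a toy stochastic differential equation; the model, level count, and sample allocation below are illustrative choices, not the subsurface flow simulator used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def euler_pair(n_samples, level, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
        """Coupled coarse/fine Euler-Maruyama samples of X_T for dX = mu X dt + sigma X dW."""
        nf = 2 ** level                      # number of fine time steps
        dt_f = T / nf
        dW = rng.normal(scale=np.sqrt(dt_f), size=(n_samples, nf))
        xf = np.full(n_samples, x0)
        for i in range(nf):                  # fine path
            xf = xf + mu * xf * dt_f + sigma * xf * dW[:, i]
        if level == 0:
            return xf, np.zeros(n_samples)
        dt_c = 2 * dt_f
        xc = np.full(n_samples, x0)
        for i in range(nf // 2):             # coarse path driven by the same Brownian increments
            dWc = dW[:, 2 * i] + dW[:, 2 * i + 1]
            xc = xc + mu * xc * dt_c + sigma * xc * dWc
        return xf, xc

    # Telescoping MLMC estimator: E[P_L] ~ sum over levels of mean(P_fine - P_coarse).
    levels, samples = range(5), [20000, 10000, 5000, 2500, 1250]
    estimate = 0.0
    for level, n in zip(levels, samples):
        fine, coarse = euler_pair(n, level)
        estimate += np.mean(fine - coarse)
    print(f"MLMC estimate of E[X_T]: {estimate:.4f}  (exact: {np.exp(0.05):.4f})")
    ```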

  14. Precision Parameter Estimation and Machine Learning

    NASA Astrophysics Data System (ADS)

    Wandelt, Benjamin D.

    2008-12-01

    I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe) that can vastly accelerate parameter estimation in high-dimensional parameter spaces and costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov-Chain Monte Carlo techniques to efficiently explore a likelihood function, posterior distribution or χ2-surface. This strategy is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo; and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to be able to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.

  15. B-2 Extremely High Frequency SATCOM and Computer Increment 1 (B-2 EHF Inc 1)

    DTIC Science & Technology

    2015-12-01

    Confidence Level of cost estimate for current APB: 55%. This APB reflects cost and funding data based on the B-2 EHF Increment I SCP. This cost estimate was quantified at the mean (~55%) confidence level. SAR Baseline Production Estimate: 33.624; changes from SAR Baseline to Current Estimate: Econ -0.350, Qty 1.381, Sch 0.375, Eng 0.000, Est -6.075, Oth 0.000, Spt -0.620, Total -5.289; Current APB Production Estimate: 28.335.

  16. Efficient Data-Worth Analysis Using a Multilevel Monte Carlo Method Applied in Oil Reservoir Simulations

    NASA Astrophysics Data System (ADS)

    Lu, D.; Ricciuto, D. M.; Evans, K. J.

    2017-12-01

    Data-worth analysis plays an essential role in improving the understanding of the subsurface system, in developing and refining subsurface models, and in supporting rational water resources management. However, data-worth analysis is computationally expensive, as it requires quantifying parameter uncertainty, prediction uncertainty, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface simulations using standard Monte Carlo (MC) sampling or advanced surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian analysis of data-worth using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, the MLMC can substantially reduce the computational cost with the use of multifidelity approximations. As the data-worth analysis involves a great deal of expectation estimation, the cost savings from MLMC in the assessment can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we apply it to a highly heterogeneous oil reservoir simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data and are consistent with the estimation obtained from the standard MC. Compared to the standard MC, however, the MLMC greatly reduces the computational costs in the uncertainty reduction estimation, with up to 600 days of cost savings when one processor is used.

  17. Estimating Software-Development Costs With Greater Accuracy

    NASA Technical Reports Server (NTRS)

    Baker, Dan; Hihn, Jairus; Lum, Karen

    2008-01-01

    COCOMOST is a computer program for use in estimating software development costs. The goal in the development of COCOMOST was to increase estimation accuracy in three ways: (1) develop a set of sensitivity software tools that return not only estimates of costs but also the estimation error; (2) using the sensitivity software tools, precisely define the quantities of data needed to adequately tune cost estimation models; and (3) build a repository of software-cost-estimation information that NASA managers can retrieve to improve their estimates of the costs of developing software for their projects. COCOMOST implements a methodology, called '2cee', in which a unique combination of well-known pre-existing data-mining and software-development-effort-estimation techniques is used to increase the accuracy of estimates. COCOMOST utilizes multiple models to analyze historical data pertaining to software-development projects and performs an exhaustive data-mining search over the space of model parameters to improve the performance of effort-estimation models. Thus, it is possible to both calibrate and generate estimates at the same time. COCOMOST is written in the C language for execution in the UNIX operating system.

  18. The Development of a Model for Estimating the Costs Associated with the Delivery of a Metals Cluster Program.

    ERIC Educational Resources Information Center

    Hunt, Charles R.

    A study developed a model to assist school administrators to estimate costs associated with the delivery of a metals cluster program at Norfolk State College, Virginia. It sought to construct the model so that costs could be explained as a function of enrollment levels. Data were collected through a literature review, computer searches of the…

  19. Optimal estimation and scheduling in aquifer management using the rapid feedback control method

    NASA Astrophysics Data System (ADS)

    Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric

    2017-12-01

    Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observations in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem, to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term the Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to basic LQG control, whose computational cost scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm, with small and controllable losses in the accuracy of the state and parameter estimation.

  20. COSTMODL: An automated software development cost estimation tool

    NASA Technical Reports Server (NTRS)

    Roush, George B.

    1991-01-01

    The cost of developing computer software continues to consume an increasing portion of many organizations' total budgets, both in the public and private sectors. As this trend develops, the capability to produce reliable estimates of the effort and schedule required to develop a candidate software product takes on increasing importance. The COSTMODL program was developed to provide an in-house capability to perform development cost estimates for NASA software projects. COSTMODL is an automated software development cost estimation tool which incorporates five cost estimation algorithms, including the latest models for the Ada language and incrementally developed products. The principal characteristic which sets COSTMODL apart from other software cost estimation programs is its capacity to be completely customized to a particular environment. The estimation equations can be recalibrated to reflect the programmer productivity characteristics demonstrated by the user's organization, and the set of significant factors which affect software development costs can be customized to reflect any unique properties of the user's development environment. Careful use of a capability such as COSTMODL can significantly reduce the risk of cost overruns and failed projects.
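
    The recalibration idea highlighted above can be sketched with a generic COCOMO-style power law, Effort = A * KLOC^B * EAF: regressing log effort against log size on an organization's historical projects recalibrates A and B. The historical data below are made up for illustration; COSTMODL's actual equations and factor set are not reproduced here.

    ```python
    import numpy as np

    # Made-up historical projects: (size in KLOC, product of effort multipliers, person-months).
    history = np.array([
        [12.0, 1.10, 55.0],
        [30.0, 0.95, 115.0],
        [64.0, 1.20, 320.0],
        [8.0,  0.90, 30.0],
        [45.0, 1.05, 200.0],
    ])

    # Recalibrate A and B in Effort = A * KLOC**B * EAF by linear regression in log space:
    # log(Effort / EAF) = log(A) + B * log(KLOC)
    kloc, eaf, effort = history.T
    X = np.column_stack([np.ones_like(kloc), np.log(kloc)])
    y = np.log(effort / eaf)
    (logA, B), *_ = np.linalg.lstsq(X, y, rcond=None)
    A = np.exp(logA)

    def estimate_effort(kloc_new, eaf_new):
        """Person-month estimate from the recalibrated power-law model."""
        return A * kloc_new ** B * eaf_new

    print(f"calibrated A = {A:.2f}, B = {B:.2f}")
    print(f"estimate for a 25 KLOC product with EAF 1.1: "
          f"{estimate_effort(25, 1.1):.0f} person-months")
    ```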

  1. Computer program to perform cost and weight analysis of transport aircraft. Volume 2: Technical volume

    NASA Technical Reports Server (NTRS)

    1973-01-01

    An improved method for estimating aircraft weight and cost using a unique and fundamental approach was developed. The results of this study were integrated into a comprehensive digital computer program, which is intended for use at the preliminary design stage of aircraft development. The program provides a means of computing absolute values for weight and cost, and enables the user to perform trade studies with a sensitivity to detail design and overall structural arrangement. Both batch and interactive graphics modes of program operation are available.

  2. Model implementation for dynamic computation of system cost

    NASA Astrophysics Data System (ADS)

    Levri, J.; Vaccari, D.

    The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in Matlab/Simulink for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water and air blocks in the Simulink model. Dynamic simulations of the life support system track mass flow, volume and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed, based upon simulation responses. Ultimately, computed ESM values for various system architectures will feed into an optimization search (non-derivative) algorithm to predict parameter combinations that result in reduced objective function values.
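
    A compact sketch of an ESM-style calculation in the spirit described above: each subsystem's mass, volume, power, cooling, and crew-time requirements are converted to mass units with equivalency factors, and peak (rather than nominal) power and cooling from a simulated profile drive the sizing. The equivalency factors and profiles are placeholders, not ALS-reported values.

    ```python
    import numpy as np

    # Hypothetical mass-equivalency factors (kg per unit of each resource).
    EQUIV = {"volume_m3": 66.7, "power_kW": 83.0, "cooling_kW": 60.0, "crewtime_hr_per_wk": 1.0}

    def esm(mass_kg, volume_m3, power_profile_kW, cooling_profile_kW, crewtime_hr_per_wk):
        """Equivalent System Mass using peak power/cooling from a dynamic simulation profile."""
        return (mass_kg
                + volume_m3 * EQUIV["volume_m3"]
                + np.max(power_profile_kW) * EQUIV["power_kW"]      # peak, not nominal, sizing
                + np.max(cooling_profile_kW) * EQUIV["cooling_kW"]
                + crewtime_hr_per_wk * EQUIV["crewtime_hr_per_wk"])

    # Example: a water-recovery subsystem whose power spikes during a processing cycle.
    t = np.linspace(0, 24, 241)                                 # one day, 0.1-h resolution
    power = 0.8 + 0.6 * (np.sin(2 * np.pi * t / 24) > 0.95)     # baseline plus short peaks
    cooling = power * 0.9                                       # assume cooling tracks power
    print(f"ESM = {esm(450.0, 2.5, power, cooling, 3.0):,.0f} kg-equivalent")
    ```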

  3. Joint Direct Attack Munition (JDAM)

    DTIC Science & Technology

    2015-12-01

    ...February 19, 2015, and the O&S cost is based on an ICE dated August 28, 2014. Confidence Level of cost estimate for current APB: 50%. A mathematically derived confidence level was not computed for this Life-Cycle Cost Estimate (LCCE). This LCCE represents the expected value, taking into consideration relevant risks, including ordinary levels of external and unforeseen events. It aims to provide sufficient resources to execute the...

  4. Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments

    NASA Astrophysics Data System (ADS)

    Tsuji, Daisuke; Suyama, Kenji

    This paper presents a novel method for moving sound source localization and its performance evaluation in actual room environments. The method is based on MUSIC (MUltiple SIgnal Classification), which is one of the highest-resolution localization methods. When using MUSIC, computation of the eigenvectors of a correlation matrix is required for the estimation, which often incurs a high computational cost. Especially in the situation of a moving source, this becomes a crucial drawback because the estimation must be conducted at every observation time. Moreover, since the correlation matrix varies its characteristics due to spatial-temporal non-stationarity, the matrix has to be estimated using only a few observed samples, which degrades the estimation accuracy. In this paper, the PAST (Projection Approximation Subspace Tracking) is applied to sequentially estimate the eigenvectors spanning the subspace. In the PAST, eigen-decomposition is not required, and therefore it is possible to reduce the computational costs. Several experimental results in actual room environments are shown to demonstrate the superior performance of the proposed method.

  5. Metal surface corrosion grade estimation from single image

    NASA Astrophysics Data System (ADS)

    Chen, Yijun; Qi, Lin; Sun, Huyuan; Fan, Hao; Dong, Junyu

    2018-04-01

    Metal corrosion can cause many problems, so how to quickly and effectively assess the grade of metal corrosion and remediate it in a timely manner is a very important issue. Typically, this is done by trained surveyors at great cost. Assisting them in the inspection process through computer vision and artificial intelligence would decrease the inspection cost. In this paper, we propose a dataset of metal surface corrosion for computer vision detection and present a comparison, on this dataset, between standard computer vision techniques using OpenCV and a deep learning method for automatic metal surface corrosion grade estimation from a single image.

  6. 24 CFR 15.110 - What fees will HUD charge?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... duplicating machinery. The computer run time includes the cost of operating a central processing unit for that... Applies. (6) Computer run time (includes only mainframe search time not printing) The direct cost of... estimated fee is more than $250.00 or you have a history of failing to pay FOIA fees to HUD in a timely...

  7. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard

    Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.

  8. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE PAGES

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard; ...

    2017-06-06

    Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.

  9. Costs of rearing children in agricultural economies: an alternative estimation approach and findings from rural Bangladesh.

    PubMed

    Khan, M M; Magnani, R J; Mock, N B; Saadat, Y S

    1993-03-01

    There are changes in child costs during demographic transition. This study examines household time allocation from 66 agricultural households in 3 villages in Tangail District in rural north central Bangladesh in 1984-85 (371 days). Component and total child-rearing costs are estimated in alternative ways. Conventional "opportunity wage" measures are considered overestimated. The methodological shortcomings of direct cost accounting procedures and consumer demand methods in computing the time cost and monetary cost of child rearing are pointed out. In this study's alternative computation, age-standardized equivalent costs are generated. Child food consumption costs were generated from a large national survey conducted in 1983. Nonfood expenditures were estimated by food to nonfood expenditure ratios taken from the aforementioned survey. For estimating breast-feeding costs, an estimate was produced based on the assumption that costs for infant food consumption were a fixed proportion of food costs for older children. Land ownership groups were set up to reflect socioeconomic status: 1) landless households, 2) marginal farm households with 1 acre or .4 hectares of land, 3) middle income households with 1-2 acres of land, 4) upper middle income households with 2-4 acres of land, and 5) upper income or rich households with over 4 acres of land. The nonmarket wage rate for hired household help was used to determine the value of cooking, fetching water, and household cleaning and repairing. The results confirm the low costs of child rearing in high fertility societies. Productive nonmarket activities are effective in subsidizing the costs of children. The addition of a child into households already with children has a low impact on the time costs of children; "this economies of scale effect is estimated ... at 20%." The highest relative costs were found in the lowest income households, and the lowest costs were in the highest income households. 5% of total household income is devoted to child rearing in the lowest income households compared to 1% of income in the highest income households. The implications are that fertility decline is more directly related to structural changes in the economy, satisfaction of existing demand for family planning, and the production of additional demand for fertility control.

  10. Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.; Zubair, Mohammad

    1993-01-01

    The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube is documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can effectively be parallelized on a distributed-memory parallel machine. By increasing the number of processors, nearly ideal linear speedups are achieved with nonoptimized routines; slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. This slower-than-linear speedup results because the Fast Fourier Transform (FFT) routine dominates the computational cost and exhibits less-than-ideal speedups. However, with the machine-dependent routines the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the amount of Cray supercomputer single-processor time to complete a comparable simulation; however, it is estimated that a subgrid-scale model, which reduces the required number of grid points and turns the simulation into a large-eddy simulation (PSLES), would reduce the computational cost and memory requirements by a factor of 10 over the PSDNS. This PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.

  11. Technology Estimating 2: A Process to Determine the Cost and Schedule of Space Technology Research and Development

    NASA Technical Reports Server (NTRS)

    Cole, Stuart K.; Wallace, Jon; Schaffer, Mark; May, M. Scott; Greenberg, Marc W.

    2014-01-01

    As a leader in space technology research and development, NASA is continuing the development of the Technology Estimating process, initiated in 2012, for estimating the cost and schedule of low-maturity technology research and development, where the Technology Readiness Level is less than TRL 6. NASA's Technology Roadmaps comprise 14 technology areas. The focus of this continuing Technology Estimating effort included four Technology Areas (TA): TA3 Space Power and Energy Storage, TA4 Robotics, TA8 Instruments, and TA12 Materials, to confine the research to the most abundant data pool. This research report continues the development of technology estimating efforts completed during 2013-2014 and addresses the refinement of the parameters selected and recommended for use in the estimating process, where the parameters developed are applicable to the Cost Estimating Relationships (CERs) used in parametric cost estimating analysis. This research addresses the architecture for administration of the Technology Cost and Scheduling Estimating tool, the parameters suggested for a computer software adjunct to any technology area, and the identification of gaps in the Technology Estimating process.

  12. Hydrogen from coal cost estimation guidebook

    NASA Technical Reports Server (NTRS)

    Billings, R. E.

    1981-01-01

    In an effort to establish baseline information whereby specific projects can be evaluated, a current set of parameters which are typical of coal gasification applications was developed. Using these parameters a computer model allows researchers to interrelate cost components in a sensitivity analysis. The results make possible an approximate estimation of hydrogen energy economics from coal, under a variety of circumstances.

  13. Estimation of optimal educational cost per medical student.

    PubMed

    Yang, Eunbae B; Lee, Seunghee

    2009-09-01

    This study aims to estimate the optimal educational cost per medical student. A private medical college in Seoul was targeted by the study, and its 2006 learning environment and data from the 2003~2006 budget and settlement were carefully analyzed. Through interviews with 3 medical professors and 2 experts in the economics of education, the study attempted to establish an educational cost estimation model, which yields an empirically computed estimate of the optimal cost per student in a medical college. The estimation model was based primarily upon the educational cost, which consisted of direct educational costs (47.25%), support costs (36.44%), fixed asset purchases (11.18%), and costs for student affairs (5.14%). These results indicate that the optimal cost per student is approximately 20,367,000 won each semester; thus, training a doctor costs 162,936,000 won over 4 years. Consequently, we inferred that the tuition levels of a local medical college or professional medical graduate school cover one quarter to one half of the per-student cost. The findings of this study do not necessarily imply an increase in medical college tuition; the estimation of the per-student cost of training a doctor is one matter, and the issue of who should bear this burden is another. For further study, we should consider the college type and its location for general application of the estimation method, in addition to living expenses and opportunity costs.
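
    The totals quoted above follow from simple arithmetic; the sketch below recombines the reported component shares and the per-semester estimate (treating the eight-semester, four-year program as given) to show how the four-year total and the tuition-coverage range are obtained.

    ```python
    # Recomputing the totals quoted in the abstract from its component shares.
    cost_per_semester = 20_367_000          # won, per student (from the abstract)
    semesters = 8                           # 4 years of medical college
    components = {"direct educational": 0.4725, "support": 0.3644,
                  "fixed asset purchases": 0.1118, "student affairs": 0.0514}

    total = cost_per_semester * semesters
    print(f"total cost of training one doctor: {total:,} won")   # 162,936,000 won
    for name, share in components.items():
        print(f"  {name:22s} {share * cost_per_semester:>13,.0f} won/semester")

    # Tuition is said to cover one quarter to one half of the per-student cost:
    for coverage in (0.25, 0.5):
        print(f"implied tuition at {coverage:.0%} coverage: "
              f"{coverage * cost_per_semester:,.0f} won/semester")
    ```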

  14. Costs of fire suppression forces based on cost-aggregation approach

    Treesearch

    Armando González-Cabán; Charles W. McKetta; Thomas J. Mills

    1984-01-01

    A cost-aggregation approach has been developed for determining the cost of Fire Management Inputs (FMIs)-the direct fireline production units (personnel and equipment) used in initial attack and large-fire suppression activities. All components contributing to an FMI are identified, computed, and summed to estimate hourly costs. This approach can be applied to any FMI...
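
    A minimal sketch of the cost-aggregation idea: every component contributing to a fire management input (personnel, equipment ownership and operation, support items) is costed per hour and summed. The line items and rates below are invented for illustration, not taken from the paper.

    ```python
    # Hypothetical hourly cost of one Fire Management Input (e.g., a 20-person hand crew
    # with a transport vehicle), built up by aggregating its cost components.

    fmi_components = [
        # (description, quantity, hourly rate in dollars)
        ("crew member wages + benefits", 20, 22.50),
        ("crew supervisor",               1, 35.00),
        ("transport vehicle ownership",   1, 12.00),   # depreciation spread over use hours
        ("transport vehicle operation",   1, 18.00),   # fuel, maintenance
        ("hand tools and PPE",           20,  1.25),   # prorated replacement cost
    ]

    hourly_cost = sum(qty * rate for _, qty, rate in fmi_components)
    print(f"aggregated FMI cost: ${hourly_cost:,.2f} per hour")
    print(f"cost of a 14-hour initial-attack shift: ${hourly_cost * 14:,.2f}")
    ```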

  15. Using System Mass (SM), Equivalent Mass (EM), Equivalent System Mass (ESM) or Life Cycle Mass (LCM) in Advanced Life Support (ALS) Reporting

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2003-01-01

    The Advanced Life Support (ALS) Program has used a single number, Equivalent System Mass (ESM), for both reporting progress and technology selection. ESM is the launch mass required to provide a space system. ESM indicates launch cost. ESM alone is inadequate for technology selection, which should include other metrics such as Technology Readiness Level (TRL) and Life Cycle Cost (LCC) and also consider performance and risk. ESM has proven difficult to implement as a reporting metric, partly because it includes non-mass technology selection factors. Since it will not be used exclusively for technology selection, a new reporting metric can be made easier to compute and explain. Systems design trades off performance, cost, and risk, but a risk-weighted cost/benefit metric would be too complex to report. Since life support has fixed requirements, different systems usually have roughly equal performance. Risk is important since failure can harm the crew, but it is difficult to treat simply. Cost is not easy to estimate, but preliminary space system cost estimates are usually based on mass, which is better estimated than cost. A mass-based cost estimate, similar to ESM, would be a good single reporting metric. The paper defines and compares four mass-based cost estimates: Equivalent Mass (EM), Equivalent System Mass (ESM), Life Cycle Mass (LCM), and System Mass (SM). EM is traditional in life support and includes mass, volume, power, cooling and logistics. ESM is the specifically defined ALS metric, which adds crew time and possibly other cost factors to EM. LCM is a new metric, a mass-based estimate of LCC measured in mass units. SM includes only the factors of EM that are originally measured in mass: the hardware and logistics mass. All four mass-based metrics usually give similar comparisons. SM is by far the simplest to compute and easiest to explain.

  16. AMDTreat 5.0+ with PHREEQC titration module to compute caustic chemical quantity, effluent quality, and sludge volume

    USGS Publications Warehouse

    Cravotta, Charles A.; Means, Brent P; Arthur, Willam; McKenzie, Robert M; Parkhurst, David L.

    2015-01-01

    Alkaline chemicals are commonly added to discharges from coal mines to increase pH and decrease concentrations of acidity and dissolved aluminum, iron, manganese, and associated metals. The annual cost of chemical treatment depends on the type and quantities of chemicals added and sludge produced. The AMDTreat computer program, initially developed in 2003, is widely used to compute such costs on the basis of the user-specified flow rate and water quality data for the untreated AMD. Although AMDTreat can use results of empirical titration of net-acidic or net-alkaline effluent with caustic chemicals to accurately estimate costs for treatment, such empirical data are rarely available. A titration simulation module using the geochemical program PHREEQC has been incorporated with AMDTreat 5.0+ to improve the capability of AMDTreat to estimate: (1) the quantity and cost of caustic chemicals to attain a target pH, (2) the chemical composition of the treated effluent, and (3) the volume of sludge produced by the treatment. The simulated titration results for selected caustic chemicals (NaOH, CaO, Ca(OH)2, Na2CO3, or NH3) without aeration or with pre-aeration can be compared with or used in place of empirical titration data to estimate chemical quantities, treated effluent composition, sludge volume (precipitated metals plus unreacted chemical), and associated treatment costs. This paper describes the development, evaluation, and potential utilization of the PHREEQC titration module with the new AMDTreat 5.0+ computer program available at http://www.amd.osmre.gov/.

  17. Estimating the economic opportunity cost of water use with river basin simulators in a computationally efficient way

    NASA Astrophysics Data System (ADS)

    Rougé, Charles; Harou, Julien J.; Pulido-Velazquez, Manuel; Matrosov, Evgenii S.

    2017-04-01

    The marginal opportunity cost of water refers to the benefits forgone by not allocating an additional unit of water to its most economically productive use at a specific location in a river basin at a specific moment in time. Estimating the opportunity cost of water is an important contribution to water management: it can be used for better water allocation or better system operation, and it can suggest where future water infrastructure could be most beneficial. Opportunity costs can be estimated using 'shadow values' provided by hydro-economic optimization models. Yet such models' reliance on optimization means they have difficulty accurately representing the impact of operating rules and regulatory and institutional mechanisms on actual water allocation. In this work we use more widely available river basin simulation models to estimate opportunity costs. This has been done before by adding a small quantity of water to the model at the place and time where the opportunity cost should be computed, then running a simulation and comparing the difference in system benefits. The added system benefits per unit of water added to the system then provide an approximation of the opportunity cost. This approximation can then be used to design efficient pricing policies that provide incentives for users to reduce their water consumption. Yet this method requires one simulation run per node and per time step, which is computationally demanding for large-scale systems and short time steps (e.g., a day or a week). Besides, opportunity cost estimates are supposed to reflect the most productive use of an additional unit of water, yet the simulation rules do not necessarily use water that way. In this work, we propose an alternative approach, which computes the opportunity cost through a double backward induction, first recursively from outlet to headwaters within the river network at each time step, then recursively backwards in time. Both backward inductions require only linear operations, and the resulting algorithm tracks the maximal benefit that can be obtained by having an additional unit of water at any node in the network and at any date in time. Results 1) can be obtained from the results of a rule-based simulation using a single post-processing run, and 2) are exactly the (gross) benefit forgone by not allocating an additional unit of water to its most productive use. The proposed method is applied to London's water resource system to track the value of storage in the city's water supply reservoirs on the Thames River throughout a weekly 85-year simulation. Results, obtained in 0.4 seconds on a single processor, reflect the environmental cost of water shortage. This fast computation makes it possible to visualize the seasonal variations of the opportunity cost depending on reservoir levels, demonstrating the potential of this approach for exploring water values and their variations using simulation models with multiple runs (e.g., of stochastically generated plausible future river inflows).
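
    A minimal sketch of the double backward induction described above is given below, under strong simplifying assumptions: a tree-shaped network in which an extra unit of water at a node can be used locally, passed to the node immediately downstream, or (at storage nodes) carried to the next time step without losses. The data structures and names are illustrative, not the authors' implementation.

        # Double backward induction over space (outlet -> headwaters) and time.
        def opportunity_cost(nodes_outlet_first, downstream, mb, T, storable):
            """mb[n][t]: marginal benefit of one extra unit used at node n, step t;
            downstream[n]: next node toward the outlet (None at the outlet);
            storable[n]: True if node n has carry-over storage."""
            oc = {n: [0.0] * T for n in nodes_outlet_first}
            for t in reversed(range(T)):
                for n in nodes_outlet_first:            # outlet is processed before
                    best = mb[n][t]                     # the nodes upstream of it
                    d = downstream.get(n)
                    if d is not None:
                        best = max(best, oc[d][t])      # release the unit downstream
                    if storable.get(n, False) and t + 1 < T:
                        best = max(best, oc[n][t + 1])  # hold the unit in storage
                    oc[n][t] = best
            return oc

        # Tiny example: reservoir "res" feeding demand node "out" over 3 steps.
        mb = {"out": [5.0, 2.0, 8.0], "res": [0.0, 0.0, 0.0]}
        print(opportunity_cost(["out", "res"], {"out": None, "res": "out"},
                               mb, 3, {"res": True}))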

  18. Aircraft ground damage and the use of predictive models to estimate costs

    NASA Astrophysics Data System (ADS)

    Kromphardt, Benjamin D.

    Aircraft are frequently involved in ground damage incidents, and repair costs are often accepted as part of doing business. The Flight Safety Foundation (FSF) estimates ground damage to cost operators $5-10 billion annually. Incident reports, documents from manufacturers or regulatory agencies, and other resources were examined to better understand the problem of ground damage in aviation. Major contributing factors were explained, and two versions of a computer-based model were developed to project costs and show what is possible. One objective was to determine if the models could match the FSF's estimate. Another objective was to better understand cost savings that could be realized by efforts to further mitigate the occurrence of ground incidents. Model effectiveness was limited by access to official data, and assumptions were used if data was not available. However, the models were determined to sufficiently estimate the costs of ground incidents.

  19. Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1985-01-01

    Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.

  20. Sampling schemes and parameter estimation for nonlinear Bernoulli-Gaussian sparse models

    NASA Astrophysics Data System (ADS)

    Boudineau, Mégane; Carfantan, Hervé; Bourguignon, Sébastien; Bazot, Michael

    2016-06-01

    We address the sparse approximation problem in the case where the data are approximated by the linear combination of a small number of elementary signals, each of these signals depending non-linearly on additional parameters. Sparsity is explicitly expressed through a Bernoulli-Gaussian hierarchical model in a Bayesian framework. Posterior mean estimates are computed using Markov Chain Monte-Carlo algorithms. We generalize the partially marginalized Gibbs sampler proposed in the linear case in [1], and build a hybrid Hastings-within-Gibbs algorithm in order to account for the nonlinear parameters. All model parameters are then estimated in an unsupervised procedure. The resulting method is evaluated on a sparse spectral analysis problem. It is shown to converge more efficiently than the classical joint estimation procedure, with only a slight increase of the computational cost per iteration, consequently reducing the global cost of the estimation procedure.

  1. Probabilistic Methodology for Estimation of Number and Economic Loss (Cost) of Future Landslides in the San Francisco Bay Region, California

    USGS Publications Warehouse

    Crovelli, Robert A.; Coe, Jeffrey A.

    2008-01-01

    The Probabilistic Landslide Assessment Cost Estimation System (PLACES) presented in this report estimates the number and economic loss (cost) of landslides during a specified future time in individual areas, and then calculates the sum of those estimates. The analytic probabilistic methodology is based upon conditional probability theory and laws of expectation and variance. The probabilistic methodology is expressed in the form of a Microsoft Excel computer spreadsheet program. Using historical records, the PLACES spreadsheet is used to estimate the number of future damaging landslides and total damage, as economic loss, from future landslides caused by rainstorms in 10 counties of the San Francisco Bay region in California. Estimates are made for any future 5-year period of time. The estimated total number of future damaging landslides for the entire 10-county region during any future 5-year period of time is about 330. Santa Cruz County has the highest estimated number of damaging landslides (about 90), whereas Napa, San Francisco, and Solano Counties have the lowest estimated number of damaging landslides (5–6 each). Estimated direct costs from future damaging landslides for the entire 10-county region for any future 5-year period are about US $76 million (year 2000 dollars). San Mateo County has the highest estimated costs ($16.62 million), and Solano County has the lowest estimated costs (about $0.90 million). Estimated direct costs are also subdivided into public and private costs.
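
    The expectation bookkeeping behind a PLACES-style estimate can be illustrated in a few lines: per-county historical frequencies are scaled to a future exposure period and then summed across counties. The county records below are invented placeholders, not the values used in the report.

        # Expected number and cost of future landslides from historical records.
        def county_estimate(hist_count, hist_years, hist_cost, future_years=5):
            rate = hist_count / hist_years            # damaging landslides per year
            cost_per_event = hist_cost / hist_count   # average loss per event
            n_future = rate * future_years
            return n_future, n_future * cost_per_event

        counties = {                                  # (events, record years, total loss $)
            "County A": (150, 25, 30e6),
            "County B": (20, 25, 3e6),
        }
        per_county = {k: county_estimate(*v) for k, v in counties.items()}
        print(per_county)
        print(sum(n for n, _ in per_county.values()),
              sum(c for _, c in per_county.values()))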

  2. Harvesting costs for management planning for ponderosa pine plantations.

    Treesearch

    Roger D. Fight; Alex Gicqueau; Bruce R. Hartsough

    1999-01-01

    The PPHARVST computer application is Windows-based, public-domain software used to estimate harvesting costs for management planning for ponderosa pine (Pinus ponderosa Dougl. ex Laws.) plantations. The equipment production rates were developed from existing studies. Equipment cost rates were based on 1996 prices for new...

  3. The Macintosh Based Design Studio.

    ERIC Educational Resources Information Center

    Earle, Daniel W., Jr.

    1988-01-01

    Describes the configuration of a workstation for a college design studio based on the Macintosh Plus microcomputer. Highlights include cost estimates, computer hardware peripherals, computer aided design software, networked studios, and potentials for new approaches to design activity in the computer based studio of the future. (Author/LRW)

  4. Advanced space power requirements and techniques. Task 1: Mission projections and requirements. Volume 3: Appendices. [cost estimates and computer programs

    NASA Technical Reports Server (NTRS)

    Wolfe, M. G.

    1978-01-01

    Contents: (1) general study guidelines and assumptions; (2) launch vehicle performance and cost assumptions; (3) satellite programs 1959 to 1979; (4) initiative mission and design characteristics; (5) satellite listing; (6) spacecraft design model; (7) spacecraft cost model; (8) mission cost model; and (9) nominal and optimistic budget program cost summaries.

  5. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE PAGES

    Butler, Troy; Wildey, Timothy

    2018-01-01

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
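
    The reliability idea above can be sketched in a few lines: evaluate the surrogate and its error estimate for every sample, keep the surrogate wherever the error bound cannot change which side of the threshold the sample falls on, and fall back to the high-fidelity model only for the remaining samples. The function names are placeholders, not the authors' code.

        import numpy as np

        def probability_of_event(samples, surrogate, error_estimate,
                                 high_fidelity, threshold):
            q = np.array([surrogate(x) for x in samples])
            e = np.array([error_estimate(x) for x in samples])
            unreliable = np.abs(q - threshold) <= np.abs(e)   # bound straddles threshold
            for i in np.where(unreliable)[0]:
                q[i] = high_fidelity(samples[i])              # evaluate only where needed
            return np.mean(q > threshold)                     # P(event) estimate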

  6. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, Troy; Wildey, Timothy

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.

  7. 40 CFR 57.803 - Issuance of tentative determination; notice.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... determination. (1) The EPA staff shall formulate and prepare: (i) A “Staff Computational Analysis,” using the... Computational Analysis, discussing the estimated cost of interim controls, and assessing the effect upon the...

  8. 40 CFR 57.803 - Issuance of tentative determination; notice.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... determination. (1) The EPA staff shall formulate and prepare: (i) A “Staff Computational Analysis,” using the... Computational Analysis, discussing the estimated cost of interim controls, and assessing the effect upon the...

  9. 40 CFR 57.803 - Issuance of tentative determination; notice.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... determination. (1) The EPA staff shall formulate and prepare: (i) A “Staff Computational Analysis,” using the... Computational Analysis, discussing the estimated cost of interim controls, and assessing the effect upon the...

  10. 40 CFR 57.803 - Issuance of tentative determination; notice.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... determination. (1) The EPA staff shall formulate and prepare: (i) A “Staff Computational Analysis,” using the... Computational Analysis, discussing the estimated cost of interim controls, and assessing the effect upon the...

  11. 76 FR 21393 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-15

    ... startup cost components or annual operation, maintenance, and purchase of service components. You should describe the methods you use to estimate major cost factors, including system and technology acquisition.... Capital and startup costs include, among other items, computers and software you purchase to prepare for...

  12. 76 FR 25367 - Agency Information Collection Activities: Proposed Collection, Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-04

    ... startup cost components or annual operation, maintenance, and purchase of service components. You should describe the methods you use to estimate major cost factors, including system and technology acquisition.... Capital and startup costs include, among other items, computers and software you purchase to prepare for...

  13. 76 FR 5192 - BOEMRE Information Collection Activity: 1010-0170-Coastal Impact Assistance Program (CIAP...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-28

    ... disclose this information, you should comment and provide your total capital and startup cost components or... use to estimate major cost factors, including system and technology acquisition, expected useful life... startup costs include, among other items, computers and software you purchase to prepare for collecting...

  14. [Methodologies for estimating the indirect costs of traffic accidents].

    PubMed

    Carozzi, Soledad; Elorza, María Eugenia; Moscoso, Nebel Silvana; Ripari, Nadia Vanina

    2017-01-01

    Traffic accidents generate multiple costs to society, including those associated with the loss of productivity. However, there is no consensus about the most appropriate methodology for estimating those costs. The aim of this study was to review methods for estimating indirect costs applied in crash cost studies. A thematic review of the literature between 1995 and 2012 was carried out in PubMed with the terms cost of illness, indirect cost, road traffic injuries, and productivity loss. For the assessment of costs we used the human capital method, on the basis of the wage income lost during the time of treatment and recovery of patients and caregivers. In the case of premature death or total disability, a discount rate was applied to obtain the present value of lost future earnings. The years counted were obtained by subtracting from life expectancy at birth the average age of those affected who had not yet entered economically active life. The interest in minimizing the problem is reflected in the evolution of the implemented methodologies. We expect that this review will be useful for efficiently estimating the real indirect costs of traffic accidents.
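
    The human-capital calculation described above reduces to a discounted sum of lost future earnings between the victim's (working) age and retirement. The sketch below is a generic illustration; the wage, discount rate, and age limits are assumptions, not values from the reviewed studies.

        # Present value of earnings lost to premature death or total disability.
        def present_value_lost_earnings(annual_wage, age, retirement_age=65,
                                        working_start_age=18, discount_rate=0.03):
            first_lost_year = max(age, working_start_age) - age
            last_lost_year = retirement_age - age
            return sum(annual_wage / (1 + discount_rate) ** (t + 1)
                       for t in range(first_lost_year, last_lost_year))

        print(round(present_value_lost_earnings(12000, 30), 2))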

  15. Examining the Feasibility and Utility of Estimating Partial Expected Value of Perfect Information (via a Nonparametric Approach) as Part of the Reimbursement Decision-Making Process in Ireland: Application to Drugs for Cancer.

    PubMed

    McCullagh, Laura; Schmitz, Susanne; Barry, Michael; Walsh, Cathal

    2017-11-01

    In Ireland, all new drugs for which reimbursement by the healthcare payer is sought undergo a health technology assessment by the National Centre for Pharmacoeconomics. The National Centre for Pharmacoeconomics estimate expected value of perfect information but not partial expected value of perfect information (owing to computational expense associated with typical methodologies). The objective of this study was to examine the feasibility and utility of estimating partial expected value of perfect information via a computationally efficient, non-parametric regression approach. This was a retrospective analysis of evaluations on drugs for cancer that had been submitted to the National Centre for Pharmacoeconomics (January 2010 to December 2014 inclusive). Drugs were excluded if cost effective at the submitted price. Drugs were excluded if concerns existed regarding the validity of the applicants' submission or if cost-effectiveness model functionality did not allow required modifications to be made. For each included drug (n = 14), value of information was estimated at the final reimbursement price, at a threshold equivalent to the incremental cost-effectiveness ratio at that price. The expected value of perfect information was estimated from probabilistic analysis. Partial expected value of perfect information was estimated via a non-parametric approach. Input parameters with a population value at least €1 million were identified as potential targets for research. All partial estimates were determined within minutes. Thirty parameters (across nine models) each had a value of at least €1 million. These were categorised. Collectively, survival analysis parameters were valued at €19.32 million, health state utility parameters at €15.81 million and parameters associated with the cost of treating adverse effects at €6.64 million. Those associated with drug acquisition costs and with the cost of care were valued at €6.51 million and €5.71 million, respectively. This research demonstrates that the estimation of partial expected value of perfect information via this computationally inexpensive approach could be considered feasible as part of the health technology assessment process for reimbursement purposes within the Irish healthcare system. It might be a useful tool in prioritising future research to decrease decision uncertainty.
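
    The nonparametric-regression estimator referred to above can be approximated as follows: regress each option's net benefit from the probabilistic sensitivity analysis on the parameter of interest, then compare the mean of the per-draw maxima of the fitted values with the maximum of the mean net benefits. The sketch uses a simple polynomial fit as a stand-in smoother and invented array names; it is not the National Centre for Pharmacoeconomics' implementation.

        import numpy as np

        def evppi_single_parameter(theta, net_benefit, degree=3):
            """theta: (N,) PSA draws of the parameter of interest;
            net_benefit: (N, K) net benefit of each of K options per draw."""
            fitted = np.column_stack([
                np.polyval(np.polyfit(theta, net_benefit[:, k], degree), theta)
                for k in range(net_benefit.shape[1])
            ])
            # E_theta[max_k E[NB_k | theta]] - max_k E[NB_k], per person
            return fitted.max(axis=1).mean() - net_benefit.mean(axis=0).max()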

  16. A combined registration and finite element analysis method for fast estimation of intraoperative brain shift; phantom and animal model study.

    PubMed

    Mohammadi, Amrollah; Ahmadian, Alireza; Rabbani, Shahram; Fattahi, Ehsan; Shirani, Shapour

    2017-12-01

    Finite element models for estimation of intraoperative brain shift suffer from huge computational cost. In these models, image registration and finite element analysis are two time-consuming processes. The proposed method is an improved version of our previously developed Finite Element Drift (FED) registration algorithm. In this work the registration process is combined with the finite element analysis. In the Combined FED (CFED), the deformation of whole brain mesh is iteratively calculated by geometrical extension of a local load vector which is computed by FED. While the processing time of the FED-based method including registration and finite element analysis was about 70 s, the computation time of the CFED was about 3.2 s. The computational cost of CFED is almost 50% less than similar state of the art brain shift estimators based on finite element models. The proposed combination of registration and structural analysis can make the calculation of brain deformation much faster. Copyright © 2016 John Wiley & Sons, Ltd.

  17. A cost-utility analysis of the use of preoperative computed tomographic angiography in abdomen-based perforator flap breast reconstruction.

    PubMed

    Offodile, Anaeze C; Chatterjee, Abhishek; Vallejo, Sergio; Fisher, Carla S; Tchou, Julia C; Guo, Lifei

    2015-04-01

    Computed tomographic angiography is a diagnostic tool increasingly used for preoperative vascular mapping in abdomen-based perforator flap breast reconstruction. This study compared the use of computed tomographic angiography and the conventional practice of Doppler ultrasonography only in postmastectomy reconstruction using a cost-utility model. Following a comprehensive literature review, a decision analytic model was created using the three most clinically relevant health outcomes in free autologous breast reconstruction with computed tomographic angiography versus Doppler ultrasonography only. Cost and utility estimates for each health outcome were used to derive the quality-adjusted life-years and incremental cost-utility ratio. One-way sensitivity analysis was performed to scrutinize the robustness of the authors' results. Six studies and 782 patients were identified. Cost-utility analysis revealed a baseline cost savings of $3179 and a gain in quality-adjusted life-years of 0.25. This yielded an incremental cost-utility ratio of -$12,716, implying a dominant choice favoring preoperative computed tomographic angiography. Sensitivity analysis revealed that computed tomographic angiography was costlier when the operative time difference between the two techniques was less than 21.3 minutes. However, the clinical advantage of computed tomographic angiography over Doppler ultrasonography only meant that computed tomographic angiography would remain the cost-effective option even if it offered no additional operating time advantage. The authors' results show that computed tomographic angiography is a cost-effective technology for identifying lower abdominal perforators for autologous breast reconstruction. Although the perfect study would be a randomized controlled trial of the two approaches with true cost accrual, the authors' results represent the best available evidence.
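
    The incremental cost-utility ratio reported above is a single division; the arithmetic below simply reproduces it from the figures in the abstract.

        # Incremental cost-utility ratio: (incremental cost) / (incremental QALYs).
        incremental_cost = -3179.0   # CTA saves $3,179 per patient vs. Doppler only
        incremental_qaly = 0.25      # QALY gain with CTA
        icur = incremental_cost / incremental_qaly
        print(icur)                  # -12716.0 dollars per QALY (a dominant result)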

  18. Direct medical cost of overweight and obesity in the United States: a quantitative systematic review

    PubMed Central

    Tsai, Adam Gilden; Williamson, David F.; Glick, Henry A.

    2010-01-01

    Objectives: To estimate per-person and aggregate direct medical costs of overweight and obesity and to examine the effect of study design factors. Methods: PubMed (1968–2009), EconLit (1969–2009), and Business Source Premier (1995–2009) were searched for original studies. Results were standardized to compute the incremental cost per overweight person and per obese person, and to compute the national aggregate cost. Results: A total of 33 U.S. studies met review criteria. Among the 4 highest quality studies, the 2008 per-person direct medical cost of overweight was $266 and of obesity was $1723. The aggregate national cost of overweight and obesity combined was $113.9 billion. Study design factors that affected cost estimates included: use of national samples versus more selected populations; age groups examined; inclusion of all medical costs versus obesity-related costs only; and BMI cutoffs for defining overweight and obesity. Conclusions: Depending on the source of total national health care expenditures used, the direct medical cost of overweight and obesity combined is approximately 5.0% to 10% of U.S. health care spending. Future studies should include nationally representative samples, evaluate adults of all ages, report all medical costs, and use standard BMI cutoffs. PMID:20059703

  19. Software for Tracking Costs of Mars Projects

    NASA Technical Reports Server (NTRS)

    Wong, Alvin; Warfield, Keith

    2003-01-01

    The Mars Cost Tracking Model is a computer program that administers a system set up for tracking the costs of future NASA projects that pertain to Mars. Previously, no such tracking system existed, and documentation was written in a variety of formats and scattered in various places. It was difficult to justify costs or even track the history of costs of a spacecraft mission to Mars. The present software enables users to maintain all cost-model definitions, documentation, and justifications of cost estimates in one computer system that is accessible via the Internet. The software provides sign-off safeguards to ensure the reliability of information entered into the system. This system may eventually be used to track the costs of projects other than only those that pertain to Mars.

  20. Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.

    PubMed

    Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P

    2010-12-22

    Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource-Roundup-using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
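
    The job-ordering idea described above can be sketched with a simple runtime model and longest-processing-time assignment to workers, so that paid instance time is not left idle. The runtime coefficients and the scheduling rule below are illustrative assumptions, not the published Roundup model.

        import heapq

        def predict_runtime(size_a, size_b, a=1e-6, b=60.0):
            """Hypothetical runtime model from the two genome sizes (seconds)."""
            return a * size_a * size_b + b

        def schedule(jobs, n_workers):
            """jobs: list of (name, size_a, size_b); returns worker -> job names."""
            loads = [(0.0, w) for w in range(n_workers)]
            heapq.heapify(loads)
            assignment = {w: [] for w in range(n_workers)}
            # Longest jobs first, always onto the currently least-loaded worker.
            for name, sa, sb in sorted(jobs, key=lambda j: -predict_runtime(j[1], j[2])):
                load, w = heapq.heappop(loads)
                assignment[w].append(name)
                heapq.heappush(loads, (load + predict_runtime(sa, sb), w))
            return assignment

        print(schedule([("a-b", 4e6, 5e6), ("a-c", 4e6, 9e6), ("b-c", 5e6, 9e6)], 2))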

  1. Research without billing data. Econometric estimation of patient-specific costs.

    PubMed

    Barnett, P G

    1997-06-01

    This article describes a method for computing the cost of care provided to individual patients in health care systems that do not routinely generate billing data, but gather information on patient utilization and total facility costs. Aggregate data on cost and utilization were used to estimate how costs vary with characteristics of patients and facilities of the US Department of Veterans Affairs. A set of cost functions was estimated, taking advantage of the department-level organization of the data. Casemix measures were used to determine the costs of acute hospital and long-term care. Hospitalization for medical conditions cost an average of $5,642 per US Health Care Financing Administration diagnosis-related group weight; surgical hospitalizations cost $11,836. Nursing home care cost $197.33 per day, intermediate care cost $280.66 per day, psychiatric care cost $307.33 per day, and domiciliary care cost $111.84 per day. Outpatient visits cost an average of $90.36. These estimates include the cost of physician services. The econometric method presented here accounts for variation in resource use caused by casemix that is not reflected in length of stay and for the effects of medical education, research, facility size, and wage rates. Data on non-Veteran's Affairs hospital stays suggest that the method accounts for 40% of the variation in acute hospital care costs and is superior to cost estimates based on length of stay or diagnosis-related group weight alone.
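
    The unit costs reported above can be combined into a patient-level cost in the obvious way; the utilization profile in the example is hypothetical.

        # Cost out a patient's annual utilization from the reported unit costs.
        UNIT_COSTS = {
            "medical_drg_weight": 5642.00,    # $ per DRG weight, medical stay
            "surgical_drg_weight": 11836.00,  # $ per DRG weight, surgical stay
            "nursing_home_day": 197.33,
            "outpatient_visit": 90.36,
        }

        def patient_cost(utilization):
            return sum(UNIT_COSTS[k] * v for k, v in utilization.items())

        print(patient_cost({"medical_drg_weight": 1.2,
                            "outpatient_visit": 8,
                            "nursing_home_day": 14}))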

  2. Systems engineering and integration: Cost estimation and benefits analysis

    NASA Technical Reports Server (NTRS)

    Dean, ED; Fridge, Ernie; Hamaker, Joe

    1990-01-01

    Space Transportation Avionics hardware and software cost has traditionally been estimated in Phase A and B using cost techniques which predict cost as a function of various cost predictive variables such as weight, lines of code, functions to be performed, quantities of test hardware, quantities of flight hardware, design and development heritage, complexity, etc. The output of such analyses has been life cycle costs, economic benefits and related data. The major objectives of Cost Estimation and Benefits analysis are twofold: (1) to play a role in the evaluation of potential new space transportation avionics technologies, and (2) to benefit from emerging technological innovations. Both aspects of cost estimation and technology are discussed here. The role of cost analysis in the evaluation of potential technologies should be one of offering additional quantitative and qualitative information to aid decision-making. The cost analyses process needs to be fully integrated into the design process in such a way that cost trades, optimizations and sensitivities are understood. Current hardware cost models tend to primarily use weights, functional specifications, quantities, design heritage and complexity as metrics to predict cost. Software models mostly use functionality, volume of code, heritage and complexity as cost descriptive variables. Basic research needs to be initiated to develop metrics more responsive to the trades which are required for future launch vehicle avionics systems. These would include cost estimating capabilities that are sensitive to technological innovations such as improved materials and fabrication processes, computer aided design and manufacturing, self checkout and many others. In addition to basic cost estimating improvements, the process must be sensitive to the fact that no cost estimate can be quoted without also quoting a confidence associated with the estimate. In order to achieve this, better cost risk evaluation techniques are needed as well as improved usage of risk data by decision-makers. More and better ways to display and communicate cost and cost risk to management are required.

  3. Estimation of the sensitive volume for gravitational-wave source populations using weighted Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Tiwari, Vaibhav

    2018-07-01

    The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. These injections, which are simulated gravitational waveforms, are then searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. The sensitive volume is estimated, by Monte Carlo (MC) integration, from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR, and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, due to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, to the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate sensitive volume using single-detector sensitivity, the method is accurate within statistical errors, comes at no added cost, and requires minimal computational resources.
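
    A minimal sketch of the weighted Monte Carlo estimate is given below: each recovered injection is weighted by the ratio of the target population density to the injection density, so one injection campaign serves many population models. Variable names and the normalization convention are illustrative assumptions.

        import numpy as np

        def sensitive_volume(v_injected, p_population, p_injection, found):
            """p_population, p_injection: densities evaluated at each injection's
            parameters; found: boolean array, True if the injection crossed the
            chosen SNR threshold; v_injected: astrophysical volume of the campaign."""
            w = p_population / p_injection
            vt = v_injected * np.sum(w * found) / len(w)
            err = v_injected * np.std(w * found) / np.sqrt(len(w))   # MC error
            return vt, err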

  4. Drug development costs when financial risk is measured using the Fama-French three-factor model.

    PubMed

    Vernon, John A; Golec, Joseph H; Dimasi, Joseph A

    2010-08-01

    In a widely cited article, DiMasi, Hansen, and Grabowski (2003) estimate the average pre-tax cost of bringing a new molecular entity to market. Their base case estimate, excluding post-marketing studies, was $802 million (in $US 2000). Strikingly, almost half of this cost (or $399 million) is the cost of capital (COC) used to fund clinical development expenses to the point of FDA marketing approval. The authors used an 11% real COC computed using the capital asset pricing model (CAPM). But the CAPM is a single-factor risk model, and multi-factor risk models are the current state of the art in finance. Using the Fama-French three-factor model, we find the cost of drug development to be higher than the earlier estimate. Copyright (c) 2009 John Wiley & Sons, Ltd.

  5. Use of multispectral data in design of forest sample surveys

    NASA Technical Reports Server (NTRS)

    Titus, S. J.; Wensel, L. C.

    1977-01-01

    The use of multispectral data in design of forest sample surveys using a computer software package is described. The system allows evaluation of a number of alternative sampling systems and, with appropriate cost data, estimates the implementation cost for each.

  6. Use of multispectral data in design of forest sample surveys

    NASA Technical Reports Server (NTRS)

    Titus, S. J.; Wensel, L. C.

    1977-01-01

    The use of multispectral data in design of forest sample surveys using a computer software package, WILLIAM, is described. The system allows evaluation of a number of alternative sampling systems and, with appropriate cost data, estimates the implementation cost for each.

  7. Computer-Aided Surgical Simulation in Head and Neck Reconstruction: A Cost Comparison among Traditional, In-House, and Commercial Options.

    PubMed

    Li, Sean S; Copeland-Halperin, Libby R; Kaminsky, Alexander J; Li, Jihui; Lodhi, Fahad K; Miraliakbari, Reza

    2018-06-01

     Computer-aided surgical simulation (CASS) has redefined surgery, improved precision and reduced the reliance on intraoperative trial-and-error manipulations. CASS is provided by third-party services; however, it may be cost-effective for some hospitals to develop in-house programs. This study provides the first cost analysis comparison among traditional (no CASS), commercial CASS, and in-house CASS for head and neck reconstruction.  The costs of three-dimensional (3D) pre-operative planning for mandibular and maxillary reconstructions were obtained from an in-house CASS program at our large tertiary care hospital in Northern Virginia, as well as a commercial provider (Synthes, Paoli, PA). A cost comparison was performed among these modalities and extrapolated in-house CASS costs were derived. The calculations were based on estimated CASS use with cost structures similar to our institution and sunk costs were amortized over 10 years.  Average operating room time was estimated at 10 hours, with an average of 2 hours saved with CASS. The hourly cost to the hospital for the operating room (including anesthesia and other ancillary costs) was estimated at $4,614/hour. Per case, traditional cases were $46,140, commercial CASS cases were $40,951, and in-house CASS cases were $38,212. Annual in-house CASS costs were $39,590.  CASS reduced operating room time, likely due to improved efficiency and accuracy. Our data demonstrate that hospitals with similar cost structure as ours, performing greater than 27 cases of 3D head and neck reconstructions per year can see a financial benefit from developing an in-house CASS program. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  8. Valuing the Economic Costs of Allergic Rhinitis, Acute Bronchitis, and Asthma from Exposure to Indoor Dampness and Mold in the US

    PubMed Central

    2016-01-01

    Two foundational methods for estimating the total economic burden of disease are cost of illness (COI) and willingness to pay (WTP). WTP measures the full cost to society, but WTP estimates are difficult to compute and rarely available. COI methods are more often used but less likely to reflect full costs. This paper attempts to estimate the full economic cost (2014$) of illnesses resulting from exposure to dampness and mold using COI methods and WTP where the data is available. A limited sensitivity analysis of alternative methods and assumptions demonstrates a wide potential range of estimates. In the final estimates, the total annual cost to society attributable to dampness and mold is estimated to be $3.7 (2.3–4.7) billion for allergic rhinitis, $1.9 (1.1–2.3) billion for acute bronchitis, $15.1 (9.4–20.6) billion for asthma morbidity, and $1.7 (0.4–4.5) billion for asthma mortality. The corresponding costs from all causes, not limited to dampness and mold, using the same approach would be $24.8 billion for allergic rhinitis, $13.5 billion for acute bronchitis, $94.5 billion for asthma morbidity, and $10.8 billion for asthma mortality. PMID:27313630

  9. Gradient-based stochastic estimation of the density matrix

    NASA Astrophysics Data System (ADS)

    Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton

    2018-03-01

    Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)_ij decay rapidly with distance r_ij between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales like S^(-(d+2)/(2d)), where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.

  10. Productivity associated with visual status of computer users.

    PubMed

    Daum, Kent M; Clore, Katherine A; Simms, Suzanne S; Vesely, Jon W; Wilczek, Dawn D; Spittle, Brian M; Good, Greg W

    2004-01-01

    The aim of this project is to examine the potential connection between the astigmatic refractive corrections of subjects using computers and their productivity and comfort. We hypothesize that improving the visual status of subjects using computers results in greater productivity, as well as improved visual comfort. Inclusion criteria required subjects 19 to 30 years of age with complete vision examinations before being enrolled. Using a double-masked, placebo-controlled, randomized design, subjects completed three experimental tasks calculated to assess the effects of refractive error on productivity (time to completion and the number of errors) at a computer. The tasks resembled those commonly undertaken by computer users and involved visual search tasks of: (1) counties and populations; (2) nonsense word search; and (3) a modified text-editing task. Estimates of productivity for time to completion varied from a minimum of 2.5% upwards to 28.7% with 2 D cylinder miscorrection. Assuming a conservative estimate of an overall 2.5% increase in productivity with appropriate astigmatic refractive correction, our data suggest a favorable cost-benefit ratio of at least 2.3 for the visual correction of an employee (total cost 268 dollars) with a salary of 25,000 dollars per year. We conclude that astigmatic refractive error affected both productivity and visual comfort under the conditions of this experiment. These data also suggest a favorable cost-benefit ratio for employers who provide computer-specific eyewear to their employees.
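
    The cost-benefit figure quoted above follows directly from the stated assumptions, as the short calculation below shows.

        # Benefit-cost ratio of providing an appropriate astigmatic correction.
        salary = 25000.0            # annual salary, dollars
        productivity_gain = 0.025   # conservative 2.5% productivity increase
        correction_cost = 268.0     # total cost of the visual correction
        print(round(salary * productivity_gain / correction_cost, 1))   # ~2.3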

  11. Estimating and validating ground-based timber harvesting production through computer simulation

    Treesearch

    Jingxin Wang; Chris B. LeDoux

    2003-01-01

    Estimating ground-based timber harvesting systems production with an object oriented methodology was investigated. The estimation model developed generates stands of trees, simulates chain saw, drive-to-tree feller-buncher, swing-to-tree single-grip harvester felling, and grapple skidder and forwarder extraction activities, and analyzes costs and productivity. It also...

  12. An extended Kalman filter approach to non-stationary Bayesian estimation of reduced-order vocal fold model parameters.

    PubMed

    Hadwin, Paul J; Peterson, Sean D

    2017-04-01

    The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique at dramatically lower computational cost. Based upon the test cases explored, the EKF is comparable in terms of accuracy to the particle filter technique when greater than 6000 particles are employed; if fewer particles are employed, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by 2 orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.
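
    For readers unfamiliar with the estimator, a generic extended Kalman filter predict/update step is sketched below; the vocal fold model itself (the state transition f, observation h, and their Jacobians F and H) is not reproduced here and would have to be supplied.

        import numpy as np

        def ekf_step(x, P, z, f, F, h, H, Q, R):
            """One EKF cycle: x, P are the state estimate and covariance,
            z the new measurement, Q and R the process and measurement noise."""
            # Predict
            x_pred = f(x)
            P_pred = F(x) @ P @ F(x).T + Q
            # Update
            y = z - h(x_pred)                            # innovation
            S = H(x_pred) @ P_pred @ H(x_pred).T + R     # innovation covariance
            K = P_pred @ H(x_pred).T @ np.linalg.inv(S)  # Kalman gain
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
            return x_new, P_new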

  13. Annual economic impacts of seasonal influenza on US counties: Spatial heterogeneity and patterns

    PubMed Central

    2012-01-01

    Economic impacts of seasonal influenza vary across US counties, but little estimation has been conducted at the county level. This research computed annual economic costs of seasonal influenza for 3143 US counties based on Census 2010, identified inherent spatial patterns, and investigated cost-benefits of vaccination strategies. The computing model modified existing methods for national level estimation, and further emphasized spatial variations between counties, in terms of population size, age structure, influenza activity, and income level. Upon such a model, four vaccination strategies that prioritize different types of counties were simulated and their net returns were examined. The results indicate that the annual economic costs of influenza varied from $13.9 thousand to $957.5 million across US counties, with a median of $2.47 million. Prioritizing vaccines to counties with high influenza attack rates produces the lowest influenza cases and highest net returns. This research fills the current knowledge gap by downscaling the estimation to a county level, and adds spatial variability into studies of influenza economics and interventions. Compared to the national estimates, the presented statistics and maps will offer detailed guidance for local health agencies to fight against influenza. PMID:22594494

  14. Sensitivity analysis of add-on price estimate for select silicon wafering technologies

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1982-01-01

    The cost of producing wafers from silicon ingots is a major component of the add-on price of silicon sheet. Economic analyses of the add-on price estimates, and their sensitivity, are presented for internal-diameter (ID) sawing, multiblade slurry (MBS) sawing, and the fixed-abrasive slicing technique (FAST). Interim price estimation guidelines (IPEG) are used for estimating a process add-on price. Sensitivity analysis of price is performed with respect to cost parameters such as equipment, space, direct labor, materials (blade life), and utilities, and production parameters such as slicing rate, slices per centimeter, and process yield, using a computer program specifically developed to do sensitivity analysis with IPEG. The results aid in identifying the important cost parameters and assist in deciding the direction of technology development efforts.

  15. Saving Energy and Money: A Lesson in Computer Power Management

    ERIC Educational Resources Information Center

    Lazaros, Edward J.; Hua, David

    2012-01-01

    In this activity, students will develop an understanding of the economic impact of technology by estimating the cost savings of power management strategies in the classroom. Students will learn how to adjust computer display settings to influence the impact that the computer has on the financial burden to the school. They will use mathematics to…

  16. Cost Estimation Techniques for C3I System Software.

    DTIC Science & Technology

    1984-07-01

    ...opment manmonth have been determined for maxi, midi, and mini type computers. Small to median size timeshared developments used 0.2 to 1.5 hours... development schedule 1.23 1.00 1.10 ... 2.1.3 Detailed Model. The final codification of the COCOMO regressions was the development of separate effort... regardless of the software structure level being estimated: D8VC -- the expected development computer (maxi, midi, mini, micro); MODE -- the expected...

  17. ABC estimation of unit costs for emergency department services.

    PubMed

    Holmes, R L; Schroeder, R E

    1996-04-01

    Rapid evolution of the health care industry forces managers to make cost-effective decisions. Typical hospital cost accounting systems do not provide emergency department managers with the information needed, but emergency department settings are so complex and dynamic as to make the more accurate activity-based costing (ABC) system prohibitively expensive. Through judicious use of the available traditional cost accounting information and simple computer spreadsheets, managers may approximate the decision-guiding information that would result from the much more costly and time-consuming implementation of ABC.

  18. Utilizing Expert Knowledge in Estimating Future STS Costs

    NASA Technical Reports Server (NTRS)

    Fortner, David B.; Ruiz-Torres, Alex J.

    2004-01-01

    A method of estimating the costs of future space transportation systems (STSs) involves classical activity-based cost (ABC) modeling combined with systematic utilization of the knowledge and opinions of experts to extend the process-flow knowledge of existing systems to systems that involve new materials and/or new architectures. The expert knowledge is particularly helpful in filling gaps that arise in computational models of processes because of inconsistencies in historical cost data. Heretofore, the costs of planned STSs have been estimated following a "top-down" approach that tends to force the architectures of new systems to incorporate process flows like those of the space shuttles. In this ABC-based method, one makes assumptions about the processes, but otherwise follows a "bottom-up" approach that does not force the new system architecture to incorporate a space-shuttle-like process flow. Prototype software has been developed to implement this method. Through further development of the software, it should be possible to extend the method beyond the space program to almost any setting in which there is a need to estimate the costs of a new system and to extend the applicable knowledge base in order to make the estimate.

  19. Benefit-Cost Analysis of TAT Phase I Worker Training. Training and Technology Project. Special Report.

    ERIC Educational Resources Information Center

    Kirby, Frederick C.; Castagna, Paul A.

    The purpose of this study is to estimate costs and benefits and to compute alternative benefit-cost ratios for both the individuals and the Federal Government as a result of investing time and resources in the Training and Technology (TAT) Project. TAT is a continuing experimental program in training skilled workers for private industry. The five…

  20. ENGINEERING ECONOMIC ANALYSIS OF A PROGRAM FOR ARTIFICIAL GROUNDWATER RECHARGE.

    USGS Publications Warehouse

    Reichard, Eric G.; Bredehoeft, John D.

    1984-01-01

    This study describes and demonstrates two alternate methods for evaluating the relative costs and benefits of artificial groundwater recharge using percolation ponds. The first analysis considers the benefits to be the reduction of pumping lifts and land subsidence; the second considers benefits as the alternative costs of a comparable surface delivery system. Example computations are carried out for an existing artificial recharge program in Santa Clara Valley in California. A computer groundwater model is used to estimate both the average long term and the drought period effects of artificial recharge in the study area. Results indicate that the costs of artificial recharge are considerably smaller than the alternative costs of an equivalent surface system. Refs.

  1. Permittivity and conductivity parameter estimations using full waveform inversion

    NASA Astrophysics Data System (ADS)

    Serrano, Jheyston O.; Ramirez, Ana B.; Abreo, Sergio A.; Sadler, Brian M.

    2018-04-01

    Full waveform inversion of Ground Penetrating Radar (GPR) data is a promising strategy to estimate quantitative characteristics of the subsurface such as permittivity and conductivity. In this paper, we propose a methodology that uses Full Waveform Inversion (FWI) in the time domain of 2D GPR data to obtain highly resolved images of the permittivity and conductivity parameters of the subsurface. FWI is an iterative method that requires a cost function to measure the misfit between observed and modeled data, a wave propagator to compute the modeled data, and an initial velocity model that is updated at each iteration until an acceptable decrease of the cost function is reached. The use of FWI with GPR is computationally expensive because it is based on the computation of full electromagnetic wave propagation. Also, the commercially available acquisition systems use only one transmitter and one receiver antenna at zero offset, requiring a large number of shots to scan a single line.

  2. Robust, Adaptive Radar Detection and Estimation

    DTIC Science & Technology

    2015-07-21

    ...cost function is not a convex function in R, we apply a transformation of variables, i.e., let X = σ^2 R^(-1) and S′ = (1/σ^2) S. Then, the revised cost function in... Σ_{i=1} v_i v_i^H. We apply this inverse covariance matrix in computing the SINR as well as the estimator variance. • Rank Constrained Maximum Likelihood: Our... even as almost all available training samples are corrupted. Probability of Detection vs. SNR: We apply three test statistics, the normalized matched...

  3. MER-DIMES : a planetary landing application of computer vision

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Johnson, Andrew; Matthies, Larry

    2005-01-01

    During the Mars Exploration Rovers (MER) landings, the Descent Image Motion Estimation System (DIMES) was used for horizontal velocity estimation. The DIMES algorithm combines measurements from a descent camera, a radar altimeter and an inertial measurement unit. To deal with large changes in scale and orientation between descent images, the algorithm uses altitude and attitude measurements to rectify image data to level ground plane. Feature selection and tracking is employed in the rectified data to compute the horizontal motion between images. Differences of motion estimates are then compared to inertial measurements to verify correct feature tracking. DIMES combines sensor data from multiple sources in a novel way to create a low-cost, robust and computationally efficient velocity estimation solution, and DIMES is the first use of computer vision to control a spacecraft during planetary landing. In this paper, the detailed implementation of the DIMES algorithm and the results from the two landings on Mars are presented.

  4. Multi-level methods and approximating distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E.

    2016-07-15

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
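
    A minimal sketch of the multi-level estimator for a single statistic E[Q] is given below; coupling the coarse and fine simulators correctly is the essential (and hard) part and is only represented here by a callable passed in by the user. Names and structure are illustrative.

        import numpy as np

        def mlmc_estimate(sample_level0, sample_correction, n_samples):
            """sample_level0() -> one crude (cheap, biased) sample of Q;
            sample_correction(l) -> one coupled sample of Q_l - Q_{l-1};
            n_samples: samples per level, typically decreasing with level."""
            est = np.mean([sample_level0() for _ in range(n_samples[0])])
            for level, n in enumerate(n_samples[1:], start=1):
                est += np.mean([sample_correction(level) for _ in range(n)])
            return est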

  5. The effect of bovine somatotropin on the cost of producing milk: Estimates using propensity scores.

    PubMed

    Tauer, Loren W

    2016-04-01

    Annual farm-level data from New York dairy farms from the years 1994 through 2013 were used to estimate the cost effect from bovine somatotropin (bST) using propensity score matching. Cost of production was computed using the whole-farm method, which subtracts sales of crops and animals from total costs under the assumption that the cost of producing those products is equal to their sales values. For a farm to be included in this data set, milk receipts on that farm must have comprised 85% or more of total receipts, indicating that these farms are primarily milk producers. Farm use of bST, where 25% or more of the herd was treated, ranged annually from 25 to 47% of the farms. The average cost effect from the use of bST was estimated to be a reduction of $2.67 per 100 kg of milk produced in 2013 dollars, although annual cost reduction estimates ranged from statistical zero to $3.42 in nominal dollars. Nearest neighbor matching techniques generated a similar estimate of $2.78 in 2013 dollars. These cost reductions estimated from the use of bST represented a cost savings of 5.5% per kilogram of milk produced. Herd-level production increase per cow from the use of bST over 20 yr averaged 1,160 kg. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
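
    As a rough illustration of the estimation strategy (not the paper's whole-farm accounting or panel data), the sketch below generates synthetic farm records, fits a propensity score by logistic regression, and estimates the treatment effect on cost by nearest-neighbour matching; every variable and coefficient is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical illustration of propensity-score matching for a treatment cost effect.
rng = np.random.default_rng(1)
n = 2000
herd_size = rng.normal(100, 30, n)            # covariates (hypothetical)
milk_yield = rng.normal(9000, 1500, n)
X = np.column_stack([herd_size, milk_yield])
X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize for the logistic fit

# Larger herds are more likely to adopt the technology (selection on observables).
p_true = 1 / (1 + np.exp(-(herd_size - 100) / 20))
treated = rng.random(n) < p_true

# Cost per 100 kg milk: treatment lowers it by 2.5 units on average (hypothetical).
cost = 40 - 0.02 * (herd_size - 100) - 2.5 * treated + rng.normal(0, 3, n)

# 1) Estimate propensity scores with logistic regression.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2) Match each treated unit to the control with the closest score (1-NN, with replacement).
ctrl_idx = np.where(~treated)[0]
effects = []
for i in np.where(treated)[0]:
    j = ctrl_idx[np.argmin(np.abs(ps[ctrl_idx] - ps[i]))]
    effects.append(cost[i] - cost[j])

print(f"Estimated cost effect (ATT): {np.mean(effects):+.2f}  (true -2.50)")
```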

  6. Simple calculator to estimate the medical cost of diabetes in sub-Saharan Africa

    PubMed Central

    Alouki, Koffi; Delisle, Hélène; Besançon, Stéphane; Baldé, Naby; Sidibé-Traoré, Assa; Drabo, Joseph; Djrolo, François; Mbanya, Jean-Claude; Halimi, Serge

    2015-01-01

    AIM: To design a medical cost calculator and show that diabetes care is beyond the reach of the majority, particularly patients with complications. METHODS: Out-of-pocket expenditures of patients for medical treatment of type-2 diabetes were estimated based on price data collected in Benin, Burkina Faso, Guinea and Mali. A detailed protocol for realistic medical care of diabetes and its complications in the African context was defined. Care components were based on existing guidelines, published data and clinical experience. Prices were obtained in public and private health facilities. The cost calculator was implemented in Excel. The cost for basic management of uncomplicated diabetes was calculated per person and per year. Incremental costs were also computed per annum for chronic complications and per episode for acute complications. RESULTS: Wide variations in estimated care costs were observed among countries and between the public and private healthcare systems. The minimum estimated cost for the treatment of uncomplicated diabetes (in the public sector) would amount to 21%-34% of the country’s gross national income per capita, 26%-47% in the presence of retinopathy, and above 70% for nephropathy, the most expensive complication. CONCLUSION: The study provided objective evidence for the exorbitant medical cost of diabetes considering that no medical insurance is available in the study countries. Although the calculator only estimates the cost of inaction, it is innovative and of interest for several stakeholders. PMID:26617974
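
    The arithmetic behind such a calculator is straightforward to sketch. The prices, quantities and GNI figure below are hypothetical placeholders, not the survey data collected in the four study countries.

```python
# Minimal sketch of a medical cost calculator for diabetes care.
# All unit prices and quantities below are hypothetical placeholders, not the
# country-specific price data collected for the published calculator.
BASIC_CARE = {            # item: (annual quantity, unit price in USD)
    "metformin_month":   (12, 4.0),
    "consultation":      (4, 10.0),
    "hba1c_test":        (2, 15.0),
    "glucose_strip_box": (6, 8.0),
}
COMPLICATIONS = {         # incremental annual cost per complication (hypothetical)
    "retinopathy": 180.0,
    "nephropathy": 900.0,
}

def annual_cost(complications=()):
    base = sum(q * p for q, p in BASIC_CARE.values())
    extra = sum(COMPLICATIONS[c] for c in complications)
    return base + extra

gni_per_capita = 800.0    # hypothetical gross national income per capita (USD)
for case in [(), ("retinopathy",), ("nephropathy",)]:
    c = annual_cost(case)
    label = ", ".join(case) or "uncomplicated"
    print(f"{label}: ${c:.0f}/yr = {100*c/gni_per_capita:.0f}% of GNI per capita")
```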

  7. A Computer Program to Evaluate Timber Production Investments Under Uncertainty

    Treesearch

    Dennis L. Schweitzer

    1968-01-01

    A computer program has been written in Fortran IV to calculate probability distributions of present worths of investments in timber production. Inputs can include both point and probabilistic estimates of future costs, prices, and yields. Distributions of rates of return can also be constructed.
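
    A present-worth distribution of this kind can be reproduced with a few lines of Monte Carlo sampling; the sketch below uses invented cost, price and yield distributions rather than the program's Fortran IV inputs.

```python
import numpy as np

# Sketch of a probabilistic present-worth calculation for a timber investment.
# Cash-flow timing, distributions, and parameter values are hypothetical.
rng = np.random.default_rng(2)
n_draws, rate, horizon = 10000, 0.05, 30   # discount rate and rotation length (years)

establishment_cost = rng.normal(300, 40, n_draws)        # $/acre at year 0
annual_cost = rng.normal(10, 2, n_draws)                 # $/acre/yr
yield_mbf = rng.normal(12, 3, n_draws)                   # thousand board feet/acre at harvest
price_mbf = rng.lognormal(np.log(250), 0.25, n_draws)    # $/thousand board feet

pv_costs = establishment_cost + annual_cost * sum((1 + rate) ** -t for t in range(1, horizon + 1))
pv_revenue = yield_mbf * price_mbf * (1 + rate) ** -horizon
present_worth = pv_revenue - pv_costs

print(f"mean PW: ${present_worth.mean():.0f}/acre")
print(f"P(PW < 0): {np.mean(present_worth < 0):.2%}")
print("5th/95th percentiles:", np.percentile(present_worth, [5, 95]).round(0))
```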

  8. 34 CFR 682.401 - Basic program agreement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... or converting the records relating to its existing guaranty portfolio to an information or computer... that owns or controls the agency's existing information or computer system. If the agency is soliciting... must include a concise description of the agency's conversion project and the actual or estimated cost...

  9. 34 CFR 682.401 - Basic program agreement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... or converting the records relating to its existing guaranty portfolio to an information or computer... that owns or controls the agency's existing information or computer system. If the agency is soliciting... must include a concise description of the agency's conversion project and the actual or estimated cost...

  10. 34 CFR 682.401 - Basic program agreement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... or converting the records relating to its existing guaranty portfolio to an information or computer... that owns or controls the agency's existing information or computer system. If the agency is soliciting... must include a concise description of the agency's conversion project and the actual or estimated cost...

  11. 34 CFR 682.401 - Basic program agreement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... or converting the records relating to its existing guaranty portfolio to an information or computer... that owns or controls the agency's existing information or computer system. If the agency is soliciting... must include a concise description of the agency's conversion project and the actual or estimated cost...

  12. Near-Field Source Localization by Using Focusing Technique

    NASA Astrophysics Data System (ADS)

    He, Hongyang; Wang, Yide; Saillard, Joseph

    2008-12-01

    We discuss two fast algorithms to localize multiple sources in the near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure to reduce computation cost. We then present a focusing-based method which does not require a symmetric array configuration. By using the focusing technique, the near-field signal model is transformed into a model possessing the same structure as in the far-field situation, which allows bearing estimation with well-studied far-field methods. With the estimated bearing, the range estimate of each source is then obtained using the 1D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared via Monte Carlo simulations and against the Cramér-Rao bound. Unlike other near-field algorithms, these two approaches require neither high computational cost nor high-order statistics.
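
    For readers unfamiliar with the spectral search being compressed, the sketch below implements standard far-field 1D MUSIC on a simulated uniform linear array; the near-field focusing transformation itself is not reproduced, and all scenario parameters are invented.

```python
import numpy as np

# Far-field 1D MUSIC sketch for a uniform linear array (illustrative only; the
# near-field focusing step described above is not implemented here).
rng = np.random.default_rng(3)
M, d, snapshots = 8, 0.5, 200          # sensors, spacing in wavelengths, snapshots
true_doas = np.deg2rad([-20.0, 15.0])  # radians

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(np.atleast_1d(theta)))

A = steering(true_doas)                                  # M x K
S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
noise = 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
Xdata = A @ S + noise

R = Xdata @ Xdata.conj().T / snapshots                   # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)                     # ascending eigenvalues
En = eigvecs[:, :M - 2]                                  # noise subspace (K = 2 sources)

grid = np.deg2rad(np.linspace(-90, 90, 1801))
a = steering(grid)                                       # M x G
spectrum = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)

# Report the two largest local peaks of the MUSIC spectrum.
is_peak = (spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:])
peak_idx = np.where(is_peak)[0] + 1
top2 = peak_idx[np.argsort(spectrum[peak_idx])[-2:]]
print("estimated DOAs (deg):", np.rad2deg(np.sort(grid[top2])).round(1))
```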

  13. Array distribution in data-parallel programs

    NASA Technical Reports Server (NTRS)

    Chatterjee, Siddhartha; Gilbert, John R.; Schreiber, Robert; Sheffler, Thomas J.

    1994-01-01

    We consider distribution at compile time of the array data in a distributed-memory implementation of a data-parallel program written in a language like Fortran 90. We allow dynamic redistribution of data and define a heuristic algorithmic framework that chooses distribution parameters to minimize an estimate of program completion time. We represent the program as an alignment-distribution graph. We propose a divide-and-conquer algorithm for distribution that initially assigns a common distribution to each node of the graph and successively refines this assignment, taking computation, realignment, and redistribution costs into account. We explain how to estimate the effect of distribution on computation cost and how to choose a candidate set of distributions. We present the results of an implementation of our algorithms on several test problems.

  14. Cost-Effective Cloud Computing: A Case Study Using the Comparative Genomics Tool, Roundup

    PubMed Central

    Kudtarkar, Parul; DeLuca, Todd F.; Fusaro, Vincent A.; Tonellato, Peter J.; Wall, Dennis P.

    2010-01-01

    Background Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource—Roundup—using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Methods Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon’s Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. Results We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon’s computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure. PMID:21258651
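
    The scheduling idea (order jobs by predicted runtime and keep workers evenly loaded) can be sketched independently of Roundup or Elastic MapReduce. The runtime model, job names and worker count below are invented for illustration.

```python
import heapq

# Hypothetical sketch: order genome-comparison jobs by predicted runtime and pack
# them onto a fixed number of cloud workers (longest-processing-time-first).
# The runtime model and its coefficients are invented; the published work fits
# its own model to genome size and complexity.
def predicted_runtime(size_a_mb, size_b_mb):
    return 0.8 * (size_a_mb * size_b_mb) ** 0.5 + 2.0   # minutes (hypothetical)

jobs = [("gA-gB", 3.1, 4.0), ("gA-gC", 3.1, 12.5), ("gB-gC", 4.0, 12.5),
        ("gA-gD", 3.1, 1.2), ("gC-gD", 12.5, 1.2), ("gB-gD", 4.0, 1.2)]

n_workers = 2
# Longest predicted jobs first, then always assign to the least-loaded worker.
ordered = sorted(jobs, key=lambda j: predicted_runtime(j[1], j[2]), reverse=True)
heap = [(0.0, w) for w in range(n_workers)]   # (current load in minutes, worker id)
heapq.heapify(heap)
assignment = {w: [] for w in range(n_workers)}
for name, a, b in ordered:
    load, w = heapq.heappop(heap)
    assignment[w].append(name)
    heapq.heappush(heap, (load + predicted_runtime(a, b), w))

makespan = max(load for load, _ in heap)
print(assignment)
print(f"estimated makespan: {makespan:.1f} min")
```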

  15. Enhancing Groundwater Cost Estimation with the Interpolation of Water Tables across the United States

    NASA Astrophysics Data System (ADS)

    Rosli, A. U. M.; Lall, U.; Josset, L.; Rising, J. A.; Russo, T. A.; Eisenhart, T.

    2017-12-01

    Analyzing the trends in water use and supply across the United States is fundamental to efforts in ensuring water sustainability. As part of this, estimating the costs of producing or obtaining water (water extraction) and the correlation with water use is an important aspect in understanding the underlying trends. This study estimates groundwater costs by interpolating the depth to water level across the US in each county. We use Ordinary and Universal Kriging, accounting for the differences between aquifers. Kriging generates a best linear unbiased estimate at each location and has been widely used to map groundwater surfaces (Alley, 1993). The spatial covariates included in the universal Kriging were land-surface elevation as well as aquifer information. The average water table is computed for each county using block kriging to obtain a national map of groundwater cost, which we compare with survey estimates of depth to the water table performed by the USDA. Groundwater extraction costs were then assumed to be proportional to water table depth. Beyond estimating the water cost, the approach can provide an indication of groundwater stress by exploring the historical evolution of depth to the water table using time series information between 1960 and 2015. Despite data limitations, we hope to enable a more compelling and meaningful national-level analysis through the quantification of cost and stress for more economically efficient water management.
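
    A minimal ordinary-kriging interpolator conveys the core computation; the variogram model and well data below are invented, and the covariates used in the study's universal kriging are omitted.

```python
import numpy as np

# Minimal ordinary-kriging sketch for depth-to-water interpolation.
# The variogram model and its parameters are hypothetical; the study also used
# covariates (elevation, aquifer) via universal kriging, which is omitted here.
rng = np.random.default_rng(4)
wells = rng.uniform(0, 100, size=(30, 2))                # well locations (km)
depth = 20 + 0.1 * wells[:, 0] + rng.normal(0, 2, 30)    # observed depth to water (m)

def cov(h, sill=6.0, range_km=40.0, nugget=0.5):
    """Covariance from an exponential variogram: C(h) = sill*exp(-h/range)."""
    return np.where(h == 0, sill + nugget, sill * np.exp(-h / range_km))

def ordinary_krige(x0):
    d = np.linalg.norm(wells - x0, axis=1)
    D = np.linalg.norm(wells[:, None, :] - wells[None, :, :], axis=2)
    n = len(wells)
    # Ordinary kriging system with a Lagrange multiplier for the unbiasedness constraint.
    K = np.ones((n + 1, n + 1)); K[:n, :n] = cov(D); K[n, n] = 0.0
    k = np.append(cov(d), 1.0)
    w = np.linalg.solve(K, k)
    return w[:n] @ depth

grid = np.array([[25.0, 25.0], [75.0, 75.0]])
print([round(ordinary_krige(p), 2) for p in grid])
```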

  16. Definition study of land/sea civil user navigational location monitoring systems for NAVSTAR GPS: User requirements and systems concepts

    NASA Technical Reports Server (NTRS)

    Devito, D. M.

    1981-01-01

    A low-cost GPS civil-user mobile terminal whose purchase cost is substantially an order of magnitude less than estimates for the military counterpart is considered with focus on ground station requirements for position monitoring of civil users requiring this capability and the civil user navigation and location-monitoring requirements. Existing survey literature was examined to ascertain the potential users of a low-cost NAVSTAR receiver and to estimate their number, function, and accuracy requirements. System concepts are defined for low cost user equipments for in-situ navigation and the retransmission of low data rate positioning data via a geostationary satellite to a central computing facility.

  17. Cost benefit assessment of NASA remote sensing technology transferred to the State of Georgia

    NASA Technical Reports Server (NTRS)

    Kelly, D. L.; Zimmer, R. P.; Wilkins, R. D.

    1978-01-01

    The benefits involved in the transfer of NASA remote sensing technology to eight Georgia state agencies are identified in quantifiable and qualitative terms, and a value for these benefits is computed by means of an effectiveness analysis. The benefits of the transfer are evaluated by contrasting a baseline scenario without Landsat and an alternative scenario with Landsat. The net present value of the Landsat technology being transferred is estimated at 9.5 million dollars. The estimated value of the transfer is most sensitive to discount rate, the cost of photo acquisition, and the cost of data digitalization. It is estimated that, if the budget is constrained, Landsat could provide data products roughly seven times more frequently than would otherwise be possible.

  18. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    PubMed

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    The present paper aims at presenting a fast and quasi-optimal method of muscle forces estimation: the MusIC method. It consists in interpolating a first estimation in a database generated offline thanks to a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher compared to a classical optimization problem with a relative mean error of 4% on cost function evaluation.

  19. Estimated Costs for Delivery of HIV Antiretroviral Therapy to Individuals with CD4+ T-Cell Counts >350 cells/uL in Rural Uganda.

    PubMed

    Jain, Vivek; Chang, Wei; Byonanebye, Dathan M; Owaraganise, Asiphas; Twinomuhwezi, Ellon; Amanyire, Gideon; Black, Douglas; Marseille, Elliot; Kamya, Moses R; Havlir, Diane V; Kahn, James G

    2015-01-01

    Evidence favoring earlier HIV ART initiation at high CD4+ T-cell counts (CD4>350/uL) has grown, and guidelines now recommend earlier HIV treatment. However, the cost of providing ART to individuals with CD4>350 in Sub-Saharan Africa has not been well estimated. This remains a major barrier to optimal global cost projections for accelerating the scale-up of ART. Our objective was to compute costs of ART delivery to high CD4+count individuals in a typical rural Ugandan health center-based HIV clinic, and use these data to construct scenarios of efficient ART scale-up. Within a clinical study evaluating streamlined ART delivery to 197 individuals with CD4+ cell counts >350 cells/uL (EARLI Study: NCT01479634) in Mbarara, Uganda, we performed a micro-costing analysis of administrative records, ART prices, and time-and-motion analysis of staff work patterns. We computed observed per-person-per-year (ppy) costs, and constructed models estimating costs under several increasingly efficient ART scale-up scenarios using local salaries, lowest drug prices, optimized patient loads, and inclusion of viral load (VL) testing. Among 197 individuals enrolled in the EARLI Study, median pre-ART CD4+ cell count was 569/uL (IQR 451-716). Observed ART delivery cost was $628 ppy at steady state. Models using local salaries and only core laboratory tests estimated costs of $529/$445 ppy (+/-VL testing, respectively). Models with lower salaries, lowest ART prices, and optimized healthcare worker schedules reduced costs by $100-200 ppy. Costs in a maximally efficient scale-up model were $320/$236 ppy (+/- VL testing). This included $39 for personnel, $106 for ART, $130/$46 for laboratory tests, and $46 for administrative/other costs. A key limitation of this study is its derivation and extrapolation of costs from one large rural treatment program of high CD4+ count individuals. In a Ugandan HIV clinic, ART delivery costs--including VL testing--for individuals with CD4>350 were similar to estimates from high-efficiency programs. In higher efficiency scale-up models, costs were substantially lower. These favorable costs may be achieved because high CD4+ count patients are often asymptomatic, facilitating more efficient streamlined ART delivery. Our work provides a framework for calculating costs of efficient ART scale-up models using accessible data from specific programs and regions.

  20. Overview of SDCM - The Spacecraft Design and Cost Model

    NASA Technical Reports Server (NTRS)

    Ferebee, Melvin J.; Farmer, Jeffery T.; Andersen, Gregory C.; Flamm, Jeffery D.; Badi, Deborah M.

    1988-01-01

    The Spacecraft Design and Cost Model (SDCM) is a computer-aided design and analysis tool for synthesizing spacecraft configurations, integrating their subsystems, and generating information concerning on-orbit servicing and costs. SDCM uses a bottom-up method in which the cost and performance parameters for subsystem components are first calculated; the model then sums the contributions from individual components in order to obtain an estimate of sizes and costs for each candidate configuration within a selected spacecraft system. An optimum spacecraft configuration can then be selected.
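
    The bottom-up roll-up can be sketched in a few lines; the subsystem list, mass-based cost-estimating relationships and integration wrap factor below are hypothetical placeholders, not SDCM's internal models.

```python
# Bottom-up roll-up sketch in the spirit of a spacecraft design-and-cost model.
# The subsystem list, mass-based cost-estimating relationships, and coefficients
# are all hypothetical placeholders.
SUBSYSTEMS = {            # component: (mass in kg, $K per kg CER coefficient)
    "structure":     (250.0, 30.0),
    "power":         (180.0, 80.0),
    "attitude_ctrl": (60.0, 150.0),
    "thermal":       (40.0, 50.0),
    "comms":         (55.0, 120.0),
}

def configuration_estimate(subsystems, integration_factor=0.25):
    mass = sum(m for m, _ in subsystems.values())
    hw_cost = sum(m * cer for m, cer in subsystems.values())     # $K
    total_cost = hw_cost * (1 + integration_factor)              # add integration/test wrap
    return mass, total_cost

mass, cost = configuration_estimate(SUBSYSTEMS)
print(f"dry mass: {mass:.0f} kg, estimated cost: ${cost/1000:.1f}M")
```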

  1. Calibration and Validation of the Sage Software Cost/Schedule Estimating System to United States Air Force Databases

    DTIC Science & Technology

    1997-09-01

    factor values are identified. For SASET, revised cost estimating relationships are provided (Apgar et al., 1991). A 1991 AFIT thesis by Gerald Ourada... description of the model is a paragraph directly quoted from the user’s manual. This is not to imply that a lack of a thorough analysis indicates... constraints imposed by the system. The effective technology rating is computed from the basic technology rating by the following equation (Apgar et al., 1991

  2. Estimated Costs for Delivery of HIV Antiretroviral Therapy to Individuals with CD4+ T-Cell Counts >350 cells/uL in Rural Uganda

    PubMed Central

    Jain, Vivek; Chang, Wei; Byonanebye, Dathan M.; Owaraganise, Asiphas; Twinomuhwezi, Ellon; Amanyire, Gideon; Black, Douglas; Marseille, Elliot; Kamya, Moses R.; Havlir, Diane V.; Kahn, James G.

    2015-01-01

    Background Evidence favoring earlier HIV ART initiation at high CD4+ T-cell counts (CD4>350/uL) has grown, and guidelines now recommend earlier HIV treatment. However, the cost of providing ART to individuals with CD4>350 in Sub-Saharan Africa has not been well estimated. This remains a major barrier to optimal global cost projections for accelerating the scale-up of ART. Our objective was to compute costs of ART delivery to high CD4+count individuals in a typical rural Ugandan health center-based HIV clinic, and use these data to construct scenarios of efficient ART scale-up. Methods Within a clinical study evaluating streamlined ART delivery to 197 individuals with CD4+ cell counts >350 cells/uL (EARLI Study: NCT01479634) in Mbarara, Uganda, we performed a micro-costing analysis of administrative records, ART prices, and time-and-motion analysis of staff work patterns. We computed observed per-person-per-year (ppy) costs, and constructed models estimating costs under several increasingly efficient ART scale-up scenarios using local salaries, lowest drug prices, optimized patient loads, and inclusion of viral load (VL) testing. Findings Among 197 individuals enrolled in the EARLI Study, median pre-ART CD4+ cell count was 569/uL (IQR 451–716). Observed ART delivery cost was $628 ppy at steady state. Models using local salaries and only core laboratory tests estimated costs of $529/$445 ppy (+/-VL testing, respectively). Models with lower salaries, lowest ART prices, and optimized healthcare worker schedules reduced costs by $100–200 ppy. Costs in a maximally efficient scale-up model were $320/$236 ppy (+/- VL testing). This included $39 for personnel, $106 for ART, $130/$46 for laboratory tests, and $46 for administrative/other costs. A key limitation of this study is its derivation and extrapolation of costs from one large rural treatment program of high CD4+ count individuals. Conclusions In a Ugandan HIV clinic, ART delivery costs—including VL testing—for individuals with CD4>350 were similar to estimates from high-efficiency programs. In higher efficiency scale-up models, costs were substantially lower. These favorable costs may be achieved because high CD4+ count patients are often asymptomatic, facilitating more efficient streamlined ART delivery. Our work provides a framework for calculating costs of efficient ART scale-up models using accessible data from specific programs and regions. PMID:26632823

  3. Massive Photons: An Infrared Regularization Scheme for Lattice QCD+QED.

    PubMed

    Endres, Michael G; Shindler, Andrea; Tiburzi, Brian C; Walker-Loud, André

    2016-08-12

    Standard methods for including electromagnetic interactions in lattice quantum chromodynamics calculations result in power-law finite-volume corrections to physical quantities. Removing these by extrapolation requires costly computations at multiple volumes. We introduce a photon mass to alternatively regulate the infrared, and rely on effective field theory to remove its unphysical effects. Electromagnetic modifications to the hadron spectrum are reliably estimated with a precision and cost comparable to conventional approaches that utilize multiple larger volumes. A significant overall cost advantage emerges when accounting for ensemble generation. The proposed method may benefit lattice calculations involving multiple charged hadrons, as well as quantum many-body computations with long-range Coulomb interactions.

  4. 77 FR 35466 - Pilot Project Grants in Support of Railroad Safety Risk Reduction Programs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-13

    ... mobile telephones and laptop computers. This subpart was codified in response to an increase in the... FRA funding. Applications should include feasibility studies and cost estimates, if completed. FRA will more favorably consider applications that include these types of studies and estimates, as they...

  5. An architecture for efficient gravitational wave parameter estimation with multimodal linear surrogate models

    NASA Astrophysics Data System (ADS)

    O'Shaughnessy, Richard; Blackman, Jonathan; Field, Scott E.

    2017-07-01

    The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to expense in waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. Our study is the first time these models have been used for parameter estimation and one of the first ever parameter estimation calculations with multi-modal numerical relativity waveforms, which include all ℓ ≤ 4 modes. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or the computation of the coalescing-compact-binary detection statistic.

  6. Pharmacologic treatments for dry eye: a worthwhile investment?

    PubMed

    Novack, Gary D

    2002-01-01

    To determine whether investment in a novel pharmacologic agent for the treatment of dry eye would be worthwhile from a financial perspective. Estimates were made of the cost and time required to develop a novel pharmacologic treatment of dry eye and the potential revenues for the product. These estimates were used to compute the value of the investment, adjusting for the time value of money. Development was estimated to cost $42 million and to take 55 months from investigational new drug exemption filing to new drug application approval. The potential market for this treatment was estimated at $542 million per year at year 5. Adding in the cost of development and marketing as well as other costs, net present value was very positive at the 5, 8, 10, and 40% cost of financing. The internal rate of return was 90%. In summary, if there were a successful pharmacologic treatment of dry eye and if a firm could manage the cash flow during the development, then the market potential approaches that of other treatment of chronic ophthalmic conditions (e.g., glaucoma), and it would be a worthwhile investment.
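
    The net-present-value and internal-rate-of-return arithmetic can be sketched with a stand-in cash-flow profile; the yearly figures below are invented to be roughly of the scale quoted and are not the article's financial model.

```python
# NPV/IRR sketch for a drug-development investment. The annual cash-flow profile
# below is a rough hypothetical stand-in (development spend, then ramping revenue),
# not the article's actual financial model.
cash_flows = [-10e6, -16e6, -16e6, 20e6, 60e6, 120e6, 180e6, 220e6]   # years 0..7

def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=0.0, hi=10.0, tol=1e-7):
    """Bisection on the discount rate; assumes NPV changes sign once on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for r in (0.05, 0.08, 0.10, 0.40):
    print(f"NPV at {r:.0%} cost of financing: ${npv(r, cash_flows)/1e6:.1f}M")
print(f"internal rate of return: {irr(cash_flows):.0%}")
```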

  7. Burden of suicide in Poland in 2012: how could it be measured and how big is it?

    PubMed

    Orlewska, Katarzyna; Orlewska, Ewa

    2018-04-01

    The aim of our study was to estimate the health-related and economic burden of suicide in Poland in 2012 and to demonstrate the effects of using different assumptions on the disease burden estimation. Years of life lost (YLL) were calculated by multiplying the number of deaths by the remaining life expectancy. Local expected YLL (LEYLL) and standard expected YLL (SEYLL) were computed using Polish life expectancy tables and WHO standards, respectively. In the base case analysis LEYLL and SEYLL were computed with 3.5 and 0% discount rates, respectively, and no age-weighting. Premature mortality costs were calculated using a human capital approach, with discounting at 5%, and are reported in Polish zloty (PLN) (1 euro = 4.3 PLN). The impact of applying different assumptions on base-case estimates was tested in sensitivity analyses. The total LEYLLs and SEYLLs due to suicide were 109,338 and 279,425, respectively, with 88% attributable to male deaths. The cost of male premature mortality (2,808,854,532 PLN) was substantially higher than for females (177,852,804 PLN). Discounting and age-weighting have a large effect on the base case estimates of LEYLLs. The greatest impact on the estimates of suicide-related premature mortality costs was due to the value of the discount rate. Our findings provide quantitative evidence on the burden of suicide. In our opinion each of the demonstrated methods brings something valuable to the evaluation of the impact of suicide on a given population, but LEYLLs and premature mortality costs estimated according to national guidelines have the potential to be useful for local public health policymakers.
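
    The core YLL and human-capital calculations can be sketched as follows; the age bands, death counts, life expectancies and earnings are hypothetical, not the Polish 2012 data.

```python
# Sketch of discounted years of life lost (YLL) and human-capital mortality costs.
# The age bands, death counts, life expectancies, and earnings are hypothetical.
deaths_by_age = {           # age: (number of deaths, remaining life expectancy in years)
    25: (300, 52.0),
    45: (500, 33.0),
    65: (200, 16.0),
}
annual_earnings = 45000.0   # PLN, hypothetical average
retirement_age = 65

def discounted_years(years, rate):
    """Present value of one life-year per year for `years` years."""
    if rate == 0:
        return years
    return (1 - (1 + rate) ** -years) / rate

yll = sum(n * discounted_years(le, 0.035) for n, le in deaths_by_age.values())

# Human-capital cost: discounted earnings lost until retirement, at a 5% rate.
cost = sum(n * annual_earnings * discounted_years(max(retirement_age - age, 0), 0.05)
           for age, (n, le) in deaths_by_age.items())

print(f"discounted YLL: {yll:,.0f}")
print(f"premature mortality cost: {cost:,.0f} PLN")
```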

  8. Sensitivity analysis of the add-on price estimate for the edge-defined film-fed growth process

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.; Kachare, A. H.

    1981-01-01

    The analysis is in terms of cost parameters and production parameters. The cost parameters include equipment, space, direct labor, materials, and utilities. The production parameters include growth rate, process yield, and duty cycle. A computer program was developed specifically to do the sensitivity analysis.

  9. Evaluation of solar thermal power plants using economic and performance simulations

    NASA Technical Reports Server (NTRS)

    El-Gabawali, N.

    1980-01-01

    An energy cost analysis is presented for central receiver power plants with thermal storage and point focusing power plants with electrical storage. The present approach is based on optimizing the size of the plant to give the minimum energy cost (in mills/kWe hr) of an annual plant energy production. The optimization is done by considering the trade-off between the collector field size and the storage capacity for a given engine size. The energy cost is determined by the plant cost and performance. The performance is estimated by simulating the behavior of the plant under typical weather conditions. Plant capital and operational costs are estimated based on the size and performance of different components. This methodology is translated into computer programs for automatic and consistent evaluation.

  10. Product pricing in the Solar Array Manufacturing Industry - An executive summary of SAMICS

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1978-01-01

    Capabilities, methodology, and a description of input data to the Solar Array Manufacturing Industry Costing Standards (SAMICS) are presented. SAMICS were developed to provide a standardized procedure and data base for comparing manufacturing processes of Low-cost Solar Array (LSA) subcontractors, guide the setting of research priorities, and assess the progress of LSA toward its hundred-fold cost reduction goal. SAMICS can be used to estimate the manufacturing costs and product prices and determine the impact of inflation, taxes, and interest rates, but it is limited by its ignoring the effects of the market supply and demand and an assumption that all factories operate in a production line mode. The SAMICS methodology defines the industry structure, hypothetical supplier companies, and manufacturing processes and maintains a body of standardized data which is used to compute the final product price. The input data includes the product description, the process characteristics, the equipment cost factors, and production data for the preparation of detailed cost estimates. Activities validating that SAMICS produced realistic price estimates and cost breakdowns are described.

  11. [Cost analysis for navigation in knee endoprosthetics].

    PubMed

    Cerha, O; Kirschner, S; Günther, K-P; Lützner, J

    2009-12-01

    Total knee arthroplasty (TKA) is one of the most frequent procedures in orthopaedic surgery. The outcome depends on a range of factors including alignment of the leg and the positioning of the implant in addition to patient-associated factors. Computer-assisted navigation systems can improve the restoration of a neutral leg alignment. This procedure has been established especially in Europe and North America. The additional expenses are not reimbursed in the German DRG system (Diagnosis Related Groups). In the present study a cost analysis of computer-assisted TKA compared to the conventional technique was performed. The acquisition expenses of various navigation systems (5 and 10 year depreciation), annual costs for maintenance and software updates as well as the accompanying costs per operation (consumables, additional operating time) were considered. The additional operating time was determined on the basis of a meta-analysis according to the current literature. Situations with 25, 50, 100, 200 and 500 computer-assisted TKAs per year were simulated. The amount of the incremental costs of the computer-assisted TKA depends mainly on the annual volume and the additional operating time. A relevant decrease of the incremental costs was detected between 50 and 100 procedures per year. In a model with 100 computer-assisted TKAs per year an additional operating time of 14 mins and a 10 year depreciation of the investment costs, the incremental expenses amount to 300-395 depending on the navigation system. Computer-assisted TKA is associated with additional costs. From an economical point of view an amount of more than 50 procedures per year appears to be favourable. The cost-effectiveness could be estimated if long-term results will show a reduction of revisions or a better clinical outcome.
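
    The per-procedure incremental cost model described above reduces to simple arithmetic; every figure in the sketch below is a hypothetical placeholder rather than the study's price data.

```python
# Sketch of the per-procedure incremental cost of computer-assisted TKA.
# Every figure below is a hypothetical placeholder, not the study's price data.
def incremental_cost_per_case(system_price, years_depreciation, annual_maintenance,
                              cases_per_year, consumables_per_case,
                              extra_or_minutes, or_cost_per_minute):
    fixed = system_price / years_depreciation + annual_maintenance   # per year
    return fixed / cases_per_year + consumables_per_case + extra_or_minutes * or_cost_per_minute

for volume in (25, 50, 100, 200, 500):
    c = incremental_cost_per_case(system_price=200_000, years_depreciation=10,
                                  annual_maintenance=10_000, cases_per_year=volume,
                                  consumables_per_case=120, extra_or_minutes=14,
                                  or_cost_per_minute=8)
    print(f"{volume:>3} cases/yr: incremental cost approx {c:,.0f} per TKA")
```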

  12. Spatial aliasing for efficient direction-of-arrival estimation based on steering vector reconstruction

    NASA Astrophysics Data System (ADS)

    Yan, Feng-Gang; Cao, Bin; Rong, Jia-Jia; Shen, Yi; Jin, Ming

    2016-12-01

    A new technique is proposed to reduce the computational complexity of the multiple signal classification (MUSIC) algorithm for direction-of-arrival (DOA) estimate using a uniform linear array (ULA). The steering vector of the ULA is reconstructed as the Kronecker product of two other steering vectors, and a new cost function with spatial aliasing at hand is derived. Thanks to the estimation ambiguity of this spatial aliasing, mirror angles mathematically relating to the true DOAs are generated, based on which the full spectral search involved in the MUSIC algorithm is highly compressed into a limited angular sector accordingly. Further complexity analysis and performance studies are conducted by computer simulations, which demonstrate that the proposed estimator requires an extremely reduced computational burden while it shows a similar accuracy to the standard MUSIC.

  13. Cost-effectiveness of a motivational intervention for alcohol-involved youth in a hospital emergency department.

    PubMed

    Neighbors, Charles J; Barnett, Nancy P; Rohsenow, Damaris J; Colby, Suzanne M; Monti, Peter M

    2010-05-01

    Brief interventions in the emergency department targeting risk-taking youth show promise to reduce alcohol-related injury. This study models the cost-effectiveness of a motivational interviewing-based intervention relative to brief advice to stop alcohol-related risk behaviors (standard care). Average cost-effectiveness ratios were compared between conditions. In addition, a cost-utility analysis examined the incremental cost of motivational interviewing per quality-adjusted life year gained. Microcosting methods were used to estimate marginal costs of motivational interviewing and standard care as well as two methods of patient screening: standard emergency-department staff questioning and proactive outreach by counseling staff. Average cost-effectiveness ratios were computed for drinking and driving, injuries, vehicular citations, and negative social consequences. Using estimates of the marginal effect of motivational interviewing in reducing drinking and driving, estimates of traffic fatality risk from drinking-and-driving youth, and national life tables, the societal costs per quality-adjusted life year saved by motivational interviewing relative to standard care were also estimated. Alcohol-attributable traffic fatality risks were estimated using national databases. Intervention costs per participant were $81 for standard care, $170 for motivational interviewing with standard screening, and $173 for motivational interviewing with proactive screening. The cost-effectiveness ratios for motivational interviewing were more favorable than standard care across all study outcomes and better for men than women. The societal cost per quality-adjusted life year of motivational interviewing was $8,795. Sensitivity analyses indicated that results were robust in terms of variability in parameter estimates. This brief intervention represents a good societal investment compared with other commonly adopted medical interventions.

  14. Cost-effectiveness of implementing computed tomography screening for lung cancer in Taiwan.

    PubMed

    Yang, Szu-Chun; Lai, Wu-Wei; Lin, Chien-Chung; Su, Wu-Chou; Ku, Li-Jung; Hwang, Jing-Shiang; Wang, Jung-Der

    2017-06-01

    A screening program for lung cancer requires more empirical evidence. Based on the experience of the National Lung Screening Trial (NLST), we developed a method to adjust lead-time bias and quality-of-life changes for estimating the cost-effectiveness of implementing computed tomography (CT) screening in Taiwan. The target population was high-risk (≥30 pack-years) smokers between 55 and 75 years of age. From a nation-wide, 13-year follow-up cohort, we estimated quality-adjusted life expectancy (QALE), loss-of-QALE, and lifetime healthcare expenditures per case of lung cancer stratified by pathology and stage. Cumulative stage distributions for CT-screening and no-screening were assumed equal to those for CT-screening and radiography-screening in the NLST to estimate the savings of loss-of-QALE and additional costs of lifetime healthcare expenditures after CT screening. Costs attributable to screen-negative subjects, false-positive cases and radiation-induced lung cancer were included to obtain the incremental cost-effectiveness ratio from the public payer's perspective. The incremental costs were US$22,755 per person. After dividing this by savings of loss-of-QALE (1.16 quality-adjusted life year (QALY)), the incremental cost-effectiveness ratio was US$19,683 per QALY. This ratio would fall to US$10,947 per QALY if the stage distribution for CT-screening was the same as that of screen-detected cancers in the NELSON trial. Low-dose CT screening for lung cancer among high-risk smokers would be cost-effective in Taiwan. As only about 5% of our women are smokers, future research is necessary to identify the high-risk groups among non-smokers and increase the coverage. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  15. An Investment Case to Prevent the Reintroduction of Malaria in Sri Lanka

    PubMed Central

    Shretta, Rima; Baral, Ranju; Avanceña, Anton L. V.; Fox, Katie; Dannoruwa, Asoka Premasiri; Jayanetti, Ravindra; Jeyakumaran, Arumainayagam; Hasantha, Rasike; Peris, Lalanthika; Premaratne, Risintha

    2017-01-01

    Sri Lanka has made remarkable gains in reducing the burden of malaria, recording no locally transmitted malaria cases since November 2012 and zero deaths since 2007. The country was recently certified as malaria free by World Health Organization in September 2016. Sri Lanka, however, continues to face a risk of resurgence due to persistent receptivity and vulnerability to malaria transmission. Maintaining the gains will require continued financing to the malaria program to maintain the activities aimed at preventing reintroduction. This article presents an investment case for malaria in Sri Lanka by estimating the costs and benefits of sustaining investments to prevent the reintroduction of the disease. An ingredient-based approach was used to estimate the cost of the existing program. The cost of potential resurgence was estimated using a hypothetical scenario in which resurgence assumed to occur, if all prevention of reintroduction activities were halted. These estimates were used to compute a benefit–cost ratio and a return on investment. The total economic cost of the malaria program in 2014 was estimated at U.S. dollars (USD) 0.57 per capita per year with a financial cost of USD0.37 per capita. The cost of potential malaria resurgence was, however, much higher estimated at 13 times the cost of maintaining existing activities or 21 times based on financial costs alone. This evidence suggests a substantial return on investment providing a compelling argument for advocacy for continued prioritization of funding for the prevention of reintroduction of malaria in Sri Lanka. PMID:28115673

  16. Computer multitasking with Desqview 386 in a family practice.

    PubMed Central

    Davis, A E

    1990-01-01

    Computers are now widely used in medical practice for accounting and secretarial tasks. However, it has been much more difficult to use computers in more physician-related activities of daily practice. I investigated the Desqview multitasking system on a 386 computer as a solution to this problem. Physician-directed tasks of management of patient charts, retrieval of reference information, word processing, appointment scheduling and office organization were each managed by separate programs. Desqview allowed instantaneous switching back and forth between the various programs. I compared the time and cost savings and the need for physician input between Desqview 386, a 386 computer alone and an older, XT computer. Desqview significantly simplified the use of computer programs for medical information management and minimized the necessity for physician intervention. The time saved was 15 minutes per day; the costs saved were estimated to be $5000 annually. PMID:2383848

  17. An adjoint-based simultaneous estimation method of the asthenosphere's viscosity and afterslip using a fast and scalable finite-element adjoint solver

    NASA Astrophysics Data System (ADS)

    Agata, Ryoichiro; Ichimura, Tsuyoshi; Hori, Takane; Hirahara, Kazuro; Hashimoto, Chihiro; Hori, Muneo

    2018-04-01

    The simultaneous estimation of the asthenosphere's viscosity and coseismic slip/afterslip is expected to greatly improve the consistency of the estimation results with observation data of crustal deformation collected at widely distributed observation points, compared with estimation of slips only. Such an estimate can be formulated as a non-linear inverse problem of material properties of viscosity and input force that is equivalent to fault slips, based on large-scale finite-element (FE) modeling of crustal deformation, in which the number of degrees of freedom is on the order of 10⁹. We formulated and developed a computationally efficient adjoint-based estimation method for this inverse problem, together with a fast and scalable FE solver for the associated forward and adjoint problems. In a numerical experiment that imitates the 2011 Tohoku-Oki earthquake, the advantage of the proposed method is confirmed by comparing the estimated results with those obtained using simplified estimation methods. The computational cost required for the optimization shows that the proposed method enabled the targeted estimation to be completed with a moderate amount of computational resources.

  18. Applications of a stump-to-mill computer model to cable logging planning

    Treesearch

    Chris B. LeDoux

    1986-01-01

    Logging cost simulators and data from logging cost studies have been assembled and converted into a series of simple equations that can be used to estimate the stump-to-mill cost of cable logging in mountainous terrain of the Eastern United States. These equations are based on the use of two small and four medium-sized cable yarders and are applicable for harvests of...

  19. The QUELCE Method: Using Change Drivers to Estimate Program Costs

    DTIC Science & Technology

    2016-08-01

    QUELCE computes a distribution of program costs based on Monte Carlo analysis of program cost drivers—assessed via analyses of dependency structure...possible scenarios. These include: a dependency structure matrix to understand the interaction of change drivers for a specific project; a... performed by the SEI or by company analysts. From the workshop results, analysts create a dependency structure matrix (DSM) of the change drivers

  20. 33 CFR 277.8 - Procedures for apportionment of costs.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... life bears to the total estimated service life. The share of the bridge owner, thus computed... not have to be met until the bridge had reached the end of its useful life. Accordingly, the present worth of the amount is computed deferred over the unexpired life. The discount rate to be used in the...

  1. 33 CFR 277.8 - Procedures for apportionment of costs.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... life bears to the total estimated service life. The share of the bridge owner, thus computed... not have to be met until the bridge had reached the end of its useful life. Accordingly, the present worth of the amount is computed deferred over the unexpired life. The discount rate to be used in the...

  2. 33 CFR 277.8 - Procedures for apportionment of costs.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... life bears to the total estimated service life. The share of the bridge owner, thus computed... not have to be met until the bridge had reached the end of its useful life. Accordingly, the present worth of the amount is computed deferred over the unexpired life. The discount rate to be used in the...

  3. 33 CFR 277.8 - Procedures for apportionment of costs.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... life bears to the total estimated service life. The share of the bridge owner, thus computed... not have to be met until the bridge had reached the end of its useful life. Accordingly, the present worth of the amount is computed deferred over the unexpired life. The discount rate to be used in the...

  4. On the precision of aero-thermal simulations for TMT

    NASA Astrophysics Data System (ADS)

    Vogiatzis, Konstantinos; Thompson, Hugh

    2016-08-01

    Environmental effects on the Image Quality (IQ) of the Thirty Meter Telescope (TMT) are estimated by aero-thermal numerical simulations. These simulations utilize Computational Fluid Dynamics (CFD) to estimate, among others, thermal (dome and mirror) seeing as well as wind jitter and blur. As the design matures, guidance obtained from these numerical experiments can influence significant cost-performance trade-offs and even component survivability. The stochastic nature of environmental conditions results in the generation of a large computational solution matrix in order to statistically predict Observatory Performance. Moreover, the relative contribution of selected key subcomponents to IQ increases the parameter space and thus computational cost, while dictating a reduced prediction error bar. The current study presents the strategy followed to minimize prediction time and computational resources, the subsequent physical and numerical limitations and finally the approach to mitigate the issues experienced. In particular, the paper describes a mesh-independence study, the effect of interpolation of CFD results on the TMT IQ metric, and an analysis of the sensitivity of IQ to certain important heat sources and geometric features.

  5. Comparison of different models for non-invasive FFR estimation

    NASA Astrophysics Data System (ADS)

    Mirramezani, Mehran; Shadden, Shawn

    2017-11-01

    Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.
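
    To show how an algebraic model yields an FFR estimate, the sketch below uses a generic quadratic (viscous plus expansion-loss) pressure-drop relation with invented coefficients; it is not one of the calibrated reduced-order models compared in the study.

```python
# Generic algebraic stenosis model: dP(Q) = a*Q + b*Q**2 (viscous + expansion losses).
# The coefficients and flow values below are invented for illustration; they are not
# the calibrated reduced-order models compared in the study.
def pressure_drop(q_ml_s, a=2.0, b=1.0):
    """Pressure drop in mmHg for flow q in mL/s (hypothetical coefficients)."""
    return a * q_ml_s + b * q_ml_s ** 2

def ffr(aortic_pressure_mmhg, hyperemic_flow_ml_s):
    """FFR approximated as distal/aortic mean pressure at hyperemia."""
    dp = pressure_drop(hyperemic_flow_ml_s)
    return (aortic_pressure_mmhg - dp) / aortic_pressure_mmhg

for q in (2.0, 4.0, 6.0):
    print(f"hyperemic flow {q:.0f} mL/s -> FFR ~ {ffr(90.0, q):.2f}")
```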

  6. Accurate position estimation methods based on electrical impedance tomography measurements

    NASA Astrophysics Data System (ADS)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and to high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers.

  7. An Analysis of the RCA Price-S Cost Estimation Model as it Relates to Current Air Force Computer Software Acquisition and Management.

    DTIC Science & Technology

    1979-12-01

    because of the use of complex computational algorithms (Ref 25). Another important factor affecting the cost of software is the size of the development... involved the alignment and navigational algorithm portions of the software. The second avionics system application was the development of an inertial...

  8. Rule-Based Flight Software Cost Estimation

    NASA Technical Reports Server (NTRS)

    Stukes, Sherry A.; Spagnuolo, John N. Jr.

    2015-01-01

    This paper discusses the fundamental process for the computation of Flight Software (FSW) cost estimates. This process has been incorporated in a rule-based expert system [1] that can be used for Independent Cost Estimates (ICEs), Proposals, and for the validation of Cost Analysis Data Requirements (CADRe) submissions. A high-level directed graph (referred to here as a decision graph) illustrates the steps taken in the production of these estimated costs and serves as a basis of design for the expert system described in this paper. Detailed discussions are subsequently given elaborating upon the methodology, tools, charts, and caveats related to the various nodes of the graph. We present general principles for the estimation of FSW using SEER-SEM as an illustration of these principles when appropriate. Since Source Lines of Code (SLOC) is a major cost driver, a discussion of various SLOC data sources for the preparation of the estimates is given together with an explanation of how contractor SLOC estimates compare with the SLOC estimates used by JPL. Obtaining consistency in code counting will be presented as well as factors used in reconciling SLOC estimates from different code counters. When sufficient data is obtained, a mapping into the JPL Work Breakdown Structure (WBS) from the SEER-SEM output is illustrated. For across the board FSW estimates, as was done for the NASA Discovery Mission proposal estimates performed at JPL, a comparative high-level summary sheet for all missions with the SLOC, data description, brief mission description and the most relevant SEER-SEM parameter values is given to illustrate an encapsulation of the used and calculated data involved in the estimates. The rule-based expert system described provides the user with inputs useful or sufficient to run generic cost estimation programs. This system's incarnation is achieved via the C Language Integrated Production System (CLIPS) and will be addressed at the end of this paper.
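
    A toy version of the idea (rules that adjust model inputs feeding a parametric effort equation) is sketched below; the rules and the COCOMO-style coefficients are placeholders, since SEER-SEM's equations are proprietary and not reproduced here.

```python
# Sketch of a rule-based front end feeding a parametric software cost model.
# The rules and the COCOMO-style coefficients are placeholders; SEER-SEM's actual
# equations are proprietary and are not reproduced here.
def apply_rules(sloc, facts):
    """Toy rules that adjust inputs the way an expert-system front end might."""
    multiplier = 1.0
    if facts.get("heritage_reuse"):
        sloc *= 0.7                      # rule: reused heritage code counts less
    if facts.get("flight_critical"):
        multiplier *= 1.3                # rule: flight-critical code costs more
    return sloc, multiplier

def effort_person_months(ksloc, multiplier=1.0, a=2.94, b=1.10):
    """COCOMO II-style nominal effort: PM = a * KSLOC**b, scaled by rule multiplier."""
    return a * ksloc ** b * multiplier

sloc, mult = apply_rules(40_000, {"heritage_reuse": True, "flight_critical": True})
pm = effort_person_months(sloc / 1000, mult)
print(f"adjusted size: {sloc/1000:.1f} KSLOC, estimated effort: {pm:.0f} person-months")
```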

  9. Planning Inmarsat's second generation of spacecraft

    NASA Astrophysics Data System (ADS)

    Williams, W. P.

    1982-09-01

    The next generation of studies of the Inmarsat service is outlined, covering traffic forecasting studies, communications capacity estimates, space segment design, cost estimates, and financial analysis. Traffic forecasting will require future demand estimates, and a computer model has been developed which estimates demand over the Atlantic, Pacific, and Indian Ocean regions. Communications estimates are based on traffic estimates, as a model converts traffic demand into a required capacity figure for a given area. The Erlang formula is used, requiring additional data such as peak-hour ratios and distribution estimates. Basic space segment technical requirements are outlined (communications payload, transponder arrangements, etc.), and further design studies involve such areas as space segment configuration, launcher and spacecraft studies, transmission planning, and earth segment configurations. Cost estimates of proposed design parameters will be performed, but options must be reduced to make construction feasible. Finally, a financial analysis will be carried out in order to calculate financial returns.
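
    The Erlang B formula mentioned above converts offered traffic and a target blocking probability into a channel requirement; the sketch below uses the standard recursion with a hypothetical traffic figure.

```python
# Erlang B blocking probability, computed with the standard recursion
# B(0) = 1, B(k) = E*B(k-1) / (k + E*B(k-1)), where E is offered traffic in Erlangs.
def erlang_b(offered_erlangs, channels):
    b = 1.0
    for k in range(1, channels + 1):
        b = offered_erlangs * b / (k + offered_erlangs * b)
    return b

def channels_needed(offered_erlangs, grade_of_service=0.01):
    """Smallest number of channels keeping blocking below the target."""
    n = 1
    while erlang_b(offered_erlangs, n) > grade_of_service:
        n += 1
    return n

# Hypothetical peak-hour traffic figure for one ocean region.
traffic = 18.0   # Erlangs
print(f"{traffic} Erlangs at 1% blocking -> {channels_needed(traffic)} channels")
```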

  10. Comparison of Permanent Change of Station Costs for Women and Men Transferred Prematurely from Ships (Computer Diskette).

    DTIC Science & Technology

    requirements: PostScript. The objective of this report was to determine whether transferring pregnant women from ships costs the Navy more permanent... change of station (PCS) funds than transferring men and nonpregnant women. Information was extracted from the enlisted master record concerning gender... from gender-integrated afloat units. The direct costs of transfer prior to PRD were compared for men and women, and an estimate of PCS costs, if the ships were not gender-integrated, was also calculated.

  11. Computational Analysis of Nanoparticles-Molten Salt Thermal Energy Storage for Concentrated Solar Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Vinod

    2017-05-05

    High fidelity computational models of thermocline-based thermal energy storage (TES) were developed. The research goal was to advance the understanding of a single-tank nanofluidized molten-salt-based thermocline TES system under various concentrations and sizes of the particle suspension. Our objectives were to utilize sensible-heat storage that operates with the least irreversibility by using nanoscale physics. This was achieved by performing computational analysis of several storage designs, analyzing storage efficiency and estimating cost effectiveness for the TES systems under a concentrating solar power (CSP) scheme using molten salt as the storage medium. Since TES is one of the most costly but important components of a CSP plant, an efficient TES system has the potential to make the electricity generated from solar technologies cost competitive with conventional sources of electricity.

  12. Computational analysis of bovine milk exosomal miRNAs profiles derived from uninfected and Streptococcus uberis infected mammary gland

    USDA-ARS?s Scientific Manuscript database

    The dairy cattle industry in the U.S. contributes an estimated 7 billion dollars to the agribusiness economy. Bacterial infections that cause disease like mastitis, affect health of the lactating mammary gland, and negatively impacts milk production and milk quality, costing producers an estimated 2...

  13. EMSAR: estimation of transcript abundance from RNA-seq data by mappability-based segmentation and reclustering.

    PubMed

    Lee, Soohyun; Seo, Chae Hwa; Alver, Burak Han; Lee, Sanghyuk; Park, Peter J

    2015-09-03

    RNA-seq has been widely used for genome-wide expression profiling. RNA-seq data typically consists of tens of millions of short sequenced reads from different transcripts. However, due to sequence similarity among genes and among isoforms, the source of a given read is often ambiguous. Existing approaches for estimating expression levels from RNA-seq reads tend to compromise between accuracy and computational cost. We introduce a new approach for quantifying transcript abundance from RNA-seq data. EMSAR (Estimation by Mappability-based Segmentation And Reclustering) groups reads according to the set of transcripts to which they are mapped and finds maximum likelihood estimates using a joint Poisson model for each optimal set of segments of transcripts. The method uses nearly all mapped reads, including those mapped to multiple genes. With an efficient transcriptome indexing based on modified suffix arrays, EMSAR minimizes the use of CPU time and memory while achieving accuracy comparable to the best existing methods. EMSAR is a method for quantifying transcripts from RNA-seq data with high accuracy and low computational cost. EMSAR is available at https://github.com/parklab/emsar.
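
    As a hedged, toy-scale illustration of the kind of estimation EMSAR performs (not the published implementation), the sketch below fits transcript abundances by maximizing a joint Poisson likelihood in which the expected read count of each segment is a length-weighted sum of the abundances of the transcripts compatible with it; the segment definitions, counts, and lengths are invented for the example.

        # Sketch: maximum-likelihood transcript abundances under a joint Poisson model.
        # Segments group reads by the set of transcripts they are compatible with
        # (hypothetical toy data; not EMSAR's actual index or likelihood code).
        import numpy as np
        from scipy.optimize import minimize

        segments = {("t1",): 120, ("t2",): 80, ("t1", "t2"): 200}        # read counts per segment
        seg_len = {("t1",): 300.0, ("t2",): 250.0, ("t1", "t2"): 500.0}  # effective lengths
        transcripts = ["t1", "t2"]

        def neg_log_lik(log_theta):
            theta = dict(zip(transcripts, np.exp(log_theta)))            # abundances > 0
            nll = 0.0
            for seg, count in segments.items():
                lam = seg_len[seg] * sum(theta[t] for t in seg)          # Poisson mean for the segment
                nll -= count * np.log(lam) - lam                         # log-likelihood up to a constant
            return nll

        res = minimize(neg_log_lik, x0=np.zeros(len(transcripts)), method="Nelder-Mead")
        print(dict(zip(transcripts, np.exp(res.x))))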

  14. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method, and, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending those results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
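
    For readers who want a concrete sense of the scalar-weighted case, the following sketch computes a weighted average quaternion as the eigenvector associated with the largest eigenvalue of the weighted sum of quaternion outer products, which is the form of solution described in the Note; the sample quaternions and weights are invented for illustration.

        # Sketch: weighted quaternion averaging via the dominant eigenvector of
        # M = sum_i w_i * q_i q_i^T (illustrative data; see the Note for the derivation).
        import numpy as np

        def average_quaternion(quats, weights):
            M = np.zeros((4, 4))
            for q, w in zip(quats, weights):
                q = np.asarray(q, dtype=float)
                q = q / np.linalg.norm(q)          # enforce unit norm
                M += w * np.outer(q, q)            # the sign of q does not matter here
            eigvals, eigvecs = np.linalg.eigh(M)   # symmetric eigen-decomposition
            return eigvecs[:, np.argmax(eigvals)]  # eigenvector of the largest eigenvalue

        quats = [(0.0, 0.0, 0.0, 1.0), (0.0, 0.0, 0.087, 0.996)]  # two nearby attitudes
        print(average_quaternion(quats, weights=[0.5, 0.5]))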

  15. Taking ART to Scale: Determinants of the Cost and Cost-Effectiveness of Antiretroviral Therapy in 45 Clinical Sites in Zambia

    PubMed Central

    Marseille, Elliot; Giganti, Mark J.; Mwango, Albert; Chisembele-Taylor, Angela; Mulenga, Lloyd; Over, Mead; Kahn, James G.; Stringer, Jeffrey S. A.

    2012-01-01

    Background We estimated the unit costs and cost-effectiveness of a government ART program in 45 sites in Zambia supported by the Centre for Infectious Disease Research Zambia (CIDRZ). Methods We estimated per person-year costs at the facility level and support costs incurred above the facility level, and used multiple regression to estimate variation in these costs. To estimate ART effectiveness, we compared mortality in this Zambian population to that of a cohort of rural Ugandan HIV patients receiving co-trimoxazole (CTX) prophylaxis. We used micro-costing techniques to estimate incremental unit costs, and calculated cost-effectiveness ratios with a computer model which projected results to 10 years. Results The program cost $69.7 million for 125,436 person-years of ART, or $556 per ART-year. Compared to CTX prophylaxis alone, the program averted 33.3 deaths or 244.5 disability-adjusted life-years (DALYs) per 100 person-years of ART. In the base-case analysis, the net cost per DALY averted was $833 compared to CTX alone. More than two-thirds of the variation in average incremental total and on-site cost per patient-year of treatment is explained by eight determinants, including the complexity of the patient case load, the degree of adherence among the patients, and institutional characteristics including experience, scale, scope, setting, and sector. Conclusions and Significance The 45 sites exhibited substantial variation in unit costs and cost-effectiveness and are in the mid-range of cost-effectiveness when compared to other ART programs studied in southern Africa. Early treatment initiation, large scale, and hospital setting are associated with statistically significantly lower costs, while other factors (rural location, private sector) are associated with shifting costs from on-site to off-site. This study shows that ART programs can be significantly less costly or more cost-effective when they exploit economies of scale and scope, and initiate patients at higher CD4 counts. PMID:23284843

  16. Taking ART to scale: determinants of the cost and cost-effectiveness of antiretroviral therapy in 45 clinical sites in Zambia.

    PubMed

    Marseille, Elliot; Giganti, Mark J; Mwango, Albert; Chisembele-Taylor, Angela; Mulenga, Lloyd; Over, Mead; Kahn, James G; Stringer, Jeffrey S A

    2012-01-01

    We estimated the unit costs and cost-effectiveness of a government ART program in 45 sites in Zambia supported by the Centre for Infectious Disease Research Zambia (CIDRZ). We estimated per person-year costs at the facility level and support costs incurred above the facility level, and used multiple regression to estimate variation in these costs. To estimate ART effectiveness, we compared mortality in this Zambian population to that of a cohort of rural Ugandan HIV patients receiving co-trimoxazole (CTX) prophylaxis. We used micro-costing techniques to estimate incremental unit costs, and calculated cost-effectiveness ratios with a computer model which projected results to 10 years. The program cost $69.7 million for 125,436 person-years of ART, or $556 per ART-year. Compared to CTX prophylaxis alone, the program averted 33.3 deaths or 244.5 disability-adjusted life-years (DALYs) per 100 person-years of ART. In the base-case analysis, the net cost per DALY averted was $833 compared to CTX alone. More than two-thirds of the variation in average incremental total and on-site cost per patient-year of treatment is explained by eight determinants, including the complexity of the patient case load, the degree of adherence among the patients, and institutional characteristics including experience, scale, scope, setting, and sector. The 45 sites exhibited substantial variation in unit costs and cost-effectiveness and are in the mid-range of cost-effectiveness when compared to other ART programs studied in southern Africa. Early treatment initiation, large scale, and hospital setting are associated with statistically significantly lower costs, while other factors (rural location, private sector) are associated with shifting costs from on-site to off-site. This study shows that ART programs can be significantly less costly or more cost-effective when they exploit economies of scale and scope, and initiate patients at higher CD4 counts.

  17. An adaptive Gaussian process-based iterative ensemble smoother for data assimilation

    NASA Astrophysics Data System (ADS)

    Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao

    2018-05-01

    Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated by the saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
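
    A minimal sketch of the surrogate idea (not the authors' GPIES code): fit a Gaussian process to a handful of expensive forward-model evaluations at base points, then evaluate the surrogate over a large ensemble at negligible cost. The forward model, base points, and ensemble below are hypothetical stand-ins.

        # Sketch: GP surrogate replacing an expensive forward model inside an ensemble method.
        # "expensive_model" stands in for a subsurface flow simulator (hypothetical).
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def expensive_model(x):                      # placeholder for the real simulator
            return np.sin(3.0 * x) + 0.5 * x

        base_points = np.linspace(-1.0, 1.0, 8).reshape(-1, 1)    # few true model runs
        observations = expensive_model(base_points).ravel()

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(base_points, observations)

        ensemble = np.random.uniform(-1.0, 1.0, size=(5000, 1))   # large ensemble, surrogate only
        predictions = gp.predict(ensemble)                        # nearly free compared to the simulator
        print(predictions[:5])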

  18. Adaptive error covariances estimation methods for ensemble Kalman filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen, Yicun, E-mail: zhen@math.psu.edu; Harlim, John, E-mail: jharlim@psu.edu

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids an expensive computational cost in inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to a recently proposed method by Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger's and Berry-Sauer's schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger's scheme on low-dimensional problems and has a wider range of more accurate estimates compared to Berry-Sauer's method on the L-96 example.

  19. The 2009 DOD Cost Research Workshop: Acquisition Reform

    DTIC Science & Technology

    2010-02-01

    DASA-CE-2: ACEIT Enhancement, Help-Desk/Training, Consulting. DASA-CE-3: Command, Control, Communications, Computers, Intelligence, Surveillance, and ... Management Information System (OSMIS) online interactive relational database. DASA-CE-2 Title: ACEIT Enhancement, Help-Desk/Training, Consulting. Summary: ... support and training for the Automated Cost Estimating Integrated Tools (ACEIT) software suite. ACEIT is the Army standard suite of analytical tools for ...

  20. A benefit-cost analysis of ten tree species in Modesto, California, U.S.A

    Treesearch

    E.G. McPherson

    2003-01-01

    Tree work records for ten species were analyzed to estimate average annual management costs by dbh class for six activity areas. Average annual benefits were calculated by dbh class for each species with computer modeling. Average annual net benefits per tree were greatest for London plane (Platanus acerifolia) ($178.57), hackberry (...

  1. On the Efficiency Costs of De-Tracking Secondary Schools in Europe

    ERIC Educational Resources Information Center

    Brunello, Giorgio; Rocco, Lorenzo; Ariga, Kenn; Iwahashi, Roki

    2012-01-01

    Many European countries have delayed the time when school tracking starts in order to pursue equality of opportunity. What are the efficiency costs of de-tracking secondary schools? This paper builds a stylized model of the optimal time of tracking, estimates the relevant parameters using micro data for 11 European countries and computes the…

  2. IPEG- IMPROVED PRICE ESTIMATION GUIDELINES (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Aster, R. W.

    1994-01-01

    The Improved Price Estimation Guidelines, IPEG, program provides a simple yet accurate estimate of the price of a manufactured product. IPEG facilitates sensitivity studies of price estimates at considerably less expense than would be incurred by using the Standard Assembly-line Manufacturing Industry Simulation, SAMIS, program (COSMIC program NPO-16032). A difference of less than one percent between the IPEG and SAMIS price estimates has been observed with realistic test cases. However, the IPEG simplification of SAMIS allows the analyst with limited time and computing resources to perform a greater number of sensitivity studies than with SAMIS. Although IPEG was developed for the photovoltaics industry, it is readily adaptable to any standard assembly line type of manufacturing industry. IPEG estimates the annual production price per unit. The input data includes cost of equipment, space, labor, materials, supplies, and utilities. Production on an industry wide basis or a process wide basis can be simulated. Once the IPEG input file is prepared, the original price is estimated and sensitivity studies may be performed. The IPEG user selects a sensitivity variable and a set of values. IPEG will compute a price estimate and a variety of other cost parameters for every specified value of the sensitivity variable. IPEG is designed as an interactive system and prompts the user for all required information and offers a variety of output options. The IPEG/PC program is written in TURBO PASCAL for interactive execution on an IBM PC computer under DOS 2.0 or above with at least 64K of memory. The IBM PC color display and color graphics adapter are needed to use the plotting capabilities in IPEG/PC. IPEG/PC was developed in 1984. The original IPEG program is written in SIMSCRIPT II.5 for interactive execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The original IPEG was developed in 1980.

  3. IPEG- IMPROVED PRICE ESTIMATION GUIDELINES (IBM 370 VERSION)

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1994-01-01

    The Improved Price Estimation Guidelines, IPEG, program provides a simple yet accurate estimate of the price of a manufactured product. IPEG facilitates sensitivity studies of price estimates at considerably less expense than would be incurred by using the Standard Assembly-line Manufacturing Industry Simulation, SAMIS, program (COSMIC program NPO-16032). A difference of less than one percent between the IPEG and SAMIS price estimates has been observed with realistic test cases. However, the IPEG simplification of SAMIS allows the analyst with limited time and computing resources to perform a greater number of sensitivity studies than with SAMIS. Although IPEG was developed for the photovoltaics industry, it is readily adaptable to any standard assembly line type of manufacturing industry. IPEG estimates the annual production price per unit. The input data includes cost of equipment, space, labor, materials, supplies, and utilities. Production on an industry wide basis or a process wide basis can be simulated. Once the IPEG input file is prepared, the original price is estimated and sensitivity studies may be performed. The IPEG user selects a sensitivity variable and a set of values. IPEG will compute a price estimate and a variety of other cost parameters for every specified value of the sensitivity variable. IPEG is designed as an interactive system and prompts the user for all required information and offers a variety of output options. The IPEG/PC program is written in TURBO PASCAL for interactive execution on an IBM PC computer under DOS 2.0 or above with at least 64K of memory. The IBM PC color display and color graphics adapter are needed to use the plotting capabilities in IPEG/PC. IPEG/PC was developed in 1984. The original IPEG program is written in SIMSCRIPT II.5 for interactive execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The original IPEG was developed in 1980.

  4. At what costs will screening with CT colonography be competitive? A cost-effectiveness approach.

    PubMed

    Lansdorp-Vogelaar, Iris; van Ballegooijen, Marjolein; Zauber, Ann G; Boer, Rob; Wilschut, Janneke; Habbema, J Dik F

    2009-03-01

    The costs of computed tomographic colonography (CTC) are not yet established for screening use. In our study, we estimated the threshold costs for which CTC screening would be a cost-effective alternative to colonoscopy for colorectal cancer (CRC) screening in the general population. We used the MISCAN-colon microsimulation model to estimate the costs and life-years gained of screening persons aged 50-80 years for 4 screening strategies: (i) optical colonoscopy; and CTC with referral to optical colonoscopy of (ii) any suspected polyp; (iii) a suspected polyp ≥6 mm; and (iv) a suspected polyp ≥10 mm. For each of the 4 strategies, screen intervals of 5, 10, 15 and 20 years were considered. Subsequently, for each CTC strategy and interval, the threshold costs of CTC were calculated. We performed a sensitivity analysis to assess the effect of uncertain model parameters on the threshold costs. With equal costs ($662), optical colonoscopy dominated CTC screening. For CTC to gain similar life-years as colonoscopy screening every 10 years, it should be offered every 5 years with referral of polyps ≥6 mm. For this strategy to be as cost-effective as colonoscopy screening, the costs must not exceed $285 or 43% of colonoscopy costs (range in sensitivity analysis: 39-47%). With 25% higher adherence than colonoscopy, CTC threshold costs could be 71% of colonoscopy costs. Our estimate of 43% is considerably lower than previous estimates in the literature, because previous studies only compared CTC screening to 10-yearly colonoscopy, whereas we compared to different intervals of colonoscopy screening.

  5. Doubly stochastic radial basis function methods

    NASA Astrophysics Data System (ADS)

    Yang, Fenglian; Yan, Liang; Ling, Leevan

    2018-06-01

    We propose a doubly stochastic radial basis function (DSRBF) method for function recoveries. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distribution is determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our method. The overhead cost for setting up the proposed DSRBF method is O(n^2) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method outperforms not only the constant shape parameter formulation (in terms of accuracy with comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
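
    To make the LOOCV ingredient concrete, the sketch below selects a single Gaussian RBF shape parameter by the standard closed-form LOOCV residual (Rippa-style e_i = c_i / (A^{-1})_{ii}); it is a hedged 1-D illustration of the constant-parameter baseline, not the authors' doubly stochastic formulation, and the data and candidate range are invented.

        # Sketch: choosing an RBF shape parameter by leave-one-out cross validation
        # with the closed-form LOOCV residual (illustrative 1-D data).
        import numpy as np

        x = np.linspace(0.0, 1.0, 15)
        f = np.sin(2.0 * np.pi * x)

        def loocv_error(eps):
            r = np.abs(x[:, None] - x[None, :])
            A = np.exp(-(eps * r) ** 2)                 # Gaussian RBF interpolation matrix
            c = np.linalg.solve(A, f)                   # interpolation coefficients
            return np.linalg.norm(c / np.diag(np.linalg.inv(A)))   # LOOCV residual norm

        candidates = np.linspace(1.0, 15.0, 30)
        best = min(candidates, key=loocv_error)
        print("selected shape parameter:", best)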

  6. Graphical Models for Ordinal Data

    PubMed Central

    Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2014-01-01

    A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267

  7. Running Neuroimaging Applications on Amazon Web Services: How, When, and at What Cost?

    PubMed

    Madhyastha, Tara M; Koh, Natalie; Day, Trevor K M; Hernández-Fernández, Moises; Kelley, Austin; Peterson, Daniel J; Rajan, Sabreena; Woelfer, Karl A; Wolf, Jonathan; Grabowski, Thomas J

    2017-01-01

    The contribution of this paper is to identify and describe current best practices for using Amazon Web Services (AWS) to execute neuroimaging workflows "in the cloud." Neuroimaging offers a vast set of techniques by which to interrogate the structure and function of the living brain. However, many of the scientists for whom neuroimaging is an extremely important tool have limited training in parallel computation. At the same time, the field is experiencing a surge in computational demands, driven by a combination of data-sharing efforts, improvements in scanner technology that allow acquisition of images with higher image resolution, and by the desire to use statistical techniques that stress processing requirements. Most neuroimaging workflows can be executed as independent parallel jobs and are therefore excellent candidates for running on AWS, but the overhead of learning to do so and determining whether it is worth the cost can be prohibitive. In this paper we describe how to identify neuroimaging workloads that are appropriate for running on AWS, how to benchmark execution time, and how to estimate cost of running on AWS. By benchmarking common neuroimaging applications, we show that cloud computing can be a viable alternative to on-premises hardware. We present guidelines that neuroimaging labs can use to provide a cluster-on-demand type of service that should be familiar to users, and scripts to estimate cost and create such a cluster.
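
    As a back-of-the-envelope companion to the benchmarking approach described above (the prices and job sizes below are hypothetical placeholders, not figures from the paper), a cloud run can be costed as instance-hours times the hourly price plus storage:

        # Sketch: rough cost estimate for running N independent neuroimaging jobs on cloud instances.
        # All prices and job characteristics are hypothetical placeholders.
        def estimate_cloud_cost(n_jobs, hours_per_job, jobs_per_instance,
                                price_per_instance_hour, storage_gb, price_per_gb_month, months):
            instance_hours = (n_jobs / jobs_per_instance) * hours_per_job
            compute = instance_hours * price_per_instance_hour
            storage = storage_gb * price_per_gb_month * months
            return compute + storage

        print(estimate_cloud_cost(n_jobs=400, hours_per_job=3.0, jobs_per_instance=8,
                                  price_per_instance_hour=1.20, storage_gb=500,
                                  price_per_gb_month=0.023, months=1))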

  8. Standardization in software conversion of (ROM) estimating

    NASA Technical Reports Server (NTRS)

    Roat, G. H.

    1984-01-01

    Technical problems and their solutions comprise by far the majority of work involved in space simulation engineering. Fixed price contracts with schedule award fees are becoming more and more prevalent. Accurate estimation of these jobs is critical to maintain costs within limits and to predict realistic contract schedule dates. Computerized estimating may hold the answer to these new problems, though up to now computerized estimating has been complex, expensive, and geared to the business world, not to technical people. The objective of this effort was to provide a simple program on a desk top computer capable of providing a Rough Order of Magnitude (ROM) estimate in a short time. This program is not intended to provide a highly detailed breakdown of costs to a customer, but to provide a number which can be used as a rough estimate on short notice. With more debugging and fine tuning, a more detailed estimate can be made.

  9. Risk and Vulnerability Assessment Using Cybernomic Computational Models: Tailored for Industrial Control Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Federick T.; Schlicher, Bob G

    2015-01-01

    There are many influencing economic factors to weigh from the defender-practitioner stakeholder point of view that involve cost combined with development/deployment models. Some examples include the cost of the countermeasures themselves, the cost of training, and the cost of maintenance. Meanwhile, we must better anticipate the total cost of a compromise. The return on investment in countermeasures is essentially the impact costs (i.e., the costs from violating availability, integrity, and confidentiality/privacy requirements). The natural question arises of choosing the main risks that must be mitigated/controlled and monitored in deciding where to focus security investments. To answer this question, we have investigated the costs/benefits to the attacker/defender to better estimate risk exposure. In doing so, it is important to develop a sound basis for estimating the factors that drive risk exposure, such as the likelihood that a threat will emerge and whether it will be thwarted. This impact assessment framework can provide key information for ranking cybersecurity threats and managing risk.
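
    A minimal numeric sketch of the ranking idea, assuming the common simplification that risk exposure is likelihood times impact cost and that a countermeasure's net benefit is the expected-loss reduction minus its cost; the threat names and figures are hypothetical, not from the report.

        # Sketch: ranking threats by estimated risk exposure (likelihood x impact cost)
        # and scoring a candidate countermeasure by net benefit. Figures are hypothetical.
        threats = {                      # name: (annual likelihood, impact cost if realized, $)
            "control-network intrusion": (0.10, 2_500_000),
            "HMI malware":               (0.30,   400_000),
            "insider misuse":            (0.05,   900_000),
        }

        exposure = {name: p * cost for name, (p, cost) in threats.items()}
        for name, value in sorted(exposure.items(), key=lambda kv: -kv[1]):
            print(f"{name}: expected annual loss ${value:,.0f}")

        # Net benefit of a countermeasure = reduction in expected loss - countermeasure cost
        def net_benefit(expected_loss, risk_reduction_fraction, countermeasure_cost):
            return expected_loss * risk_reduction_fraction - countermeasure_cost

        print(net_benefit(exposure["control-network intrusion"], 0.6, 80_000))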

  10. Estimating the cost-effectiveness of 54 weeks of infliximab for rheumatoid arthritis.

    PubMed

    Wong, John B; Singh, Gurkirpal; Kavanaugh, Arthur

    2002-10-01

    To estimate the cost-effectiveness of infliximab plus methotrexate for active, refractory rheumatoid arthritis. We projected the 54-week results from a randomized controlled trial of infliximab into lifetime economic and clinical outcomes using a Markov computer simulation model. Direct and indirect costs, quality of life, and disability estimates were based on trial results; Arthritis, Rheumatism, and Aging Medical Information System (ARAMIS) database outcomes; and published data. Results were discounted using the standard 3% rate. Because most well-accepted medical therapies have cost-effectiveness ratios below $50,000 to $100,000 per quality-adjusted life-year (QALY) gained, results below this range were considered to be "cost-effective." At 3 mg/kg, each infliximab infusion would cost $1393. When compared with methotrexate alone, 54 weeks of infliximab plus methotrexate decreased the likelihood of having advanced disability from 23% to 11% at the end of 54 weeks, which projected to a lifetime marginal cost-effectiveness ratio of $30,500 per discounted QALY gained, considering only direct medical costs. When applying a societal perspective and including indirect or productivity costs, the marginal cost-effectiveness ratio for infliximab was $9100 per discounted QALY gained. The results remained relatively unchanged with variation of model estimates over a broad range of values. Infliximab plus methotrexate for 54 weeks for rheumatoid arthritis should be cost-effective with its clinical benefit providing good value for the drug cost, especially when including productivity losses. Although infliximab beyond 54 weeks will likely be cost-effective, the economic and clinical benefit remains uncertain and will depend on long-term results of clinical trials.

  11. Pest management in Douglas-fir seed orchards: a microcomputer decision method

    Treesearch

    James B. Hoy; Michael I. Haverty

    1988-01-01

    The computer program described provides a Douglas-fir seed orchard manager (user) with a quantitative method for making insect pest management decisions on a desk-top computer. The decision system uses site-specific information such as estimates of seed crop size, insect attack rates, insecticide efficacy and application costs, weather, and crop value. At sites where...

  12. Adaptive bearing estimation and tracking of multiple targets in a realistic passive sonar scenario

    NASA Astrophysics Data System (ADS)

    Rajagopal, R.; Challa, Subhash; Faruqi, Farhan A.; Rao, P. R.

    1997-06-01

    In a realistic passive sonar environment, the received signal consists of multipath arrivals from closely separated moving targets, and the signals are contaminated by spatially correlated noise. Differential MUSIC has been proposed to estimate the DOAs in such a scenario. This method estimates the 'noise subspace' in order to estimate the DOAs; however, the noise subspace estimate has to be updated as and when new data become available. In order to save computational cost, a new adaptive noise subspace estimation algorithm is proposed in this paper. The salient features of the proposed algorithm are: (1) noise subspace estimation is done by QR decomposition of the difference matrix formed from the data covariance matrix, so that, compared to standard eigen-decomposition based methods which require O(N^3) computations, the proposed method requires only O(N^2) computations; (2) the noise subspace is updated by updating the QR decomposition; (3) the proposed algorithm works in a realistic sonar environment. In the second part of the paper, the estimated bearing values are used to track multiple targets. In order to achieve this, a nonlinear-system/linear-measurement extended Kalman filter is applied. Computer simulation results are also presented to support the theory.

  13. Quasi-Optimal Elimination Trees for 2D Grids with Singularities

    DOE PAGES

    Paszyńska, A.; Paszyński, M.; Jopek, K.; ...

    2015-01-01

    We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(N_e log N_e), where N_e is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.

  14. Quasi-Optimal Elimination Trees for 2D Grids with Singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paszyńska, A.; Paszyński, M.; Jopek, K.

    We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(N_e log N_e), where N_e is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.

  15. Estimating health benefits and cost-savings for achieving the Healthy People 2020 objective of reducing invasive colorectal cancer.

    PubMed

    Hung, Mei-Chuan; Ekwueme, Donatus U; White, Arica; Rim, Sun Hee; King, Jessica B; Wang, Jung-Der; Chang, Su-Hsin

    2018-01-01

    This study aims to quantify the aggregate potential life-years (LYs) saved and healthcare cost-savings if the Healthy People 2020 objective were met to reduce invasive colorectal cancer (CRC) incidence by 15%. We identified patients (n=886,380) diagnosed with invasive CRC between 2001 and 2011 from a nationally representative cancer dataset. We stratified these patients by sex, race/ethnicity, and age. Using these data and data from the 2001-2011 U.S. life tables, we estimated a survival function for each CRC group and the corresponding reference group and computed per-person LYs saved. We estimated per-person annual healthcare cost-savings using the 2008-2012 Medical Expenditure Panel Survey. We calculated aggregate LYs saved and cost-savings by multiplying the reduced number of CRC patients by the per-person LYs saved and lifetime healthcare cost-savings, respectively. We estimated an aggregate of 84,569 and 64,924 LYs saved for men and women, respectively, accounting for healthcare cost-savings of $329.3 and $294.2 million (in 2013$), respectively. Per person, we estimated 6.3 potential LYs saved related to those who developed CRC for both men and women, and healthcare cost-savings of $24,000 for men and $28,000 for women. Non-Hispanic whites and those aged 60-64 had the highest aggregate potential LYs saved and cost-savings. Achieving the HP2020 objective of reducing invasive CRC incidence by 15% by year 2020 would potentially save nearly 150,000 life-years and $624 million on healthcare costs. Copyright © 2017. Published by Elsevier Inc.
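
    The aggregation step described above is a simple multiplication of reduced case counts by per-person values; the sketch below reproduces that arithmetic with the per-person figures reported in the abstract and with hypothetical case counts, since the paper's stratified counts are not shown here.

        # Sketch: aggregate life-years saved and healthcare cost-savings computed as
        # (reduced number of cases) x (per-person values), following the study's approach.
        # The case counts below are hypothetical placeholders, not the paper's stratified data.
        def aggregate_savings(reduced_cases, ly_saved_per_person, cost_savings_per_person):
            return reduced_cases * ly_saved_per_person, reduced_cases * cost_savings_per_person

        ly_men, cost_men = aggregate_savings(reduced_cases=13_000,
                                             ly_saved_per_person=6.3,
                                             cost_savings_per_person=24_000)
        ly_women, cost_women = aggregate_savings(reduced_cases=10_000,
                                                 ly_saved_per_person=6.3,
                                                 cost_savings_per_person=28_000)
        print(ly_men + ly_women, cost_men + cost_women)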

  16. Transportation Planning for Your Community

    DOT National Transportation Integrated Search

    2000-12-01

    The Highway Economic Requirements System (HERS) is a computer model designed to simulate improvement selection decisions based on the relative benefit-cost merits of alternative improvement options. HERS is intended to estimate national level investm...

  17. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.

  18. Unmanned Aerial Vehicles unique cost estimating requirements

    NASA Astrophysics Data System (ADS)

    Malone, P.; Apgar, H.; Stukes, S.; Sterk, S.

    Unmanned Aerial Vehicles (UAVs), also referred to as drones, are aerial platforms that fly without a human pilot onboard. UAVs are controlled autonomously by a computer in the vehicle or under the remote control of a pilot stationed at a fixed ground location. There are a wide variety of drone shapes, sizes, configurations, complexities, and characteristics. Use of these devices by the Department of Defense (DoD), NASA, and civil and commercial organizations continues to grow. UAVs are commonly used for intelligence, surveillance, and reconnaissance (ISR). They are also used for combat operations and civil applications, such as firefighting, non-military security work, and surveillance of infrastructure (e.g., pipelines, power lines, and country borders). UAVs are often preferred for missions that require sustained persistence (over 4 hours in duration) or are “too dangerous, dull or dirty” for manned aircraft. Moreover, they can offer significant acquisition and operations cost savings over traditional manned aircraft. Because of these unique characteristics and missions, UAV estimates require some unique estimating methods. This paper describes a framework for estimating UAV systems' total ownership cost, including hardware components, software design, and operations. The challenge of collecting data, testing the sensitivities of cost drivers, and creating cost estimating relationships (CERs) for each key work breakdown structure (WBS) element is discussed. The autonomous operation of UAVs is especially challenging from a software perspective.

  19. Maximum likelihood method for estimating airplane stability and control parameters from flight data in frequency domain

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1980-01-01

    A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.

  20. Multi-objective reverse logistics model for integrated computer waste management.

    PubMed

    Ahluwalia, Poonam Khanijo; Nema, Arvind K

    2006-12-01

    This study aimed to address the issues involved in the planning and design of a computer waste management system in an integrated manner. A decision-support tool is presented for selecting an optimum configuration of computer waste management facilities (segregation, storage, treatment/processing, reuse/recycle and disposal) and allocation of waste to these facilities. The model is based on an integer linear programming method with the objectives of minimizing environmental risk as well as cost. The issue of uncertainty in the estimated waste quantities from multiple sources is addressed using the Monte Carlo simulation technique. An illustrated example of computer waste management in Delhi, India is presented to demonstrate the usefulness of the proposed model and to study tradeoffs between cost and risk. The results of the example problem show that it is possible to reduce the environmental risk significantly by a marginal increase in the available cost. The proposed model can serve as a powerful tool to address the environmental problems associated with exponentially growing quantities of computer waste which are presently being managed using rudimentary methods of reuse, recovery and disposal by various small-scale vendors.

  1. Economic Outcomes With Anatomical Versus Functional Diagnostic Testing for Coronary Artery Disease.

    PubMed

    Mark, Daniel B; Federspiel, Jerome J; Cowper, Patricia A; Anstrom, Kevin J; Hoffmann, Udo; Patel, Manesh R; Davidson-Ray, Linda; Daniels, Melanie R; Cooper, Lawton S; Knight, J David; Lee, Kerry L; Douglas, Pamela S

    2016-07-19

    PROMISE (PROspective Multicenter Imaging Study for Evaluation of Chest Pain) found that initial use of at least 64-slice multidetector computed tomography angiography (CTA) versus functional diagnostic testing strategies did not improve clinical outcomes in stable symptomatic patients with suspected coronary artery disease (CAD) requiring noninvasive testing. To conduct an economic analysis for PROMISE (a major secondary aim of the study). Prospective economic study from the U.S. perspective. Comparisons were made according to the intention-to-treat principle, and CIs were calculated using bootstrap methods. (ClinicalTrials.gov: NCT01174550). 190 U.S. centers. 9649 U.S. patients enrolled in PROMISE between July 2010 and September 2013. Median follow-up was 25 months. Technical costs of the initial (outpatient) testing strategy were estimated from Premier Research Database data. Hospital-based costs were estimated using hospital bills and Medicare cost-charge ratios. Physician fees were taken from the Medicare Physician Fee Schedule. Costs were expressed in 2014 U.S. dollars, discounted at 3% annually, and estimated out to 3 years using inverse probability weighting methods. The mean initial testing costs were $174 for exercise electrocardiography; $404 for CTA; $501 to $514 for pharmacologic and exercise stress echocardiography, respectively; and $946 to $1132 for exercise and pharmacologic stress nuclear testing, respectively. Mean costs at 90 days were $2494 for the CTA strategy versus $2240 for the functional strategy (mean difference, $254 [95% CI, -$634 to $906]). The difference was associated with more revascularizations and catheterizations (4.25 per 100 patients) with CTA use. After 90 days, the mean cost difference between the groups out to 3 years remained small. Cost weights for test strategies were obtained from sources outside PROMISE. Computed tomography angiography and functional diagnostic testing strategies in patients with suspected CAD have similar costs through 3 years of follow-up. National Heart, Lung, and Blood Institute.
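
    The confidence interval for the between-strategy cost difference was obtained with bootstrap methods; a generic resampling sketch (using simulated cost vectors, not PROMISE data) looks like the following.

        # Sketch: bootstrap confidence interval for a difference in mean per-patient costs
        # between two testing strategies (simulated data, not the PROMISE dataset).
        import numpy as np

        rng = np.random.default_rng(0)
        costs_cta = rng.gamma(shape=2.0, scale=1250.0, size=4800)        # simulated CTA-arm costs
        costs_functional = rng.gamma(shape=2.0, scale=1120.0, size=4800)

        def bootstrap_mean_diff(a, b, n_boot=5000):
            diffs = np.empty(n_boot)
            for i in range(n_boot):
                diffs[i] = rng.choice(a, size=a.size).mean() - rng.choice(b, size=b.size).mean()
            return np.percentile(diffs, [2.5, 97.5])   # 95% percentile interval

        print(bootstrap_mean_diff(costs_cta, costs_functional))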

  2. FIA BioSum: a tool to evaluate financial costs, opportunities and effectiveness of fuel treatments.

    Treesearch

    Jeremy Fried; Glenn Christensen

    2004-01-01

    FIA BioSum, a tool developed by the USDA Forest Service's Forest Inventory and Analysis (FIA) Program, generates reliable cost estimates, identifies opportunities, and evaluates the effectiveness of fuel treatments in forested landscapes. BioSum is an analytic framework that integrates a suite of widely used computer models with a foundation of attribute-rich,...

  3. Theory and Techniques for Assessing the Demand and Supply of Outdoor Recreation in the United States

    Treesearch

    H. Ken Cordell; John C. Bergstrom

    1989-01-01

    As the central analysis for the 1989 Renewable Resources Planning Act Assessment, a household market model covering 37 recreational activities was computed for the United States. Equilibrium consumption and costs were estimated, as were likely future changes in consumption and costs in response to expected demand growth and alternative development and access policies...

  4. Accessibility and Affordability of Tertiary Education in Brazil, Colombia, Mexico and Peru within a Global Context. Policy Research Working Paper 4517

    ERIC Educational Resources Information Center

    Murakami, Yuki; Blom, Andreas

    2008-01-01

    This paper examines the financing of tertiary education in Brazil, Colombia, Mexico and Peru, comparing the affordability and accessibility of tertiary education with that in high-income countries. To measure affordability, the authors estimate education costs, living costs, grants, and loans. Further, they compute the participation rate,…

  5. Computational cost for detecting inspiralling binaries using a network of laser interferometric detectors

    NASA Astrophysics Data System (ADS)

    Pai, Archana; Bose, Sukanta; Dhurandhar, Sanjeev

    2002-04-01

    We extend a coherent network data-analysis strategy developed earlier for detecting Newtonian waveforms to the case of post-Newtonian (PN) waveforms. Since the PN waveform depends on the individual masses of the inspiralling binary, the parameter-space dimension increases by one from that of the Newtonian case. We obtain the number of templates and estimate the computational costs for PN waveforms: for a lower mass limit of 1 solar mass, for LIGO-I noise, and with a 3% maximum mismatch, the online computational speed requirement for a single detector is a few Gflops; for a two-detector network it is hundreds of Gflops, and for a three-detector network it is tens of Tflops. Apart from idealistic networks, we obtain results for realistic networks comprising LIGO and VIRGO. Finally, we compare costs incurred in a coincidence detection strategy with those incurred in the coherent strategy detailed above.

  6. Regulation, the capital-asset pricing model, and the arbitrage pricing theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roll, R.W.; Ross, S.A.

    1983-05-26

    This article describes the arbitrage pricing theory (APT) and compares it with the capital-asset pricing model (CAPM) as a tool for computing the cost of capital in utility regulatory proceedings. The article argues that the APT is a significantly superior method for determining equity cost, and demonstrates that its application to utilities yields more sensible estimates of the cost of equity capital than the CAPM. 8 references, 1 figure, 2 tables.
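
    To make the comparison concrete, the sketch below computes a cost of equity under the single-factor CAPM and under a multi-factor APT-style model; the risk-free rate, betas, and factor premia are illustrative assumptions, not values from the article.

        # Sketch: cost of equity under CAPM versus a multi-factor APT-style model.
        # All rates, betas, and factor premia are hypothetical.
        def capm_cost_of_equity(risk_free, beta, market_premium):
            return risk_free + beta * market_premium

        def apt_cost_of_equity(risk_free, betas, factor_premia):
            return risk_free + sum(b * p for b, p in zip(betas, factor_premia))

        print(capm_cost_of_equity(risk_free=0.04, beta=0.7, market_premium=0.06))
        print(apt_cost_of_equity(risk_free=0.04,
                                 betas=[0.5, 0.3, 0.2],          # exposures to APT factors
                                 factor_premia=[0.03, 0.02, 0.01]))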

  7. An optimized network for phosphorus load monitoring for Lake Okeechobee, Florida

    USGS Publications Warehouse

    Gain, W.S.

    1997-01-01

    Phosphorus load data were evaluated for Lake Okeechobee, Florida, for water years 1982 through 1991. Standard errors for load estimates were computed from available phosphorus concentration and daily discharge data. Components of error were associated with uncertainty in concentration and discharge data and were calculated for existing conditions and for 6 alternative load-monitoring scenarios for each of 48 distinct inflows. Benefit-cost ratios were computed for each alternative monitoring scenario at each site by dividing estimated reductions in load uncertainty by the 5-year average costs of each scenario in 1992 dollars. Absolute and marginal benefit-cost ratios were compared in an iterative optimization scheme to determine the most cost-effective combination of discharge and concentration monitoring scenarios for the lake. If the current (1992) discharge-monitoring network around the lake is maintained, the water-quality sampling at each inflow site twice each year is continued, and the nature of loading remains the same, the standard error of computed mean-annual load is estimated at about 98 metric tons per year compared to an absolute loading rate (inflows and outflows) of 530 metric tons per year. This produces a relative uncertainty of nearly 20 percent. The standard error in load can be reduced to about 20 metric tons per year (4 percent) by adopting an optimized set of monitoring alternatives at a cost of an additional $200,000 per year. The final optimized network prescribes changes to improve both concentration and discharge monitoring. These changes include the addition of intensive sampling with automatic samplers at 11 sites, the initiation of event-based sampling by observers at another 5 sites, the continuation of periodic sampling 12 times per year at 1 site, the installation of acoustic velocity meters to improve discharge gaging at 9 sites, and the improvement of a discharge rating at 1 site.
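
    A hedged, greatly simplified sketch of the selection logic: rank candidate monitoring upgrades by marginal benefit-cost ratio (uncertainty reduction per dollar) and add them until the budget is exhausted. The site names, variance reductions, costs, and budget are hypothetical, and the actual study used an iterative optimization over 48 inflows rather than this one-pass greedy pass.

        # Sketch: greedy selection of monitoring upgrades by marginal benefit-cost ratio,
        # where "benefit" is the reduction in load-estimate variance bought by an upgrade.
        upgrades = [   # (site/upgrade, variance reduction (metric tons/yr)^2, annual cost $)
            ("inflow A, automatic sampler", 900.0, 25_000),
            ("inflow B, acoustic velocity meter", 400.0, 15_000),
            ("inflow C, event-based sampling", 250.0, 8_000),
            ("inflow D, extra periodic samples", 60.0, 6_000),
        ]

        budget = 40_000
        chosen, spent = [], 0
        for site, benefit, cost in sorted(upgrades, key=lambda u: u[1] / u[2], reverse=True):
            if spent + cost <= budget:
                chosen.append(site)
                spent += cost

        print(chosen, spent)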

  8. Cost-effectiveness of rabies post exposure prophylaxis in Iran.

    PubMed

    Hatam, Nahid; Esmaelzade, Firooz; Mirahmadizadeh, Alireza; Keshavarz, Khosro; Rajabi, Abdolhalim; Afsar Kazerooni, Parvin; Ataollahi, Marzieh

    2014-01-01

    Rabies is one of the most important officially reported viral zoonotic diseases because of its global distribution, outbreaks, high human and veterinary costs, and high death rate, and it causes large economic losses in different countries of the world every year. Rabies is the deadliest of diseases: once symptoms develop in a person, death is certain. However, deaths resulting from rabies can be prevented by post-exposure prophylaxis. To this end, in Iran and most countries of the world, all people who are exposed to animal bite receive post-exposure prophylaxis (PEP) treatment. The present survey aimed to investigate the cost-effectiveness of PEP in southern Iran. The study estimated the PEP costs from the government's perspective with a step-down method for people exposed to animal bite, estimated the number of DALYs prevented by PEP using a decision tree model, and computed the incremental cost-effectiveness ratio. Information was collected on all reported animal bite cases (n=7,111) in Fars Province who were referred to rabies registries in urban and rural health centers to receive active care. Performing the PEP program was estimated to cost 1,052,756.1 USD for one year, and the estimated cost for the treatment of each animal bite case and each prevented death was 148.04 and 5,945.42 USD, respectively. Likewise, 4,509.82 DALYs were prevented in southern Iran in 2011 by the PEP program. The incremental cost-effectiveness ratio for each DALY was estimated to be 233.43 USD. In addition to its full effectiveness in prophylaxis against rabies, the PEP program saves the financial resources of the society as well. This study showed performing PEP to be cost-effective.
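
    The headline ratio follows directly from the figures reported in the abstract; the sketch below reproduces the arithmetic under the assumption that the comparator (no PEP) incurs zero program cost.

        # Sketch: incremental cost-effectiveness ratio (ICER) from the abstract's figures;
        # the comparator (no PEP) is assumed here to have zero program cost.
        program_cost_usd = 1_052_756.1       # annual cost of the PEP program
        dalys_averted = 4_509.82             # DALYs prevented by the program

        icer = program_cost_usd / dalys_averted
        print(f"cost per DALY averted: ${icer:.2f}")   # ~233 USD, close to the reported 233.43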

  9. Analytical Study on Flight Performance of a RP Laser Launcher

    NASA Astrophysics Data System (ADS)

    Katsurayama, H.; Ushio, M.; Komurasaki, K.; Arakawa, Y.

    2005-04-01

    An air-breathing RP Laser Launcher has been proposed as the alternative to conventional chemical launch systems. This paper analytically examines the feasibility of SSTO system powered by RP lasers. The trajectory from the ground to the geosynchronous orbit is computed and the launch cost including laser-base development is estimated. The engine performance is evaluated by CFD computations and a cycle analysis. The results show that the beam power of 2.3MW per unit initial vehicle mass is optimum to reach a geo-synchronous transfer orbit, and 3,000 launches are necessary to redeem the cost for laser transmitter.

  10. The 20 kW battery study program

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Six battery configurations were selected for detailed study and these are described. A computer program was modified for use in estimation of the weights, costs, and reliabilities of each of the configurations, as a function of several important independent variables, such as system voltage, battery voltage ratio (battery voltage/bus voltage), and the number of parallel units into which each of the components of the power subsystem was divided. The computer program was used to develop the relationship between the independent variables alone and in combination, and the dependent variables: weight, cost, and availability. Parametric data, including power loss curves, are given.

  11. Shuttle user analysis (study 2.2): Volume 3. Business Risk And Value of Operations in space (BRAVO). Part 4: Computer programs and data look-up

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Computer program listings as well as graphical and tabulated data needed by the analyst to perform a BRAVO analysis were examined. A graphical aid which can be used to determine the Earth coverage of satellites in synchronous equatorial orbits was described. A listing for the satellite synthesis computer program, as well as a sample printout for the DSCS-II satellite program and a listing of the symbols used in the program, were included. The APL-language listing for the payload program cost estimating computer program was given. This language is compatible with many of the time-sharing remote-terminal computers used in the United States. Data on the Intelsat communications network were studied. Costs for telecommunications systems leasing, line-of-sight microwave relay communications systems, submarine telephone cables, and terrestrial power generation systems were also described.

  12. A cost-construction model to assess the total cost of an anesthesiology residency program.

    PubMed

    Franzini, L; Berry, J M

    1999-01-01

    Although the total costs of graduate medical education are difficult to quantify, this information may be of great importance for health policy and planning over the next decade. This study describes the total costs associated with the residency program at the University of Texas--Houston Department of Anesthesiology during the 1996-1997 academic year. The authors used cost-construction methodology, which computes the cost of teaching from information on program description, resident enrollment, faculty and resident salaries and benefits, and overhead. Surveys of faculty and residents were conducted to determine the time spent in teaching activities; access to institutional and departmental financial records was obtained to quantify associated costs. The model was then developed and examined for a range of assumptions concerning resident productivity, replacement costs, and the cost allocation of activities jointly producing clinical care and education. The cost of resident training (cost of didactic teaching, direct clinical supervision, teaching-related preparation and administration, plus the support of the teaching program) was estimated at $75,070 per resident per year. This cost was less than the estimated replacement value of the teaching and clinical services provided by residents, $103,436 per resident per year. Sensitivity analysis, with different assumptions regarding resident replacement cost and reimbursement rates, varied the cost estimates but generally identified the anesthesiology residency program as a financial asset. In most scenarios, the value of the teaching and clinical services provided by residents exceeded the cost of the resources used in the educational program.

  13. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    PubMed

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
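
    A hedged numeric illustration of the central claim: when the error cost is asymmetric, the estimate that minimizes expected cost deviates systematically from the maximum-likelihood value. The Gaussian parameter distribution and piecewise-linear cost below are invented for the demonstration and are not the paper's model.

        # Sketch: with an asymmetric error cost, the cost-minimizing estimate deviates
        # from the maximum-likelihood (here, zero-mean) value of the uncertain parameter.
        import numpy as np

        rng = np.random.default_rng(1)
        samples = rng.normal(loc=0.0, scale=1.0, size=100_000)   # uncertain parameter, ML value = 0

        def expected_cost(estimate, over_cost=1.0, under_cost=5.0):
            err = estimate - samples
            return np.mean(np.where(err > 0, over_cost * err, under_cost * (-err)))

        grid = np.linspace(-2.0, 2.0, 201)
        best = grid[np.argmin([expected_cost(e) for e in grid])]
        print("cost-minimizing estimate:", best)   # shifted above 0 because underestimation costs more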

  14. The economic burden of occupational non-melanoma skin cancer due to solar radiation.

    PubMed

    Mofidi, Amirabbas; Tompa, Emile; Spencer, James; Kalcevich, Christina; Peters, Cheryl E; Kim, Joanne; Song, Chaojie; Mortazavi, Seyed Bagher; Demers, Paul A

    2018-06-01

    Solar ultraviolet (UV) radiation is the second most prevalent carcinogenic exposure in Canada and is similarly important in other countries with large Caucasian populations. The objective of this article was to estimate the economic burden associated with newly diagnosed non-melanoma skin cancers (NMSCs) attributable to occupational solar radiation exposure. Key cost categories considered were direct costs (healthcare costs, out-of-pocket costs (OOPCs), and informal caregiver costs); indirect costs (productivity/output costs and home production costs); and intangible costs (monetary value of the loss of health-related quality of life (HRQoL)). To generate the burden estimates, we used secondary data from multiple sources applied to computational methods developed from an extensive review of the literature. An estimated 2,846 (5.3%) of the 53,696 newly diagnosed cases of basal cell carcinoma (BCC) and 1,710 (9.2%) of the 18,549 newly diagnosed cases of squamous cell carcinoma (SCC) in 2011 in Canada were attributable to occupational solar radiation exposure. The combined total for direct and indirect costs of occupational NMSC cases is $28.9 million ($15.9 million for BCC and $13.0 million for SCC), and for intangible costs is $5.7 million ($0.6 million for BCC and $5.1 million for SCC). On a per-case basis, the total costs are $5,670 for BCC and $10,555 for SCC. The higher per-case cost for SCC is largely a result of a lower survival rate, and hence higher indirect and intangible costs. Our estimates can be used to raise awareness of occupational solar UV exposure as an important causal factor in NMSCs and can highlight the importance of occupational BCC and SCC among other occupational cancers.

  15. Model implementation for dynamic computation of system cost for advanced life support

    NASA Technical Reports Server (NTRS)

    Levri, J. A.; Vaccari, D. A.

    2004-01-01

    Life support system designs for long-duration space missions have a multitude of requirements drivers, such as mission objectives, political considerations, cost, crew wellness, inherent mission attributes, as well as many other influences. Evaluation of requirements satisfaction can be difficult, particularly at an early stage of mission design. Because launch cost is a critical factor and relatively easy to quantify, it is a point of focus in early mission design. The method used to determine launch cost influences the accuracy of the estimate. This paper discusses the appropriateness of dynamic mission simulation in estimating the launch cost of a life support system. This paper also provides an abbreviated example of a dynamic simulation life support model and possible ways in which such a model might be utilized for design improvement.

  16. Economic impact of fuel properties on turbine powered business aircraft

    NASA Technical Reports Server (NTRS)

    Powell, F. D.

    1984-01-01

    The principal objective was to estimate the economic impact on the turbine-powered business aviation fleet of potential changes in the composition and properties of aviation fuel. Secondary objectives included estimating the sensitivity of costs to specific fuel properties and assessing the directions in which further research should be directed. The study was based on the published characteristics of typical and specific modern aircraft in three classes: heavy jet, light jet, and turboprop. Missions of these aircraft were simulated by computer methods for each aircraft for several range and payload combinations, and assumed atmospheric temperatures ranging from nominal to extremely cold. Five fuels were selected for comparison with the reference fuel, nominal Jet A. An overview of the data, the mathematical models, the data reduction and analysis procedure, and the results of the study are given. The direct operating costs of the study fuels are compared with that of the reference fuel in the 1990 time-frame, and the anticipated fleet costs and fuel break-even costs are estimated.

  17. Computer program to assess impact of fatigue and fracture criteria on weight and cost of transport aircraft

    NASA Technical Reports Server (NTRS)

    Tanner, C. J.; Kruse, G. S.; Oman, B. H.

    1975-01-01

    A preliminary design analysis tool for rapidly performing trade-off studies involving fatigue, fracture, static strength, weight, and cost is presented. Analysis subprograms were developed for fatigue life, crack growth life, and residual strength, and these were linked to a structural synthesis module, which in turn was integrated into a computer program. The part definition module of a cost and weight analysis program was expanded to be compatible with the upgraded structural synthesis capability. The resultant vehicle design and evaluation program is named VDEP-2. It is an accurate and useful tool for estimating purposes at the preliminary design stage of airframe development. A sample case, along with an explanation of program applications and input preparation, is presented.

  18. Planning and processing multistage samples with a computer program—MUST.

    Treesearch

    John W. Hazard; Larry E. Stewart

    1974-01-01

    A computer program was written to handle multistage sampling designs in insect populations. It is, however, general enough to be used for any population where the number of stages does not exceed three. The program handles three types of sampling situations, all of which assume equal probability sampling. Option 1 takes estimates of sample variances, costs, and either...

  19. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm.

    PubMed

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-09-19

    In order to reduce the computational complexity, and improve the pitch/roll estimation accuracy of the low-cost attitude heading reference system (AHRS) under conditions of magnetic-distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions.
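
    A minimal sketch, under stated assumptions, of the filter structure described above: the state is the attitude quaternion, propagated with gyro rates, while the measurement is a quaternion already computed from accelerometer/magnetometer data (the role played by the TGIC step in the paper), so the measurement equation stays linear. The noise parameters and the q_meas input are illustrative, not taken from the paper.

      import numpy as np

      def kf_attitude_step(q, P, gyro, q_meas, dt, Q=1e-5, R=1e-2):
          """One predict/update cycle of a linear Kalman filter whose state is
          the attitude quaternion and whose measurement is a quaternion q_meas
          computed externally (sign-aligned with the prediction)."""
          wx, wy, wz = gyro
          Omega = 0.5 * np.array([[0.0, -wx, -wy, -wz],
                                  [wx,  0.0,  wz, -wy],
                                  [wy, -wz,  0.0,  wx],
                                  [wz,  wy, -wx,  0.0]])
          F = np.eye(4) + Omega * dt              # first-order quaternion propagation
          q_pred = F @ q
          q_pred /= np.linalg.norm(q_pred)
          P_pred = F @ P @ F.T + Q * np.eye(4)

          H = np.eye(4)                           # measurement is a quaternion itself
          S = H @ P_pred @ H.T + R * np.eye(4)
          K = P_pred @ H.T @ np.linalg.inv(S)
          q_new = q_pred + K @ (q_meas - q_pred)  # linear update, no linearization error
          q_new /= np.linalg.norm(q_new)
          P_new = (np.eye(4) - K @ H) @ P_pred
          return q_new, P_new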

  20. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm

    PubMed Central

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-01-01

    In order to reduce the computational complexity, and improve the pitch/roll estimation accuracy of the low-cost attitude heading reference system (AHRS) under conditions of magnetic-distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions. PMID:28925979

  1. Efficient Voronoi volume estimation for DEM simulations of granular materials under confined conditions

    PubMed Central

    Frenning, Göran

    2015-01-01

    When the discrete element method (DEM) is used to simulate confined compression of granular materials, the need arises to estimate the void space surrounding each particle with Voronoi polyhedra. This entails recurring Voronoi tessellation with small changes in the geometry, resulting in a considerable computational overhead. To overcome this limitation, we propose a method with the following features: • A local determination of the polyhedron volume is used, which considerably simplifies implementation of the method. • A linear approximation of the polyhedron volume is utilised, with intermittent exact volume calculations when needed. • The method allows highly accurate volume estimates to be obtained at a considerably reduced computational cost. PMID:26150975
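
    A minimal sketch of the update policy listed above (not the authors' DEM code): each particle's cell volume is extrapolated linearly between exact Voronoi computations, and the exact routine is invoked only intermittently. The callable exact_cell_volume is a hypothetical stand-in for a full tessellation-plus-volume calculation.

      class CellVolumeTracker:
          """Linear approximation of a particle's Voronoi cell volume with
          intermittent exact recomputation (a sketch of the policy only)."""

          def __init__(self, exact_cell_volume, recompute_every=20):
              self.exact = exact_cell_volume     # hypothetical exact-volume routine
              self.every = recompute_every
              self.calls = 0
              self.v_prev = None                 # last two exact values define the
              self.v_last = None                 # linear (secant) approximation
              self.t_prev = self.t_last = 0.0

          def volume(self, t, positions):
              self.calls += 1
              if self.v_last is None or self.calls % self.every == 0:
                  v = self.exact(positions)      # exact Voronoi volume (expensive)
                  self.v_prev, self.t_prev = (self.v_last or v), self.t_last
                  self.v_last, self.t_last = v, t
                  return v
              dt = self.t_last - self.t_prev
              rate = 0.0 if dt == 0 else (self.v_last - self.v_prev) / dt
              return self.v_last + rate * (t - self.t_last)   # cheap linear estimate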

  2. Estimating alcohol content of traditional brew in Western Kenya using culturally relevant methods: the case for cost over volume.

    PubMed

    Papas, Rebecca K; Sidle, John E; Wamalwa, Emmanuel S; Okumu, Thomas O; Bryant, Kendall L; Goulet, Joseph L; Maisto, Stephen A; Braithwaite, R Scott; Justice, Amy C

    2010-08-01

    Traditional homemade brew is believed to represent the highest proportion of alcohol use in sub-Saharan Africa. In Eldoret, Kenya, two types of brew are common: chang'aa, a distilled spirit, and busaa, a maize beer. Local residents refer to the amount of brew consumed by the amount of money spent, suggesting a culturally relevant estimation method. The purposes of this study were to analyze the ethanol content of chang'aa and busaa and to compare two methods of alcohol estimation: use by cost and use by volume, the latter being the current international standard. Laboratory results showed mean ethanol content was 34% (SD = 14%) for chang'aa and 4% (SD = 1%) for busaa. Standard drink unit equivalents for chang'aa and busaa, respectively, were 2 and 1.3 (US) and 3.5 and 2.3 (Great Britain). Using a computational approach, both methods demonstrated comparable results. We conclude that cost estimation of alcohol content is more culturally relevant and does not differ in accuracy from the international standard.
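
    A minimal sketch of the cost-based conversion compared above: money spent is translated into grams of ethanol (and standard drinks) via the local price per litre of brew and the laboratory ethanol content. The price and spending figures are hypothetical placeholders.

      # Convert money spent on brew into grams of ethanol and standard drinks.
      ETHANOL_DENSITY_G_PER_ML = 0.789

      def grams_ethanol_from_cost(money_spent, price_per_litre, ethanol_fraction):
          litres = money_spent / price_per_litre
          return litres * 1000 * ethanol_fraction * ETHANOL_DENSITY_G_PER_ML

      # e.g. chang'aa at 34% ethanol, spending 50 units of currency at a
      # hypothetical price of 100 per litre
      grams = grams_ethanol_from_cost(50, 100, 0.34)
      us_standard_drinks = grams / 14.0   # one US standard drink is about 14 g ethanol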

  3. Improving stochastic estimates with inference methods: calculating matrix diagonals.

    PubMed

    Selig, Marco; Oppermann, Niels; Ensslin, Torsten A

    2012-02-01

    Estimating the diagonal entries of a matrix that is not directly accessible but only available as a linear operator in the form of a computer routine is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy or the computational costs of matrix probing methods to estimate matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes, in cases in which some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal are determined from the probes themselves. The developed algorithm is successfully applied to mock and real world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method.
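
    For context, a minimal sketch of the plain stochastic probing baseline that the paper improves upon: for Rademacher probe vectors z, the expectation of z * (Az) equals the diagonal of A, so the diagonal of an operator available only as a matrix-vector routine can be estimated from a handful of probes.

      import numpy as np

      def probe_diagonal(apply_A, n, n_probes=10, rng=None):
          """Estimate diag(A) from matrix-vector products with Rademacher probes."""
          rng = rng or np.random.default_rng(0)
          acc = np.zeros(n)
          for _ in range(n_probes):
              z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
              acc += z * apply_A(z)                 # E[z * (A z)] = diag(A)
          return acc / n_probes

      # Example with an explicit matrix standing in for the "computer routine":
      A = np.diag(np.arange(1.0, 6.0)) + 0.1
      diag_est = probe_diagonal(lambda v: A @ v, n=5, n_probes=200)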

  4. POSTPROCESSING MIXED FINITE ELEMENT METHODS FOR SOLVING CAHN-HILLIARD EQUATION: METHODS AND ERROR ANALYSIS

    PubMed Central

    Wang, Wansheng; Chen, Long; Zhou, Jie

    2015-01-01

    A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or in a higher-order space), for which many fast Poisson solvers can be applied. The nonlinear iteration is applied only to a much smaller problem, and the computational cost of using Newton and direct solvers is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique retains the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative norm error estimates, are non-trivial and differ from existing results in the literature. PMID:27110063

  5. Development of adaptive observation strategy using retrospective optimal interpolation

    NASA Astrophysics Data System (ADS)

    Noh, N.; Kim, S.; Song, H.; Lim, G.

    2011-12-01

    Retrospective optimal interpolation (ROI) is a method used to minimize cost functions with multiple minima without using adjoint models. Song and Lim (2011) performed experiments to reduce the computational costs of implementing ROI by transforming the control variables into eigenvectors of the background error covariance. We adapt the ROI algorithm to compute sensitivity estimates of severe weather events over the Korean peninsula. The eigenvectors of the ROI algorithm are modified every time observations are assimilated. This implies that the modified eigenvectors show the error distribution of the control variables that are updated by assimilating observations, so we can estimate the effects of specific observations. In order to verify the adaptive observation strategy, high-impact weather over the Korean peninsula is simulated and interpreted using the WRF modeling system, and sensitive regions for each high-impact weather event are calculated. The effects of assimilation for each observation type are discussed.

  6. Running Neuroimaging Applications on Amazon Web Services: How, When, and at What Cost?

    PubMed Central

    Madhyastha, Tara M.; Koh, Natalie; Day, Trevor K. M.; Hernández-Fernández, Moises; Kelley, Austin; Peterson, Daniel J.; Rajan, Sabreena; Woelfer, Karl A.; Wolf, Jonathan; Grabowski, Thomas J.

    2017-01-01

    The contribution of this paper is to identify and describe current best practices for using Amazon Web Services (AWS) to execute neuroimaging workflows “in the cloud.” Neuroimaging offers a vast set of techniques by which to interrogate the structure and function of the living brain. However, many of the scientists for whom neuroimaging is an extremely important tool have limited training in parallel computation. At the same time, the field is experiencing a surge in computational demands, driven by a combination of data-sharing efforts, improvements in scanner technology that allow acquisition of images with higher image resolution, and by the desire to use statistical techniques that stress processing requirements. Most neuroimaging workflows can be executed as independent parallel jobs and are therefore excellent candidates for running on AWS, but the overhead of learning to do so and determining whether it is worth the cost can be prohibitive. In this paper we describe how to identify neuroimaging workloads that are appropriate for running on AWS, how to benchmark execution time, and how to estimate cost of running on AWS. By benchmarking common neuroimaging applications, we show that cloud computing can be a viable alternative to on-premises hardware. We present guidelines that neuroimaging labs can use to provide a cluster-on-demand type of service that should be familiar to users, and scripts to estimate cost and create such a cluster. PMID:29163119
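
    A minimal sketch of the kind of back-of-the-envelope cost estimate described above (not the authors' scripts): for an embarrassingly parallel neuroimaging workload, cost is driven by the benchmarked per-subject runtime, the number of subjects, the hourly instance price, and storage. All prices and runtimes below are hypothetical placeholders.

      def estimate_aws_cost(n_subjects, hours_per_subject, price_per_instance_hour,
                            subjects_per_instance=1, storage_gb=0.0,
                            storage_price_per_gb_month=0.023, months=1.0):
          """Rough compute-plus-storage cost for an independent-job workload."""
          compute_hours = n_subjects * hours_per_subject / subjects_per_instance
          compute_cost = compute_hours * price_per_instance_hour
          storage_cost = storage_gb * storage_price_per_gb_month * months
          return compute_cost + storage_cost

      # e.g. 500 subjects, 3 h each, at a hypothetical $0.40 per instance-hour
      cost = estimate_aws_cost(500, 3.0, 0.40, storage_gb=2000)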

  7. Estimating the implied cost of carbon in future scenarios using a CGE model: The Case of Colorado

    DOE PAGES

    Hannum, Christopher; Cutler, Harvey; Iverson, Terrence; ...

    2017-01-07

    We develop a state-level computable general equilibrium (CGE) model that reflects the roles of coal, natural gas, wind, solar, and hydroelectricity in supplying electricity, using Colorado as a case study. Also, we focus on the economic impact of implementing Colorado's existing Renewable Portfolio Standard, updated in 2013. This requires that 25% of state generation come from qualifying renewable sources by 2020. We evaluate the policy under a variety of assumptions regarding wind integration costs and assumptions on the persistence of federal subsidies for wind. Specifically, we estimate the implied price of carbon as the carbon price at which a state-level policy would pass a state-level cost-benefit analysis, taking account of estimated greenhouse gas emission reductions and ancillary benefits from corresponding reductions in criteria pollutants. Our findings suggest that without the Production Tax Credit (federal aid), the state policy of mandating renewable power generation (RPS) is costly to state actors, with an implied cost of carbon of about $17 per ton of CO2 with a 3% discount rate. Federal aid makes the decision between natural gas and wind nearly cost neutral for Colorado.
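
    A minimal sketch of the implied-carbon-price logic described above (not the authors' CGE model): the implied price is the present value of the policy's net cost to state actors, after crediting ancillary benefits, divided by the present value of emission reductions. All inputs are hypothetical placeholders.

      def implied_carbon_price(annual_net_cost, annual_tons_co2_reduced,
                               annual_ancillary_benefit=0.0, years=20, rate=0.03):
          """Carbon price at which the policy would break even in a cost-benefit test."""
          disc = [(1 + rate) ** -t for t in range(1, years + 1)]
          pv_cost = sum((annual_net_cost - annual_ancillary_benefit) * d for d in disc)
          pv_tons = sum(annual_tons_co2_reduced * d for d in disc)
          return pv_cost / pv_tons   # dollars per ton of CO2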

  8. Estimating the implied cost of carbon in future scenarios using a CGE model: The Case of Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hannum, Christopher; Cutler, Harvey; Iverson, Terrence

    We develop a state-level computable general equilibrium (CGE) model that reflects the roles of coal, natural gas, wind, solar, and hydroelectricity in supplying electricity, using Colorado as a case study. Also, we focus on the economic impact of implementing Colorado's existing Renewable Portfolio Standard, updated in 2013. This requires that 25% of state generation come from qualifying renewable sources by 2020. We evaluate the policy under a variety of assumptions regarding wind integration costs and assumptions on the persistence of federal subsidies for wind. Specifically, we estimate the implied price of carbon as the carbon price at which a state-level policy would pass a state-level cost-benefit analysis, taking account of estimated greenhouse gas emission reductions and ancillary benefits from corresponding reductions in criteria pollutants. Our findings suggest that without the Production Tax Credit (federal aid), the state policy of mandating renewable power generation (RPS) is costly to state actors, with an implied cost of carbon of about $17 per ton of CO2 with a 3% discount rate. Federal aid makes the decision between natural gas and wind nearly cost neutral for Colorado.

  9. 10 CFR Appendix I to Part 504 - Procedures for the Computation of the Real Cost of Capital

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    .... Government 13-week Treasury Bills, B=The “beta” coefficient—the relationship between the excess return on common stock and the excess return on the S&P 500 composite index, and R m=The mean excess return on the... statements of firms, and investment bankers. (2) The predicted nominal cost of debt (R d) may be estimated by...
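
    A minimal sketch of the CAPM-style relationship this excerpt references, with a Fisher-style conversion from a nominal to a real rate; the numerical inputs are hypothetical placeholders, not values prescribed by the regulation.

      def nominal_cost_of_equity(r_f, beta, mean_excess_market_return):
          """Risk-free rate plus beta times the mean excess return on the market index."""
          return r_f + beta * mean_excess_market_return

      def real_rate(nominal_rate, expected_inflation):
          """Fisher relation used to convert a nominal rate into a real one."""
          return (1 + nominal_rate) / (1 + expected_inflation) - 1

      # hypothetical inputs: 5% T-bill rate, beta of 1.1, 6% mean excess return, 3% inflation
      r_e_real = real_rate(nominal_cost_of_equity(0.05, 1.1, 0.06), 0.03)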

  10. Costs and Outcomes over 36 Years of Patients with Phenylketonuria Who Do and Do Not Remain on a Phenylalanine-Restricted Diet

    ERIC Educational Resources Information Center

    Guest, J. F.; Bai, J. J.; Taylor, R. R.; Sladkevicius, E.; Lee, P. J.; Lachmann, R. H.

    2013-01-01

    Background: To quantify the costs and consequences of managing phenylketonuria (PKU) in the UK and to estimate the potential implications to the UK's National Health Service (NHS) of keeping patients on a phenylalanine-restricted diet for life. Methods: A computer-based model was constructed depicting the management of PKU patients over the first…

  11. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and adaptive mesh refinement is used to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or stiffness in the governing equations near the incompressible limit is shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
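
    For reference, the adjoint-weighted residual estimate used in this kind of output-based error control is commonly written (up to sign convention, and not in notation taken from the paper) as

      \delta J \approx J(u_H) - J(u) \approx -\,\psi_h^{T} R_h(u_H),

    where R_h(u_H) is the fine-space residual of the coarse-space solution u_H and \psi_h is the corresponding discrete adjoint solution for the output J.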

  12. S-1 project. Volume I. Architecture. 1979 annual report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-01-01

    The US Navy is one of the world's largest users of digital computing equipment having a procurement cost of at least $50,000, and is the single largest such computer customer in the Department of Defense. Its projected acquisition plan for embedded computer systems during the first half of the 80s contemplates the installation of over 10,000 such systems at an estimated cost of several billions of dollars. This expenditure, though large, is dwarfed by the 85 billion dollars which DOD is projected to spend during the next half-decade on computer software, the near-majority of which will be spent by the Navy; the life-cycle costs of the 700,000+ lines of software for a single large Navy weapons systems application (e.g., AEGIS) have been conservatively estimated at most of a billion dollars. The S-1 Project is dedicated to realizing potentially large improvements in the efficiency with which such very large sums may be spent, so that greater military effectiveness may be secured earlier, and with smaller expenditures. The fundamental objectives of the S-1 Project's work are first to enable the Navy to be able to quickly, reliably and inexpensively evaluate at any time what is available from the state-of-the-art in digital processing systems and what the relevance of such systems may be to Navy data processing applications; and second to provide reference prototype systems to support possible competitive procurement action leading to deployment of such systems.

  13. An adaptive Gaussian process-based method for efficient Bayesian experimental design in groundwater contaminant source identification problems: ADAPTIVE GAUSSIAN PROCESS-BASED INVERSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao

    Surrogate models are commonly used in Bayesian approaches such as Markov Chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimations of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or implementing MCMC in a two-stage manner. Since the two-stage MCMC requires extra original model evaluations, the computational cost is still high. If measurement information is incorporated, a locally accurate approximation of the original model can be adaptively constructed with low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimation of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing the estimation accuracy, the new approach achieves a speed-up of about 200 times compared to our previous work using two-stage MCMC.
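
    A minimal sketch of the key idea above (not the authors' code): a GP surrogate of the forward model reports its own predictive standard deviation, which is added to the measurement variance in the likelihood so that the posterior is not over-confident where the surrogate is poor. The training data and noise levels are illustrative assumptions.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      # Fit a GP surrogate to a stand-in forward model evaluated at design points.
      X_train = np.random.default_rng(0).uniform(0, 1, size=(30, 2))
      y_train = np.sin(X_train[:, 0]) + X_train[:, 1] ** 2
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X_train, y_train)

      def log_likelihood(theta, observation, obs_var=0.01):
          """Gaussian log-likelihood with the surrogate's predictive variance
          added to the observation variance (avoids over-confidence)."""
          mean, std = gp.predict(np.atleast_2d(theta), return_std=True)
          total_var = obs_var + std[0] ** 2
          return -0.5 * ((observation - mean[0]) ** 2 / total_var
                         + np.log(2 * np.pi * total_var))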

  14. A novel method for state of charge estimation of lithium-ion batteries using a nonlinear observer

    NASA Astrophysics Data System (ADS)

    Xia, Bizhong; Chen, Chaoren; Tian, Yong; Sun, Wei; Xu, Zhihui; Zheng, Weiwei

    2014-12-01

    The state of charge (SOC) is important for the safety and reliability of battery operation since it indicates the remaining capacity of a battery. However, as the internal state of each cell cannot be directly measured, the value of the SOC has to be estimated. In this paper, a novel method for SOC estimation in electric vehicles (EVs) using a nonlinear observer (NLO) is presented. One advantage of this method is that it does not need complicated matrix operations, so the computation cost can be reduced. As a key step in design of the nonlinear observer, the state-space equations based on the equivalent circuit model are derived. The Lyapunov stability theory is employed to prove the convergence of the nonlinear observer. Four experiments are carried out to evaluate the performance of the presented method. The results show that the SOC estimation error converges to 3% within 130 s while the initial SOC error reaches 20%, and does not exceed 4.5% while the measurement suffers both 2.5% voltage noise and 5% current noise. Besides, the presented method has advantages over the extended Kalman filter (EKF) and sliding mode observer (SMO) algorithms in terms of computation cost, estimation accuracy and convergence rate.
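
    A minimal sketch of a Luenberger-style nonlinear SOC observer in the spirit described above (not the authors' design): coulomb counting is corrected by a term proportional to the terminal-voltage prediction error of a simple equivalent-circuit model. The OCV curve, capacity, resistance, and gain are hypothetical placeholders.

      def soc_observer_step(soc_est, current_a, v_measured, dt,
                            capacity_ah=2.5, r_internal=0.05, gain=0.01):
          """One observer step: coulomb counting plus voltage-error feedback."""
          ocv = 3.0 + 1.2 * soc_est                 # crude linear OCV(SOC) stand-in
          v_model = ocv - r_internal * current_a    # predicted terminal voltage
          d_soc = -current_a * dt / (capacity_ah * 3600.0)   # coulomb counting
          correction = gain * (v_measured - v_model)          # observer feedback
          return min(max(soc_est + d_soc + correction, 0.0), 1.0)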

  15. Can broader diffusion of value-based insurance design increase benefits from US health care without increasing costs? Evidence from a computer simulation model.

    PubMed

    Braithwaite, R Scott; Omokaro, Cynthia; Justice, Amy C; Nucifora, Kimberly; Roberts, Mark S

    2010-02-16

    Evidence suggests that cost sharing (i.e., copayments and deductibles) decreases health expenditures but also reduces essential care. Value-based insurance design (VBID) has been proposed to encourage essential care while controlling health expenditures. Our objective was to estimate the impact of broader diffusion of VBID on US health care benefits and costs. We used a published computer simulation of costs and life expectancy gains from US health care to estimate the impact of broader diffusion of VBID. Two scenarios were analyzed: (1) applying VBID solely to pharmacy benefits and (2) applying VBID to both pharmacy benefits and other health care services (e.g., devices). We assumed that cost sharing would be eliminated for high-value services (<$100,000 per life-year), would remain unchanged for intermediate- or unknown-value services ($100,000-$300,000 per life-year or unknown), and would be increased for low-value services (>$300,000 per life-year). All costs are provided in 2003 US dollars. Our simulation estimated that approximately 60% of health expenditures in the US are spent on low-value services, 20% are spent on intermediate-value services, and 20% are spent on high-value services. Correspondingly, the vast majority (80%) of health expenditures would have cost sharing that is impacted by VBID. With prevailing patterns of cost sharing, health care conferred 4.70 life-years at a per-capita annual expenditure of US$5,688. Broader diffusion of VBID to pharmaceuticals increased the benefit conferred by health care by 0.03 to 0.05 additional life-years, without increasing costs and without increasing out-of-pocket payments. Broader diffusion of VBID to other health care services could increase the benefit conferred by health care by 0.24 to 0.44 additional life-years, also without increasing costs and without increasing overall out-of-pocket payments. Among those without health insurance, using cost savings from VBID to subsidize insurance coverage would increase the benefit conferred by health care by 1.21 life-years, a 31% increase. Broader diffusion of VBID may amplify benefits from US health care without increasing health expenditures.

  16. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    NASA Astrophysics Data System (ADS)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, and the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
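
    For reference, the Green-Kubo expression underlying the MD approach sketched above relates the thermal conductivity to the heat-flux autocorrelation function; in one common normalization (conventions vary, and this is not notation taken from the abstract),

      \kappa = \frac{1}{3 V k_B T^{2}} \int_0^{\infty} \langle \mathbf{J}(t) \cdot \mathbf{J}(0) \rangle \, dt,

    where V is the simulation-cell volume, T the temperature, and \mathbf{J} the total heat current measured along the MD trajectory.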

  17. Fast and accurate genotype imputation in genome-wide association studies through pre-phasing

    PubMed Central

    Howie, Bryan; Fuchsberger, Christian; Stephens, Matthew; Marchini, Jonathan; Abecasis, Gonçalo R.

    2013-01-01

    Sequencing efforts, including the 1000 Genomes Project and disease-specific efforts, are producing large collections of haplotypes that can be used for genotype imputation in genome-wide association studies (GWAS). Imputing from these reference panels can help identify new risk alleles, but the use of large panels with existing methods imposes a high computational burden. To keep imputation broadly accessible, we introduce a strategy called “pre-phasing” that maintains the accuracy of leading methods while cutting computational costs by orders of magnitude. In brief, we first statistically estimate the haplotypes for each GWAS individual (“pre-phasing”) and then impute missing genotypes into these estimated haplotypes. This reduces the computational cost because: (i) the GWAS samples must be phased only once, whereas standard methods would implicitly re-phase with each reference panel update; (ii) it is much faster to match a phased GWAS haplotype to one reference haplotype than to match unphased GWAS genotypes to a pair of reference haplotypes. This strategy will be particularly valuable for repeated imputation as reference panels evolve. PMID:22820512

  18. Computational Challenges in the Analysis of Petrophysics Using Microtomography and Upscaling

    NASA Astrophysics Data System (ADS)

    Liu, J.; Pereira, G.; Freij-Ayoub, R.; Regenauer-Lieb, K.

    2014-12-01

    Microtomography provides detailed 3D internal structures of rocks at micrometer to tens-of-nanometer resolution and is quickly turning into a new technology for studying the petrophysical properties of materials. An important step is the upscaling of these properties, because imaging at micron or sub-micron resolution can only be performed on samples of millimeter scale or smaller. We present here a recently developed computational workflow for the analysis of microstructures, including the upscaling of material properties. Computations of properties are first performed using conventional material science simulations at the micro to nano scale. The subsequent upscaling of these properties is done by a novel renormalization procedure based on percolation theory. We have tested the workflow using different rock samples, biological and food science materials. We have also applied the technique to high-resolution time-lapse synchrotron CT scans. In this contribution we focus on the computational challenges that arise from the big data problem of analyzing petrophysical properties and its subsequent upscaling. We discuss the following challenges: 1) Characterization of microtomography for extremely large data sets - our current capability. 2) Computational fluid dynamics simulations at pore-scale for permeability estimation - methods, computing cost and accuracy. 3) Solid mechanical computations at pore-scale for estimating elasto-plastic properties - computational stability, cost, and efficiency. 4) Extracting critical exponents from derivative models for scaling laws - models, finite element meshing, and accuracy. Significant progress in each of these challenges is necessary to transform microtomography from the current research problem into a robust computational big data tool for multi-scale scientific and engineering problems.

  19. Registration of surface structures using airborne focused ultrasound.

    PubMed

    Sundström, N; Börjesson, P O; Holmer, N G; Olsson, L; Persson, H W

    1991-01-01

    A low-cost measuring system, based on a personal computer combined with standard equipment for complex measurements and signal processing, has been assembled. Such a system increases the possibilities for small hospitals and clinics to finance advanced measuring equipment. A description of equipment developed for airborne ultrasound together with a personal computer-based system for fast data acquisition and processing is given. Two air-adapted ultrasound transducers with high lateral resolution have been developed. Furthermore, a few results for fast and accurate estimation of signal arrival time are presented. The theoretical estimation models developed are applied to skin surface profile registrations.

  20. Least-cost control of agricultural nutrient contributions to the Gulf of Mexico hypoxic zone.

    PubMed

    Rabotyagov, Sergey; Campbell, Todd; Jha, Manoj; Gassman, Philip W; Arnold, Jeffrey; Kurkalova, Lyubov; Secchi, Silvia; Feng, Hongli; Kling, Catherine L

    2010-09-01

    In 2008, the hypoxic zone in the Gulf of Mexico, measuring 20 720 km2, was one of the two largest reported since measurement of the zone began in 1985. The extent of the hypoxic zone is related to nitrogen and phosphorous loadings originating on agricultural fields in the upper Midwest. This study combines the tools of evolutionary computation with a water quality model and cost data to develop a trade-off frontier for the Upper Mississippi River Basin specifying the least cost of achieving nutrient reductions and the location of the agricultural conservation practices needed. The frontier allows policymakers and stakeholders to explicitly see the trade-offs between cost and nutrient reductions. For example, the cost of reducing annual nitrate-N loadings by 30% is estimated to be US$1.4 billion/year, with a concomitant 36% reduction in P and the cost of reducing annual P loadings by 30% is estimated to be US$370 million/year, with a concomitant 9% reduction in nitrate-N.
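
    A minimal sketch of the trade-off-frontier idea above (not the authors' evolutionary algorithm): given candidate conservation-practice allocations scored by annual cost and nutrient-loading reduction, only the non-dominated candidates are kept. The candidate values are illustrative, not model output.

      def pareto_frontier(candidates):
          """candidates: list of (cost, reduction) tuples; returns the non-dominated set."""
          frontier = []
          for cost, reduction in sorted(candidates):          # ascending cost
              if not frontier or reduction > frontier[-1][1]:  # strictly better reduction
                  frontier.append((cost, reduction))
          return frontier

      # Hypothetical candidate allocations: (annual cost, fractional nutrient reduction)
      frontier = pareto_frontier([(100.0, 0.10), (250.0, 0.22), (240.0, 0.18),
                                  (400.0, 0.30), (500.0, 0.29)])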

  1. Cost and resource utilization associated with use of computed tomography to evaluate chest pain in the emergency department: the Rule Out Myocardial Infarction using Computer Assisted Tomography (ROMICAT) study.

    PubMed

    Hulten, Edward; Goehler, Alexander; Bittencourt, Marcio Sommer; Bamberg, Fabian; Schlett, Christopher L; Truong, Quynh A; Nichols, John; Nasir, Khurram; Rogers, Ian S; Gazelle, Scott G; Nagurney, John T; Hoffmann, Udo; Blankstein, Ron

    2013-09-01

    Coronary computed tomographic angiography (cCTA) allows rapid, noninvasive exclusion of obstructive coronary artery disease (CAD). However, concern exists whether implementation of cCTA in the assessment of patients presenting to the emergency department with acute chest pain will lead to increased downstream testing and costs compared with alternative strategies. Our aim was to compare observed actual costs of usual care (UC) with projected costs of a strategy including early cCTA in the evaluation of patients with acute chest pain in the Rule Out Myocardial Infarction Using Computer Assisted Tomography I (ROMICAT I) study. We compared cost and hospital length of stay of UC observed among 368 patients enrolled in the ROMICAT I study with projected costs of management based on cCTA. Costs of UC were determined by an electronic cost accounting system. Notably, UC was not influenced by cCTA results because patients and caregivers were blinded to the cCTA results. Costs after early implementation of cCTA were estimated assuming changes in management based on cCTA findings of the presence and severity of CAD. Sensitivity analysis was used to test the influence of key variables on both outcomes and costs. We determined that in comparison with UC, cCTA-guided triage, whereby patients with no CAD are discharged, could reduce total hospital costs by 23% (P<0.001). However, when the prevalence of obstructive CAD increases, index hospitalization cost increases such that when the prevalence of ≥ 50% stenosis is >28% to 33%, the use of cCTA becomes more costly than UC. cCTA may be a cost-saving tool in acute chest pain populations that have a prevalence of potentially obstructive CAD <30%. However, increased cost would be anticipated in populations with higher prevalence of disease.

  2. Cost analysis of Human Papillomavirus-related cervical diseases and genital warts in Swaziland.

    PubMed

    Ginindza, Themba G; Sartorius, Benn; Dlamini, Xolisile; Östensson, Ellinor

    2017-01-01

    Human papillomavirus (HPV) has proven to be the cause of several severe clinical conditions on the cervix, vulva, vagina, anus, oropharynx and penis. Several studies have assessed the costs of cervical lesions, cervical cancer (CC), and genital warts. However, few have been done in Africa and none in Swaziland. Cost analysis is critical in providing useful information for economic evaluations to guide policymakers concerned with the allocation of resources in order to reduce the disease burden. A prevalence-based cost of illness (COI) methodology was used to investigate the economic burden of HPV-related diseases. We used a top-down approach for the cost associated with hospital care and a bottom-up approach to estimate the cost associated with outpatient and primary care. The current study was conducted from a provider perspective since the state bears the majority of the costs of screening and treatment in Swaziland. All identifiable direct medical costs were considered for cervical lesions, cervical cancer and genital warts, which were primary diagnoses during 2015. A mix of a bottom-up micro-costing (ingredients) approach and a top-down approach was used to collect data on costs. All costs were computed at the price level of 2015 and converted to dollars ($). The total annual estimated direct medical cost associated with screening, managing and treating cervical lesions, CC and genital warts in Swaziland was $16 million. The largest cost in the analysis was estimated for treatment of high-grade cervical lesions and cervical cancer, representing 80% of the total cost ($12.6 million). Costs for screening only represented 5% of the total cost ($0.9 million). Treatment of genital warts represented 6% of the total cost ($1 million). According to the cost estimations in this study, the economic burden of HPV-related cervical diseases and genital warts represents a major public health issue in Swaziland. Prevention of HPV infection with a national HPV immunization programme for pre-adolescent girls would prevent the majority of CC-related deaths and associated costs.

  3. Estimating the cost of healthcare delivery in three hospitals in southern Ghana.

    PubMed

    Aboagye, A Q Q; Degboe, A N K; Obuobi, A A D

    2010-09-01

    The cost burden (called full cost) of providing health services at a referral, a district, and a mission hospital in Ghana was determined. Standard cost-finding and cost analysis tools recommended by the World Health Organization were used to analyse 2002 and 2003 hospital data. Full cost centre costs were computed by taking into account cash and non-cash expenses and allocating overhead costs to intermediate and final patient care centres. The full costs of running the mission hospital in 2002 and 2003 were US$600,295 and US$758,647 respectively; for the district hospital, the respective costs were US$496,240 and US$487,537; and for the referral hospital, the respective costs were US$1,160,535 and US$1,394,321. Of these, overhead costs ranged between 20% and 42%, while salaries made up between 45% and 60%. Based on healthcare utilization data, in 2003 the estimated cost per outpatient attendance was US$2.25 at the mission hospital, US$4.51 at the district hospital and US$8.5 at the referral hospital; inpatient day costs were US$6.05, US$9.95 and US$18.8 at the respective hospitals. User fees charged at service delivery points were generally below cost. However, some service delivery points have the potential to recover their costs. Salaries are the major cost component of the three hospitals. Overhead costs constitute an important part of hospital costs and must be noted in efforts to recover costs. Cost structures are different at different types of hospitals. Unit costs at service delivery points can be estimated and projected into the future.

  4. Principal Component Geostatistical Approach for large-dimensional inverse problems

    PubMed Central

    Kitanidis, P K; Lee, J

    2014-01-01

    The quasi-linear geostatistical approach is intended for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, for its textbook implementation, the approach involves iterations, to reach an optimum, and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for the determination of the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, are high. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m^2 n, though there are methods to reduce the computational cost. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free with respect to the Jacobian matrix and improves the scalability of the geostatistical inverse problem. For each iteration, it is required to perform K runs of the forward problem, where K is not just much smaller than m but can be smaller than n. The computational and storage cost of implementing the inverse procedure scales roughly linearly with m instead of m^2 as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best. PMID:25558113
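
    A minimal sketch of the matrix-free ingredient mentioned above (not the authors' implementation): instead of forming the Jacobian, a Gauss-Newton-type iteration can obtain Jacobian-vector products on demand from directional finite differences of the forward model. The toy forward model below is an illustrative assumption.

      import numpy as np

      def jacobian_vector_product(forward, s, v, eps=1e-6):
          """Approximate J(s) @ v, where J is the Jacobian of `forward` at s,
          using a central directional finite difference."""
          return (forward(s + eps * v) - forward(s - eps * v)) / (2 * eps)

      # Example with a toy nonlinear observation function of m unknowns
      def forward(s):
          return np.array([np.sum(s ** 2), s[0] * s[-1]])

      s0 = np.linspace(0.0, 1.0, 1000)       # m = 1000 unknowns
      Jv = jacobian_vector_product(forward, s0, np.ones_like(s0))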

  5. Principal Component Geostatistical Approach for large-dimensional inverse problems.

    PubMed

    Kitanidis, P K; Lee, J

    2014-07-01

    The quasi-linear geostatistical approach is intended for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, for its textbook implementation, the approach involves iterations, to reach an optimum, and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for the determination of the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, are high. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m^2 n, though there are methods to reduce the computational cost. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free with respect to the Jacobian matrix and improves the scalability of the geostatistical inverse problem. For each iteration, it is required to perform K runs of the forward problem, where K is not just much smaller than m but can be smaller than n. The computational and storage cost of implementing the inverse procedure scales roughly linearly with m instead of m^2 as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best.

  6. A geochemical module for "AMDTreat" to compute caustic quantity, effluent quantity, and sludge volume

    USGS Publications Warehouse

    Cravotta, Charles A.; Parkhurst, David L.; Means, Brent P; McKenzie, Bob; Morris, Harry; Arthur, Bill

    2010-01-01

    Treatment with caustic chemicals typically is used to increase pH and decrease concentrations of dissolved aluminum, iron, and/or manganese in large-volume, metal-laden discharges from active coal mines. Generally, aluminum and iron can be removed effectively at near-neutral pH (6 to 8), whereas active manganese removal requires treatment to alkaline pH (~10). The treatment cost depends on the specific chemical used (NaOH, CaO, Ca(OH)2, Na2CO3, or NH3) and increases with the quantities of chemical added and sludge produced. The pH and metals concentrations do not change linearly with the amount of chemical added. Consequently, the amount of caustic chemical needed to achieve a target pH and the corresponding effluent composition and sludge volume cannot be accurately determined without empirical titration data or the application of geochemical models to simulate the titration of the discharge water with caustic chemical(s). The AMDTreat computer program (http://amd.osmre.gov/) is widely used to compute costs for treatment of coal-mine drainage. Although AMDTreat can use results of empirical titration with industrial grade caustic chemicals to compute chemical costs for treatment of net-acidic or net-alkaline mine drainage, such data are rarely available. To improve the capability of AMDTreat to estimate (1) the quantity and cost of caustic chemicals to attain a target pH, (2) the concentrations of dissolved metals in treated effluent, and (3) the volume of sludge produced by the treatment, a titration simulation is being developed using the geochemical program PHREEQC (wwwbrr.cr.usgs.gov/projects/GWC_coupled/phreeqc/) that will be coupled as a module to AMDTreat. The simulated titration results can be compared with or used in place of empirical titration data to estimate chemical quantities and costs. This paper describes the development, evaluation, and potential utilization of the PHREEQC titration module for AMDTreat.

  7. Who pays and who benefits? How different models of shared responsibilities between formal and informal carers influence projections of costs of dementia management

    PubMed Central

    2011-01-01

    Background: The few studies that have attempted to estimate the future cost of caring for people with dementia in Australia are typically based on total prevalence and the cost per patient over the average duration of illness. However, costs associated with dementia care also vary according to the length of the disease, severity of symptoms and type of care provided. This study aimed to determine more accurately the future costs of dementia management by taking these factors into consideration. Methods: The current study estimated the prevalence of dementia in Australia (2010-2040). Data from a variety of sources was recalculated to distribute this prevalence according to the location (home/institution), care requirements (informal/formal), and dementia severity. The cost of care was attributed to redistributed prevalences and used in prediction of future costs of dementia. Results: Our computer modeling indicates that the ratio between the prevalence of people with mild/moderate/severe dementia will change over the three decades from 2010 to 2040 from 50/30/20 to 44/32/24. Taking into account the severity of symptoms, location of care and cost of care per hour, the current study estimates that the informal cost of care in 2010 is AU$3.2 billion and formal care at AU$5.0 billion per annum. By 2040 informal care is estimated to cost AU$11.6 billion and formal care AU$16.7 billion per annum. Interventions to slow disease progression will result in relative savings of 5% (AU$1.5 billion) per annum and interventions to delay disease onset will result in relative savings of 14% (AU$4 billion) of the cost per annum. With no intervention, the projected combined annual cost of formal and informal care for a person with dementia in 2040 will be around AU$38,000 (in 2010 dollars). An intervention to delay progression by 2 years will see this reduced to AU$35,000. Conclusions: These findings highlight the need to account for more than total prevalence when estimating the costs of dementia care. While the absolute values of cost of care estimates are subject to the validity and reliability of currently available data, dynamic systems modeling allows for future trends to be estimated. PMID:21988908

  8. Quantum computation with realistic magic-state factories

    NASA Astrophysics Data System (ADS)

    O'Gorman, Joe; Campbell, Earl T.

    2017-03-01

    Leading approaches to fault-tolerant quantum computation dedicate a significant portion of the hardware to computational factories that churn out high-fidelity ancillas called magic states. Consequently, efficient and realistic factory design is of paramount importance. Here we present the most detailed resource assessment to date of magic-state factories within a surface code quantum computer, along the way introducing a number of techniques. We show that the block codes of Bravyi and Haah [Phys. Rev. A 86, 052329 (2012), 10.1103/PhysRevA.86.052329] have been systematically undervalued; we track correlated errors both numerically and analytically, providing fidelity estimates without appeal to the union bound. We also introduce a subsystem code realization of these protocols with constant time and low ancilla cost. Additionally, we confirm that magic-state factories have space-time costs that scale as a constant factor of surface code costs. We find that the magic-state factory required for postclassical factoring can be as small as 6.3 million data qubits, ignoring ancilla qubits, assuming 10^-4 error gates and the availability of long-range interactions.

  9. Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2015-12-01

    In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.

  10. System-of-Systems Technology-Portfolio-Analysis Tool

    NASA Technical Reports Server (NTRS)

    O'Neil, Daniel; Mankins, John; Feingold, Harvey; Johnson, Wayne

    2012-01-01

    Advanced Technology Life-cycle Analysis System (ATLAS) is a system-of-systems technology-portfolio-analysis software tool. ATLAS affords capabilities to (1) compare estimates of the mass and cost of an engineering system based on competing technological concepts; (2) estimate life-cycle costs of an outer-space-exploration architecture for a specified technology portfolio; (3) collect data on state-of-the-art and forecasted technology performance, and on operations and programs; and (4) calculate an index of the relative programmatic value of a technology portfolio. ATLAS facilitates analysis by providing a library of analytical spreadsheet models for a variety of systems. A single analyst can assemble a representation of a system of systems from the models and build a technology portfolio. Each system model estimates mass, and life-cycle costs are estimated by a common set of cost models. Other components of ATLAS include graphical-user-interface (GUI) software, algorithms for calculating the aforementioned index, a technology database, a report generator, and a form generator for creating the GUI for the system models. At the time of this reporting, ATLAS is a prototype, embodied in Microsoft Excel and several thousand lines of Visual Basic for Applications that run on both Windows and Macintosh computers.

  11. On-line Model Structure Selection for Estimation of Plasma Boundary in a Tokamak

    NASA Astrophysics Data System (ADS)

    Škvára, Vít; Šmídl, Václav; Urban, Jakub

    2015-11-01

    Control of the plasma field in the tokamak requires reliable estimation of the plasma boundary. The plasma boundary is given by a complex mathematical model and the only available measurements are responses of induction coils around the plasma. For the purpose of boundary estimation the model can be reduced to simple linear regression with potentially infinitely many elements. The number of elements must be selected manually and this choice significantly influences the resulting shape. In this paper, we investigate the use of formal model structure estimation techniques for the problem. Specifically, we formulate a sparse least squares estimator using the automatic relevance principle. The resulting algorithm is a repetitive evaluation of the least squares problem which could be computed in real time. Performance of the resulting algorithm is illustrated on simulated data and evaluated with respect to a more detailed and computationally costly model FREEBIE.
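
    A minimal sketch of the sparsity mechanism referred to above (not the authors' code), using scikit-learn's ARD regression: each coefficient of the boundary expansion receives its own prior precision, so elements that the coil measurements cannot support are driven toward zero. The regressors and measurements are synthetic stand-ins.

      import numpy as np
      from sklearn.linear_model import ARDRegression

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 30))          # stand-in regressors (basis responses)
      true_coef = np.zeros(30)
      true_coef[:4] = [1.0, -0.5, 0.3, 0.8]   # only a few elements truly relevant
      y = X @ true_coef + 0.01 * rng.normal(size=100)   # stand-in coil measurements

      # ARD assigns an individual prior precision to every coefficient and
      # prunes the ones the data do not support.
      model = ARDRegression().fit(X, y)
      selected = np.flatnonzero(np.abs(model.coef_) > 1e-3)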

  12. PROFIT-PC: a program for estimating maximum net revenue from multiproduct harvests in Appalachian hardwoods

    Treesearch

    Chris B. LeDoux; John E. Baumgras; R. Bryan Selbe

    1989-01-01

    PROFIT-PC is a menu driven, interactive PC (personal computer) program that estimates optimum product mix and maximum net harvesting revenue based on projected product yields and stump-to-mill timber harvesting costs. Required inputs include the number of trees/acre by species and 2-inch diameter-at-breast-height class, delivered product prices by species and product...

  13. The Returns to the Brain Drain and Brain Circulation in Sub-Saharan Africa: Some Computations Using Data from Ghana. NBER Working Paper No. 16813

    ERIC Educational Resources Information Center

    Nyarko, Yaw

    2011-01-01

    We look at the decision of the government or "central planner" in the allocation of scarce governmental resources for tertiary education, as well as that for the individual. We provide estimates of the net present values, or cost and benefits. These include costs of tertiary education; the benefits of improved skills of those who remain…
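
    A minimal sketch of the net-present-value comparison described above (not the paper's estimates): the planner weighs the up-front cost of tertiary education against a discounted stream of later benefits such as higher earnings or remittances. All figures are hypothetical placeholders.

      def net_present_value(cost_per_year, years_of_study, annual_benefit,
                            benefit_years, discount_rate=0.05):
          """Discounted benefits of tertiary education minus discounted costs."""
          pv_costs = sum(cost_per_year / (1 + discount_rate) ** t
                         for t in range(years_of_study))
          pv_benefits = sum(annual_benefit / (1 + discount_rate) ** t
                            for t in range(years_of_study, years_of_study + benefit_years))
          return pv_benefits - pv_costs

      # hypothetical: 4 years of study at 5,000/year, then 3,000/year of benefits for 30 years
      npv = net_present_value(cost_per_year=5000, years_of_study=4,
                              annual_benefit=3000, benefit_years=30)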

  14. Portuguese Family Physicians’ Awareness of Diagnostic and Laboratory Test Costs: A Cross-Sectional Study

    PubMed Central

    Sá, Luísa; Costa-Santos, Cristina; Teixeira, Andreia; Couto, Luciana; Costa-Pereira, Altamiro; Hespanhol, Alberto; Santos, Paulo; Martins, Carlos

    2015-01-01

    Background: Physicians' ability to make cost-effective decisions has been shown to be affected by their knowledge of health care costs. This study assessed whether Portuguese family physicians are aware of the costs of the most frequently prescribed diagnostic and laboratory tests. Methods: A cross-sectional study was conducted in a representative sample of Portuguese family physicians, using computer-assisted telephone interviews for data collection. A Likert scale was used to assess physicians' level of agreement with four statements about health care costs. Family physicians were also asked to estimate the costs of diagnostic and laboratory tests. Each physician's cost estimate was compared with the true cost and the absolute error was calculated. Results: One-quarter (24%; 95% confidence interval: 23%–25%) of all cost estimates were accurate to within 25% of the true cost, with 55% (95% CI: 53–56) overestimating and 21% (95% CI: 20–22) underestimating the actual cost. The majority (76%) of family physicians thought they did not have or were uncertain as to whether they had adequate knowledge of diagnostic and laboratory test costs, and only 7% reported receiving adequate education. The majority of the family physicians (82%) said that they had adequate access to information about diagnostic and laboratory test costs. Thirty-three percent thought that costs did not influence their decision to order tests, while 27% were uncertain. Conclusions: Portuguese family physicians have limited awareness of diagnostic and laboratory test costs, and our results demonstrate a need for improved education in this area. Further research should focus on identifying whether interventions in cost knowledge actually change ordering behavior, on identifying optimal methods to disseminate cost information, and on improving the cost-effectiveness of care. PMID:26356625

  15. Comparisons of some large scientific computers

    NASA Technical Reports Server (NTRS)

    Credeur, K. R.

    1981-01-01

    In 1975, the National Aeronautics and Space Administration (NASA) began studies to assess the technical and economic feasibility of developing a computer having a sustained computational speed of one billion floating point operations per second and a working memory of at least 240 million words. Such a powerful computer would allow computational aerodynamics to play a major role in aeronautical design and advanced fluid dynamics research. Based on favorable results from these studies, NASA proceeded with developmental plans. The computer was named the Numerical Aerodynamic Simulator (NAS). To help ensure that the estimated cost, schedule, and technical scope were realistic, a brief study was made of past large scientific computers. Large discrepancies between inception and operation in scope, cost, or schedule were studied so that they could be minimized with NASA's proposed new computer. The main computers studied were the ILLIAC IV, STAR 100, Parallel Element Processor Ensemble (PEPE), and Shuttle Mission Simulator (SMS) computer. Comparison data on memory and speed were also obtained on the IBM 650, 704, 7090, 360-50, 360-67, 360-91, and 370-195; the CDC 6400, 6600, 7600, CYBER 203, and CYBER 205; CRAY 1; and the Advanced Scientific Computer (ASC). A few lessons learned conclude the report.

  16. Wastewater Treatment Costs and Outlays in Organic Petrochemicals: Standards Versus Taxes With Methodology Suggestions for Marginal Cost Pricing and Analysis

    NASA Astrophysics Data System (ADS)

    Thompson, Russell G.; Singleton, F. D., Jr.

    1986-04-01

    With the methodology recommended by Baumol and Oates, comparable estimates of wastewater treatment costs and industry outlays are developed for effluent standard and effluent tax instruments for pollution abatement in five hypothetical organic petrochemicals (olefins) plants. The computational method uses a nonlinear simulation model for wastewater treatment to estimate the system state inputs for linear programming cost estimation, following a practice developed in a National Science Foundation (Research Applied to National Needs) study at the University of Houston and used to estimate Houston Ship Channel pollution abatement costs for the National Commission on Water Quality. Focusing on best practical and best available technology standards, with effluent taxes adjusted to give nearly equal pollution discharges, shows that average daily treatment costs (and the confidence intervals for treatment cost) would always be less for the effluent tax than for the effluent standard approach. However, industry's total outlay for these treatment costs, plus effluent taxes, would always be greater for the effluent tax approach than the total treatment costs would be for the effluent standard approach. Thus the practical necessity of showing smaller outlays as a prerequisite for a policy change toward efficiency dictates the need to link the economics at the microlevel with that at the macrolevel. Aggregation of the plants into a programming modeling basis for individual sectors and for the economy would provide a sound basis for effective policy reform, because the opportunity costs of the salient regulatory policies would be captured. Then, the government's policymakers would have the informational insights necessary to legislate more efficient environmental policies in light of the wealth distribution effects.

  17. Dual Quaternions as Constraints in 4D-DPM Models for Pose Estimation.

    PubMed

    Martinez-Berti, Enrique; Sánchez-Salmerón, Antonio-José; Ricolfe-Viala, Carlos

    2017-08-19

    The goal of this research work is to improve the accuracy of human pose estimation using the Deformation Part Model (DPM) without increasing computational complexity. First, the proposed method seeks to improve pose estimation accuracy by adding the depth channel to DPM, which was formerly defined based only on red-green-blue (RGB) channels, in order to obtain a four-dimensional DPM (4D-DPM). In addition, computational complexity can be controlled by reducing the number of joints modeled, yielding a reduced 4D-DPM. Finally, complete solutions are obtained by recovering the omitted joints with inverse kinematics models. In this context, the main goal of this paper is to analyze the effect on pose estimation timing cost of using dual quaternions to solve the inverse kinematics.

  18. Sensitivity analysis of the add-on price estimate for the silicon web growth process

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1981-01-01

    The web growth process, a silicon-sheet technology option developed for the flat plate solar array (FSA) project, was examined. Base case data for the technical and cost parameters of the technical and commercial readiness phases of the FSA project are projected. The sensitivity of the process add-on price to cost parameters such as equipment, space, direct labor, materials, and utilities, and to production parameters such as growth rate and run length, is analyzed using the base case data and a computer program developed specifically to perform the sensitivity analysis with improved price estimation. Silicon price, sheet thickness, and cell efficiency are also discussed.

  19. Space Station racks weight and CG measurement using the rack insertion end-effector

    NASA Technical Reports Server (NTRS)

    Brewer, William V.

    1994-01-01

    The objective was to design a method to measure weight and center of gravity (C.G.) location for Space Station Modules by adding sensors to the existing Rack Insertion End Effector (RIEE). Accomplishments included alternative sensor placement schemes organized into categories. Vendors were queried for suitable sensor equipment recommendations. Inverse mathematical models for each category determine expected maximum sensor loads. Sensors are selected using these computations, yielding cost and accuracy data. Accuracy data for individual sensors are inserted into forward mathematical models to estimate the accuracy of an overall sensor scheme. Cost of the schemes can be estimated. Ease of implementation and operation are discussed.
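
    The core weight-and-CG computation behind such a sensor scheme is just a moment balance over the load readings; the sketch below shows the idea for vertical load cells, with all sensor positions and readings purely hypothetical (the actual RIEE sensor layout and models are in the report, not reproduced here).

      def weight_and_cg(forces, positions):
          """Estimate total weight and planar CG location from vertical load
          readings.  forces[i] is the load (N) measured at the sensor located
          at positions[i] = (x_i, y_i) in metres."""
          W = sum(forces)
          x_cg = sum(f * x for f, (x, _) in zip(forces, positions)) / W
          y_cg = sum(f * y for f, (_, y) in zip(forces, positions)) / W
          return W, (x_cg, y_cg)

      # Hypothetical three-point support of a rack
      print(weight_and_cg([4200.0, 3900.0, 4500.0],
                          [(0.0, 0.0), (1.8, 0.0), (0.9, 0.9)]))

    Propagating each sensor's accuracy through these expressions (the forward model mentioned above) then gives the accuracy of the overall weight and CG estimate.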

  20. Value of Landsat in urban water resources planning

    NASA Technical Reports Server (NTRS)

    Jackson, T. J.; Ragan, R. M.

    1977-01-01

    The reported investigation had the objective to evaluate the utility of satellite multispectral remote sensing in urban water resources planning. The results are presented of a study which was conducted to determine the economic impact of Landsat data. The use of Landsat data to estimate hydrologic model parameters employed in urban water resources planning is discussed. A decision regarding an employment of the Landsat data has to consider the tradeoff between data accuracy and cost. Bayesian decision theory is used in this connection. It is concluded that computer-aided interpretation of Landsat data is a highly cost-effective method of estimating the percentage of impervious area.

  1. Cost Savings Associated with the Adoption of a Cloud Computing Data Transfer System for Trauma Patients.

    PubMed

    Feeney, James M; Montgomery, Stephanie C; Wolf, Laura; Jayaraman, Vijay; Twohig, Michael

    2016-09-01

    Among transferred trauma patients, challenges with the transfer of radiographic studies include problems loading or viewing the studies at the receiving hospitals, and problems manipulating, reconstructing, or evaluating the transferred images. Cloud-based image transfer systems may address some of these problems. We reviewed the charts of patients transferred during one year surrounding the adoption of a cloud computing data transfer system. We compared the rates of repeat imaging before (precloud) and after (postcloud) the adoption of the cloud-based data transfer system. During the precloud period, 28 out of 100 patients required 90 repeat studies. With the cloud computing transfer system in place, three out of 134 patients required seven repeat films. There was a statistically significant decrease in the proportion of patients requiring repeat films (28% to 2.2%, P < .0001). Based on an annualized volume of 200 trauma patient transfers, the cost savings, estimated using three methods of cost analysis, are between $30,272 and $192,453.
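
    The reported significance of the drop in repeat imaging can be checked with a standard two-proportion z-test; the sketch below uses only the counts quoted in the abstract and the usual pooled normal approximation (the authors' exact test may differ).

      from math import sqrt, erfc

      def two_proportion_z(x1, n1, x2, n2):
          """Two-sided z-test for a difference between two proportions
          (pooled-variance normal approximation)."""
          p1, p2 = x1 / n1, x2 / n2
          p = (x1 + x2) / (n1 + n2)
          se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
          z = (p1 - p2) / se
          return z, erfc(abs(z) / sqrt(2))      # (z statistic, two-sided p-value)

      # 28/100 patients pre-cloud vs 3/134 post-cloud, as reported above
      print(two_proportion_z(28, 100, 3, 134))  # z is about 5.7, p well below .0001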

  2. Simplifying silicon burning: Application of quasi-equilibrium to (alpha) network nucleosynthesis

    NASA Technical Reports Server (NTRS)

    Hix, W. R.; Thielemann, F.-K.; Khokhlov, A. M.; Wheeler, J. C.

    1997-01-01

    While the need for accurate calculation of nucleosynthesis and the resulting rate of thermonuclear energy release within hydrodynamic models of stars and supernovae is clear, the computational expense of these nucleosynthesis calculations often forces a compromise in accuracy to reduce the computational cost. To redress this trade-off of accuracy for speed, the authors present an improved nuclear network which takes advantage of quasi-equilibrium in order to reduce the number of independent nuclei, and hence the computational cost of nucleosynthesis, without significant reduction in accuracy. In this paper they discuss the first application of this method, the further reduction in size of the minimal alpha network. The resultant QSE-reduced alpha network is twice as fast as the conventional alpha network it replaces and requires the tracking of half as many abundance variables, while accurately estimating the rate of energy generation. Such a reduction in cost is particularly necessary for future generations of multidimensional supernova models.

  3. Computer program to perform cost and weight analysis of transport aircraft. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A digital computer program for evaluating the weight and costs of advanced transport designs was developed. The resultant program, intended for use at the preliminary design level, incorporates both batch mode and interactive graphics run capability. The basis of the weight and cost estimation method developed is a unique way of predicting the physical design of each detail part of a vehicle structure at a time when only configuration concept drawings are available. In addition, the technique relies on methods to predict the precise manufacturing processes and the associated material required to produce each detail part. Weight data are generated in four areas of the program. Overall vehicle system weights are derived on a statistical basis as part of the vehicle sizing process. Theoretical weights, actual weights, and the weight of the raw material to be purchased are derived as part of the structural synthesis and part definition processes based on the computed part geometry.

  4. Computational methods for a three-dimensional model of the petroleum-discovery process

    USGS Publications Warehouse

    Schuenemeyer, J.H.; Bawiec, W.J.; Drew, L.J.

    1980-01-01

    A discovery-process model devised by Drew, Schuenemeyer, and Root can be used to predict the amount of petroleum to be discovered in a basin from some future level of exploratory effort: the predictions are based on historical drilling and discovery data. Because marginal costs of discovery and production are a function of field size, the model can be used to make estimates of future discoveries within deposit size classes. The modeling approach is a geometric one in which the area searched is a function of the size and shape of the targets being sought. A high correlation is assumed between the surface-projection area of the fields and the volume of petroleum. To predict how much oil remains to be found, the area searched must be computed, and the basin size and discovery efficiency must be estimated. The basin is assumed to be explored randomly rather than by pattern drilling. The model may be used to compute independent estimates of future oil at different depth intervals for a play involving multiple producing horizons. We have written FORTRAN computer programs that are used with Drew, Schuenemeyer, and Root's model to merge the discovery and drilling information and perform the necessary computations to estimate undiscovered petroleum. These programs may be modified easily for the estimation of remaining quantities of commodities other than petroleum. © 1980.

  5. Asymptotic Analysis Of The Total Least Squares ESPRIT Algorithm

    NASA Astrophysics Data System (ADS)

    Ottersten, B. E.; Viberg, M.; Kailath, T.

    1989-11-01

    This paper considers the problem of estimating the parameters of multiple narrowband signals arriving at an array of sensors. Modern approaches to this problem often involve costly procedures for calculating the estimates. The ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm was recently proposed as a means for obtaining accurate estimates without requiring a costly search of the parameter space. This method utilizes an array invariance to arrive at a computationally efficient multidimensional estimation procedure. Herein, the asymptotic distribution of the estimation error is derived for the Total Least Squares (TLS) version of ESPRIT. The Cramer-Rao Bound (CRB) for the ESPRIT problem formulation is also derived and found to coincide with the variance of the asymptotic distribution through numerical examples. The method is also compared to least squares ESPRIT and MUSIC as well as to the CRB for a calibrated array. Simulations indicate that the theoretic expressions can be used to accurately predict the performance of the algorithm.
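
    For readers unfamiliar with the estimator being analyzed, a textbook-style TLS-ESPRIT sketch for a uniform linear array is given below; it is a generic illustration of the algorithm, not the asymptotic analysis that is the subject of the paper, and the element spacing and sign conventions are assumptions.

      import numpy as np

      def tls_esprit(X, d, spacing=0.5):
          """TLS-ESPRIT direction-of-arrival estimation for a ULA.
          X: m x T matrix of array snapshots, d: number of sources,
          spacing: element spacing in wavelengths."""
          m, T = X.shape
          R = X @ X.conj().T / T                    # sample covariance
          _, eigvec = np.linalg.eigh(R)
          Es = eigvec[:, -d:]                       # signal subspace (d largest eigenvalues)
          E1, E2 = Es[:-1, :], Es[1:, :]            # two maximally overlapping subarrays
          # Total least squares solution of E1 @ Psi = E2
          _, _, Vh = np.linalg.svd(np.hstack([E1, E2]))
          V = Vh.conj().T
          V12, V22 = V[:d, d:], V[d:, d:]
          Psi = -V12 @ np.linalg.inv(V22)
          phases = np.angle(np.linalg.eigvals(Psi))
          return np.degrees(np.arcsin(phases / (2 * np.pi * spacing)))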

  6. Computer Vision for High-Throughput Quantitative Phenotyping: A Case Study of Grapevine Downy Mildew Sporulation and Leaf Trichomes.

    PubMed

    Divilov, Konstantin; Wiesner-Hanks, Tyr; Barba, Paola; Cadle-Davidson, Lance; Reisch, Bruce I

    2017-12-01

    Quantitative phenotyping of downy mildew sporulation is frequently used in plant breeding and genetic studies, as well as in studies focused on pathogen biology such as chemical efficacy trials. In these scenarios, phenotyping a large number of genotypes or treatments can be advantageous but is often limited by time and cost. We present a novel computational pipeline dedicated to estimating the percent area of downy mildew sporulation from images of inoculated grapevine leaf discs in a manner that is time and cost efficient. The pipeline was tested on images from leaf disc assay experiments involving two F1 grapevine families, one that had glabrous leaves (Vitis rupestris B38 × 'Horizon' [RH]) and another that had leaf trichomes (Horizon × V. cinerea B9 [HC]). Correlations between computer vision and manual visual ratings reached 0.89 in the RH family and 0.43 in the HC family. Additionally, we were able to use the computer vision system prior to sporulation to measure the percent leaf trichome area. We estimate that an experienced rater scoring sporulation would spend at least 90% less time using the computer vision system compared with the manual visual method. This will allow more treatments to be phenotyped in order to better understand the genetic architecture of downy mildew resistance and of leaf trichome density. We anticipate that this computer vision system will find applications in other pathosystems or traits where responses can be imaged with sufficient contrast from the background.
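
    The percent-area idea at the heart of the pipeline can be reduced to a thresholded pixel count; the sketch below is a deliberately simplified stand-in (fixed global threshold, synthetic image), not the published pipeline, which also handles illumination, disc segmentation, and trichome masking.

      import numpy as np

      def percent_area(gray, threshold, mask=None):
          """Percentage of (masked) pixels classified as sporulation.
          gray: 2-D array of intensities scaled to [0, 1];
          mask: optional boolean array marking the leaf-disc pixels."""
          pixels = gray if mask is None else gray[mask]
          return 100.0 * np.mean(pixels > threshold)

      # Synthetic example: bright sporulation pixels on a darker disc
      disc = np.random.default_rng(0).random((256, 256))
      print(f"{percent_area(disc, threshold=0.9):.1f}% of pixels above threshold")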

  7. Processing Shotgun Proteomics Data on the Amazon Cloud with the Trans-Proteomic Pipeline*

    PubMed Central

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W.; Moritz, Robert L.

    2015-01-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of cloud enabled Trans-Proteomic Pipeline by performing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. PMID:25418363
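
    A back-of-the-envelope version of the kind of resource and cost estimate mentioned above is shown below; the per-file runtime, instance count, and hourly price are hypothetical placeholders (real Amazon EC2 billing granularity and instance pricing vary), so this is only a sketch of the arithmetic, not the pipeline's own estimator.

      def cloud_batch_cost(n_files, minutes_per_file, n_instances, price_per_instance_hour):
          """Rough wall-clock time and cost for an embarrassingly parallel batch
          of searches spread evenly over identical on-demand instances."""
          total_instance_hours = n_files * minutes_per_file / 60.0
          wall_clock_hours = total_instance_hours / n_instances
          cost = total_instance_hours * price_per_instance_hour
          return wall_clock_hours, cost

      # Hypothetical numbers: 1100 files, 20 min each, 40 instances at $0.50/hour
      print(cloud_batch_cost(1100, 20, 40, 0.50))   # about 9 h wall clock, roughly $180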

  8. Processing shotgun proteomics data on the Amazon cloud with the trans-proteomic pipeline.

    PubMed

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W; Moritz, Robert L

    2015-02-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of cloud enabled Trans-Proteomic Pipeline by performing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  9. Distributed Kalman filtering compared to Fourier domain preconditioned conjugate gradient for laser guide star tomography on extremely large telescopes.

    PubMed

    Gilles, Luc; Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Ellerbroek, Brent

    2013-05-01

    This paper discusses the performance and cost of two computationally efficient Fourier-based tomographic wavefront reconstruction algorithms for wide-field laser guide star (LGS) adaptive optics (AO). The first algorithm is the iterative Fourier domain preconditioned conjugate gradient (FDPCG) algorithm developed by Yang et al. [Appl. Opt. 45, 5281 (2006)], combined with pseudo-open-loop control (POLC). FDPCG's computational cost is proportional to N log(N), where N denotes the dimensionality of the tomography problem. The second algorithm is the distributed Kalman filter (DKF) developed by Massioni et al. [J. Opt. Soc. Am. A 28, 2298 (2011)], which is a noniterative spatially invariant controller. When implemented in the Fourier domain, DKF's cost is also proportional to N log(N). Both algorithms are capable of estimating spatial frequency components of the residual phase beyond the wavefront sensor (WFS) cutoff frequency thanks to regularization, thereby reducing WFS spatial aliasing at the expense of more computations. We present performance and cost analyses for the LGS multiconjugate AO system under design for the Thirty Meter Telescope, as well as DKF's sensitivity to uncertainties in wind profile prior information. We found that, provided the wind profile is known to better than 10% wind speed accuracy and 20 deg wind direction accuracy, DKF, despite its spatial invariance assumptions, delivers a significantly reduced wavefront error compared to the static FDPCG minimum variance estimator combined with POLC. Due to its nonsequential nature and high degree of parallelism, DKF is particularly well suited for real-time implementation on inexpensive off-the-shelf graphics processing units.

  10. Uncertainty in sample estimates and the implicit loss function for soil information.

    NASA Astrophysics Data System (ADS)

    Lark, Murray

    2015-04-01

    One significant challenge in the communication of uncertain information is how to enable the sponsors of sampling exercises to make a rational choice of sample size. One way to do this is to compute the value of additional information given the loss function for errors. The loss function expresses the costs that result from decisions made using erroneous information. In certain circumstances, such as remediation of contaminated land prior to development, loss functions can be computed and used to guide rational decision making on the amount of resource to spend on sampling to collect soil information. In many circumstances the loss function cannot be obtained prior to decision making. This may be the case when multiple decisions may be based on the soil information and the costs of errors are hard to predict. The implicit loss function is proposed as a tool to aid decision making in these circumstances. Conditional on a logistical model which expresses costs of soil sampling as a function of effort, and statistical information from which the error of estimates can be modelled as a function of effort, the implicit loss function is the loss function which makes a particular decision on effort rational. In this presentation the loss function is defined and computed for a number of arbitrary decisions on sampling effort for a hypothetical soil monitoring problem. This is based on a logistical model of sampling cost parameterized from a recent geochemical survey of soil in Donegal, Ireland and on statistical parameters estimated with the aid of a process model for change in soil organic carbon. It is shown how the implicit loss function might provide a basis for reflection on a particular choice of sample size by comparing it with the values attributed to soil properties and functions. Scope for further research to develop and apply the implicit loss function to help decision making by policy makers and regulators is then discussed.

  11. Economics of movable interior blankets for greenhouses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, G.B.; Fohner, G.R.; Albright, L.D.

    1981-01-01

    A model for evaluating the economic impact of investment in a movable interior blanket was formulated. The method of analysis was net present value (NPV), in which the discounted, after-tax cash flow of costs and benefits was computed for the useful life of the system. An added feature was a random number component which permitted any or all of the input parameters to be varied within a specified range. Results from 100 computer runs indicated that all of the NPV estimates generated were positive, showing that the investment was profitable. However, there was a wide range of NPV estimates, from $16.00/m² to $86.40/m², with a median value of $49.34/m². Key variables allowed to range in the analysis were: (1) the cost of fuel before the blanket is installed; (2) the percent fuel savings resulting from use of the blanket; (3) the annual real increase in the cost of fuel; and (4) the change in the annual value of the crop. The wide range in NPV estimates indicates the difficulty in making general recommendations regarding the economic feasibility of the investment when uncertainty exists as to the correct values for key variables in commercial settings. The results also point out needed research into the effect of the blanket on the crop, and on performance characteristics of the blanket.
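
    The structure of such an NPV simulation is easy to sketch; the parameter ranges below are illustrative assumptions only (the study's actual fuel costs, savings fractions, and crop effects are in the report), and the point is simply how the random component feeds a discounted cash flow.

      import random

      def npv(cashflows, rate):
          """Discounted net present value; cashflows[0] occurs now
          (typically the negative installed cost)."""
          return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

      def simulate_blanket_npv(runs=100, life=10, discount=0.10, seed=1):
          """Monte Carlo NPV per m^2 of greenhouse for a movable interior blanket."""
          rng = random.Random(seed)
          results = []
          for _ in range(runs):
              install_cost = rng.uniform(15.0, 25.0)        # $/m^2, assumed
              base_fuel_cost = rng.uniform(8.0, 14.0)        # $/m^2 per year, assumed
              savings_frac = rng.uniform(0.35, 0.55)         # assumed
              escalation = rng.uniform(0.00, 0.04)           # real annual fuel increase, assumed
              flows = [-install_cost] + [
                  base_fuel_cost * savings_frac * (1 + escalation) ** t
                  for t in range(1, life + 1)]
              results.append(npv(flows, discount))
          results.sort()
          return results[0], results[len(results) // 2], results[-1]

      print(simulate_blanket_npv())   # (minimum, median, maximum) NPV per m^2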

  12. A Comparison of Costs of Searching the Machine-Readable Data Bases ERIC and "Psychological Abstracts" in an Annual Subscription Rate System Against Costs Estimated for the Same Searches Done in the Lockheed DIALOG System and the System Development Corporation for ERIC, and the Lockheed DIALOG System and PASAT for "Psychological Abstracts."

    ERIC Educational Resources Information Center

    Palmer, Crescentia

    A comparison of costs for computer-based searching of Psychological Abstracts and Educational Resources Information Center (ERIC) systems by the New York State Library at Albany was produced by combining data available from search request forms and from bills from the contract subscription service, the State University of New…

  13. Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.

    PubMed

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2018-02-01

    The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators and accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation so can be expensive in models with a large computational cost.

  14. Does sampling using random digit dialling really cost more than sampling from telephone directories: Debunking the myths

    PubMed Central

    Yang, Baohui; Eyeson-Annan, Margo

    2006-01-01

    Background Computer assisted telephone interviewing (CATI) is widely used for health surveys. The advantages of CATI over face-to-face interviewing are timeliness and cost reduction to achieve the same sample size and geographical coverage. Two major CATI sampling procedures are used: sampling directly from the electronic white pages (EWP) telephone directory and list-assisted random digit dialling (LA-RDD) sampling. EWP sampling covers telephone numbers of households listed in the printed white pages. LA-RDD sampling has a better coverage of households than EWP sampling but is considered to be more expensive due to interviewers dialling more out-of-scope numbers. Methods This study compared an EWP sample and an LA-RDD sample from the New South Wales Population Health Survey in 2003 on demographic profiles, health estimates, coefficients of variation in weights, design effects on estimates, and cost effectiveness, on the basis of achieving the same level of precision of estimates. Results The LA-RDD sample better represented the population than the EWP sample, with a coefficient of variation of weights of 1.03 for LA-RDD compared with 1.21 for EWP, and average design effects of 2.00 for LA-RDD compared with 2.38 for EWP. Also, an LA-RDD sample can save up to 14.2% in cost compared to an EWP sample to achieve the same precision for health estimates. Conclusion An LA-RDD sample better represents the population, which potentially leads to reduced bias in health estimates, and, rather than costing more than EWP sampling, actually costs less. PMID:16504117
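
    The design effects quoted above can be roughly reproduced from the coefficients of variation of the weights using Kish's approximation deff ~ 1 + cv^2; the small remaining difference reflects design features beyond unequal weighting, so the check below is only an approximation, not the paper's computation.

      def kish_deff(cv_weights):
          """Kish's approximate design effect due to unequal weighting."""
          return 1.0 + cv_weights ** 2

      for name, cv in [("LA-RDD", 1.03), ("EWP", 1.21)]:
          print(name, round(kish_deff(cv), 2))   # about 2.06 and 2.46 vs the reported 2.00 and 2.38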

  15. Analysis of longitudinal data from the Puget Sound transportation panel : task E : modal split analysis

    DOT National Transportation Integrated Search

    1996-11-01

    The Highway Economic Requirements System (HERS) is a computer model designed to simulate improvement selection decisions based on the relative benefit-cost merits of alternative improvement options. HERS is intended to estimate national level investm...

  16. Estimating Alcohol Content of Traditional Brew in Western Kenya Using Culturally Relevant Methods: The Case for Cost Over Volume

    PubMed Central

    Sidle, John E.; Wamalwa, Emmanuel S.; Okumu, Thomas O.; Bryant, Kendall L.; Goulet, Joseph L.; Maisto, Stephen A.; Braithwaite, R. Scott; Justice, Amy C.

    2010-01-01

    Traditional homemade brew is believed to represent the highest proportion of alcohol use in sub-Saharan Africa. In Eldoret, Kenya, two types of brew are common: chang’aa, a distilled spirit, and busaa, a maize beer. Local residents refer to the amount of brew consumed by the amount of money spent, suggesting a culturally relevant estimation method. The purposes of this study were to analyze the ethanol content of chang’aa and busaa and to compare two methods of alcohol estimation: use by cost and use by volume, the latter being the current international standard. Laboratory results showed mean ethanol content was 34% (SD = 14%) for chang’aa and 4% (SD = 1%) for busaa. Standard drink unit equivalents for chang’aa and busaa, respectively, were 2 and 1.3 (US) and 3.5 and 2.3 (Great Britain). Using a computational approach, both methods demonstrated comparable results. We conclude that cost estimation of alcohol content is more culturally relevant and does not differ in accuracy from the international standard. PMID:19015972
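
    The arithmetic behind converting either volume or money spent into standard drink units is straightforward; in the sketch below the ABVs are those reported above, while the serving size and price are purely hypothetical placeholders for whatever local values a survey would use.

      ETHANOL_DENSITY_G_PER_ML = 0.789

      def standard_drinks(volume_ml, abv, grams_per_unit=14.0):
          """Number of US standard drinks (14 g ethanol) in a serving."""
          return volume_ml * abv * ETHANOL_DENSITY_G_PER_ML / grams_per_unit

      def drinks_from_spend(amount_spent, price_per_serving, serving_ml, abv):
          """Cost-based estimate: money spent -> servings -> standard drinks."""
          return (amount_spent / price_per_serving) * standard_drinks(serving_ml, abv)

      # Hypothetical 100 mL serving of chang'aa at the reported mean ABV of 34%
      print(standard_drinks(100, 0.34))              # roughly 1.9 US standard drinks
      # Hypothetical spend of 60 currency units at 20 per serving
      print(drinks_from_spend(60, 20, 100, 0.34))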

  17. Low-complexity DOA estimation from short data snapshots for ULA systems using the annihilating filter technique

    NASA Astrophysics Data System (ADS)

    Bellili, Faouzi; Amor, Souheib Ben; Affes, Sofiène; Ghrayeb, Ali

    2017-12-01

    This paper addresses the problem of DOA estimation using uniform linear array (ULA) antenna configurations. We propose a new low-cost method of multiple DOA estimation from very short data snapshots. The new estimator is based on the annihilating filter (AF) technique. It is non-data-aided (NDA) and therefore does not impinge on the overall throughput of the system. The noise components are assumed temporally and spatially white across the receiving antenna elements. The transmitted signals are also temporally and spatially white across the transmitting sources. The new method is compared in performance to the Cramér-Rao lower bound (CRLB), the root-MUSIC algorithm, the deterministic maximum likelihood estimator, and another Bayesian method developed specifically for the single-snapshot case. Simulations show that the new estimator performs well over a wide SNR range. Most notably, the main advantage of the new AF-based method is that it accurately estimates the DOAs from short data snapshots, and even from a single snapshot, far outperforming state-of-the-art techniques in both DOA estimation accuracy and computational cost.
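
    A minimal single-snapshot annihilating-filter DOA sketch is given below; it is a generic Prony-style illustration under an assumed noise-free steering model with half-wavelength spacing, not the estimator proposed in the paper.

      import numpy as np

      def af_doa(x, K, spacing=0.5):
          """Annihilating-filter DOA estimation from one ULA snapshot x
          (length M) containing K narrowband sources."""
          M = len(x)
          # Hankel data matrix; its (approximate) null space gives the filter coefficients
          H = np.array([x[i:i + K + 1] for i in range(M - K)])
          _, _, Vh = np.linalg.svd(H)
          h = np.conj(Vh[-1])                   # null-space vector (smallest singular value)
          roots = np.roots(h[::-1])             # zeros at exp(j*2*pi*spacing*sin(theta_k))
          return np.degrees(np.arcsin(np.angle(roots) / (2 * np.pi * spacing)))

      # Two hypothetical sources at -10 and 25 degrees, 12 elements, no noise
      m = np.arange(12)
      x = sum(np.exp(1j * 2 * np.pi * 0.5 * m * np.sin(np.radians(a))) for a in (-10.0, 25.0))
      print(np.sort(af_doa(x, K=2)))            # approximately [-10, 25]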

  18. Estimating HIV-1 Fitness Characteristics from Cross-Sectional Genotype Data

    PubMed Central

    Gopalakrishnan, Sathej; Montazeri, Hesam; Menz, Stephan; Beerenwinkel, Niko; Huisinga, Wilhelm

    2014-01-01

    Despite the success of highly active antiretroviral therapy (HAART) in the management of human immunodeficiency virus (HIV)-1 infection, virological failure due to drug resistance development remains a major challenge. Resistant mutants display reduced drug susceptibilities, but in the absence of drug, they generally have a lower fitness than the wild type, owing to a mutation-incurred cost. The interaction between these fitness costs and drug resistance dictates the appearance of mutants and influences viral suppression and therapeutic success. Assessing in vivo viral fitness is a challenging task and yet one that has significant clinical relevance. Here, we present a new computational modelling approach for estimating viral fitness that relies on common sparse cross-sectional clinical data by combining statistical approaches to learn drug-specific mutational pathways and resistance factors with viral dynamics models to represent the host-virus interaction and actions of drug mechanistically. We estimate in vivo fitness characteristics of mutant genotypes for two antiretroviral drugs, the reverse transcriptase inhibitor zidovudine (ZDV) and the protease inhibitor indinavir (IDV). Well-known features of HIV-1 fitness landscapes are recovered, both in the absence and presence of drugs. We quantify the complex interplay between fitness costs and resistance by computing selective advantages for different mutants. Our approach extends naturally to multiple drugs and we illustrate this by simulating a dual therapy with ZDV and IDV to assess therapy failure. The combined statistical and dynamical modelling approach may help in dissecting the effects of fitness costs and resistance with the ultimate aim of assisting the choice of salvage therapies after treatment failure. PMID:25375675

  19. Modeling of power transmission and stress grading for corona protection

    NASA Astrophysics Data System (ADS)

    Zohdi, T. I.; Abali, B. E.

    2017-11-01

    Electrical high voltage (HV) machines are prone to corona discharges, leading to power losses as well as damage to the insulating layer. Many different techniques are applied for corona protection, and computational methods aid in selecting the best design. In this paper we develop a reduced-order 1D model estimating the electric field and temperature distribution of a conductor wrapped with different layers, as is usual for HV machines. Many assumptions and simplifications are made for this 1D model; therefore, we quantitatively compare its results to a direct numerical simulation in 3D. Both models are transient and nonlinear, making it possible either to obtain quick estimates in 1D or to compute the full solution in 3D at a higher computational cost. Such tools enable understanding, evaluation, and optimization of corona shielding systems for multilayered coils.

  20. Efficient genetic algorithms using discretization scheduling.

    PubMed

    McLay, Laura A; Goldberg, David E

    2005-01-01

    In many applications of genetic algorithms, there is a tradeoff between speed and accuracy in fitness evaluations when evaluations use numerical methods with varying discretization. In these types of applications, the cost and accuracy vary from discretization errors when implicit or explicit quadrature is used to estimate the function evaluations. This paper examines discretization scheduling, or how to vary the discretization within the genetic algorithm in order to use the least amount of computation time for a solution of a desired quality. The effectiveness of discretization scheduling can be determined by comparing its computation time to the computation time of a GA using a constant discretization. There are three ingredients for the discretization scheduling: population sizing, estimated time for each function evaluation and predicted convergence time analysis. Idealized one- and two-dimensional experiments and an inverse groundwater application illustrate the computational savings to be achieved from using discretization scheduling.

  1. Economic Burden of Heart Failure: Investigating Outpatient and Inpatient Costs in Abeokuta, Southwest Nigeria

    PubMed Central

    Ogah, Okechukwu S.; Stewart, Simon; Onwujekwe, Obinna E.; Falase, Ayodele O.; Adebayo, Saheed O.; Olunuga, Taiwo; Sliwa, Karen

    2014-01-01

    Background: Heart failure (HF) is a deadly, disabling and often costly syndrome world-wide. Unfortunately, there is a paucity of data describing its economic impact in sub-Saharan Africa, a region in which the number of relatively younger cases will inevitably rise. Methods: Health economic data were extracted from a prospective HF registry in a tertiary hospital situated in Abeokuta, southwest Nigeria. Outpatient and inpatient costs were computed from a representative cohort of 239 HF cases, including personnel, diagnostic and treatment resources used for their management over a 12-month period. Indirect costs were also calculated. The annual cost per person was then calculated. Results: Mean age of the cohort was 58.0±15.1 years and 53.1% were men. The total computed cost of care of HF in Abeokuta was 76,288,845 Nigerian Naira (US$508,595), translating to 319,200 Naira (US$2,128) per patient per year. The total cost of in-patient care (46% of total health care expenditure) was estimated as 34,996,477 Naira (about US$301,230). This comprised 17,899,977 Naira (50.9%; US$114,600) in direct costs and 17,806,500 Naira (49.1%; US$118,710) in indirect costs. Out-patient cost was estimated as 41,292,368 Naira (US$275,282). The relatively high cost of outpatient care was largely due to the cost of transportation for monthly follow-up visits. Payments were mostly made through out-of-pocket spending. Conclusion: The economic burden of HF in Nigeria is particularly high considering the relatively young age of affected cases, a minimum wage of 18,000 Naira (US$120) per month, and the considerable component of out-of-pocket spending by those affected. Health reforms designed to mitigate the individual-to-societal burden imposed by the syndrome are required. PMID:25415310
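
    The per-patient and US-dollar figures follow directly from the totals reported above (the exchange rate of about 150 Naira per US$ is implied by the abstract's own numbers); a quick check:

      # Values taken from the abstract
      total_cost_naira = 76_288_845
      patients = 239
      per_patient_naira = total_cost_naira / patients
      implied_rate = 319_200 / 2_128                    # Naira per US$ implied by the reported figures
      print(round(per_patient_naira))                   # about 319,200 Naira per patient per year
      print(round(per_patient_naira / implied_rate))    # about US$2,128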

  2. Economic burden of heart failure: investigating outpatient and inpatient costs in Abeokuta, Southwest Nigeria.

    PubMed

    Ogah, Okechukwu S; Stewart, Simon; Onwujekwe, Obinna E; Falase, Ayodele O; Adebayo, Saheed O; Olunuga, Taiwo; Sliwa, Karen

    2014-01-01

    Heart failure (HF) is a deadly, disabling and often costly syndrome world-wide. Unfortunately, there is a paucity of data describing its economic impact in sub-Saharan Africa, a region in which the number of relatively younger cases will inevitably rise. Health economic data were extracted from a prospective HF registry in a tertiary hospital situated in Abeokuta, southwest Nigeria. Outpatient and inpatient costs were computed from a representative cohort of 239 HF cases, including personnel, diagnostic and treatment resources used for their management over a 12-month period. Indirect costs were also calculated. The annual cost per person was then calculated. Mean age of the cohort was 58.0 ± 15.1 years and 53.1% were men. The total computed cost of care of HF in Abeokuta was 76,288,845 Nigerian Naira (US$508,595), translating to 319,200 Naira (US$2,128) per patient per year. The total cost of in-patient care (46% of total health care expenditure) was estimated as 34,996,477 Naira (about US$301,230). This comprised 17,899,977 Naira (50.9%; US$114,600) in direct costs and 17,806,500 Naira (49.1%; US$118,710) in indirect costs. Out-patient cost was estimated as 41,292,368 Naira (US$275,282). The relatively high cost of outpatient care was largely due to the cost of transportation for monthly follow-up visits. Payments were mostly made through out-of-pocket spending. The economic burden of HF in Nigeria is particularly high considering the relatively young age of affected cases, a minimum wage of 18,000 Naira (US$120) per month, and the considerable component of out-of-pocket spending by those affected. Health reforms designed to mitigate the individual-to-societal burden imposed by the syndrome are required.

  3. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.

    Cloud computing is a promising technology for managing and improving the utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving, there is no need to operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is the significant server setup costs and activation times. For better energy efficiency, a cloud computing system should not react to an instantaneous increase or decrease of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and noninstantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows a number of performance measures to be estimated.

  4. Can Broader Diffusion of Value-Based Insurance Design Increase Benefits from US Health Care without Increasing Costs? Evidence from a Computer Simulation Model

    PubMed Central

    Scott Braithwaite, R.; Omokaro, Cynthia; Justice, Amy C.; Nucifora, Kimberly; Roberts, Mark S.

    2010-01-01

    Background Evidence suggests that cost sharing (i.e.,copayments and deductibles) decreases health expenditures but also reduces essential care. Value-based insurance design (VBID) has been proposed to encourage essential care while controlling health expenditures. Our objective was to estimate the impact of broader diffusion of VBID on US health care benefits and costs. Methods and Findings We used a published computer simulation of costs and life expectancy gains from US health care to estimate the impact of broader diffusion of VBID. Two scenarios were analyzed: (1) applying VBID solely to pharmacy benefits and (2) applying VBID to both pharmacy benefits and other health care services (e.g., devices). We assumed that cost sharing would be eliminated for high-value services (<$100,000 per life-year), would remain unchanged for intermediate- or unknown-value services ($100,000–$300,000 per life-year or unknown), and would be increased for low-value services (>$300,000 per life-year). All costs are provided in 2003 US dollars. Our simulation estimated that approximately 60% of health expenditures in the US are spent on low-value services, 20% are spent on intermediate-value services, and 20% are spent on high-value services. Correspondingly, the vast majority (80%) of health expenditures would have cost sharing that is impacted by VBID. With prevailing patterns of cost sharing, health care conferred 4.70 life-years at a per-capita annual expenditure of US$5,688. Broader diffusion of VBID to pharmaceuticals increased the benefit conferred by health care by 0.03 to 0.05 additional life-years, without increasing costs and without increasing out-of-pocket payments. Broader diffusion of VBID to other health care services could increase the benefit conferred by health care by 0.24 to 0.44 additional life-years, also without increasing costs and without increasing overall out-of-pocket payments. Among those without health insurance, using cost saving from VBID to subsidize insurance coverage would increase the benefit conferred by health care by 1.21 life-years, a 31% increase. Conclusion Broader diffusion of VBID may amplify benefits from US health care without increasing health expenditures. Please see later in the article for the Editors' Summary PMID:20169114

  5. Cost analysis of Human Papillomavirus-related cervical diseases and genital warts in Swaziland

    PubMed Central

    Sartorius, Benn; Dlamini, Xolisile; Östensson, Ellinor

    2017-01-01

    Background Human papillomavirus (HPV) has proven to be the cause of several severe clinical conditions on the cervix, vulva, vagina, anus, oropharynx and penis. Several studies have assessed the costs of cervical lesions, cervical cancer (CC), and genital warts. However, few have been done in Africa and none in Swaziland. Cost analysis is critical in providing useful information for economic evaluations to guide policymakers concerned with the allocation of resources in order to reduce the disease burden. Materials and methods A prevalence-based cost of illness (COI) methodology was used to investigate the economic burden of HPV-related diseases. We used a top-down approach for the cost associated with hospital care and a bottom-up approach to estimate the cost associated with outpatient and primary care. The current study was conducted from a provider perspective since the state bears the majority of the costs of screening and treatment in Swaziland. All identifiable direct medical costs were considered for cervical lesions, cervical cancer and genital warts, which were primary diagnoses during 2015. A mix of a bottom-up micro-costing (ingredients) approach and top-down approaches was used to collect cost data. All costs were computed at the price level of 2015 and converted to dollars ($). Results The total annual estimated direct medical cost associated with screening, managing and treating cervical lesions, CC and genital warts in Swaziland was $16 million. The largest cost in the analysis was estimated for treatment of high-grade cervical lesions and cervical cancer, representing 80% of the total cost ($12.6 million). Costs for screening only represented 5% of the total cost ($0.9 million). Treatment of genital warts represented 6% of the total cost ($1 million). Conclusion According to the cost estimations in this study, the economic burden of HPV-related cervical diseases and genital warts represents a major public health issue in Swaziland. Prevention of HPV infection with a national HPV immunization programme for pre-adolescent girls would prevent the majority of CC-related deaths and associated costs. PMID:28531205

  6. Documentation of the analysis of the benefits and costs of aeronautical research and technology models, volume 1

    NASA Technical Reports Server (NTRS)

    Bobick, J. C.; Braun, R. L.; Denny, R. E.

    1979-01-01

    The analysis of the benefits and costs of aeronautical research and technology (ABC-ART) models are documented. These models were developed by NASA for use in analyzing the economic feasibility of applying advanced aeronautical technology to future civil aircraft. The methodology is composed of three major modules: a fleet accounting module, an airframe manufacturing module, and an air carrier module. The fleet accounting module is used to estimate the number of new aircraft required as a function of time to meet demand. This estimation is based primarily upon the expected retirement age of existing aircraft and the expected change in revenue passenger miles demanded. Fuel consumption estimates are also generated by this module. The airframe manufacturing module is used to analyze the feasibility of manufacturing the new aircraft demanded. The module includes logic for production scheduling and estimating manufacturing costs. For a series of aircraft selling prices, a cash flow analysis is performed and a rate of return on investment is calculated. The air carrier module provides a tool for analyzing the financial feasibility of an airline purchasing and operating the new aircraft. This module includes a methodology for computing the air carrier direct and indirect operating costs, performing a cash flow analysis, and estimating the internal rate of return on investment for a set of aircraft purchase prices.
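
    The rate-of-return step in the manufacturing and air carrier modules amounts to finding the discount rate at which the net present value of a cash flow stream crosses zero; the sketch below shows that computation with a purely hypothetical cash flow (development outlays followed by sales margins), not output from the ABC-ART models.

      def npv(cashflows, rate):
          """Net present value of yearly cash flows; cashflows[0] is at time zero."""
          return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

      def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-6):
          """Internal rate of return by bisection on the NPV sign change."""
          assert npv(cashflows, lo) * npv(cashflows, hi) < 0, "no sign change in bracket"
          while hi - lo > tol:
              mid = (lo + hi) / 2
              if npv(cashflows, lo) * npv(cashflows, mid) <= 0:
                  hi = mid
              else:
                  lo = mid
          return (lo + hi) / 2

      # Hypothetical cash flow in arbitrary currency units
      print(round(irr([-400, -250, 90, 160, 210, 240, 240, 220]), 3))   # roughly 0.14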

  7. Cost-effectiveness analysis of online hemodiafiltration versus high-flux hemodialysis.

    PubMed

    Ramponi, Francesco; Ronco, Claudio; Mason, Giacomo; Rettore, Enrico; Marcelli, Daniele; Martino, Francesca; Neri, Mauro; Martin-Malo, Alejandro; Canaud, Bernard; Locatelli, Francesco

    2016-01-01

    Clinical studies suggest that hemodiafiltration (HDF) may lead to better clinical outcomes than high-flux hemodialysis (HF-HD), but concerns have been raised about the cost-effectiveness of HDF versus HF-HD. Aim of this study was to investigate whether clinical benefits, in terms of longer survival and better health-related quality of life, are worth the possibly higher costs of HDF compared to HF-HD. The analysis comprised a simulation based on the combined results of previous published studies, with the following steps: 1) estimation of the survival function of HF-HD patients from a clinical trial and of HDF patients using the risk reduction estimated in a meta-analysis; 2) simulation of the survival of the same sample of patients as if allocated to HF-HD or HDF using three-state Markov models; and 3) application of state-specific health-related quality of life coefficients and differential costs derived from the literature. Several Monte Carlo simulations were performed, including simulations for patients with different risk profiles, for example, by age (patients aged 40, 50, and 60 years), sex, and diabetic status. Scatter plots of simulations in the cost-effectiveness plane were produced, incremental cost-effectiveness ratios were estimated, and cost-effectiveness acceptability curves were computed. An incremental cost-effectiveness ratio of €6,982/quality-adjusted life years (QALY) was estimated for the baseline cohort of 50-year-old male patients. Given the commonly accepted threshold of €40,000/QALY, HDF is cost-effective. The probabilistic sensitivity analysis showed that HDF is cost-effective with a probability of ~81% at a threshold of €40,000/QALY. It is fundamental to measure the outcome also in terms of quality of life. HDF is more cost-effective for younger patients. HDF can be considered cost-effective compared to HF-HD.
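
    The two quantities reported above, the incremental cost-effectiveness ratio and the cost-effectiveness acceptability curve, reduce to simple computations over the simulated cost and QALY differences; the sketch below uses hypothetical probabilistic-sensitivity draws, not the study's Markov model output.

      import random

      def icer(delta_cost, delta_qaly):
          """Incremental cost-effectiveness ratio (cost per QALY gained)."""
          return delta_cost / delta_qaly

      def prob_cost_effective(sim_pairs, threshold):
          """One point of a cost-effectiveness acceptability curve: the share of
          simulations with positive incremental net monetary benefit at the
          given willingness-to-pay threshold."""
          return sum(1 for dc, dq in sim_pairs if threshold * dq - dc > 0) / len(sim_pairs)

      # Hypothetical draws of (incremental cost in EUR, incremental QALYs)
      rng = random.Random(1)
      sims = [(rng.gauss(1400, 600), rng.gauss(0.20, 0.08)) for _ in range(5000)]
      print(round(icer(1400, 0.20)))             # 7000 EUR per QALY for these assumed means
      print(prob_cost_effective(sims, 40000))    # probability cost-effective at 40,000 EUR/QALY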

  8. Robust guaranteed-cost adaptive quantum phase estimation

    NASA Astrophysics Data System (ADS)

    Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.

    2017-05-01

    Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.

  9. Non-parametric methods for cost-effectiveness analysis: the central limit theorem and the bootstrap compared.

    PubMed

    Nixon, Richard M; Wonderling, David; Grieve, Richard D

    2010-03-01

    Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
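
    The two approaches compared in the paper can be sketched in a few lines: the CLT standard error comes from the usual variance formula for a difference of means of per-patient net benefits, while the bootstrap resamples patients within each arm; all data, the willingness-to-pay value, and the replication count below are hypothetical.

      import random
      import statistics as st

      def net_benefits(costs, effects, wtp):
          """Per-patient net monetary benefit at willingness-to-pay wtp."""
          return [wtp * e - c for c, e in zip(costs, effects)]

      def clt_se(nb_t, nb_c):
          """CLT standard error of the incremental net benefit."""
          return (st.variance(nb_t) / len(nb_t) + st.variance(nb_c) / len(nb_c)) ** 0.5

      def bootstrap_se(nb_t, nb_c, reps=2000, seed=0):
          """Non-parametric bootstrap standard error of the incremental net benefit."""
          rng = random.Random(seed)
          diffs = [st.mean(rng.choices(nb_t, k=len(nb_t))) -
                   st.mean(rng.choices(nb_c, k=len(nb_c))) for _ in range(reps)]
          return st.stdev(diffs)

      # Hypothetical skewed costs and effects for two trial arms
      rng = random.Random(42)
      nb_t = net_benefits([rng.lognormvariate(8.0, 1.0) for _ in range(100)],
                          [rng.gauss(0.70, 0.20) for _ in range(100)], wtp=20000)
      nb_c = net_benefits([rng.lognormvariate(8.2, 1.0) for _ in range(100)],
                          [rng.gauss(0.65, 0.20) for _ in range(100)], wtp=20000)
      print(clt_se(nb_t, nb_c), bootstrap_se(nb_t, nb_c))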

  10. Estimated Cost to a Restaurant of a Foodborne Illness Outbreak.

    PubMed

    Bartsch, Sarah M; Asti, Lindsey; Nyathi, Sindiso; Spiker, Marie L; Lee, Bruce Y

    Although outbreaks of restaurant-associated foodborne illness occur periodically and make the news, a restaurant may not be aware of the cost of an outbreak. We estimated this cost under varying circumstances. We developed a computational simulation model; scenarios varied outbreak size (5 to 250 people affected), pathogen (n = 15), type of dining establishment (fast food, fast casual, casual dining, and fine dining), lost revenue (i.e., meals lost per illness), cost of lawsuits and legal fees, fines, and insurance premium increases. We estimated that the cost of a single foodborne illness outbreak ranged from $3968 to $1.9 million for a fast-food restaurant, $6330 to $2.1 million for a fast-casual restaurant, $8030 to $2.2 million for a casual-dining restaurant, and $8273 to $2.6 million for a fine-dining restaurant, varying from a 5-person outbreak, with no lost revenue, lawsuits, legal fees, or fines, to a 250-person outbreak, with high lost revenue (100 meals lost per illness), and a high amount of lawsuits and legal fees ($1,656,569) and fines ($100,000). This cost amounts to 10% to 5790% of a restaurant's annual marketing costs and 0.3% to 101% of annual profits and revenue. The biggest cost drivers were lawsuits and legal fees, outbreak size, and lost revenue. Pathogen type affected the cost by a maximum of $337,000, the difference between a Bacillus cereus outbreak (least costly) and a Listeria outbreak (most costly). The cost of a single foodborne illness outbreak to a restaurant can be substantial and outweigh the typical costs of prevention and control measures. Our study can help decision makers determine investment and motivate research for infection-control measures in restaurant settings.

  11. Upon Accounting for the Impact of Isoenzyme Loss, Gene Deletion Costs Anticorrelate with Their Evolutionary Rates.

    PubMed

    Jacobs, Christopher; Lambourne, Luke; Xia, Yu; Segrè, Daniel

    2017-01-01

    System-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism's genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene's fitness contribution to an organism "here and now" and the same gene's historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call "function-loss cost", which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.

  12. A stopping criterion for the iterative solution of partial differential equations

    NASA Astrophysics Data System (ADS)

    Rao, Kaustubh; Malan, Paul; Perot, J. Blair

    2018-01-01

    A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating it reverts to a less accurate but more robust, low-cost singular value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
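
    The sketch below illustrates one common way to turn prior solution changes into an error estimate, assuming roughly geometric convergence; it is in the spirit of the criterion described above but is not the authors' exact estimator, and the Jacobi iteration and tolerance are illustrative.

    ```python
    import numpy as np

    def error_estimate(deltas):
        """
        Estimate the remaining solution error from the norms of recent solution
        changes, assuming roughly geometric (linearly convergent) behaviour:
        ||x_k - x*|| ~ d_k * r / (1 - r), with r estimated from consecutive changes.
        """
        d_prev, d_curr = deltas[-2], deltas[-1]
        if d_prev <= 0.0 or d_curr >= d_prev:      # stagnating or noisy changes
            return None                            # caller should fall back to a residual-based bound
        r = d_curr / d_prev
        return d_curr * r / (1.0 - r)

    # Usage inside an iterative solve (Jacobi-style toy example).
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x = np.zeros(2)
    D_inv = 1.0 / np.diag(A)
    deltas, tol = [], 1e-8
    for k in range(200):
        x_new = x + D_inv * (b - A @ x)            # one Jacobi sweep
        deltas.append(np.linalg.norm(x_new - x))
        x = x_new
        if len(deltas) >= 2:
            est = error_estimate(deltas)
            if est is not None and est < tol:      # stop on estimated *error*, not residual
                break
    print(k, np.linalg.norm(x - np.linalg.solve(A, b)))
    ```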

  13. Optimal design criteria - prediction vs. parameter estimation

    NASA Astrophysics Data System (ADS)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is natural to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance over high-dimensional regions can be so time-consuming that, in practice, the G-optimal design cannot be found with currently available computing equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield fundamentally different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on this Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
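
    A minimal sketch of the D-criterion for a simple linear trend model, selecting the design that maximizes det(X'X) by brute force over a small candidate grid; this is illustrative only and is not the kriging-based, Pareto-frontier procedure described in the abstract.

    ```python
    import numpy as np
    from itertools import combinations

    def d_criterion(X):
        """D-criterion for a linear model y = X b: determinant of the information matrix."""
        return np.linalg.det(X.T @ X)

    # Candidate points on a grid in [-1, 1]^2; design matrix with intercept + linear terms.
    grid = np.array([(x1, x2) for x1 in np.linspace(-1, 1, 5)
                              for x2 in np.linspace(-1, 1, 5)])

    def model_matrix(points):
        return np.column_stack([np.ones(len(points)), points])

    # Exhaustive search for the best 4-point design (small example; realistic problems
    # use exchange algorithms instead of enumeration).
    best_det, best_design = -np.inf, None
    for idx in combinations(range(len(grid)), 4):
        det = d_criterion(model_matrix(grid[list(idx)]))
        if det > best_det:
            best_det, best_design = det, grid[list(idx)]

    print("D-optimal 4-point design:\n", best_design, "\ndet(X'X) =", best_det)
    ```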

  14. A comparison of methods to handle skew distributed cost variables in the analysis of the resource consumption in schizophrenia treatment.

    PubMed

    Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C

    2002-03-01

    Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least squares (OLS) regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. This study compares the advantages and disadvantages of different methods to estimate regression-based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by the WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model, and a generalized linear model (GLM) with a log link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator by White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by comparison of the R2 and the root mean squared error (RMSE). The RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSE were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and of met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed significant negative influences of employment status and partnership on costs. All three models provided an R2 of about 0.31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model were normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE if the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. In the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with a heteroscedastic bias correction. The GLM again showed the weakest model fit. None of the differences between the RMSE resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSE were not significant. Due to the small number of cases in the study, the lack of significance does not sufficiently prove that the differences between the RMSE for the different models are zero, and the superiority of the linear OLS model cannot be generalized. The lack of significant differences among the alternative estimators may reflect a lack of sample size adequate to detect important differences among the estimators employed. Further studies with larger case numbers are necessary to confirm the results. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimation of standard errors and confidence intervals by non-parametric methods that are robust against deviations from normality and homoscedasticity of the residuals is a suitable alternative to transformation of the skewed dependent cost variable. Further studies with more adequate case numbers are needed to confirm the results.
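
    The retransformation issue mentioned above can be illustrated with a short numpy sketch comparing linear OLS on raw costs with log-OLS plus Duan's smearing correction; the simulated data and the homoscedastic smearing factor are illustrative and are not the study's models or bias-correction methods.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated skewed cost data: log-normal costs driven by one covariate.
    n = 254
    x = rng.normal(size=n)
    cost = np.exp(7.0 + 0.5 * x + rng.normal(scale=1.0, size=n))

    X = np.column_stack([np.ones(n), x])

    # Linear OLS on the raw costs.
    beta_lin, *_ = np.linalg.lstsq(X, cost, rcond=None)

    # OLS on log-transformed costs.
    beta_log, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
    resid_log = np.log(cost) - X @ beta_log

    # Naive retransformation exp(Xb) is biased downward; Duan's smearing factor
    # (mean of exponentiated residuals) is a common homoscedastic correction.
    smearing = np.mean(np.exp(resid_log))
    pred_naive = np.exp(X @ beta_log)
    pred_smeared = pred_naive * smearing

    for name, pred in [("linear OLS", X @ beta_lin),
                       ("log-OLS naive", pred_naive),
                       ("log-OLS + smearing", pred_smeared)]:
        rmse = np.sqrt(np.mean((cost - pred) ** 2))
        print(f"{name:20s} RMSE = {rmse:,.0f}")
    ```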

  15. The economics of improving medication adherence in osteoporosis: validation and application of a simulation model.

    PubMed

    Patrick, Amanda R; Schousboe, John T; Losina, Elena; Solomon, Daniel H

    2011-09-01

    Adherence to osteoporosis treatment is low. Although new therapies and behavioral interventions may improve medication adherence, questions are likely to arise regarding their cost-effectiveness. Our objectives were to develop and validate a model to simulate the clinical outcomes and costs arising from various osteoporosis medication adherence patterns among women initiating bisphosphonate treatment and to estimate the cost-effectiveness of a hypothetical intervention to improve medication adherence. We constructed a computer simulation using estimates of fracture rates, bisphosphonate treatment effects, costs, and utilities for health states drawn from the published literature. Probabilities of transitioning on and off treatment were estimated from administrative claims data. Patients were women from the general community initiating bisphosphonate therapy. We evaluated a hypothetical behavioral intervention to improve medication adherence. Changes in 10-yr fracture rates and incremental cost-effectiveness ratios were evaluated. A hypothetical intervention with a one-time cost of $250 that reduced bisphosphonate discontinuation by 30% had an incremental cost-effectiveness ratio (ICER) of $29,571 per quality-adjusted life year in 65-yr-old women initiating bisphosphonates. Although the ICER depended on patient age, intervention effectiveness, and intervention cost, the ICERs were less than $50,000 per quality-adjusted life year for the majority of intervention cost and effectiveness scenarios evaluated. Results were sensitive to bisphosphonate cost and effectiveness and to assumptions about the rate at which intervention and treatment effects decline over time. Our results suggest that behavioral interventions to improve osteoporosis medication adherence will likely have favorable ICERs if their efficacy can be sustained.
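
    For reference, the ICER reported above is simply the ratio of incremental cost to incremental QALYs; the sketch below shows the arithmetic with hypothetical numbers chosen only to land near the reported figure, not the study's simulated outputs.

    ```python
    def icer(cost_intervention, cost_comparator, qaly_intervention, qaly_comparator):
        """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
        delta_cost = cost_intervention - cost_comparator
        delta_qaly = qaly_intervention - qaly_comparator
        if delta_qaly <= 0:
            raise ValueError("intervention gains no QALYs; ICER undefined or dominated")
        return delta_cost / delta_qaly

    # Hypothetical illustration: an adherence programme adds $850 in lifetime costs
    # (one-time fee plus extra drug costs) and gains 0.029 QALYs per woman.
    print(f"ICER = ${icer(10_850.0, 10_000.0, 6.529, 6.500):,.0f} per QALY")
    ```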

  16. U.S. Coast Guard Cutter Procurement Lessons Impacts on the Offshore Patrol Cutter Program Affordability

    DTIC Science & Technology

    2015-12-01

    Budget Control Act BLS Bureau of Labor and Statistics C4ISR Command, Control, Communications, Computers, Intelligence, Surveillance, and...made prior to full-rate production. If the program is delinquent in the testing of all of the functionality and the ability to meet stated KPPs, the...estimated the price per pound of the ship by incorporating the Bureau of Labor Statistics calculations on shipbuilding labor costs, average material cost

  17. Identification of an educational production function for diverse technologies

    NASA Technical Reports Server (NTRS)

    Mcclung, R. L.

    1977-01-01

    Production function analysis is used to estimate the cost-effectiveness of three alternative technologies in higher education: traditional instruction, instructional television, and computer-assisted instruction. Criteria for the selection of a functional form are outlined, and a general discussion of variable selection and measurement is presented.

  18. Parametric sensitivity study for solar-assisted heat-pump systems

    NASA Astrophysics Data System (ADS)

    White, N. M.; Morehouse, J. H.

    1981-07-01

    The engineering and economic parameters affecting life-cycle costs for solar-assisted heat pump systems are investigated. The change in energy usage resulting from each engineering parameter varied was developed from computer simulations and is compared with results from a stand-alone heat pump system. Three geographical locations are considered: Washington, DC, Fort Worth, TX, and Madison, WI. Results indicate that most engineering changes to the systems studied do not provide significant energy savings. The most promising parameters to vary are the solar collector parameters tau and U_L, the heat pump capacity at the design point, and the minimum utilizable evaporator temperature. Costs associated with each change are estimated, and life-cycle costs are computed for both the engineering parameters and economic variations in interest rate, discount rate, tax credits, fuel unit costs, and fuel inflation rates. Results indicate that none of the feasible engineering changes for the system configuration studied will make these systems economically competitive with the stand-alone heat pump without a considerable tax credit.

  19. Real-time optical flow estimation on a GPU for a skid-steered mobile robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2016-04-01

    Accurate egomotion estimation is required for mobile robot navigation. Often the egomotion is estimated using optical flow algorithms. For accurate estimation of optical flow, most modern algorithms require large memory resources and high processor speed. However, the simple single-board computers that control the motion of a robot usually do not provide such resources. On the other hand, most modern single-board computers are equipped with an embedded GPU that could be used in parallel with the CPU to improve the performance of the optical flow estimation algorithm. This paper presents a new Z-flow algorithm for efficient computation of optical flow using an embedded GPU. The algorithm is based on phase-correlation optical flow estimation and provides real-time performance on a low-cost embedded GPU. A layered optical flow model is used. Layer segmentation is performed using a graph-cut algorithm with a time-derivative-based energy function. This approach makes the algorithm both fast and robust in low-light and low-texture conditions. The algorithm implementation for a Raspberry Pi Model B computer is discussed. For evaluation of the algorithm, the computer was mounted on a Hercules skid-steered mobile robot equipped with a monocular camera. The evaluation was performed using hardware-in-the-loop simulation and experiments with the Hercules mobile robot. The algorithm was also evaluated on the KITTI Optical Flow 2015 dataset. The resulting endpoint error of the optical flow calculated with the developed algorithm was low enough for navigation of the robot along the desired trajectory.
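
    A minimal numpy sketch of phase-correlation displacement estimation, the building block the Z-flow algorithm is described as relying on; this single-shift version is illustrative and omits the layered model, graph-cut segmentation, and GPU implementation.

    ```python
    import numpy as np

    def phase_correlation_shift(prev, curr):
        """
        Estimate the dominant translation between two grayscale frames via phase
        correlation: the normalised cross-power spectrum has an impulse at the shift.
        """
        F1 = np.fft.fft2(prev)
        F2 = np.fft.fft2(curr)
        cross_power = np.conj(F1) * F2
        cross_power /= np.abs(cross_power) + 1e-12      # keep only phase information
        corr = np.fft.ifft2(cross_power).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap peak coordinates to signed shifts.
        return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]  # (dy, dx)

    # Synthetic test: shift an image by (3, -5) and recover the displacement.
    rng = np.random.default_rng(1)
    frame = rng.random((64, 64))
    shifted = np.roll(frame, shift=(3, -5), axis=(0, 1))
    print(phase_correlation_shift(frame, shifted))       # expect [3, -5]
    ```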

  20. The cost-effectiveness of the RSI QuickScan intervention programme for computer workers: Results of an economic evaluation alongside a randomised controlled trial.

    PubMed

    Speklé, Erwin M; Heinrich, Judith; Hoozemans, Marco J M; Blatter, Birgitte M; van der Beek, Allard J; van Dieën, Jaap H; van Tulder, Maurits W

    2010-11-11

    The costs of arm, shoulder and neck symptoms are high. In order to decrease these costs, employers implement interventions aimed at reducing these symptoms. One frequently used intervention is the RSI QuickScan intervention programme. It establishes a risk profile of the target population and subsequently advises interventions following a decision tree based on that risk profile. The purpose of this study was to perform an economic evaluation, from both the societal and the companies' perspective, of the RSI QuickScan intervention programme for computer workers. In this study, effectiveness was defined at three levels: exposure to risk factors, prevalence of arm, shoulder and neck symptoms, and days of sick leave. The economic evaluation was conducted alongside a randomised controlled trial (RCT). Participating computer workers from 7 companies (N = 638) were assigned to either the intervention group (N = 320) or the usual care group (N = 318) by means of cluster randomisation (N = 50). The intervention consisted of a tailor-made programme, based on a previously established risk profile. At baseline and at 6- and 12-month follow-up, the participants completed the RSI QuickScan questionnaire. Analyses to estimate the effect of the intervention were done according to the intention-to-treat principle. To compare costs between groups, confidence intervals for cost differences were computed by bias-corrected and accelerated bootstrapping. The mean intervention costs, paid by the employer, were 59 euro per participant in the intervention group and 28 euro in the usual care group. Mean total health care and non-health care costs per participant were 108 euro in both groups. As to cost-effectiveness, improvements in received information on healthy computer use as well as in work posture and movement were observed at higher costs. With regard to the other risk factors, symptoms and sick leave, only small and non-significant effects were found. In this study, the RSI QuickScan intervention programme did not prove to be cost-effective from either the societal or the companies' perspective and, therefore, this study does not provide a financial reason for implementing this intervention. However, with a relatively small investment, the programme did increase the number of workers who received information on healthy computer use and improved their work posture and movement. NTR1117.

  1. Dynamic remapping of parallel computations with varying resource demands

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.; Saltz, J. H.

    1986-01-01

    A large class of computational problems is characterized by frequent synchronization and computational requirements that change as a function of time. When such a problem must be solved on a message-passing multiprocessor machine, the combination of these characteristics leads to system performance that decreases over time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggest that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
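
    A simplified sketch of the W(n)-based decision rule described above: remap once the running average of accumulated degradation plus remap cost per step stops decreasing. The synthetic degradation profile and remap cost are invented for illustration and are not the analytic models of the paper.

    ```python
    import numpy as np

    def remap_policy(degradation_per_step, remap_cost):
        """
        Track W(n) = (sum of degradation over the n steps since the last remap
        + remap cost) / n and trigger a remap once W(n) passes its minimum.
        """
        cum, w_prev = 0.0, np.inf
        for n, d in enumerate(degradation_per_step, start=1):
            cum += d
            w = (cum + remap_cost) / n
            if w > w_prev:          # W(n) has started increasing -> remap now
                return n - 1, w_prev
            w_prev = w
        return len(degradation_per_step), w_prev

    # Toy workload: degradation grows roughly linearly after each remap, plus noise.
    rng = np.random.default_rng(2)
    deg = 0.1 * np.arange(200) + rng.normal(scale=0.5, size=200).clip(min=0)
    step, w_min = remap_policy(deg, remap_cost=50.0)
    print(f"remap after {step} steps, average cost per step ~ {w_min:.2f}")
    ```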

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiao, Hongzhu; Rao, N.S.V.; Protopopescu, V.

    Regression or function classes of Euclidean type with compact support and certain smoothness properties are shown to be PAC learnable by the Nadaraya-Watson estimator based on complete orthonormal systems. While requiring more smoothness properties than typical PAC formulations, this estimator is computationally efficient, easy to implement, and known to perform well in a number of practical applications. The sample sizes necessary for PAC learning of regressions or functions under sup-norm cost are derived for a general orthonormal system. The result covers the widely used estimators based on Haar wavelets, trigonometric functions, and Daubechies wavelets.
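
    A minimal sketch of an orthonormal-series (projection) regression estimator on the trigonometric system, in the spirit of the estimator discussed; the uniform design on [0, 1], the test function, and the truncation level K are illustrative assumptions, not the paper's construction or its sample-size bounds.

    ```python
    import numpy as np

    def trig_basis(x, K):
        """Orthonormal trigonometric system on [0, 1]: 1, sqrt(2)cos(2*pi*k*x), sqrt(2)sin(2*pi*k*x)."""
        cols = [np.ones_like(x)]
        for k in range(1, K + 1):
            cols.append(np.sqrt(2) * np.cos(2 * np.pi * k * x))
            cols.append(np.sqrt(2) * np.sin(2 * np.pi * k * x))
        return np.column_stack(cols)

    def series_regression(x, y, K):
        """Projection estimate from uniformly sampled data: c_j ~ (1/n) * sum_i y_i * phi_j(x_i)."""
        coeffs = trig_basis(x, K).T @ y / len(y)
        return lambda t: trig_basis(np.asarray(t, dtype=float), K) @ coeffs

    rng = np.random.default_rng(3)
    x = rng.uniform(0, 1, 2000)                       # uniform design on [0, 1]
    f = lambda t: np.sin(2 * np.pi * t) + 0.5 * np.cos(6 * np.pi * t)
    y = f(x) + rng.normal(scale=0.2, size=x.size)

    fhat = series_regression(x, y, K=5)
    grid = np.linspace(0, 1, 9)
    print(np.max(np.abs(fhat(grid) - f(grid))))       # sup-norm error on a grid
    ```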

  3. Cost profiles and budget impact of rechargeable versus non-rechargeable sacral neuromodulation devices in the treatment of overactive bladder syndrome.

    PubMed

    Noblett, Karen L; Dmochowski, Roger R; Vasavada, Sandip P; Garner, Abigail M; Liu, Shan; Pietzsch, Jan B

    2017-03-01

    Sacral neuromodulation (SNM) is a guideline-recommended third-line treatment option for managing overactive bladder. Current SNM devices are not rechargeable, and require neurostimulator replacement every 3-6 years. Our study objective was to assess potential cost effects to payers of adopting a rechargeable SNM neurostimulator device. We constructed a cost-consequence model to estimate the costs of long-term SNM treatment with a rechargeable versus non-rechargeable device. Costs were considered from the payer perspective at 2015 reimbursement levels. Adverse events, therapy discontinuation, and programming rates were based on the latest published data. Neurostimulator longevity was assumed to be 4.4 and 10.0 years for non-rechargeable and rechargeable devices, respectively. A 15-year horizon was modeled, with costs discounted at 3% per year. Total budget impact to the United States healthcare system was estimated based on the computed per-patient cost findings. Over the 15-year horizon, per-patient cost of treatment with a non-rechargeable device was $64,111 versus $36,990 with a rechargeable device, resulting in estimated payer cost savings of $27,121. These cost savings were found to be robust across a wide range of scenarios. Longer analysis horizon, younger patient age, and longer rechargeable neurostimulator lifetime were associated with increased cost savings. Over a 15-year horizon, adoption of a rechargeable device strategy was projected to save the United States healthcare system up to $12 billion. At current reimbursement rates, our analysis suggests that rechargeable neurostimulator SNM technology for managing overactive bladder syndrome may deliver significant cost savings to payers over the course of treatment. Neurourol. Urodynam. 36:727-733, 2017. © 2016 Wiley Periodicals, Inc.
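
    A small sketch of the discounted cost-consequence arithmetic underlying such comparisons: implant, periodic replacement, and follow-up costs discounted at 3% per year over a 15-year horizon. The dollar figures and the rounded device lifetimes are placeholders, not the study's reimbursement inputs.

    ```python
    def discounted_device_cost(horizon_years, device_life_years, implant_cost,
                               replacement_cost, annual_followup_cost, rate=0.03):
        """
        Present value of device-related payer costs over a fixed horizon:
        initial implant, neurostimulator replacements at the end of each device life,
        and routine annual follow-up, all discounted at `rate` per year.
        """
        total = implant_cost
        year = device_life_years
        while year < horizon_years:
            total += replacement_cost / (1 + rate) ** year
            year += device_life_years
        for y in range(1, horizon_years + 1):
            total += annual_followup_cost / (1 + rate) ** y
        return total

    # Hypothetical reimbursement figures (placeholders, not the study's inputs).
    non_rechargeable = discounted_device_cost(15, 4, 25_000, 15_000, 1_200)   # ~4.4-yr life, rounded
    rechargeable     = discounted_device_cost(15, 10, 28_000, 18_000, 1_200)
    print(f"non-rechargeable: ${non_rechargeable:,.0f}")
    print(f"rechargeable:     ${rechargeable:,.0f}  (saving ${non_rechargeable - rechargeable:,.0f})")
    ```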

  4. The financial and health burden of diabetic ambulatory care sensitive hospitalisations in Mexico.

    PubMed

    Lugo-Palacios, David G; Cairns, John

    2016-01-01

    To estimate the financial and health burden of diabetic ambulatory care sensitive hospitalisations (ACSH) in Mexico during 2001-2011. We identified ACSH due to diabetic complications in general hospitals run by local health ministries and estimated their financial cost using diagnostic related groups. The health burden estimation assumes that patients would not have experienced complications if they had received appropriate primary care and computes the associated Disability-Adjusted Life Years (DALYs). The financial cost of diabetic ACSH increased by 125% in real terms and their health burden in 2010 accounted for 4.2% of total DALYs associated with diabetes in Mexico. Avoiding preventable hospitalisations could free resources within the health system for other health purposes. In addition, patients with ACSH suffer preventable losses of health that should be considered when assessing the performance of any primary care intervention.

  5. Resources and costs for microbial sequence analysis evaluated using virtual machines and cloud computing.

    PubMed

    Angiuoli, Samuel V; White, James R; Matalka, Malcolm; White, Owen; Fricke, W Florian

    2011-01-01

    The widespread popularity of genomic applications is threatened by the "bioinformatics bottleneck" resulting from uncertainty about the cost and infrastructure needed to meet increasing demands for next-generation sequence analysis. Cloud computing services have been discussed as potential new bioinformatics support systems but have not been evaluated thoroughly. We present benchmark costs and runtimes for common microbial genomics applications, including 16S rRNA analysis, microbial whole-genome shotgun (WGS) sequence assembly and annotation, WGS metagenomics and large-scale BLAST. Sequence dataset types and sizes were selected to correspond to outputs typically generated by small- to midsize facilities equipped with 454 and Illumina platforms, except for WGS metagenomics where sampling of Illumina data was used. Automated analysis pipelines, as implemented in the CloVR virtual machine, were used in order to guarantee transparency, reproducibility and portability across different operating systems, including the commercial Amazon Elastic Compute Cloud (EC2), which was used to attach real dollar costs to each analysis type. We found considerable differences in computational requirements, runtimes and costs associated with different microbial genomics applications. While all 16S analyses completed on a single-CPU desktop in under three hours, microbial genome and metagenome analyses utilized multi-CPU support of up to 120 CPUs on Amazon EC2, where each analysis completed in under 24 hours for less than $60. Representative datasets were used to estimate maximum data throughput on different cluster sizes and to compare costs between EC2 and comparable local grid servers. Although bioinformatics requirements for microbial genomics depend on dataset characteristics and the analysis protocols applied, our results suggest that smaller sequencing facilities (up to three Roche/454 or one Illumina GAIIx sequencer) invested in 16S rRNA amplicon sequencing, microbial single-genome and metagenomics WGS projects can achieve cost-efficient bioinformatics support using CloVR in combination with Amazon EC2 as an alternative to local computing centers.

  6. Resources and Costs for Microbial Sequence Analysis Evaluated Using Virtual Machines and Cloud Computing

    PubMed Central

    Angiuoli, Samuel V.; White, James R.; Matalka, Malcolm; White, Owen; Fricke, W. Florian

    2011-01-01

    Background The widespread popularity of genomic applications is threatened by the “bioinformatics bottleneck” resulting from uncertainty about the cost and infrastructure needed to meet increasing demands for next-generation sequence analysis. Cloud computing services have been discussed as potential new bioinformatics support systems but have not been evaluated thoroughly. Results We present benchmark costs and runtimes for common microbial genomics applications, including 16S rRNA analysis, microbial whole-genome shotgun (WGS) sequence assembly and annotation, WGS metagenomics and large-scale BLAST. Sequence dataset types and sizes were selected to correspond to outputs typically generated by small- to midsize facilities equipped with 454 and Illumina platforms, except for WGS metagenomics where sampling of Illumina data was used. Automated analysis pipelines, as implemented in the CloVR virtual machine, were used in order to guarantee transparency, reproducibility and portability across different operating systems, including the commercial Amazon Elastic Compute Cloud (EC2), which was used to attach real dollar costs to each analysis type. We found considerable differences in computational requirements, runtimes and costs associated with different microbial genomics applications. While all 16S analyses completed on a single-CPU desktop in under three hours, microbial genome and metagenome analyses utilized multi-CPU support of up to 120 CPUs on Amazon EC2, where each analysis completed in under 24 hours for less than $60. Representative datasets were used to estimate maximum data throughput on different cluster sizes and to compare costs between EC2 and comparable local grid servers. Conclusions Although bioinformatics requirements for microbial genomics depend on dataset characteristics and the analysis protocols applied, our results suggest that smaller sequencing facilities (up to three Roche/454 or one Illumina GAIIx sequencer) invested in 16S rRNA amplicon sequencing, microbial single-genome and metagenomics WGS projects can achieve cost-efficient bioinformatics support using CloVR in combination with Amazon EC2 as an alternative to local computing centers. PMID:22028928

  7. Systems cost/performance analysis (study 2.3). Volume 2: Systems cost/performance model. [unmanned automated payload programs and program planning

    NASA Technical Reports Server (NTRS)

    Campbell, B. H.

    1974-01-01

    A methodology which was developed for balanced designing of spacecraft subsystems and interrelates cost, performance, safety, and schedule considerations was refined. The methodology consists of a two-step process: the first step is one of selecting all hardware designs which satisfy the given performance and safety requirements, the second step is one of estimating the cost and schedule required to design, build, and operate each spacecraft design. Using this methodology to develop a systems cost/performance model allows the user of such a model to establish specific designs and the related costs and schedule. The user is able to determine the sensitivity of design, costs, and schedules to changes in requirements. The resulting systems cost performance model is described and implemented as a digital computer program.

  8. Physician Utilization of a Hospital Information System: A Computer Simulation Model

    PubMed Central

    Anderson, James G.; Jay, Stephen J.; Clevenger, Stephen J.; Kassing, David R.; Perry, Jane; Anderson, Marilyn M.

    1988-01-01

    The purpose of this research was to develop a computer simulation model that represents the process through which physicians enter orders into a hospital information system (HIS). Computer simulation experiments were performed to estimate the effects of two methods of order entry on outcome variables. The results of the computer simulation experiments were used to perform a cost-benefit analysis to compare the two different means of entering medical orders into the HIS. The results indicate that the use of personal order sets to enter orders into the HIS will result in a significant reduction in manpower, salaries and fringe benefits, and errors in order entry.

  9. Costs of IQ Loss from Leaded Aviation Gasoline Emissions

    PubMed Central

    Wolfe, Philip J.; Giang, Amanda; Ashok, Akshay; Selin, Noelle E.; Barrett, Steven R. H.

    2017-01-01

    In the United States, general aviation piston-driven aircraft are now the largest source of lead emitted to the atmosphere. Elevated lead concentrations impair children’s IQ and can lead to lower earnings potentials. This study is the first assessment of the nationwide annual costs of IQ losses from aircraft lead emissions. We develop a general aviation emissions inventory for the continental United States and model its impact on atmospheric concentrations using the Community Multi-Scale Air Quality Model (CMAQ). We use these concentrations to quantify the impacts of annual aviation lead emissions on the U.S. population using two methods: through static estimates of cohort-wide IQ deficits and through dynamic economy-wide effects using a computational general equilibrium model. We also examine the sensitivity of these damage estimates to different background lead concentrations, showing the impact of lead controls and regulations on marginal costs. We find that aircraft-attributable lead contributes to $1.06 billion 2006 USD ($0.01 – $11.6) in annual damages from lifetime earnings reductions, and that dynamic economy-wide methods result in damage estimates that are 54% larger. Because the marginal costs of lead are dependent on background concentration, the costs of piston-driven aircraft lead emissions are expected to increase over time as regulations on other emissions sources are tightened. PMID:27494542

  10. Costs of IQ Loss from Leaded Aviation Gasoline Emissions.

    PubMed

    Wolfe, Philip J; Giang, Amanda; Ashok, Akshay; Selin, Noelle E; Barrett, Steven R H

    2016-09-06

    In the United States, general aviation piston-driven aircraft are now the largest source of lead emitted to the atmosphere. Elevated lead concentrations impair children's IQ and can lead to lower earnings potentials. This study is the first assessment of the nationwide annual costs of IQ losses from aircraft lead emissions. We develop a general aviation emissions inventory for the continental United States and model its impact on atmospheric concentrations using the community multi-scale air quality model (CMAQ). We use these concentrations to quantify the impacts of annual aviation lead emissions on the U.S. population using two methods: through static estimates of cohort-wide IQ deficits and through dynamic economy-wide effects using a computational general equilibrium model. We also examine the sensitivity of these damage estimates to different background lead concentrations, showing the impact of lead controls and regulations on marginal costs. We find that aircraft-attributable lead contributes to $1.06 billion 2006 USD ($0.01-$11.6) in annual damages from lifetime earnings reductions, and that dynamic economy-wide methods result in damage estimates that are 54% larger. Because the marginal costs of lead are dependent on background concentration, the costs of piston-driven aircraft lead emissions are expected to increase over time as regulations on other emissions sources are tightened.

  11. Cost of speech-language interventions for children and youth with foetal alcohol spectrum disorder in Canada.

    PubMed

    Popova, Svetlana; Lange, Shannon; Burd, Larry; Shield, Kevin; Rehm, Jürgen

    2014-12-01

    This study, which is part of a large economic project on the overall burden and cost associated with Foetal Alcohol Spectrum Disorder (FASD) in Canada, estimated the cost of 1:1 speech-language interventions among children and youth with FASD for Canada in 2011. The number of children and youth with FASD and speech-language disorder(s) (SLD), the distribution of the level of severity, and the number of hours needed to treat were estimated using data from the available literature. The costs of 1:1 speech-language interventions were computed using the average cost per hour for speech-language pathologists. It was estimated that approximately 37,928 children and youth with FASD had SLD in Canada in 2011. Using the most conservative approach, the annual cost of 1:1 speech-language interventions among children and youth with FASD is substantial, ranging from $72.5 million to $144.1 million Canadian dollars. Speech-language pathologists should be aware of the disproportionate number of children and youth with FASD who have SLD and of the need for early identification to improve access to early intervention. Early identification and access to high-quality services may have a role in decreasing the risk of developing secondary disabilities and in reducing the economic burden of FASD on society.

  12. Limitations of polynomial chaos expansions in the Bayesian solution of inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Fei; Department of Mathematics, University of California, Berkeley; Morzfeld, Matthias, E-mail: mmo@math.lbl.gov

    2015-02-01

    Polynomial chaos expansions are used to reduce the computational cost in the Bayesian solutions of inverse problems by creating a surrogate posterior that can be evaluated inexpensively. We show, by analysis and example, that when the data contain significant information beyond what is assumed in the prior, the surrogate posterior can be very different from the posterior, and the resulting estimates become inaccurate. One can improve the accuracy by adaptively increasing the order of the polynomial chaos, but the cost may increase too fast for this to be cost effective compared to Monte Carlo sampling without a surrogate posterior.

  13. Shale Gas Boom or Bust? Estimating US and Global Economically Recoverable Resources

    NASA Astrophysics Data System (ADS)

    Brecha, R. J.; Hilaire, J.; Bauer, N.

    2014-12-01

    One of the most disruptive energy system technological developments of the past few decades is the rapid expansion of shale gas production in the United States. Because the changes have been so rapid there are great uncertainties as to the impacts of shale production for medium- and long-term energy and climate change mitigation policies. A necessary starting point for incorporating shale resources into modeling efforts is to understand the size of the resource, how much is technically recoverable (TRR), and finally, how much is economically recoverable (ERR) at a given cost. To assess production costs of shale gas, we combine top-down data with detailed bottom-up information. Studies solely based on top-down approaches do not adequately account for the heterogeneity of shale gas deposits and are unlikely to appropriately estimate extraction costs. We design an expedient bottom-up method based on publicly available US data to compute the levelized costs of shale gas extraction. Our results indicate the existence of economically attractive areas but also reveal a dramatic cost increase as lower-quality reservoirs are exploited. Extrapolating results for the US to the global level, our best estimate suggests that, at a cost of 6 US$/GJ, only 39% of the technically recoverable resources reported in top-down studies should be considered economically recoverable. This estimate increases to about 77% when considering optimistic TRR and estimated ultimate recovery parameters but could be lower than 12% for more pessimistic parameters. The current lack of information on the heterogeneity of shale gas deposits as well as on the development of future production technologies leads to significant uncertainties regarding recovery rates and production costs. Much of this uncertainty may be inherent, but for energy system planning purposes, with or without climate change mitigation policies, it is crucial to recognize the full ranges of recoverable quantities and costs.

  14. Systems Engineering and Integration (SE and I)

    NASA Technical Reports Server (NTRS)

    Chevers, ED; Haley, Sam

    1990-01-01

    The issue of technology advancement and future space transportation vehicles is addressed. The challenge is to develop systems which can be evolved and improved in small incremental steps, where each increment reduces present cost, improves reliability, or does neither but sets the stage for a subsequent incremental upgrade that does. Future requirements are interface standards for commercial off-the-shelf products to aid in the development of integrated facilities; an enhanced automated code generation system tightly coupled to specification and design documentation; modeling tools that support data flow analysis; and shared project databases consisting of technical characteristics, cost information, measurement parameters, and reusable software programs. Topics addressed include: advanced avionics development strategy; risk analysis and management; tool quality management; low cost avionics; cost estimation and benefits; computer aided software engineering; computer systems and software safety; system testability; advanced avionics laboratories; and rapid prototyping. This presentation is represented by viewgraphs only.

  15. Use of computer models to assess exposure to agricultural chemicals via drinking water.

    PubMed

    Gustafson, D I

    1995-10-27

    Surveys of drinking water quality throughout the agricultural regions of the world have revealed the tendency of certain crop protection chemicals to enter water supplies. Fortunately, the trace concentrations that have been detected are generally well below the levels thought to have any negative impact on human health or the environment. However, the public expects drinking water to be pristine and seems willing to bear the costs involved in further regulating agricultural chemical use in such a way so as to eliminate the potential for such materials to occur at any detectable level. Of all the tools available to assess exposure to agricultural chemicals via drinking water, computer models are one of the most cost-effective. Although not sufficiently predictive to be used in the absence of any field data, such computer programs can be used with some degree of certainty to perform quantitative extrapolations and thereby quantify regional exposure from field-scale monitoring information. Specific models and modeling techniques will be discussed for performing such exposure analyses. Improvements in computer technology have recently made it practical to use Monte Carlo and other probabilistic techniques as a routine tool for estimating human exposure. Such methods make it possible, at least in principle, to prepare exposure estimates with known confidence intervals and sufficient statistical validity to be used in the regulatory management of agricultural chemicals.
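
    A minimal sketch of the Monte Carlo exposure estimation mentioned above: sample concentration, water intake, and body weight distributions and report exposure percentiles. The distributions and parameter values are hypothetical placeholders, not regulatory or study values.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 100_000

    # Hypothetical input distributions: chemical concentration in drinking water
    # (ug/L), daily water intake (L/day), and body weight (kg). Exposure is
    # expressed as ug per kg body weight per day.
    concentration = rng.lognormal(mean=np.log(0.5), sigma=0.8, size=n)
    intake        = rng.normal(loc=2.0, scale=0.5, size=n).clip(min=0.5)
    body_weight   = rng.normal(loc=70.0, scale=12.0, size=n).clip(min=30.0)

    exposure = concentration * intake / body_weight

    for q in (50, 95, 99):
        print(f"P{q} exposure: {np.percentile(exposure, q):.4f} ug/kg/day")
    ```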

  16. Resource utilization and costs during the initial years of lung cancer screening with computed tomography in Canada.

    PubMed

    Cressman, Sonya; Lam, Stephen; Tammemagi, Martin C; Evans, William K; Leighl, Natasha B; Regier, Dean A; Bolbocean, Corneliu; Shepherd, Frances A; Tsao, Ming-Sound; Manos, Daria; Liu, Geoffrey; Atkar-Khattra, Sukhinder; Cromwell, Ian; Johnston, Michael R; Mayo, John R; McWilliams, Annette; Couture, Christian; English, John C; Goffin, John; Hwang, David M; Puksa, Serge; Roberts, Heidi; Tremblay, Alain; MacEachern, Paul; Burrowes, Paul; Bhatia, Rick; Finley, Richard J; Goss, Glenwood D; Nicholas, Garth; Seely, Jean M; Sekhon, Harmanjatinder S; Yee, John; Amjadi, Kayvan; Cutz, Jean-Claude; Ionescu, Diana N; Yasufuku, Kazuhiro; Martel, Simon; Soghrati, Kamyar; Sin, Don D; Tan, Wan C; Urbanski, Stefan; Xu, Zhaolin; Peacock, Stuart J

    2014-10-01

    It is estimated that millions of North Americans would qualify for lung cancer screening and that billions of dollars of national health expenditures would be required to support population-based computed tomography lung cancer screening programs. The decision to implement such programs should be informed by data on resource utilization and costs. Resource utilization data were collected prospectively from 2059 participants in the Pan-Canadian Early Detection of Lung Cancer Study using low-dose computed tomography (LDCT). Participants who had 2% or greater lung cancer risk over 3 years using a risk prediction tool were recruited from seven major cities across Canada. A cost analysis was conducted from the Canadian public payer's perspective for resources that were used for the screening and treatment of lung cancer in the initial years of the study. The average per-person cost for screening individuals with LDCT was $453 (95% confidence interval [CI], $400-$505) for the initial 18-months of screening following a baseline scan. The screening costs were highly dependent on the detected lung nodule size, presence of cancer, screening intervention, and the screening center. The mean per-person cost of treating lung cancer with curative surgery was $33,344 (95% CI, $31,553-$34,935) over 2 years. This was lower than the cost of treating advanced-stage lung cancer with chemotherapy, radiotherapy, or supportive care alone, ($47,792; 95% CI, $43,254-$52,200; p = 0.061). In the Pan-Canadian study, the average cost to screen individuals with a high risk for developing lung cancer using LDCT and the average initial cost of curative intent treatment were lower than the average per-person cost of treating advanced stage lung cancer which infrequently results in a cure.

  17. COSTMODL - AN AUTOMATED SOFTWARE DEVELOPMENT COST ESTIMATION TOOL

    NASA Technical Reports Server (NTRS)

    Roush, G. B.

    1994-01-01

    The cost of developing computer software consumes an increasing portion of many organizations' budgets. As this trend continues, the capability to estimate the effort and schedule required to develop a candidate software product becomes increasingly important. COSTMODL is an automated software development estimation tool which fulfills this need. Assimilating COSTMODL to any organization's particular environment can yield significant reduction in the risk of cost overruns and failed projects. This user-customization capability is unmatched by any other available estimation tool. COSTMODL accepts a description of a software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of the defined set of development life-cycle phases. This is accomplished by the five cost estimation algorithms incorporated into COSTMODL: the NASA-developed KISS model; the Basic, Intermediate, and Ada COCOMO models; and the Incremental Development model. This choice affords the user the ability to handle project complexities ranging from small, relatively simple projects to very large projects. Unique to COSTMODL is the ability to redefine the life-cycle phases of development and the capability to display a graphic representation of the optimum organizational structure required to develop the subject project, along with required staffing levels and skills. The program is menu-driven and mouse sensitive with an extensive context-sensitive help system that makes it possible for a new user to easily install and operate the program and to learn the fundamentals of cost estimation without having prior training or separate documentation. The implementation of these functions, along with the customization feature, into one program makes COSTMODL unique within the industry. COSTMODL was written for IBM PC compatibles, and it requires Turbo Pascal 5.0 or later and Turbo Professional 5.0 for recompilation. An executable is provided on the distribution diskettes. COSTMODL requires 512K RAM. The standard distribution medium for COSTMODL is three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. COSTMODL was developed in 1991. IBM PC is a registered trademark of International Business Machines. Borland and Turbo Pascal are registered trademarks of Borland International, Inc. Turbo Professional is a trademark of TurboPower Software. MS-DOS is a registered trademark of Microsoft Corporation.

  18. Design and implementation of a Windows NT network to support CNC activities

    NASA Technical Reports Server (NTRS)

    Shearrow, C. A.

    1996-01-01

    The Manufacturing, Materials, & Processes Technology Division is undergoing dramatic changes to bring its manufacturing practices current with today's technological revolution. The Division is developing Computer Automated Design and Computer Automated Manufacturing (CAD/CAM) capabilities. The development of resource tracking is underway in the form of an accounting software package called Infisy. These two efforts will bring the division into the 1980's in relationship to manufacturing processes. Computer Integrated Manufacturing (CIM) is the final phase of change to be implemented. This document is a qualitative study and application of a CIM approach capable of completing the changes necessary to bring the manufacturing practices into the 1990's. The documentation provided in this qualitative research effort includes discovery of the current status of manufacturing in the Manufacturing, Materials, & Processes Technology Division, including the software, hardware, network, and mode of operation. The proposed direction of research includes a network design, the computers to be used, the software to be used, machine-to-computer connections, an estimated timeline for implementation, and a cost estimate. Recommendations for the division's improvement include actions to be taken, software to utilize, and computer configurations.

  19. A Modeling Framework for Optimal Computational Resource Allocation Estimation: Considering the Trade-offs between Physical Resolutions, Uncertainty and Computational Costs

    NASA Astrophysics Data System (ADS)

    Moslehi, M.; de Barros, F.; Rajagopal, R.

    2014-12-01

    Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale, with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing hydrogeological characteristics of the field. The physical resolution (e.g., the grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model predictions, and physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a modeling framework for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified by applying it to several computationally intensive examples. This framework helps hydrogeologists choose the optimum physical and statistical resolutions that minimize the error for a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions is investigated. We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
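
    A compact illustration of the resource-allocation idea, assuming a generic error model (discretization error scaling as h^p plus statistical error scaling as 1/sqrt(N)) and a fixed computational budget; the constants, exponents, and cost model below are invented and are not the expression derived in the study.

    ```python
    import numpy as np

    # Illustrative overall-error model: discretization error ~ C1 * h**p,
    # Monte Carlo error ~ C2 / sqrt(N), and computational cost ~ N * (L / h)**d
    # cell-updates for N realizations on a d-dimensional grid of spacing h.
    C1, C2, p, d, L = 1.0, 2.0, 2.0, 2, 100.0
    budget = 1e9

    def total_error(h, N):
        return C1 * h**p + C2 / np.sqrt(N)

    best = None
    for h in np.geomspace(0.05, 5.0, 200):            # candidate grid spacings
        N = int(budget / (L / h)**d)                  # largest realization count affordable at this h
        if N < 2:
            continue
        err = total_error(h, N)
        if best is None or err < best[0]:
            best = (err, h, N)

    err, h, N = best
    print(f"optimal spacing h = {h:.3f}, realizations N = {N}, total error = {err:.4f}")
    ```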

  20. Computing return times or return periods with rare event algorithms

    NASA Astrophysics Data System (ADS)

    Lestang, Thibault; Ragone, Francesco; Bréhier, Charles-Edouard; Herbert, Corentin; Bouchet, Freddy

    2018-04-01

    The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events, which so far have usually computed probabilities rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, saving several orders of magnitude in computational cost. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein–Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
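
    A short sketch of the block-maxima return-time estimator that the article takes as its starting point, applied to an Ornstein-Uhlenbeck sample path; the process parameters, threshold, and block length are illustrative, and the rare-event-algorithm generalization is not shown.

    ```python
    import numpy as np

    def return_time_from_blocks(signal, dt, block_len, threshold):
        """
        Estimate the return time of exceeding `threshold` from block maxima:
        for rare, roughly Poissonian exceedances, P(max over a block < a)
        = exp(-T_block / r(a)), so r(a) = -T_block / log(fraction of blocks below a).
        """
        n_blocks = len(signal) // block_len
        maxima = signal[: n_blocks * block_len].reshape(n_blocks, block_len).max(axis=1)
        frac_below = np.mean(maxima < threshold)
        if frac_below in (0.0, 1.0):
            return block_len * dt if frac_below == 0.0 else np.inf
        return -block_len * dt / np.log(frac_below)

    # Ornstein-Uhlenbeck sample path (Euler-Maruyama), as in the article's test case.
    rng = np.random.default_rng(5)
    dt, n_steps, theta, sigma = 0.01, 1_000_000, 1.0, 1.0
    x = np.empty(n_steps)
    x[0] = 0.0
    noise = rng.normal(scale=np.sqrt(dt), size=n_steps - 1)
    for i in range(n_steps - 1):
        x[i + 1] = x[i] - theta * x[i] * dt + sigma * noise[i]

    print(return_time_from_blocks(x, dt, block_len=5_000, threshold=2.0))
    ```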

  1. Efficient Data Assimilation Algorithms for Bathymetry Applications

    NASA Astrophysics Data System (ADS)

    Ghorbanidehno, H.; Kokkinaki, A.; Lee, J. H.; Farthing, M.; Hesser, T.; Kitanidis, P. K.; Darve, E. F.

    2016-12-01

    Information on the evolving state of the nearshore zone bathymetry is crucial to shoreline management, recreational safety, and naval operations. The high cost and complex logistics of using ship-based surveys for bathymetry estimation have encouraged the use of remote sensing monitoring. Data assimilation methods combine monitoring data and models of nearshore dynamics to estimate the unknown bathymetry and the corresponding uncertainties. Existing applications have been limited to the basic Kalman Filter (KF) and the Ensemble Kalman Filter (EnKF). The former can only be applied to low-dimensional problems due to its computational cost; the latter often suffers from ensemble collapse and uncertainty underestimation. This work explores the use of different variants of the Kalman Filter for bathymetry applications. In particular, we compare the performance of the EnKF to the Unscented Kalman Filter and the Hierarchical Kalman Filter, both of which are KF variants for non-linear problems. The objective is to identify which method can better handle the nonlinearities of nearshore physics, while also having a reasonable computational cost. We present two applications; first, the bathymetry of a synthetic one-dimensional cross section normal to the shore is estimated from wave speed measurements. Second, real remote measurements with unknown error statistics are used and compared to in situ bathymetric survey data collected at the USACE Field Research Facility in Duck, NC. We evaluate the information content of different data sets and explore the impact of measurement error and nonlinearities.
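
    A minimal stochastic EnKF analysis step on a synthetic "depth profile", to make the ensemble update concrete; the smooth prior ensemble, linear observation operator, and noise levels are illustrative assumptions, unlike the nonlinear wave-speed-to-depth mapping in the real application.

    ```python
    import numpy as np

    def enkf_update(ensemble, observations, obs_operator, obs_error_std, rng):
        """
        One stochastic EnKF analysis step (perturbed observations).
        ensemble: (n_state, n_members) prior state realizations.
        obs_operator: (n_obs, n_state) linearized observation operator H.
        """
        n_obs, n_members = obs_operator.shape[0], ensemble.shape[1]
        X = ensemble - ensemble.mean(axis=1, keepdims=True)          # state anomalies
        HX = obs_operator @ ensemble
        HA = HX - HX.mean(axis=1, keepdims=True)                     # predicted-obs anomalies
        R = (obs_error_std ** 2) * np.eye(n_obs)
        P_yy = HA @ HA.T / (n_members - 1) + R
        P_xy = X @ HA.T / (n_members - 1)
        K = P_xy @ np.linalg.inv(P_yy)                               # Kalman gain
        # Perturb observations so the analysis ensemble keeps the correct spread.
        perturbed = observations[:, None] + rng.normal(scale=obs_error_std,
                                                       size=(n_obs, n_members))
        return ensemble + K @ (perturbed - HX)

    # Tiny synthetic example: estimate a 50-point depth profile from 5 noisy point
    # observations, with a smooth prior ensemble so covariances spread information.
    rng = np.random.default_rng(6)
    grid = np.linspace(0, np.pi, 50)
    truth = 5.0 + 2.0 * np.sin(grid)
    amps = rng.normal(scale=2.0, size=(2, 100))                      # 100 ensemble members
    prior = 5.0 + amps[0][None, :] * np.sin(grid)[:, None] + amps[1][None, :] * np.cos(grid)[:, None]
    H = np.zeros((5, 50)); H[np.arange(5), [5, 15, 25, 35, 45]] = 1.0
    obs = H @ truth + rng.normal(scale=0.1, size=5)
    posterior = enkf_update(prior, obs, H, 0.1, rng)
    print("prior RMSE", np.sqrt(np.mean((prior.mean(1) - truth) ** 2)),
          "posterior RMSE", np.sqrt(np.mean((posterior.mean(1) - truth) ** 2)))
    ```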

  2. Electricity from fossil fuels without CO2 emissions: assessing the costs of carbon dioxide capture and sequestration in U.S. electricity markets.

    PubMed

    Johnson, T L; Keith, D W

    2001-10-01

    The decoupling of fossil-fueled electricity production from atmospheric CO2 emissions via CO2 capture and sequestration (CCS) is increasingly regarded as an important means of mitigating climate change at a reasonable cost. Engineering analyses of CO2 mitigation typically compare the cost of electricity for a base generation technology to that for a similar plant with CO2 capture and then compute the carbon emissions mitigated per unit of cost. It can be hard to interpret mitigation cost estimates from this plant-level approach when a consistent base technology cannot be identified. In addition, neither engineering analyses nor general equilibrium models can capture the economics of plant dispatch. A realistic assessment of the costs of carbon sequestration as an emissions abatement strategy in the electric sector therefore requires a systems-level analysis. We discuss various frameworks for computing mitigation costs and introduce a simplified model of electric sector planning. Results from a "bottom-up" engineering-economic analysis for a representative U.S. North American Electric Reliability Council (NERC) region illustrate how the penetration of CCS technologies and the dispatch of generating units vary with the price of carbon emissions and thereby determine the relationship between mitigation cost and emissions reduction.
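
    For reference, the plant-level comparison described above reduces to a cost of CO2 avoided: the extra cost of electricity divided by the emissions avoided per MWh. The sketch below uses hypothetical round numbers, not values from the paper, and omits the dispatch and systems-level effects the authors emphasize.

    ```python
    def mitigation_cost(base_cost_mwh, capture_cost_mwh,
                        base_emissions_t_mwh, capture_emissions_t_mwh):
        """Plant-level cost of CO2 avoided, in $ per tonne."""
        delta_cost = capture_cost_mwh - base_cost_mwh
        delta_emissions = base_emissions_t_mwh - capture_emissions_t_mwh
        return delta_cost / delta_emissions

    # Hypothetical round numbers: a base plant at $50/MWh and 0.90 tCO2/MWh versus
    # the same plant with capture at $75/MWh and 0.12 tCO2/MWh.
    print(f"${mitigation_cost(50.0, 75.0, 0.90, 0.12):.0f} per tonne CO2 avoided")
    ```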

  3. Electricity from Fossil Fuels without CO2 Emissions: Assessing the Costs of Carbon Dioxide Capture and Sequestration in U.S. Electricity Markets.

    PubMed

    Johnson, Timothy L; Keith, David W

    2001-10-01

    The decoupling of fossil-fueled electricity production from atmospheric CO2 emissions via CO2 capture and sequestration (CCS) is increasingly regarded as an important means of mitigating climate change at a reasonable cost. Engineering analyses of CO2 mitigation typically compare the cost of electricity for a base generation technology to that for a similar plant with CO2 capture and then compute the carbon emissions mitigated per unit of cost. It can be hard to interpret mitigation cost estimates from this plant-level approach when a consistent base technology cannot be identified. In addition, neither engineering analyses nor general equilibrium models can capture the economics of plant dispatch. A realistic assessment of the costs of carbon sequestration as an emissions abatement strategy in the electric sector therefore requires a systems-level analysis. We discuss various frameworks for computing mitigation costs and introduce a simplified model of electric sector planning. Results from a "bottom-up" engineering-economic analysis for a representative U.S. North American Electric Reliability Council (NERC) region illustrate how the penetration of CCS technologies and the dispatch of generating units vary with the price of carbon emissions and thereby determine the relationship between mitigation cost and emissions reduction.

  4. 101 Computer Projects for Libraries. 101 Micro Series.

    ERIC Educational Resources Information Center

    Dewey, Patrick R.

    The projects collected in this book represent a wide cross section of the way microcomputers are used in libraries. Each project description includes organization and contact information, hardware and software used, cost and project length estimates, and Web or print references when available. Projects come from academic and public libraries,…

  5. Beyond Passwords: Usage and Policy Transformation

    DTIC Science & Technology

    2007-03-01

    ...case scenario for lost productivity due to users leaving their CAC at work, in their computer, is costing 261 work years per year with an estimated... Currently, the primary method for network authentication on the...

  6. Public Schools Energy Conservation Measures, Report Number 4: Hindman Elementary School, Hindman, Kentucky.

    ERIC Educational Resources Information Center

    American Association of School Administrators, Arlington, VA.

    Presented is a study identifying and evaluating opportunities for decreasing energy use at Hindman Elementary School, Hindman, Kentucky. Methods used in this engineering investigation include building surveys, computer simulations and cost estimates. Findings revealed that modifications to the school's boiler, temperature controls, electrical…

  7. Autonomous Sun-Direction Estimation Using Partially Underdetermined Coarse Sun Sensor Configurations

    NASA Astrophysics Data System (ADS)

    O'Keefe, Stephen A.

    In recent years there has been a significant increase in interest in smaller satellites as lower-cost alternatives to traditional satellites, particularly with the rise in popularity of the CubeSat. Due to stringent mass, size, and often budget constraints, these small satellites rely on making the most of inexpensive hardware components and sensors, such as coarse sun sensors (CSS) and magnetometers. More expensive high-accuracy sun sensors often combine multiple measurements, and use specialized electronics, to deterministically solve for the direction of the Sun. Alternatively, cosine-type CSS output a voltage relative to the input light and are attractive due to their very low cost, simplicity to manufacture, small size, and minimal power consumption. This research investigates using coarse sun sensors for performing robust attitude estimation in order to point a spacecraft at the Sun after deployment from a launch vehicle, or following a system fault. As an alternative to using a large number of sensors, this thesis explores sun-direction estimation techniques with low computational costs that function well with underdetermined sets of CSS. Single-point estimators are coupled with simultaneous nonlinear control to achieve sun-pointing within a small percentage of a single orbit despite the partially underdetermined nature of the sensor suite. Leveraging an extensive analysis of the sensor models involved, sequential filtering techniques are shown to be capable of estimating the sun-direction to within a few degrees, with no a priori attitude information and using only CSS, despite the significant noise and biases present in the system. Detailed numerical simulations are used to compare and contrast the performance of the five different estimation techniques, with and without rate gyro measurements, their sensitivity to rate gyro accuracy, and their computation time. One of the key concerns with reducing the number of CSS is sensor degradation and failure. In this thesis, a Modified Rodrigues Parameter-based CSS calibration filter suitable for autonomous on-board operation is developed. The sensitivity of this method's accuracy to the available Earth albedo data is evaluated and compared to the required computational effort. The calibration filter is expanded to perform sensor fault detection, and promising results are shown for reduced-resolution albedo models. All of the methods discussed provide alternative attitude determination and control system algorithms for small satellite missions looking to use inexpensive, small sensors due to size, power, or budget limitations.

  8. Nonlinear stability of traffic models and the use of Lyapunov vectors for estimating the traffic state

    NASA Astrophysics Data System (ADS)

    Palatella, Luigi; Trevisan, Anna; Rambaldi, Sandro

    2013-08-01

    Valuable information for estimating the traffic flow is obtained with current GPS technology by monitoring position and velocity of vehicles. In this paper, we present a proof of concept study that shows how the traffic state can be estimated using only partial and noisy data by assimilating them in a dynamical model. Our approach is based on a data assimilation algorithm, developed by the authors for chaotic geophysical models, designed to be equivalent but computationally much less demanding than the traditional extended Kalman filter. Here we show that the algorithm is even more efficient if the system is not chaotic and demonstrate by numerical experiments that an accurate reconstruction of the complete traffic state can be obtained at a very low computational cost by monitoring only a small percentage of vehicles.
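
    The assimilation step described above can be illustrated with a generic linear Kalman filter update that corrects a model state using a partial, noisy observation; this is a stand-in sketch, not the authors' reduced-rank algorithm, and the matrices below are invented.

      # Generic Kalman filter measurement update with a partial observation.
      import numpy as np

      def kf_update(x, P, z, H, R):
          """State x, covariance P, observation z = H @ x + noise with covariance R."""
          S = H @ P @ H.T + R                    # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
          x_new = x + K @ (z - H @ x)            # corrected state
          P_new = (np.eye(len(x)) - K @ H) @ P   # corrected covariance
          return x_new, P_new

      # Four-component "traffic state"; only components 0 and 2 are observed.
      x, P = np.array([10.0, 20.0, 30.0, 40.0]), np.eye(4)
      H = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]])
      x, P = kf_update(x, P, z=np.array([10.4, 29.1]), H=H, R=0.5 * np.eye(2))
      print(x)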

  9. Reducing the Time and Cost of Testing Engines

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Producing a new aircraft engine currently costs approximately $1 billion, with 3 years of development time for a commercial engine and 10 years for a military engine. The high development time and cost make it extremely difficult to transition advanced technologies for cleaner, quieter, and more efficient new engines. To reduce this time and cost, NASA created a vision for the future where designers would use high-fidelity computer simulations early in the design process in order to resolve critical design issues before building the expensive engine hardware. To accomplish this vision, NASA's Glenn Research Center initiated a collaborative effort with the aerospace industry and academia to develop its Numerical Propulsion System Simulation (NPSS), an advanced engineering environment for the analysis and design of aerospace propulsion systems and components. Partners estimate that using NPSS has the potential to dramatically reduce the time, effort, and expense necessary to design and test jet engines by generating sophisticated computer simulations of an aerospace object or system. These simulations will permit an engineer to test various design options without having to conduct costly and time-consuming real-life tests. By accelerating and streamlining the engine system design analysis and test phases, NPSS facilitates bringing the final product to market faster. NASA's NPSS Version (V)1.X effort was a task within the Agency's Computational Aerospace Sciences project of the High Performance Computing and Communication program, which had a mission to accelerate the availability of high-performance computing hardware and software to the U.S. aerospace community for its use in design processes. The technology brings value back to NASA by improving methods of analyzing and testing space transportation components.

  10. Computed tomographic colonography to screen for colorectal cancer, extracolonic cancer, and aortic aneurysm: model simulation with cost-effectiveness analysis.

    PubMed

    Hassan, Cesare; Pickhardt, Perry J; Laghi, Andrea; Kim, Daniel H; Zullo, Angelo; Iafrate, Franco; Di Giulio, Lorenzo; Morini, Sergio

    2008-04-14

    In addition to detecting colorectal neoplasia, abdominal computed tomography (CT) with colonography technique (CTC) can also detect unsuspected extracolonic cancers and abdominal aortic aneurysms (AAA). The efficacy and cost-effectiveness of this combined abdominal CT screening strategy are unknown. A computerized Markov model was constructed to simulate the occurrence of colorectal neoplasia, extracolonic malignant neoplasm, and AAA in a hypothetical cohort of 100,000 subjects from the United States who were 50 years of age. Simulated screening with CTC, using a 6-mm polyp size threshold for reporting, was compared with a competing model of optical colonoscopy (OC), both without and with abdominal ultrasonography for AAA detection (OC-US strategy). In the simulated population, CTC was the dominant screening strategy, gaining an additional 1458 and 462 life-years compared with the OC and OC-US strategies and being less costly, with a savings of $266 and $449 per person, respectively. The additional gains for CTC were largely due to a decrease in AAA-related deaths, whereas the modeled benefit from extracolonic cancer downstaging was a relatively minor factor. In sensitivity analysis, OC-US became more cost-effective only when the CTC sensitivity for large polyps dropped to 61% or when broad variations of costs were simulated, such as an increase in CTC cost from $814 to $1300 or a decrease in OC cost from $1100 to $500. With the OC-US approach, suboptimal compliance had a strong negative influence on efficacy and cost-effectiveness. The estimated mortality from CT-induced cancer was less than estimated colonoscopy-related mortality (8 vs 22 deaths), both of which were minor compared with the positive benefit from screening. When detection of extracolonic findings such as AAA and extracolonic cancer is considered in addition to colorectal neoplasia in our model simulation, CT colonography is a dominant screening strategy (ie, more clinically effective and more cost-effective) over both colonoscopy and colonoscopy with 1-time ultrasonography.
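
    The mechanics behind this kind of screening model can be illustrated with a toy Markov cohort simulation; the states, transition probabilities, and costs below are invented placeholders, not the parameters of the published model.

      # Toy Markov cohort model: advance the cohort one annual cycle at a time
      # while accumulating costs and life-years. All numbers are invented.
      import numpy as np

      T = np.array([                 # rows/columns: well, neoplasia, cancer, dead
          [0.97, 0.02, 0.00, 0.01],
          [0.00, 0.93, 0.05, 0.02],
          [0.00, 0.00, 0.90, 0.10],
          [0.00, 0.00, 0.00, 1.00],
      ])
      annual_cost = np.array([100.0, 300.0, 20000.0, 0.0])   # per person-year
      cohort = np.array([100_000.0, 0.0, 0.0, 0.0])          # 50-year-olds at baseline

      total_cost = life_years = 0.0
      for _ in range(30):
          life_years += cohort[:3].sum()
          total_cost += (cohort * annual_cost).sum()
          cohort = cohort @ T
      print(total_cost / 100_000, life_years / 100_000)      # per-person averages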

  11. A comparison of critical care research funding and the financial burden of critical illness in the United States.

    PubMed

    Coopersmith, Craig M; Wunsch, Hannah; Fink, Mitchell P; Linde-Zwirble, Walter T; Olsen, Keith M; Sommers, Marilyn S; Anand, Kanwaljeet J S; Tchorz, Kathryn M; Angus, Derek C; Deutschman, Clifford S

    2012-04-01

    To estimate federal dollars spent on critical care research and the cost of providing critical care, and to determine whether the percentage of federal research dollars spent on critical care research is commensurate with the financial burden of critical care. The National Institutes of Health Computer Retrieval of Information on Scientific Projects database was queried to identify funded grants whose title or abstract contained a key word potentially related to critical care. Each grant identified was analyzed by two reviewers (three if the analysis was discordant) to subjectively determine whether it was definitely, possibly, or definitely not related to critical care. Hospital and total costs of critical care were estimated from the Premier Database, state discharge data, and Medicare data. To estimate healthcare expenditures associated with caring for critically ill patients, total costs were calculated as the combination of hospitalization costs that included critical illness as well as additional costs in the year after hospital discharge. Of 19,257 grants funded by the National Institutes of Health, 332 (1.7%) were definitely related to critical care and a maximum of 1212 (6.3%) grants were possibly related to critical care. Between 17.4% and 39.0% of total hospital costs were spent on critical care, and a total of between $121 and $263 billion was estimated to be spent on patients who required intensive care. This represents 5.2% to 11.2%, respectively, of total U.S. healthcare spending. The proportion of research dollars spent on critical care is lower than the percentage of healthcare expenditures related to critical illness.

  12. Continuous stacking computational approach based automated microscope slide scanner

    NASA Astrophysics Data System (ADS)

    Murali, Swetha; Adhikari, Jayesh Vasudeva; Jagannadh, Veerendra Kalyan; Gorthi, Sai Siva

    2018-02-01

    Cost-effective and automated acquisition of whole slide images is a bottleneck for wide-scale deployment of digital pathology. In this article, a computation augmented approach for the development of an automated microscope slide scanner is presented. The realization of a prototype device built using inexpensive off-the-shelf optical components and motors is detailed. The applicability of the developed prototype to clinical diagnostic testing is demonstrated by generating good quality digital images of malaria-infected blood smears. Further, the acquired slide images have been processed to identify and count the number of malaria-infected red blood cells and thereby perform quantitative parasitemia level estimation. The presented prototype would enable cost-effective deployment of slide-based cyto-diagnostic testing in endemic areas.

  13. Is computer aided detection (CAD) cost effective in screening mammography? A model based on the CADET II study

    PubMed Central

    2011-01-01

    Background Single reading with computer aided detection (CAD) is an alternative to double reading for detecting cancer in screening mammograms. The aim of this study is to investigate whether the use of a single reader with CAD is more cost-effective than double reading. Methods Based on data from the CADET II study, the cost-effectiveness of single reading with CAD versus double reading was measured in terms of cost per cancer detected. Cost (Pound (£), year 2007/08) of single reading with CAD versus double reading was estimated assuming a health and social service perspective and a 7-year time horizon. As the equipment cost varies according to the unit size, a separate analysis was conducted for high, average and low volume screening units. One-way sensitivity analyses were performed by varying the reading time, equipment and assessment cost, recall rate and reader qualification. Results CAD is cost-increasing for all sizes of screening unit. The introduction of CAD is cost-increasing compared to double reading because the cost of CAD equipment, staff training and the higher assessment cost associated with CAD are greater than the saving in reading costs. The introduction of single reading with CAD, in place of double reading, would produce an additional cost of £227 and £253 per 1,000 women screened in high and average volume units respectively. In low volume screening units, the high cost of purchasing the equipment results in an additional cost of £590 per 1,000 women screened. One-way sensitivity analysis showed that the factors having the greatest effect on the cost-effectiveness of CAD with single reading compared with double reading were the reading time and the reader's professional qualification (radiologist versus advanced practitioner). Conclusions Without improvements in CAD effectiveness (e.g. a decrease in the recall rate) CAD is unlikely to be a cost-effective alternative to double reading for mammography screening in the UK. This study provides updated estimates of CAD costs in a full-field digital system and assessment cost for women who are recalled after initial screening. However, the model is highly sensitive to various parameters, e.g., reading time, reader qualification, and equipment cost. PMID:21241473

  14. Poster — Thur Eve — 44: Linearization of Compartmental Models for More Robust Estimates of Regional Hemodynamic, Metabolic and Functional Parameters using DCE-CT/PET Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blais, AR; Dekaban, M; Lee, T-Y

    2014-08-15

    Quantitative analysis of dynamic positron emission tomography (PET) data usually involves minimizing a cost function with nonlinear regression, wherein the choice of starting parameter values and the presence of local minima affect the bias and variability of the estimated kinetic parameters. These nonlinear methods can also require lengthy computation time, making them unsuitable for use in clinical settings. Kinetic modeling of PET aims to estimate the rate parameter k3, which is the binding affinity of the tracer to a biological process of interest and is highly susceptible to noise inherent in PET image acquisition. We have developed linearized kinetic models for kinetic analysis of dynamic contrast-enhanced computed tomography (DCE-CT)/PET imaging, including a 2-compartment model for DCE-CT and a 3-compartment model for PET. Use of kinetic parameters estimated from DCE-CT can stabilize the kinetic analysis of dynamic PET data, allowing for more robust estimation of k3. Furthermore, these linearized models are solved with a non-negative least squares algorithm and together they provide other advantages including: 1) only one possible solution and they do not require a choice of starting parameter values, 2) parameter estimates are comparable in accuracy to those from nonlinear models, 3) significantly reduced computational time. Our simulated data show that when blood volume and permeability are estimated with DCE-CT, the bias of k3 estimation with our linearized model is 1.97 ± 38.5% for 1,000 runs with a signal-to-noise ratio of 10. In summary, we have developed a computationally efficient technique for accurate estimation of k3 from noisy dynamic PET data.
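
    The record notes that the linearized models are solved with a non-negative least squares algorithm; the sketch below shows that step on a random placeholder system, not the actual DCE-CT/PET design matrix.

      # Non-negative least squares step; A and b are random placeholders.
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)
      A = rng.random((50, 3))                # design matrix built from measured curves
      k_true = np.array([0.2, 0.05, 0.8])    # non-negative kinetic parameters
      b = A @ k_true + 0.01 * rng.standard_normal(50)

      k_hat, residual_norm = nnls(A, b)      # unique solution, no starting values needed
      print(k_hat)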

  15. Postintroduction Study of Cost Effectiveness of Pneumococcal Vaccine PCV10 from Public Sector Payer's Perspective in the State of Santa Catarina, Brazil.

    PubMed

    Kupek, Emil; Viertel, Ilse

    2018-05-14

    To evaluate cost effectiveness of 10-valent pneumococcal conjugate vaccine in the routine immunization program for children younger than 5 years in Brazil by a postintroduction study. An ecological study of the prevaccine (2006-2009) versus postvaccine (2011-2014) periods related the changes in mortality and hospitalization rates to the direct cost of pneumonia treatment from the payer's perspective, to estimate cost effectiveness in terms of lives saved, life-years gained, and disability-adjusted life-years for children younger than 5 years in the southern Brazilian state of Santa Catarina. All-cause pneumonia (ICD-10 J12-J18) deaths, hospital admissions, and associated costs were retrieved from the Brazilian Ministry of Health official Web site. Life expectancy at birth, population, ambulatory costs, cost savings, and plausible ranges of these parameters were taken from published sources. Computer simulations with sensitivity analysis were performed to obtain the cost-effectiveness estimates. About 27 lives were saved and 2573 hospitalizations averted by 10-valent pneumococcal conjugate vaccination in the 2011 to 2014 period at the cost of US $24,348 per life-year gained and US $27,748 per disability-adjusted life-year. The latter cost is 81% of Brazilian gross domestic product per capita over the same period. The vaccine was very cost-effective according to the World Health Organization criterion. Copyright © 2018. Published by Elsevier Inc.

  16. Predictive Software Cost Model Study. Volume II. Software Package Detailed Data.

    DTIC Science & Technology

    1980-06-01

    ...will not be limited to: a. ASN-91 NWDS Computer, b. Armament System Control Unit (ASCU), c. AN/ASN-90 IMS... Detailed analysis and study; impacts on hardware, manuals, data, AGE, etc.; alternatives with pros and cons; cost estimates; ECP...

  17. Percutaneous cryoablation of metastatic renal cell carcinoma for local tumor control: feasibility, outcomes, and estimated cost-effectiveness for palliation.

    PubMed

    Bang, Hyun J; Littrup, Peter J; Goodrich, Dylan J; Currier, Brandt P; Aoun, Hussein D; Heilbrun, Lance K; Vaishampayan, Ulka; Adam, Barbara; Goodman, Allen C

    2012-06-01

    To assess complications, local tumor recurrences, overall survival (OS), and estimates of cost-effectiveness for multisite cryoablation (MCA) of oligometastatic renal cell carcinoma (RCC). A total of 60 computed tomography- and/or ultrasound-guided percutaneous MCA procedures were performed on 72 tumors in 27 patients (three women and 24 men). Average patient age was 63 years. Tumor location was grouped according to common metastatic sites. Established surgical selection criteria graded patient status. Median OS was determined by Kaplan-Meier method and defined life-years gained (LYGs). Estimates of MCA costs per LYG were compared with established values for systemic therapies. Total number of tumors and cryoablation procedures for each anatomic site are as follows: nephrectomy bed, 11 and 11; adrenal gland, nine and eight; paraaortic, seven and six; lung, 14 and 13; bone, 13 and 13; superficial, 12 and nine; intraperitoneal, five and three; and liver, one and one. A mean of 2.2 procedures per patient were performed, with a median clinical follow-up of 16 months. Major complication and local recurrence rates were 2% (one of 60) and 3% (two of 72), respectively. No patients were graded as having good surgical risk, but median OS was 2.69 years, with an estimated 5-year survival rate of 27%. Cryoablation remained cost-effective with or without the presence of systemic therapies according to historical cost comparisons, with an adjunctive cost-effectiveness ratio of $28,312-$59,554 per LYG. MCA was associated with very low morbidity and local tumor recurrence rates for all anatomic sites, with apparent increased OS. Even as an adjunct to systemic therapies, MCA appeared cost-effective for palliation of oligometastatic RCC. Copyright © 2012 SIR. Published by Elsevier Inc. All rights reserved.

  18. Computed Tomography Screening for Lung Cancer in the National Lung Screening Trial

    PubMed Central

    Black, William C.

    2016-01-01

    The National Lung Screening Trial (NLST) demonstrated that screening with low-dose CT versus chest radiography reduced lung cancer mortality by 16% to 20%. More recently, a cost-effectiveness analysis (CEA) of CT screening for lung cancer versus no screening in the NLST was performed. The CEA conformed to the reference-case recommendations of the US Panel on Cost-Effectiveness in Health and Medicine, including the use of the societal perspective and an annual discount rate of 3%. The CEA was based on several important assumptions. In this paper, I review the methods and assumptions used to obtain the base case estimate of $81,000 per quality-adjusted life-year gained. In addition, I show how this estimate varied widely among different subsets and when some of the base case assumptions were changed and speculate on the cost-effectiveness of CT screening for lung cancer outside the NLST. PMID:25635704

  19. Repository Planning, Design, and Engineering: Part II-Equipment and Costing.

    PubMed

    Baird, Phillip M; Gunter, Elaine W

    2016-08-01

    Part II of this article discusses and provides guidance on the equipment and systems necessary to operate a repository. The various types of storage equipment and monitoring and support systems are presented in detail. While the material focuses on the large repository, the requirements for a small-scale startup are also presented. Cost estimates and a cost model for establishing a repository are presented. The cost model presents an expected range of acquisition costs for the large capital items in developing a repository. A range of 5,000-7,000 ft² constructed has been assumed, with 50 frozen storage units, to reflect a successful operation with growth potential. No design or engineering costs, permit or regulatory costs, or smaller items such as the computers, software, furniture, phones, and barcode readers required for operations have been included.

  20. Cost-effectiveness assessment in outpatient sinonasal surgery.

    PubMed

    Mortuaire, G; Theis, D; Fackeure, R; Chevalier, D; Gengler, I

    2018-02-01

    To assess the cost-effectiveness of outpatient sinonasal surgery in terms of clinical efficacy and control of expenses. A retrospective study was conducted from January 2014 to January 2016. Patients scheduled for outpatient sinonasal surgery were systematically included. Clinical data were extracted from surgical and anesthesiology computer files. The cost accounting methods applied in our institution were used to evaluate logistic and technical costs. The standardized hospital fees rating system based on hospital stay and severity in diagnosis-related groups (Groupes homogènes de séjours: GHS) was used to estimate institutional revenue. Over 2 years, 927 outpatient surgical procedures were performed. The crossover rate to conventional hospital admission was 2.9%. In a day-1 telephone interview, 85% of patients were very satisfied with the procedure. All outpatient cases showed significantly lower costs than estimated for conventional management with overnight admission, while hospital revenue did not differ between the two. This study confirmed the efficacy of outpatient surgery in this indication. Lower costs could allow savings for the health system by readjusting the rating for the procedure. More precise assessment of cost-effectiveness will require more fine-grained studies based on micro-costing at the hospital level and assessment of impact on conventional surgical activity and post-discharge community care. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  1. Cost-of-illness studies based on massive data: a prevalence-based, top-down regression approach.

    PubMed

    Stollenwerk, Björn; Welchowski, Thomas; Vogl, Matthias; Stock, Stephanie

    2016-04-01

    Despite the increasing availability of routine data, no analysis method has yet been presented for cost-of-illness (COI) studies based on massive data. We aim, first, to present such a method and, second, to assess the relevance of the associated gain in numerical efficiency. We propose a prevalence-based, top-down regression approach consisting of five steps: aggregating the data; fitting a generalized additive model (GAM); predicting costs via the fitted GAM; comparing predicted costs between prevalent and non-prevalent subjects; and quantifying the stochastic uncertainty via error propagation. To demonstrate the method, it was applied to aggregated German sickness fund data (from 1999) covering over 7.3 million insured, in the context of chronic lung disease. To assess the gain in numerical efficiency, the computational time of the innovative approach has been compared with corresponding GAMs applied to simulated individual-level data. Furthermore, the probability of model failure was modeled via logistic regression. Applying the innovative method was reasonably fast (19 min). In contrast, for patient-level data, computational time increased disproportionately with sample size. Furthermore, using patient-level data was accompanied by a substantial risk of model failure (about 80% for 6 million subjects). The gain in computational efficiency of the innovative COI method seems to be of practical relevance. Furthermore, it may yield more precise cost estimates.
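
    A highly simplified sketch of the top-down idea (fit a regression on aggregated cells, then compare predicted costs between prevalent and non-prevalent subjects) follows; ordinary least squares stands in for the GAM and the data are synthetic.

      # Top-down cost-of-illness sketch on aggregated cells; OLS stands in for the GAM.
      import numpy as np

      # Cells: [prevalent (0/1), age group, mean annual cost per insured] -- synthetic.
      cells = np.array([
          [0, 1,  900.0], [0, 2, 1200.0], [0, 3, 1800.0],
          [1, 1, 1500.0], [1, 2, 2100.0], [1, 3, 3200.0],
      ])
      X = np.column_stack([np.ones(len(cells)), cells[:, 0], cells[:, 1]])
      y = cells[:, 2]
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)

      # Coefficient on the prevalence indicator = excess cost attributable to the disease.
      print(beta[1])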

  2. Two phase sampling for wheat acreage estimation. [large area crop inventory experiment

    NASA Technical Reports Server (NTRS)

    Thomas, R. W.; Hay, C. M.

    1977-01-01

    A two phase LANDSAT-based sample allocation and wheat proportion estimation method was developed. This technique employs manual, LANDSAT full frame-based wheat or cultivated land proportion estimates from a large number of segments comprising a first sample phase to optimally allocate a smaller phase two sample of computer or manually processed segments. Application to the Kansas Southwest CRD for 1974 produced a wheat acreage estimate for that CRD within 2.42 percent of the USDA SRS-based estimate using a lower CRD inventory budget than for a simulated reference LACIE system. Factor of 2 or greater cost or precision improvements relative to the reference system were obtained.
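
    The two-phase idea can be illustrated with the textbook double-sampling regression estimator (a cheap auxiliary measurement on a large phase-one sample, an expensive measurement on a phase-two subsample); the segment proportions below are synthetic placeholders, not LACIE data.

      # Double-sampling (two-phase) regression estimator with synthetic proportions.
      import numpy as np

      rng = np.random.default_rng(1)
      x_phase1 = rng.uniform(0.1, 0.6, size=200)            # cheap full-frame estimates
      idx = rng.choice(200, size=30, replace=False)         # phase-two subsample
      x_phase2 = x_phase1[idx]
      y_phase2 = 0.9 * x_phase2 + rng.normal(0, 0.03, 30)   # expensive processed estimates

      slope = np.polyfit(x_phase2, y_phase2, 1)[0]          # regression on the subsample
      wheat_prop = y_phase2.mean() + slope * (x_phase1.mean() - x_phase2.mean())
      print(wheat_prop)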

  3. A Predictive Model to Estimate Cost Savings of a Novel Diagnostic Blood Panel for Diagnosis of Diarrhea-predominant Irritable Bowel Syndrome.

    PubMed

    Pimentel, Mark; Purdy, Chris; Magar, Raf; Rezaie, Ali

    2016-07-01

    A high incidence of irritable bowel syndrome (IBS) is associated with significant medical costs. Diarrhea-predominant IBS (IBS-D) is diagnosed on the basis of clinical presentation and diagnostic test results and procedures that exclude other conditions. This study was conducted to estimate the potential cost savings of a novel IBS diagnostic blood panel that tests for the presence of antibodies to cytolethal distending toxin B and anti-vinculin associated with IBS-D. A cost-minimization (CM) decision tree model was used to compare the costs of a novel IBS diagnostic blood panel pathway versus an exclusionary diagnostic pathway (ie, standard of care). The probability that patients proceed to treatment was modeled as a function of sensitivity, specificity, and likelihood ratios of the individual biomarker tests. One-way sensitivity analyses were performed for key variables, and a break-even analysis was performed for the pretest probability of IBS-D. Budget impact analysis of the CM model was extrapolated to a health plan with 1 million covered lives. The CM model (base-case) predicted $509 cost savings for the novel IBS diagnostic blood panel versus the exclusionary diagnostic pathway because of the avoidance of downstream testing (eg, colonoscopy, computed tomography scans). Sensitivity analysis indicated that an increase in both positive likelihood ratios modestly increased cost savings. Break-even analysis estimated that the pretest probability of disease would be 0.451 to attain cost neutrality. The budget impact analysis predicted a cost savings of $3,634,006 ($0.30 per member per month). The novel IBS diagnostic blood panel may yield significant cost savings by allowing patients to proceed to treatment earlier, thereby avoiding unnecessary testing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
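
    The structure of such a cost-minimization comparison can be sketched as two expected-cost branches driven by pretest probability, sensitivity, and specificity; the probabilities, costs, and branch logic below are deliberately simplified placeholders, not the study's model inputs.

      # Schematic two-pathway cost-minimization comparison with invented inputs.
      def expected_cost_blood_panel(p_disease, sens, spec, c_test, c_treat, c_workup):
          """Positive test -> proceed to treatment; negative test -> exclusionary work-up."""
          p_pos = p_disease * sens + (1 - p_disease) * (1 - spec)
          return c_test + p_pos * c_treat + (1 - p_pos) * c_workup

      def expected_cost_exclusionary(c_workup, c_treat):
          return c_workup + c_treat

      cost_panel = expected_cost_blood_panel(p_disease=0.45, sens=0.9, spec=0.85,
                                             c_test=200, c_treat=800, c_workup=2500)
      cost_standard = expected_cost_exclusionary(c_workup=2500, c_treat=800)
      print(cost_standard - cost_panel)   # positive value = savings with the panel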

  4. Cost-effectiveness of external cephalic version for term breech presentation.

    PubMed

    Tan, Jonathan M; Macario, Alex; Carvalho, Brendan; Druzin, Maurice L; El-Sayed, Yasser Y

    2010-01-21

    External cephalic version (ECV) is recommended by the American College of Obstetricians and Gynecologists to convert a breech fetus to vertex position and reduce the need for cesarean delivery. The goal of this study was to determine the incremental cost-effectiveness ratio, from society's perspective, of ECV compared to scheduled cesarean for term breech presentation. A computer-based decision model (TreeAge Pro 2008, Tree Age Software, Inc.) was developed for a hypothetical base case parturient presenting with a term singleton breech fetus with no contraindications for vaginal delivery. The model incorporated actual hospital costs (e.g., $8,023 for cesarean and $5,581 for vaginal delivery), utilities to quantify health-related quality of life, and probabilities based on analysis of published literature of successful ECV trial, spontaneous reversion, mode of delivery, and need for unanticipated emergency cesarean delivery. The primary endpoint was the incremental cost-effectiveness ratio in dollars per quality-adjusted year of life gained. A threshold of $50,000 per quality-adjusted life-years (QALY) was used to determine cost-effectiveness. The incremental cost-effectiveness of ECV, assuming a baseline 58% success rate, equaled $7,900/QALY. If the estimated probability of successful ECV is less than 32%, then ECV costs more to society and has poorer QALYs for the patient. However, as the probability of successful ECV was between 32% and 63%, ECV cost more than cesarean delivery but with greater associated QALY such that the cost-effectiveness ratio was less than $50,000/QALY. If the probability of successful ECV was greater than 63%, the computer modeling indicated that a trial of ECV is less costly and with better QALYs than a scheduled cesarean. The cost-effectiveness of a trial of ECV is most sensitive to its probability of success, and not to the probabilities of a cesarean after ECV, spontaneous reversion to breech, successful second ECV trial, or adverse outcome from emergency cesarean. From society's perspective, ECV trial is cost-effective when compared to a scheduled cesarean for breech presentation provided the probability of successful ECV is > 32%. Improved algorithms are needed to more precisely estimate the likelihood that a patient will have a successful ECV.
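
    The decision rule above reduces to an incremental cost-effectiveness ratio compared against a willingness-to-pay threshold; a minimal sketch with illustrative numbers (not the study's values):

      # Incremental cost-effectiveness ratio against a $50,000/QALY threshold.
      def icer(cost_new, cost_ref, qaly_new, qaly_ref):
          return (cost_new - cost_ref) / (qaly_new - qaly_ref)

      WTP_THRESHOLD = 50_000  # $ per QALY

      ratio = icer(cost_new=8_100.0, cost_ref=7_900.0,    # illustrative values only
                   qaly_new=25.31, qaly_ref=25.28)
      print(ratio, "cost-effective" if ratio < WTP_THRESHOLD else "not cost-effective")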

  5. Feasibility study of an Integrated Program for Aerospace vehicle Design (IPAD). Volume 6: IPAD system development and operation

    NASA Technical Reports Server (NTRS)

    Redhed, D. D.; Tripp, L. L.; Kawaguchi, A. S.; Miller, R. E., Jr.

    1973-01-01

    The IPAD implementation plan presented here proposes a three-phase development of the IPAD system and technical modules, and the transfer of this capability from the development environment to the aerospace vehicle design environment. The system and technical module capabilities for each phase of development are described. The system and technical module programming languages are recommended, as well as the initial host computer system hardware and operating system. The cost of developing the IPAD technology is estimated. A schedule displaying the flowtime required for each development task is given. A PERT chart gives the developmental relationships of the tasks, and an estimate of the operational cost of the IPAD system is offered.

  6. Adaptive estimation of state of charge and capacity with online identified battery model for vanadium redox flow battery

    NASA Astrophysics Data System (ADS)

    Wei, Zhongbao; Tseng, King Jet; Wai, Nyunt; Lim, Tuti Mariana; Skyllas-Kazacos, Maria

    2016-11-01

    Reliable state estimation depends largely on an accurate battery model. However, the parameters of a battery model are time-varying with operating condition variation and battery aging. Existing co-estimation methods address the model uncertainty by integrating online model identification with state estimation and have shown improved accuracy. However, cross interference may arise from the integrated framework, compromising numerical stability and accuracy. Thus, this paper proposes decoupling model identification and state estimation to eliminate the possibility of cross interference. The model parameters are adapted online with the recursive least squares (RLS) method, based on which a novel joint estimator based on the extended Kalman filter (EKF) is formulated to estimate the state of charge (SOC) and capacity concurrently. The proposed joint estimator effectively compresses the filter order, which leads to substantial improvement in computational efficiency and numerical stability. A lab-scale experiment on a vanadium redox flow battery shows that the proposed method is highly accurate, with good robustness to varying operating conditions and battery aging. The proposed method is further compared with some existing methods and shown to be superior in terms of accuracy, convergence speed, and computational cost.
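
    The online identification step mentioned above can be illustrated with a standard recursive least squares update with a forgetting factor; this is a generic sketch, not the authors' battery model or joint estimator.

      # Standard recursive least squares (RLS) with forgetting factor.
      import numpy as np

      def rls_update(theta, P, phi, y, lam=0.98):
          """theta: parameters, P: covariance, phi: regressor vector, y: measurement."""
          phi = phi.reshape(-1, 1)
          K = P @ phi / (lam + phi.T @ P @ phi)   # gain vector
          err = y - (phi.T @ theta).item()        # prediction error
          theta = theta + K.flatten() * err
          P = (P - K @ phi.T @ P) / lam
          return theta, P

      theta, P = np.zeros(2), 1000.0 * np.eye(2)
      rng = np.random.default_rng(0)
      for _ in range(200):                        # identify y = 2.0*u1 - 0.5*u2
          phi = rng.standard_normal(2)
          y = 2.0 * phi[0] - 0.5 * phi[1] + 0.01 * rng.standard_normal()
          theta, P = rls_update(theta, P, phi, y)
      print(theta)                                # approaches [2.0, -0.5]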

  7. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    PubMed

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.

  8. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression

    PubMed Central

    Ding, A. Adam; Wu, Hulin

    2015-01-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method. PMID:26401093

  9. Incidence and lifetime costs of injuries in the United States

    PubMed Central

    Corso, P; Finkelstein, E; Miller, T; Fiebelkorn, I; Zaloshnja, E

    2006-01-01

    Background Standardized methodologies for assessing economic burden of injury at the national or international level do not exist. Objective To measure national incidence, medical costs, and productivity losses of medically treated injuries using the most recent data available in the United States, as a case study for similarly developed countries undertaking economic burden analyses. Method The authors combined several data sets to estimate the incidence of fatal and non‐fatal injuries in 2000. They computed unit medical and productivity costs and multiplied these costs by corresponding incidence estimates to yield total lifetime costs of injuries occurring in 2000. Main outcome measures Incidence, medical costs, productivity losses, and total costs for injuries stratified by age group, sex, and mechanism. Results More than 50 million Americans experienced a medically treated injury in 2000, resulting in lifetime costs of $406 billion; $80 billion for medical treatment and $326 billion for lost productivity. Males had a 20% higher rate of injury than females. Injuries resulting from falls or being struck by/against an object accounted for more than 44% of injuries. The rate of medically treated injuries declined by 15% from 1985 to 2000 in the US. For those aged 0–44, the incidence rate of injuries declined by more than 20%; while persons aged 75 and older experienced a 20% increase. Conclusions These national burden estimates provide unequivocal evidence of the large health and financial burden of injuries. This study can serve as a template for other countries or be used in intercountry comparisons. PMID:16887941
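
    The aggregation described above (unit costs multiplied by stratum-specific incidence, then summed) can be sketched as follows; the strata and numbers are invented, not the study's estimates.

      # Burden aggregation: incidence x unit cost, summed over strata (invented numbers).
      strata = {
          # stratum: (incidence, unit medical cost $, unit productivity loss $)
          ("male", "fall"):     (4_000_000, 1_500, 6_000),
          ("female", "fall"):   (3_500_000, 1_600, 5_200),
          ("male", "struck"):   (3_000_000, 1_200, 5_500),
          ("female", "struck"): (2_500_000, 1_100, 4_800),
      }

      medical = sum(n * med for n, med, _ in strata.values())
      productivity = sum(n * prod for n, _, prod in strata.values())
      print(f"medical ${medical / 1e9:.1f}B, productivity ${productivity / 1e9:.1f}B, "
            f"total ${(medical + productivity) / 1e9:.1f}B")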

  10. Weighted Optimization-Based Distributed Kalman Filter for Nonlinear Target Tracking in Collaborative Sensor Networks.

    PubMed

    Chen, Jie; Li, Jiahong; Yang, Shuanghua; Deng, Fang

    2017-11-01

    The identification of the nonlinearity and coupling is crucial in the nonlinear target tracking problem in collaborative sensor networks. According to the adaptive Kalman filtering (KF) method, the nonlinearity and coupling can be regarded as the model noise covariance, and estimated by minimizing the innovation or residual errors of the states. However, the method requires a large time window of data to achieve reliable covariance measurement, making it impractical for rapidly changing nonlinear systems. To deal with the problem, a weighted optimization-based distributed KF algorithm (WODKF) is proposed in this paper. The algorithm enlarges the data size of each sensor with the measurements and state estimates received from its connected sensors instead of the time window. A new cost function, defined as the weighted sum of the bias and oscillation of the state, is used to select the "best" estimate of the model noise covariance. The bias and oscillation of the state of each sensor are estimated by polynomial fitting of a time window of state estimates and measurements of the sensor and its neighbors, weighted by the measurement noise covariance. The best estimate of the model noise covariance is computed by minimizing the weighted cost function using an exhaustive search. A sensor selection method is added to the algorithm to decrease the computation load of the filter and increase the scalability of the sensor network. Existence, suboptimality, and stability analyses of the algorithm are given. The local probability data association method is used in the proposed algorithm for the multitarget tracking case. The algorithm is demonstrated in simulations on tracking examples for a random signal, one nonlinear target, and four nonlinear targets. Results show the feasibility and superiority of WODKF against other filtering algorithms for a large class of systems.

  11. Implementing Generalized Additive Models to Estimate the Expected Value of Sample Information in a Microsimulation Model: Results of Three Case Studies.

    PubMed

    Rabideau, Dustin J; Pei, Pamela P; Walensky, Rochelle P; Zheng, Amy; Parker, Robert A

    2018-02-01

    The expected value of sample information (EVSI) can help prioritize research but its application is hampered by computational infeasibility, especially for complex models. We investigated an approach by Strong and colleagues to estimate EVSI by applying generalized additive models (GAM) to results generated from a probabilistic sensitivity analysis (PSA). For 3 potential HIV prevention and treatment strategies, we estimated life expectancy and lifetime costs using the Cost-effectiveness of Preventing AIDS Complications (CEPAC) model, a complex patient-level microsimulation model of HIV progression. We fitted a GAM (a flexible regression model that estimates the functional form as part of the model fitting process) to the incremental net monetary benefits obtained from the CEPAC PSA. For each case study, we calculated the expected value of partial perfect information (EVPPI) using both the conventional nested Monte Carlo approach and the GAM approach. EVSI was calculated using the GAM approach. For all 3 case studies, the GAM approach consistently gave similar estimates of EVPPI compared with the conventional approach. The EVSI behaved as expected: it increased and converged to EVPPI for larger sample sizes. For each case study, generating the PSA results for the GAM approach required 3 to 4 days on a shared cluster, after which EVPPI and EVSI across a range of sample sizes were evaluated in minutes. The conventional approach required approximately 5 weeks for the EVPPI calculation alone. Estimating EVSI using the GAM approach with results from a PSA dramatically reduced the time required to conduct a computationally intense project, which would otherwise have been impractical. Using the GAM approach, we can efficiently provide policy makers with EVSI estimates, even for complex patient-level microsimulation models.
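
    The regression-based idea can be sketched in a few lines: regress each strategy's net benefit from the PSA on the parameter(s) of interest, then compare the mean of the per-sample maxima of the fitted values with the maximum of the mean net benefits. In the sketch below a polynomial fit stands in for the GAM and the "PSA output" is simulated, not CEPAC output.

      # Regression-based EVPPI sketch; a polynomial fit stands in for the GAM.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000
      theta = rng.normal(0.0, 1.0, n)                     # parameter of interest
      noise = rng.normal(0.0, 2.0, (n, 2))                # all remaining uncertainty
      nb = np.column_stack([1.0 + 0.5 * theta,            # net benefit, strategy A
                            1.2 - 0.5 * theta]) + noise   # net benefit, strategy B

      # Fit E[NB_d | theta] for each strategy d with a cubic polynomial.
      fitted = np.column_stack([
          np.polyval(np.polyfit(theta, nb[:, d], 3), theta) for d in range(2)
      ])

      evppi = fitted.max(axis=1).mean() - nb.mean(axis=0).max()
      print(evppi)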

  12. Collaborative localization in wireless sensor networks via pattern recognition in radio irregularity using omnidirectional antennas.

    PubMed

    Jiang, Joe-Air; Chuang, Cheng-Long; Lin, Tzu-Shiang; Chen, Chia-Pang; Hung, Chih-Hung; Wang, Jiing-Yi; Liu, Chang-Wang; Lai, Tzu-Yun

    2010-01-01

    In recent years, various received signal strength (RSS)-based localization estimation approaches for wireless sensor networks (WSNs) have been proposed. RSS-based localization is regarded as a low-cost solution for many location-aware applications in WSNs. In previous studies, the radiation patterns of all sensor nodes are assumed to be spherical, which is an oversimplification of the radio propagation model in practical applications. In this study, we present an RSS-based cooperative localization method that estimates unknown coordinates of sensor nodes in a network. Arrangement of two external low-cost omnidirectional dipole antennas is developed by using the distance-power gradient model. A modified robust regression is also proposed to determine the relative azimuth and distance between a sensor node and a fixed reference node. In addition, a cooperative localization scheme that incorporates estimations from multiple fixed reference nodes is presented to improve the accuracy of the localization. The proposed method is tested via computer-based analysis and field test. Experimental results demonstrate that the proposed low-cost method is a useful solution for localizing sensor nodes in unknown or changing environments.
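
    The distance-power gradient model mentioned above relates received power to range through a path-loss exponent; inverting it gives a per-anchor distance estimate, as in the sketch below (the calibration constants are illustrative, not values from the paper).

      # Log-distance (distance-power gradient) ranging sketch with illustrative constants.
      def rss_to_distance(rss_dbm, rss0_dbm=-40.0, d0=1.0, n=2.5):
          """Invert RSS = RSS0 - 10*n*log10(d/d0) to estimate the distance d in meters."""
          return d0 * 10 ** ((rss0_dbm - rss_dbm) / (10.0 * n))

      for rss in (-40.0, -55.0, -70.0):
          print(rss, "dBm ->", round(rss_to_distance(rss), 2), "m")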

  13. Probabilistic distance-based quantizer design for distributed estimation

    NASA Astrophysics Data System (ADS)

    Kim, Yoon Hak

    2016-12-01

    We consider an iterative design of independently operating local quantizers at nodes that should cooperate without interaction to achieve application objectives for distributed estimation systems. As a new cost function, we suggest a probabilistic distance between the posterior distribution and its quantized counterpart, expressed as the Kullback-Leibler (KL) divergence. We first show that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the logarithm of the quantized posterior distribution on average, which can be further reduced computationally in our iterative design. We propose an iterative design algorithm that seeks to maximize the simplified version of the quantized posterior distribution, and show that our algorithm converges to a global optimum, due to the convexity of the cost function, and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. Finally, we demonstrate through extensive experiments a clear advantage in estimation performance compared with typical designs and with previously published design techniques.

  14. Nearshore Measurements From a Small UAV.

    NASA Astrophysics Data System (ADS)

    Holman, R. A.; Brodie, K. L.; Spore, N.

    2016-02-01

    Traditional measurements of nearshore hydrodynamics and evolving bathymetry are expensive and dangerous and must be frequently repeated to track the rapid changes of typical ocean beaches. However, extensive research into remote sensing methods using cameras or radars mounted on fixed towers has resulted in increasingly mature algorithms for estimating bathymetry, currents and wave characteristics. This naturally raises questions about how easily and effectively these algorithms can be applied to optical data from low-cost, easily-available UAV platforms. This paper will address the characteristics and quality of data taken from a small, low-cost UAV, the DJI Phantom. In particular, we will study the stability of imagery from a vehicle `parked' at 300 feet altitude, methods to stabilize remaining wander, and the quality of nearshore bathymetry estimates from the resulting image time series, computed using the cBathy algorithm. Estimates will be compared to ground truth surveys collected at the Field Research Facility at Duck, NC.

  15. [Contrast-enhanced ultrasound for the characterization of incidental liver lesions - an economical evaluation in comparison with multi-phase computed tomography].

    PubMed

    Giesel, F L; Delorme, S; Sibbel, R; Kauczor, H-U; Krix, M

    2009-06-01

    The aim of the study was to conduct a cost-minimization analysis of contrast-enhanced ultrasound (CEUS) compared to multi-phase computed tomography (M-CT) as the diagnostic standard for diagnosing incidental liver lesions. Different scenarios of a cost-covering realization of CEUS in the outpatient sector of the general health insurance system of Germany were compared to the current cost situation. The absolute savings potential was estimated using different approaches for the calculation of the incidence of liver lesions which require further characterization. CEUS was the more cost-effective method in all scenarios in which CEUS examinations were performed at specialized centers (122.18-186.53 euro) compared to M-CT (223.19 euro). With about 40,000 relevant liver lesions per year, systematic implementation of CEUS would result in cost savings of 4 million euro per year. However, the scenario of a cost-covering CEUS examination for all physicians who perform liver ultrasound would be the most cost-intensive approach (e.g., 407.87 euro at an average utilization of the ultrasound machine of 25% and a CEUS ratio of 5%). A cost-covering realization of the CEUS method can result in cost savings in the German healthcare system. A centralized approach as proposed by the DEGUM should be targeted.

  16. Decision support for hospital bed management using adaptable individual length of stay estimations and shared resources

    PubMed Central

    2013-01-01

    Background Elective patient admission and assignment planning is an important task of the strategic and operational management of a hospital and early on became a central topic of clinical operations research. The management of hospital beds is an important subtask. Various approaches have been proposed, involving the computation of efficient assignments with regard to the patients’ condition, the necessity of the treatment, and the patients’ preferences. However, these approaches are mostly based on static, unadaptable estimates of the length of stay and, thus, do not take into account the uncertainty of the patient’s recovery. Furthermore, the effect of aggregated bed capacities has not been investigated in this context. Computer supported bed management, combining an adaptable length of stay estimation with the treatment of shared resources (aggregated bed capacities), has not yet been sufficiently investigated. The aim of our work is: 1) to define a cost function for patient admission taking into account adaptable length of stay estimations and aggregated resources, 2) to define a mathematical program formally modeling the assignment problem and an architecture for decision support, 3) to investigate four algorithmic methodologies addressing the assignment problem and one base-line approach, and 4) to evaluate these methodologies w.r.t. cost outcome, performance, and dismissal ratio. Methods The expected free ward capacity is calculated based on individual length of stay estimates, introducing Bernoulli distributed random variables for the ward occupation states and approximating the probability densities. The assignment problem is represented as a binary integer program. Four strategies for solving the problem are applied and compared: an exact approach, using the mixed integer programming solver SCIP; and three heuristic strategies, namely the longest expected processing time, the shortest expected processing time, and random choice. A baseline approach serves to compare these optimization strategies with a simple model of the status quo. All the approaches are evaluated by a realistic discrete event simulation: the outcomes are the ratio of successful assignments and dismissals, the computation time, and the model’s cost factors. Results A discrete event simulation of 226,000 cases shows a reduction of the dismissal rate compared to the baseline by more than 30 percentage points (from a mean dismissal ratio of 74.7% to 40.06% comparing the status quo with the optimization strategies). Each of the optimization strategies leads to an improved assignment. The exact approach has only a marginal advantage over the heuristic strategies in the model’s cost factors (≤3%). Moreover, this marginal advantage was only achieved at the price of a computational time fifty times that of the heuristic models (an average computing time of 141 s using the exact method, vs. 2.6 s for the heuristic strategy). Conclusions In terms of its performance and the quality of its solution, the heuristic strategy RAND is the preferred method for bed assignment in the case of shared resources. Future research is needed to investigate whether an equally marked improvement can be achieved in a large-scale clinical application study, ideally one comprising all the departments involved in admission and assignment planning. PMID:23289448
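
    One of the heuristic strategies compared above, assigning patients in order of longest expected processing time subject to expected free capacity, can be sketched as follows; the patients, wards, and capacities are invented, and this is not the study's binary integer program or simulation framework.

      # Greedy longest-expected-processing-time (LEPT) assignment with invented data.
      patients = [                    # (patient id, expected length of stay in days)
          ("p1", 7), ("p2", 3), ("p3", 10), ("p4", 2), ("p5", 5),
      ]
      expected_free_beds = {"ward_A": 2, "ward_B": 1}

      assignments, dismissed = {}, []
      for pid, los in sorted(patients, key=lambda p: p[1], reverse=True):  # LEPT order
          ward = max(expected_free_beds, key=expected_free_beds.get)       # most free capacity
          if expected_free_beds[ward] > 0:
              assignments[pid] = ward
              expected_free_beds[ward] -= 1
          else:
              dismissed.append(pid)
      print(assignments, dismissed)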

  17. Decision support for hospital bed management using adaptable individual length of stay estimations and shared resources.

    PubMed

    Schmidt, Robert; Geisler, Sandra; Spreckelsen, Cord

    2013-01-07

    Elective patient admission and assignment planning is an important task of the strategic and operational management of a hospital and early on became a central topic of clinical operations research. The management of hospital beds is an important subtask. Various approaches have been proposed, involving the computation of efficient assignments with regard to the patients' condition, the necessity of the treatment, and the patients' preferences. However, these approaches are mostly based on static, unadaptable estimates of the length of stay and, thus, do not take into account the uncertainty of the patient's recovery. Furthermore, the effect of aggregated bed capacities has not been investigated in this context. Computer supported bed management, combining an adaptable length of stay estimation with the treatment of shared resources (aggregated bed capacities), has not yet been sufficiently investigated. The aim of our work is: 1) to define a cost function for patient admission taking into account adaptable length of stay estimations and aggregated resources, 2) to define a mathematical program formally modeling the assignment problem and an architecture for decision support, 3) to investigate four algorithmic methodologies addressing the assignment problem and one base-line approach, and 4) to evaluate these methodologies w.r.t. cost outcome, performance, and dismissal ratio. The expected free ward capacity is calculated based on individual length of stay estimates, introducing Bernoulli distributed random variables for the ward occupation states and approximating the probability densities. The assignment problem is represented as a binary integer program. Four strategies for solving the problem are applied and compared: an exact approach, using the mixed integer programming solver SCIP; and three heuristic strategies, namely the longest expected processing time, the shortest expected processing time, and random choice. A baseline approach serves to compare these optimization strategies with a simple model of the status quo. All the approaches are evaluated by a realistic discrete event simulation: the outcomes are the ratio of successful assignments and dismissals, the computation time, and the model's cost factors. A discrete event simulation of 226,000 cases shows a reduction of the dismissal rate compared to the baseline by more than 30 percentage points (from a mean dismissal ratio of 74.7% to 40.06% comparing the status quo with the optimization strategies). Each of the optimization strategies leads to an improved assignment. The exact approach has only a marginal advantage over the heuristic strategies in the model's cost factors (≤3%). Moreover, this marginal advantage was only achieved at the price of a computational time fifty times that of the heuristic models (an average computing time of 141 s using the exact method, vs. 2.6 s for the heuristic strategy). In terms of its performance and the quality of its solution, the heuristic strategy RAND is the preferred method for bed assignment in the case of shared resources. Future research is needed to investigate whether an equally marked improvement can be achieved in a large-scale clinical application study, ideally one comprising all the departments involved in admission and assignment planning.

  18. Cost of treatment for breast cancer in central Vietnam

    PubMed Central

    Hoang Lan, Nguyen; Laohasiriwong, Wongsa; Stewart, John Frederick; Tung, Nguyen Dinh; Coyte, Peter C.

    2013-01-01

    Background In recent years, cases of breast cancer have been on the rise in Vietnam. To date, there has been no study on the financial burden of the disease. This study estimates the direct medical cost of a 5-year treatment course for women with primary breast cancer in central Vietnam. Methods Retrospective patient-level data from medical records at the Hue Central Hospital between 2001 and 2006 were analyzed. Cost analysis was conducted from the health care payers’ perspective. Various direct medical cost categories were computed for a 5-year treatment course for patients with breast cancer. Costs, in US dollars, discounted at a 3% rate, were converted to 2010 values after adjusting for inflation. For each cost category, the mean, standard deviation, median, and cost range were estimated. Median regression was used to investigate the relationship between costs and the stage, age at diagnosis, and the health insurance coverage of the patients. Results The total direct medical cost for a 5-year treatment course for breast cancer in central Vietnam was estimated at $975 per patient (range: $11.7–$3,955). The initial treatment cost, particularly the cost of chemotherapy, was found to account for the greatest proportion of total costs (64.9%). Among the patient characteristics studied, stage at diagnosis was significantly associated with total treatment costs. Patients at later stages of breast cancer did not differ significantly in their total costs from those at earlier stages; however, their survival time was much shorter. The absence of health insurance was the main factor limiting service uptake. Conclusion From the health care payers’ perspective, the Government subsidization of public hospital charges lowered the direct medical costs of a 5-year treatment course for primary breast cancer in central Vietnam. However, the long treatment course was significantly influenced by out-of-pocket payments for patients without health insurance. PMID:23394855
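
    The costing above inflates costs to 2010 values and discounts at 3%; in sketch form, with invented yearly amounts and an invented inflation factor:

      # Inflation adjustment plus 3% discounting of a 5-year cost stream (invented numbers).
      yearly_costs_usd = [500.0, 150.0, 120.0, 100.0, 100.0]   # treatment years 0..4
      inflation_to_2010 = 1.15                                 # invented CPI adjustment
      discount_rate = 0.03

      present_value = sum(
          (c * inflation_to_2010) / (1 + discount_rate) ** year
          for year, c in enumerate(yearly_costs_usd)
      )
      print(round(present_value, 2))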

  19. Fast Quaternion Attitude Estimation from Two Vector Measurements

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    Many spacecraft attitude determination methods use exactly two vector measurements. The two vectors are typically the unit vector to the Sun and the Earth's magnetic field vector for coarse "sun-mag" attitude determination or unit vectors to two stars tracked by two star trackers for fine attitude determination. Existing closed-form attitude estimates based on Wahba's optimality criterion for two arbitrarily weighted observations are somewhat slow to evaluate. This paper presents two new fast quaternion attitude estimation algorithms using two vector observations, one optimal and one suboptimal. The suboptimal method gives the same estimate as the TRIAD algorithm, at reduced computational cost. Simulations show that the TRIAD estimate is almost as accurate as the optimal estimate in representative test scenarios.
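
    The TRIAD solution mentioned as the suboptimal benchmark can be written compactly; the sketch below is the classical TRIAD algorithm rather than the paper's new optimal quaternion estimator, and the variable names are mine.

    ```python
    import numpy as np

    def triad(b1, b2, r1, r2):
        """TRIAD attitude solution: rotation matrix A with b ≈ A @ r.
        b1, b2 are unit observation vectors in the body frame and r1, r2 the
        same directions in the reference frame; (b1, r1) is the more trusted
        pair and is matched exactly."""
        def triad_frame(v1, v2):
            t1 = v1 / np.linalg.norm(v1)
            t2 = np.cross(v1, v2)
            t2 = t2 / np.linalg.norm(t2)
            t3 = np.cross(t1, t2)
            return np.column_stack((t1, t2, t3))
        return triad_frame(b1, b2) @ triad_frame(r1, r2).T
    ```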

  20. Randomized interpolative decomposition of separated representations

    NASA Astrophysics Data System (ADS)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
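
    At the matrix level, the interpolative decomposition that CTD-ID reduces to can be sketched as follows. This is a hedged outline assuming a Gaussian random sketch and pivoted QR for column selection; the tensor-specific bookkeeping of CTD-ID is omitted and all function names are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import qr, lstsq

    def randomized_column_id(A, k, oversample=10, seed=0):
        """Pick k 'skeleton' columns of A (indices idx) and coefficients T
        such that A ≈ A[:, idx] @ T; selection is done on a small random
        sketch of A rather than on A itself."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        G = rng.standard_normal((k + oversample, m))
        Y = G @ A                               # (k + oversample) x n sketch
        _, _, piv = qr(Y, mode='economic', pivoting=True)
        idx = piv[:k]
        T, *_ = lstsq(A[:, idx], A)             # coefficients for all columns
        return idx, T
    ```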

  1. 76 FR 23854 - Reclassification of Motorcycles (Two and Three Wheeled Vehicles) in the Guide to Reporting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-28

    ... and the unintended consequences of misclassification. Harley Davidson Motor Company (HDMC) stated that... concerns about the administrative, logistical and financial burdens of providing information based on the... estimated that the cost of updating their computers to process the information included in the new guidance...

  2. Local Special Education Planning Model: User's Manual.

    ERIC Educational Resources Information Center

    Hartman, Peggy L.; Hartman, William T.

    To help school districts estimate the present and future needs and costs of their special education programs, this manual presents the Local Special Education Planning Model, an interactive computer program (with worksheets) that provides a framework for using a district's own data to analyze its special education program. Part 1 of the manual…

  3. The Mix of Military and Civilian Faculty at the United States Air Force Academy: Finding a Sustainable Balance for Enduring Success

    DTIC Science & Technology

    2013-01-01

    academic departments are as follows: The Basic Sciences Division includes the Departments of Biology, Chemistry, Computer Science, Mathematical Sciences...percent). This factor is based on actuarial estimates for the costs of the government-paid portion of health insurance under the Federal Employees

  4. Financial feasibility of marker-aided selection in Douglas-fir.

    Treesearch

    G.R. Johnson; N.C. Wheeler; S.H. Strauss

    2000-01-01

    The land area required for a marker-aided selection (MAS) program to break-even (i.e., have equal costs and benefits) was estimated using computer simulation for coastal Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) in the Pacific Northwestern United States. We compared the selection efficiency obtained when using an index that included the...

  5. An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.

    1993-01-01

    We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to qualify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
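
    The Richardson-based error estimate used as a refinement indicator amounts to comparing solutions on a mesh and on its refinement; a minimal sketch, assuming a smooth solution and a scheme of order p (the names and the choice p = 2 are illustrative):

    ```python
    # Richardson-type discretization error estimate of the kind used as a
    # refinement indicator: if u_exact = u_h + C*h**p, then comparing the
    # coarse and refined solutions at the same point gives
    #     error_fine ≈ (u_fine - u_coarse) / (2**p - 1).
    def richardson_error_estimate(u_coarse, u_fine, p=2):
        return (u_fine - u_coarse) / (2 ** p - 1)
    ```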

  6. Multi-Fidelity Uncertainty Propagation for Cardiovascular Modeling

    NASA Astrophysics Data System (ADS)

    Fleeter, Casey; Geraci, Gianluca; Schiavazzi, Daniele; Kahn, Andrew; Marsden, Alison

    2017-11-01

    Hemodynamic models are successfully employed in the diagnosis and treatment of cardiovascular disease with increasing frequency. However, their widespread adoption is hindered by our inability to account for uncertainty stemming from multiple sources, including boundary conditions, vessel material properties, and model geometry. In this study, we propose a stochastic framework which leverages three cardiovascular model fidelities: 3D, 1D and 0D models. 3D models are generated from patient-specific medical imaging (CT and MRI) of aortic and coronary anatomies using the SimVascular open-source platform, with fluid structure interaction simulations and Windkessel boundary conditions. 1D models consist of a simplified geometry automatically extracted from the 3D model, while 0D models are obtained from equivalent circuit representations of blood flow in deformable vessels. Multi-level and multi-fidelity estimators from Sandia's open-source DAKOTA toolkit are leveraged to reduce the variance in our estimated output quantities of interest while maintaining a reasonable computational cost. The performance of these estimators in terms of computational cost reductions is investigated for a variety of output quantities of interest, including global and local hemodynamic indicators. Sandia National Labs is a multimission laboratory managed and operated by NTESS, LLC, for the U.S. DOE under contract DE-NA0003525. Funding for this project provided by NIH-NIBIB R01 EB018302.
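
    The variance-reduction idea behind such estimators can be sketched as a two-fidelity control variate. This is an illustrative outline only: DAKOTA's multilevel/multifidelity estimators additionally handle sample allocation and more than two models, and all names below are assumptions.

    ```python
    import numpy as np

    def two_fidelity_estimate(hf, lf_paired, lf_extra):
        """Control-variate style two-fidelity mean estimate: hf and lf_paired
        are high- and low-fidelity outputs on the same N samples; lf_extra are
        additional, cheaper low-fidelity runs on extra samples."""
        hf, lf_paired, lf_extra = map(np.asarray, (hf, lf_paired, lf_extra))
        alpha = np.cov(hf, lf_paired)[0, 1] / np.var(lf_paired, ddof=1)
        lf_all_mean = np.concatenate([lf_paired, lf_extra]).mean()
        return hf.mean() + alpha * (lf_all_mean - lf_paired.mean())
    ```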

  7. A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.

    PubMed

    Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan

    2017-06-22

    Maximum likelihood estimation (MLE) has been researched for some acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated based on the sampling data, which results in a large computation burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals which processes the coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency is used to derive an MLE discriminator function. The optimum of the cost function is searched for iteratively with an efficient Levenberg-Marquardt (LM) method. Its performance, including the Cramér-Rao bound (CRB), dynamic characteristics, and computation burden, is analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations both in pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and a conventional method is designed to achieve the optimal performance both in weak and strong signal circumstances.
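
    Under Gaussian noise, the MLE over the coherent integration outputs reduces to a nonlinear least-squares fit that an LM solver can handle. The sketch below is a generic illustration of that reduction; the signal model, names, and starting values are assumptions and not the paper's discriminator derivation.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Assumed measurement model for the coherent integration outputs:
    #     z_k = A * exp(j*(phi + 2*pi*f*k*T)) + noise
    def fit_carrier(z, T, x0=(1.0, 0.0, 0.0)):
        k = np.arange(len(z))
        def residuals(x):
            A, phi, f = x
            model = A * np.exp(1j * (phi + 2 * np.pi * f * k * T))
            r = z - model
            return np.concatenate([r.real, r.imag])  # real residuals for LM
        return least_squares(residuals, x0, method='lm').x  # A, phi, f
    ```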

  8. Multistage variable probability forest volume inventory. [the Defiance Unit of the Navajo Nation

    NASA Technical Reports Server (NTRS)

    Anderson, J. E. (Principal Investigator)

    1979-01-01

    An inventory scheme based on the use of computer processed LANDSAT MSS data was developed. Output from the inventory scheme provides an estimate of the standing net saw timber volume of a major timber species on a selected forested area of the Navajo Nation. Such estimates are based on the values of parameters currently used for scaled sawlog conversion to mill output. The multistage variable probability sampling appears capable of producing estimates which compare favorably with those produced using conventional techniques. In addition, the reduction in time, manpower, and overall costs lends it to numerous applications.

  9. An investigation into the cost, coverage and activities of Helicopter Emergency Medical Services in the state of New South Wales, Australia.

    PubMed

    Taylor, Colman B; Stevenson, Mark; Jan, Stephen; Liu, Bette; Tall, Gary; Middleton, Paul M; Fitzharris, Michael; Myburgh, John

    2011-10-01

    Helicopter Emergency Medical Services (HEMS) have been incorporated into modern health systems for their speed and coverage. In the state of New South Wales (NSW), nine HEMS operate from various locations around the state, and currently there is no clear picture of their resource implications. The aim of this study was to assess the cost of HEMS in NSW and investigate the factors linked with the variation in the costs, coverage and activities of HEMS. We undertook a survey of HEMS costs, structures and operations in NSW for the 2008/2009 financial year. Costs were estimated from annual reports and contractual agreements. Data related to the structure and operation of services were obtained by face-to-face interviews, from operational data extracted from individual HEMS, from the NSW Ambulance Computer Aided Despatch system and from the Aeromedical Operations Centre database. In order to estimate population coverage for each HEMS, we used GIS mapping techniques with Australian Bureau of Statistics census information. Across HEMS, cost per mission estimates ranged between $9300 and $19,000 and cost per engine hour estimates ranged between $5343 and $15,743. Regarding structural aspects, six HEMS were run by charities or not-for-profit companies (with partial government funding) and three HEMS were run (and fully funded) by the state government through NSW Ambulance. Two HEMS operated as 'hub' services in conjunction with three associated 'satellite' services; in contrast, four services operated independently. Variation also existed between the HEMS in the type of helicopter used, the clinical staffing and the hours of operation. The majority of services undertook both primary scene responses and secondary inter-facility transfers, although the proportion of each type of transport contributing to total operations varied across the services. This investigation highlighted the cost of HEMS operations in NSW, which in total equated to over $50 million per annum. Across services, we found large variation in the cost estimates, which was underscored by variation in the structure and operations of HEMS. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Estimation of surface temperature in remote pollution measurement experiments

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.

  11. An Application of UAV Attitude Estimation Using a Low-Cost Inertial Navigation System

    NASA Technical Reports Server (NTRS)

    Eure, Kenneth W.; Quach, Cuong Chi; Vazquez, Sixto L.; Hogge, Edward F.; Hill, Boyd L.

    2013-01-01

    Unmanned Aerial Vehicles (UAV) are playing an increasing role in aviation. Various methods exist for the computation of UAV attitude based on low cost microelectromechanical systems (MEMS) and Global Positioning System (GPS) receivers. There has been a recent increase in UAV autonomy as sensors are becoming more compact and onboard processing power has increased significantly. Correct UAV attitude estimation will play a critical role in navigation and separation assurance as UAVs share airspace with civil air traffic. This paper describes attitude estimation derived by post-processing data from a small low cost Inertial Navigation System (INS) recorded during the flight of a subscale commercial off the shelf (COTS) UAV. Two discrete time attitude estimation schemes are presented here in detail. The first is an adaptation of the Kalman Filter to accommodate nonlinear systems, the Extended Kalman Filter (EKF). The EKF returns quaternion estimates of the UAV attitude based on MEMS gyro, magnetometer, accelerometer, and pitot tube inputs. The second scheme is the complementary filter which is a simpler algorithm that splits the sensor frequency spectrum based on noise characteristics. The necessity to correct both filters for gravity measurement errors during turning maneuvers is demonstrated. It is shown that the proposed algorithms may be used to estimate UAV attitude. The effects of vibration on sensor measurements are discussed. Heuristic tuning comments pertaining to sensor filtering and gain selection to achieve acceptable performance during flight are given. Comparisons of attitude estimation performance are made between the EKF and the complementary filter.
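
    The complementary filter mentioned above splits the frequency spectrum between gyro and accelerometer information; a one-axis sketch follows. The gain value and names are illustrative assumptions, not the paper's tuned parameters.

    ```python
    # Minimal complementary-filter sketch for one axis (pitch): blend the
    # integrated gyro rate (trusted at high frequency) with the
    # accelerometer-derived pitch (trusted at low frequency).
    def complementary_filter(pitch_prev, gyro_rate, accel_pitch, dt, alpha=0.98):
        return alpha * (pitch_prev + gyro_rate * dt) + (1.0 - alpha) * accel_pitch
    ```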

  12. Real-Time Algebraic Derivative Estimations Using a Novel Low-Cost Architecture Based on Reconfigurable Logic

    PubMed Central

    Morales, Rafael; Rincón, Fernando; Gazzano, Julio Dondo; López, Juan Carlos

    2014-01-01

    Time derivative estimation of signals plays a very important role in several fields, such as signal processing and control engineering, just to name a few of them. For that purpose, a non-asymptotic algebraic procedure for the approximate estimation of the system states is used in this work. The method is based on results from differential algebra and furnishes some general formulae for the time derivatives of a measurable signal in which two algebraic derivative estimators run simultaneously, but in an overlapping fashion. The algebraic derivative algorithm presented in this paper is computed online and in real-time, offering high robustness properties with regard to corrupting noises, versatility and ease of implementation. Besides, in this work, we introduce a novel architecture to accelerate this algebraic derivative estimator using reconfigurable logic. The core of the algorithm is implemented in an FPGA, improving the speed of the system and achieving real-time performance. Finally, this work proposes a low-cost platform for the integration of hardware in the loop in MATLAB. PMID:24859033

  13. Real-time moving horizon estimation for a vibrating active cantilever

    NASA Astrophysics Data System (ADS)

    Abdollahpouri, Mohammad; Takács, Gergely; Rohaľ-Ilkiv, Boris

    2017-03-01

    Vibrating structures may be subject to changes throughout their operating lifetime due to a range of environmental and technical factors. These variations can be considered as parameter changes in the dynamic model of the structure, while their online estimates can be utilized in adaptive control strategies, or in structural health monitoring. This paper implements the moving horizon estimation (MHE) algorithm on a low-cost embedded computing device that is jointly observing the dynamic states and parameter variations of an active cantilever beam in real time. The practical behavior of this algorithm has been investigated in various experimental scenarios. It has been found that, for the given field of application, moving horizon estimation converges faster than the extended Kalman filter; moreover, it reliably handles atypical measurement noise, sensor errors and other extreme changes. Despite its improved performance, the experiments demonstrate that the disadvantage of solving the nonlinear optimization problem in MHE is that it naturally leads to an increase in computational effort.

  14. Estimation of Spatiotemporal Sensitivity Using Band-limited Signals with No Additional Acquisitions for k-t Parallel Imaging.

    PubMed

    Takeshima, Hidenori; Saitoh, Kanako; Nitta, Shuhei; Shiodera, Taichiro; Takeguchi, Tomoyuki; Bannae, Shuhei; Kuhara, Shigehide

    2018-03-13

    Dynamic MR techniques, such as cardiac cine imaging, benefit from shorter acquisition times. The goal of the present study was to develop a method that achieves short acquisition times, while maintaining a cost-effective reconstruction, for dynamic MRI. k-t sensitivity encoding (SENSE) was identified as the base method to be enhanced meeting these two requirements. The proposed method achieves a reduction in acquisition time by estimating the spatiotemporal (x-f) sensitivity without requiring the acquisition of the alias-free signals, typical of the k-t SENSE technique. The cost-effective reconstruction, in turn, is achieved by a computationally efficient estimation of the x-f sensitivity from the band-limited signals of the aliased inputs. Such band-limited signals are suitable for sensitivity estimation because the strongly aliased signals have been removed. For the same nominal reduction factor of 4, the proposed method achieved a net reduction factor of 4, significantly higher than the 2.29 achieved by k-t SENSE. The processing time is reduced from 4.1 s for k-t SENSE to 1.7 s for the proposed method. The image quality obtained using the proposed method proved to be superior (mean squared error [MSE] ± standard deviation [SD] = 6.85 ± 2.73) compared to the k-t SENSE case (MSE ± SD = 12.73 ± 3.60) for the vertical long-axis (VLA) view, as well as other views. In the present study, k-t SENSE was identified as a suitable base method to be improved achieving both short acquisition times and a cost-effective reconstruction. To enhance these characteristics of the base method, a novel implementation is proposed, estimating the x-f sensitivity without the need for an explicit scan of the reference signals. Experimental results showed that the acquisition and computation times and the image quality of the proposed method were improved compared to the standard k-t SENSE method.

  15. Parallel mutual information estimation for inferring gene regulatory networks on GPUs

    PubMed Central

    2011-01-01

    Background: Mutual information is a measure of similarity between two variables. It has been widely used in various application domains including computational biology, machine learning, statistics, image processing, and financial computing. Previously used simple histogram-based mutual information estimators lack the precision of kernel-based methods. The recently introduced B-spline function based mutual information estimation method is competitive with the kernel-based methods in terms of quality but has a lower computational complexity. Results: We present a new approach to accelerate the B-spline function based mutual information estimation algorithm with commodity graphics hardware. To derive an efficient mapping onto this type of architecture, we have used the Compute Unified Device Architecture (CUDA) programming model to design and implement a new parallel algorithm. Our implementation, called CUDA-MI, can achieve speedups of up to 82 using double precision on a single GPU compared to a multi-threaded implementation on a quad-core CPU for large microarray datasets. We have used the results obtained by CUDA-MI to infer gene regulatory networks (GRNs) from microarray data. The comparisons to existing methods including ARACNE and TINGe show that CUDA-MI produces GRNs of higher quality in less time. Conclusions: CUDA-MI is publicly available open-source software, written in CUDA and C++ programming languages. It obtains significant speedup over the sequential multi-threaded implementation by fully exploiting the compute capability of commonly used CUDA-enabled low-cost GPUs. PMID:21672264
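
    For contrast with the B-spline and kernel estimators discussed above, the simple histogram estimator they improve upon can be written in a few lines. This is an illustrative CPU-only sketch, unrelated to the CUDA-MI implementation; the bin count and names are assumptions.

    ```python
    import numpy as np

    def histogram_mutual_information(x, y, bins=10):
        """Histogram-based MI estimate (in nats) between samples x and y."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                      # skip empty bins (0 * log 0 -> 0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
    ```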

  16. Economic Outcomes with Anatomic versus Functional Diagnostic Testing for Coronary Artery Disease

    PubMed Central

    Mark, Daniel B.; Federspiel, Jerome J.; Cowper, Patricia A.; Anstrom, Kevin J.; Hoffmann, Udo; Patel, Manesh R.; Davidson-Ray, Linda; Daniels, Melanie R.; Cooper, Lawton S.; Knight, J. David; Lee, Kerry L.; Douglas, Pamela S.

    2016-01-01

    Background: The PROMISE trial found that initial use of ≥64-slice multidetector computed tomographic angiography (CTA) versus functional diagnostic testing strategies did not improve clinical outcomes in stable symptomatic patients with suspected coronary artery disease (CAD) requiring noninvasive testing. Objective: Economic analysis of PROMISE, a major secondary aim. Design: Prospective economic study from the US perspective. Comparisons were made by intention-to-treat. Confidence intervals were calculated using bootstrap methods. Setting: 190 U.S. centers. Patients: 9649 U.S. patients enrolled in PROMISE. Enrollment began July 2010 and completed September 2013. Median follow-up was 25 months. Measurements: Technical costs of the initial (outpatient) testing strategy were estimated from Premier Research Database data. Hospital-based costs were estimated using hospital bills and Medicare cost-to-charge ratios. Physician fees were taken from the Medicare Fee Schedule. Costs were expressed in 2014 US dollars discounted at 3% and estimated out to 3 years using inverse probability weighting methods. Results: The mean initial testing costs were: $174 for exercise ECG; $404 for CTA; $501 to $514 for (exercise, pharmacologic) stress echo; $946 to $1132 for (exercise, pharmacologic) stress nuclear. Mean costs at 90 days for the CTA strategy were $2494 versus $2240 for the functional strategy (mean difference $254, 95% CI −$634 to $906). The difference was associated with more revascularizations and catheterizations (4.25 per 100 patients) with CTA use. After 90 days, the mean cost difference between the arms out to 3 years remained small ($373). Limitations: Cost weights for test strategies obtained from sources outside PROMISE. Conclusions: CTA and functional diagnostic testing strategies in patients with suspected CAD have similar costs through three years of follow-up. PMID:27214597

  17. Simplified methods for real-time prediction of storm surge uncertainty: The city of Venice case study

    NASA Astrophysics Data System (ADS)

    Mel, Riccardo; Viero, Daniele Pietro; Carniello, Luca; Defina, Andrea; D'Alpaos, Luigi

    2014-09-01

    Providing reliable and accurate storm surge forecasts is important for a wide range of problems related to coastal environments. In order to adequately support decision-making processes, it has also become increasingly important to be able to estimate the uncertainty associated with the storm surge forecast. The procedure commonly adopted to do this uses the results of a hydrodynamic model forced by a set of different meteorological forecasts; however, this approach requires a considerable, if not prohibitive, computational cost for real-time application. In this paper we present two simplified methods for estimating the uncertainty affecting storm surge prediction with moderate computational effort. In the first approach we use a computationally fast, statistical tidal model instead of a hydrodynamic numerical model to estimate storm surge uncertainty. The second approach is based on the observation that the uncertainty in the sea level forecast mainly stems from the uncertainty affecting the meteorological fields; this has led to the idea of estimating forecast uncertainty via a linear combination of suitable meteorological variances, directly extracted from the meteorological fields. The proposed methods were applied to estimate the uncertainty in the storm surge forecast in the Venice Lagoon. The results clearly show that the uncertainty estimated through a linear combination of suitable meteorological variances nicely matches the one obtained using the deterministic approach and overcomes some intrinsic limitations in the use of a statistical tidal model.

  18. VEHIOT: Design and Evaluation of an IoT Architecture Based on Low-Cost Devices to Be Embedded in Production Vehicles.

    PubMed

    Redondo, Jonatan Pajares; González, Lisardo Prieto; Guzman, Javier García; Boada, Beatriz L; Díaz, Vicente

    2018-02-06

    Nowadays, vehicles are incorporating control systems in order to improve their stability and handling. These control systems need to know the vehicle dynamics through variables (lateral acceleration, roll rate, roll angle, sideslip angle, etc.) that are obtained or estimated from sensors. To this end, it is necessary to mount on vehicles not only low-cost sensors, but also low-cost embedded systems that allow data to be acquired from the sensors and the developed estimation and control algorithms to be executed with sufficiently fast computation. All these devices have to be integrated in an adequate architecture with enough performance in terms of accuracy, reliability and processing time. In this article, an architecture to carry out the estimation and control of vehicle dynamics has been developed. This architecture was designed considering the basic principles of IoT and integrates low-cost sensors and embedded hardware for orchestrating the experiments. A comparison of two different low-cost systems in terms of accuracy, acquisition time and reliability has been carried out. Both devices have been compared with the VBOX device from Racelogic, which has been used as the ground truth. The comparison is based on tests carried out in a real vehicle. The lateral acceleration and roll rate have been analyzed in order to quantify the error of these devices.

  19. VEHIOT: Design and Evaluation of an IoT Architecture Based on Low-Cost Devices to Be Embedded in Production Vehicles

    PubMed Central

    Díaz, Vicente

    2018-01-01

    Nowadays, vehicles are incorporating control systems in order to improve their stability and handling. These control systems need to know the vehicle dynamics through variables (lateral acceleration, roll rate, roll angle, sideslip angle, etc.) that are obtained or estimated from sensors. To this end, it is necessary to mount on vehicles not only low-cost sensors, but also low-cost embedded systems that allow data to be acquired from the sensors and the developed estimation and control algorithms to be executed with sufficiently fast computation. All these devices have to be integrated in an adequate architecture with enough performance in terms of accuracy, reliability and processing time. In this article, an architecture to carry out the estimation and control of vehicle dynamics has been developed. This architecture was designed considering the basic principles of IoT and integrates low-cost sensors and embedded hardware for orchestrating the experiments. A comparison of two different low-cost systems in terms of accuracy, acquisition time and reliability has been carried out. Both devices have been compared with the VBOX device from Racelogic, which has been used as the ground truth. The comparison is based on tests carried out in a real vehicle. The lateral acceleration and roll rate have been analyzed in order to quantify the error of these devices. PMID:29415507

  20. Is There an Economic Case for Training Intervention in the Manual Material Handling Sector of Developing Countries?

    PubMed

    Lahiri, Supriya; Tempesti, Tommaso; Gangopadhyay, Somnath

    2016-02-01

    The objective was to estimate cost-effectiveness ratios and net costs of a training intervention to reduce morbidity among porters who carry loads without mechanical assistance in a developing-country informal sector setting. Pre- and post-intervention survey data (n = 100) were collected in a prospective study: differences in physical/mental composite scores and pain scale scores were computed. Costs and economic benefits of the intervention were monetized with a net-cost model. Significant changes in physical composite scores (2.5), mental composite scores (3.2), and pain scale scores (-1.0) led to cost-effectiveness ratios of $6.97, $5.41, and $17.91, respectively. Multivariate analysis showed that program adherence enhanced effectiveness. The net cost of the intervention was -$5979.00 due to a reduction in absenteeism. Workplace ergonomic training is cost-effective and should be implemented where other engineering-control interventions are precluded due to infrastructural constraints.

  1. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; Werth, Charles J.; Valocchi, Albert J.

    2016-07-01

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydrogeophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle such difficulties, the principal component geostatistical approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires far fewer forward simulation runs per iteration than the number of unknown parameters and measurements needed in traditional inversion methods. PCGA can be conveniently linked to any multiphysics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed by the zeroth temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Only about 2000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.
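
    The moment-based compression of the breakthrough curves can be illustrated with a short sketch: the temporal moments of each voxel's concentration time series are integrated numerically and their ratio gives a mean arrival time. The function name and the trapezoidal integration are assumptions for illustration.

    ```python
    import numpy as np

    def mean_travel_time(t, c):
        """Zeroth (m0) and first (m1) temporal moments of a breakthrough
        curve c(t), integrated with the trapezoidal rule; m1/m0 compresses
        the whole time series to a single mean arrival time."""
        t, c = np.asarray(t, float), np.asarray(c, float)
        dt = np.diff(t)
        m0 = np.sum(0.5 * dt * (c[1:] + c[:-1]))
        m1 = np.sum(0.5 * dt * ((t * c)[1:] + (t * c)[:-1]))
        return m1 / m0
    ```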

  2. Herd-Level Mastitis-Associated Costs on Canadian Dairy Farms

    PubMed Central

    Aghamohammadi, Mahjoob; Haine, Denis; Kelton, David F.; Barkema, Herman W.; Hogeveen, Henk; Keefe, Gregory P.; Dufour, Simon

    2018-01-01

    Mastitis imposes considerable and recurring economic losses on the dairy industry worldwide. The main objective of this study was to estimate herd-level costs incurred by expenditures and production losses associated with mastitis on Canadian dairy farms in 2015, based on producer reports. Previously, published mastitis economic frameworks were used to develop an economic model with the most important cost components. Components investigated were divided between clinical mastitis (CM), subclinical mastitis (SCM), and other costs components (i.e., preventive measures and product quality). A questionnaire was mailed to 374 dairy producers randomly selected from the (Canadian National Dairy Study 2015) to collect data on these costs components, and 145 dairy producers returned a completed questionnaire. For each herd, costs due to the different mastitis-related components were computed by applying the values reported by the dairy producer to the developed economic model. Then, for each herd, a proportion of the costs attributable to a specific component was computed by dividing absolute costs for this component by total herd mastitis-related costs. Median self-reported CM incidence was 19 cases/100 cow-year and mean self-reported bulk milk somatic cell count was 184,000 cells/mL. Most producers reported using post-milking teat disinfection (97%) and dry cow therapy (93%), and a substantial proportion of producers reported using pre-milking teat disinfection (79%) and wearing gloves during milking (77%). Mastitis costs were substantial (662 CAD per milking cow per year for a typical Canadian dairy farm), with a large portion of the costs (48%) being attributed to SCM, and 34 and 15% due to CM and implementation of preventive measures, respectively. For SCM, the two most important cost components were the subsequent milk yield reduction and culling (72 and 25% of SCM costs, respectively). For CM, first, second, and third most important cost components were culling (48% of CM costs), milk yield reduction following the CM events (34%), and discarded milk (11%), respectively. This study is the first since 1990 to investigate costs of mastitis in Canada. The model developed in the current study can be used to compute mastitis costs at the herd and national level in Canada. PMID:29868620

  3. Simultaneous quaternion estimation (QUEST) and bias determination

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1989-01-01

    Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm minimizing Wahba's loss function are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.

  4. Reduced Design Load Basis for Ultimate Blade Loads Estimation in Multidisciplinary Design Optimization Frameworks

    NASA Astrophysics Data System (ADS)

    Pavese, Christian; Tibaldi, Carlo; Larsen, Torben J.; Kim, Taeseong; Thomsen, Kenneth

    2016-09-01

    The aim is to provide a fast and reliable approach to estimate ultimate blade loads for a multidisciplinary design optimization (MDO) framework. For blade design purposes, the standards require a large number of computationally expensive simulations, which cannot be efficiently run at each cost function evaluation of an MDO process. This work describes a method that allows integrating the calculation of the blade load envelopes inside an MDO loop. Ultimate blade load envelopes are calculated for a baseline design and a design obtained after an iteration of an MDO. These envelopes are computed for a full standard design load basis (DLB) and a deterministic reduced DLB. Ultimate loads extracted from the two DLBs, for each of the two blade designs, are compared and analyzed. Although the reduced DLB supplies ultimate loads of different magnitude, the shapes of the estimated envelopes are similar to those computed using the full DLB. This observation is used to propose a scheme that is computationally cheap, and that can be integrated inside an MDO framework, providing a sufficiently reliable estimation of the blade ultimate loading. The latter aspect is of key importance when design variables implementing passive control methodologies are included in the formulation of the optimization problem. An MDO of a 10 MW wind turbine blade is presented as an applied case study to show the efficacy of the reduced DLB concept.

  5. Approximate, computationally efficient online learning in Bayesian spiking neurons.

    PubMed

    Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André

    2014-03-01

    Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.

  6. Long-term cost-effectiveness of disease management in systolic heart failure.

    PubMed

    Miller, George; Randolph, Stephen; Forkner, Emma; Smith, Brad; Galbreath, Autumn Dawn

    2009-01-01

    Although congestive heart failure (CHF) is a primary target for disease management programs, previous studies have generated mixed results regarding the effectiveness and cost savings of disease management when applied to CHF. We estimated the long-term impact of systolic heart failure disease management from the results of an 18-month clinical trial. We used data generated from the trial (starting population distributions, resource utilization, mortality rates, and transition probabilities) in a Markov model to project results of continuing the disease management program for the patients' lifetimes. Outputs included distribution of illness severity, mortality, resource consumption, and the cost of resources consumed. Both cost and effectiveness were discounted at a rate of 3% per year. Cost-effectiveness was computed as cost per quality-adjusted life year (QALY) gained. Model results were validated against trial data and indicated that, over their lifetimes, patients experienced a lifespan extension of 51 days. Combined discounted lifetime program and medical costs were $4850 higher in the disease management group than the control group, but the program had a favorable long-term discounted cost-effectiveness of $43,650/QALY. These results are robust to assumptions regarding mortality rates, the impact of aging on the cost of care, the discount rate, utility values, and the targeted population. Estimation of the clinical benefits and financial burden of disease management can be enhanced by model-based analyses to project costs and effectiveness. Our results suggest that disease management of heart failure patients can be cost-effective over the long term.
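
    The projection machinery described above can be illustrated with a toy Markov cohort sketch: annual cycles, 3% discounting, and an incremental cost per QALY at the end. All states, transition probabilities, costs, and utilities below are invented for illustration and are not the trial's values.

    ```python
    import numpy as np

    cost = np.array([2000.0, 9000.0, 0.0])     # annual cost: mild, severe, dead
    utility = np.array([0.80, 0.55, 0.0])      # QALY weight per state

    def project(P, program_cost=0.0, years=30, rate=0.03):
        """Discounted lifetime cost and QALYs for a cohort starting 'mild'."""
        dist = np.array([1.0, 0.0, 0.0])
        tot_cost = tot_qaly = 0.0
        for year in range(years):
            d = (1.0 + rate) ** -year
            tot_cost += d * (dist @ cost + program_cost * dist[:2].sum())
            tot_qaly += d * (dist @ utility)
            dist = dist @ P                    # advance the cohort one cycle
        return tot_cost, tot_qaly

    P_usual = np.array([[0.83, 0.11, 0.06], [0.05, 0.79, 0.16], [0, 0, 1]])
    P_dm    = np.array([[0.86, 0.09, 0.05], [0.07, 0.79, 0.14], [0, 0, 1]])
    c1, q1 = project(P_dm, program_cost=300.0)  # hypothetical program arm
    c0, q0 = project(P_usual)                   # hypothetical control arm
    print("incremental cost per QALY:", (c1 - c0) / (q1 - q0))
    ```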

  7. Advanced space communications architecture study. Volume 2: Technical report

    NASA Technical Reports Server (NTRS)

    Horstein, Michael; Hadinger, Peter J.

    1987-01-01

    The technical feasibility and economic viability of satellite system architectures that are suitable for customer premise service (CPS) communications are investigated. System evaluation is performed at 30/20 GHz (Ka-band); however, the system architectures examined are equally applicable to 14/11 GHz (Ku-band). Emphasis is placed on systems that permit low-cost user terminals. Frequency division multiple access (FDMA) is used on the uplink, with typically 10,000 simultaneous accesses per satellite, each of 64 kbps. Bulk demodulators onboard the satellite, in combination with a baseband multiplexer, convert the many narrowband uplink signals into a small number of wideband data streams for downlink transmission. Single-hop network interconnectivity is accomplished via downlink scanning beams. Each satellite is estimated to weigh 5600 lb and consume 6850W of power; the corresponding payload totals are 1000 lb and 5000 W. Nonrecurring satellite cost is estimated at $110 million, with the first-unit cost at $113 million. In large quantities, the user terminal cost estimate is $25,000. For an assumed traffic profile, the required system revenue has been computed as a function of the internal rate of return (IRR) on invested capital. The equivalent user charge per-minute of 64-kbps channel service has also been determined.

  8. Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform

    PubMed Central

    Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong

    2018-01-01

    Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach for PPS parameters estimation based on adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and error propagation problems, which widely exist in PPSs field, are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by S-transform (ST), which can preserve information on signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by the principal component analysis (PCA), which is robust to the noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm. PMID:29438317

  9. Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform.

    PubMed

    Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong

    2018-02-13

    Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach for PPS parameters estimation based on adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and error propagation problems, which widely exist in PPSs field, are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by S-transform (ST), which can preserve information on signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by the principal component analysis (PCA), which is robust to the noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm.

  10. Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments

    NASA Astrophysics Data System (ADS)

    Munsky, Brian; Shepherd, Douglas

    2014-03-01

    Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high-precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.

  11. Model documentation renewable fuels module of the National Energy Modeling System

    NASA Astrophysics Data System (ADS)

    1995-06-01

    This report documents the objectives, analytical approach, and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1995 Annual Energy Outlook (AEO95) forecasts. The report catalogs and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described. The RFM consists of six analytical submodules that represent each of the major renewable energy resources -- wood, municipal solid waste (MSW), solar energy, wind energy, geothermal energy, and alcohol fuels. The RFM also reads in hydroelectric facility capacities and capacity factors from a data file for use by the NEMS Electricity Market Module (EMM). The purpose of the RFM is to define the technological, cost, and resource size characteristics of renewable energy technologies. These characteristics are used to compute a levelized cost to be competed against other similarly derived costs from other energy sources and technologies. The competition of these energy sources over the NEMS time horizon determines the market penetration of these renewable energy technologies. The characteristics include available energy capacity, capital costs, fixed operating costs, variable operating costs, capacity factor, heat rate, construction lead time, and fuel product price.
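
    The levelized-cost comparison the RFM feeds into the EMM can be illustrated with a simplified formula. This is a sketch only: the actual NEMS levelization accounts for financing, taxes, and other factors, and the parameter names and defaults below are assumptions.

    ```python
    def levelized_cost(capital, fixed_om, variable_om, heat_rate, fuel_price,
                       capacity_factor, lifetime_yr=30, discount=0.07):
        """Simplified levelized cost in $/kWh. Units assumed: capital and
        fixed_om in $/kW and $/kW-yr, variable_om in $/kWh, heat_rate in
        Btu/kWh, fuel_price in $/MMBtu."""
        crf = (discount * (1 + discount) ** lifetime_yr /
               ((1 + discount) ** lifetime_yr - 1))   # capital recovery factor
        kwh_per_kw_yr = 8760.0 * capacity_factor
        return ((capital * crf + fixed_om) / kwh_per_kw_yr
                + variable_om + heat_rate * fuel_price / 1e6)
    ```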

  12. Aid to planning the marketing of mining area boundaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giles, R.H. Jr.

    Reducing trespass, legal costs, and timber and wildlife poaching and increasing control, safety, and security are key reasons why mine land boundaries need to be marked. Accidents may be reduced, especially when associated with blast area boundaries, and in some cases increased income may be gained from hunting and recreational fees on well-marked areas. A BASIC computer program for an IBM-PC has been developed that requires minimum inputs to estimate boundary marking costs. This paper describes the rationale for the program and shows representative outputs. 3 references, 3 tables.

  13. Ag-Air Service

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Agricultural aerial application, "ag-air," involves more than 10,000 aircraft spreading insecticides, herbicides, fertilizer, seed and other materials over millions of acres of farmland. It is difficult for an operator to estimate costs accurately and to decide what to charge or which airplane can handle which assignment most efficiently. Econ, Inc.'s computerized service was designed to improve business efficiency in the choice of aircraft and the determination of charge rates based on realistic operating cost data. Each subscriber fills out a detailed form that pertains to his needs and then receives a custom-tailored computer printout best suited to his particular business mix.

  14. Divide and conquer approach to quantum Hamiltonian simulation

    NASA Astrophysics Data System (ADS)

    Hadfield, Stuart; Papageorgiou, Anargyros

    2018-04-01

    We show a divide and conquer approach for simulating quantum mechanical systems on quantum computers. We can obtain fast simulation algorithms using Hamiltonian structure. Considering a sum of Hamiltonians we split them into groups, simulate each group separately, and combine the partial results. Simulation is customized to take advantage of the properties of each group, and hence yield refined bounds to the overall simulation cost. We illustrate our results using the electronic structure problem of quantum chemistry, where we obtain significantly improved cost estimates under very mild assumptions.
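
    A toy numerical illustration of the grouping idea follows (not the paper's algorithm or its cost bounds): split the Hamiltonian terms into groups, exponentiate each group, and Trotterize over the groups. The grouping and step count are arbitrary choices here.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def grouped_trotter(groups, t, steps):
        """Approximate exp(-i*H*t) for H = sum of all terms, where 'groups'
        is a list of lists of Hermitian matrices; each group is exponentiated
        exactly and the groups are composed in a first-order Trotter step."""
        step_ops = [expm(-1j * sum(g) * t / steps) for g in groups]
        U_step = (np.linalg.multi_dot(step_ops)
                  if len(step_ops) > 1 else step_ops[0])
        return np.linalg.matrix_power(U_step, steps)
    ```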

  15. Quantifying and reducing statistical uncertainty in sample-based health program costing studies in low- and middle-income countries.

    PubMed

    Rivera-Rodriguez, Claudia L; Resch, Stephen; Haneuse, Sebastien

    2018-01-01

    In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty.
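
    As a concrete illustration of the resampling option discussed above, a bootstrap standard error for a weighted total-cost estimate might look like the following sketch. It is simplified: a real survey bootstrap would resample within strata and clusters, and all names are assumptions.

    ```python
    import numpy as np

    def bootstrap_se(facility_costs, weights, n_boot=2000, seed=1):
        """Bootstrap standard error of the weighted total-cost estimate
        sum(weights * costs) computed from a facility subsample."""
        rng = np.random.default_rng(seed)
        costs = np.asarray(facility_costs, float)
        w = np.asarray(weights, float)
        n = len(costs)
        totals = [np.sum(costs[idx] * w[idx])
                  for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
        return float(np.std(totals, ddof=1))
    ```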

  16. Quantifying and reducing statistical uncertainty in sample-based health program costing studies in low- and middle-income countries

    PubMed Central

    Resch, Stephen

    2018-01-01

    Objectives: In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. Methods: We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. Results: A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Conclusion: Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty. PMID:29636964

  17. Computer mapping of LANDSAT data for environmental applications

    NASA Technical Reports Server (NTRS)

    Rogers, R. H. (Principal Investigator); Mckeon, J. B.; Reed, L. E.; Schmidt, N. F.; Schecter, R. N.

    1975-01-01

    The author has identified the following significant results. Land cover overlays and maps produced from LANDSAT are providing information on existing land use and resources throughout the 208 study area. The overlays are being used to delineate drainage areas of a predominant land cover type. Information on cover type is also being combined with other pertinent data to develop estimates of sediment and nutrient flows from the drainage area. The LANDSAT inventory of present land cover, together with population projections, is providing a basis for developing maps of anticipated land use patterns required to evaluate the impact on water quality which may result from these patterns. Overlays of forest types were useful for defining wildlife habitat and vegetational resources in the region. LANDSAT data and computer-assisted interpretation were found to be a rapid, cost-effective procedure for inventorying land cover on a regional basis. The entire 208 inventory, which included acquisition of ground truth, LANDSAT tapes, computer processing, and production of overlays and coded tapes, was completed within a period of 2 months at a cost of about 0.6 cents per acre, a significant improvement in time and cost over conventional photointerpretation and mapping techniques.

  18. Using cost-effectiveness estimates from survey data to guide commissioning: an application to home care.

    PubMed

    Forder, Julien; Malley, Juliette; Towers, Ann-Marie; Netten, Ann

    2014-08-01

    The aim is to describe and trial a pragmatic method to produce estimates of the incremental cost-effectiveness of care services from survey data. The main challenge is in estimating the counterfactual; that is, what the patient's quality of life would be if they did not receive that level of service. A production function method is presented, which seeks to distinguish the variation in care-related quality of life in the data that is due to service use as opposed to other factors. A problem is that relevant need factors also affect the amount of service used, and therefore any missing factors could create endogeneity bias. Instrumental variable (IV) estimation can mitigate this problem. This method was applied to a survey of older people using home care as a proof of concept. In the analysis, we were able to estimate a quality-of-life production function using survey data with the expected form and robust estimation diagnostics. The practical advantages of this method are clear, but there are limitations. It is computationally complex, and there is a risk of misspecification and biased results, particularly with IV estimation. One strategy would be to use this method to produce preliminary estimates, with a full trial conducted thereafter, if indicated. Copyright © 2013 John Wiley & Sons, Ltd.
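
    As a rough illustration of the production-function approach with instrumental variables described above, the sketch below (Python, synthetic data) fits a care-related quality-of-life equation by two-stage least squares, using home-care hours as the endogenous service input and a hypothetical allocation-rule variable as the instrument. The variable names, data-generating process, and coefficients are invented for illustration and are not the authors' model.

```python
# Minimal two-stage least squares (2SLS) sketch for a quality-of-life
# "production function": qol = b0 + b1*hours + b2*need + error, where
# service hours are endogenous. 'alloc_rule' is a hypothetical instrument.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
need = rng.normal(size=n)                    # observed need factor
unobserved = rng.normal(size=n)              # missing need factor -> endogeneity
alloc_rule = rng.normal(size=n)              # instrument: shifts hours, not QoL directly
hours = 2.0 + 0.8 * alloc_rule + 0.5 * need + 0.6 * unobserved + rng.normal(size=n)
qol = 1.0 + 0.30 * hours - 0.40 * need - 0.50 * unobserved + rng.normal(scale=0.5, size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X_exog = np.column_stack([np.ones(n), need])
Z = np.column_stack([X_exog, alloc_rule])        # instruments incl. exogenous regressors

# Stage 1: predict the endogenous service variable from the instruments.
hours_hat = Z @ ols(Z, hours)

# Stage 2: regress QoL on predicted hours and the exogenous need factor.
X_2sls = np.column_stack([np.ones(n), hours_hat, need])
b_2sls = ols(X_2sls, qol)

# Naive OLS for comparison (biased by the unobserved need factor).
b_ols = ols(np.column_stack([np.ones(n), hours, need]), qol)
print("2SLS effect of hours:", round(b_2sls[1], 3), " OLS:", round(b_ols[1], 3))
```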

  19. Upon accounting for the impact of isoenzyme loss, gene deletion costs anticorrelate with their evolutionary rates

    DOE PAGES

    Jacobs, Christopher; Lambourne, Luke; Xia, Yu; ...

    2017-01-20

    Here, system-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism's genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene's fitness contribution to an organism "here and now" and the same gene's historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call "function-loss cost", which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.
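
    The gene-loss versus function-loss distinction above can be illustrated with a toy flux balance calculation. The sketch below (not the authors' genome-scale model) uses a made-up three-reaction network in which two isoenzymes catalyze the same step: the gene-loss cost lets the isoenzyme back up the deleted gene, while the function-loss-style cost treats isoenzymes as non-redundant.

```python
# Toy flux balance analysis (FBA) sketch contrasting a "gene-loss" cost
# (isoenzyme backs up the deleted gene's reaction) with a "function-loss"-style
# cost (isoenzymes treated as non-redundant). The network and gene-reaction
# mapping are invented; this is not the authors' genome-scale model.
import numpy as np
from scipy.optimize import linprog

# Reactions: uptake (-> A), R1: A -> B (gene g1), R2: A -> B (isoenzyme, gene g2),
# biomass (B ->). Metabolite rows: A, B. Steady state requires S v = 0.
S = np.array([[1.0, -1.0, -1.0,  0.0],
              [0.0,  1.0,  1.0, -1.0]])
bounds = [(0, 10), (0, 6), (0, 6), (0, None)]
c = [0.0, 0.0, 0.0, -1.0]                 # maximize biomass flux (linprog minimizes)

def max_biomass(blocked=()):
    b = [(0.0, 0.0) if i in blocked else bounds[i] for i in range(len(bounds))]
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=b, method="highs")
    return -res.fun

wild_type = max_biomass()
gene_loss_cost = wild_type - max_biomass(blocked={1})         # g1 lost, R2 backs up
function_loss_cost = wild_type - max_biomass(blocked={1, 2})  # no isoenzyme backup
print(wild_type, gene_loss_cost, function_loss_cost)          # roughly 10, 4, 10
```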

  20. Upon accounting for the impact of isoenzyme loss, gene deletion costs anticorrelate with their evolutionary rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs, Christopher; Lambourne, Luke; Xia, Yu

    Here, system-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism's genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene's fitness contribution to an organism "here and now" and the same gene's historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call "function-loss cost", which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.

  1. Economic evaluation of long-term impacts of universal newborn hearing screening.

    PubMed

    Chiou, Shu-Ti; Lung, Hou-Ling; Chen, Li-Sheng; Yen, Amy Ming-Fang; Fann, Jean Ching-Yuan; Chiu, Sherry Yueh-Hsia; Chen, Hsiu-Hsi

    2017-01-01

    Little is known about the long-term efficacy and economic impacts of universal newborn hearing screening (UNHS). An analytical Markov decision model was framed with two screening strategies, UNHS with the transient evoked otoacoustic emission (TEOAE) test and with the automated auditory brainstem response (aABR) test, against no screening. By estimating intervention and long-term costs on treatment and productivity losses and the utility of life years determined by the status of hearing loss, we computed base-case estimates of the incremental cost-utility ratios (ICURs). The scatter plot of ICUR and the acceptability curve were used to assess the economic results of aABR versus TEOAE or both versus no screening. The model was applied to a hypothetical cohort of 200,000 Taiwanese newborns. TEOAE and aABR dominated the no-screening strategy (ICUR = $-4800.89 and $-4111.23, indicating less cost and more utility). Given a willingness to pay (WTP) of $20,000, the probability of aABR being cost-effective against TEOAE was up to 90%. UNHS for hearing loss with aABR is the most economical option and is supported by evidence-based economic evaluation from a societal perspective.
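
    For readers unfamiliar with the metric, the incremental cost-utility ratio is simply the incremental cost divided by the incremental utility of one strategy over another; a negative ICUR with positive incremental utility indicates dominance. The snippet below shows the arithmetic with placeholder numbers, not the study's inputs.

```python
# Sketch of an incremental cost-utility ratio (ICUR) calculation.
# Costs and QALYs below are placeholders, not the study's estimates.
def icur(cost_new, qaly_new, cost_old, qaly_old):
    d_cost, d_qaly = cost_new - cost_old, qaly_new - qaly_old
    return d_cost / d_qaly

# Screening strategy vs. no screening: lower lifetime cost and more QALYs
# (negative ICUR with positive incremental QALYs means the strategy dominates).
print(icur(cost_new=1_200, qaly_new=24.6, cost_old=1_450, qaly_old=24.5))
```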

  2. Location estimation in wireless sensor networks using spring-relaxation technique.

    PubMed

    Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M

    2010-01-01

    Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN). Due to its massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a certain location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.
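
    A minimal sketch of the spring-relaxation idea follows: each range measurement is treated as a spring between two nodes, and nodes with unknown positions are nudged along the net spring force until the layout settles. The anchor geometry, noise level, and step size below are invented; a real deployment would derive the ranges from RSSI as described above.

```python
# Spring-relaxation localization sketch: each range measurement acts like a
# spring between two nodes; nodes with unknown positions are moved along the
# net spring force until the layout settles. Geometry, noise, and the step
# size are invented; ranges would come from RSSI in a real deployment.
import numpy as np

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_free = np.array([[3.0, 4.0], [7.0, 6.0]])
nodes_true = np.vstack([anchors, true_free])

# Noisy pairwise range measurements between all nodes.
dist = np.linalg.norm(nodes_true[:, None] - nodes_true[None, :], axis=-1)
meas = dist + rng.normal(scale=0.1, size=dist.shape)

est = np.vstack([anchors, rng.uniform(0, 10, size=true_free.shape)])  # initial guess
free_idx = [4, 5]                                  # indices of the unknown nodes
for _ in range(500):
    for i in free_idx:
        diff = est[i] - est                        # vectors from every node to node i
        d = np.linalg.norm(diff, axis=1)
        d[i] = 1.0                                 # avoid dividing by zero for self
        # Spring force along each edge, proportional to (measured - current) length.
        force = (meas[i] - d)[:, None] * (diff / d[:, None])
        force[i] = 0.0
        est[i] += 0.05 * force.sum(axis=0)         # relaxation step

print(np.round(est[free_idx], 2))                  # should land near the true positions
```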

  3. Advanced vehicles: Costs, energy use, and macroeconomic impacts

    NASA Astrophysics Data System (ADS)

    Wang, Guihua

    Advanced vehicles and alternative fuels could play an important role in reducing oil use and changing the economy structure. We developed the Costs for Advanced Vehicles and Energy (CAVE) model to investigate a vehicle portfolio scenario in California during 2010-2030. Then we employed a computable general equilibrium model to estimate macroeconomic impacts of the advanced vehicle scenario on the economy of California. Results indicate that, due to slow fleet turnover, conventional vehicles are expected to continue to dominate the on-road fleet and gasoline is the major transportation fuel over the next two decades. However, alternative fuels could play an increasingly important role in gasoline displacement. Advanced vehicle costs are expected to decrease dramatically with production volume and technological progress; e.g., incremental costs for fuel cell vehicles and hydrogen could break even with gasoline savings in 2028. Overall, the vehicle portfolio scenario is estimated to have a slightly negative influence on California's economy, because advanced vehicles are very costly and, therefore, the resulting gasoline savings generally cannot offset the high incremental expenditure on vehicles and alternative fuels. Sensitivity analysis shows that an increase in gasoline price or a drop in alternative fuel prices could offset a portion of the negative impact.

  4. Attitude Determination Algorithm based on Relative Quaternion Geometry of Velocity Incremental Vectors for Cost Efficient AHRS Design

    NASA Astrophysics Data System (ADS)

    Lee, Byungjin; Lee, Young Jae; Sung, Sangkyung

    2018-05-01

    A novel attitude determination method is investigated that is computationally efficient and implementable on low-cost sensor and embedded platforms. A recent result on attitude reference system design is adapted to further develop a three-dimensional attitude determination algorithm through relative velocity incremental measurements. For this, velocity incremental vectors, computed respectively from INS and GPS with different update rates, are compared to generate the filter measurement for attitude estimation. In the quaternion-based Kalman filter configuration, an Euler-like attitude perturbation angle is uniquely introduced for reducing the number of filter states. Furthermore, assuming a small-angle approximation between attitude update periods, it is shown that the reduced-order filter greatly simplifies the propagation processes. For performance verification, both simulation and experimental studies were completed. A low-cost MEMS IMU and a GPS receiver were employed for system integration, and comparison with the true trajectory or a high-grade navigation system demonstrates the performance of the proposed algorithm.

  5. Wide field imaging problems in radio astronomy

    NASA Astrophysics Data System (ADS)

    Cornwell, T. J.; Golap, K.; Bhatnagar, S.

    2005-03-01

    The new generation of synthesis radio telescopes now being proposed, designed, and constructed face substantial problems in making images over wide fields of view. Such observations are required either to achieve the full sensitivity limit in crowded fields or for surveys. The Square Kilometre Array (SKA Consortium, Tech. Rep., 2004), now being developed by an international consortium of 15 countries, will require advances well beyond the current state of the art. We review the theory of synthesis radio telescopes for large fields of view. We describe a new algorithm, W projection, for correcting the non-coplanar baselines aberration. This algorithm has improved performance over those previously used (typically an order of magnitude in speed). Despite the advent of W projection, the computing hardware required for SKA wide field imaging is estimated to cost up to $500M (2015 dollars). This is about half the target cost of the SKA. Reconfigurable computing is one way in which the costs can be decreased dramatically.

  6. A model to forecast data centre infrastructure costs.

    NASA Astrophysics Data System (ADS)

    Vernet, R.

    2015-12-01

    The computing needs in the HEP community are increasing steadily, but the current funding situation in many countries is tight. As a consequence experiments, data centres, and funding agencies have to rationalize resource usage and expenditures. CC-IN2P3 (Lyon, France) provides computing resources to many experiments including LHC, and is a major partner for astroparticle projects like LSST, CTA or Euclid. The financial cost to accommodate all these experiments is substantial and has to be planned well in advance for funding and strategic reasons. In that perspective, leveraging infrastructure expenses, electric power cost and hardware performance observed in our site over the last years, we have built a model that integrates these data and provides estimates of the investments that would be required to cater to the experiments for the mid-term future. We present how our model is built and the expenditure forecast it produces, taking into account the experiment roadmaps. We also examine the resource growth predicted by our model over the next years assuming a flat-budget scenario.
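
    A toy version of such a forecast model is sketched below: it combines an assumed yearly price/performance gain, a fixed hardware lifetime, and per-unit power costs under a flat annual budget to project installed capacity. All figures are invented placeholders, not CC-IN2P3 data.

```python
# Toy data-centre cost model: given a flat annual budget, a yearly hardware
# price/performance gain, a fixed replacement cycle, and per-unit running
# costs, project how installed compute capacity evolves. All figures
# (budget, unit cost, power price, lifetime) are invented placeholders.
budget_per_year = 1_000_000.0      # flat annual budget, arbitrary currency units
cost_per_unit_y0 = 10.0            # purchase cost per unit of capacity in year 0
perf_gain = 0.15                   # yearly price/performance improvement
power_cost_per_unit = 1.0          # yearly electricity + infrastructure per unit
lifetime = 4                       # years before hardware is retired

installed = []                     # capacity purchased each year
for year in range(8):
    unit_cost = cost_per_unit_y0 / ((1 + perf_gain) ** year)
    in_service = sum(installed[-(lifetime - 1):])      # still-running past purchases
    opex = power_cost_per_unit * in_service
    capex = max(budget_per_year - opex, 0.0)
    installed.append(capex / unit_cost)
    print(f"year {year}: installed capacity {sum(installed[-lifetime:]):,.0f}")
```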

  7. Assessing the Potential Cost-Effectiveness of Microneedle Patches in Childhood Measles Vaccination Programs: The Case for Further Research and Development.

    PubMed

    Adhikari, Bishwa B; Goodson, James L; Chu, Susan Y; Rota, Paul A; Meltzer, Martin I

    2016-12-01

    Currently available measles vaccines are administered by subcutaneous injections and require reconstitution with a diluent and a cold chain, which is resource intensive and challenging to maintain. To overcome these challenges and potentially increase vaccination coverage, microneedle patches are being developed to deliver the measles vaccine. This study compares the cost-effectiveness of using microneedle patches with traditional vaccine delivery by syringe-and-needle (subcutaneous vaccination) in children's measles vaccination programs. We built a simple spreadsheet model to compute the vaccination costs for using microneedle patch and syringe-and-needle technologies. We assumed that microneedle vaccines will be, compared with current vaccines, more heat stable and require less expensive cold chains when used in the field. We used historical data on the incidence of measles among communities with low measles vaccination rates. The cost of microneedle vaccination was estimated at US$0.95 (range US$0.71-US$1.18) for the first dose, compared with US$1.65 (range US$1.24-US$2.06) for the first dose delivered by subcutaneous vaccination. At 95% vaccination coverage, microneedle patch vaccination was estimated to cost US$1.66 per measles case averted (range US$1.24-US$2.07) compared with an estimated cost of US$2.64 per case averted (range US$1.98-US$3.30) using subcutaneous vaccination. Use of microneedle patches may reduce costs; however, the cost-effectiveness of patches would depend on their acceptability to vaccine recipients and their vaccine effectiveness relative to the existing conventional vaccine-delivery method. This study emphasizes the need to continue research and development of this vaccine-delivery method that could boost measles elimination efforts through improved access to vaccines and increased vaccination coverage.
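
    The spreadsheet-style arithmetic behind a cost-per-case-averted comparison can be sketched as below. The delivery costs echo the per-dose figures quoted above, but the coverage, incidence, and effectiveness inputs are illustrative placeholders rather than the study's data.

```python
# Spreadsheet-style sketch: cost per measles case averted for two delivery
# technologies. Coverage, baseline incidence, and effectiveness are
# illustrative placeholders, not the study's inputs.
def cost_per_case_averted(cost_per_dose, coverage, baseline_cases_per_1000,
                          vaccine_effectiveness, cohort=1_000):
    doses = cohort * coverage
    program_cost = doses * cost_per_dose
    cases_averted = baseline_cases_per_1000 * coverage * vaccine_effectiveness
    return program_cost / cases_averted

for name, cost_dose in [("microneedle patch", 0.95), ("syringe-and-needle", 1.65)]:
    result = cost_per_case_averted(cost_dose, coverage=0.95,
                                   baseline_cases_per_1000=550,
                                   vaccine_effectiveness=0.98)
    print(f"{name}: US${result:.2f} per case averted")
```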

  8. Antihypertensive drugs: a perspective on pharmaceutical price erosion and its impact on cost-effectiveness.

    PubMed

    Refoios Camejo, Rodrigo; McGrath, Clare; Herings, Ron; Meerding, Willem-Jan; Rutten, Frans

    2012-01-01

    When comparators' prices decrease due to market competition and loss of exclusivity, the incremental clinical effectiveness required for a new technology to be cost-effective is expected to increase; and/or the minimum price at which it will be funded will tend to decrease. This may be, however, either unattainable physiologically or financially unviable for drug development. The objective of this study is to provide an empirical basis for this discussion by estimating the potential for price decreases to impact the cost-effectiveness of new therapies in hypertension. Cost-effectiveness at launch was estimated for all antihypertensive drugs launched between 1998 and 2008 in the United Kingdom using hypothetical degrees of incremental clinical effectiveness within the methodologic framework applied by the UK National Institute for Health and Clinical Excellence. Incremental cost-effectiveness ratios were computed and compared with funding thresholds. In addition, the levels of incremental clinical effectiveness required to achieve specific cost-effectiveness thresholds at given prices were estimated. Significant price decreases were observed for existing drugs. This was shown to markedly affect the cost-effectiveness of technologies entering the market. The required incremental clinical effectiveness was in many cases greater than physiologically possible so, as a consequence, a number of products might not be available today if current methods of economic appraisal had been applied. We conclude that the definition of cost-effectiveness thresholds is fundamental in promoting efficient innovation. Our findings demonstrate that comparator price attrition has the potential to put pressure on the pharmaceutical research model and presents a challenge to new therapies being accepted for funding. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
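
    The core relationship in this kind of analysis is that, at a given cost-effectiveness threshold, a new therapy priced above an eroding comparator must deliver incremental effectiveness of at least the incremental cost divided by the threshold. The sketch below shows how the required gain grows as the comparator price falls; the prices and threshold are illustrative, not the study's values.

```python
# Sketch of how comparator price erosion raises the effectiveness bar:
# at threshold lambda (cost per QALY), a new therapy priced delta_cost above
# its comparator needs incremental QALYs >= delta_cost / lambda.
threshold = 30_000.0                      # willingness to pay per QALY (illustrative)
annual_price_new = 450.0                  # hypothetical annual price of the new drug

for comparator_price in (300.0, 100.0, 15.0):   # branded -> eroded generic price
    delta_cost = annual_price_new - comparator_price
    required_qalys = delta_cost / threshold
    print(f"comparator at {comparator_price:>5}: need >= {required_qalys:.4f} QALYs/yr")
```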

  9. An emulator for minimizing finite element analysis implementation resources

    NASA Technical Reports Server (NTRS)

    Melosh, R. J.; Utku, S.; Salama, M.; Islam, M.

    1982-01-01

    A finite element analysis emulator providing a basis for efficiently establishing an optimum computer implementation strategy when many calculations are involved is described. The SCOPE emulator determines computer resources required as a function of the structural model, structural load-deflection equation characteristics, the storage allocation plan, and computer hardware capabilities. Thereby, it provides data for trading off analysis implementation options to arrive at a best strategy. The models contained in SCOPE lead to micro-operation computer counts of each finite element operation as well as overall computer resource cost estimates. Application of SCOPE to the Memphis-Arkansas bridge analysis provides measures of the accuracy of resource assessments. Data indicate that predictions are within 17.3 percent for calculation times and within 3.2 percent for peripheral storage resources for the ELAS code.

  10. Study of the modifications needed for effective operation NASTRAN on IBM virtual storage computers

    NASA Technical Reports Server (NTRS)

    Mccormick, C. W.; Render, K. H.

    1975-01-01

    The necessary modifications were determined to make NASTRAN operational under virtual storage operating systems (VS1 and VS2). Suggested changes are presented which will make NASTRAN operate more efficiently under these systems. Estimates of the cost and time involved in design, coding, and implementation of all suggested modifications are included.

  11. Simulating Timber and Deer Food Potential In Loblolly Pine Plantations

    Treesearch

    Clifford A. Myers

    1977-01-01

    This computer program analyzes both timber and deer food production on managed forests, providing estimates of the number of acres required per deer for each week or month, yearly timber cuts, and current timber growing stock, as well as a cost and return analysis of the timber operation. Input variables include stand descriptors, controls on management, stumpage...

  12. 26 CFR 1.167(a)-1 - Depreciation in general.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... salvage value, will, at the end of the estimated useful life of the depreciable property, equal the cost... depreciated below a reasonable salvage value under any method of computing depreciation. However, see section 167(f) and § 1.167(f)-1 for rules which permit a reduction in the amount of salvage value to be taken...

  13. 26 CFR 1.167(a)-1 - Depreciation in general.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... salvage value, will, at the end of the estimated useful life of the depreciable property, equal the cost... depreciated below a reasonable salvage value under any method of computing depreciation. However, see section 167(f) and § 1.167(f)-1 for rules which permit a reduction in the amount of salvage value to be taken...

  14. 26 CFR 1.167(a)-1 - Depreciation in general.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... salvage value, will, at the end of the estimated useful life of the depreciable property, equal the cost... depreciated below a reasonable salvage value under any method of computing depreciation. However, see section 167(f) and § 1.167(f)-1 for rules which permit a reduction in the amount of salvage value to be taken...

  15. 26 CFR 1.167(a)-1 - Depreciation in general.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... salvage value, will, at the end of the estimated useful life of the depreciable property, equal the cost... depreciated below a reasonable salvage value under any method of computing depreciation. However, see section 167(f) and § 1.167(f)-1 for rules which permit a reduction in the amount of salvage value to be taken...

  16. Hardwood silviculture and skyline yarding on steep slopes: economic and environmental impacts

    Treesearch

    John E. Baumgras; Chris B. LeDoux

    1995-01-01

    Ameliorating the visual and environmental impact associated with harvesting hardwoods on steep slopes will require the efficient use of skyline yarding along with silvicultural alternatives to clearcutting. In evaluating the effects of these alternatives on harvesting revenue, results of field studies and computer simulations were used to estimate costs and revenue for...

  17. Comparison of numerical weather prediction based deterministic and probabilistic wind resource assessment methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Draxl, Caroline; Hopson, Thomas

    Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low resolution NWP model, in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on the MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on the combination of the analog ensemble with intermediate resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden, while providing accurate deterministic estimates and reliable probabilistic assessments.
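
    A minimal sketch of the analog ensemble idea is given below: for each coarse-model wind value, the k most similar historical model values are found and their paired observations form the ensemble, yielding both a deterministic mean and a probabilistic range. The synthetic series and the value of k are invented for illustration.

```python
# Analog ensemble sketch: for each coarse-model wind estimate, find the k most
# similar historical model values and use their paired observations as an
# ensemble, giving both a deterministic (mean) and probabilistic estimate.
# The synthetic "model" and "observed" series are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
n_hist, k = 5000, 20
model_hist = rng.weibull(2.0, n_hist) * 8.0                # coarse NWP wind speed
obs_hist = 1.1 * model_hist + rng.normal(0, 1.0, n_hist)   # co-located observations
obs_hist = np.clip(obs_hist, 0, None)

def analog_ensemble(model_value):
    idx = np.argsort(np.abs(model_hist - model_value))[:k]   # k closest analogs
    members = obs_hist[idx]
    return members.mean(), np.percentile(members, [10, 90])

for m in (3.0, 7.0, 12.0):
    mean, (p10, p90) = analog_ensemble(m)
    print(f"model {m:4.1f} m/s -> mean {mean:4.1f}, 10-90% range [{p10:.1f}, {p90:.1f}]")
```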

  18. The optimality of different strategies for supplemental staging of non-small-cell lung cancer: a health economic decision analysis.

    PubMed

    Søgaard, Rikke; Fischer, Barbara Malene B; Mortensen, Jann; Rasmussen, Torben R; Lassen, Ulrik

    2013-01-01

    To assess the expected costs and outcomes of alternative strategies for staging of lung cancer to inform a Danish National Health Service perspective about the most cost-effective strategy. A decision tree was specified for patients with a confirmed diagnosis of non-small-cell lung cancer. Six strategies were defined from relevant combinations of mediastinoscopy, endoscopic or endobronchial ultrasound with needle aspiration, and combined positron emission tomography-computed tomography with F18-fluorodeoxyglucose. Patients without distant metastases and central or contralateral nodal involvement (N2/N3) were considered to be candidates for surgical resection. Diagnostic accuracies were informed from literature reviews, prevalence and survival from the Danish Lung Cancer Registry, and procedure costs from national average tariffs. All parameters were specified probabilistically to determine the joint decision uncertainty. The cost-effectiveness analysis was based on the net present value of expected costs and life years accrued over a time horizon of 5 years. At threshold values of around €30,000 for cost-effectiveness, it was found to be cost-effective to send all patients to positron emission tomography-computed tomography with confirmation of positive findings on nodal involvement by endobronchial ultrasound. This result appeared robust in deterministic sensitivity analysis. The expected value of perfect information was estimated at €52 per patient, indicating that further research might be worthwhile. The policy recommendation is to make combined positron emission tomography-computed tomography and endobronchial ultrasound available for supplemental staging of patients with non-small-cell lung cancer. The effects of alternative strategies on patients' quality of life, however, should be examined in future studies. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
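
    The decision-analytic rollup behind such a comparison can be sketched as follows: each strategy's branch probabilities, costs, and life years are combined into expected values, and strategies are compared via incremental net monetary benefit at a chosen threshold. All probabilities, costs, and life-year values below are invented placeholders, not the study's parameters.

```python
# Toy decision-tree rollup: expected cost and life-years for two staging
# strategies, compared via incremental net monetary benefit at a threshold.
# All probabilities, costs, and life-year values are invented placeholders.
def expected(branches):
    """branches: list of (probability, cost, life_years)."""
    cost = sum(p * c for p, c, _ in branches)
    ly = sum(p * l for p, _, l in branches)
    return cost, ly

petct_ebus = expected([(0.55, 9_000, 3.2),    # resectable, operated
                       (0.35, 14_000, 1.6),   # N2/N3 found, oncological treatment
                       (0.10, 16_000, 1.4)])  # distant metastases
mediastinoscopy = expected([(0.50, 11_000, 3.1),
                            (0.38, 15_500, 1.5),
                            (0.12, 17_000, 1.3)])

threshold = 30_000.0
d_cost = petct_ebus[0] - mediastinoscopy[0]
d_ly = petct_ebus[1] - mediastinoscopy[1]
inmb = threshold * d_ly - d_cost
print(f"incremental cost {d_cost:,.0f}, life-years {d_ly:.2f}, INMB {inmb:,.0f}")
```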

  19. Resource Utilization and Costs during the Initial Years of Lung Cancer Screening with Computed Tomography in Canada

    PubMed Central

    Lam, Stephen; Tammemagi, Martin C.; Evans, William K.; Leighl, Natasha B.; Regier, Dean A.; Bolbocean, Corneliu; Shepherd, Frances A.; Tsao, Ming-Sound; Manos, Daria; Liu, Geoffrey; Atkar-Khattra, Sukhinder; Cromwell, Ian; Johnston, Michael R.; Mayo, John R.; McWilliams, Annette; Couture, Christian; English, John C.; Goffin, John; Hwang, David M.; Puksa, Serge; Roberts, Heidi; Tremblay, Alain; MacEachern, Paul; Burrowes, Paul; Bhatia, Rick; Finley, Richard J.; Goss, Glenwood D.; Nicholas, Garth; Seely, Jean M.; Sekhon, Harmanjatinder S.; Yee, John; Amjadi, Kayvan; Cutz, Jean-Claude; Ionescu, Diana N.; Yasufuku, Kazuhiro; Martel, Simon; Soghrati, Kamyar; Sin, Don D.; Tan, Wan C.; Urbanski, Stefan; Xu, Zhaolin; Peacock, Stuart J.

    2014-01-01

    Background: It is estimated that millions of North Americans would qualify for lung cancer screening and that billions of dollars of national health expenditures would be required to support population-based computed tomography lung cancer screening programs. The decision to implement such programs should be informed by data on resource utilization and costs. Methods: Resource utilization data were collected prospectively from 2059 participants in the Pan-Canadian Early Detection of Lung Cancer Study using low-dose computed tomography (LDCT). Participants who had 2% or greater lung cancer risk over 3 years using a risk prediction tool were recruited from seven major cities across Canada. A cost analysis was conducted from the Canadian public payer’s perspective for resources that were used for the screening and treatment of lung cancer in the initial years of the study. Results: The average per-person cost for screening individuals with LDCT was $453 (95% confidence interval [CI], $400–$505) for the initial 18-months of screening following a baseline scan. The screening costs were highly dependent on the detected lung nodule size, presence of cancer, screening intervention, and the screening center. The mean per-person cost of treating lung cancer with curative surgery was $33,344 (95% CI, $31,553–$34,935) over 2 years. This was lower than the cost of treating advanced-stage lung cancer with chemotherapy, radiotherapy, or supportive care alone, ($47,792; 95% CI, $43,254–$52,200; p = 0.061). Conclusion: In the Pan-Canadian study, the average cost to screen individuals with a high risk for developing lung cancer using LDCT and the average initial cost of curative intent treatment were lower than the average per-person cost of treating advanced stage lung cancer which infrequently results in a cure. PMID:25105438

  20. Measuring direct and indirect costs of land retirement in an irrigated river basin: A budgeting regional multiplier approach

    NASA Astrophysics Data System (ADS)

    Hamilton, Joel; Whittlesey, Norman K.; Robison, M. Henry; Willis, David

    2002-08-01

    This analysis addresses three important conceptual problems in the measurement of direct and indirect costs and benefits: (1) the distribution of impacts between a regional economy and the encompassing state economy; (2) the distinction between indirect impacts and indirect costs (IC), focusing on the dynamic time path unemployed resources follow to find alternative employment; and (3) the distinction among the affected firms' microeconomic categories of fixed and variable costs as they are used to compute regional direct and indirect costs. It uses empirical procedures that reconcile the usual measures of economic impact provided by input/output models with the estimates of economic costs and benefits required for analysis of welfare changes. The paper illustrates the relationships and magnitudes involved in the context of water policy issues facing the Pecos River Basin of New Mexico.

  1. Why projects often fail even with high cost contingencies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kujawski, Edouard

    2002-02-28

    In this note we assume that the individual risks have been adequately quantified and the total project cost contingency adequately computed to ensure an agreed-to probability or confidence level that the total project cost estimate will not be exceeded. But even projects that implement such a process are likely to result in significant cost overruns and/or project failure if the project manager allocates the contingencies to the individual subsystems. The intuitive and mathematically valid solution is to maintain a project-wide contingency and to distribute it to the individual risks on an as-needed basis. Such an approach ensures cost-efficient risk management, and projects that implement it are more likely to succeed and to cost less. We illustrate these ideas using a simplified project with two independent risks. The formulation can readily be extended to multiple risks.
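
    The argument can be illustrated with a small Monte Carlo sketch: for two independent cost risks, the sum of the subsystem-level 80th-percentile contingencies exceeds the 80th percentile of the total cost, so a pooled project-wide contingency achieves the same confidence for less money. The lognormal risk distributions below are invented placeholders.

```python
# Monte Carlo sketch of the contingency argument: for two independent cost
# risks, the sum of subsystem-level 80th-percentile contingencies exceeds the
# 80th percentile of the total, so a pooled project-wide contingency suffices
# at lower cost. The lognormal risk distributions are invented placeholders.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
risk_a = rng.lognormal(mean=np.log(100), sigma=0.5, size=n)   # subsystem A overrun
risk_b = rng.lognormal(mean=np.log(100), sigma=0.5, size=n)   # subsystem B overrun

p80_a = np.percentile(risk_a, 80)
p80_b = np.percentile(risk_b, 80)
p80_total = np.percentile(risk_a + risk_b, 80)

print(f"allocated to subsystems: {p80_a + p80_b:8.1f}")
print(f"pooled project-wide:     {p80_total:8.1f}")   # smaller reserve, same confidence
```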

  2. Security systems engineering overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steele, B.J.

    Crime prevention is on the minds of most people today. The concern for public safety and the theft of valuable assets are being discussed at all levels of government and throughout the public sector. There is a growing demand for security systems that can adequately safeguard people and valuable assets against the sophistication of those criminals or adversaries who pose a threat. The crime in this country has been estimated at $70 billion in direct costs and up to $300 billion in indirect costs. Health insurance fraud alone is estimated to cost American businesses $100 billion. Theft, warranty fraud, and counterfeiting of computer hardware totaled $3 billion in 1994. A threat analysis is a prerequisite to any security system design to assess the vulnerabilities with respect to the anticipated threat. Having established a comprehensive definition of the threat, crime prevention, detection, and threat assessment technologies can be used to address these criminal activities. This talk will outline the process used to design a security system regardless of the level of security. This methodology has been applied to many applications including: government high security facilities; residential and commercial intrusion detection and assessment; anti-counterfeiting/fraud detection technologies (counterfeit currency, cellular phone billing, credit card fraud, health care fraud, passport, green cards, and questionable documents); industrial espionage detection and prevention (intellectual property, computer chips, etc.); and security barrier technology (creation of delay such as gates, vaults, etc.).

  3. Public safety answering point readiness for wireless E-911 in New York State.

    PubMed

    Bailey, Bob W; Scott, Jay M; Brown, Lawrence H

    2003-01-01

    To determine the level of wireless enhanced 911 readiness among New York's primary public safety answering points. This descriptive study utilized a simple, single-page survey that was distributed in August 2001, with telephone follow-up concluding in January 2002. Surveys were distributed to directors of the primary public safety answering points in each of New York's 62 counties. Information was requested regarding current readiness for providing wireless enhanced 911 service, hardware and software needs for implementing the service, and the estimated costs for obtaining the necessary hardware and software. Two directors did not respond and could not be contacted by telephone; three declined participation; one did not operate an answering point; and seven provided incomplete responses, resulting in usable data from 49 (79%) of the state's public safety answering points. Only 27% of the responding public safety answering points were currently wireless enhanced 911 ready. Specific needs included obtaining or upgrading computer systems (16%), computer-aided dispatch systems (53%), mapping software (71%), telephone systems (27%), and local exchange carrier trunk lines (42%). The total estimated hardware and software costs for achieving wireless enhanced 911 readiness was between 16 million and 20 million dollars. New York's primary public safety answering points are not currently ready to provide wireless enhanced 911 service, and the cost for achieving readiness could be as high as 20 million dollars.

  4. Remedial Action Assessment System: A computer-based methodology for conducting feasibility studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, M.K.; Buelt, J.L.; Stottlemyre, J.A.

    1991-02-01

    Because of the complexity and number of potential waste sites facing the US Department of Energy (DOE) for potential cleanup, DOE is supporting the development of a computer-based methodology to streamline the remedial investigation/feasibility study process. The Remedial Action Assessment System (RAAS) can be used for screening, linking, and evaluating established technology processes in support of conducting feasibility studies. It is also intended to do the same in support of corrective measures studies. The user interface employs menus, windows, help features, and graphical information while RAAS is in operation. Object-oriented programming is used to link unit processes into sets of compatible processes that form appropriate remedial alternatives. Once the remedial alternatives are formed, the RAAS methodology can evaluate them in terms of effectiveness, implementability, and cost. RAAS will access a user-selected risk assessment code to determine the reduction of risk after remedial action by each recommended alternative. The methodology will also help determine the implementability of the remedial alternatives at a site and access cost estimating tools to provide estimates of capital, operating, and maintenance costs. This paper presents the characteristics of two RAAS prototypes currently being developed. These include the RAAS Technology Information System, which accesses graphical, tabular and textual information about technologies, and the main RAAS methodology, which screens, links, and evaluates remedial technologies. 4 refs., 3 figs., 1 tab.

  5. [Monetary value of the human costs of road traffic injuries in Spain].

    PubMed

    Martínez Pérez, Jorge Eduardo; Sánchez Martínez, Fernando Ignacio; Abellán Perpiñán, José María; Pinto Prades, José Luis

    2015-09-01

    Cost-benefit analyses in the field of road safety compute human costs as a key component of total costs. The present article presents two studies promoted by the Directorate-General for Traffic aimed at obtaining official values for the costs associated with fatal and non-fatal traffic injuries in Spain. We combined the contingent valuation approach and the (modified) standard gamble technique in two surveys administered to large representative samples (n1=2,020, n2=2,000) of the Spanish population. The monetary value of preventing a fatality was estimated to be 1.4 million euros. Values of 219,000 and 6,100 euros were obtained for minor and severe non-fatal injuries, respectively. These figures are comparable to those observed in neighboring countries. Copyright © 2014 SESPAS. Published by Elsevier Espana. All rights reserved.

  6. Parameter estimation using weighted total least squares in the two-compartment exchange model.

    PubMed

    Garpebring, Anders; Löfstedt, Tommy

    2018-01-01

    The linear least squares (LLS) estimator provides a fast approach to parameter estimation in the linearized two-compartment exchange model. However, the LLS method may introduce a bias through correlated noise in the system matrix of the model. The purpose of this work is to present a new estimator for the linearized two-compartment exchange model that takes this noise into account. To account for the noise in the system matrix, we developed an estimator based on the weighted total least squares (WTLS) method. Using simulations, the proposed WTLS estimator was compared, in terms of accuracy and precision, to an LLS estimator and a nonlinear least squares (NLLS) estimator. The WTLS method improved the accuracy compared to the LLS method to levels comparable to the NLLS method. This improvement was at the expense of increased computational time; however, the WTLS was still faster than the NLLS method. At high signal-to-noise ratio all methods provided similar precisions while inconclusive results were observed at low signal-to-noise ratio. The proposed method provides improvements in accuracy compared to the LLS method, however, at an increased computational cost. Magn Reson Med 79:561-567, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
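
    The paper's weighted total least squares estimator is more involved than can be shown here, but the underlying idea, allowing for errors in the system matrix rather than only in the response, can be illustrated with a plain (unweighted) total least squares fit via the SVD, compared against ordinary least squares on synthetic data.

```python
# Total least squares (TLS) via the SVD: unlike ordinary least squares, TLS
# allows errors in the system matrix itself, which is the issue the WTLS
# estimator addresses (the full weighted version is more involved).
# The synthetic data below are invented for illustration.
import numpy as np

rng = np.random.default_rng(4)
n, true_params = 500, np.array([1.5, -0.7])
A_clean = np.column_stack([np.linspace(0, 1, n), np.ones(n)])
y_clean = A_clean @ true_params

A_noisy = A_clean + rng.normal(0, 0.1, A_clean.shape)   # noise in the system matrix
y_noisy = y_clean + rng.normal(0, 0.1, n)

# Ordinary least squares ignores the errors in A and is biased here.
beta_ols = np.linalg.lstsq(A_noisy, y_noisy, rcond=None)[0]

# TLS: right singular vector of [A y] belonging to the smallest singular value.
_, _, Vt = np.linalg.svd(np.column_stack([A_noisy, y_noisy]))
v = Vt[-1]
beta_tls = -v[:-1] / v[-1]

print("true:", true_params, "OLS:", beta_ols.round(3), "TLS:", beta_tls.round(3))
```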

  7. Can we reduce the cost of illness with more compliant patients? An estimation of the effect of 100% compliance with hypertension treatment.

    PubMed

    Koçkaya, Güvenç; Wertheimer, Albert

    2011-06-01

    The current study was designed to calculate the direct cost of noncompliance of hypertensive patients to the US health system. Understanding these expenses can inform screening and education budget policy regarding expenditure levels that can be calculated to be cost-beneficial. The study was conducted in 3 parts. First, a computer search of National Institutes of Health Web sites and professional society Web sites for organizations with members that treat hypertension, and a PubMed search were performed to obtain the numbers required for calculations. Second, formulas were developed to estimate the risk of noncompliance and undiagnosed hypertension. Third, risk calculations were performed using the information obtained in part 1 and the formulas developed in part 2. Direct risk reduction for stroke caused by hypertension, heart attack, kidney disease, and heart disease was calculated for a 100% compliant strategy. Risk, case, and cost reduction for a 100% compliant strategy for hypertension were 32%, 8.5 million and US$ 72 billion, respectively. Our analysis suggests that society can spend up to the cost of noncompliance on screening, education, and prevention efforts in an attempt to reduce these costly and traumatic sequelae of poorly controlled hypertension, in light of the published analysis.
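
    The structure of such a calculation is essentially multiplicative, as sketched below: patients, noncompliance rate, excess event risk, and cost per event. The input figures are illustrative placeholders, not the study's data sources.

```python
# Back-of-envelope sketch of the noncompliance cost calculation structure:
# cases attributable to noncompliance times average cost per case. All input
# figures are illustrative placeholders, not the study's data.
hypertensive_patients = 65_000_000       # diagnosed or treated patients
noncompliance_rate = 0.50                # share not taking medication as prescribed
excess_event_risk = 0.26                 # added risk of a costly event if noncompliant
avg_cost_per_event = 8_500.0             # direct medical cost per event

avoidable_events = hypertensive_patients * noncompliance_rate * excess_event_risk
avoidable_cost = avoidable_events * avg_cost_per_event
print(f"avoidable events: {avoidable_events:,.0f}")
print(f"upper bound for cost-beneficial screening/education spend: ${avoidable_cost / 1e9:,.1f}B")
```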

  8. Multifidelity Analysis and Optimization for Supersonic Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Willcox, Karen; March, Andrew; Haas, Alex; Rajnarayan, Dev; Kays, Cory

    2010-01-01

    Supersonic aircraft design is a computationally expensive optimization problem and multifidelity approaches offer a significant opportunity to reduce design time and computational cost. This report presents tools developed to improve supersonic aircraft design capabilities including: aerodynamic tools for supersonic aircraft configurations; a systematic way to manage model uncertainty; and multifidelity model management concepts that incorporate uncertainty. The aerodynamic analysis tools developed are appropriate for use in a multifidelity optimization framework, and include four analysis routines to estimate the lift and drag of a supersonic airfoil, and a multifidelity supersonic drag code that estimates the drag of aircraft configurations with three different methods: an area rule method, a panel method, and an Euler solver. In addition, five multifidelity optimization methods are developed, which include local and global methods as well as gradient-based and gradient-free techniques.

  9. NASA/BLM Applications Pilot Test (APT), phase 2. Volume 1: Executive summary. [vegetation mapping and production estimation in northwestern Arizona

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Data from LANDSAT, low altitude color aerial photography, and ground visits were combined and used to produce vegetation cover maps and to estimate productivity of range, woodland, and forest resources in northwestern Arizona. A planning session, two workshops, and four status reviews were held to assist technology transfer from NASA. Computer aided digital classification of LANDSAT data was selected as a major source of input data. An overview is presented of the data processing, data collection, productivity estimation, and map verification techniques used. Cost analysis and digital LANDSAT digital products are also considered.

  10. The macroeconomic impact of pandemic influenza: estimates from models of the United Kingdom, France, Belgium and The Netherlands.

    PubMed

    Keogh-Brown, Marcus Richard; Smith, Richard D; Edmunds, John W; Beutels, Philippe

    2010-12-01

    The 2003 outbreak of severe acute respiratory syndrome (SARS) showed that infectious disease outbreaks can have notable macroeconomic impacts. The current H1N1 and potential H5N1 flu pandemics could have a much greater impact. Using a multi-sector single country computable general equilibrium model of the United Kingdom, France, Belgium and The Netherlands, together with disease scenarios of varying severity, we examine the potential economic cost of a modern pandemic. Policies of school closure, vaccination and antivirals, together with prophylactic absence from work are evaluated and their cost impacts are estimated. Results suggest GDP losses from the disease of approximately 0.5-2% but school closure and prophylactic absenteeism more than triples these effects. Increasing school closures from 4 weeks at the peak to entire pandemic closure almost doubles the economic cost, but antivirals and vaccinations seem worthwhile. Careful planning is therefore important to ensure expensive policies to mitigate the pandemic are effective in minimising illness and deaths.

  11. Financial risk protection from social health insurance.

    PubMed

    Barnes, Kayleigh; Mukherji, Arnab; Mullen, Patrick; Sood, Neeraj

    2017-09-01

    This paper estimates the impact of social health insurance on financial risk by utilizing data from a natural experiment created by the phased roll-out of a social health insurance program for the poor in India. We estimate the distributional impact of insurance on out-of-pocket costs and incorporate these results with a stylized expected utility model to compute associated welfare effects. We adjust the standard model, accounting for conditions of developing countries by incorporating consumption floors, informal borrowing, and asset selling, which allow us to separate the value of financial risk reduction from consumption smoothing and asset protection. Results show that insurance reduces out-of-pocket costs, particularly in higher quantiles of the distribution. We find reductions in the frequency and amount of money borrowed for health reasons. Finally, we find that the value of financial risk reduction outweighs total per household costs of the insurance program by two to five times. Copyright © 2017. Published by Elsevier B.V.
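
    One way to make the welfare calculation concrete is sketched below: draw out-of-pocket costs with and without insurance, apply a CRRA utility with a crude consumption floor, and compare certainty equivalents; the gain in certainty equivalent beyond the mean saving is the value of financial risk reduction. The distributions and parameters are invented, not the paper's estimates.

```python
# Sketch of valuing financial risk reduction with expected utility: compare the
# certainty equivalent of consumption under uninsured vs. insured out-of-pocket
# (OOP) health cost draws using CRRA utility with a crude consumption floor.
# Distributions and parameters are invented, not the paper's estimates.
import numpy as np

rng = np.random.default_rng(5)
n, income, sigma = 100_000, 1_000.0, 3.0        # sigma = relative risk aversion

oop_uninsured = rng.lognormal(mean=4.0, sigma=1.2, size=n)   # heavy right tail
oop_insured = 0.3 * oop_uninsured                            # insurance cuts OOP costs

def certainty_equivalent(oop):
    consumption = np.maximum(income - oop, 50.0)             # consumption floor
    mean_u = (consumption ** (1 - sigma) / (1 - sigma)).mean()
    return (mean_u * (1 - sigma)) ** (1 / (1 - sigma))       # invert CRRA utility

ce_gain = certainty_equivalent(oop_insured) - certainty_equivalent(oop_uninsured)
mean_saving = (oop_uninsured - oop_insured).mean()
print(f"mean OOP saving: {mean_saving:.0f}  welfare gain (certainty equivalent): {ce_gain:.0f}")
print(f"value of risk reduction beyond the mean saving: {ce_gain - mean_saving:.0f}")
```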

  12. Space shuttle/payload interface analysis. Volume 4: Business Risk and Value of Operations in Space (BRAVO). Part 3: Workbook

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A collection of blank worksheets for use on each BRAVO problem to be analyzed is supplied, for the purposes of recording the inputs for the BRAVO analysis, working out the definition of mission equipment, recording inputs to the satellite synthesis computer program, estimating satellite earth station costs, costing terrestrial systems, and cost effectiveness calculations. The group of analysts working BRAVO will normally use a set of worksheets on each problem, however, the workbook pages are of sufficiently good quality that the user can duplicate them, if more worksheet blanks are required than supplied. For Vol. 1, see N74-12493; for Vol. 2, see N74-14530.

  13. Design of surface-water data networks for regional information

    USGS Publications Warehouse

    Moss, Marshall E.; Gilroy, E.J.; Tasker, Gary D.; Karlinger, M.R.

    1982-01-01

    This report describes a technique, Network Analysis of Regional Information (NARI), and the existing computer procedures that have been developed for the specification of the regional information-cost relation for several statistical parameters of streamflow. The measure of information used is the true standard error of estimate of a regional logarithmic regression. The cost is a function of the number of stations at which hydrologic data are collected and the number of years for which the data are collected. The technique can be used to obtain either (1) a minimum cost network that will attain a prespecified accuracy and reliability or (2) a network that maximizes information given a set of budgetary and time constraints.

  14. System and method for design and optimization of grid connected photovoltaic power plant with multiple photovoltaic module technologies

    DOEpatents

    Thomas, Bex George; Elasser, Ahmed; Bollapragada, Srinivas; Galbraith, Anthony William; Agamy, Mohammed; Garifullin, Maxim Valeryevich

    2016-03-29

    A system and method of using one or more DC-DC/DC-AC converters and/or alternative devices allows strings of multiple module technologies to coexist within the same PV power plant. A computing (optimization) framework estimates the percentage allocation of PV power plant capacity to selected PV module technologies. The framework and its supporting components considers irradiation, temperature, spectral profiles, cost and other practical constraints to achieve the lowest levelized cost of electricity, maximum output and minimum system cost. The system and method can function using any device enabling distributed maximum power point tracking at the module, string or combiner level.

  15. The IEA/ORAU Long-Term Global Energy- CO2 Model: Personal Computer Version A84PC

    DOE Data Explorer

    Edmonds, Jae A.; Reilly, John M.; Boden, Thomas A. [CDIAC; Reynolds, S. E. [CDIAC; Barns, D. W.

    1995-01-01

    The IBM A84PC version of the Edmonds-Reilly model has the capability to calculate both CO2 and CH4 emission estimates by source and region. Population, labor productivity, end-use energy efficiency, income effects, price effects, resource base, technological change in energy production, environmental costs of energy production, market-penetration rate of energy-supply technology, solar and biomass energy costs, synfuel costs, and the number of forecast periods may be interactively inspected and altered producing a variety of global and regional CO2 and CH4 emission scenarios for 1975 through 2100. Users are strongly encouraged to see our instructions for downloading, installing, and running the model.

  16. Event Rates, Hospital Utilization, and Costs Associated with Major Complications of Diabetes: A Multicountry Comparative Analysis

    PubMed Central

    Clarke, Philip M.; Glasziou, Paul; Patel, Anushka; Chalmers, John; Woodward, Mark; Harrap, Stephen B.; Salomon, Joshua A.

    2010-01-01

    Background: Diabetes imposes a substantial burden globally in terms of premature mortality, morbidity, and health care costs. Estimates of economic outcomes associated with diabetes are essential inputs to policy analyses aimed at prevention and treatment of diabetes. Our objective was to estimate and compare event rates, hospital utilization, and costs associated with major diabetes-related complications in high-, middle-, and low-income countries. Methods and Findings: Incidence and history of diabetes-related complications, hospital admissions, and length of stay were recorded in 11,140 patients with type 2 diabetes participating in the Action in Diabetes and Vascular Disease (ADVANCE) study (mean age at entry 66 y). The probability of hospital utilization and number of days in hospital for major events associated with coronary disease, cerebrovascular disease, congestive heart failure, peripheral vascular disease, and nephropathy were estimated for three regions (Asia, Eastern Europe, and Established Market Economies) using multiple regression analysis. The resulting estimates of days spent in hospital were multiplied by regional estimates of the costs per hospital bed-day from the World Health Organization to compute annual acute and long-term costs associated with the different types of complications. To assist comparability, costs are reported in international dollars (Int$), which represent a hypothetical currency that allows for the same quantities of goods or services to be purchased regardless of country, standardized on purchasing power in the United States. A cost calculator accompanying this paper enables the estimation of costs for individual countries and translation of these costs into local currency units. The probability of attending a hospital following an event was highest for heart failure (93%–96% across regions) and lowest for nephropathy (15%–26%). The average numbers of days in hospital given at least one admission were greatest for stroke (17–32 d across region) and heart failure (16–31 d) and lowest for nephropathy (12–23 d). Considering regional differences, probabilities of hospitalization were lowest in Asia and highest in Established Market Economies; on the other hand, lengths of stay were highest in Asia and lowest in Established Market Economies. Overall estimated annual hospital costs for patients with none of the specified events or event histories ranged from Int$76 in Asia to Int$296 in Established Market Economies. All complications included in this analysis led to significant increases in hospital costs; coronary events, cerebrovascular events, and heart failure were the most costly, at more than Int$1,800, Int$3,000, and Int$4,000 in Asia, Eastern Europe, and Established Market Economies, respectively. Conclusions: Major complications of diabetes significantly increase hospital use and costs across various settings and are likely to impose a high economic burden on health care systems. Please see later in the article for the Editors' Summary PMID:20186272

  17. Improving Conceptual Design for Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1998-01-01

    This report summarizes activities performed during the second year of a three year cooperative agreement between NASA - Langley Research Center and Georgia Tech. Year 1 of the project resulted in the creation of a new Cost and Business Assessment Model (CABAM) for estimating the economic performance of advanced reusable launch vehicles including non-recurring costs, recurring costs, and revenue. The current year (second year) activities were focused on the evaluation of automated, collaborative design frameworks (computation architectures or computational frameworks) for automating the design process in advanced space vehicle design. Consistent with NASA's new thrust area in developing and understanding Intelligent Synthesis Environments (ISE), the goals of this year's research efforts were to develop and apply computer integration techniques and near-term computational frameworks for conducting advanced space vehicle design. NASA - Langley (VAB) has taken a lead role in developing a web-based computing architecture within which the designer can interact with disciplinary analysis tools through a flexible web interface. The advantages of this approach are: 1) flexible access to the designer interface through a simple web browser (e.g. Netscape Navigator), 2) ability to include existing 'legacy' codes, and 3) ability to include distributed analysis tools running on remote computers. To date, VAB's internal emphasis has been on developing this test system for the planetary entry mission under the joint Integrated Design System (IDS) program with NASA - Ames and JPL. Georgia Tech's complementary goals this year were to: 1) examine an alternate 'custom' computational architecture for the three-discipline IDS planetary entry problem to assess the advantages and disadvantages relative to the web-based approach, and 2) develop and examine a web-based interface and framework for a typical launch vehicle design problem.

  18. Cost-of-living indexes and demographic change.

    PubMed

    Diamond, C A

    1990-06-01

    The Consumer Price Index (CPI), although not without problems, is the most often used mechanism for adjusting contracts for cost-of-living changes in the US. The US Bureau of Labor Statistics lists several problems associated with using the CPI as a cost-of-living index where the proportion of 2-worker families is increasing, population is shifting, and work week hours are changing. This study shows how to compute cost-of-living indexes which are inexpensive to update, use less restrictive assumptions about consumer preferences, do not require statistical estimation, and handle the problem of increasing numbers of families where both the husband and wife work. This study also attempts to determine how widely, in fact, the CPI varies from alternative true cost-of-living indexes, although in the end this de facto cost-of-living measure holds up quite well. In times of severe price inflation people change their preferences by substitution, necessitating a flexible cost-of-living index that accounts for this fundamental economic behavior.

  19. Microcomputer software to facilitate costing in pathology laboratories.

    PubMed Central

    Stilwell, J A; Woodford, F P

    1987-01-01

    A software program is described which will enable laboratory managers to calculate, for their laboratory over a 12-month period, the cost of each test or investigation and of components of that cost. These comprise the costs of direct labour, consumables, equipment maintenance and depreciation; allocated costs of intermediate operations--for example, specimen procurement, reception, and data processing; and apportioned indirect costs such as senior staff time as well as external overheads such as telephone charges, rent, and rates. Total annual expenditure on each type of test is also calculated. The principles on which the program is based are discussed. Considered in particular are the problems of apportioning indirect costs (which are considerable in clinical laboratory work) over different test costs, and the merits of different ways of estimating the amount or fraction of staff members' time spent on each kind of test. The computer program is Crown copyright but is available under licence from one of us (JAS). PMID:3654982

  20. A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding

    NASA Astrophysics Data System (ADS)

    Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae

    2017-12-01

    High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it finds good-quality motion vectors with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement in hardware. This paper proposes a new integer motion estimation algorithm which is designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimations of all prediction unit (PU) partitions. The algorithm consists of the three phases of zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). Then, all redundant search points are removed prior to the estimation of the motion costs, and the best search points are then selected for all PUs. Experimental results show that, compared to the conventional TZ search algorithm, the proposed algorithm decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84% and reduces the computational complexity by 54.54%.
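
    The redundancy-removal step can be sketched with plain set operations: pool the candidate search points requested by all PUs in a CU, evaluate the expensive matching cost once per unique point, and let each PU pick its best candidate. The PU names, points, and cost function below are placeholders, not the HEVC search patterns or rate-distortion cost.

    ```python
    # Minimal sketch of the redundancy-removal idea: pool candidate search points
    # requested by all PUs in a CU, evaluate the matching cost once per unique point,
    # then let each PU select its best candidate. Points and cost are placeholders.
    def matching_cost(point):              # stand-in for a SAD/rate-distortion cost
        x, y = point
        return abs(x - 3) + abs(y + 1)

    pu_candidates = {                      # hypothetical zonal-search points per PU
        "PU_2Nx2N": [(0, 0), (2, 0), (4, 0), (3, -1)],
        "PU_2NxN":  [(0, 0), (3, -1), (1, 2)],
    }
    unique_points = {p for pts in pu_candidates.values() for p in pts}
    costs = {p: matching_cost(p) for p in unique_points}   # each point costed once
    best = {pu: min(pts, key=costs.get) for pu, pts in pu_candidates.items()}
    print(best)                            # best search point per PU partition
    ```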

  1. Statistical methodologies for the control of dynamic remapping

    NASA Technical Reports Server (NTRS)

    Saltz, J. H.; Nicol, D. M.

    1986-01-01

    Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping two qualitatively different kinds of problems. One problem assumes that performance deteriorates gradually; the other assumes that it deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.

  2. Improving the performance of extreme learning machine for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Li, Jiaojiao; Du, Qian; Li, Wei; Li, Yunsong

    2015-05-01

    Extreme learning machine (ELM) and kernel ELM (KELM) can offer performance comparable to the standard powerful classifier, the support vector machine (SVM), but with much lower computational cost due to their extremely simple training step. However, their performance may be sensitive to several parameters, such as the number of hidden neurons. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets so as to greatly reduce computational cost. Other parameters, such as the steepness parameter in the sigmoidal activation function and the regularization parameter in the KELM, are also investigated. The experimental results show that classification performance is sensitive to these parameters; fortunately, simple selections will result in suboptimal performance.
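
    A minimal sketch of how such an empirical linear relationship could be calibrated and used: fit a slope and intercept from two small training sets, then extrapolate the suggested number of hidden neurons to a larger one. The calibration points and sizes below are hypothetical, not values from the paper.

    ```python
    # Illustrative sketch: estimate a linear rule n_hidden ~ a * n_train + b from two
    # small training-set sizes, then extrapolate to a large training set.
    # The two calibration points below are hypothetical, not taken from the paper.
    import numpy as np

    def fit_hidden_neuron_rule(calibration):
        """calibration: [(n_train_1, best_n_hidden_1), (n_train_2, best_n_hidden_2)]."""
        (x1, y1), (x2, y2) = calibration
        a = (y2 - y1) / (x2 - x1)          # slope of the empirical linear relationship
        b = y1 - a * x1                    # intercept
        return a, b

    a, b = fit_hidden_neuron_rule([(100, 220), (300, 580)])
    n_train_large = 5000
    print("suggested hidden neurons:", int(round(a * n_train_large + b)))
    ```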

  3. Probabilistic/Fracture-Mechanics Model For Service Life

    NASA Technical Reports Server (NTRS)

    Watkins, T., Jr.; Annis, C. G., Jr.

    1991-01-01

    Computer program makes probabilistic estimates of lifetime of engine and components thereof. Developed to fill need for more accurate life-assessment technique that avoids errors in estimated lives and provides for statistical assessment of levels of risk created by engineering decisions in designing system. Implements mathematical model combining techniques of statistics, fatigue, fracture mechanics, nondestructive analysis, life-cycle cost analysis, and management of engine parts. Used to investigate effects of such engine-component life-controlling parameters as return-to-service intervals, stresses, capabilities for nondestructive evaluation, and qualities of materials.

  4. Two-voice fundamental frequency estimation

    NASA Astrophysics Data System (ADS)

    de Cheveigné, Alain

    2002-05-01

    An algorithm is presented that estimates the fundamental frequencies of two concurrent voices or instruments. The algorithm models each voice as a periodic function of time, and jointly estimates both periods by cancellation according to a previously proposed method [de Cheveigné and Kawahara, Speech Commun. 27, 175-185 (1999)]. The new algorithm improves on the old in several respects: it allows an unrestricted search range, effectively avoids harmonic and subharmonic errors, is more accurate (it uses two-dimensional parabolic interpolation), and is computationally less costly. It remains subject to unavoidable errors when periods are in certain simple ratios and the task is inherently ambiguous. The algorithm is evaluated on a small database including speech, singing voice, and instrumental sounds. It can be extended in several ways: to decide the number of voices, to handle amplitude variations, and to estimate more than two voices (at the expense of increased processing cost and decreased reliability). It makes no use of instrument models, learned or otherwise, although it could usefully be combined with such models. [Work supported by the Cognitique programme of the French Ministry of Research and Technology.]

  5. Cost-benefit analysis of biopsy methods for suspicious mammographic lesions; discussion 994-5.

    PubMed

    Fahy, B N; Bold, R J; Schneider, P D; Khatri, V; Goodnight, J E

    2001-09-01

    We hypothesized that stereotactic core biopsy (SCB) is more cost-effective than needle-localized biopsy (NLB) for evaluation and treatment of mammographic lesions. A computer-generated mathematical model was developed based on clinical outcome modeling to estimate costs accrued during evaluation and treatment of suspicious mammographic lesions. Total costs were determined for evaluation and subsequent treatment of cancer when either SCB or NLB was used as the initial biopsy method. Cost was estimated by the cumulative work relative value units accrued. The risk of malignancy based on the Breast Imaging Reporting Data System (BIRADS) score and mammographic suspicion of ductal carcinoma in situ were varied to simulate common clinical scenarios. The main outcome measure was the total cost accumulated during evaluation and subsequent surgical therapy (if required). Evaluation of BIRADS 5 lesions (highly suggestive, risk of malignancy = 90%) resulted in equivalent relative value units for both techniques (SCB, 15.54; NLB, 15.47). Evaluation of lesions highly suspicious for ductal carcinoma in situ yielded similar total treatment relative value units (SCB, 11.49; NLB, 10.17). Only for evaluation of BIRADS 4 lesions (suspicious abnormality, risk of malignancy = 34%) was SCB more cost-effective than NLB (SCB, 7.65 vs. NLB, 15.66). No difference in cost-benefit was found when lesions highly suggestive of malignancy (BIRADS 5) or those suspicious for ductal carcinoma in situ were evaluated initially with SCB vs. NLB, thereby disproving the hypothesis. Only for intermediate-risk lesions (BIRADS 4) did initial evaluation with SCB yield a greater cost savings than with NLB.

  6. Cost-effectiveness of external cephalic version for term breech presentation

    PubMed Central

    2010-01-01

    Background External cephalic version (ECV) is recommended by the American College of Obstetricians and Gynecologists to convert a breech fetus to vertex position and reduce the need for cesarean delivery. The goal of this study was to determine the incremental cost-effectiveness ratio, from society's perspective, of ECV compared to scheduled cesarean for term breech presentation. Methods A computer-based decision model (TreeAge Pro 2008, TreeAge Software, Inc.) was developed for a hypothetical base case parturient presenting with a term singleton breech fetus with no contraindications for vaginal delivery. The model incorporated actual hospital costs (e.g., $8,023 for cesarean and $5,581 for vaginal delivery), utilities to quantify health-related quality of life, and probabilities based on analysis of published literature of successful ECV trial, spontaneous reversion, mode of delivery, and need for unanticipated emergency cesarean delivery. The primary endpoint was the incremental cost-effectiveness ratio in dollars per quality-adjusted year of life gained. A threshold of $50,000 per quality-adjusted life-year (QALY) was used to determine cost-effectiveness. Results The incremental cost-effectiveness of ECV, assuming a baseline 58% success rate, equaled $7,900/QALY. If the estimated probability of successful ECV is less than 32%, then ECV costs more to society and has poorer QALYs for the patient. However, when the probability of successful ECV was between 32% and 63%, ECV cost more than cesarean delivery but with greater associated QALYs, such that the cost-effectiveness ratio was less than $50,000/QALY. If the probability of successful ECV was greater than 63%, the computer modeling indicated that a trial of ECV is less costly and with better QALYs than a scheduled cesarean. The cost-effectiveness of a trial of ECV is most sensitive to its probability of success, and not to the probabilities of a cesarean after ECV, spontaneous reversion to breech, successful second ECV trial, or adverse outcome from emergency cesarean. Conclusions From society's perspective, ECV trial is cost-effective when compared to a scheduled cesarean for breech presentation provided the probability of successful ECV is > 32%. Improved algorithms are needed to more precisely estimate the likelihood that a patient will have a successful ECV. PMID:20092630
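
    The decision rule used in this kind of analysis is compact enough to sketch: compute the incremental cost-effectiveness ratio (ICER) of an ECV trial versus scheduled cesarean and compare it with the willingness-to-pay threshold. The costs and QALYs below are placeholders, not outputs of the study's decision model.

    ```python
    # Minimal sketch of the cost-effectiveness decision rule: ICER of strategy A
    # versus strategy B, compared against a willingness-to-pay (WTP) threshold.
    # Costs and QALYs below are illustrative placeholders, not the paper's results.

    def icer(cost_new, qaly_new, cost_old, qaly_old):
        d_cost, d_qaly = cost_new - cost_old, qaly_new - qaly_old
        if d_qaly <= 0:
            # dominated (costs more, no QALY gain) or dominant (saves money) cases
            return float("inf") if d_cost > 0 else float("-inf")
        return d_cost / d_qaly

    WTP = 50_000  # $/QALY threshold used in the study
    ratio = icer(cost_new=7_200.0, qaly_new=0.995, cost_old=7_000.0, qaly_old=0.970)
    verdict = "cost-effective" if ratio < WTP else "not cost-effective"
    print(f"ICER = ${ratio:,.0f}/QALY -> {verdict}")
    ```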

  7. Advanced Machine Learning Emulators of Radiative Transfer Models

    NASA Astrophysics Data System (ADS)

    Camps-Valls, G.; Verrelst, J.; Martino, L.; Vicent, J.

    2017-12-01

    Physically-based model inversion methodologies are based on physical laws and established cause-effect relationships. A plethora of remote sensing applications rely on the physical inversion of a Radiative Transfer Model (RTM), which leads to physically meaningful bio-geo-physical parameter estimates. The process is, however, computationally expensive and needs expert knowledge for the selection of the RTM, its parametrization, and the look-up table generation, as well as for its inversion. Mimicking complex codes with statistical nonlinear machine learning algorithms has very recently become the natural alternative. Emulators are statistical constructs able to approximate the RTM, although at a fraction of the computational cost, providing an estimation of uncertainty and estimations of the gradient or finite integral forms. We review the field and recent advances in the emulation of RTMs with machine learning models. We posit Gaussian processes (GPs) as the proper framework to tackle the problem. Furthermore, we introduce an automatic methodology to construct emulators for costly RTMs. The Automatic Gaussian Process Emulator (AGAPE) methodology combines the interpolation capabilities of GPs with the accurate design of an acquisition function that favours sampling in low-density regions and flatness of the interpolation function. We illustrate the good capabilities of our emulators in toy examples, the leaf- and canopy-level PROSPECT and PROSAIL RTMs, and the construction of an optimal look-up table for atmospheric correction based on MODTRAN5.
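
    As a concrete illustration of the emulation idea (not the AGAPE algorithm itself, which adds an acquisition-function-driven design), the sketch below trains a Gaussian-process surrogate on a handful of runs of an expensive function and then predicts, with uncertainty, at new inputs. The expensive_rtm function is a hypothetical stand-in for an RTM such as PROSAIL.

    ```python
    # Hedged sketch of emulation: replace an expensive model with a Gaussian-process
    # surrogate trained on a small number of model runs, then predict cheaply with
    # an uncertainty estimate. "expensive_rtm" is a stand-in, not an actual RTM.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_rtm(x):                  # placeholder for a costly RTM evaluation
        return np.sin(3 * x) + 0.5 * x**2

    X_train = np.linspace(0, 2, 12).reshape(-1, 1)   # small training design
    y_train = expensive_rtm(X_train).ravel()

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(X_train, y_train)

    X_new = np.linspace(0, 2, 5).reshape(-1, 1)
    mean, std = gp.predict(X_new, return_std=True)   # cheap prediction + uncertainty
    print(np.c_[X_new.ravel(), mean, std])
    ```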

  8. A Multi-Sensorial Simultaneous Localization and Mapping (SLAM) System for Low-Cost Micro Aerial Vehicles in GPS-Denied Environments

    PubMed Central

    López, Elena; García, Sergio; Barea, Rafael; Bergasa, Luis M.; Molinos, Eduardo J.; Arroyo, Roberto; Romera, Eduardo; Pardo, Samuel

    2017-01-01

    One of the main challenges of aerial robot navigation in indoor or GPS-denied environments is position estimation using only the available onboard sensors. This paper presents a Simultaneous Localization and Mapping (SLAM) system that remotely calculates the pose and environment map of different low-cost commercial aerial platforms, whose onboard computing capacity is usually limited. The proposed system adapts to the sensory configuration of the aerial robot by integrating different state-of-the-art SLAM methods based on vision, laser and/or inertial measurements using an Extended Kalman Filter (EKF). To do this, a minimum onboard sensory configuration is supposed, consisting of a monocular camera, an Inertial Measurement Unit (IMU) and an altimeter. This makes it possible to improve the results of well-known monocular visual SLAM methods (LSD-SLAM and ORB-SLAM are tested and compared in this work) by solving scale ambiguity and providing additional information to the EKF. When payload and computational capabilities permit, a 2D laser sensor can be easily incorporated into the SLAM system, obtaining a local 2.5D map and a footprint estimation of the robot position that improves the 6D pose estimation through the EKF. We present some experimental results with two different commercial platforms, and validate the system by applying it to their position control. PMID:28397758

  9. Monitoring benthic algal communities: A comparison of targeted and coefficient sampling methods

    USGS Publications Warehouse

    Edwards, Matthew S.; Tinker, M. Tim

    2009-01-01

    Choosing an appropriate sample unit is a fundamental decision in the design of ecological studies. While numerous methods have been developed to estimate organism abundance, they differ in cost, accuracy and precision. Using both field data and computer simulation modeling, we evaluated the costs and benefits associated with two methods commonly used to sample benthic organisms in temperate kelp forests. One of these methods, the Targeted Sampling method, relies on different sample units, each "targeted" for a specific species or group of species, while the other method relies on coefficients that represent ranges of bottom cover obtained from visual estimates within standardized sample units. Both the field data and the computer simulations suggest that the two methods yield remarkably similar estimates of organism abundance and among-site variability, although the Coefficient method slightly underestimates variability among sample units when abundances are low. In contrast, the two methods differ considerably in the effort needed to sample these communities; the Targeted Sampling requires more time and twice the personnel to complete. We conclude that the Coefficient Sampling method may be better for environmental monitoring programs where changes in mean abundance are of central concern and resources are limiting, but that the Targeted Sampling method may be better for ecological studies where quantitative relationships among species and small-scale variability in abundance are of central concern.

  10. Computing rank-revealing QR factorizations of dense matrices.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C. H.; Quintana-Orti, G.; Mathematics and Computer Science

    1998-06-01

    We develop algorithms and implementations for computing rank-revealing QR (RRQR) factorizations of dense matrices. First, we develop an efficient block algorithm for approximating an RRQR factorization, employing a windowed version of the commonly used Golub pivoting strategy, aided by incremental condition estimation. Second, we develop efficiently implementable variants of guaranteed reliable RRQR algorithms for triangular matrices originally suggested by Chandrasekaran and Ipsen and by Pan and Tang. We suggest algorithmic improvements with respect to condition estimation, termination criteria, and Givens updating. By combining the block algorithm with one of the triangular postprocessing steps, we arrive at an efficient and reliable algorithm for computing an RRQR factorization of a dense matrix. Experimental results on IBM RS/6000 and SGI R8000 platforms show that this approach performs up to three times faster than the less reliable QR factorization with column pivoting as it is currently implemented in LAPACK, and comes within 15% of the performance of the LAPACK block algorithm for computing a QR factorization without any column exchanges. Thus, we expect this routine to be useful in many circumstances where numerical rank deficiency cannot be ruled out, but currently has been ignored because of the computational cost of dealing with it.
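
    The building block the abstract starts from, QR factorization with column pivoting, is enough to sketch how a rank-revealing factorization exposes numerical rank through the decay of the diagonal of R. The snippet below uses the LAPACK-backed pivoted QR in SciPy; it is a plain illustration, not the paper's windowed block algorithm or its triangular postprocessing.

    ```python
    # Sketch of rank estimation with a column-pivoted QR factorization: the magnitude
    # of the diagonal of R decays, and a tolerance cut gives the numerical rank.
    import numpy as np
    from scipy.linalg import qr

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 80))  # rank <= 50

    Q, R, piv = qr(A, mode="economic", pivoting=True)   # column-pivoted QR
    diag = np.abs(np.diag(R))
    tol = max(A.shape) * np.finfo(A.dtype).eps * diag[0]
    numerical_rank = int(np.sum(diag > tol))            # revealed by decay of |R[i, i]|
    print("estimated numerical rank:", numerical_rank)
    ```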

  11. A cost-benefit analysis of Wisconsin's screening, brief intervention, and referral to treatment program: adding the employer's perspective.

    PubMed

    Quanbeck, Andrew; Lang, Katharine; Enami, Kohei; Brown, Richard L

    2010-02-01

    A previous cost-benefit analysis found Screening, Brief Intervention, and Referral to Treatment (SBIRT) to be cost-beneficial from a societal perspective. This paper develops a cost-benefit model that includes the employer's perspective by considering the costs of absenteeism and impaired presenteeism due to problem drinking. We developed a Monte Carlo simulation model to estimate the costs and benefits of SBIRT implementation to an employer. We first presented the likely costs of problem drinking to a theoretical Wisconsin firm that does not currently provide SBIRT services. We then constructed a cost-benefit model in which the firm funds SBIRT for its employees. The net present value of SBIRT adoption was computed by comparing costs due to problem drinking both with and without the program. When absenteeism and impaired presenteeism costs were considered from the employer's perspective, the net present value of SBIRT adoption was $771 per employee. We concluded that implementing SBIRT is cost-beneficial from the employer's perspective and recommend that Wisconsin employers consider covering SBIRT services for their employees.
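
    The employer-perspective calculation boils down to a net-present-value comparison: discounted avoided costs of absenteeism and impaired presenteeism minus the cost of providing SBIRT. Below is a hedged Monte Carlo sketch of that comparison; the program cost, benefit distribution, horizon, and discount rate are illustrative placeholders, not the Wisconsin model's inputs.

    ```python
    # Hedged Monte Carlo sketch: NPV per employee = discounted avoided costs
    # (absenteeism + impaired presenteeism) minus the up-front program cost.
    # All distributions and values are illustrative, not the study's inputs.
    import numpy as np

    rng = np.random.default_rng(42)
    n_sims, years, discount = 10_000, 3, 0.03

    program_cost = 150.0                                     # per employee, year 0
    avoided = rng.normal(loc=350.0, scale=120.0, size=(n_sims, years))  # yearly benefit
    disc = 1.0 / (1.0 + discount) ** np.arange(1, years + 1)

    npv = (avoided * disc).sum(axis=1) - program_cost
    print(f"mean NPV per employee: ${npv.mean():,.0f}; P(NPV > 0) = {np.mean(npv > 0):.2f}")
    ```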

  12. Computational circular dichroism estimation for point-of-care diagnostics via vortex half-wave retarders

    NASA Astrophysics Data System (ADS)

    Haider, Shahid A.; Tran, Megan Y.; Wong, Alexander

    2018-02-01

    Observing the circular dichroism (CD) caused by organic molecules in biological fluids can provide powerful indicators of patient health and provide diagnostic clues for treatment. Methods for this kind of analysis involve tabletop devices that weigh tens of kilograms with costs on the order of tens of thousands of dollars, making them prohibitive in point-of-care diagnostic applications. In an effort to reduce the size, cost, and complexity of CD estimation systems for point-of-care diagnostics, we propose a novel method for CD estimation that leverages a vortex half-wave retarder in between two linear polarizers and a two-dimensional photodetector array to provide an overall complexity reduction in the system. This enables the measurement of polarization variations across multiple polarizations after they interact with a biological sample, simultaneously, without the need for mechanical actuation. We further discuss design considerations of this methodology in the context of practical applications to point-of-care diagnostics.

  13. Tuning support vector machines for minimax and Neyman-Pearson classification.

    PubMed

    Davenport, Mark A; Baraniuk, Richard G; Scott, Clayton D

    2010-10-01

    This paper studies the training of support vector machine (SVM) classifiers with respect to the minimax and Neyman-Pearson criteria. In principle, these criteria can be optimized in a straightforward way using a cost-sensitive SVM. In practice, however, because these criteria require especially accurate error estimation, standard techniques for tuning SVM parameters, such as cross-validation, can lead to poor classifier performance. To address this issue, we first prove that the usual cost-sensitive SVM, here called the 2C-SVM, is equivalent to another formulation called the 2nu-SVM. We then exploit a characterization of the 2nu-SVM parameter space to develop a simple yet powerful approach to error estimation based on smoothing. In an extensive experimental study, we demonstrate that smoothing significantly improves the accuracy of cross-validation error estimates, leading to dramatic performance gains. Furthermore, we propose coordinate descent strategies that offer significant gains in computational efficiency, with little to no loss in performance.

  14. Cost Effectiveness of Ofatumumab Plus Chlorambucil in First-Line Chronic Lymphocytic Leukaemia in Canada.

    PubMed

    Herring, William; Pearson, Isobel; Purser, Molly; Nakhaipour, Hamid Reza; Haiderali, Amin; Wolowacz, Sorrel; Jayasundara, Kavisha

    2016-01-01

    Our objective was to estimate the cost effectiveness of ofatumumab plus chlorambucil (OChl) versus chlorambucil in patients with chronic lymphocytic leukaemia for whom fludarabine-based therapies are considered inappropriate from the perspective of the publicly funded healthcare system in Canada. A semi-Markov model (3-month cycle length) used survival curves to govern progression-free survival (PFS) and overall survival (OS). Efficacy and safety data and health-state utility values were estimated from the COMPLEMENT-1 trial. Post-progression treatment patterns were based on clinical guidelines, Canadian treatment practices and published literature. Total and incremental expected lifetime costs (in Canadian dollars [$Can], year 2013 values), life-years and quality-adjusted life-years (QALYs) were computed. Uncertainty was assessed via deterministic and probabilistic sensitivity analyses. The discounted lifetime health and economic outcomes estimated by the model showed that, compared with chlorambucil, first-line treatment with OChl led to an increase in QALYs (0.41) and total costs ($Can27,866) and to an incremental cost-effectiveness ratio (ICER) of $Can68,647 per QALY gained. In deterministic sensitivity analyses, the ICER was most sensitive to the modelling time horizon and to the extrapolation of OS treatment effects beyond the trial duration. In probabilistic sensitivity analysis, the probability of cost effectiveness at a willingness-to-pay threshold of $Can100,000 per QALY gained was 59 %. Base-case results indicated that improved overall response and PFS for OChl compared with chlorambucil translated to improved quality-adjusted life expectancy. Sensitivity analysis suggested that OChl is likely to be cost effective subject to uncertainty associated with the presence of any long-term OS benefit and the model time horizon.

  15. Efficient implementation of a real-time estimation system for thalamocortical hidden Parkinsonian properties

    NASA Astrophysics Data System (ADS)

    Yang, Shuangming; Deng, Bin; Wang, Jiang; Li, Huiyan; Liu, Chen; Fietkiewicz, Chris; Loparo, Kenneth A.

    2017-01-01

    Real-time estimation of dynamical characteristics of thalamocortical cells, such as dynamics of ion channels and membrane potentials, is useful and essential in the study of the thalamus in the Parkinsonian state. However, measuring the dynamical properties of ion channels is extremely challenging experimentally and even impossible in clinical applications. This paper presents and evaluates a real-time estimation system for thalamocortical hidden properties. For the sake of efficiency, we use a field-programmable gate array (FPGA) for strictly hardware-based computation and algorithm optimization. In the proposed system, the FPGA-based unscented Kalman filter is implemented into a conductance-based TC neuron model. Since the complexity of the TC neuron model restrains its hardware implementation in a parallel structure, a cost-efficient model is proposed to reduce the resource cost while retaining the relevant ionic dynamics. Experimental results demonstrate the real-time capability to estimate thalamocortical hidden properties with high precision under both normal and Parkinsonian states. While it is applied here to estimate the hidden properties of the thalamus and explore the mechanism of the Parkinsonian state, the proposed method can also be useful in the dynamic clamp technique of electrophysiological experiments, neural control engineering and brain-machine interface studies.

  16. A cascaded two-step Kalman filter for estimation of human body segment orientation using MEMS-IMU.

    PubMed

    Zihajehzadeh, S; Loh, D; Lee, M; Hoskinson, R; Park, E J

    2014-01-01

    Orientation of human body segments is an important quantity in many biomechanical analyses. To get robust and drift-free 3-D orientation, raw data from miniature body worn MEMS-based inertial measurement units (IMU) should be blended in a Kalman filter. Aiming at less computational cost, this work presents a novel cascaded two-step Kalman filter orientation estimation algorithm. Tilt angles are estimated in the first step of the proposed cascaded Kalman filter. The estimated tilt angles are passed to the second step of the filter for yaw angle calculation. The orientation results are benchmarked against the ones from a highly accurate tactical grade IMU. Experimental results reveal that the proposed algorithm provides robust orientation estimation in both kinematically and magnetically disturbed conditions.
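
    A stripped-down illustration of the two-step decomposition (without the Kalman filtering itself): step 1 derives tilt (roll, pitch) from the accelerometer, and step 2 computes a tilt-compensated yaw, assuming a magnetometer reading is available. Axis and sign conventions vary between sensor frames, so the formulas and readings below are a hedged example rather than the paper's filter.

    ```python
    # Simplified two-step sketch: tilt from the accelerometer first, then a
    # tilt-compensated yaw from a magnetometer. One common convention is used;
    # sensor values are illustrative and conventions depend on the device frame.
    import numpy as np

    def tilt_from_accel(ax, ay, az):
        roll = np.arctan2(ay, az)
        pitch = np.arctan2(-ax, np.hypot(ay, az))
        return roll, pitch

    def yaw_from_mag(mx, my, mz, roll, pitch):
        # rotate the magnetometer reading back to the horizontal plane, then heading
        mxh = (mx * np.cos(pitch) + my * np.sin(roll) * np.sin(pitch)
               + mz * np.cos(roll) * np.sin(pitch))
        myh = my * np.cos(roll) - mz * np.sin(roll)
        return np.arctan2(-myh, mxh)

    roll, pitch = tilt_from_accel(0.10, -0.05, 9.78)          # m/s^2
    yaw = yaw_from_mag(22.0, 5.0, -40.0, roll, pitch)         # microtesla
    print("roll, pitch, yaw (deg):", np.degrees([roll, pitch, yaw]))
    ```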

  17. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    DOE PAGES

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; ...

    2016-06-09

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydro-geophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with “big data” processing and numerous large-scale numerical simulations. To tackle such difficulties, the Principal Component Geostatistical Approach (PCGA) has been proposed as a “Jacobian-free” inversion method that requires many fewer forward simulation runs for each iteration than the number of unknown parameters and measurements needed in the traditional inversion methods. PCGA can be conveniently linked to any multi-physics simulation software with independent parallel executions. In our paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed by the zeroth temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Moreover, only about 2,000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.

  19. 75 FR 47490 - Raisins Produced From Grapes Grown In California; Use of Estimated Trade Demand to Compute Volume...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-06

    ... this spring; and the potential for higher prices in the wine and juice markets, which compete for...-term benefits of this action are expected to outweigh the costs. The committee believes that with no... NS raisin producers benefit more from those raisins which are free tonnage, a lower free tonnage...

  20. Determining Optimal Machine Replacement Events with Periodic Inspection Intervals

    DTIC Science & Technology

    2013-03-01

    ...has some idea of the characteristic reliability inherent to that system. From assembly lines, to computers, to aircraft, quantities such as mean time to failure, mean time to critical failure, and others have been quantified to a great extent. Further, any entity concerned with cost will also have an

  1. Estimating Missing Unit Process Data in Life Cycle Assessment Using a Similarity-Based Approach.

    PubMed

    Hou, Ping; Cai, Jiarui; Qu, Shen; Xu, Ming

    2018-05-01

    In life cycle assessment (LCA), collecting unit process data from empirical sources (i.e., meter readings, operation logs/journals) is often costly and time-consuming. We propose a new computational approach to estimate missing unit process data relying solely on limited known data, based on a similarity-based link prediction method. The intuition is that similar processes in a unit process network tend to have similar material/energy inputs and waste/emission outputs. We use the ecoinvent 3.1 unit process data sets to test our method in four steps: (1) dividing the data sets into a training set and a test set; (2) randomly removing certain numbers of data in the test set indicated as missing; (3) using similarity-weighted means of various numbers of most similar processes in the training set to estimate the missing data in the test set; and (4) comparing estimated data with the original values to determine the performance of the estimation. The results show that missing data can be accurately estimated when less than 5% of the data are missing in one process. The estimation performance decreases as the percentage of missing data increases. This study provides a new approach to compile unit process data and demonstrates a promising potential of using computational approaches for LCA data compilation.
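
    Step (3) of the procedure, a similarity-weighted mean over the most similar processes, can be sketched directly. The code below uses cosine similarity over the observed flows and a hypothetical toy matrix in place of the ecoinvent data; the function and variable names are illustrative.

    ```python
    # Sketch of similarity-based imputation: estimate a missing flow for a target
    # process as the similarity-weighted mean of the k most similar processes that
    # report that flow. The toy 'process x flow' data below are illustrative.
    import numpy as np

    def cosine_sim(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def impute(target_known, candidates, candidate_values, k=3):
        """target_known / candidates: vectors of flows observed everywhere;
        candidate_values: the flow to impute, reported by each candidate process."""
        sims = np.array([cosine_sim(target_known, c) for c in candidates])
        top = np.argsort(sims)[-k:]                 # k most similar processes
        w = sims[top] / sims[top].sum()             # similarity weights
        return float(w @ candidate_values[top])

    rng = np.random.default_rng(1)
    candidates = rng.random((10, 6))          # 10 similar processes, 6 shared flows
    candidate_values = rng.random(10)         # the flow that is missing in the target
    target_known = candidates[0] + 0.05 * rng.random(6)
    print("imputed value:", impute(target_known, candidates, candidate_values))
    ```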

  2. Cost Function Network-based Design of Protein-Protein Interactions: predicting changes in binding affinity.

    PubMed

    Viricel, Clément; de Givry, Simon; Schiex, Thomas; Barbe, Sophie

    2018-02-20

    Accurate and economic methods to predict the change in protein binding free energy upon mutation are imperative to accelerate the design of proteins for a wide range of applications. Free energy is defined by enthalpic and entropic contributions. Following the recent progress of Artificial Intelligence-based algorithms for guaranteed NP-hard energy optimization and partition function computation, it becomes possible to quickly compute minimum energy conformations and to reliably estimate the entropic contribution of side-chains in the change of free energy of large protein interfaces. Using guaranteed Cost Function Network algorithms, Rosetta energy functions and Dunbrack's rotamer library, we developed and assessed EasyE and JayZ, two methods for binding affinity estimation that ignore or include conformational entropic contributions, on a large benchmark of binding affinity experimental measures. While both approaches outperform most established tools, we observe that side-chain conformational entropy brings little or no improvement on most systems but becomes crucial in some rare cases. Availability: open-source Python/C++ code at sourcesup.renater.fr/projects/easy-jayz. Contact: thomas.schiex@inra.fr and sophie.barbe@insa-toulouse.fr. Supplementary data are available at Bioinformatics online.

  3. Incorporating approximation error in surrogate based Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.; Li, W.; Wu, L.

    2015-12-01

    There is increasing interest in applying surrogates in inverse Bayesian modeling to reduce repetitive evaluations of the original model and thereby save computational cost. However, the approximation error of the surrogate model is usually overlooked, partly because it is difficult to evaluate the approximation error for many surrogates. Previous studies have shown that the direct combination of surrogates and Bayesian methods (e.g., Markov Chain Monte Carlo, MCMC) may lead to biased estimations when the surrogate cannot emulate the highly nonlinear original system. This problem can be alleviated by implementing MCMC in a two-stage manner. However, the computational cost is still high since a relatively large number of original model simulations are required. In this study, we illustrate the importance of incorporating approximation error in inverse Bayesian modeling. A Gaussian process (GP) is chosen to construct the surrogate for its convenience in approximation error evaluation. Numerical cases of Bayesian experimental design and parameter estimation for contaminant source identification are used to illustrate this idea. It is shown that, once the surrogate approximation error is well incorporated into the Bayesian framework, promising results can be obtained even when the surrogate is used directly, and no further original model simulations are required.
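
    One way to read "incorporating approximation error" is to add the GP surrogate's predictive variance to the observation-error variance inside a Gaussian likelihood, so that parameter regions where the surrogate is uncertain are penalized less sharply. A minimal sketch of that construction follows; the observations, predictive moments, and noise level are illustrative.

    ```python
    # Sketch of the key point: when a GP surrogate replaces the original model inside
    # a Bayesian likelihood, its predictive variance can be added to the data-error
    # variance so that approximation error is not ignored. Values are illustrative.
    import numpy as np

    def log_likelihood(y_obs, surrogate_mean, surrogate_var, sigma_obs):
        total_var = sigma_obs**2 + surrogate_var     # inflate by GP approximation error
        resid = y_obs - surrogate_mean
        return float(-0.5 * np.sum(resid**2 / total_var + np.log(2 * np.pi * total_var)))

    y_obs = np.array([1.02, 0.98, 1.10])             # observed data
    mu = np.array([1.00, 1.00, 1.00])                # GP predictive mean at a parameter
    var = np.array([0.002, 0.010, 0.050])            # GP predictive variance (approx. error)
    print(log_likelihood(y_obs, mu, var, sigma_obs=0.05))
    ```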

  4. Comparison of least squares and exponential sine sweep methods for Parallel Hammerstein Models estimation

    NASA Astrophysics Data System (ADS)

    Rebillat, Marc; Schoukens, Maarten

    2018-05-01

    Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are at the same time easy to interpret and to estimate. One way to estimate PHM relies on the fact that the estimation problem is linear in the parameters and thus that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means of estimating PHM consists in using parametric or non-parametric exponential sine sweep (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method but that it is also less accurate. Furthermore, the LS method needs parameters to be set in advance, whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.
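
    The "linear in the parameters" property that the LS route exploits is easy to see with polynomial input nonlinearities followed by FIR filters: stacking delayed powers of the input into a regression matrix reduces PHM estimation to ordinary least squares. The sketch below (unregularized, with illustrative branch and filter orders) demonstrates this on simulated data; it omits the regularization that the article proposes.

    ```python
    # Sketch of Parallel Hammerstein estimation by least squares: with polynomial
    # branches g_k(u) = u**k followed by FIR filters, the output is linear in the
    # FIR coefficients, so ordinary least squares applies. Orders are illustrative.
    import numpy as np

    def phm_regressor(u, n_branches=3, n_taps=8):
        """Columns are delayed versions of u**k for each branch k = 1..n_branches."""
        N = len(u)
        cols = []
        for k in range(1, n_branches + 1):
            uk = u**k
            for d in range(n_taps):
                col = np.zeros(N)
                col[d:] = uk[:N - d]
                cols.append(col)
        return np.column_stack(cols)

    rng = np.random.default_rng(0)
    u = rng.standard_normal(2000)
    Phi = phm_regressor(u)
    true_theta = rng.standard_normal(Phi.shape[1])
    y = Phi @ true_theta + 0.01 * rng.standard_normal(len(u))   # simulated PHM output

    theta_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    print("max coefficient error:", np.max(np.abs(theta_hat - true_theta)))
    ```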

  5. Accuracy Maximization Analysis for Sensory-Perceptual Tasks: Computational Improvements, Filter Robustness, and Coding Advantages for Scaled Additive Noise

    PubMed Central

    Burge, Johannes

    2017-01-01

    Accuracy Maximization Analysis (AMA) is a recently developed Bayesian ideal observer method for task-specific dimensionality reduction. Given a training set of proximal stimuli (e.g. retinal images), a response noise model, and a cost function, AMA returns the filters (i.e. receptive fields) that extract the most useful stimulus features for estimating a user-specified latent variable from those stimuli. Here, we first contribute two technical advances that significantly reduce AMA’s compute time: we derive gradients of cost functions for which two popular estimators are appropriate, and we implement a stochastic gradient descent (AMA-SGD) routine for filter learning. Next, we show how the method can be used to simultaneously probe the impact on neural encoding of natural stimulus variability, the prior over the latent variable, noise power, and the choice of cost function. Then, we examine the geometry of AMA’s unique combination of properties that distinguish it from better-known statistical methods. Using binocular disparity estimation as a concrete test case, we develop insights that have general implications for understanding neural encoding and decoding in a broad class of fundamental sensory-perceptual tasks connected to the energy model. Specifically, we find that non-orthogonal (partially redundant) filters with scaled additive noise tend to outperform orthogonal filters with constant additive noise; non-orthogonal filters and scaled additive noise can interact to sculpt noise-induced stimulus encoding uncertainty to match task-irrelevant stimulus variability. Thus, we show that some properties of neural response thought to be biophysical nuisances can confer coding advantages to neural systems. Finally, we speculate that, if repurposed for the problem of neural systems identification, AMA may be able to overcome a fundamental limitation of standard subunit model estimation. As natural stimuli become more widely used in the study of psychophysical and neurophysiological performance, we expect that task-specific methods for feature learning like AMA will become increasingly important. PMID:28178266

  6. Robust estimation-free prescribed performance back-stepping control of air-breathing hypersonic vehicles without affine models

    NASA Astrophysics Data System (ADS)

    Bu, Xiangwei; Wu, Xiaoyan; Huang, Jiaqi; Wei, Daozhi

    2016-11-01

    This paper investigates the design of a novel estimation-free prescribed performance non-affine control strategy for the longitudinal dynamics of an air-breathing hypersonic vehicle (AHV) via back-stepping. The proposed control scheme is capable of guaranteeing tracking errors of velocity, altitude, flight-path angle, pitch angle and pitch rate with prescribed performance. By prescribed performance, we mean that the tracking error is limited to a predefined arbitrarily small residual set, with convergence rate no less than a certain constant, exhibiting maximum overshoot less than a given value. Unlike traditional back-stepping designs, there is no need of an affine model in this paper. Moreover, both the tedious analytic and numerical computations of time derivatives of virtual control laws are completely avoided. In contrast to estimation-based strategies, the presented estimation-free controller possesses much lower computational costs, while successfully eliminating the potential problem of parameter drifting. Owing to its independence on an accurate AHV model, the studied methodology exhibits excellent robustness against system uncertainties. Finally, simulation results from a fully nonlinear model clarify and verify the design.

  7. Centralized Multi-Sensor Square Root Cubature Joint Probabilistic Data Association

    PubMed Central

    Liu, Jun; Li, Gang; Qi, Lin; Li, Yaowen; He, You

    2017-01-01

    This paper focuses on the tracking problem of multiple targets with multiple sensors in a nonlinear cluttered environment. To avoid Jacobian matrix computation and scaling parameter adjustment, improve numerical stability, and acquire more accurate estimated results for centralized nonlinear tracking, a novel centralized multi-sensor square root cubature joint probabilistic data association algorithm (CMSCJPDA) is proposed. Firstly, the multi-sensor tracking problem is decomposed into several single-sensor multi-target tracking problems, which are sequentially processed during the estimation. Then, in each sensor, the assignment of its measurements to target tracks is accomplished on the basis of joint probabilistic data association (JPDA), and a weighted probability fusion method with square root version of a cubature Kalman filter (SRCKF) is utilized to estimate the targets’ state. With the measurements in all sensors processed CMSCJPDA is derived and the global estimated state is achieved. Experimental results show that CMSCJPDA is superior to the state-of-the-art algorithms in the aspects of tracking accuracy, numerical stability, and computational cost, which provides a new idea to solve multi-sensor tracking problems. PMID:29113085

  8. Centralized Multi-Sensor Square Root Cubature Joint Probabilistic Data Association.

    PubMed

    Liu, Yu; Liu, Jun; Li, Gang; Qi, Lin; Li, Yaowen; He, You

    2017-11-05

    This paper focuses on the tracking problem of multiple targets with multiple sensors in a nonlinear cluttered environment. To avoid Jacobian matrix computation and scaling parameter adjustment, improve numerical stability, and acquire more accurate estimated results for centralized nonlinear tracking, a novel centralized multi-sensor square root cubature joint probabilistic data association algorithm (CMSCJPDA) is proposed. Firstly, the multi-sensor tracking problem is decomposed into several single-sensor multi-target tracking problems, which are sequentially processed during the estimation. Then, in each sensor, the assignment of its measurements to target tracks is accomplished on the basis of joint probabilistic data association (JPDA), and a weighted probability fusion method with square root version of a cubature Kalman filter (SRCKF) is utilized to estimate the targets' state. With the measurements in all sensors processed CMSCJPDA is derived and the global estimated state is achieved. Experimental results show that CMSCJPDA is superior to the state-of-the-art algorithms in the aspects of tracking accuracy, numerical stability, and computational cost, which provides a new idea to solve multi-sensor tracking problems.

  9. Low-dose chest computed tomography for lung cancer screening among Hodgkin lymphoma survivors: a cost-effectiveness analysis.

    PubMed

    Wattson, Daniel A; Hunink, M G Myriam; DiPiro, Pamela J; Das, Prajnan; Hodgson, David C; Mauch, Peter M; Ng, Andrea K

    2014-10-01

    Hodgkin lymphoma (HL) survivors face an increased risk of treatment-related lung cancer. Screening with low-dose computed tomography (LDCT) may allow detection of early stage, resectable cancers. We developed a Markov decision-analytic and cost-effectiveness model to estimate the merits of annual LDCT screening among HL survivors. Population databases and HL-specific literature informed key model parameters, including lung cancer rates and stage distribution, cause-specific survival estimates, and utilities. Relative risks accounted for radiation therapy (RT) technique, smoking status (>10 pack-years or current smokers vs not), age at HL diagnosis, time from HL treatment, and excess radiation from LDCTs. LDCT assumptions, including expected stage-shift, false-positive rates, and likely additional workup were derived from the National Lung Screening Trial and preliminary results from an internal phase 2 protocol that performed annual LDCTs in 53 HL survivors. We assumed a 3% discount rate and a willingness-to-pay (WTP) threshold of $50,000 per quality-adjusted life year (QALY). Annual LDCT screening was cost effective for all smokers. A male smoker treated with mantle RT at age 25 achieved maximum QALYs by initiating screening 12 years post-HL, with a life expectancy benefit of 2.1 months and an incremental cost of $34,841/QALY. Among nonsmokers, annual screening produced a QALY benefit in some cases, but the incremental cost was not below the WTP threshold for any patient subsets. As age at HL diagnosis increased, earlier initiation of screening improved outcomes. Sensitivity analyses revealed that the model was most sensitive to the lung cancer incidence and mortality rates and expected stage-shift from screening. HL survivors are an important high-risk population that may benefit from screening, especially those treated in the past with large radiation fields including mantle or involved-field RT. Screening may be cost effective for all smokers but possibly not for nonsmokers despite a small life expectancy benefit. Copyright © 2014 Elsevier Inc. All rights reserved.
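
    The screening model described above is, at its core, a Markov cohort calculation: a cohort distribution over health states is advanced with yearly transition probabilities while discounted costs and QALY-weighted utilities accumulate. The sketch below shows that mechanic with a three-state toy model; the states, transition probabilities, costs, utilities, and discount rate are illustrative placeholders, not the parameters of this study.

    ```python
    # Toy Markov cohort cost-effectiveness calculation: yearly cycles, a transition
    # matrix over health states, and discounted costs/QALYs accumulated per cycle.
    # All numbers are illustrative, not the inputs of the screening model above.
    import numpy as np

    states = ["well", "lung_cancer", "dead"]
    P = np.array([[0.985, 0.010, 0.005],      # yearly transition probabilities
                  [0.000, 0.700, 0.300],
                  [0.000, 0.000, 1.000]])
    cost = np.array([120.0, 40_000.0, 0.0])   # annual cost per state
    utility = np.array([0.95, 0.60, 0.0])     # QALY weight per state
    discount = 0.03

    cohort = np.array([1.0, 0.0, 0.0])        # everyone starts in the "well" state
    total_cost = total_qaly = 0.0
    for year in range(30):
        d = 1.0 / (1.0 + discount) ** year
        total_cost += d * float(cohort @ cost)
        total_qaly += d * float(cohort @ utility)
        cohort = cohort @ P                   # advance the cohort one cycle
    print(f"discounted cost ${total_cost:,.0f}, QALYs {total_qaly:.2f}")
    ```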

  10. Improving Spleen Volume Estimation via Computer Assisted Segmentation on Clinically Acquired CT Scans

    PubMed Central

    Xu, Zhoubing; Gertz, Adam L.; Burke, Ryan P.; Bansal, Neil; Kang, Hakmook; Landman, Bennett A.; Abramson, Richard G.

    2016-01-01

    OBJECTIVES Multi-atlas fusion is a promising approach for computer-assisted segmentation of anatomical structures. The purpose of this study was to evaluate the accuracy and time efficiency of multi-atlas segmentation for estimating spleen volumes on clinically-acquired CT scans. MATERIALS AND METHODS Under IRB approval, we obtained 294 deidentified (HIPAA-compliant) abdominal CT scans on 78 subjects from a recent clinical trial. We compared five pipelines for obtaining splenic volumes: Pipeline 1–manual segmentation of all scans, Pipeline 2–automated segmentation of all scans, Pipeline 3–automated segmentation of all scans with manual segmentation for outliers on a rudimentary visual quality check, Pipelines 4 and 5–volumes derived from a unidimensional measurement of craniocaudal spleen length and three-dimensional splenic index measurements, respectively. Using Pipeline 1 results as ground truth, the accuracy of Pipelines 2–5 (Dice similarity coefficient [DSC], Pearson correlation, R-squared, and percent and absolute deviation of volume from ground truth) were compared for point estimates of splenic volume and for change in splenic volume over time. Time cost was also compared for Pipelines 1–5. RESULTS Pipeline 3 was dominant in terms of both accuracy and time cost. With a Pearson correlation coefficient of 0.99, average absolute volume deviation 23.7 cm3, and 1 minute per scan, Pipeline 3 yielded the best results. The second-best approach was Pipeline 5, with a Pearson correlation coefficient 0.98, absolute deviation 46.92 cm3, and 1 minute 30 seconds per scan. Manual segmentation (Pipeline 1) required 11 minutes per scan. CONCLUSION A computer-automated segmentation approach with manual correction of outliers generated accurate splenic volumes with reasonable time efficiency. PMID:27519156
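
    Two of the accuracy metrics quoted above are easy to make concrete: the Dice similarity coefficient (DSC) between binary segmentation masks and the Pearson correlation between volume estimates. A small sketch with toy arrays follows; the masks and volumes are illustrative, not data from the study.

    ```python
    # Small sketch of the two accuracy metrics used above: Dice similarity coefficient
    # between binary masks and Pearson correlation between volume estimates.
    import numpy as np

    def dice(mask_a, mask_b):
        inter = np.logical_and(mask_a, mask_b).sum()
        return 2.0 * inter / (mask_a.sum() + mask_b.sum())

    manual = np.zeros((10, 10), dtype=bool); manual[2:8, 2:8] = True   # toy masks
    auto = np.zeros((10, 10), dtype=bool);   auto[3:8, 2:8] = True
    print("DSC:", round(dice(manual, auto), 3))

    vol_manual = np.array([210.0, 180.0, 350.0, 260.0])     # cm^3, toy volumes
    vol_auto = np.array([215.0, 176.0, 340.0, 270.0])
    print("Pearson r:", round(float(np.corrcoef(vol_manual, vol_auto)[0, 1]), 3))
    ```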

  11. A Cost Comparison of Alternative Approaches to Distance Education in Developing Countries

    NASA Technical Reports Server (NTRS)

    Ventre, Gerard G.; Kalu, Alex

    1996-01-01

    This paper presents a cost comparison of three approaches to two-way interactive distance learning systems for developing countries. Included are costs for distance learning hardware, terrestrial and satellite communication links, and designing instruction for two-way interactive courses. As part of this project, FSEC is developing a 30-hour course in photovoltaic system design that will be used in a variety of experiments using the Advanced Communications Technology Satellite (ACTS). A primary goal of the project is to develop an instructional design and delivery model that can be used for other education and training programs. Over two-thirds of the world photovoltaics market is in developing countries. One of the objectives of this NASA-sponsored project was to develop new and better energy education programs that take advantage of advances in telecommunications and computer technology. The combination of desktop video systems and the sharing of computer applications software is of special interest. Research is being performed to evaluate the effectiveness of some of these technologies as part of this project. The design of the distance learning origination and receive sites discussed in this paper was influenced by the educational community's growing interest in distance education. The following approach was used to develop comparative costs for delivering interactive distance education to developing countries: (1) Representative target locations for receive sites were chosen. The originating site was assumed to be Cocoa, Florida, where FSEC is located; (2) A range of course development costs was determined; (3) The cost of equipment for three alternative two-way interactive distance learning system configurations was determined or estimated. The types of system configurations ranged from a PC-based system that allows instructors to originate instruction from their office using desktop video and shared application software, to a high-cost system that uses an electronic classroom; (4) A range of costs for both satellite and terrestrial communications was investigated; (5) The costs of equipment and operation of the alternative configurations for the origination and receive sites were determined; (6) A range of costs for several alternative delivery scenarios (i.e., a mix of live interactive, asynchronous interactive, and use of videotapes) was determined; and (7) A preferred delivery scenario, including a cost estimate, was developed.

  12. Improved Feature Matching for Mobile Devices with IMU.

    PubMed

    Masiero, Andrea; Vettore, Antonio

    2016-08-05

    Thanks to the recent diffusion of low-cost high-resolution digital cameras and to the development of mostly automated procedures for image-based 3D reconstruction, the popularity of photogrammetry for environment surveys has been constantly increasing in recent years. Automatic feature matching is an important step in successfully completing the photogrammetric 3D reconstruction: this step is the fundamental basis for the subsequent estimation of the geometry of the scene. This paper reconsiders the feature matching problem when dealing with smart mobile devices (e.g., when using the standard camera embedded in a smartphone as the imaging sensor). More specifically, this paper aims at exploiting the information on camera movements provided by the inertial navigation system (INS) in order to make the feature matching step more robust and, possibly, computationally more efficient. First, a revised version of the affine scale-invariant feature transform (ASIFT) is considered: this version reduces the computational complexity of the original ASIFT, while still ensuring an increase of correct feature matches with respect to SIFT. Furthermore, a new two-step procedure for the estimation of the essential matrix E (and the camera pose) is proposed in order to increase its estimation robustness and computational efficiency.

  13. Aquifer thermal-energy-storage costs with a seasonal-chill source

    NASA Astrophysics Data System (ADS)

    Brown, D. R.

    1983-01-01

    The cost of energy supplied by an aquifer thermal energy storage (ATES) system from a seasonal chill source was investigated. Costs were estimated for point demand and residential development ATES systems using the computer code AQUASTOR. AQUASTOR was developed at PNL specifically for the economic analysis of ATES systems. In this analysis the cost effect of varying a wide range of technical and economic parameters was examined. Those parameters exhibiting a substantial influence on the costs of ATES-delivered chill were: system size; well flow rate; transmission distance; source temperature; well depth; and cost of capital. The effects of each parameter are discussed. Two primary constraints of ATES chill systems are the extremely low energy density of the storage fluid and the prohibitive costs of lengthy pipelines for delivering chill to residential users. This economic analysis concludes that ATES-delivered chill will not be competitive for residential cooling applications. The otherwise marginal attractiveness of ATES chill systems vanishes under the extremely low load factors characteristic of residential cooling systems. (LCL)

  14. Quantum chemical approach for condensed-phase thermochemistry (V): Development of rigid-body type harmonic solvation model

    NASA Astrophysics Data System (ADS)

    Tarumi, Moto; Nakai, Hiromi

    2018-05-01

    This letter proposes an approximate treatment of the harmonic solvation model (HSM) assuming the solute to be a rigid body (RB-HSM). The HSM method can appropriately estimate the Gibbs free energy for condensed phases even where an ideal gas model used by standard quantum chemical programs fails. The RB-HSM method eliminates calculations for intra-molecular vibrations in order to reduce the computational costs. Numerical assessments indicated that the RB-HSM method can evaluate entropies and internal energies with the same accuracy as the HSM method but with lower calculation costs.
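
    For reference, the harmonic vibrational contribution to the free energy that the full HSM evaluates mode by mode has the standard textbook form below (a generic expression, not the paper's own equations); the rigid-body approximation reduces cost by omitting the intra-molecular (high-frequency) modes from this sum and retaining only the solute's external degrees of freedom:

        F_\mathrm{vib} = \sum_i \left[ \frac{h\nu_i}{2} + k_\mathrm{B} T \ln\!\left(1 - e^{-h\nu_i / k_\mathrm{B} T}\right) \right]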

  15. A Big RISC

    DTIC Science & Technology

    1983-07-18

    architecture. Design, performance, and cost of BRISC are presented. Performance is shown to be better than that of high-end mainframes such as the IBM 3081 and Amdahl 470V/8 on integer benchmarks written in C, Pascal, and LISP. The cost, conservatively estimated at $132,400, is about the same as that of a high-end minicomputer such as the VAX-11/780. BRISC has a CPU cycle time of 46 ns, providing a RISC I instruction execution rate of greater than 15 MIPS. BRISC is designed with the Structured Computer Aided Logic Design System (SCALD) by Valid Logic Systems. An evaluation of the utility of

  16. HIV prevention costs and their predictors: evidence from the ORPHEA Project in Kenya

    PubMed Central

    Galárraga, Omar; Wamai, Richard G; Sosa-Rubí, Sandra G; Mugo, Mercy G; Contreras-Loya, David; Bautista-Arredondo, Sergio; Nyakundi, Helen; Wang’ombe, Joseph K

    2017-01-01

    Abstract We estimate costs and their predictors for three HIV prevention interventions in Kenya: HIV testing and counselling (HTC), prevention of mother-to-child transmission (PMTCT) and voluntary medical male circumcision (VMMC). As part of the ‘Optimizing the Response of Prevention: HIV Efficiency in Africa’ (ORPHEA) project, we collected retrospective data from government and non-governmental health facilities for 2011–12. We used multi-stage sampling to determine a sample of health facilities by type, ownership, size and interventions offered totalling 144 sites in 78 health facilities in 33 districts across Kenya. Data sources included key informants, registers and time-motion observation methods. Total costs of production were computed using both quantity and unit price of each input. Average cost was estimated by dividing total cost per intervention by number of clients accessing the intervention. Multivariate regression methods were used to analyse predictors of log-transformed average costs. Average costs were $7 and $79 per HTC and PMTCT client tested, respectively; and $66 per VMMC procedure. Results show evidence of economies of scale for PMTCT and VMMC: increasing the number of clients per year by 100% was associated with cost reductions of 50% for PMTCT, and 45% for VMMC. Task shifting was associated with reduced costs for both PMTCT (59%) and VMMC (54%). Costs in hospitals were higher for PMTCT (56%) in comparison to non-hospitals. Facilities that performed testing based on risk factors as opposed to universal screening had higher HTC average costs (79%). Lower VMMC costs were associated with availability of male reproductive health services (59%) and presence of community advisory board (52%). Aside from increasing production scale, HIV prevention costs may be contained by using task shifting, non-hospital sites, service integration and community supervision. PMID:29029086
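
    The unit-cost and scale-regression steps described above amount to a simple calculation of the kind sketched below (illustrative only; the data values are hypothetical and the study's actual multivariate specification includes many more predictors):

        import numpy as np

        # Hypothetical site-level data: total annual cost (USD) and clients served
        total_cost = np.array([52000.0, 18000.0, 96000.0, 30000.0])
        clients    = np.array([700,     260,     1500,    420])

        # Average cost per client at each site
        avg_cost = total_cost / clients

        # Log-log regression of average cost on scale (clients served):
        # log(avg_cost) = b0 + b1 * log(clients)
        X = np.column_stack([np.ones_like(clients, dtype=float), np.log(clients)])
        coef, *_ = np.linalg.lstsq(X, np.log(avg_cost), rcond=None)
        b0, b1 = coef
        # b1 < 0 indicates economies of scale: doubling the number of clients
        # multiplies average cost by roughly 2**b1
        print(f"scale elasticity b1 = {b1:.2f}")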

  17. Estimating multi-period global cost efficiency and productivity change of systems with network structures

    NASA Astrophysics Data System (ADS)

    Tohidnia, S.; Tohidi, G.

    2018-02-01

    The current paper develops three different ways to measure the multi-period global cost efficiency of homogeneous networks of processes when the prices of exogenous inputs are known at all time periods. A multi-period network data envelopment analysis model is presented to measure the minimum cost of the network system based on the global production possibility set. We show that there is a relationship between the multi-period global cost efficiency of the network system and that of its subsystems and processes. The proposed model is applied to compute the global cost Malmquist productivity index for measuring the productivity change of the network system and of each of its processes between two time periods. This index is circular. Furthermore, we show that the productivity change of the network system can be defined as a weighted average of the process productivity changes. Finally, a numerical example is presented to illustrate the proposed approach.
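
    As an illustration of the underlying idea only (a generic cost-efficiency ratio, not the paper's full multi-period network formulation), the cost efficiency of a unit observed to use inputs x_t at prices p_t over periods t compares the minimum attainable cost with the observed cost:

        \mathrm{CE} = \frac{\sum_t p_t^{\top} x_t^{*}}{\sum_t p_t^{\top} x_t}, \qquad 0 < \mathrm{CE} \le 1,

    where x_t^{*} solves the cost-minimizing DEA program over the global production possibility set.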

  18. Computational laser intensity stabilisation for organic molecule concentration estimation in low-resource settings

    NASA Astrophysics Data System (ADS)

    Haider, Shahid A.; Kazemzadeh, Farnoud; Wong, Alexander

    2017-03-01

    An ideal laser is a useful tool for the analysis of biological systems. In particular, the polarization property of lasers allows the concentrations of important organic molecules in the human body, such as proteins, amino acids, lipids, and carbohydrates, to be estimated. However, lasers do not always work as intended: effects such as mode hopping and thermal drift can cause time-varying intensity fluctuations. These effects often originate in the surrounding environment, where either an unstable current source is used or the ambient temperature is not stable over time. Such intensity fluctuations can introduce bias and error into typical organic molecule concentration estimation techniques. In a low-resource setting, where cost must be limited and environmental factors such as unregulated power supplies and temperature cannot be controlled, the hardware required to correct for these intensity fluctuations can be prohibitive. We propose a method for computational laser intensity stabilisation that uses Bayesian state estimation to correct for the time-varying intensity fluctuations caused by electrical and thermal instabilities without the use of additional hardware. This method allows for consistent intensities across all polarization measurements and thus accurate estimates of organic molecule concentrations.
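
    A minimal one-dimensional example of the kind of Bayesian state estimation relied on here is sketched below (a generic random-walk Kalman filter tracking a slowly drifting intensity; the noise variances are hypothetical and the paper's actual model is not reproduced):

        import numpy as np

        def track_intensity(measurements, q=1e-4, r=1e-2):
            """Track a slowly drifting laser intensity with a 1-D Kalman filter.

            measurements : sequence of raw intensity readings
            q : process-noise variance (how fast the true intensity may drift)
            r : measurement-noise variance
            """
            x = measurements[0]   # state estimate (intensity)
            p = 1.0               # estimate variance
            estimates = []
            for z in measurements:
                # Predict: random-walk model, variance grows by q
                p += q
                # Update: blend the prediction with the new measurement
                k = p / (p + r)   # Kalman gain
                x += k * (z - x)
                p *= (1.0 - k)
                estimates.append(x)
            return np.array(estimates)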

  19. Methods for estimating magnitude and frequency of peak flows for natural streams in Utah

    USGS Publications Warehouse

    Kenney, Terry A.; Wilkowske, Chris D.; Wright, Shane J.

    2007-01-01

    Estimates of the magnitude and frequency of peak streamflows are critical for the safe and cost-effective design of hydraulic structures and stream crossings, and for accurate delineation of flood plains. Engineers, planners, resource managers, and scientists need accurate estimates of peak-flow return frequencies for locations on streams with and without streamflow-gaging stations. The 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were estimated for 344 unregulated U.S. Geological Survey streamflow-gaging stations in Utah and nearby in bordering states. These data, along with 23 basin and climatic characteristics computed for each station, were used to develop regional peak-flow frequency and magnitude regression equations for 7 geohydrologic regions of Utah. These regression equations can be used to estimate the magnitude and frequency of peak flows for natural streams in Utah within the presented range of predictor variables. Uncertainty, presented as the average standard error of prediction, was computed for each developed equation. Equations developed using data from more than 35 gaging stations had standard errors of prediction that ranged from 35 to 108 percent, and errors for equations developed using data from fewer than 35 gaging stations ranged from 50 to 357 percent.
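
    Regional peak-flow regression equations of this kind typically take a log-linear (power-law) form in the basin characteristics; as a hedged illustration (the actual Utah equations and their fitted coefficients are given in the report, not here), a T-year peak flow might be expressed as

        Q_T = a \, A^{b_1} \, P^{b_2},

    where A is drainage area, P is a climatic characteristic such as mean annual precipitation, and a, b_1, b_2 are regression coefficients fitted separately for each region and recurrence interval.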

  20. The Flight Optimization System Weights Estimation Method

    NASA Technical Reports Server (NTRS)

    Wells, Douglas P.; Horvath, Bryce L.; McCullers, Linwood A.

    2017-01-01

    FLOPS has been the primary aircraft synthesis software used by the Aeronautics Systems Analysis Branch at NASA Langley Research Center. It was created for rapid conceptual aircraft design and advanced technology impact assessments. FLOPS is a single computer program that includes weights estimation, aerodynamics estimation, engine cycle analysis, propulsion data scaling and interpolation, detailed mission performance analysis, takeoff and landing performance analysis, noise footprint estimation, and cost analysis. It is well known as a baseline and common denominator for aircraft design studies. FLOPS is capable of calibrating a model to known aircraft data, making it useful for new aircraft and for modifications to existing aircraft. The weight estimation method in FLOPS is known to be of high fidelity for conventional tube-with-wing aircraft, and a substantial amount of effort went into its development. This report serves as comprehensive documentation of the FLOPS weight estimation method; the development process is presented along with the weight estimation process.

  1. Cost-effectiveness of screening for asymptomatic carotid atherosclerotic disease.

    PubMed

    Derdeyn, C P; Powers, W J

    1996-11-01

    The value of screening for asymptomatic carotid stenosis has become an important issue with the recently reported beneficial effect of endarterectomy. The purpose of this study is to evaluate the cost-effectiveness of using Doppler ultrasound as a screening tool to select subjects for arteriography and subsequent surgery. A computer model was developed to simulate the cost-effectiveness of screening a cohort of 1000 men during a 20-year period. The primary outcome measure was incremental present-value dollar expenditures for screening and treatment per incremental present-value quality-adjusted life-year (QALY) saved. Estimates of disease prevalence and arteriographic and surgical complication rates were obtained from the literature. Probabilities of stroke and death with surgical and medical treatment were obtained from published clinical trials. Doppler ultrasound sensitivity and specificity were obtained through review of local experience. Estimates of costs were obtained from local Medicare reimbursement data. A one-time screening program of a population with a high prevalence (20%) of ≥60% stenosis cost $35,130 per incremental QALY gained. Decreased surgical benefit or an increased annual discount rate was detrimental, resulting in lost QALYs. Annual screening cost $457,773 per incremental QALY gained. In a low-prevalence (4%) population, one-time screening cost $52,588 per QALY gained, while annual screening was detrimental. A one-time screening program for an asymptomatic population with a high prevalence of carotid stenosis may be cost-effective; annual screening is detrimental. The most sensitive variables in this simulation model were long-term stroke risk reduction after surgery and the annual discount rate for accumulated costs and QALYs.
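
    The primary outcome measure reduces to an incremental cost-effectiveness ratio of present-value costs and QALYs; a minimal sketch follows (the discount rate and the cost/QALY streams are hypothetical placeholders, not the study's values):

        def present_value(stream, rate=0.03):
            """Discount a yearly stream of costs or QALYs to present value."""
            return sum(v / (1.0 + rate) ** t for t, v in enumerate(stream))

        def icer(costs_screen, qalys_screen, costs_none, qalys_none, rate=0.03):
            """Incremental cost per incremental QALY of screening vs. no screening."""
            d_cost = present_value(costs_screen, rate) - present_value(costs_none, rate)
            d_qaly = present_value(qalys_screen, rate) - present_value(qalys_none, rate)
            return d_cost / d_qaly

        # Hypothetical 20-year streams for a screened and an unscreened cohort:
        # icer(costs_screen, qalys_screen, costs_none, qalys_none) -> dollars per QALY gained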

  2. Landmark-Based Drift Compensation Algorithm for Inertial Pedestrian Navigation

    PubMed Central

    Munoz Diaz, Estefania; Caamano, Maria; Fuentes Sánchez, Francisco Javier

    2017-01-01

    The navigation of pedestrians based on inertial sensors, i.e., accelerometers and gyroscopes, has grown considerably over the last years. However, the noise of medium- and low-cost sensors causes a large error in the orientation estimate, particularly in the yaw angle. This error, called drift, is due to the bias of the z-axis gyroscope and other slowly changing errors, such as temperature variations. We propose a seamless landmark-based drift compensation algorithm that only uses inertial measurements. The proposed algorithm adds value to the state of the art because the vast majority of drift elimination algorithms apply corrections to the estimated position, but not to the yaw angle estimate. Instead, the presented algorithm computes the drift value and uses it to prevent yaw errors and therefore position errors. In order to achieve this goal, a detector of landmarks, i.e., corners and stairs, and an association algorithm have been developed. The results of the experiments show that it is possible to reliably detect corners and stairs using only inertial measurements, eliminating the need for the user to take any action, e.g., pressing a button. Associations between re-visited landmarks are successfully made taking into account the uncertainty of the position. After that, the drift is computed from all associations and used during a post-processing stage to obtain a low-drift yaw angle estimate, which leads to successfully drift-compensated trajectories. The proposed algorithm has been tested with quasi-error-free turn-rate measurements with known biases introduced and with medium-cost gyroscopes in 3D indoor and outdoor scenarios. PMID:28671622
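
    The core of the correction, computing a drift rate from the yaw discrepancy observed when a landmark is re-visited and removing it from the trajectory, can be sketched as follows (a simplified, hypothetical illustration of the idea, not the authors' algorithm):

        import numpy as np

        def wrap_angle(a):
            """Wrap an angle to (-pi, pi]."""
            return (a + np.pi) % (2.0 * np.pi) - np.pi

        def drift_rate_from_landmark(yaw_first, t_first, yaw_revisit, t_revisit):
            """Estimate the gyroscope yaw drift rate (rad/s) between two visits
            to the same landmark, where the heading should have been identical."""
            return wrap_angle(yaw_revisit - yaw_first) / (t_revisit - t_first)

        def compensate_yaw(times, yaws, drift_rate, t_ref):
            """Remove the estimated linear drift from a yaw time series."""
            return np.array([wrap_angle(y - drift_rate * (t - t_ref))
                             for t, y in zip(times, yaws)])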

  3. Extended reactance domain algorithms for DoA estimation onto an ESPAR antennas

    NASA Astrophysics Data System (ADS)

    Harabi, F.; Akkar, S.; Gharsallah, A.

    2016-07-01

    Based on an extended reactance-domain (RD) covariance matrix, this article proposes new alternatives for direction-of-arrival (DoA) estimation of narrowband sources through electronically steerable parasitic array radiator (ESPAR) antennas. Because of the centro-symmetry of the classic ESPAR antenna, a unitary transformation is applied to the collected data, which allows an important reduction in both computational cost and processing time as well as an enhancement of the resolution capabilities of the proposed algorithms. Moreover, this article proposes a new approach for eigenvalue estimation through only a few linear operations. The DoA estimation algorithms developed from this new approach show good behaviour with lower calculation cost and processing time compared to other schemes based on the classic eigenvalue approach. The conducted simulations demonstrate that high-precision and high-resolution DoA estimation can be achieved, especially for very closely spaced sources and low source power, as compared to the RD-MUSIC and RD-PM algorithms. The asymptotic behaviour of the proposed DoA estimators is analysed in various scenarios and compared with the Cramer-Rao bound (CRB). The conducted simulations testify to the high resolution of the developed algorithms and the efficiency of the proposed approach.

  4. Cost analysis of non-invasive fractional flow reserve derived from coronary computed tomographic angiography in Japan.

    PubMed

    Kimura, Takeshi; Shiomi, Hiroki; Kuribayashi, Sachio; Isshiki, Takaaki; Kanazawa, Susumu; Ito, Hiroshi; Ikeda, Shunya; Forrest, Ben; Zarins, Christopher K; Hlatky, Mark A; Norgaard, Bjarne L

    2015-01-01

    Percutaneous coronary intervention (PCI) based on fractional flow reserve (FFRcath) measurement during invasive coronary angiography (CAG) results in improved patient outcome and reduced healthcare costs. FFR can now be computed non-invasively from standard coronary CT angiography (cCTA) scans (FFRCT). The purpose of this study is to determine the potential impact of non-invasive FFRCT on costs and clinical outcomes of patients with suspected coronary artery disease in Japan. Clinical data from 254 patients in the HeartFlowNXT trial, costs of goods and services in Japan, and clinical outcome data from the literature were used to estimate the costs and outcomes of 4 clinical pathways: (1) CAG-visual guided PCI, (2) CAG-FFRcath guided PCI, (3) cCTA followed by CAG-visual guided PCI, (4) cCTA-FFRCT guided PCI. The CAG-visual strategy demonstrated the highest projected cost ($10,360) and highest projected 1-year death/myocardial infarction rate (2.4 %). An assumed price for FFRCT of US $2,000 produced equivalent clinical outcomes (death/MI rate: 1.9 %) and healthcare costs ($7,222) for the cCTA-FFRCT strategy and the CAG-FFRcath guided PCI strategy. Use of the cCTA-FFRCT strategy to select patients for PCI would result in 32 % lower costs and 19 % fewer cardiac events at 1 year compared to the most commonly used CAG-visual strategy. Use of cCTA-FFRCT to select patients for CAG and PCI may reduce costs and improve clinical outcome in patients with suspected coronary artery disease in Japan.

  5. NUVEM - New methods to Use gnss water Vapor Estimates for Meteorology of Portugal

    NASA Astrophysics Data System (ADS)

    Fernandes, R. M. S.; Viterbo, P.; Bos, M. S.; Martins, J. P.; Sá, A. G.; Valentim, H.; Jones, J.

    2014-12-01

    NUVEM (New methods to Use gnss water Vapor Estimates for Meteorology of Portugal) is a collaborative project funded by the Portuguese National Science Foundation (FCT) that implements a multi-disciplinary approach to operationalize the use of GNSS-PWV estimates for nowcasting in Portugal, namely for the preparation of severe weather warnings. To achieve this goal, the NUVEM project is divided into two major components: (a) development and implementation of methods to compute accurate estimates of PWV (Precipitable Water Vapor) in NRT (Near Real-Time); (b) integration of such estimates into the nowcasting procedures in use at IPMA (Portuguese Meteorological Service). Methodologies will be optimized at SEGAL for passive and active access to the data; the PWV estimates will be computed using PPP (Precise Point Positioning), which permits each station to be processed independently; solutions will be validated using internal and external values; and computed solutions will be transferred in a timely manner to the IPMA Operational Center. Validation of the derived estimates using robust statistics is an important component of the project. The need to send computed values to IPMA as soon as possible requires fast but reliable internal (e.g., noise estimation) and external (e.g., feedback from IPMA using other sensors such as radiosondes) assessment of the quality of the PWV estimates. At IPMA, the goal is to implement the operational use of GNSS-PWV to assist weather nowcasting in Portugal, with the assistance of the Meteo group of IDL. Maps of GNSS-PWV will be automatically created and compared with solutions provided by other operational systems in order to help IPMA detect suspicious patterns in near real time. This will be the first step towards the assimilation of GNSS-PWV estimates into IPMA nowcasting models. The NUVEM (EXPL/GEO-MET/0413/2013) project will also contribute to the active participation of Portugal in COST Action ES1206 - Advanced Global Navigation Satellite Systems tropospheric products for monitoring severe weather events and climate (GNSS4SWEC). This work is also carried out in the framework of the Portuguese Project SMOG (PTDC/CTE-ATM/119922/2010).
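
    For context, the standard conversion from a GNSS-derived zenith wet delay (ZWD) to precipitable water vapour used in this kind of processing is the textbook relation below (after Bevis et al.; the numerical value quoted is a typical figure, not one taken from the project):

        \mathrm{PWV} = \Pi \cdot \mathrm{ZWD}, \qquad \Pi = \frac{10^{6}}{\rho_w R_v \left(k_2' + k_3 / T_m\right)},

    where \rho_w is the density of liquid water, R_v the specific gas constant of water vapour, k_2' and k_3 atmospheric refractivity constants, and T_m the weighted mean temperature of the atmosphere; \Pi is typically about 0.15.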

  6. Network Community Detection based on the Physarum-inspired Computational Framework.

    PubMed

    Gao, Chao; Liang, Mingxin; Li, Xianghua; Zhang, Zili; Wang, Zhen; Zhou, Zhili

    2016-12-13

    Community detection is a crucial and essential problem in the structural analysis of complex networks, which can help us understand and predict the characteristics and functions of complex networks. Many methods, ranging from optimization-based algorithms to heuristic-based algorithms, have been proposed for solving this problem. Due to the inherent complexity of identifying network structure, how to design an effective algorithm with higher accuracy and lower computational cost remains an open problem. Inspired by the computational capability and positive feedback mechanism of the foraging process of Physarum, a large amoeba-like cell consisting of a dendritic network of tube-like pseudopodia, a general Physarum-based computational framework for community detection is proposed in this paper. Based on the proposed framework, the inter-community edges can be distinguished from the intra-community edges in a network and the positive feedback of the solving process in an algorithm can be further enhanced, which are used to improve the efficiency of original optimization-based and heuristic-based community detection algorithms, respectively. Some typical algorithms (e.g., genetic algorithm, ant colony optimization algorithm, and Markov clustering algorithm) and real-world datasets have been used to evaluate the efficiency of the proposed computational framework. Experiments show that the algorithms optimized by the Physarum-inspired computational framework perform better than the original ones in terms of accuracy and computational cost. Moreover, a computational complexity analysis verifies the scalability of the framework.

  7. SOFTCOST - DEEP SPACE NETWORK SOFTWARE COST MODEL

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1994-01-01

    The early-on estimation of required resources and a schedule for the development and maintenance of software is usually the least precise aspect of the software life cycle. However, it is desirable to make some sort of an orderly and rational attempt at estimation in order to plan and organize an implementation effort. The Software Cost Estimation Model program, SOFTCOST, was developed to provide a consistent automated resource and schedule model which is more formalized than the often-used guesswork model based on experience, intuition, and luck. SOFTCOST was developed after the evaluation of a number of existing cost estimation programs indicated that there was a need for a cost estimation program with a wide range of application and adaptability to diverse kinds of software. SOFTCOST combines several software cost models found in the open literature into one comprehensive set of algorithms that compensate for nearly fifty implementation factors relative to size of the task, inherited baseline, organizational and system environment, and difficulty of the task. SOFTCOST produces mean and variance estimates of software size, implementation productivity, recommended staff level, probable duration, amount of computer resources required, and amount and cost of software documentation. Since the confidence level for a project using mean estimates is small, the user is given the opportunity to enter risk-biased values for effort, duration, and staffing, to achieve higher confidence levels. SOFTCOST then produces a PERT/CPM file with subtask efforts, durations, and precedences defined so as to produce the Work Breakdown Structure (WBS) and schedule having the requested overall effort and duration. The SOFTCOST program operates in an interactive environment, prompting the user for all of the required input. The program builds the supporting PERT database in a file for later report generation or revision. The PERT schedule and the WBS schedule may be printed and stored in a file for later use. The SOFTCOST program is written in Microsoft BASIC for interactive execution and has been implemented on an IBM PC-XT/AT operating MS-DOS 2.1 or higher with 256K bytes of memory. SOFTCOST was originally developed for the Zilog Z80 system running under CP/M in 1981. It was converted to run on the IBM PC XT/AT in 1986. SOFTCOST is a copyrighted work with all copyright vested in NASA.

  8. A Budget Impact Analysis of Newly Available Hepatitis C Therapeutics and the Financial Burden on a State Correctional System.

    PubMed

    Nguyen, John T; Rich, Josiah D; Brockmann, Bradley W; Vohr, Fred; Spaulding, Anne; Montague, Brian T

    2015-08-01

    Hepatitis C virus (HCV) infection continues to disproportionately affect incarcerated populations. New HCV drugs present opportunities and challenges to address HCV in corrections. The goal of this study was to evaluate the impact of the treatment costs for HCV infection in a state correctional population through a budget impact analysis comparing differing treatment strategies. Electronic and paper medical records were reviewed to estimate the prevalence of hepatitis C within the Rhode Island Department of Corrections. Three treatment strategies were evaluated: (1) treating all chronically infected persons, (2) treating only patients with demonstrated fibrosis, and (3) treating only patients with advanced fibrosis. Budget impact was computed as the percentage of pharmacy and overall healthcare expenditures accrued by total drug costs, assuming entirely interferon-free therapy. Sensitivity analyses assessed potential variance in costs related to variability in HCV prevalence, genotype, estimated variation in market pricing, length of stay for the sentenced population, and uptake of newly available regimens. Chronic HCV prevalence was estimated at 17% of the total population. Treating all sentenced inmates with at least 6 months remaining on their sentence would cost about $34 million, roughly 13 times the pharmacy budget and almost twice the overall healthcare budget. Treating inmates with advanced fibrosis would cost about $15 million. Even with a hypothetical 50% reduction in total drug costs for future therapies, it would cost $17 million to treat all eligible inmates. With the immense costs projected with new treatment, it is unlikely that correctional facilities will have the capacity to treat all those afflicted with HCV. Alternative payment strategies in collaboration with outside programs may be necessary to curb this epidemic. In order to improve care and treatment delivery, drug costs also need to be seriously reevaluated to make them more accessible and equitable now that HCV is more curable.

  9. Distributed information system (water fact sheet)

    USGS Publications Warehouse

    Harbaugh, A.W.

    1986-01-01

    During 1982-85, the Water Resources Division (WRD) of the U.S. Geological Survey (USGS) installed over 70 large minicomputers in offices across the country to support its mission in the science of hydrology. These computers are connected by a communications network that allows information to be shared among computers in each office. The computers and network together are known as the Distributed Information System (DIS). The computers are accessed through more than 1500 terminals and minicomputers. The WRD has three fundamentally different needs for computing: data management, hydrologic analysis, and administration. Data management accounts for 50% of the computational workload of WRD because hydrologic data are collected in all 50 states, Puerto Rico, and the Pacific trust territories. Hydrologic analysis accounts for 40% of the computational workload of WRD. Cost accounting, payroll, personnel records, and planning for WRD programs occupy an estimated 10% of the computer workload. The DIS communications network is shown on a map. (Lantz-PTT)

  10. Addressing the computational cost of large EIT solutions.

    PubMed

    Boyle, Alistair; Borsic, Andrea; Adler, Andy

    2012-05-01

    Electrical impedance tomography (EIT) is a soft field tomography modality based on the application of electric current to a body and measurement of voltages through electrodes at the boundary. The interior conductivity is reconstructed on a discrete representation of the domain using a finite-element method (FEM) mesh and a parametrization of that domain. The reconstruction requires a sequence of numerically intensive calculations. There is strong interest in reducing the cost of these calculations. An improvement in the compute time for current problems would encourage further exploration of computationally challenging problems such as the incorporation of time series data, wide-spread adoption of three-dimensional simulations and correlation of other modalities such as CT and ultrasound. Multicore processors offer an opportunity to reduce EIT computation times but may require some restructuring of the underlying algorithms to maximize the use of available resources. This work profiles two EIT software packages (EIDORS and NDRM) to experimentally determine where the computational costs arise in EIT as problems scale. Sparse matrix solvers, a key component for the FEM forward problem and sensitivity estimates in the inverse problem, are shown to take a considerable portion of the total compute time in these packages. A sparse matrix solver performance measurement tool, Meagre-Crowd, is developed to interface with a variety of solvers and compare their performance over a range of two- and three-dimensional problems of increasing node density. Results show that distributed sparse matrix solvers that operate on multiple cores are advantageous up to a limit that increases as the node density increases. We recommend a selection procedure to find a solver and hardware arrangement matched to the problem and provide guidance and tools to perform that selection.
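
    The kind of sparse linear solve that dominates the FEM forward problem can be exercised with a few lines of SciPy (a generic illustration on a random sparse symmetric system, not the EIDORS/NDRM code paths profiled in the paper):

        import time
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 50_000
        # Build a sparse, symmetric, diagonally dominant (hence positive-definite) test matrix
        A = sp.random(n, n, density=1e-4, format="csr", random_state=0)
        A = A + A.T + sp.identity(n) * 10.0
        b = np.ones(n)

        t0 = time.perf_counter()
        x = spla.spsolve(A.tocsc(), b)          # direct sparse solve
        t1 = time.perf_counter()
        print(f"direct solve: {t1 - t0:.2f} s, residual "
              f"{np.linalg.norm(A @ x - b):.2e}")

        # Iterative alternative often preferred for large 3-D meshes
        x_cg, info = spla.cg(A, b)              # info == 0 means convergence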

  11. Essays in financial economics and econometrics

    NASA Astrophysics Data System (ADS)

    La Spada, Gabriele

    Chapter 1 (my job market paper) asks the following question: Do asset managers reach for yield because of competitive pressures in a low-rate environment? I propose a tournament model of money market funds (MMFs) to study this issue. I show that funds with different costs of default respond differently to changes in interest rates, and that it is important to distinguish the role of risk-free rates from that of risk premia. An increase in the risk premium leads funds with lower default costs to increase risk-taking, while funds with higher default costs reduce risk-taking. Without changes in the premium, low risk-free rates reduce risk-taking. My empirical analysis shows that these predictions are consistent with the risk-taking of MMFs during the 2006-2008 period. Chapter 2, co-authored with Fabrizio Lillo and published in Studies in Nonlinear Dynamics and Econometrics (2014), studies the effect of round-off error (or discretization) on stationary Gaussian long-memory processes. For large lags, the autocovariance is rescaled by a factor smaller than one, and we compute this factor exactly. Hence, the discretized process has the same Hurst exponent as the underlying one. We show that in the presence of round-off error, two common estimators of the Hurst exponent, the local Whittle (LW) estimator and detrended fluctuation analysis (DFA), are severely negatively biased in finite samples. We derive conditions for consistency and asymptotic normality of the LW estimator applied to discretized processes and compute the asymptotic properties of the DFA for generic long-memory processes that encompass discretized processes. Chapter 3, co-authored with Fabrizio Lillo, studies the effect of round-off error on integrated Gaussian processes with possibly correlated increments. We derive the variance and kurtosis of the realized increment process in the limit of both "small" and "large" round-off errors, and its autocovariance for large lags. We propose novel estimators for the variance and lag-one autocorrelation of the underlying, unobserved increment process. We also show that for fractionally integrated processes, the realized increments have the same Hurst exponent as the underlying ones, but the LW estimator applied to the realized series is severely negatively biased in medium-sized samples.

  12. Evaluative studies in nuclear medicine research. Emission-computed tomography assessment. Progress report 1 January-15 August 1981

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potchen, E.J.

    Questions regarding what imaging performance goals need to be met to produce effective biomedical research using positron emission computed tomography, how near those performance goals are to being realized by imaging systems, and the dependence of currently unachieved performance goals on design and operational factors have been addressed in the past year, along with refinement of economic estimates for the capital and operating costs of a PECT research facility. The two primary sources of information have been solicitations of expert opinion and review of the current literature. (ACR)

  13. Evolutionary computing for the design search and optimization of space vehicle power subsystems

    NASA Technical Reports Server (NTRS)

    Kordon, Mark; Klimeck, Gerhard; Hanks, David; Hua, Hook

    2004-01-01

    Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment. Our preliminary results demonstrate that this approach has the potential to improve the space system trade study process by allowing engineers to statistically weight subsystem goals of mass, cost, and performance, and then automatically size power elements based on anticipated performance of the subsystem rather than on worst-case estimates.

  14. PET-CT in oncological patients: analysis of informal care costs in cost-benefit assessment.

    PubMed

    Orlacchio, Antonio; Ciarrapico, Anna Micaela; Schillaci, Orazio; Chegai, Fabrizio; Tosti, Daniela; D'Alba, Fabrizio; Guazzaroni, Manlio; Simonetti, Giovanni

    2014-04-01

    The authors analysed the impact of nonmedical costs (travel, loss of productivity) in an economic analysis of PET-CT (positron-emission tomography-computed tomography) performed with standard contrast-enhanced CT protocols (CECT). From October to November 2009, a total of 100 patients referred to our institute were administered a questionnaire to evaluate the nonmedical costs of PET-CT. In addition, the medical costs (equipment maintenance and depreciation, consumables and staff) related to PET-CT performed with CECT and PET-CT with low-dose nonenhanced CT and separate CECT were also estimated. The medical costs were 919.3 euro for PET-CT with separate CECT, and 801.3 euro for PET-CT with CECT. Therefore, savings of approximately 13% are possible. Moreover, savings in nonmedical costs can be achieved by reducing the number of hospital visits required by patients undergoing diagnostic imaging. Nonmedical costs heavily affect patients' finances as well as having an indirect impact on national health expenditure. Our results show that PET-CT performed with standard dose CECT in a single session provides benefits in terms of both medical and nonmedical costs.

  15. Statistical, economic and other tools for assessing natural aggregate

    USGS Publications Warehouse

    Bliss, J.D.; Moyle, P.R.; Bolm, K.S.

    2003-01-01

    Quantitative aggregate resource assessment provides resource estimates useful for explorationists, land managers, and those who make decisions about land allocation, which may have long-term implications for the cost and availability of aggregate resources. Aggregate assessment needs to be systematic and consistent, yet flexible enough to allow updating without invalidating other parts of the assessment. Evaluators need to use standard or consistent aggregate classifications and statistical distributions, or, in other words, models with geological, geotechnical, and economic variables and the interrelationships between these variables. These models can be used with subjective estimates, if needed, to estimate how much aggregate may be present in a region or country using distributions generated by Monte Carlo computer simulations.
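
    A Monte Carlo tonnage estimate of the kind mentioned above can be sketched in a few lines (the distributions and parameter values are purely hypothetical placeholders, not models from the assessment):

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000  # simulation trials

        # Hypothetical deposit model: area (km^2), thickness (m), density (t/m^3)
        area_km2  = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=n)
        thickness = rng.lognormal(mean=np.log(10.0), sigma=0.4, size=n)
        density   = rng.normal(loc=2.1, scale=0.1, size=n)

        tonnage = area_km2 * 1e6 * thickness * density  # tonnes per simulated deposit

        # Report the distribution rather than a single point estimate
        p10, p50, p90 = np.percentile(tonnage, [10, 50, 90])
        print(f"10th/50th/90th percentile tonnage: {p10:.3g} / {p50:.3g} / {p90:.3g} t")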

  16. Prediction and Estimation of Scaffold Strength with different pore size

    NASA Astrophysics Data System (ADS)

    Muthu, P.; Mishra, Shubhanvit; Sri Sai Shilpa, R.; Veerendranath, B.; Latha, S.

    2018-04-01

    This paper emphasizes the significance of predicting and estimating the mechanical strength of 3D functional scaffolds before the manufacturing process. Prior evaluation of the mechanical strength and structural properties of the scaffold reduces fabrication cost and eases the design process. Detailed analysis and investigation of various mechanical properties, including shear stress equivalence, have helped to estimate the effect of porosity and pore size on the functionality of the scaffold. The influence of variation in porosity was examined by a computational approach via finite element analysis (FEA) using the ANSYS application software. The results indicate that the evolutionary method is adequate for guiding and optimizing the intricate engineering design process.

  17. Design synthesis and optimization of joined-wing transports

    NASA Technical Reports Server (NTRS)

    Gallman, John W.; Smith, Stephen C.; Kroo, Ilan M.

    1990-01-01

    A computer program for aircraft synthesis using a numerical optimizer was developed to study the application of the joined-wing configuration to transport aircraft. The structural design algorithm included the effects of secondary bending moments to investigate the possibility of tail buckling and to design joined wings resistant to buckling. The structural weight computed using this method was combined with a statistically-based method to obtain realistic estimates of total lifting surface weight and aircraft empty weight. A variety of 'optimum' joined-wing and conventional aircraft designs were compared on the basis of direct operating cost, gross weight, and cruise drag. The most promising joined-wing designs were found to have a joint location at about 70 percent of the wing semispan. The optimum joined-wing transport is shown to save 1.7 percent in direct operating cost and 11 percent in drag for a 2000 nautical mile transport mission.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pichara, Karim; Protopapas, Pavlos

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same.

  19. An economic evaluation of adaptive e-learning devices to promote weight loss via dietary change for people with obesity.

    PubMed

    Miners, Alec; Harris, Jody; Felix, Lambert; Murray, Elizabeth; Michie, Susan; Edwards, Phil

    2012-07-07

    The prevalence of obesity is over 25% in many developed countries. Obesity is strongly associated with an increased risk of fatal and chronic conditions such as cardiovascular disease and type 2 diabetes, and it has therefore become a major public health concern for many economies. E-learning devices are a relatively novel approach to promoting dietary change. The new generation of devices are 'adaptive' and use interactive electronic media to facilitate teaching and learning. E-learning has grown out of recent developments in information and communication technology, such as the Internet, interactive computer programmes, interactive television, and mobile phones. The aim of this study is to assess the cost-effectiveness of e-learning devices as a method of promoting weight loss via dietary change. An economic evaluation was performed using decision modelling techniques. Outcomes were expressed in terms of quality-adjusted life-years (QALYs) and costs were estimated from a health services perspective. All parameter estimates were derived from the literature. A systematic review was undertaken to derive the estimate of relative treatment effect. The base case results from the e-Learning Economic Evaluation Model (e-LEEM) suggested that the incremental cost-effectiveness ratio was approximately £102,000 per QALY compared to conventional care. This finding was robust to most alternative assumptions, except a much lower fixed cost of providing e-learning devices. Expected value of perfect information (EVPI) analysis showed that while the individual-level EVPI was arguably negligible, the population-level value was between £37 million and £170 million at a willingness to pay between £20,000 and £30,000 per additional QALY. The current economic evidence base suggests that e-learning devices for managing the weight of obese individuals are unlikely to be cost-effective unless their fixed costs are much lower than estimated or future devices prove to be much more effective.

  20. Exploring the optimal economic timing for crop tree release treatments in hardwoods: results from simulation

    Treesearch

    Chris B. LeDoux; Gary W. Miller

    2008-01-01

    In this study we used data from 16 Appalachian hardwood stands, a growth and yield computer simulation model, and stump-to-mill logging cost-estimating software to evaluate the optimal economic timing of crop tree release (CTR) treatments. The simulated CTR treatments consisted of one-time logging operations at stand age 11, 23, 31, or 36 years, with the residual...

  1. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    PubMed Central

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-01-01

    Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289

  2. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.

    PubMed

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-11-02

    We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.
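
    As a toy version of the calibration problem described in these two records (a sketch using SciPy's differential evolution as a stand-in global optimizer rather than the authors' scatter-search metaheuristic; the model, data, and bounds are hypothetical):

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import differential_evolution

        # Toy dynamic model: logistic growth dx/dt = r*x*(1 - x/K)
        def simulate(params, t_eval, x0=0.1):
            r, K = params
            sol = solve_ivp(lambda t, x: r * x * (1.0 - x / K),
                            (t_eval[0], t_eval[-1]), [x0], t_eval=t_eval)
            return sol.y[0]

        # Synthetic noisy observations generated from "true" parameters (r=0.8, K=5)
        t_obs = np.linspace(0.0, 10.0, 25)
        rng = np.random.default_rng(1)
        y_obs = simulate((0.8, 5.0), t_obs) + rng.normal(0.0, 0.05, t_obs.size)

        # The least-squares cost surface is multi-modal in general, so use a
        # global (stochastic) optimizer over the parameter bounds
        def cost(params):
            return np.sum((simulate(params, t_obs) - y_obs) ** 2)

        result = differential_evolution(cost, bounds=[(0.01, 5.0), (0.5, 20.0)],
                                        seed=1, tol=1e-8)
        print("estimated r, K:", result.x)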

  3. Evaluative studies in nuclear medicine research: emission computed tomography assessment. Final report, January 1-December 31, 1981

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potchen, E.J.; Harris, G.I.; Gift, D.A.

    The report provides information on an assessment of the potential short- and long-term benefits of emission computed tomography (ECT) in biomedical research and patient care. Work during the past year has been augmented by the development and use of an opinion survey instrument to reach a wider representation of knowledgeable investigators and users of this technology. This survey instrument is reproduced in an appendix. Information derived from analysis of the opinion survey, used in conjunction with results of independent staff studies of available sources, provides the basis for the discussions given in the following sections of PET applications in the brain, of technical factors, and of economic implications. Projections of capital and operating costs on a per-study basis were obtained from a computerized, pro forma accounting model and are compared with the survey cost estimates for both research and clinical modes of application. The results of a cash-flow model analysis of the relationship between the projected economic benefit of PET research to disease management and the costs associated with such research are presented and discussed.

  4. A validated methodology for determination of laboratory instrument computer interface efficacy

    NASA Astrophysics Data System (ADS)

    1984-12-01

    This report is intended to provide a methodology for determining when, and for which instruments, direct interfacing of laboratory instrument and laboratory computers is beneficial. This methodology has been developed to assist the Tri-Service Medical Information Systems Program Office in making future decisions regarding laboratory instrument interfaces. We have calculated the time savings required to reach a break-even point for a range of instrument interface prices and corresponding average annual costs. The break-even analyses used empirical data to estimate the number of data points run per day that are required to meet the break-even point. The results indicate, for example, that at a purchase price of $3,000, an instrument interface will be cost-effective if the instrument is utilized for at least 154 data points per day if operated in the continuous mode, or 216 points per day if operated in the discrete mode. Although this model can help to ensure that instrument interfaces are cost effective, additional information should be considered in making the interface decisions. A reduction in results transcription errors may be a major benefit of instrument interfacing.
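
    The break-even logic can be reproduced with a simple calculation (a sketch with hypothetical time savings, overhead, and labor rates; the report's actual cost parameters are not given here):

        def breakeven_points_per_day(interface_price, annual_overhead,
                                     seconds_saved_per_point, labor_rate_per_hour,
                                     working_days=250, amortization_years=5):
            """Data points per day at which an instrument interface pays for itself."""
            annual_cost = interface_price / amortization_years + annual_overhead
            saving_per_point = (seconds_saved_per_point / 3600.0) * labor_rate_per_hour
            return annual_cost / (saving_per_point * working_days)

        # Hypothetical example: $3,000 interface, $200/yr upkeep, 30 s saved per
        # data point, $15/hr technician time
        print(breakeven_points_per_day(3000, 200, 30, 15))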

  5. Ambient Human-to-Human Communication

    NASA Astrophysics Data System (ADS)

    Härmä, Aki

    In the current technological landscape, colored by environmental and security concerns, the logic of replacing travel with technical means of communication is indisputable. For example, consider a comparison between a normal family car and a video conference system with two laptop computers connected over the Internet. The power consumption of the car is approximately 25 kW, while the two computers and their share of the power consumption of the intermediate routers total in the range of 50 W. Therefore, meeting a person by car at a one-hour driving distance is equivalent to 1000 hours of video conference. The difference in the costs is also increasing. An estimate of the same cost difference between travel and video conference twenty years ago gave only three days of continuous video conference for the same situation [29]. The cost of a video conference depends on the duration of the session, while traveling depends only on the distance. However, in a strict economic and environmental sense, even a five-minute trip by car in 2008 becomes more economical than a video conference only when the meeting lasts more than three and a half days.
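
    The comparison rests on simple energy arithmetic; assuming the one-hour driving distance implies a two-hour round trip (an assumption of this sketch, not stated in the text), the quoted equivalence follows from

        \frac{2\,\mathrm{h} \times 25\,\mathrm{kW}}{50\,\mathrm{W}} = \frac{50\,\mathrm{kWh}}{0.05\,\mathrm{kW}} = 1000\,\mathrm{h}.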

  6. Global Economic Burden of Norovirus Gastroenteritis

    PubMed Central

    Bartsch, Sarah M.; Lopman, Benjamin A.; Ozawa, Sachiko; Hall, Aron J.; Lee, Bruce Y.

    2016-01-01

    Background Despite accounting for approximately one fifth of all acute gastroenteritis illnesses, norovirus has received comparatively less attention than other infectious pathogens. With several candidate vaccines under development, characterizing the global economic burden of norovirus could help funders, policy makers, public health officials, and product developers determine how much attention and how many resources to allocate to advancing these technologies to prevent and control norovirus. Methods We developed a computational simulation model to estimate the economic burden of norovirus in every country/area (233 total), stratified by WHO region and globally, from the health system and societal perspectives. We considered direct costs of illness (e.g., clinic visits and hospitalization) and productivity losses. Results Globally, norovirus resulted in a total of $4.2 billion (95% UI: $3.2–5.7 billion) in direct health system costs and $60.3 billion (95% UI: $44.4–83.4 billion) in societal costs per year. Disease amongst children <5 years cost society $39.8 billion, compared to $20.4 billion for all other age groups combined. Costs per norovirus illness varied by both region and age and were highest among adults ≥55 years. Productivity losses represented 84–99% of total costs, varying by region. While low- and middle-income countries and high-income countries had similar disease incidence (10,148 vs. 9,935 illnesses per 100,000 persons), high-income countries generated 62% of global health system costs. In sensitivity analysis, the probability of hospitalization had the largest impact on health system cost estimates ($2.8 billion globally, assuming no hospitalization costs), while the probability of missing productive days had the largest impact on societal cost estimates ($35.9 billion globally, with a 25% probability of missing productive days). Conclusions The total economic burden is greatest in young children, but the highest cost per illness is among older age groups in some regions. These large costs overwhelmingly come from productivity losses resulting from acute illness. Low-, middle-, and high-income countries all have a considerable economic burden, suggesting that norovirus gastroenteritis is a truly global economic problem. Our findings can help identify which age group(s) and/or geographic regions may benefit the most from interventions. PMID:27115736

  7. Global Economic Burden of Norovirus Gastroenteritis.

    PubMed

    Bartsch, Sarah M; Lopman, Benjamin A; Ozawa, Sachiko; Hall, Aron J; Lee, Bruce Y

    2016-01-01

    Despite accounting for approximately one fifth of all acute gastroenteritis illnesses, norovirus has received comparatively less attention than other infectious pathogens. With several candidate vaccines under development, characterizing the global economic burden of norovirus could help funders, policy makers, public health officials, and product developers determine how much attention and how many resources to allocate to advancing these technologies to prevent and control norovirus. We developed a computational simulation model to estimate the economic burden of norovirus in every country/area (233 total), stratified by WHO region and globally, from the health system and societal perspectives. We considered direct costs of illness (e.g., clinic visits and hospitalization) and productivity losses. Globally, norovirus resulted in a total of $4.2 billion (95% UI: $3.2-5.7 billion) in direct health system costs and $60.3 billion (95% UI: $44.4-83.4 billion) in societal costs per year. Disease amongst children <5 years cost society $39.8 billion, compared to $20.4 billion for all other age groups combined. Costs per norovirus illness varied by both region and age and were highest among adults ≥55 years. Productivity losses represented 84-99% of total costs, varying by region. While low- and middle-income countries and high-income countries had similar disease incidence (10,148 vs. 9,935 illnesses per 100,000 persons), high-income countries generated 62% of global health system costs. In sensitivity analysis, the probability of hospitalization had the largest impact on health system cost estimates ($2.8 billion globally, assuming no hospitalization costs), while the probability of missing productive days had the largest impact on societal cost estimates ($35.9 billion globally, with a 25% probability of missing productive days). The total economic burden is greatest in young children, but the highest cost per illness is among older age groups in some regions. These large costs overwhelmingly come from productivity losses resulting from acute illness. Low-, middle-, and high-income countries all have a considerable economic burden, suggesting that norovirus gastroenteritis is a truly global economic problem. Our findings can help identify which age group(s) and/or geographic regions may benefit the most from interventions.

  8. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zuwei; Zhao, Haibo, E-mail: klinsmannzhb@163.com; Zheng, Chuguang

    2015-01-15

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel, and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and to reduce the number of time loopings as far as possible. Three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and the cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for the acceptance-rejection process by single-looping over all particles, and meanwhile the mean time step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially weighted PBMC simulations (based on the Markov jump model) is reduced greatly, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by multiple cores on a GPU that can execute massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10,000 simulation particles per cell). These accelerating approaches to PBMC are demonstrated in a physically realistic Brownian coagulation case. The computational accuracy is validated against a benchmark solution from the discrete-sectional method. The simulation results show that the comprehensive approach attains a very favorable improvement in cost without sacrificing computational accuracy.
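
    The acceptance-rejection step at the heart of such PBMC simulations can be illustrated with a stripped-down serial sketch (single cell, equally weighted particles, a placeholder continuum-Brownian-shaped kernel; the paper's differentially weighted Markov jump scheme, time-step bookkeeping, and GPU parallelization are not reproduced):

        import numpy as np

        rng = np.random.default_rng(0)

        def kernel(v1, v2):
            """Placeholder coagulation kernel (symmetric, size-dependent)."""
            return (v1 ** (1/3) + v2 ** (1/3)) * (v1 ** (-1/3) + v2 ** (-1/3))

        def coagulation_step(volumes, k_max):
            """One acceptance-rejection coagulation event (majorant rate k_max)."""
            n = len(volumes)
            while True:
                i, j = rng.integers(0, n, size=2)
                if i == j:
                    continue
                # Accept the candidate pair with probability K(i, j) / k_max
                if rng.random() < kernel(volumes[i], volumes[j]) / k_max:
                    volumes[i] += volumes[j]          # merge j into i
                    volumes[j] = volumes[-1]          # remove j (swap with last)
                    return volumes[:-1]

        # Monodisperse initial population of 10,000 particles of unit volume
        v = np.ones(10_000)
        for _ in range(5_000):
            # For this kernel shape the pair rate is maximized at the extreme size ratio
            k_max = kernel(v.min(), v.max())
            v = coagulation_step(v, k_max)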

  9. Estimating the expected value of partial perfect information in health economic evaluations using integrated nested Laplace approximation.

    PubMed

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2016-10-15

    The Expected Value of Partial Perfect Information (EVPPI) is a decision-theoretic measure of the 'cost' of parametric uncertainty, used principally in health economic decision making. Despite this decision-theoretic grounding, the uptake of EVPPI calculations in practice has been slow. This is in part due to the prohibitive computational time required to estimate the EVPPI via Monte Carlo simulations. However, recent developments have demonstrated that the EVPPI can be estimated by non-parametric regression methods, which have significantly decreased the computation time required to approximate the EVPPI. Under certain circumstances, high-dimensional Gaussian Process (GP) regression is suggested, but this can still be prohibitively expensive. Applying fast computation methods developed in spatial statistics using Integrated Nested Laplace Approximations (INLA) and projecting from a high-dimensional into a low-dimensional input space allows us to decrease the computation time for fitting these high-dimensional GPs, often substantially. We demonstrate that the EVPPI calculated using our method for GP regression is in line with the standard GP regression method and that, despite the apparent methodological complexity of this new method, R functions are available in the package BCEA to implement it simply and efficiently. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
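
    The regression idea behind these fast EVPPI estimators can be illustrated with a toy decision problem. In the sketch below the net-benefit model, willingness-to-pay value, and parameter distributions are invented, and a simple cubic polynomial smoother stands in for the GP/INLA regression used in the paper; the final line applies the standard regression-based EVPPI formula E[max_d E(NB_d | phi)] - max_d E(NB_d).

```python
import numpy as np

rng = np.random.default_rng(2)
S = 5000  # probabilistic sensitivity analysis (PSA) samples

# Invented PSA for two options: net benefit depends on a parameter of interest
# (phi) plus remaining "nuisance" uncertainty. These distributions are assumptions.
phi = rng.normal(0.6, 0.1, S)             # parameter of interest (e.g., efficacy)
noise = rng.normal(0.0, 2000.0, S)        # remaining parametric uncertainty
wtp = 25_000.0                            # willingness to pay per QALY
nb0 = np.zeros(S)                         # net benefit: standard care (baseline)
nb1 = wtp * (phi - 0.5) + noise - 3000.0  # net benefit: new treatment

# Regression step: estimate E[NB_d | phi] with a simple smoother (a cubic
# polynomial here, standing in for the GP/INLA regression of the paper).
fitted1 = np.polyval(np.polyfit(phi, nb1, 3), phi)
fitted0 = np.zeros(S)

evppi = np.mean(np.maximum(fitted0, fitted1)) - max(fitted0.mean(), fitted1.mean())
print(f"EVPPI for phi: {evppi:.0f} (monetary units)")
```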

  10. A model for the cost of doing a cost estimate

    NASA Technical Reports Server (NTRS)

    Remer, D. S.; Buchanan, H. R.

    1992-01-01

    A model for estimating the cost required to do a cost estimate for Deep Space Network (DSN) projects that range from $0.1 to $100 million is presented. The cost of the cost estimate in thousands of dollars, C_E, is found to be approximately given by C_E = K * (C_p)^0.35, where C_p is the cost of the project being estimated in millions of dollars and K is a constant depending on the accuracy of the estimate. For an order-of-magnitude estimate, K = 24; for a budget estimate, K = 60; and for a definitive estimate, K = 115. That is, for a specific project, the cost of doing a budget estimate is about 2.5 times as much as that for an order-of-magnitude estimate, and a definitive estimate costs about twice as much as a budget estimate. Use of this model should help provide the level of resources required for doing cost estimates and, as a result, provide insights towards more accurate estimates with less potential for cost overruns.
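
    The relation is simple enough to encode directly; the function below transcribes the published formula and K values, with the $10M project in the usage example purely illustrative.

```python
def cost_of_estimate(project_cost_millions: float, estimate_class: str) -> float:
    """Cost of preparing a cost estimate, in thousands of dollars,
    using C_E = K * C_p**0.35 with K taken from the estimate class."""
    K = {"order-of-magnitude": 24, "budget": 60, "definitive": 115}[estimate_class]
    return K * project_cost_millions ** 0.35

# Example: a $10M project
for cls in ("order-of-magnitude", "budget", "definitive"):
    print(f"{cls}: ~${cost_of_estimate(10, cls):.0f}k")
```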

  11. Impact of DRG billing system on health budget consumption in percutaneous treatment of mitral valve regurgitation in heart failure.

    PubMed

    Palmieri, Vittorio; Baldi, Cesare; Di Blasi, Paola E; Citro, Rodolfo; Di Lorenzo, Emilio; Bellino, Elisabetta; Preziuso, Feliciano; Ranaudo, Carlo; Sauro, Rosario; Rosato, Giuseppe

    2015-02-01

    Percutaneous correction of mitral regurgitation (MR) by the MitraClip (Abbott Vascular, Abbott Park, Illinois, USA) trans-catheter procedure (MTP) may represent a treatment for an unmet need in heart failure (HF), but with a largely unclear economic impact. This study estimated the economic impact of the MTP in common practice using the diagnosis-related group (DRG) billing system, with duration and average cost per day of hospitalization as the main drivers. Life expectancy was estimated based on the Seattle Heart Failure Model. Quality of life was derived from standard questionnaires to compute costs per quality-adjusted life-year. Over 5535 discharges between 2012-2013, HF as DRG 127 was the main diagnosis in 20%, yielding a reimbursement of €3052.00/case; among the DRG 127 cases, MR by ICD-9 coding was found in 12%. Duration of hospitalization was longer for DRG 127 with MR than without (9 vs 8 days, p < 0.05). In-hospital HF management most frequently generated a deficit, in particular in the presence of MR, because hospitalization costs exceeded reimbursement. MTP to treat MR allowed a DRG 104-related reimbursement of €24,675.00. In a cohort of 34 HF patients treated for MR by MTP, global budget consumption at 2-year follow-up was 2-fold higher than that simulated for medically managed cases. The extrapolated cost per quality-adjusted life-year (QALY) for MTP at 2-year follow-up was ∼€16,300. Based on DRG and hospitalization costing estimates, MTP might be cost-effective in selected HF patients with MR suitable for such a specific treatment, granted that those patients have a clinical profile predicting a high likelihood of post-procedural clinical stability over a sufficiently long follow-up.
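
    Cost-per-QALY figures of this kind come from a simple incremental ratio. The numbers in the example below are hypothetical, chosen only to land near the reported ~€16,300 order of magnitude; they are not the study's data.

```python
def icer(cost_new, cost_std, qaly_new, qaly_std):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

# Hypothetical 2-year per-patient figures (not the study's data)
print(f"cost per QALY: ~EUR {icer(45_000, 28_700, 1.6, 0.6):,.0f}")
```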

  12. Operating Dedicated Data Centers - Is It Cost-Effective?

    NASA Astrophysics Data System (ADS)

    Ernst, M.; Hogue, R.; Hollowell, C.; Strecker-Kellog, W.; Wong, A.; Zaytsev, A.

    2014-06-01

    The advent of cloud computing services such as Amazon's EC2 and Google Compute Engine has elicited comparisons with dedicated computing clusters. Discussions on appropriate usage of cloud resources (both academic and commercial) and their costs have ensued. This presentation discusses a detailed analysis of the costs of operating and maintaining the RACF (RHIC and ATLAS Computing Facility) compute cluster at Brookhaven National Lab and compares them with the cost of cloud computing resources under various usage scenarios. An extrapolation of the likely future cost effectiveness of dedicated computing resources is also presented.
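
    A comparison of this kind typically reduces to an amortized cost per delivered core-hour for the owned cluster versus the on-demand price of an equivalent cloud instance. The sketch below uses invented capital, operating, utilization, and price figures (not RACF's actual numbers) to show the basic arithmetic.

```python
def dedicated_cost_per_core_hour(capex, lifetime_years, annual_opex, cores, utilization):
    """Amortized cost per delivered core-hour of an owned cluster."""
    delivered_hours = lifetime_years * 8760 * cores * utilization
    return (capex + annual_opex * lifetime_years) / delivered_hours

# Illustrative numbers only (assumptions, not measured facility costs)
owned = dedicated_cost_per_core_hour(capex=2_000_000, lifetime_years=4,
                                     annual_opex=600_000, cores=10_000,
                                     utilization=0.85)
cloud = 0.05  # assumed on-demand price per core-hour
print(f"owned: ${owned:.3f}/core-hour vs cloud: ${cloud:.3f}/core-hour")
```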

  13. Cost Effectiveness of Interventions to Promote Screening for Colorectal Cancer: A Randomized Trial

    PubMed Central

    Misra, Swati; Chan, Wenyaw; Chang, Yu-Chia; Bartholomew, L. Kay; Greisinger, Anthony; McQueen, Amy; Vernon, Sally W.

    2011-01-01

    Objectives Screening for colorectal cancer is considered cost effective, but is underutilized in the U.S. Information on the efficiency of "tailored interventions" to promote colorectal cancer screening in primary care settings is limited. The paper reports the results of a cost effectiveness analysis that compared a survey-only control group to a Centers for Disease Control (CDC) web-based intervention (Screen for Life) and to a tailored interactive computer-based intervention. Methods A randomized controlled trial of people aged 50 and over was conducted to test the interventions. The sample comprised 1224 participants 50-70 years of age, recruited from Kelsey-Seybold Clinic, a large multi-specialty clinic in Houston, Texas. Screening status was obtained by medical chart review after a 12-month follow-up period. An "intention to treat" analysis and micro-costing from the patient and provider perspectives were used to estimate the costs and effects. Analysis of statistical uncertainty was conducted using nonparametric bootstrapping. Results The estimated cost of implementing the web-based intervention was $40 per person and the cost of the tailored intervention was $45 per person. The additional cost per person screened for the web-based intervention compared to no intervention was $2602, and the tailored intervention was no more effective than the web-based strategy. Conclusions The tailored intervention was less cost-effective than the web-based intervention for colorectal cancer screening promotion. The web-based intervention was less cost-effective than previous studies of in-reach colorectal cancer screening promotion. Researchers need to continue developing and evaluating the effectiveness and cost-effectiveness of interventions to increase colorectal cancer screening. PMID:21617335
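
    The analysis pairs a difference-in-means cost-effectiveness ratio with a nonparametric bootstrap for its statistical uncertainty. The sketch below reproduces that pattern on synthetic per-person data; arm sizes, cost distributions, and screening rates are invented, so the resulting ratio will not match the reported $2602.

```python
import numpy as np

rng = np.random.default_rng(3)

def cost_per_extra_screened(cost_int, cost_ctrl, scr_int, scr_ctrl):
    """Extra cost per additional person screened (difference-in-means ratio)."""
    return (cost_int.mean() - cost_ctrl.mean()) / (scr_int.mean() - scr_ctrl.mean())

# Synthetic per-person data standing in for the two trial arms (values invented)
n = 400
cost_ctrl = rng.gamma(2.0, 2.0, n)        # survey-only arm costs ($)
cost_int = 40 + rng.gamma(2.0, 2.0, n)    # web-based arm costs ($)
scr_ctrl = rng.binomial(1, 0.30, n)       # screened within 12 months? (control)
scr_int = rng.binomial(1, 0.40, n)        # screened within 12 months? (intervention)

# Nonparametric bootstrap: resample each arm with replacement
boots = []
for _ in range(2000):
    bi, bc = rng.integers(0, n, n), rng.integers(0, n, n)
    boots.append(cost_per_extra_screened(cost_int[bi], cost_ctrl[bc],
                                         scr_int[bi], scr_ctrl[bc]))
lo, hi = np.percentile(boots, [2.5, 97.5])
point = cost_per_extra_screened(cost_int, cost_ctrl, scr_int, scr_ctrl)
print(f"incremental cost per person screened: ${point:.0f} "
      f"(bootstrap 95% CI ${lo:.0f} to ${hi:.0f})")
```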

  14. High-temperature molten salt solar thermal systems

    NASA Astrophysics Data System (ADS)

    Copeland, R. J.; Leach, J. W.; Stern, G.

    Conceptual designs of a solar thermal central receiver and a thermal storage subsystem were analyzed to estimate thermal losses and to assess the economics of high-temperature applications with molten salt transport fluids. Modifications to a receiver design being developed by the Martin Marietta Corporation were studied to investigate possible means of improving efficiency at high temperatures. Computations based on a conceptual design of internally insulated high-temperature storage tanks were made to estimate cost and performance. A study of a potential application of the system for thermochemical production of hydrogen indicates that thermal storage at 1100 C would be economically attractive.

  15. Design study of wind turbines 50 kW to 3000 kW for electric utility applications. Volume 2: Analysis and design

    NASA Technical Reports Server (NTRS)

    1976-01-01

    All possible overall system configurations, operating modes, and subsystem concepts for a wind turbine configuration for cost-effective generation of electrical power were evaluated for both technical feasibility and compatibility with utility networks, as well as for economic attractiveness. A design optimization computer code was developed to determine the cost sensitivity of the various design features, and thus establish the configuration and design conditions that would minimize the generated energy costs. Preliminary designs of both a 500 kW unit and a 1500 kW unit, operating in 12 mph and 18 mph median wind speeds respectively, were developed. The various design features and components evaluated are described, and the rationale employed to select the final design configuration is given. All pertinent technical performance data and component cost data are included. The costs of all major subassemblies are estimated and the resultant energy costs for both the 500 kW and 1500 kW units are calculated.

  16. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution.

    PubMed

    Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn

    2013-03-06

    Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational effort to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
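
    Stepping-stone sampling estimates a (log) marginal likelihood from draws taken along a sequence of power posteriors between the prior (beta = 0) and the posterior (beta = 1). The toy sketch below uses a conjugate normal model, where each power posterior can be sampled exactly, so the mechanics are visible without MCMC; the data, prior, temperature schedule, and the comparison model (mu fixed at 0) are illustrative assumptions, not the phylogenetic setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(0.3, 1.0, 50)          # toy data: y_i ~ N(mu, 1)
n, ybar = len(y), y.mean()

def log_lik(mu):
    # Sum of log N(y_i | mu, 1), vectorized over an array of mu values
    mu = np.atleast_1d(mu)
    return -0.5 * ((y[:, None] - mu) ** 2).sum(axis=0) - 0.5 * n * np.log(2 * np.pi)

# Stepping-stone estimate of log marginal likelihood for M1: mu ~ N(0, 1)
betas = np.linspace(0.0, 1.0, 33) ** 3    # temperatures concentrated near the prior
M = 2000                                  # draws per rung
log_ml1 = 0.0
for b0, b1 in zip(betas[:-1], betas[1:]):
    prec = 1.0 + b0 * n                   # conjugate power posterior at beta = b0
    mu_draws = rng.normal(b0 * n * ybar / prec, np.sqrt(1.0 / prec), M)
    w = (b1 - b0) * log_lik(mu_draws)
    log_ml1 += np.log(np.mean(np.exp(w - w.max()))) + w.max()   # log-sum-exp

# Model M0 fixes mu = 0, so its marginal likelihood is the likelihood at mu = 0
log_ml0 = log_lik(0.0)[0]
print(f"log Bayes factor (M1 vs M0): {log_ml1 - log_ml0:.2f}")
```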

  17. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution

    PubMed Central

    2013-01-01

    Background Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model’s marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. Results We here assess the original ‘model-switch’ path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model’s marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. Conclusions We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational effort to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation. PMID:23497171

  18. Computational modelling of cellular level metabolism

    NASA Astrophysics Data System (ADS)

    Calvetti, D.; Heino, J.; Somersalo, E.

    2008-07-01

    The steady and stationary state inverse problems consist of estimating the reaction and transport fluxes, blood concentrations, and possibly the rates of change of some of the concentrations based on data which are often scarce, noisy, and sampled over a population. The Bayesian framework provides a natural setting for the solution of this inverse problem, because a priori knowledge about the system itself and the unknown reaction fluxes and transport rates can compensate for the insufficiency of measured data, provided that the computational costs do not become prohibitive. This article identifies the computational challenges which have to be met when analyzing the steady and stationary states of a multicompartment model for cellular metabolism and suggests stable and efficient ways to handle the computations. The outline of a computational tool based on the Bayesian paradigm for the simulation and analysis of complex cellular metabolic systems is also presented.

  19. A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation

    NASA Astrophysics Data System (ADS)

    Qiang, Z.; Zeng, L.; Wu, L.

    2016-12-01

    Due to the strong spatial heterogeneity of landfills, uncertainty is ubiquitous in the gas transport process within a landfill. To accurately characterize landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate measurements, e.g., the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which imposes a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupling model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computational cost, a functional ANOVA (analysis of variance) decomposition is conducted, and only the first-order ANOVA components are retained in the PCE. Illustrated with numerical case studies, the proposed method shows significant superiority in computational efficiency compared with the traditional MC based iterative EnKF. The developed method has promising potential for reliable prediction and management of landfill gas production.
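
    The sketch below shows the iterative, ensemble-based end of this idea: an ES-MDA-style iterative Kalman update that estimates a scalar log-permeability from simulated pressure data. The forward model, noise levels, and ensemble size are invented, and a plain Monte Carlo ensemble stands in for the paper's polynomial chaos surrogate, so this is a generic illustration of the update step rather than the PCIKF itself.

```python
import numpy as np

rng = np.random.default_rng(5)

def forward(log_k):
    # Hypothetical forward model: gas pressures at 3 sensors as smooth functions
    # of log-permeability (stands in for the liquid-gas transport solver)
    return np.stack([np.exp(-log_k), 2.0 / (1.0 + log_k**2), np.tanh(log_k)], axis=-1)

true_log_k = 1.2
R = 0.01**2 * np.eye(3)                              # observation error covariance
obs = forward(true_log_k) + rng.multivariate_normal(np.zeros(3), R)

n_ens, n_iter = 200, 4
ens = rng.normal(0.0, 1.0, n_ens)                    # prior ensemble of log-permeability
for _ in range(n_iter):                              # multiple data assimilation steps
    sim = forward(ens)                               # (n_ens, 3) simulated pressures
    C = np.cov(np.column_stack([ens[:, None], sim]).T)
    C_md, C_dd = C[0, 1:], C[1:, 1:]                 # param-data and data-data covariances
    R_inf = n_iter * R                               # inflated noise (ES-MDA style)
    K = C_md @ np.linalg.inv(C_dd + R_inf)           # Kalman gain (length 3)
    perturbed = obs + rng.multivariate_normal(np.zeros(3), R_inf, n_ens)
    ens = ens + (perturbed - sim) @ K                # update every ensemble member

print(f"true log-k: {true_log_k:.2f}, posterior mean: {ens.mean():.2f} ± {ens.std():.2f}")
```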

  20. An attempt to estimate the economic value of the loss of human life due to landslide and flood events in Italy

    NASA Astrophysics Data System (ADS)

    Salvati, Paola; Bianchi, Cinzia; Hussin, Haydar; Guzzetti, Fausto

    2013-04-01

    Landslide and flood events in Italy cause widespread and severe damage to buildings and infrastructure, and frequently result in the loss of human life. Cost estimates of past natural disasters generally refer to the amount of public money used for the restoration of the direct damage, and most commonly do not account for all disaster impacts. Other cost components, including indirect losses, are difficult to quantify and, among these, the cost of human lives. The value of a specific human life can be identified with the value of a statistical life (VSL), defined as the value that an individual places on a marginal change in their likelihood of death. This is different from the value of an actual life. Based on information on fatal car accidents in Italy, we evaluate the cost that society suffers for the loss of life due to landslide and flood events. Using a catalogue of fatal landslide and flood events for which the gender and age of the fatalities are known, we determine the cost that society suffers for the loss of their lives. For this purpose, we calculate the economic value in terms of the total income that the working-age population involved in the fatal events would have earned over the course of their lives. For the computation, we use the per-capita income, calculated as the ratio between GDP and population in Italy for each year since 1980. Children and retired people pose problems for this approach, and we decided not to include them in our estimates.
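
    The income-based valuation reduces to remaining working years multiplied by per-capita income. The helper below is a hypothetical illustration of that rule, including the exclusion of children and retirees described above; the working-age threshold, retirement age, and per-capita income figure are assumptions, not the study's inputs.

```python
def lifetime_income_loss(age_at_death, per_capita_income,
                         working_age=18, retirement_age=65):
    """Foregone income for a fatality, valued at per-capita income per remaining
    working year; children and retirees are excluded, as in the study."""
    if age_at_death < working_age or age_at_death >= retirement_age:
        return 0.0
    return (retirement_age - age_at_death) * per_capita_income

# Hypothetical example: a 40-year-old victim, per-capita income of EUR 27,000
print(f"EUR {lifetime_income_loss(40, 27_000):,.0f}")
```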
