A model for the cost of doing a cost estimate
NASA Technical Reports Server (NTRS)
Remer, D. S.; Buchanan, H. R.
1992-01-01
A model for estimating the cost required to do a cost estimate for Deep Space Network (DSN) projects that range from $0.1 to $100 million is presented. The cost of the cost estimate in thousands of dollars, C_E, is found to be given approximately by C_E = K (C_P)^0.35, where C_P is the cost of the project being estimated in millions of dollars and K is a constant that depends on the accuracy of the estimate. For an order-of-magnitude estimate, K = 24; for a budget estimate, K = 60; and for a definitive estimate, K = 115. That is, for a specific project, a budget estimate costs about 2.5 times as much as an order-of-magnitude estimate, and a definitive estimate costs about twice as much as a budget estimate. Use of this model should help provide the level of resources required for doing cost estimates and, as a result, provide insights towards more accurate estimates with less potential for cost overruns.
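As an illustration, the relation above is simple enough to evaluate directly; the K values come from the abstract, and the $10M example project is arbitrary:

```python
# Cost of a cost estimate, C_E (in $K), for a project costing C_P (in $M):
# C_E = K * C_P**0.35, with K set by the accuracy class (per the abstract).
K = {"order-of-magnitude": 24, "budget": 60, "definitive": 115}

def estimate_cost_thousands(project_cost_millions, accuracy):
    return K[accuracy] * project_cost_millions ** 0.35

for kind in K:
    print(f"{kind:>18}: ${estimate_cost_thousands(10.0, kind):.0f}K")
# For a $10M project: ~$54K, ~$134K, ~$257K -- reproducing the ~2.5x and ~2x
# ratios between accuracy classes noted in the abstract.
```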
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, James M.; Prescott, Ryan; Dawson, Jericah M.
2014-11-01
Sandia National Laboratories has prepared a ROM cost estimate for budgetary planning for the IDC Reengineering Phase 2 & 3 effort, based on leveraging a fully funded, Sandia executed NDC Modernization project. This report provides the ROM cost estimate and describes the methodology, assumptions, and cost model details used to create the ROM cost estimate. ROM Cost Estimate Disclaimer Contained herein is a Rough Order of Magnitude (ROM) cost estimate that has been provided to enable initial planning for this proposed project. This ROM cost estimate is submitted to facilitate informal discussions in relation to this project and is NOT intended to commit Sandia National Laboratories (Sandia) or its resources. Furthermore, as a Federally Funded Research and Development Center (FFRDC), Sandia must be compliant with the Anti-Deficiency Act and operate on a full-cost recovery basis. Therefore, while Sandia, in conjunction with the Sponsor, will use best judgment to execute work and to address the highest risks and most important issues in order to effectively manage within cost constraints, this ROM estimate and any subsequent approved cost estimates are on a 'full-cost recovery' basis. Thus, work can neither commence nor continue unless adequate funding has been accepted and certified by DOE.
CUECost workbook development documentation
This is a user's manual for the Coal Utility Environmental Cost (CUECost) workbook to estimate installed capital and annualized costs. The CUECost workbook produces rough-order-of-magnitude (ROM) cost estimates (+/-30% accuracy) of the installed capital and annualized operating costs…
Government conceptual estimating for contracting and management
NASA Technical Reports Server (NTRS)
Brown, J. A.
1986-01-01
The use of the Aerospace Price Book, a cost index, and conceptual cost estimating for cost-effective design and construction of space facilities is discussed. The price book consists of over 200 commonly used conceptual elements and 100 systems summaries of projects such as launch pads, processing facilities, and air locks. The cost index is composed of three divisions: (1) bid summaries of major Shuttle projects, (2) budget cost data sheets, and (3) cost management summaries; each of these divisions is described. Conceptual estimates of facilities and ground support equipment are required to provide the most probable project cost for budget, funding, and project approval purposes. Similar buildings, systems, and elements already designed are located in the cost index in order to make the best rough order of magnitude conceptual estimates for development of Space Shuttle facilities. An example displaying the applicability of the conceptual cost estimating procedure for the development of the KSC facilities is presented.
Deep Borehole Disposal Remediation Costs for Off-Normal Outcomes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finger, John T.; Cochran, John R.; Hardin, Ernest
2015-08-17
This memo describes rough-order-of-magnitude (ROM) cost estimates for a set of off-normal (accident) scenarios, as defined for two waste package emplacement method options for deep borehole disposal: drill-string and wireline. It summarizes the different scenarios and the assumptions made for each, with respect to fishing, decontamination, remediation, etc.
Goldstein, Joshua H.; Thogmartin, Wayne E.; Bagstad, Kenneth J.; Dubovsky, James A.; Mattsson, Brady J.; Semmens, Darius J.; López-Hoffman, Laura; Diffendorfer, James E.
2014-01-01
Migratory species provide economically beneficial ecosystem services to people throughout their range, yet often, information is lacking about the magnitude and spatial distribution of these benefits at regional scales. We conducted a case study for Northern Pintails (hereafter pintail) in which we quantified regional and sub-regional economic values of subsistence harvest to indigenous communities in Arctic and sub-Arctic North America. As a first step, we used the replacement cost method to quantify the cost of replacing pintail subsistence harvest with the most similar commercially available protein (chicken). For an estimated annual subsistence harvest of ~15,000 pintail, our mean estimate of the total replacement cost was ~$63,000 yr−1 ($2010 USD), with sub-regional values ranging from $263 yr−1 to $21,930 yr−1. Our results provide an order-of-magnitude, conservative estimate of one component of the regional ecosystem-service values of pintails, providing perspective on how spatially explicit values can inform migratory species conservation.
Manpower/cost estimation model: Automated planetary projects
NASA Technical Reports Server (NTRS)
Kitchen, L. D.
1975-01-01
A manpower/cost estimation model is developed which is based on a detailed level of financial analysis of over 30 million raw data points, which are then compacted by more than three orders of magnitude to the level at which the model is applicable. The major parameter of expenditure is manpower (specifically direct labor hours) for all spacecraft subsystem and technical support categories. The resultant model is able to provide a mean absolute error of less than fifteen percent for the eight programs comprising the model database. The model includes cost-saving inheritance factors, broken down into four levels, for estimating follow-on type programs where hardware and design inheritance are evident or expected.
X-1 to X-Wings: Developing a Parametric Cost Model
NASA Technical Reports Server (NTRS)
Sterk, Steve; McAtee, Aaron
2015-01-01
In today's cost-constrained environment, NASA needs an X-Plane database and parametric cost model that can quickly provide rough order of magnitude predictions of cost from initial concept to first flight of potential X-Plane aircraft. This paper describes the steps taken in developing such a model and reports the results. The challenges encountered in the collection of historical data and recommendations for future database management are discussed. A step-by-step discussion of the development of Cost Estimating Relationships (CERs) is then covered.
NASA Technical Reports Server (NTRS)
Devito, D. M.
1981-01-01
A low-cost GPS civil-user mobile terminal, whose purchase cost is roughly an order of magnitude less than estimates for its military counterpart, is considered, with focus on ground station requirements for position monitoring of civil users who require this capability, and on civil-user navigation and location-monitoring requirements. Existing survey literature was examined to ascertain the potential users of a low-cost NAVSTAR receiver and to estimate their number, function, and accuracy requirements. System concepts are defined for low-cost user equipment for in-situ navigation and for the retransmission of low-data-rate positioning data via a geostationary satellite to a central computing facility.
Improved rapid magnitude estimation for a community-based, low-cost MEMS accelerometer network
Chung, Angela I.; Cochran, Elizabeth S.; Kaiser, Anna E.; Christensen, Carl M.; Yildirim, Battalgazi; Lawrence, Jesse F.
2015-01-01
Immediately following the Mw 7.2 Darfield, New Zealand, earthquake, over 180 Quake‐Catcher Network (QCN) low‐cost micro‐electro‐mechanical systems accelerometers were deployed in the Canterbury region. Using data recorded by this dense network from 2010 to 2013, we significantly improved the QCN rapid magnitude estimation relationship. The previous scaling relationship (Lawrence et al., 2014) did not accurately estimate the magnitudes of nearby (<35 km) events. The new scaling relationship estimates earthquake magnitudes within 1 magnitude unit of the GNS Science GeoNet earthquake catalog magnitudes for 99% of the events tested, within 0.5 magnitude units for 90% of the events, and within 0.25 magnitude units for 57% of the events. These magnitudes are reliably estimated within 3 s of the initial trigger recorded on at least seven stations. In this report, we present the methods used to calculate a new scaling relationship and demonstrate the accuracy of the revised magnitude estimates using a program that is able to retrospectively estimate event magnitudes using archived data.
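A scaling relationship of this kind is typically obtained by regressing catalog magnitudes on peak amplitude and distance. The sketch below is a generic illustration with invented coefficients and synthetic data, not the QCN relationship itself:

```python
import numpy as np

# Hypothetical fit of M = a*log10(peak_amplitude) + b*log10(distance) + c
# by least squares against catalog magnitudes; all data are synthetic.
rng = np.random.default_rng(0)
n = 500
logA = rng.uniform(-4, -1, n)          # log10 peak acceleration (g)
logR = rng.uniform(0.5, 2.2, n)        # log10 distance (km)
M_cat = 1.2 * logA + 1.5 * logR + 5.0 + rng.normal(0, 0.25, n)

X = np.column_stack([logA, logR, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, M_cat, rcond=None)
a, b, c = coef
resid = M_cat - X @ coef
print(f"a={a:.2f} b={b:.2f} c={c:.2f}, sigma={resid.std():.2f}")
# Fraction of events recovered within 0.5 magnitude units of catalog:
print(np.mean(np.abs(resid) <= 0.5))
```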
Stringent Mitigation Policy Implied By Temperature Impacts on Economic Growth
NASA Astrophysics Data System (ADS)
Moore, F.; Turner, D.
2014-12-01
Integrated assessment models (IAMs) compare the costs of greenhouse gas mitigation with damages from climate change in order to evaluate the social welfare implications of climate policy proposals and inform optimal emissions reduction trajectories. However, these models have been criticized for lacking a strong empirical basis for their damage functions, which do little to alter assumptions of sustained GDP growth, even under extreme temperature scenarios. We implement empirical estimates of temperature effects on GDP growth-rates in the Dynamic Integrated Climate and Economy (DICE) model via two pathways, total factor productivity (TFP) growth and capital depreciation. Even under optimistic adaptation assumptions, this damage specification implies that optimal climate policy involves the elimination of emissions in the near future, the stabilization of global temperature change below 2°C, and a social cost of carbon (SCC) an order of magnitude larger than previous estimates. A sensitivity analysis shows that the magnitude of growth effects, the rate of adaptation, and the dynamic interaction between damages from warming and GDP are three critical uncertainties and an important focus for future research.
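The key mechanism, damages acting on the growth rate rather than the level of GDP, compounds over time. A toy sketch of that mechanism (all parameters invented, not the DICE calibration):

```python
# Toy sketch (not the actual DICE implementation): warming reduces the
# growth *rate* of output rather than its level. All parameters invented.
temps = [1.2, 1.5, 1.8, 2.1, 2.4, 2.7, 3.0, 3.3, 3.6]  # decadal mean warming, degC
gamma = 0.005            # growth-rate loss per degC (assumption)
g0 = 0.02                # baseline annual growth rate
gdp_damaged = gdp_base = 100.0
for T in temps:          # nine decades
    gdp_base *= (1 + g0) ** 10
    gdp_damaged *= (1 + g0 - gamma * T) ** 10
print(round(gdp_base, 1), round(gdp_damaged, 1))
# Growth-rate damages compound, so the gap widens every decade --
# the mechanism behind the much larger social cost of carbon.
```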
NASA Technical Reports Server (NTRS)
Ivanco, Marie L.; Domack, Marcia S.; Stoner, Mary Cecilia; Hehir, Austin R.
2016-01-01
Low Technology Readiness Levels (TRLs) and high levels of uncertainty make it challenging to develop cost estimates of new technologies in the R&D phase. It is however essential for NASA to understand the costs and benefits associated with novel concepts, in order to prioritize research investments and evaluate the potential for technology transfer and commercialization. This paper proposes a framework to perform a cost-benefit analysis of a technology in the R&D phase. This framework was developed and used to assess the Advanced Near Net Shape Technology (ANNST) manufacturing process for fabricating integrally stiffened cylinders. The ANNST method was compared with the conventional multi-piece metallic construction and composite processes for fabricating integrally stiffened cylinders. Following the definition of a case study for a cryogenic tank cylinder of specified geometry, data was gathered through interviews with Subject Matter Experts (SMEs), with particular focus placed on production costs and process complexity. This data served as the basis to produce process flowcharts and timelines, mass estimates, and rough order-of-magnitude cost and schedule estimates. The scalability of the results was subsequently investigated to understand the variability of the results based on tank size. Lastly, once costs and benefits were identified, the Analytic Hierarchy Process (AHP) was used to assess the relative value of these achieved benefits for potential stakeholders. These preliminary, rough order-of-magnitude results predict a 46 to 58 percent reduction in production costs and a 7-percent reduction in weight over the conventional metallic manufacturing technique used in this study for comparison. Compared to the composite manufacturing technique, these results predict cost savings of 35 to 58 percent; however, the ANNST concept was heavier. In this study, the predicted return on investment of equipment required for the ANNST method was ten cryogenic tank barrels when compared with conventional metallic manufacturing. The AHP study results revealed that decreased final cylinder mass and improved quality assurance were the most valued benefits of cylinder manufacturing methods, therefore emphasizing the relevance of the benefits achieved with the ANNST process for future projects.
Hadwin, Paul J; Peterson, Sean D
2017-04-01
The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique at dramatically lower computational cost. Based upon the test cases explored, the EKF is comparable in terms of accuracy to the particle filter technique when greater than 6000 particles are employed; if fewer particles are employed, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by 2 orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.
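For readers unfamiliar with the trade-off, the sketch below is a minimal scalar EKF tracking a slowly drifting parameter through a nonlinear observation; it is a generic illustration, not the paper's vocal fold model:

```python
import numpy as np

# Minimal extended Kalman filter tracking a slowly varying scalar parameter
# theta observed through a nonlinear measurement y = h(theta) + noise.
def h(theta):            # toy nonlinear observation
    return theta ** 2

def dh(theta):           # its Jacobian
    return 2 * theta

rng = np.random.default_rng(1)
theta_true, Q, R = 1.0, 1e-4, 0.05    # process and measurement variances (assumed)
theta_hat, P = 0.5, 1.0               # initial estimate and variance
for t in range(200):
    theta_true += rng.normal(0, Q ** 0.5)        # slow random-walk drift
    y = h(theta_true) + rng.normal(0, R ** 0.5)  # noisy measurement
    P += Q                                       # predict (random-walk model)
    H = dh(theta_hat)                            # linearize about the estimate
    K = P * H / (H * P * H + R)                  # Kalman gain
    theta_hat += K * (y - h(theta_hat))          # update
    P *= (1 - K * H)
print(round(theta_true, 3), round(theta_hat, 3))
# sqrt(P) gives the (approximate) credibility interval the abstract mentions.
```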
Standardization in software conversion of (ROM) estimating
NASA Technical Reports Server (NTRS)
Roat, G. H.
1984-01-01
Technical problems and their solutions comprise by far the majority of work involved in space simulation engineering. Fixed price contracts with schedule award fees are becoming more and more prevalent. Accurate estimation of these jobs is critical to maintain costs within limits and to predict realistic contract schedule dates. Computerized estimating may hold the answer to these new problems, though up to now computerized estimating has been complex, expensive, and geared to the business world, not to technical people. The objective of this effort was to provide a simple program on a desk top computer capable of providing a Rough Order of Magnitude (ROM) estimate in a short time. This program is not intended to provide a highly detailed breakdown of costs to a customer, but to provide a number which can be used as a rough estimate on short notice. With more debugging and fine tuning, a more detailed estimate can be made.
Aguirre-von-Wobeser, Eneas; Eguiarte, Luis E; Souza, Valeria; Soberón-Chávez, Gloria
2015-01-01
Many strains of bacteria produce antagonistic substances that restrain the growth of others, and potentially give them a competitive advantage. These substances are commonly released to the surrounding environment, involving metabolic costs in terms of energy and nutrients. The rate at which these molecules need to be produced to maintain a certain amount of them close to the producing cell before they are diluted into the environment has not been explored so far. To understand the potential cost of production of antagonistic substances in water environments, we used two different theoretical approaches. Using a probabilistic model, we determined the rate at which a cell needs to produce individual molecules in order to keep on average a single molecule in its vicinity at all times. For this minimum protection, a cell would need to invest 3.92 × 10^(-22) kg s^(-1) of organic matter, which is 9 orders of magnitude lower than the estimated expense for growth. Next, we used a continuous model, based on Fick's laws, to explore the production rate needed to sustain minimum inhibitory concentrations around a cell, which would provide much more protection from competitors. In this scenario, cells would need to invest 1.20 × 10^(-11) kg s^(-1), which is 2 orders of magnitude higher than the estimated expense for growth, and thus not sustainable. We hypothesize that the production of antimicrobial compounds by bacteria in aquatic environments lies between these two extremes.
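The continuous (Fick's law) scenario reduces to a standard result: the steady-state release rate needed to hold concentration C at the surface of a sphere of radius a is Q = 4πDaC. A back-of-envelope sketch with assumed values, not the paper's exact inputs:

```python
import math

# Steady-state release rate to maintain concentration C at the surface of
# a sphere of radius a in open water: Q = 4*pi*D*a*C. All values assumed.
D = 5e-10        # diffusivity of a small molecule in water, m^2/s (assumed)
a = 1e-6         # cell radius, m (assumed)
C = 1e-3         # target surface concentration, kg/m^3 (~an MIC, assumed)
Q = 4 * math.pi * D * a * C
print(f"{Q:.2e} kg/s")   # ~6e-18 kg/s with these assumed inputs; the paper's
# much larger figure reflects its own choices of MIC, geometry, and distances.
```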
Kowalski, Amanda
2016-01-02
Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member's injury to induce variation in an individual's own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from -0.76 to -1.49, which are an order of magnitude larger than previous estimates.
Satellite servicing mission preliminary cost estimation model
NASA Technical Reports Server (NTRS)
1987-01-01
The cost model presented is a preliminary methodology for determining a rough order-of-magnitude cost for implementing a satellite servicing mission. Mission implementation, in this context, encompasses all activities associated with mission design and planning, including both flight and ground crew training and systems integration (payload processing) of servicing hardware with the Shuttle. A basic assumption made in developing this cost model is that a generic set of servicing hardware was developed and flight tested, is inventoried, and is maintained by NASA. This implies that all hardware physical and functional interfaces are well known and therefore recurring CITE testing is not required. The development of the cost model algorithms and examples of their use are discussed.
Kim, David D; Basu, Anirban
2016-01-01
The prevalence of adult obesity exceeds 30% in the United States, posing a significant public health concern as well as a substantial financial burden. Although the impact of obesity on medical spending is undeniably significant, the estimated magnitude of the cost of obesity has varied considerably, perhaps driven by different study methodologies. To document variations in study design and methodology in the existing literature and to understand the impact of those variations on the estimated costs of obesity, we conducted a systematic review of the twelve recently published articles that reported costs of obesity and performed a meta-analysis to generate a pooled estimate across those studies. Also, we performed an original analysis to understand the impact of different age groups, statistical models, and confounder adjustment on the magnitude of estimated costs using the nationally representative Medical Expenditure Panel Surveys from 2008-2010. We found significant variations among cost estimates in the existing literature. The meta-analysis found that the annual medical spending attributable to an obese individual was $1901 ($1239-$2582) in 2014 USD, accounting for $149.4 billion at the national level. The two most significant drivers of variability in the cost estimates were age groups and adjustment for obesity-related comorbid conditions. It is important to acknowledge variations in the magnitude of the medical cost of obesity driven by different study design and methodology. Researchers and policy-makers need to be cautious in determining appropriate cost estimates according to their scientific and political questions.
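The pooled estimate in such a meta-analysis is typically an inverse-variance weighted mean across studies. A minimal sketch with invented per-study values:

```python
import numpy as np

# Fixed-effect pooled estimate from per-study cost estimates.
# Study values and standard errors below are invented placeholders.
est = np.array([1400., 2100., 1750., 2600., 1900.])   # $/person/yr
se = np.array([300., 450., 250., 600., 350.])
w = 1 / se**2                                          # inverse-variance weights
pooled = np.sum(w * est) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
print(round(pooled), round(pooled - 1.96 * pooled_se),
      round(pooled + 1.96 * pooled_se))   # estimate with a 95% interval
```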
Accurate characterisation of hole size and location by projected fringe profilometry
NASA Astrophysics Data System (ADS)
Wu, Yuxiang; Dantanarayana, Harshana G.; Yue, Huimin; Huntley, Jonathan M.
2018-06-01
The ability to accurately estimate the location and geometry of holes is often required in the field of quality control and automated assembly. Projected fringe profilometry is a potentially attractive technique on account of being non-contacting, of lower cost, and orders of magnitude faster than the traditional coordinate measuring machine. However, we demonstrate in this paper that fringe projection is susceptible to significant (hundreds of µm) measurement artefacts in the neighbourhood of hole edges, which give rise to errors of a similar magnitude in the estimated hole geometry. A mechanism for the phenomenon is identified based on the finite size of the imaging system’s point spread function and the resulting bias produced near to sample discontinuities in geometry and reflectivity. A mathematical model is proposed, from which a post-processing compensation algorithm is developed to suppress such errors around the holes. The algorithm includes a robust and accurate sub-pixel edge detection method based on a Fourier descriptor of the hole contour. The proposed algorithm was found to reduce significantly the measurement artefacts near the hole edges. As a result, the errors in estimated hole radius were reduced by up to one order of magnitude, to a few tens of µm for hole radii in the range 2–15 mm, compared to those from the uncompensated measurements.
Alternatives Analysis Amchitka Island Mud Pit Cap Repair, Amchitka, Alaska January 2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darr, Paul S.
2016-01-01
The U.S. Department of Energy (DOE) Office of Legacy Management (LM) manages the Nevada Offsites program, which includes a series of reclaimed drilling mud impoundments on Amchitka Island, Alaska (Figure 1). Navarro Research and Engineering, Inc. is the Legacy Management Support contractor (the Contractor) for LM. The Contractor has procured Tetra Tech, Inc. to provide engineering support to the Amchitka mud pit reclamation project. The mud pit caps were damaged during a 7.9-magnitude earthquake that occurred in 2014. The goals of the current project are to investigate conditions at the mud pit impoundments, identify feasible alternatives for repair of the cover systems and the contents, and estimate relative costs of repair alternatives. This report presents descriptions of the sites and past investigations, existing conditions, summaries of various repair/mitigation alternatives, and direct, unburdened, order-of-magnitude (-15% to +50%) associated costs.
Cost, Energy, and Environmental Impact of Automated Electric Taxi Fleets in Manhattan.
Bauer, Gordon S; Greenblatt, Jeffery B; Gerke, Brian F
2018-04-17
Shared automated electric vehicles (SAEVs) hold great promise for improving transportation access in urban centers while drastically reducing transportation-related energy consumption and air pollution. Using taxi-trip data from New York City, we develop an agent-based model to predict the battery range and charging infrastructure requirements of a fleet of SAEVs operating on Manhattan Island. We also develop a model to estimate the cost and environmental impact of providing service and perform extensive sensitivity analysis to test the robustness of our predictions. We estimate that costs will be lowest with a battery range of 50-90 mi, with either 66 chargers per square mile rated at 11 kW, or 44 chargers per square mile rated at 22 kW. We estimate that the cost of service provided by such an SAEV fleet will be $0.29-$0.61 per revenue mile, an order of magnitude lower than the cost of service of present-day Manhattan taxis and $0.05-$0.08/mi lower than that of an automated fleet composed of any currently available hybrid or internal combustion engine vehicle (ICEV). We estimate that such an SAEV fleet drawing power from the current NYC power grid would reduce GHG emissions by 73% and energy consumption by 58% compared to an automated fleet of ICEVs.
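The cost-per-revenue-mile figure decomposes into annualized vehicle capital, charging infrastructure, and energy. A toy decomposition with assumed inputs (the paper derives these quantities from its agent-based model):

```python
# Toy cost-per-revenue-mile decomposition for a shared automated EV fleet.
# Every number below is an assumption for illustration only.
fleet, miles_per_veh = 7000, 40_000         # vehicles, revenue miles/veh/yr
capital_per_veh = 8000                      # annualized vehicle cost, $
chargers, charger_cost = 2000, 1500         # units, annualized $ each
kwh_per_mile, price_kwh = 0.30, 0.12        # energy use and electricity price
deadhead = 1.15                             # total miles per revenue mile

rev_miles = fleet * miles_per_veh
total = (fleet * capital_per_veh + chargers * charger_cost
         + rev_miles * deadhead * kwh_per_mile * price_kwh)
print(f"${total / rev_miles:.2f} per revenue mile")  # ~$0.25 with these inputs
```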
Keeler, Bonnie L.; Gourevitch, Jesse D.; Polasky, Stephen; Isbell, Forest; Tessum, Chris W.; Hill, Jason D.; Marshall, Julian D.
2016-01-01
Despite growing recognition of the negative externalities associated with reactive nitrogen (N), the damage costs of N to air, water, and climate remain largely unquantified. We propose a comprehensive approach for estimating the social cost of nitrogen (SCN), defined as the present value of the monetary damages caused by an incremental increase in N. This framework advances N accounting by considering how each form of N causes damages at specific locations as it cascades through the environment. We apply the approach to an empirical example that estimates the SCN for N applied as fertilizer. We track impacts of N through its transformation into atmospheric and aquatic pools and estimate the distribution of associated costs to affected populations. Our results confirm that there is no uniform SCN. Instead, changes in N management will result in different N-related costs depending on where N moves and the location, vulnerability, and preferences of populations affected by N. For example, we found that the SCN per kilogram of N fertilizer applied in Minnesota ranges over several orders of magnitude, from less than $0.001/kg N to greater than $10/kg N, illustrating the importance of considering the site, the form of N, and end points of interest rather than assuming a uniform cost for damages. Our approach for estimating the SCN demonstrates the potential of integrated biophysical and economic models to illuminate the costs and benefits of N and inform more strategic and efficient N management. PMID:27713926
Categories of Large Numbers in Line Estimation
ERIC Educational Resources Information Center
Landy, David; Charlesworth, Arthur; Ottmar, Erin
2017-01-01
How do people stretch their understanding of magnitude from the experiential range to the very large quantities and ranges important in science, geopolitics, and mathematics? This paper empirically evaluates how and whether people make use of numerical categories when estimating relative magnitudes of numbers across many orders of magnitude. We…
Econometric estimation of country-specific hospital costs.
Adam, Taghreed; Evans, David B; Murray, Christopher JL
2003-02-26
Information on the unit cost of inpatient and outpatient care is an essential element for costing, budgeting and economic-evaluation exercises. Many countries lack reliable estimates, however. WHO has recently undertaken an extensive effort to collect and collate data on the unit cost of hospitals and health centres from as many countries as possible; so far, data have been assembled from 49 countries, for various years during the period 1973-2000. The database covers a total of 2173 country-years of observations. Large gaps remain, however, particularly for developing countries. Although the long-term solution is that all countries perform their own costing studies, the question arises whether it is possible to predict unit costs for different countries in a standardized way for short-term use. The purpose of the work described in this paper, a modelling exercise, was to use the data collected across countries to predict unit costs in countries for which data are not yet available, with the appropriate uncertainty intervals. The model presented here forms part of a series of models used to estimate unit costs for the WHO-CHOICE project. The methods and the results of the model, however, may be used to predict a number of different types of country-specific unit costs, depending on the purpose of the exercise. They may be used, for instance, to estimate the costs per bed-day at different capacity levels; the "hotel" component of cost per bed-day; or unit costs net of particular components such as drugs. In addition to reporting estimates for selected countries, the paper shows that unit costs of hospitals vary within countries, sometimes by an order of magnitude. Basing cost-effectiveness studies or budgeting exercises on the results of a study of a single facility, or even a small group of facilities, is likely to be misleading.
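Models of this kind typically regress log unit cost on country covariates and report prediction intervals. A minimal sketch with fabricated data:

```python
import numpy as np

# Sketch of a cross-country prediction model for hospital unit costs:
# log(cost per bed-day) regressed on log GDP per capita and hospital level.
# All data below are fabricated for illustration.
log_gdp = np.log([1200., 3500., 8000., 22000., 41000.])
level = np.array([1., 1., 2., 2., 3.])                  # hospital level proxy
log_cost = np.log([4.0, 9.0, 25.0, 70.0, 160.0])        # $ per bed-day

X = np.column_stack([np.ones(5), log_gdp, level])
beta, *_ = np.linalg.lstsq(X, log_cost, rcond=None)
resid_sd = (log_cost - X @ beta).std(ddof=1)

x_new = np.array([1.0, np.log(5000.0), 2.0])            # GDP/cap $5,000, level 2
pred = x_new @ beta
print(f"predicted ${np.exp(pred):.0f}/bed-day "
      f"(~${np.exp(pred - 2*resid_sd):.0f}-${np.exp(pred + 2*resid_sd):.0f})")
```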
Ecosystem services impacts associated with environmental ...
Nitrogen release to the environment from human activities can have important and costly impacts on human health, recreation, transportation, fisheries, and ecosystem health. Recent efforts to quantify these damage costs have identified annual damages associated with reactive nitrogen release in the EU and US in the hundreds of billions of US dollars (USD). The general approach used to estimate these damages is derived from a variety of methods for estimating economic damages, for example, impacts to human respiratory health in terms of hospital visits and mortality, willingness to pay to improve a water body, and costs to replace or treat drinking water systems affected by nitrate or cyanotoxin contamination. These values are then extrapolated to other areas to develop the damage cost estimates, which are probably best seen as potential damage costs, particularly for aquatic ecosystems. We seek to provide an additional verification of these potential damages using data assembled by the US EPA for case studies of measured costs of nutrient impacts across the US from 2000-2012. We compare the spatial distribution and magnitude of these measured costs with the spatial distribution and magnitude of the damage costs estimated for HUC8 watershed units across the US by Sobota et al. (2015). We anticipate that this analysis will provide a ground truthing of existing damage cost estimates, and continue to support the incorporation of cost and benefit information…
Cost savings through implementation of an integrated home-based record: a case study in Vietnam.
Aiga, Hirotsugu; Pham Huy, Tuan Kiet; Nguyen, Vinh Duc
2018-03-01
In Vietnam, there are three major home-based records (HBRs) for maternal and child health (MCH) that have already been nationally scaled up: the Maternal and Child Health Handbook (MCH Handbook), the Child Vaccination Handbook, and the Child Growth Monitoring Chart. The MCH Handbook covers all the essential recording items that are included in the other two. This overlapping of recording items between the HBRs is likely to result in inefficient use of both financial and human resources. This study is aimed at estimating the magnitude of cost savings that could be realized by implementing the MCH Handbook exclusively and terminating the other two. Secondary data on HBR production and distribution costs and health workers' opportunity costs were collected and analyzed. By multiplying the unit costs by their respective quantity multipliers, the recurrent costs of operating the three HBRs were estimated. The magnitude of likely cost savings was then estimated by calculating the recurrent costs overlapping between the three HBRs. It was estimated that implementing the MCH Handbook exclusively would lead to cost savings of US$3.01 million per annum. The amount estimated is a minimum cost saving because only recurrent cost elements (HBR production and distribution costs and health workers' opportunity costs) were incorporated into the estimation. Further indirect cost savings could be expected through reductions in health expenditures, as the use of the MCH Handbook would contribute to prevention of maternal and child illnesses by increasing antenatal care visits and breastfeeding practices. To avoid wasting financial and human resources, the MCH Handbook should be implemented exclusively, abolishing the other two HBRs. This study is globally an initial attempt to estimate the cost savings achievable by avoiding overlapping operations between multiple HBRs for MCH.
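The estimation logic is simple arithmetic: recurrent cost = unit cost × quantity, and the saving is the recurrent cost of the two redundant records. A sketch with invented figures:

```python
# Overlap-savings arithmetic: recurrent cost of each home-based record is
# unit cost x copies plus staff opportunity cost; the saving is the recurrent
# cost of the two records made redundant. All figures are invented.
records = {
    "MCH Handbook":         {"unit_cost": 0.80, "copies": 1_500_000, "staff_time": 600_000},
    "Vaccination Handbook": {"unit_cost": 0.50, "copies": 1_500_000, "staff_time": 400_000},
    "Growth Chart":         {"unit_cost": 0.30, "copies": 1_500_000, "staff_time": 300_000},
}

def recurrent(r):
    return r["unit_cost"] * r["copies"] + r["staff_time"]

savings = sum(recurrent(records[k]) for k in ("Vaccination Handbook", "Growth Chart"))
print(f"${savings:,.0f} per year")   # the paper's actual estimate was ~US$3.01M
```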
Landslide Risk: Economic Valuation in The North-Eastern Zone of Medellin City
NASA Astrophysics Data System (ADS)
Vega, Johnny Alexander; Hidalgo, César Augusto; Johana Marín, Nini
2017-10-01
Natural disasters of a geodynamic nature can cause enormous economic and human losses. The economic costs of a landslide disaster include relocation of communities and physical repair of urban infrastructure. However, when performing a quantitative risk analysis, the indirect economic consequences of such an event are generally not taken into account. A probabilistic methodology is proposed that considers several scenarios of hazard and vulnerability to measure the magnitude of the landslide and to quantify the economic costs. With this approach, it is possible to carry out a quantitative evaluation of landslide risk, allowing the calculation of economic losses before a potential disaster in an objective, standardized and reproducible way, taking into account the uncertainty of the building costs in the study zone. The possibility of comparing different scenarios facilitates the urban planning process, the optimization of interventions to reduce risk to acceptable levels, and an assessment of economic losses according to the magnitude of the damage. For the development and explanation of the proposed methodology, a simple case study is presented, located in the north-eastern zone of the city of Medellín. This area has particular geomorphological characteristics, and it is also characterized by the presence of several buildings in bad structural condition. The proposed methodology yields an estimate of the probable economic losses from earthquake-induced landslides, taking into account the uncertainty of the building costs in the study zone. The estimate shows that structural intervention of the buildings reduces the total landslide risk by about 21%.
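The probabilistic core of such an approach is an expected-loss calculation over hazard scenarios, with the exposed building value treated as uncertain. An illustrative sketch:

```python
import random

# Expected annual loss as a probability-weighted sum over hazard scenarios,
# with the exposed building value sampled to reflect cost uncertainty.
# All inputs are illustrative assumptions.
scenarios = [(0.10, 0.05), (0.02, 0.25), (0.005, 0.60)]  # (annual prob, damage ratio)
random.seed(0)
losses = []
for _ in range(10_000):
    value = random.uniform(80e6, 120e6)          # uncertain exposed value, $
    losses.append(sum(p * d * value for p, d in scenarios))
mean_eal = sum(losses) / len(losses)
print(f"expected annual loss ~ ${mean_eal:,.0f}")  # ~$1.3M with these inputs
```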
FY 2001 and Beyond Program Plan
NASA Technical Reports Server (NTRS)
Bowles, Dave
2000-01-01
The scope of the project summarized in this viewgraph presentation is to develop and demonstrate third-generation airframe technologies that provide significant reductions in the cost of space transportation systems while dramatically improving their safety and operability. The Earth-to-orbit goal is to conduct research, technology development, and demonstrations that will enable US industry to increase safety by four orders of magnitude (loss of vehicle/crew probability less than 1 in 1,000,000 missions) and reduce costs by two orders of magnitude within 25 years.
Pak, Theodore R.; Chacko, Kieran; O’Donnell, Timothy; Huprikar, Shirish; van Bakel, Harm; Kasarskis, Andrew; Scott, Erick R.
2018-01-01
Background: Reported per-patient costs of Clostridium difficile infection (CDI) vary by two orders of magnitude among different hospitals, implying that infection control officers need precise, local analyses to guide rational decision-making between interventions. Objective: We sought to comprehensively estimate changes in length of stay (LOS) attributable to CDI at one urban tertiary-care facility using only data automatically extractable from the electronic medical record (EMR). Methods: We performed a retrospective cohort study of 171,938 visits spanning a 7-year period. 23,968 variables were extracted from EMR data recorded within 24 hours of admission to train elastic net regularized logistic regression models for propensity score matching. To address time-dependent bias (reverse causation), we separately stratified comparisons by time-of-infection and fit multistate models. Results: The estimated difference in median LOS for propensity-matched cohorts varied from 3.1 days (95% CI, 2.2–3.9) to 10.1 days (95% CI, 7.3–12.2) depending on the case definition; however, dependency of the estimate on time-to-infection was observed. Stratification by time to first positive toxin assay, excluding probable community-acquired infections, showed a minimum excess LOS of 3.1 days (95% CI, 1.7–4.4). Under the same case definition, the multistate model averaged an excess LOS of 3.3 days (95% CI, 2.6–4.0). Conclusions: Two independent time-to-infection adjusted methods converged on similar excess LOS estimates. Changes in LOS can be extrapolated to marginal dollar costs by multiplying by the average cost of an inpatient-day. Infection control officers can leverage automatically extractable EMR data to estimate costs of CDI at their own institution. PMID:29103378
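A minimal version of the propensity-matching step (ordinary logistic regression on a handful of synthetic covariates, rather than the paper's elastic net over 23,968 EMR variables) looks like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Propensity-score matching sketch on synthetic data: model P(CDI | admission
# covariates), then compare LOS between each case and its nearest-propensity
# control. The true simulated excess LOS is 3 days.
rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 5))                            # admission covariates
cdi = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 2))))  # sicker -> more CDI
los = 5 + 2 * X[:, 0] + 3 * cdi + rng.exponential(2, n)  # length of stay, days

ps = LogisticRegression().fit(X, cdi).predict_proba(X)[:, 1]
cases = np.where(cdi == 1)[0]
controls = np.where(cdi == 0)[0]
matched = [controls[np.argmin(np.abs(ps[controls] - ps[i]))] for i in cases]
print(round(np.mean(los[cases] - los[matched]), 2))    # ~3 days excess LOS
```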
Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1981-01-01
A parametric software cost estimation model prepared for Jet Propulsion Laboratory (JPL) Deep Space Network (DSN) Data System implementation tasks is described. The resource estimation model modifies and combines a number of existing models. The model calibrates the task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software life-cycle statistics.
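Parametric models of this family typically take a size-driven power law scaled by multiplicative adjustment factors, one per questionnaire answer. A sketch with invented constants (not the JPL calibration):

```python
# COCOMO-style parametric effort model: a size power law times the product
# of adjustment multipliers derived from questionnaire answers.
# Constants a, b and all multipliers are invented for illustration.
def effort_person_months(ksloc, multipliers, a=2.8, b=1.05):
    m = 1.0
    for factor in multipliers.values():   # e.g. answers to ~50 questions
        m *= factor
    return a * ksloc ** b * m

answers = {"experienced_team": 0.85, "novel_domain": 1.20, "tight_schedule": 1.10}
print(round(effort_person_months(32, answers), 1))  # ~120 person-months
```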
Boulanger, Guillaume; Bayeux, Thomas; Mandin, Corinne; Kirchner, Séverine; Vergriette, Benoit; Pernelet-Joly, Valérie; Kopp, Pierre
2017-07-01
An evaluation of the socio-economic costs of indoor air pollution can facilitate the development of appropriate public policies. For the first time in France, such an evaluation was conducted for six selected pollutants: benzene, trichloroethylene, radon, carbon monoxide, particles (PM 2.5 fraction), and environmental tobacco smoke (ETS). The health impacts of indoor exposure were either already available in published works or were calculated. For these calculations, two approaches were followed depending on the available data: the first followed the principles of quantitative health risk assessment, and the second was based on concepts and methods related to health impact assessment. For both approaches, toxicological data and indoor concentrations related to each target pollutant were used. External costs resulting from mortality, morbidity (life quality loss) and production losses attributable to these health impacts were assessed. In addition, the monetary costs for the public were determined. Indoor pollution associated with the selected pollutants was estimated to have cost approximately €20 billion in France in 2004. Particles contributed the most to the total cost (75%), followed by radon. Premature death and the costs of the quality of life loss accounted for approximately 90% of the total cost. Despite the use of different methods and data, similar evaluations previously conducted in other countries yielded figures within the same order of magnitude.
Magnitude Estimation for the 2011 Tohoku-Oki Earthquake Based on Ground Motion Prediction Equations
NASA Astrophysics Data System (ADS)
Eshaghi, Attieh; Tiampo, Kristy F.; Ghofrani, Hadi; Atkinson, Gail M.
2015-08-01
This study investigates whether real-time strong ground motion data from seismic stations could have been used to provide an accurate estimate of the magnitude of the 2011 Tohoku-Oki earthquake in Japan. Ultimately, such an estimate could be used as input data for a tsunami forecast and would lead to more robust earthquake and tsunami early warning. We collected the strong motion accelerograms recorded by borehole and free-field (surface) Kiban Kyoshin network stations that registered this mega-thrust earthquake in order to perform an off-line test to estimate the magnitude based on ground motion prediction equations (GMPEs). GMPEs for peak ground acceleration and peak ground velocity (PGV) from a previous study by Eshaghi et al. in the Bulletin of the Seismological Society of America 103 (2013), derived using events with moment magnitude (M) ≥ 5.0, 1998-2010, were used to estimate the magnitude of this event. We developed new GMPEs using a more complete database (1998-2011), which added only 1 year but approximately twice as much data to the initial catalog (including important large events), to improve the determination of attenuation parameters and magnitude scaling. These new GMPEs were used to estimate the magnitude of the Tohoku-Oki event. The estimates obtained were compared with real-time magnitude estimates provided by the existing earthquake early warning system in Japan. Unlike the current operational magnitude estimation methods, our method did not saturate and can provide robust estimates of moment magnitude within ~100 s after earthquake onset for both catalogs. It was found that correcting for the average shear-wave velocity in the uppermost 30 m (VS30) improved the accuracy of magnitude estimates from surface recordings, particularly for magnitude estimates from PGV (Mpgv). The new GMPEs also were used to estimate the magnitude of all earthquakes in the new catalog with at least 20 records. Results show that the magnitude estimate from PGV values using borehole recordings had the smallest standard deviation among the estimated magnitudes and produced more stable and robust magnitude estimates. This suggests that incorporating borehole strong ground-motion records immediately available after the occurrence of large earthquakes can provide robust and accurate magnitude estimation.
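Conceptually, each station's observation can be inverted through the GMPE to yield a magnitude estimate, and the network estimate is their average. A sketch with an assumed simplified GMPE form and invented coefficients (the actual GMPEs have a more detailed functional form):

```python
import numpy as np

# Assumed simplified GMPE: log10(PGV) = c0 + c1*M + c2*log10(R), inverted
# per station as M = (log10(PGV) - c0 - c2*log10(R)) / c1. All values invented.
c0, c1, c2 = -2.5, 0.7, -1.3                    # assumed coefficients
pgv = np.array([3.2, 1.1, 0.6, 0.25])           # observed PGV, cm/s
R = np.array([80., 150., 220., 400.])           # station distances, km
M_i = (np.log10(pgv) - c0 - c2 * np.log10(R)) / c1
print(M_i.round(2), round(M_i.mean(), 2))       # per-station and network estimate
```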
Correlation, Cost Risk, and Geometry
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1992-01-01
The geometric viewpoint identifies the choice of a correlation matrix for the simulation of cost risk with the pairwise choice of data vectors corresponding to the parameters used to obtain cost risk. The correlation coefficient is the cosine of the angle between the data vectors after translation to an origin at the mean and normalization for magnitude. Thus correlation is equivalent to expressing the data in terms of a non-orthogonal basis. To understand the many resulting phenomena requires the use of the tensor concept of raising the index to transform the measured and observed covariant components into contravariant components before vector addition can be applied. The geometric viewpoint also demonstrates that correlation and covariance are geometric properties, as opposed to purely statistical properties, of the variates. Thus, variates from different distributions may be correlated, as desired, after selection from independent distributions. By determining the principal components of the correlation matrix, variates with the desired mean, magnitude, and correlation can be generated through linear transforms which include the eigenvalues and the eigenvectors of the correlation matrix. The conversion of the data to a non-orthogonal basis uses a compound linear transformation which distorts or stretches the data space. Hence, the correlated data does not have the same properties as the uncorrelated data used to generate it. This phenomenon is responsible for seemingly strange observations, such as the fact that the marginal distributions of the correlated data can be quite different from the distributions used to generate the data. The joint effect of statistical distributions and correlation remains a fertile area for further research. In terms of application to cost estimating, the geometric approach demonstrates that the estimator must have data and must understand that data in order to properly choose the correlation matrix appropriate for a given estimate. There is a general feeling among employers and managers that the field of cost requires little technical or mathematical background. Contrary to that opinion, this paper demonstrates that a background in mathematics equivalent to that needed for typical engineering and scientific disciplines at the masters or doctorate level is appropriate within the field of cost risk.
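The generation procedure described in the abstract, using the eigenvalues and eigenvectors of the correlation matrix to transform independent variates, can be written in a few lines:

```python
import numpy as np

# Generate correlated variates from independent draws via the
# eigendecomposition (principal components) of the target correlation matrix.
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.5],
                 [0.3, 0.5, 1.0]])
w, V = np.linalg.eigh(corr)              # eigenvalues and eigenvectors
L = V @ np.diag(np.sqrt(w))              # linear transform: corr = L @ L.T

rng = np.random.default_rng(3)
z = rng.standard_normal((100_000, 3))    # independent draws
x = z @ L.T                              # correlated draws
print(np.corrcoef(x, rowvar=False).round(2))
# Note: applying this transform to non-normal marginals distorts them --
# the phenomenon the abstract warns about.
```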
Costs of Juvenile Violence: Policy Implications.
ERIC Educational Resources Information Center
Miller, Ted; Fisher, Deborah A.; Cohen, Mark A.
2001-01-01
Investigated the magnitude of juvenile violence in Pennsylvania in terms of victimization and perpetration. Used archival data on violent crimes in Pennsylvania during 1993 to develop cost estimates reflecting the costs incurred by society for both victims and perpetrators. Overall, violence against children and adolescents proved to be a much…
Changes in Work Habits of Lifeguards in Relation to Florida Red Tide.
Nierenberg, Kate; Kirner, Karen; Hoagland, Porter; Ullmann, Steven; Leblanc, William G; Kirkpatrick, Gary; Fleming, Lora E; Kirkpatrick, Barbara
2010-05-01
The marine dinoflagellate, Karenia brevis, is responsible for Florida red tides. Brevetoxins, the neurotoxins produced by K. brevis blooms, can cause fish kills, contaminate shellfish, and lead to respiratory illness in humans. Although several studies have assessed different economic impacts from Florida red tide blooms, no studies to date have considered the impact on beach lifeguard work performance. Sarasota County experiences frequent Florida red tides and staffs lifeguards at its beaches 365 days a year. This study examined lifeguard attendance records during the time periods of March 1 to September 30 in 2004 (no bloom) and March 1 to September 30 in 2005 (bloom). The lifeguard attendance data demonstrated statistically significant absenteeism during a Florida red tide bloom. The potential economic costs resulting from red tide blooms comprised both lifeguard absenteeism and presenteeism. Our estimate of the costs of absenteeism due to the 2005 red tide in Sarasota County is about $3,000. On average, the capitalized costs of lifeguard absenteeism in Sarasota County may be on the order of $100,000 at Sarasota County beaches alone. When surveyed, lifeguards reported not only that they experienced adverse health effects of exposure to Florida red tide but also that their attentiveness and abilities to take preventative actions decrease when they worked during a bloom, implying presenteeism effects. The costs of presenteeism, which imply increased risks to beachgoers, arguably could exceed those of absenteeism by an order of magnitude. Due to the lack of data, however, we are unable to provide credible estimates of the costs of presenteeism or the potential increased risks to bathers.
Cost Estimate for Molybdenum and Tantalum Refractory Metal Alloy Flow Circuit Concepts
NASA Technical Reports Server (NTRS)
Hickman, Robert R.; Martin, James J.; Schmidt, George R.; Godfroy, Thomas J.; Bryhan, A.J.
2010-01-01
The Early Flight Fission-Test Facilities (EFF-TF) team at NASA Marshall Space Flight Center (MSFC) has been tasked by the Naval Reactors Prime Contract Team (NRPCT) to provide a rough order of magnitude cost and delivery estimate for a refractory metal-based lithium (Li) flow circuit. The design is based on the stainless steel Li flow circuit that is currently being assembled for an NRPCT task underway at the EFF-TF. While geometrically the flow circuit is not representative of a final flight prototype, knowledge has been gained to quantify (time and cost) the materials, manufacturing, fabrication, assembly, and operations needed to produce a testable configuration. This Technical Memorandum (TM) also identifies the following key issues that need to be addressed by the fabrication process: alloy selection and forming, cost and availability, welding, bending, machining, assembly, and instrumentation. Several candidate materials were identified by NRPCT, including the molybdenum (Mo) alloy Mo-47.5%Re, the tantalum (Ta) alloys T-111 and ASTAR-811C, and the niobium (Nb) alloy Nb-1%Zr. This TM is focused only on the Mo and Ta alloys, since they are of higher concern to the ongoing effort. The initial estimate to complete a Mo-47.5%Re system ready for testing is approximately $9,000K over a period of 30 months. The initial estimate to complete a T-111 or ASTAR-811C system ready for testing is approximately $12,000K over a period of 36 months.
Cost estimate for a proposed GDF Suez LNG testing program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchat, Thomas K.; Brady, Patrick Dennis; Jernigan, Dann A.
2014-02-01
At the request of GDF Suez, a Rough Order of Magnitude (ROM) cost estimate was prepared for the design, construction, testing, and data analysis for an experimental series of large-scale (Liquefied Natural Gas) LNG spills on land and water that would result in the largest pool fires and vapor dispersion events ever conducted. Due to the expected cost of this large, multi-year program, the authors utilized Sandia's structured cost estimating methodology. This methodology insures that the efforts identified can be performed for the cost proposed at a plus or minus 30 percent confidence. The scale of the LNG spill, fire, and vapor dispersion tests proposed by GDF could produce hazard distances and testing safety issues that need to be fully explored. Based on our evaluations, Sandia can utilize much of our existing fire testing infrastructure for the large fire tests and some small dispersion tests (with some modifications) in Albuquerque, but we propose to develop a new dispersion testing site at our remote test area in Nevada because of the large hazard distances. While this might impact some testing logistics, the safety aspects warrant this approach. In addition, we have included a proposal to study cryogenic liquid spills on water and subsequent vaporization in the presence of waves. Sandia is working with DOE on applications that provide infrastructure pertinent to wave production. We present an approach to conduct repeatable wave/spill interaction testing that could utilize such infrastructure.
Fast 3D shape measurements with reduced motion artifacts
NASA Astrophysics Data System (ADS)
Feng, Shijie; Zuo, Chao; Chen, Qian; Gu, Guohua
2017-10-01
Fringe projection is an extensively used technique for high-speed three-dimensional (3D) measurements of dynamic objects. However, motion often leads to artifacts in the reconstructions due to the sequential recording of the set of patterns. In order to reduce the adverse impact of movement, we present a novel high-speed 3D scanning technique combining fringe projection and stereo. Firstly, a promising measuring speed is achieved by modifying the traditional aperiodic sinusoidal patterns so that the fringe images can be projected at kilohertz rates with the widely used defocusing strategy. Next, a temporal intensity tracing algorithm is developed to further alleviate the influence of motion by accurately tracing the ideal intensity for stereo matching. Then, a combined cost measure is suggested to robustly estimate the cost for each pixel. In comparison with the traditional method, where the effect of motion is not considered, experimental results show that the reconstruction accuracy for dynamic objects can be improved by an order of magnitude with the proposed method.
NASA Astrophysics Data System (ADS)
Daniell, James; Pomonis, Antonios; Gunasekera, Rashmin; Ishizawa, Oscar; Gaspari, Maria; Lu, Xijie; Aubrecht, Christoph; Ungar, Joachim
2017-04-01
In order to quantify disaster risk, there is a need to determine consistent and reliable economic values for the built assets exposed to natural hazards at national or sub-national level. The value of the built stock of a city or a country is critical for risk modelling applications, as it allows the upper bound on potential losses to be established. Under the World Bank probabilistic disaster risk assessment Country Disaster Risk Profiles (CDRP) Program and rapid post-disaster loss analyses in CATDAT, key methodologies have been developed that quantify the asset exposure of a country. In this study, we assess two complementary methods for determining the value of the building stock: capital investment data versus aggregated ground-up values based on built area and unit construction costs. Different approaches to modelling exposure around the world have resulted in estimated values of built assets of some countries differing by order(s) of magnitude. Using the aforementioned methodology of comparing investment-based capital stock with bottom-up unit-construction-cost values per square meter of assets, a suitable range of capital stock estimates for built assets has been created. A blind test format was undertaken to compare the two types of approaches, top-down (investment) and bottom-up (construction cost per unit). In many cases, census data and demographic, engineering and construction cost data from previous years are key for the bottom-up calculations. Similarly, for the top-down investment approach, distributed GFCF (Gross Fixed Capital Formation) data are also required. Over the past few years, numerous studies have been undertaken through the World Bank Caribbean and Central America disaster risk assessment program adopting this methodology, initially developed by Gunasekera et al. (2015). The range of values of the building stock is tested for around 15 countries. In addition, three types of costs are discussed and the differences in methodologies assessed: reconstruction cost (building back to the standard required by building codes), replacement cost (gross capital stock), and book value (net capital stock, the depreciated value of assets). We then examine historical costs (reconstruction and replacement) and losses (book value) of natural disasters against this upper bound of capital stock in various locations to examine the impact of a reasonable capital stock estimate. It is found that some historic loss estimates in publications are not reasonable given the value of assets at the time of the event. This has applications for quantitative disaster risk assessment and development of country disaster risk profiles, economic analyses, and benchmarking upper loss limits of built assets damaged by natural hazards.
Evaluation of the public health impacts of traffic congestion: a health risk assessment.
Levy, Jonathan I; Buonocore, Jonathan J; von Stackelberg, Katherine
2010-10-27
Traffic congestion is a significant issue in urban areas in the United States and around the world. Previous analyses have estimated the economic costs of congestion, related to fuel and time wasted, but few have quantified the public health impacts or determined how these impacts compare in magnitude to the economic costs. Moreover, the relative magnitudes of economic and public health impacts of congestion would be expected to vary significantly across urban areas, as a function of road infrastructure, population density, and atmospheric conditions influencing pollutant formation, but this variability has not been explored. In this study, we evaluate the public health impacts of ambient exposures to fine particulate matter (PM2.5) concentrations associated with a business-as-usual scenario of predicted traffic congestion. We evaluate 83 individual urban areas using traffic demand models to estimate the degree of congestion in each area from 2000 to 2030. We link traffic volume and speed data with the MOBILE6 model to characterize emissions of PM2.5 and particle precursors attributable to congestion, and we use a source-receptor matrix to evaluate the impact of these emissions on ambient PM2.5 concentrations. Marginal concentration changes are related to a concentration-response function for mortality, with a value of statistical life approach used to monetize the impacts. We estimate that the monetized value of PM2.5-related mortality attributable to congestion in these 83 cities in 2000 was approximately $31 billion (2007 dollars), as compared with a value of time and fuel wasted of $60 billion. In future years, the economic impacts grow (to over $100 billion in 2030) while the public health impacts decrease to $13 billion in 2020 before increasing to $17 billion in 2030, given increasing population and congestion but lower emissions per vehicle. Across cities and years, the public health impacts range from more than an order of magnitude less to in excess of the economic impacts. Our analyses indicate that the public health impacts of congestion may be significant enough in magnitude, at least in some urban areas, to be considered in future evaluations of the benefits of policies to mitigate congestion.
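The damage-function chain (emissions to concentrations to mortality to dollars) multiplies through a handful of coefficients. A sketch in which every value is an illustrative assumption, not the study's inputs:

```python
# Impact-pathway sketch: congestion emissions -> concentration change
# (source-receptor coefficient) -> deaths (concentration-response) ->
# dollars (value of statistical life). All coefficients assumed.
emissions_tons = 1200.0        # congestion-attributable PM2.5, tons/yr
sr_coeff = 2.0e-4              # ug/m^3 per ton emitted (assumed)
population = 5e6
baseline_mortality = 0.008     # deaths per person-year
beta = 0.006                   # relative mortality increase per ug/m^3 (assumed)
vsl = 7.5e6                    # value of statistical life, $

d_conc = emissions_tons * sr_coeff
deaths = population * baseline_mortality * beta * d_conc
print(f"{deaths:.1f} deaths/yr, ${deaths * vsl / 1e6:.0f}M/yr")
```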
Costs and benefits of bicycling investments in Portland, Oregon.
Gotschi, Thomas
2011-01-01
Promoting bicycling has great potential to increase overall physical activity; however, significant uncertainty exists with regard to the amount and effectiveness of investment needed for infrastructure. The objective of this study is to assess how the costs of Portland's past and planned investments in bicycling relate to health and other benefits. Costs of investment plans are compared with 2 types of monetized health benefits: health care cost savings and value of statistical life savings. Levels of bicycling are estimated using past trends, future mode share goals, and a traffic demand model. By 2040, investments in the range of $138 to $605 million will result in health care cost savings of $388 to $594 million, fuel savings of $143 to $218 million, and savings in value of statistical lives of $7 to $12 billion. The benefit-cost ratios for health care and fuel savings are between 1.2 and 3.8 to 1, and an order of magnitude larger when the value of statistical lives is used. This first-of-its-kind cost-benefit analysis of investments in bicycling in a US city shows that such efforts are cost-effective, even when only a limited selection of benefits is considered.
Starship Sails Propelled by Cost-Optimized Directed Energy
NASA Astrophysics Data System (ADS)
Benford, J.
Microwave- and laser-propelled sails are a new class of spacecraft using photon acceleration. It is the only method of interstellar flight that has no physics issues. Laboratory demonstrations of the basic features of beam-driven propulsion, flight, stability ('beam-riding'), and induced spin have been completed in the last decade, primarily at microwave frequencies. The approach offers much lower cost probes after a substantial investment in the launcher. Engineering issues are being addressed by other applications: fusion (microwave, millimeter, and laser sources) and astronomy (large aperture antennas). There are many candidate sail materials: carbon nanotubes and microtrusses, beryllium, graphene, etc. For acceleration of a sail, what is the cost-optimum high power system? Here, cost is used to constrain design parameters to estimate system power, aperture, and elements of capital and operating cost. From general relations for cost-optimal transmitter aperture and power, system cost scales with kinetic energy and inversely with sail diameter and frequency. So optimal sails will be larger, lower in mass, and driven by higher frequency beams. Estimated costs include economies of scale. We present several starship point concepts. At today's prices, systems based on microwave, millimeter-wave, and laser technologies are of roughly equal cost: the frequency advantage of lasers is cancelled by the high cost of both the laser and the radiating optic. The cost of interstellar sailships is very high, driven by current costs for radiation sources, antennas, and especially electrical power. The high speeds necessary for fast interstellar missions make the operating cost exceed the capital cost. Such sailcraft will not be flown until the cost of electrical power in space is reduced by orders of magnitude below current levels.
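The stated scaling (system cost increasing with sail kinetic energy and decreasing with sail diameter and beam frequency) can be prototyped as a toy model. The normalization constant and all inputs below are placeholders chosen only to make the scaling visible, not values from the paper.

```python
# Toy model of the stated scaling: cost ~ kinetic energy / (sail diameter * frequency).
# k_cost is a hypothetical normalization constant, not a figure from the paper.

def sail_system_cost(sail_mass_kg, final_speed_ms, sail_diameter_m,
                     beam_frequency_hz, k_cost=1e-4):
    kinetic_energy_j = 0.5 * sail_mass_kg * final_speed_ms**2
    return k_cost * kinetic_energy_j / (sail_diameter_m * beam_frequency_hz)

# Moving from microwave (~10 GHz) to laser (~300 THz) frequencies drives the
# modeled cost down proportionally, as does enlarging the sail.
for freq, label in [(1e10, "microwave"), (3e14, "laser")]:
    cost = sail_system_cost(sail_mass_kg=1000.0, final_speed_ms=3e7,
                            sail_diameter_m=100.0, beam_frequency_hz=freq)
    print(f"{label}: relative cost {cost:.3g}")
```

Note that this toy model captures only the scaling relation; the paper's finding that laser and microwave systems end up at comparable total cost reflects source and radiating-optic costs that the sketch omits.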
Johansen, M P; Barnett, C L; Beresford, N A; Brown, J E; Černe, M; Howard, B J; Kamboj, S; Keum, D-K; Smodiš, B; Twining, J R; Vandenhove, H; Vives i Batlle, J; Wood, M D; Yu, C
2012-06-15
Radiological doses to terrestrial wildlife were examined in this model inter-comparison study that emphasised factors causing variability in dose estimation. The study participants used varying modelling approaches and information sources to estimate dose rates and tissue concentrations for a range of biota types exposed to soil contamination at a shallow radionuclide waste burial site in Australia. Results indicated that the dominant factor causing variation in dose rate estimates (up to three orders of magnitude on mean total dose rates) was the soil-to-organism transfer of radionuclides that included variation in transfer parameter values as well as transfer calculation methods. Additional variation was associated with other modelling factors including: how participants conceptualised and modelled the exposure configurations (two orders of magnitude); which progeny to include with the parent radionuclide (typically less than one order of magnitude); and dose calculation parameters, including radiation weighting factors and dose conversion coefficients (typically less than one order of magnitude). Probabilistic approaches to model parameterisation were used to encompass and describe variable model parameters and outcomes. The study confirms the need for continued evaluation of the underlying mechanisms governing soil-to-organism transfer of radionuclides to improve estimation of dose rates to terrestrial wildlife. The exposure pathways and configurations available in most current codes are limited when considering instances where organisms access subsurface contamination through rooting, burrowing, or using different localised waste areas as part of their habitual routines.
Florence, Curtis S; Zhou, Chao; Luo, Feijun; Xu, Likang
2016-10-01
It is important to understand the magnitude and distribution of the economic burden of prescription opioid overdose, abuse, and dependence to inform clinical practice, research, and other decision makers. Decision makers choosing approaches to address this epidemic need cost information to evaluate the cost effectiveness of their choices. This study estimates the economic burden of prescription opioid overdose, abuse, and dependence from a societal perspective for calendar year 2013. Incidence of fatal prescription opioid overdose was taken from the National Vital Statistics System, and prevalence of abuse and dependence from the National Survey on Drug Use and Health. Fatal data cover the US population; nonfatal data come from a nationally representative sample of the US civilian noninstitutionalized population ages 12 and older. Cost data are from various sources, including health care claims from the Truven Health MarketScan Research Databases and costs of fatal cases from the WISQARS (Web-based Injury Statistics Query and Reporting System) cost module. Criminal justice costs were derived from the Justice Expenditure and Employment Extracts published by the Department of Justice, and estimates of lost productivity were based on a previously published study. The outcome measure is the monetized burden of fatal overdose and of abuse and dependence of prescription opioids. The total economic burden is estimated to be $78.5 billion. Over one third of this amount is due to increased health care and substance abuse treatment costs ($28.9 billion). Approximately one quarter of the cost is borne by the public sector in health care, substance abuse treatment, and criminal justice costs. These estimates can assist decision makers in understanding the magnitude of adverse health outcomes associated with prescription opioid use such as overdose, abuse, and dependence.
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low accuracy function and gradient values are frequently much less expensive to obtain than high accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm converges even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
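A minimal sketch of the setting (not the report's own algorithm): a textbook trust-region loop driven by a gradient corrupted by up to 50% relative error. It typically still makes steady progress toward the Rosenbrock minimizer at (1, 1), because the step-acceptance test uses exact function values.

```python
# Minimal trust-region sketch with deliberately noisy gradients, illustrating
# the inexact-evaluation setting studied here. Generic Cauchy-point method,
# not the algorithm from the report.
import numpy as np

def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def noisy_grad(x, rel_err=0.5, rng=np.random.default_rng(0)):  # shared generator
    g = np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                  200*(x[1] - x[0]**2)])
    return g * (1 + rel_err * rng.uniform(-1, 1, size=2))  # up to 50% error

x, radius = np.array([-1.2, 1.0]), 1.0
for _ in range(500):
    g = noisy_grad(x)
    step = -radius * g / np.linalg.norm(g)         # Cauchy-like full-radius step
    actual = rosenbrock(x) - rosenbrock(x + step)  # actual reduction
    predicted = -g @ step                          # linear-model prediction
    rho = actual / predicted if predicted > 0 else -1.0
    if rho > 0.1:                                  # accept; maybe expand radius
        x = x + step
        radius = min(2 * radius, 10.0) if rho > 0.75 else radius
    else:                                          # reject and shrink radius
        radius *= 0.5
print(x, rosenbrock(x))
```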
Bankert, Brian; Coberley, Carter; Pope, James E; Wells, Aaron
2015-02-01
This paper presents a new approach to estimating the indirect costs of health-related absenteeism. Productivity losses related to employee absenteeism have negative business implications for employers, and these losses effectively deprive the business of an expected level of employee labor. The approach herein quantifies absenteeism cost using an output-per-labor-hour-based method and extends employer-level results to the region. This new approach was applied to the employed populations of 3 health insurance carriers. The economic cost of absenteeism was estimated to be $6.8 million, $0.8 million, and $0.7 million on average for the 3 employers; regional losses were roughly twice the magnitude of employer-specific losses. The new approach suggests that costs related to absenteeism for high output-per-labor-hour industries exceed similar estimates derived from application of the human capital approach. The materially higher costs under the new approach emphasize the importance of accurately estimating productivity losses.
The worldwide costs of marine protected areas
Balmford, Andrew; Gravestock, Pippa; Hockley, Neal; McClean, Colin J.; Roberts, Callum M.
2004-01-01
Declines in marine harvests, wildlife, and habitats have prompted calls at both the 2002 World Summit on Sustainable Development and the 2003 World Parks Congress for the establishment of a global system of marine protected areas (MPAs). MPAs that restrict fishing and other human activities conserve habitats and populations and, by exporting biomass, may sustain or increase yields of nearby fisheries. Here we provide an estimate of the costs of a global MPA network, based on a survey of the running costs of 83 MPAs worldwide. Annual running costs per unit area spanned six orders of magnitude, and were higher in MPAs that were smaller, closer to coasts, and in high-cost, developed countries. Models extrapolating these findings suggest that a global MPA network meeting the World Parks Congress target of conserving 20–30% of the world's seas might cost between $5 billion and $19 billion annually to run and would probably create around one million jobs. Although substantial, gross network costs are less than current government expenditures on harmful subsidies to industrial fisheries. They also ignore potential private gains from improved fisheries and tourism and are dwarfed by likely social gains from increasing the sustainability of fisheries and securing vital ecosystem services. PMID:15205483
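The extrapolation model described (running cost per unit area related to MPA size, distance to coast, and local cost level) can be sketched as an ordinary least-squares fit in log space. The data, signs, and coefficients below are fabricated for illustration; only the direction of each effect follows the paper's findings.

```python
# Illustrative log-log cost model for MPA running costs, in the spirit of the
# extrapolation described above. All data below are fabricated placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 83
area_km2 = 10 ** rng.uniform(0, 5, n)         # MPA size
dist_coast_km = 10 ** rng.uniform(-1, 2, n)   # distance to coast
gdp_index = 10 ** rng.uniform(-1, 1, n)       # local cost-level proxy

# Assumed generating process: cost per area falls with size and distance from
# the coast, and rises in high-cost countries (signs match the findings above).
log_cost_per_km2 = (4.0 - 0.7 * np.log10(area_km2)
                    - 0.3 * np.log10(dist_coast_km)
                    + 0.8 * np.log10(gdp_index)
                    + rng.normal(0, 0.4, n))

X = np.column_stack([np.ones(n), np.log10(area_km2),
                     np.log10(dist_coast_km), np.log10(gdp_index)])
beta, *_ = np.linalg.lstsq(X, log_cost_per_km2, rcond=None)
print("fitted elasticities:", np.round(beta, 2))

# Extrapolate to a hypothetical new 1,000 km2 MPA, 5 km offshore, mid-cost.
x_new = np.array([1.0, 3.0, np.log10(5.0), 0.0])
print(f"predicted running cost: ~10^{x_new @ beta:.2f} USD per km2 per year")
```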
Integrating legal liabilities in nanomanufacturing risk management.
Mohan, Mayank; Trump, Benjamin D; Bates, Matthew E; Monica, John C; Linkov, Igor
2012-08-07
Among other things, the wide-scale development and use of nanomaterials is expected to produce costly regulatory and civil liabilities for nanomanufacturers due to lingering uncertainties, unanticipated effects, and potential toxicity. The life-cycle environmental, health, and safety (EHS) risks of nanomaterials are currently being studied, but the corresponding legal risks have not been systematically addressed. With the aid of a systematic approach that holistically evaluates and accounts for uncertainties about the inherent properties of nanomaterials, it is possible to provide an order of magnitude estimate of liability risks from regulatory and litigious sources based on current knowledge. In this work, we present a conceptual framework for integrating estimated legal liabilities with EHS risks across nanomaterial life-cycle stages using empirical knowledge in the field, scientific and legal judgment, probabilistic risk assessment, and multicriteria decision analysis. Such estimates will provide investors and operators with a basis to compare different technologies and practices and will also inform regulatory and legislative bodies in determining standards that balance risks with technical advancement. We illustrate the framework through the hypothetical case of a manufacturer of nanoscale titanium dioxide and use the resulting expected legal costs to evaluate alternative risk-management actions.
An adaptive Gaussian process-based iterative ensemble smoother for data assimilation
NASA Astrophysics Data System (ADS)
Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao
2018-05-01
Accurate characterization of subsurface hydraulic conductivity is vital for modeling subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate heterogeneous parameter fields. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. The sensitivity information between model parameters and measurements is then calculated from a large number of realizations generated by the GP surrogate at virtually no computational cost. Since the original model evaluations are only required for the base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated on saturated and unsaturated flow problems. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
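A compact sketch of the core idea on a toy one-parameter problem: a GP surrogate stands in for the expensive forward model inside an ensemble smoother update. This illustrates the mechanism, not GPIES's actual base-point refinement scheme; the forward model, prior, and quantile-based base points are all invented.

```python
# GP-accelerated ensemble smoother sketch on a toy 1-parameter problem.
# The expensive model is called only at a handful of base points per iteration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def forward(k):                      # "expensive" model (toy stand-in)
    return np.sin(k) + 0.1 * k**2

rng = np.random.default_rng(2)
k_true, obs_err = 1.3, 0.05
d_obs = forward(k_true) + rng.normal(0, obs_err)

ensemble = rng.normal(0.0, 1.0, 200)                 # prior realizations
for iteration in range(4):
    base = np.quantile(ensemble, [0.05, 0.3, 0.5, 0.7, 0.95])  # few base points
    gp = GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-8).fit(
        base.reshape(-1, 1), forward(base))          # original model only here
    d_pred = gp.predict(ensemble.reshape(-1, 1))     # cheap surrogate evaluations
    c_md = np.cov(ensemble, d_pred)[0, 1]            # parameter-data covariance
    c_dd = np.var(d_pred)
    gain = c_md / (c_dd + obs_err**2)                # Kalman-style gain
    perturbed = d_obs + rng.normal(0, obs_err, ensemble.size)
    ensemble = ensemble + gain * (perturbed - d_pred)
print(f"true k = {k_true}, posterior mean = {ensemble.mean():.2f}")
```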
[Nurses' professional prestige: estimate of magnitudes and expanded categories].
Sousa, F A; da Silva, J A
2001-01-01
The prestige of professionals such as social workers, biologists, dentists, nurses, engineers, pharmacists, physicists, physical therapists, speech-language pathologists, physicians, psychologists, chemists, and sociologists was scaled by the psychophysical methods of magnitude estimation and expanded categories. Results showed that: (1) when the limited amplitude of the categories is increased, the expanded-category method has the same characteristics as magnitude estimation; and (2) the relationship between magnitude estimates and expanded-category estimates is a power function with an exponent not significantly different from 1.0. These data supported the following conclusions: (1) among the 13 professions, nursing ranks seventh or eighth in prestige, whereas medicine ranks first on the scales obtained by both methods; and (2) the prestige orderings produced by the two methods agree closely across the different professions.
Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1981-01-01
A parametric software cost estimation model prepared for Deep Space Network (DSN) Data Systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit DSN software life cycle statistics. The estimation model output scales a standard DSN Work Breakdown Structure skeleton, which is then input into a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
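Parametric models of this family typically scale a nominal size-based effort estimate by multiplicative environment and technology factors. The sketch below is a generic COCOMO-style illustration with made-up coefficients and driver ratings, not the DSN model's calibrated parameters.

```python
# Generic parametric software cost model sketch (COCOMO-like form):
# effort = a * size^b * product(cost drivers). Coefficients are illustrative.

def effort_person_months(ksloc, drivers, a=2.8, b=1.1):
    multiplier = 1.0
    for rating in drivers.values():
        multiplier *= rating
    return a * ksloc**b * multiplier

# Example cost-driver ratings elicited from questionnaire-style responses
# (hypothetical: >1 inflates effort, <1 reduces it).
drivers = {
    "task_difficulty": 1.15,
    "development_environment": 0.95,
    "software_technology": 0.90,
    "personnel_experience": 0.85,
}
print(f"{effort_person_months(32.0, drivers):.0f} person-months for 32 KSLOC")
```

The resulting effort figure is what would then be spread across a work breakdown structure skeleton and fed to a PERT/CPM scheduler, as the abstract describes.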
Earthquake magnitude estimation using the τc and Pd method for earthquake early warning systems
NASA Astrophysics Data System (ADS)
Jin, Xing; Zhang, Hongcai; Li, Jun; Wei, Yongxiang; Ma, Qiang
2013-10-01
Earthquake early warning (EEW) systems are one of the most effective ways to reduce earthquake disasters. Earthquake magnitude estimation is one of the most important and also most difficult parts of an EEW system. In this paper, based on 142 earthquake events and 253 seismic records recorded by the KiK-net in Japan, together with aftershocks of the large Wenchuan earthquake in Sichuan, we obtained earthquake magnitude estimation relationships using the τc and Pd methods. The standard variances of magnitude calculation for these two formulas are ±0.65 and ±0.56, respectively. The Pd value can also be used to estimate the peak ground velocity, so that warning information can be released to the public rapidly according to the estimation results. To ensure the stability and reliability of the magnitude estimates, we propose a compatibility test based on the natures of these two parameters. The reliability of the early warning information is significantly improved through this test.
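The two parameters are computed from the first few seconds of a displacement record after the P-wave arrival. The sketch below shows the standard definitions; the regression coefficients in magnitude_from_tau_c are hypothetical placeholders, not the paper's fitted values.

```python
# Sketch of the tau_c and P_d early-warning parameters. Assumes the record
# starts at the P-wave arrival. Regression coefficients are illustrative only.
import numpy as np

def tau_c_and_pd(displacement, dt, window_s=3.0):
    n = int(window_s / dt)
    u = displacement[:n]
    v = np.gradient(u, dt)                      # velocity by differentiation
    tau_c = 2 * np.pi * np.sqrt(np.trapz(u**2, dx=dt) / np.trapz(v**2, dx=dt))
    p_d = np.max(np.abs(u))                     # peak displacement in window
    return tau_c, p_d

def magnitude_from_tau_c(tau_c, a=3.4, b=5.0):  # M = a*log10(tau_c) + b (illustrative)
    return a * np.log10(tau_c) + b

# Synthetic test record: 1 Hz dominant period, 100 Hz sampling, so tau_c ~ 1 s.
dt = 0.01
t = np.arange(0, 3, dt)
u = 0.02 * np.sin(2 * np.pi * 1.0 * t)          # meters
tau_c, p_d = tau_c_and_pd(u, dt)
print(f"tau_c = {tau_c:.2f} s, P_d = {p_d:.3f} m, M ~ {magnitude_from_tau_c(tau_c):.1f}")
```

The compatibility test the paper proposes amounts to checking that the τc-based and Pd-based magnitudes agree within an expected tolerance before a warning is issued.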
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, Sarah; Bowman, Daniel; Rodgers, Arthur
2018-04-23
Here, this research uses the acoustic coda phase delay method to estimate relative changes in air temperature between explosions with varying event masses and heights of burst. It also places a bound on source–receiver distance for the method. Previous studies used events with different shapes, heights of burst, and masses, and recorded the acoustic codas at source–receiver distances less than 1 km. This research further explores the method using explosions that differ in mass (by up to an order of magnitude) and are placed at varying heights. Source–receiver distances also cover an area out to 7 km. Relative air temperature change estimates are compared to complementary meteorological observations. Results show that two explosions that differ by an order of magnitude in mass cannot be used with this method because their propagation times in the near field and their fundamental frequencies are different. These differences are expressed as inaccuracies in the relative air temperature change estimates. An order of magnitude difference in mass is also shown to bias estimates higher. Small differences in height of burst do not affect the accuracy of the method. Finally, an upper bound of 1 km on source–receiver distance is provided based on the standard deviation characteristics of the estimates.
NASA Astrophysics Data System (ADS)
Goldberg, D.; Bock, Y.; Melgar, D.
2017-12-01
Rapid seismic magnitude assessment is a top priority for earthquake and tsunami early warning systems. For the largest earthquakes, seismic instrumentation tends to underestimate the magnitude, leading to insufficient early warning, particularly in the case of tsunami evacuation orders. GPS instrumentation provides more accurate magnitude estimates using near-field stations, but is not sensitive enough to detect the first seismic wave arrivals, which limits solution speed. By optimally combining collocated seismic and GPS instruments, we demonstrate improved solution speed of earthquake magnitude estimates for the largest seismic events. We present a real-time implementation of magnitude-scaling relations that adapts to the length of the recording, reflecting the observed evolution of ground motion with time.
Health, Human Capital, and Development*
Bleakley, Hoyt
2013-01-01
How much does disease depress development in human capital and income around the world? I discuss a range of micro evidence, which finds that health is both human capital itself and an input to producing other forms of human capital. I use a standard model to integrate these results, and suggest a re-interpretation of much of the micro literature. I then discuss the aggregate implications of micro estimates, but note the complications in extrapolating to general equilibrium, especially because of health’s effect on population size. I also review the macro evidence on this topic, which consists of either cross-country comparisons or measuring responses to health shocks. Micro estimates are 1–2 orders of magnitude smaller than the cross-country relationship, but nevertheless imply high benefit-to-cost ratios from improving certain forms of health. PMID:24147187
Near-Earth Object Astrometric Interferometry
NASA Technical Reports Server (NTRS)
Werner, Martin R.
2005-01-01
Using astrometric interferometry on near-Earth objects (NEOs) poses many interesting and difficult challenges. Poor reflectance properties and potentially no significant active emissions lead to NEOs having intrinsically low visual magnitudes. Using worst case estimates for signal reflection properties leads to NEOs having visual magnitudes of 27 and higher. Today the most sensitive interferometers in operation have limiting magnitudes of 20 or less. The main reason for this limit is due to the atmosphere, where turbulence affects the light coming from the target, limiting the sensitivity of the interferometer. In this analysis, the interferometer designs assume no atmosphere, meaning they would be placed at a location somewhere in space. Interferometer configurations and operational uncertainties are looked at in order to parameterize the requirements necessary to achieve measurements of low visual magnitude NEOs. This analysis provides a preliminary estimate of what will be required in order to take high resolution measurements of these objects using interferometry techniques.
A new dataset of Wood Anderson magnitude from the Trieste (Italy) seismic station
NASA Astrophysics Data System (ADS)
Sandron, Denis; Gentile, G. Francesco; Gentili, Stefania; Rebez, Alessandro; Santulin, Marco; Slejko, Dario
2014-05-01
The standard torsion Wood Anderson (WA) seismograph owes its fame to the fact that historically it has been used for the definition of the magnitude of an earthquake (Richter, 1935). With the progress of technology, digital broadband (BB) seismographs replaced it. However, for historical consistency and homogeneity with the old seismic catalogues, it is still important to continue computing the so-called Wood Anderson magnitude. To evaluate WA magnitude, synthetic WA-equivalent seismograms are simulated by convolving the waveforms recorded by a BB instrument with a suitable transfer function. The value of static magnification that should be applied in order to correctly simulate the WA instrument is debated. The original WA instrument in Trieste operated from 1971 to 1992, and the WA magnitude (MAW) estimates were regularly reported in the seismic station bulletins. The calculation of the local magnitude followed Richter's formula (Richter, 1935), using the table of correction factors unmodified from those calibrated for California, with no station corrections applied (Finetti, 1972). However, the WA amplitudes were computed as the vector sum rather than the arithmetic average of the horizontal components, resulting in a systematic overestimation of approximately 0.25, depending on the azimuth. In this work, we have retrieved the E-W and N-S components of the original recordings and re-computed MAW according to the original Richter (1935) formula. In 1992, the WA recordings were stopped, due to the long time required for the daily development of the photographic paper, the cost of the photographic paper, and the progress of technology. After a decade of interruption, the WA was recovered and modernized by replacing the recording on photographic paper with an electronic device, and it continues to record earthquakes today. The E-W and N-S component records were archived but not published until now. Since 2004, a Guralp 40-T BB seismometer has been installed next to the WA (a few decimeters apart), with a natural period extended to 60 s. The aim of the present work is twofold: to recover the whole data set of MAW values recorded from 1971 until now, with corrected magnitude estimates, and to verify the WA static magnification by comparing real WA data with data simulated from broadband seismometer recordings.
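The horizontal-component combination issue described above is easy to reproduce numerically: the vector sum of the two Wood-Anderson amplitudes systematically exceeds the average of the component magnitudes. The amplitudes and the -log10(A0) distance correction below are synthetic placeholders chosen only for the comparison.

```python
# Local magnitude from the vector sum of E-W and N-S Wood-Anderson amplitudes
# versus the average of the two component magnitudes. Synthetic values only.
import numpy as np

def ml(amplitude_mm, log_a0_correction=3.0):   # Richter-style ML
    return np.log10(amplitude_mm) + log_a0_correction

a_ew, a_ns = 8.0, 6.0                          # WA amplitudes in mm (synthetic)
ml_vector = ml(np.hypot(a_ew, a_ns))           # vector-sum amplitude
ml_average = 0.5 * (ml(a_ew) + ml(a_ns))       # mean of component magnitudes

# For comparable components the vector sum exceeds the average by roughly
# 0.15-0.3 magnitude units depending on the component ratio (i.e. azimuth),
# consistent with the ~0.25 overestimation reported above.
print(f"vector sum: {ml_vector:.2f}, average: {ml_average:.2f}, "
      f"difference: {ml_vector - ml_average:.2f}")
```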
Wide field imaging problems in radio astronomy
NASA Astrophysics Data System (ADS)
Cornwell, T. J.; Golap, K.; Bhatnagar, S.
2005-03-01
The new generation of synthesis radio telescopes now being proposed, designed, and constructed face substantial problems in making images over wide fields of view. Such observations are required either to achieve the full sensitivity limit in crowded fields or for surveys. The Square Kilometre Array (SKA Consortium, Tech. Rep., 2004), now being developed by an international consortium of 15 countries, will require advances well beyond the current state of the art. We review the theory of synthesis radio telescopes for large fields of view. We describe a new algorithm, W projection, for correcting the non-coplanar baselines aberration. This algorithm has improved performance over those previously used (typically an order of magnitude in speed). Despite the advent of W projection, the computing hardware required for SKA wide field imaging is estimated to cost up to $500M (2015 dollars). This is about half the target cost of the SKA. Reconfigurable computing is one way in which the costs can be decreased dramatically.
Fast and accurate genotype imputation in genome-wide association studies through pre-phasing
Howie, Bryan; Fuchsberger, Christian; Stephens, Matthew; Marchini, Jonathan; Abecasis, Gonçalo R.
2013-01-01
Sequencing efforts, including the 1000 Genomes Project and disease-specific efforts, are producing large collections of haplotypes that can be used for genotype imputation in genome-wide association studies (GWAS). Imputing from these reference panels can help identify new risk alleles, but the use of large panels with existing methods imposes a high computational burden. To keep imputation broadly accessible, we introduce a strategy called “pre-phasing” that maintains the accuracy of leading methods while cutting computational costs by orders of magnitude. In brief, we first statistically estimate the haplotypes for each GWAS individual (“pre-phasing”) and then impute missing genotypes into these estimated haplotypes. This reduces the computational cost because: (i) the GWAS samples must be phased only once, whereas standard methods would implicitly re-phase with each reference panel update; (ii) it is much faster to match a phased GWAS haplotype to one reference haplotype than to match unphased GWAS genotypes to a pair of reference haplotypes. This strategy will be particularly valuable for repeated imputation as reference panels evolve. PMID:22820512
Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates
NASA Astrophysics Data System (ADS)
Carbogno, Christian; Scheffler, Matthias
In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, along with the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Finally, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students
Malone, Amelia S.; Fuchs, Lynn S.
2016-01-01
The 3 purposes of this study were to: (a) describe fraction ordering errors among at-risk 4th-grade students; (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors; and (c) examine the effect of students' ability to explain comparing problems on the probability of committing errors. Students (n = 227) completed a 9-item ordering test. A high proportion (81%) of problems were completed incorrectly. Most (65% of) errors were due to students misapplying whole number logic to fractions. Fraction-magnitude estimation skill, but not part-whole understanding, significantly predicted the probability of committing this type of error. Implications for practice are discussed. PMID:26966153
Ruhl, James F.; Kanivetsky, Roman; Shmagin, Boris
2002-01-01
Recharge estimates, which generally varied within 10 in./yr for each of the methods, generally were largest based on the precipitation, ground-water level fluctuation, and age dating of shallow ground water methods, slightly smaller based on the streamflow-recession displacement method, and smallest based on the watershed characteristics method. Leakage, which was less than 1 in./yr, varied within 1 order of magnitude based on the ground-water level fluctuation method and as much as 4 orders of magnitude based on analyses of vertical-hydraulic gradients.
NASA Astrophysics Data System (ADS)
Smith, Harlan J.
1989-10-01
Many design and technical innovations over the past ten or fifteen years have reduced the costs of very large telescopes by nearly an order of magnitude over those of classical designs. Still a further order of magnitude reduction is possible if the telescope is specialized for on-axis spectroscopy, giving up especially the luxuries of wide field, multiple focal positions, and access to all the sky at will. The SST (Spectroscopic Survey Telescope) will use eighty-five 1-m circular mirrors mounted in a steel frame composed of hundreds of interlocking tetrahedrons, keeping a fixed elevation angle of 60 deg with rotation only in azimuth. Using an optical fiber it will feed as much light to spectrographs as can be done by a conventional 8-m telescope, yet has a target basic completion cost of only $6 million.
Carbon Dioxide Observational Platform System (CO-OPS), feasibility study
NASA Technical Reports Server (NTRS)
Bouquet, D. L.; Hall, D. W.; Mcelveen, R. P.
1987-01-01
The Carbon Dioxide Observational Platform System (CO-OPS) is a near-space, geostationary, multi-user, unmanned, microwave-powered monitoring platform system. This systems engineering feasibility study addressed identified requirements such as carbon dioxide observational data requirements, communications requirements, and eye-in-the-sky requirements of other groups such as the Defense Department, the Forestry Service, and the Coast Guard. In addition, potential applications in earth system science, space system sciences, and test and verification (satellite sensors and data management techniques) were considered. The eleven-month effort is summarized. Past work and methods of gathering the required observational data were assessed, and rough-order-of-magnitude cost estimates showed the CO-OPS system to be most cost effective (less than $30 million over a 10-year lifetime). It was also concluded that there are no technical, schedule, or other obstacles that would prevent achieving the objectives of the total 5-year CO-OPS program.
Benefits of Government Incentives for Reusable Launch Vehicle Development
NASA Technical Reports Server (NTRS)
Shaw, Eric J.; Hamaker, Joseph W.; Prince, Frank A.
1998-01-01
Many exciting new opportunities in space, both government missions and business ventures, could be realized by a reduction in launch prices. Reusable launch vehicle (RLV) designs have the potential to lower launch costs dramatically from those of today's expendable and partially-expendable vehicles. Unfortunately, governments must budget to support existing launch capability, and so lack the resources necessary to completely fund development of new reusable systems. In addition, the new commercial space markets are too immature and uncertain to motivate the launch industry to undertake a project of this magnitude and risk. Low-cost launch vehicles will not be developed without a mature market to service; however, launch prices must be reduced in order for a commercial launch market to mature. This paper estimates and discusses the various benefits that may be reaped from government incentives for a commercial reusable launch vehicle program.
Methods for estimating magnitude and frequency of peak flows for natural streams in Utah
Kenney, Terry A.; Wilkowske, Chris D.; Wright, Shane J.
2007-01-01
Estimates of the magnitude and frequency of peak streamflows are critical for the safe and cost-effective design of hydraulic structures and stream crossings, and for accurate delineation of flood plains. Engineers, planners, resource managers, and scientists need accurate estimates of peak-flow return frequencies for locations on streams with and without streamflow-gaging stations. The 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were estimated for 344 unregulated U.S. Geological Survey streamflow-gaging stations in Utah and nearby in bordering states. These data, along with 23 basin and climatic characteristics computed for each station, were used to develop regional peak-flow frequency and magnitude regression equations for 7 geohydrologic regions of Utah. These regression equations can be used to estimate the magnitude and frequency of peak flows for natural streams in Utah within the presented range of predictor variables. Uncertainty, presented as the average standard error of prediction, was computed for each developed equation. Equations developed using data from more than 35 gaging stations had standard errors of prediction that ranged from 35 to 108 percent, and errors for equations developed using data from fewer than 35 gaging stations ranged from 50 to 357 percent.
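Regional regression equations of this kind are usually of a log-linear power-law form in basin characteristics. The sketch below illustrates that form with entirely hypothetical coefficients; the report's equations are region-specific and are not reproduced here.

```python
# Illustrative regional peak-flow regression of the usual log-linear form
# Q_T = 10^b0 * A^b1 * P^b2, with hypothetical coefficients and inputs.
import numpy as np

def peak_flow_cfs(drainage_area_mi2, mean_annual_precip_in,
                  b0=1.5, b1=0.75, b2=0.9):
    """Hypothetical 100-year peak-flow regression for a natural stream."""
    return 10**b0 * drainage_area_mi2**b1 * mean_annual_precip_in**b2

q100 = peak_flow_cfs(drainage_area_mi2=120.0, mean_annual_precip_in=22.0)
# The average standard error of prediction is reported as a percentage; a 60%
# error implies roughly a factor-of-1.6 uncertainty band around the estimate.
print(f"Q100 ~ {q100:,.0f} cfs (roughly x/1.6 band at ~60% SEP)")
```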
NASA Astrophysics Data System (ADS)
Sparks, L. E.; Ramsey, G. H.; Daniel, B. E.
The results of pilot plant experiments on particulate collection by a venturi scrubber downstream from an electrostatic precipitator (ESP) are presented. The data, which cover a range of scrubber operating conditions and ESP efficiencies, show that particle collection by the venturi scrubber is not affected by the upstream ESP; i.e., for a given scrubber pressure drop, particle collection efficiency as a function of particle diameter is the same with the ESP on and off. The experimental results are in excellent agreement with theoretical predictions. Order of magnitude cost estimates indicate that particle collection by ESP-scrubber systems may be economically attractive when scrubbers must be used for SOx control.
Software cost/resource modeling: Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. J.
1980-01-01
A parametric software cost estimation model prepared for JPL deep space network (DSN) data systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models, such as those of the General Research Corporation, Doty Associates, IBM (Walston-Felix), Rome Air Force Development Center, University of Maryland, and Rayleigh-Norden-Putnam. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software lifecycle statistics. The estimation model output scales a standard DSN work breakdown structure skeleton, which is then input to a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
The Psychology of Cost Estimating
NASA Technical Reports Server (NTRS)
Price, Andy
2016-01-01
Cost estimation for large (and even not so large) government programs is a challenge. The number and magnitude of cost overruns associated with large Department of Defense (DoD) and National Aeronautics and Space Administration (NASA) programs highlight the difficulties in developing and promulgating accurate cost estimates. These overruns can be the result of inadequate technology readiness or requirements definition, the whims of politicians or government bureaucrats, or even failures of the cost estimating profession itself. However, there may be another reason for cost overruns that is right in front of us, though only recently have we begun to grasp it: the fact that cost estimators and their customers are human. The last 70+ years of research into human psychology and behavioral economics have yielded amazing findings about how we humans process and use information to make judgments and decisions. What these scientists have uncovered is surprising: humans are often irrational and illogical beings, making decisions based on factors such as emotion and perception rather than facts and data. These built-in biases in our thinking directly affect how we develop our cost estimates and how those cost estimates are used. We cost estimators can use this knowledge of biases to improve our cost estimates and also to improve how we communicate and work with our customers. By understanding how our customers think, and more importantly, why they think the way they do, we can have more productive relationships and greater influence. By using psychology to our advantage, we can more effectively help decision makers and our organizations make fact-based decisions.
Computing return times or return periods with rare event algorithms
NASA Astrophysics Data System (ADS)
Lestang, Thibault; Ragone, Francesco; Bréhier, Charles-Edouard; Herbert, Corentin; Bouchet, Freddy
2018-04-01
The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events. So far those algorithms have usually computed probabilities rather than return times. The approach we propose provides a computationally extremely efficient way to estimate the return times of rare events for a dynamical system, gaining several orders of magnitude in computational cost. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein–Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
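The block-maximum estimator mentioned above is simple to state: divide a long trajectory into blocks of duration T_block, and from the fraction F(a) of blocks whose maximum stays below a threshold a, estimate the return time as r(a) = -T_block / ln F(a). The sketch below demonstrates it on a plain Ornstein-Uhlenbeck path (direct simulation, not the rare-event algorithms of the paper); all parameters are illustrative.

```python
# Block-maximum return-time estimator r(a) = -T_block / ln F(a), demonstrated
# on an Euler-Maruyama simulation of an Ornstein-Uhlenbeck process.
import numpy as np

rng = np.random.default_rng(3)
dt, n_steps, theta, sigma = 0.01, 2_000_000, 1.0, 1.0
x = np.empty(n_steps)
x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(dt), n_steps - 1)
for i in range(n_steps - 1):                       # Euler-Maruyama OU path
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * noise[i]

block_len = 10_000                                 # T_block = 100 time units
block_max = x[: n_steps // block_len * block_len].reshape(-1, block_len).max(axis=1)
t_block = block_len * dt

for a in (1.5, 2.0, 2.5):
    f = np.mean(block_max < a)                     # fraction of "quiet" blocks
    if 0.0 < f < 1.0:
        print(f"threshold {a}: return time ~ {-t_block / np.log(f):,.0f} time units")
```

For thresholds rare enough that no block exceeds them, direct simulation fails entirely; that is the regime where the paper's rare-event algorithms gain their orders of magnitude.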
Exploration, Sampling, And Reconstruction of Free Energy Surfaces with Gaussian Process Regression.
Mones, Letif; Bernstein, Noam; Csányi, Gábor
2016-10-11
Practical free energy reconstruction algorithms involve three separate tasks: biasing, measuring some observable, and finally reconstructing the free energy surface from those measurements. In more than one dimension, adaptive schemes make it possible to explore only relatively low lying regions of the landscape by progressively building up the bias toward the negative of the free energy surface so that free energy barriers are eliminated. Most schemes use the final bias as their best estimate of the free energy surface. We show that large gains in computational efficiency, as measured by the reduction of time to solution, can be obtained by separating the bias used for dynamics from the final free energy reconstruction itself. We find that biasing with metadynamics, measuring a free energy gradient estimator, and reconstructing using Gaussian process regression can give an order of magnitude reduction in computational cost.
NASA Astrophysics Data System (ADS)
Wang, Delin
In this thesis, we develop the basics of the Passive Ocean Acoustic Waveguide Remote Sensing (POAWRS) technique for the instantaneous continental-shelf-scale detection, localization, and species classification of marine mammal vocalizations. POAWRS uses a large-aperture, densely sampled coherent hydrophone array system with orders of magnitude higher array gain to enhance signal-to-noise ratios (SNR) by coherent beamforming, enabling detection of underwater acoustic signals either two orders of magnitude more distant in range or lower in SNR than a single hydrophone. The ability to employ coherent spatial processing of signals with the POAWRS technology significantly improves areal coverage, enabling detection of oceanic sound sources over instantaneous wide areas spanning 100 km or more in diameter. The POAWRS approach was applied to analyze marine mammal vocalizations from diverse species received on a 160-element Office of Naval Research Five Octave Research Array (ONR-FORA) deployed during the animals' feeding season in Fall 2006 in the Gulf of Maine. The species-dependent temporal-spatial distribution of marine mammal vocalizations and its correlation to prey fish distributions have been determined. Furthermore, the probability-of-detection regions, source level distributions, and pulse compression gains of the vocalization signals from diverse marine mammal species have been estimated. We also develop an approach for enhancing the angular resolution and improving bearing estimates of acoustic signals received on a coherent hydrophone array with multiple nested uniformly spaced subapertures, such as the ONR-FORA, by nonuniform array beamforming. Finally, we develop a low-cost, non-oil-filled, towable prototype hydrophone array that consists of eight hydrophone elements with real-time data acquisition and a 100 m tow cable. The approach demonstrated here will be applied in the development of a full 160-element POAWRS-type low-cost coherent hydrophone array system in the future.
Serrier, Hassan; Sultan-Taieb, Hélène; Luce, Danièle; Bejean, Sophie
2014-07-01
The objective of this article was to estimate the social cost of respiratory cancer cases attributable to occupational risk factors in France in 2010. Following the attributable fraction method and based on available epidemiological data from the literature, we estimated the number of respiratory cancer cases due to each identified risk factor. We used the cost-of-illness method with a prevalence-based approach, taking into account both direct and indirect costs. We estimated the cost of production losses due to morbidity (absenteeism and presenteeism) and mortality (years of production lost) in the market and nonmarket spheres. The social cost of lung, larynx, and sinonasal cancers and mesothelioma caused by exposure to asbestos, chromium, diesel engine exhaust, paint, crystalline silica, and wood and leather dust in France in 2010 was estimated at between 917 and 2,181 million euros. Between 795 and 2,011 million euros (87-92%) of total costs were due to lung cancer alone. Asbestos was by far the risk factor representing the greatest cost to French society in 2010, at between 531 and 1,538 million euros (58-71%), ahead of diesel engine exhaust (233-336 million euros) and crystalline silica (119-229 million euros). Indirect costs represented about 66% of total costs. Our assessment shows the magnitude of the economic impact of occupational respiratory cancers. It allows comparisons between countries and provides valuable information for policy-makers responsible for defining public health priorities.
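The attributable-fraction costing chain used here reduces to a pair of simple formulas: exposure prevalence and relative risk give a population attributable fraction, which scales case counts and hence costs. The numbers below are illustrative, not the study's.

```python
# Attributable-fraction costing sketch. All inputs are illustrative placeholders.

def attributable_fraction(prevalence, relative_risk):
    """Levin's population attributable fraction."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

def attributable_cost(total_cases, prevalence, relative_risk, cost_per_case):
    return attributable_fraction(prevalence, relative_risk) * total_cases * cost_per_case

# Hypothetical: 30,000 annual lung cancer cases, 10% occupationally exposed
# to an agent with RR = 1.5, societal cost of 60,000 euros per case.
af = attributable_fraction(0.10, 1.5)
cost = attributable_cost(30_000, 0.10, 1.5, 60_000.0)
print(f"attributable fraction: {af:.1%}, attributable cost: {cost/1e6:.0f}M euros")
```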
Spacecraft Complexity Subfactors and Implications on Future Cost Growth
NASA Technical Reports Server (NTRS)
Leising, Charles J.; Wessen, Randii; Ellyin, Ray; Rosenberg, Leigh; Leising, Adam
2013-01-01
During the last ten years the Jet Propulsion Laboratory has used a set of cost-risk subfactors to independently estimate the magnitude of development risks that may not be covered in the high level cost models employed during early concept development. Within the last several years the Laboratory has also developed a scale of Concept Maturity Levels with associated criteria to quantitatively assess a concept's maturity. This latter effort has been helpful in determining whether a concept is mature enough for accurate costing but it does not provide any quantitative estimate of cost risk. Unfortunately today's missions are significantly more complex than when the original cost-risk subfactors were first formulated. Risks associated with complex missions are not being adequately evaluated and future cost growth is being underestimated. The risk subfactor process needed to be updated.
NASA Technical Reports Server (NTRS)
Castruccio, P.; Loats, H.; Lloyd, D.; Newman, P.
1981-01-01
The results of the OASSO ASVTs were used to estimate the benefits accruing from the added information available from satellite snowcover area measurement. Estimates of the improvement in runoff prediction due to the addition of SATSCAM were made by the Colorado ASVT personnel; the improvement estimate is 6-10%. Data were applied to subregions covering the Western States snow area, amended by information from the ASVT and other watershed experts to exclude areas that are not impacted by snowmelt runoff. Benefit models were developed for irrigation and hydroenergy uses. The benefit/cost ratio is 72:1. Since only two major benefit contributors were used, and since the forecast improvement estimate does not take into account future satellite capabilities, these estimates are considered conservative. The large magnitude of the benefit/cost ratio supports the utility and applicability of SATSCAM.
NASA Astrophysics Data System (ADS)
Rhodes, Russel E.; Byrd, Raymond J.
1998-01-01
This paper presents a "back of the envelope" technique for fast, timely, on-the-spot assessment of the affordability (profitability) of commercial space transportation architectural concepts. The tool presented here is not intended to replace conventional, detailed costing methodology. The process described enables "quick look" estimations and assumptions to effectively determine whether an initial concept (with its attendant cost estimating line items) provides focus for major leapfrog improvement. The Cost Charts Users Guide provides a generic sample tutorial, building an approximate understanding of the basic launch system cost factors and their representative magnitudes. This process enables the user to develop a net "cost (and price) per payload-mass unit to orbit" incorporating a variety of significant cost drivers, supplemental to basic vehicle cost estimates. If acquisition cost and recurring cost factors (as a function of cost per payload-mass unit to orbit) do not meet the predetermined system-profitability goal, the concept in question is immediately seen as non-competitive. Because the tool is inherently flexible, multiple analytical approaches and a variety of interrelated assumptions can be examined in a quick, on-the-spot cost approximation analysis. The technique allows determination of concept conformance to system objectives.
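A quick-look roll-up of this kind amounts to amortizing acquisition cost over a flight campaign, adding recurring per-flight cost, and dividing by payload mass. The line items and values in the sketch below are invented placeholders, not figures from the Cost Charts Users Guide.

```python
# Quick-look "cost per payload-mass unit to orbit" roll-up in the spirit
# described above. All line items and values are invented placeholders.

def price_per_kg(acquisition_cost, flights_amortized, recurring_cost_per_flight,
                 payload_kg_per_flight, margin=0.10):
    amortized = acquisition_cost / flights_amortized
    cost_per_flight = amortized + recurring_cost_per_flight
    return cost_per_flight * (1.0 + margin) / payload_kg_per_flight

price = price_per_kg(
    acquisition_cost=2.0e9,            # vehicle development/acquisition, USD
    flights_amortized=200,             # flights over which acquisition is spread
    recurring_cost_per_flight=15e6,    # ops, propellant, refurbishment, fees
    payload_kg_per_flight=12_000,
)
print(f"price to orbit: ~${price:,.0f} per kg")
# If this exceeds the predetermined profitability goal (say, a target $/kg),
# the concept is immediately flagged as non-competitive, per the method above.
```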
Sergiievskyi, Volodymyr P; Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel
2014-06-05
Molecular density functional theory (MDFT) offers an efficient implicit-solvent method to estimate molecular solvation free energies while conserving a fully molecular representation of the solvent. Even within a second-order approximation for the free-energy functional, the so-called homogeneous reference fluid approximation, we show that the hydration free energies computed for a data set of 500 organic compounds are of similar quality as those obtained from molecular dynamics free-energy perturbation simulations, with a computer cost reduced by 2-3 orders of magnitude. This requires introducing the proper partial volume correction to transform the results from the grand canonical to the isobaric-isothermal ensemble that is pertinent to experiments. We show that this correction can be extended to 3D-RISM calculations, giving a sound theoretical justification to empirical partial molar volume corrections that have been proposed recently.
Joint inversion of acoustic and resistivity data for the estimation of gas hydrate concentration
Lee, Myung W.
2002-01-01
Downhole log measurements, such as acoustic or electrical resistivity logs, are frequently used to estimate in situ gas hydrate concentrations in the pore space of sedimentary rocks. Usually the gas hydrate concentration is estimated separately from each log measurement. However, the measurements are related to each other through the gas hydrate concentration, so the concentration can be estimated by jointly inverting the available logs. Because the magnitudes of acoustic slowness and resistivity values differ by more than an order of magnitude, a least-squares method weighted by the inverse of the observed values is attempted. Estimating the resistivity of connate water and the gas hydrate concentration simultaneously is problematic, because the resistivity of connate water is independent of the acoustic measurements. To overcome this problem, a coupling constant is introduced in the Jacobian matrix. When different logs are used to estimate gas hydrate concentration, a joint inversion of the measurements is preferable to averaging the results of separate inversions.
Benefits of invasion prevention: Effect of time lags, spread rates, and damage persistence
Rebecca S. Epanchin-Niell; Andrew M. Liebhold
2015-01-01
Quantifying economic damages caused by invasive species is crucial for cost-benefit analyses of biosecurity measures. Most studies focus on short-term damage estimates, but evaluating exclusion or prevention measures requires estimates of total anticipated damages from the time of establishment onward. The magnitude of such damages critically depends on the timing of...
Over the past decade, a range of sensor technologies became available on the market, enabling a revolutionary shift in air pollution monitoring and assessment. With their cost of up to three orders of magnitude lower than standard/reference instruments, many avenues for applicati...
Manzi, Fatuma; Hutton, Guy; Schellenberg, Joanna; Tanner, Marcel; Alonso, Pedro; Mshinda, Hassan; Schellenberg, David
2008-01-01
Background Achieving the Millennium Development Goals for health requires a massive scaling-up of interventions in Sub Saharan Africa. Intermittent Preventive Treatment in infants (IPTi) is a promising new tool for malaria control. Although efficacy information is available for many interventions, there is a dearth of data on the resources required for scaling up of health interventions. Method We worked in partnership with the Ministry of Health and Social Welfare (MoHSW) to develop an IPTi strategy that could be implemented and managed by routine health services. We tracked health system and other costs of (1) developing the strategy and (2) maintaining routine implementation of the strategy in five districts in southern Tanzania. Financial costs were extracted and summarized from a costing template and semi-structured interviews were conducted with key informants to record time and resources spent on IPTi activities. Results The estimated financial cost to start-up and run IPTi in the whole of Tanzania in 2005 was US$1,486,284. Start-up costs of US$36,363 were incurred at the national level, mainly on the development of Behaviour Change Communication (BCC) materials, stakeholders' meetings and other consultations. The annual running cost at national level for intervention management and monitoring and drug purchase was estimated at US$459,096. Start-up costs at the district level were US$7,885 per district, mainly expenditure on training. Annual running costs were US$170 per district, mainly for printing of BCC materials. There was no incremental financial expenditure needed to deliver the intervention in health facilities as supplies were delivered alongside routine vaccinations and available health workers performed the activities without working overtime. The economic cost was estimated at 23 US cents per IPTi dose delivered. Conclusion The costs presented here show the order of magnitude of expenditures needed to initiate and to implement IPTi at national scale in settings with high Expanded Programme on Immunization (EPI) coverage. The IPTi intervention appears to be affordable even within the budget constraints of Ministries of Health of most sub-Saharan African countries. PMID:18671874
Using Landsat to Diagnose Trends in Disturbance Magnitude Across the National Forest System
NASA Astrophysics Data System (ADS)
Hernandez, A. J.; Healey, S. P.; Stehman, S. V.; Ramsey, R. D.
2014-12-01
The Landsat archive is increasingly being used to detect trends in the occurrence of forest disturbance. Beyond information about the amount of area affected, forest managers need to know if and how disturbance severity is changing. For example, the United States National Forest System (NFS) has developed a comprehensive plan for carbon monitoring, which requires a detailed temporal mapping of forest disturbance magnitudes across 75 million hectares. To meet this need, we have prepared multitemporal models of percent canopy cover that were calibrated with extensive field data from the USFS Forest Inventory and Analysis Program (FIA). By applying these models to pre- and post-event Landsat images at the site of known disturbances, we develop maps showing first-order estimates of disturbance magnitude on the basis of cover removal. However, validation activities consistently show that these initial estimates underestimate disturbance magnitude. We have therefore developed an approach that quantifies this under-prediction at the landscape level and uses empirical validation data to adjust change magnitude estimates derived from initial disturbance maps. In an assessment of adjusted magnitude trends in the NFS Northern Region from 1990 to the present, we observed significant declines since 1990 (p < .01) in harvest magnitude, likely related to the known reduction of clearcutting practices in the region. Fire, conversely, did not show strongly significant trends in magnitude, despite an increase in the overall area affected. As Landsat is used to provide increasingly precise maps of the timing and location of historical forest disturbance, a logical next step is to use the archive to generate widely interpretable and objective estimates of disturbance magnitude.
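The adjustment step can be pictured as a simple calibration against validation data; the sketch below assumes a linear under-prediction and fabricated numbers, not the authors' actual model.

```python
import numpy as np

# Initial mapped disturbance magnitudes (percent canopy cover removed)
# under-predict; a regression against reference (validation) magnitudes
# supplies a correction applied to the full map.
rng = np.random.default_rng(0)
ref = rng.uniform(10, 90, 200)                  # validation magnitudes (%)
mapped = 0.7 * ref + rng.normal(0, 5, 200)      # systematic under-prediction

# Fit mapped = b * ref + a, then invert to adjust new mapped values.
b, a = np.polyfit(ref, mapped, 1)
def adjust(m):
    return (m - a) / b

print(f"fitted slope {b:.2f} (under-prediction factor)")
print("mapped 35% adjusted to", round(adjust(35.0), 1), "%")
```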
DOT National Transportation Integrated Search
2001-07-01
International trade occurs in physical space and moving goods requires time. This paper examines the importance of time as a trade barrier, estimates the magnitude of time costs, and relates these to patterns of trade and the international organizati...
ERIC Educational Resources Information Center
Keenan, J. M.; And Others
This report attempts to assess the economic benefits and costs of the PACE (Projects to Advance Creativity in Education) Early Identification and Intervention Project. The report is structured to provide an indication of impacts generated by the Project, with some estimates of its direction and relative magnitude. The first section on impact…
Murphy, Shannon M E; Hough, Douglas E; Sylvia, Martha L; Dunbar, Linda J; Frick, Kevin D
2018-02-08
To illustrate the impact of key quasi-experimental design elements on cost savings measurement for population health management (PHM) programs. Population health management program records and Medicaid claims and enrollment data from December 2011 through March 2016. The study uses a difference-in-difference design to compare changes in cost and utilization outcomes between program participants and propensity score-matched nonparticipants. Comparisons of measured savings are made based on (1) stable versus dynamic population enrollment and (2) all eligible versus enrolled-only participant definitions. Options for the operationalization of time are also discussed. Individual-level Medicaid administrative and claims data and PHM program records are used to match study groups on baseline risk factors and assess changes in costs and utilization. Savings estimates are statistically similar but smaller in magnitude when eliminating variability based on duration of population enrollment and when evaluating program impact on the entire target population. Measurement in calendar time, when possible, simplifies interpretability. Program evaluation design elements, including population stability and participant definitions, can influence the estimated magnitude of program savings for the payer and should be considered carefully. Time specifications can also affect interpretability and usefulness. © Health Research and Educational Trust.
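The core design can be sketched as a two-period, two-group regression in which the interaction coefficient is the savings estimate. The following example uses synthetic data and illustrative variable names, not the study's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = program participant
    "post": rng.integers(0, 2, n),      # 1 = after enrollment period
})
# true program effect: -80 (savings), applied only to treated in post period
df["cost"] = (1000 + 150 * df.treated + 50 * df.post
              - 80 * df.treated * df.post + rng.normal(0, 100, n))

# The coefficient on treated:post is the difference-in-differences estimate.
m = smf.ols("cost ~ treated * post", data=df).fit()
print(m.params["treated:post"])  # approx -80
```

Design choices such as who counts as "treated" (all eligible vs. enrolled-only) and when "post" begins are exactly the elements the abstract reports as influencing the measured savings.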
The potential cost savings of implementing an inter-utility NO{sub x} trading program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siegel, S.; Kalagnanam, J.
1995-12-31
Technology based standards such as RACT, which require the installation of a Reasonably Available Control Technology on a boiler by boiler basis have been the dominant factor driving electric utility NO{sub x} compliance plans. In this paper, the authors examine the cost savings of implementing NO{sub x} trading, an alternative market based strategy for reducing the emissions of nitrogen oxides (NO{sub x}) to achieve NO{sub x} reduction goals set under Title IV of the 1990 Clean Air Act. In order to estimate the potential cost savings of inter-utility NO{sub x} trading, the authors have used a combinatorial optimization approach to identify boiler retrofits and operating parameters which yield efficient (i.e., the most cost effective) NO{sub x} abatement. In the formulation, annual emissions at individual boilers which are expensive to abate may exceed RACT levels by up to a factor of two thus allowing for trades with boilers which can abate in a more cost effective manner. The authors constrain total emissions in a trading region to be at or below the level obtained had all the boilers adopted RACT. Increasing the flexibility with which trades can occur has two main effects: (1) the cost effectiveness of meeting an aggregate reduction goal increases and (2) the spatial distribution of emissions shift relative to what it would have been under a strict RACT based compliance strategy. The authors estimate the magnitude of these effects for two Eastern electric utilities making intra and inter-utility NO{sub x} trades. Results indicate that the cost effectiveness of meeting RACT level reduction can be increased by as much as 38% under certain trading regimes.
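The trading logic can be illustrated with a continuous linear-program relaxation (the paper itself uses a combinatorial retrofit model); boiler data and costs below are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Each boiler i chooses post-control emissions x_i at marginal abatement
# cost c_i per ton removed; emissions may exceed the boiler's RACT level by
# up to a factor of two, but the fleet total must stay at or below the
# aggregate RACT level.
base = np.array([100.0, 80.0, 120.0, 60.0])    # uncontrolled emissions, tons
ract = np.array([60.0, 50.0, 70.0, 40.0])      # boiler-level RACT limits
c = np.array([500.0, 2000.0, 800.0, 3000.0])   # $/ton abated

# minimize sum c_i * (base_i - x_i)  ==  maximize sum c_i * x_i
res = linprog(
    c=-c,                              # linprog minimizes, so negate
    A_ub=np.ones((1, len(base))),      # aggregate emissions cap
    b_ub=[ract.sum()],
    bounds=list(zip([0.0] * len(base), np.minimum(2 * ract, base))),
)
x = res.x
print("emissions per boiler:", x.round(1))
print("trading cost: $%.0f vs RACT cost: $%.0f"
      % (c @ (base - x), c @ (base - ract)))
```

With these made-up numbers the high-cost boilers buy headroom from the low-cost ones and the fleet meets the same aggregate cap at roughly half the RACT compliance cost, the qualitative effect the paper quantifies.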
Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers
Thompson, Clarissa A.; Opfer, John E.
2016-01-01
Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy. PMID:26834688
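The logarithmic-to-linear contrast is typically tested by fitting both forms to each child's estimates and comparing fit; the sketch below uses fabricated estimates for a 0-100 number line.

```python
import numpy as np

numbers = np.array([2, 5, 10, 18, 25, 42, 67, 90])
estimates = np.array([24, 42, 56, 68, 74, 85, 94, 100])  # log-like pattern

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

# linear model: est = a*x + b ; logarithmic model: est = a*ln(x) + b
lin = np.polyval(np.polyfit(numbers, estimates, 1), numbers)
log = np.polyval(np.polyfit(np.log(numbers), estimates, 1), np.log(numbers))
print(f"R^2 linear = {r2(estimates, lin):.3f}, log = {r2(estimates, log):.3f}")
```

A child whose estimates are better fit by the logarithmic form would be classified as not yet having linear representations; the studies above link the shift to the linear form with better numerical recall.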
Code of Federal Regulations, 2013 CFR
2013-04-01
... construction project cost information available to them in order to facilitate reaching agreement on an overall fair and reasonable price for the project or part thereof. In order to enhance this communication, the... cost estimate shall be an independent cost estimate based on such information as the following: (1...
Code of Federal Regulations, 2014 CFR
2014-04-01
... construction project cost information available to them in order to facilitate reaching agreement on an overall fair and reasonable price for the project or part thereof. In order to enhance this communication, the... cost estimate shall be an independent cost estimate based on such information as the following: (1...
Code of Federal Regulations, 2012 CFR
2012-04-01
... construction project cost information available to them in order to facilitate reaching agreement on an overall fair and reasonable price for the project or part thereof. In order to enhance this communication, the... cost estimate shall be an independent cost estimate based on such information as the following: (1...
NASA Astrophysics Data System (ADS)
Sutton, Patrick T.; Ginn, Timothy R.
2014-12-01
A sustainable in-well vapor stripping system is designed as a cost-effective alternative for remediation of shallow chlorinated solvent groundwater plumes. A solar-powered air compressor is used to inject air bubbles into a monitoring well to strip volatile organic compounds from a liquid to vapor phase while simultaneously inducing groundwater circulation around the well screen. An analytical model of the remediation process is developed to estimate contaminant mass flow and removal rates. The model was calibrated based on a one-day pilot study conducted in an existing monitoring well at a former dry cleaning site. According to the model, induced groundwater circulation at the study site increased the contaminant mass flow rate into the well by approximately two orders of magnitude relative to ambient conditions. Modeled estimates for 5 h of pulsed air injection per day at the pilot study site indicated that the average effluent concentrations of dissolved tetrachloroethylene and trichloroethylene can be reduced by over 90% relative to the ambient concentrations. The results indicate that the system could be used cost-effectively as either a single- or multi-well point technology to substantially reduce the mass of dissolved chlorinated solvents in groundwater.
Low-Cost Sensor System Design for In-Home Physical Activity Tracking.
Nambiar, Siddhartha; Nikolaev, Alexander; Greene, Melissa; Cavuoto, Lora; Bisantz, Ann
2016-01-01
An aging and more sedentary population requires interventions aimed at monitoring physical activity, particularly within the home. This research uses simulation, optimization, and regression analyses to assess the feasibility of using a small number of sensors to track movement and infer physical activity levels of older adults. Based on activity data from the American Time Use Survey and assisted living apartment layouts, we determined that three to four doorway sensors can effectively capture enough movement to estimate activity. The research also identified preferred approaches for assigning sensor locations, evaluated the error magnitude inherent in the approach, and developed a methodology to identify which apartment layouts would be best suited for these technologies.
Atomistic determination of flexoelectric properties of crystalline dielectrics
NASA Astrophysics Data System (ADS)
Maranganti, R.; Sharma, P.
2009-08-01
Upon application of a uniform strain, internal sublattice shifts within the unit cell of a noncentrosymmetric dielectric crystal result in the appearance of a net dipole moment: a phenomenon well known as piezoelectricity. A macroscopic strain gradient on the other hand can induce polarization in dielectrics of any crystal structure, even those which possess a centrosymmetric lattice. This phenomenon, called flexoelectricity, has both bulk and surface contributions: the strength of the bulk contribution can be characterized by means of a material property tensor called the bulk flexoelectric tensor. Several recent studies suggest that strain-gradient induced polarization may be responsible for a variety of interesting and anomalous electromechanical phenomena in materials including electromechanical coupling effects in nonuniformly strained nanostructures, “dead layer” effects in nanocapacitor systems, and “giant” piezoelectricity in perovskite nanostructures among others. In this work, adopting a lattice dynamics based microscopic approach we provide estimates of the flexoelectric tensor for certain cubic crystalline ionic salts, perovskite dielectrics, III-V and II-VI semiconductors. We compare our estimates with experimental/theoretical values wherever available and also revisit the validity of an existing empirical scaling relationship for the magnitude of flexoelectric coefficients in terms of material parameters. It is interesting to note that two independent groups report values of flexoelectric properties for perovskite dielectrics that are orders of magnitude apart: Cross and co-workers from Penn State have carried out experimental studies on a variety of materials including barium titanate while Catalan and co-workers from Cambridge used theoretical ab initio techniques as well as experimental techniques to study paraelectric strontium titanate as well as ferroelectric barium titanate and lead titanate. We find that, in the case of perovskite dielectrics, our estimates agree to an order of magnitude with the experimental and theoretical estimates for strontium titanate. For barium titanate however, while our estimates agree to an order of magnitude with existing ab initio calculations, there exists a large discrepancy with experimental estimates. The possible reasons for the observed deviations are discussed.
NASA Astrophysics Data System (ADS)
Wakeley, Heather L.
Alternative fuels could replace a significant portion of the 140 billion gallons of annual US gasoline use. Considerable attention is being paid to processes and technologies for producing alternative fuels, but an enormous investment in new infrastructure will be needed to have a substantial impact on the demand for petroleum. The economics of production, distribution, and use, along with environmental impacts of these fuels, will determine the success or failure of a transition away from US petroleum dependence. This dissertation evaluates infrastructure requirements for ethanol and hydrogen as alternative fuels. It begins with an economic case study for ethanol and hydrogen in Iowa. A large-scale linear optimization model is developed to estimate average transportation distances and costs for nationwide ethanol production and distribution systems. Environmental impacts of transportation in the ethanol life cycle are calculated using the Economic Input-Output Life Cycle Assessment (EIO-LCA) model. An EIO-LCA Hybrid method is developed to evaluate impacts of future fuel production technologies. This method is used to estimate emissions for hydrogen production and distribution pathways. Results from the ethanol analyses indicate that the ethanol transportation cost component is significant and is the most variable. Costs for ethanol sold in the Midwest, near primary production centers, are estimated to be comparable to or lower than gasoline costs. Along with a wide range of transportation costs, environmental impacts for ethanol range over three orders of magnitude, depending on the transport required. As a result, intensive ethanol use should be encouraged near ethanol production areas. Fossil fuels are likely to remain the primary feedstock sources for hydrogen production in the near- and mid-term. Costs and environmental impacts of hydrogen produced from natural gas and transported by pipeline are comparable to gasoline. However, capital costs are prohibitive and a significant increase in natural gas demand will likely raise both prices and import quantities. There is an added challenge of developing hydrogen fuel cell vehicles at costs comparable to conventional vehicles. Two models developed in this thesis have proven useful for evaluating alternative fuels. The linear programming models provide representative estimates of distribution distances for regional fuel use, and thus can be used to estimate costs and environmental impacts. The EIO-LCA Hybrid method is useful for estimating emissions from hydrogen production. This model includes upstream impacts in the LCA, and has the benefit of lower time and data requirements than a process-based LCA.
Improving DLA Aviation Engineering’s Support to its Customers and the DoD Supply Chain
2014-10-01
costs and first article test costs) and (2) DLA supply chain responsiveness as measured in terms of the days required to satisfy unfilled orders (UFOs) ...135,000 UFOs or requisitions at any time. This number of UFOs overstates the magnitude of the backorder problem since many of these backorders are...backorders or long term unfilled orders (UFOs). This can have serious implications for the materiel readiness of those weapon systems that utilize
NASA Astrophysics Data System (ADS)
Loomis, John
2003-04-01
Past recreation studies have noted that on-site or visitor intercept surveys are subject to over-sampling of avid users (i.e., endogenous stratification) and have offered econometric solutions to correct for this. However, past papers do not estimate the empirical magnitude of the bias in benefit estimates with a real data set, nor do they compare the corrected estimates to benefit estimates derived from a population sample. This paper empirically examines the magnitude of the per-trip recreation benefit bias by comparing estimates from an on-site river visitor intercept survey to a household survey. The difference in average benefits is quite large, with the on-site visitor survey yielding $24 per day trip, while the household survey yields $9.67 per day trip. A simple econometric correction for endogenous stratification in our count data model lowers the benefit estimate to $9.60 per day trip, a mean value nearly identical and not statistically different from the household survey estimate.
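The correction alluded to has a well-known simple form for Poisson trip models (Shaw 1988): estimating the model on trips minus one removes the on-site size bias, because the size-biased version of a Poisson count is exactly 1 + Poisson. The sketch below simulates this; data and specification are illustrative, not the paper's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
cost = rng.uniform(5, 50, n)            # travel cost per trip (fabricated)
lam = np.exp(2.0 - 0.05 * cost)         # true population trip rate
# On-site sampling is size-biased; for a Poisson population the observed
# trip counts are distributed as 1 + Poisson(lam).
trips = 1 + rng.poisson(lam)

X = sm.add_constant(cost)
naive = sm.GLM(trips, X, family=sm.families.Poisson()).fit()
fixed = sm.GLM(trips - 1, X, family=sm.families.Poisson()).fit()
print("naive cost coefficient    :", naive.params[1])  # biased toward zero
print("corrected cost coefficient:", fixed.params[1])  # approx -0.05
```

In semilog travel-cost models the per-trip consumer surplus is -1/(cost coefficient), so an attenuated coefficient inflates the benefit estimate, consistent with the drop from $24 to $9.60 reported above.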
Patel, Sanjay V; Cemalovic, Sabina; Tolley, William K; Hobson, Stephen T; Anderson, Ryan; Fruhberger, Bernd
2018-03-23
The effect of thermal treatments on the benzene vapor sensitivity of polyethylene (co-)vinylacetate (PEVA)/graphene nanocomposite threads used as chemiresistive sensors was investigated using DC resistance measurements, differential scanning calorimetry (DSC), and scanning electron microscopy (SEM). These flexible threads are being developed as low-cost, easy-to-measure chemical sensors that can be incorporated into smart clothing or disposable sensing patches. Chemiresistive threads were solution-cast or extruded from PEVA and <10% graphene nanoplatelets (by mass) in toluene. Threads were annealed at various temperatures and showed up to 2 orders of magnitude decrease in resistance with successive anneals. Threads heated to ≥80 °C showed improved limits of detection, resulting from improved signal-to-noise, when exposed to benzene vapor in dry air. In addition, annealing increased the speed of response and recovery upon exposure to and removal of benzene vapor. DSC results showed that the presence of graphene raises the freezing point, and may allow greater crystallinity, in the nanocomposite after annealing. SEM images confirm increased surface roughness/area, which may account for the increased response speed after annealing. Benzene vapor detection at 5 ppm is demonstrated with limits of detection estimated to be as low as 1.5 ppm, reflecting an order of magnitude improvement over unannealed threads.
NASA Astrophysics Data System (ADS)
Lizurek, Grzegorz; Marmureanu, Alexandru; Wiszniowski, Jan
2017-03-01
Bucharest, with a population of approximately 2 million people, has suffered damage from earthquakes in the Vrancea seismic zone, located about 170 km away at depths of 80-200 km. Consequently, an earthquake early warning system (Bucharest Rapid earthquake Early Warning System, or BREWS) was constructed to provide some warning of impending shaking from large Vrancea earthquakes. To provide quick estimates of magnitude, the seismic moment is first determined from P-waves and a moment magnitude is then derived from it. However, this magnitude may not be consistent with previous magnitude estimates from the Romanian Seismic Network. This paper introduces the algorithm using P-wave spectral levels and compares its results with catalog estimates. The testing procedure used waveforms from about 90 events with catalog magnitudes from 3.5 to 5.4. Corrections to the P-wave-determined magnitudes, based on the dominant mechanism of intermediate-depth events, were tested on the November 22, 2014, M5.6 event and the October 17 M6 event. The corrections worked well but revealed an overestimation of the average magnitude of about 0.2 magnitude units for shallow events (H < 60 km). The P-wave spectral approach allows relatively fast magnitude estimates for use in BREWS. The average correction, which assumes the most common focal mechanism for the radiation pattern coefficient, may lead to an overestimation of about 0.2 magnitude units for shallow events; for intermediate-depth events of about M6, the resulting Mw is underestimated by about 0.1-0.2. We conclude that our P-wave spectral approach is sufficiently robust for the needs of BREWS for both shallow and intermediate-depth events.
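For orientation, the standard chain from a P-wave displacement spectral plateau to Mw can be sketched as follows; the point-source relation is textbook, but all parameter values (density, velocity, radiation-pattern coefficient) are illustrative assumptions, not BREWS calibration values.

```python
import math

def moment_from_spectrum(omega0, r, rho=3300.0, alpha=8000.0, rad=0.52,
                         free_surface=2.0):
    """M0 = 4*pi*rho*alpha^3 * r * omega0 / (rad * free_surface).
    omega0: low-frequency displacement spectral level (m*s); r: distance (m);
    rad: an average P radiation-pattern coefficient (assumed)."""
    return 4 * math.pi * rho * alpha**3 * r * omega0 / (rad * free_surface)

def mw(m0):
    """Standard moment magnitude, M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

m0 = moment_from_spectrum(omega0=2.0e-7, r=170e3)
print(f"M0 = {m0:.2e} N*m, Mw = {mw(m0):.2f}")
# A depth-dependent bias like the one reported (about 0.2 units for shallow
# events) could then be applied as a simple additive correction.
```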
Interval Estimation of Seismic Hazard Parameters
NASA Astrophysics Data System (ADS)
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2017-03-01
The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function in the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to interval estimation of the seismic hazard functions, relative to an approach that neglects the uncertainty of the mean activity rate estimates, were studied using Monte Carlo simulations and two real-dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto that function.
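Under the Poisson occurrence and Gutenberg-Richter assumptions named in the abstract, the two hazard functions take a closed form; a minimal sketch with illustrative numbers:

```python
import math

def exceedance_prob(rate, b, m, m0, t):
    """P(at least one event with magnitude >= m in t years).
    rate: mean activity rate for M >= m0; F(m) = 1 - 10**(-b*(m - m0))."""
    tail = 10 ** (-b * (m - m0))          # 1 - F(m)
    return 1 - math.exp(-rate * t * tail)

def mean_return_period(rate, b, m, m0):
    return 1.0 / (rate * 10 ** (-b * (m - m0)))

rate, b, m0 = 5.0, 1.0, 3.0               # 5 events/yr with M >= 3, b = 1
print(exceedance_prob(rate, b, m=5.0, m0=m0, t=10))   # ~0.39
print(mean_return_period(rate, b, m=5.0, m0=m0))      # 20 years
```

The paper's contribution is to put interval estimates around these point values by propagating the uncertainty of the rate and of the magnitude distribution through such formulas.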
NASA Technical Reports Server (NTRS)
Birman, Kenneth; Schiper, Andre; Stephenson, Pat
1990-01-01
A new protocol is presented that efficiently implements a reliable, causally ordered multicast primitive and is easily extended into a totally ordered one. Intended for use in the ISIS toolkit, it offers a way to bypass the most costly aspects of ISIS while benefiting from virtual synchrony. The facility scales with bounded overhead. Measured speedups of more than an order of magnitude were obtained when the protocol was implemented within ISIS. One conclusion is that systems such as ISIS can achieve performance competitive with the best existing multicast facilities--a finding contradicting the widespread concern that fault-tolerance may be unacceptably costly.
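The causal-ordering core of protocols in this family can be stated as a vector-timestamp delivery rule; the following generic sketch (not the paper's exact protocol) shows the deliverability test and a hold-back queue flush.

```python
# A message from sender j with vector timestamp vt is deliverable at a
# process with vector clock vc when it is the next message from j and every
# message it causally depends on has already been delivered.
def deliverable(vt, vc, sender):
    return vt[sender] == vc[sender] + 1 and all(
        vt[k] <= vc[k] for k in range(len(vc)) if k != sender)

def deliver(vc, vt, sender, pending):
    """Deliver one message, then flush any pending messages it unblocks."""
    vc[sender] = vt[sender]
    progress = True
    while progress:
        progress = False
        for (pvt, psender) in list(pending):
            if deliverable(pvt, vc, psender):
                pending.remove((pvt, psender))
                vc[psender] = pvt[psender]
                progress = True
    return vc

vc = [0, 0, 0]                      # receiver's vector clock, 3 processes
pending = [((1, 1, 0), 1)]          # msg from P1 depends on P0's first msg
print(deliverable((1, 1, 0), vc, 1))        # False: must wait for P0
vc = deliver(vc, (1, 0, 0), 0, pending)     # P0's message arrives
print(vc)                                    # [1, 1, 0]: both delivered
```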
Deep mantle: Enriched carbon source detected
NASA Astrophysics Data System (ADS)
Barry, Peter H.
2017-09-01
Estimates of carbon in the deep mantle vary by more than an order of magnitude. Coupled volcanic CO2 emission data and magma supply rates reveal a carbon-rich mantle plume source region beneath Hawai'i with 40% more carbon than previous estimates.
NASA Astrophysics Data System (ADS)
Olsen, S.; Zaliapin, I.
2008-12-01
We establish a positive correlation between the local spatio-temporal fluctuations of the earthquake magnitude distribution and the occurrence of regional earthquakes. To accomplish this goal, we develop a sequential Bayesian statistical estimation framework for the b-value (slope of the Gutenberg-Richter exponential approximation to the observed magnitude distribution) and for the ratio a(t) between the earthquake intensities in two non-overlapping magnitude intervals. The time-dependent dynamics of these parameters is analyzed using Markov Chain Models (MCM). The main advantage of this approach over the traditional window-based estimation is its "soft" parameterization, which allows one to obtain stable results with realistically small samples. We furthermore discuss a statistical methodology for establishing lagged correlations between continuous and point processes. The developed methods are applied to the observed seismicity of California, Nevada, and Japan on different temporal and spatial scales. We report oscillatory dynamics of the estimated parameters, and find that the detected oscillations are positively correlated with the occurrence of large regional earthquakes, as well as with small events with magnitudes as low as 2.5. The reported results have important implications for further development of earthquake prediction and seismic hazard assessment methods.
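A window-based maximum-likelihood b-value tracker, the baseline the authors improve upon, can be sketched in a few lines; the Bayesian/Markov-chain machinery of the paper is not reproduced here, and the magnitudes are synthetic.

```python
import numpy as np

def b_mle(mags, m_c, dm=0.1):
    """Aki (1965) maximum-likelihood b-value with Utsu's binning
    correction; the standard 1-sigma uncertainty is b / sqrt(n)."""
    m = mags[mags >= m_c]
    b = np.log10(np.e) / (m.mean() - (m_c - dm / 2))
    return b, b / np.sqrt(len(m))

rng = np.random.default_rng(3)
# synthetic Gutenberg-Richter magnitudes with true b = 1, binned to 0.1
mags = np.round(2.5 + rng.exponential(1 / np.log(10), 5000), 1)
for i in range(0, 5000, 1000):          # crude sliding-window b(t)
    b, sb = b_mle(mags[i:i + 1000], m_c=2.5)
    print(f"window {i // 1000}: b = {b:.2f} +/- {sb:.2f}")
```

The uncertainty term shows why small windows give unstable estimates, which is the motivation for the "soft" sequential parameterization described above.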
Economic value of U.S. fossil fuel electricity health impacts.
Machol, Ben; Rizk, Sarah
2013-02-01
Fossil fuel energy has several externalities not accounted for in the retail price, including associated adverse human health impacts, future costs from climate change, and other environmental damages. Here, we quantify the economic value of health impacts associated with PM(2.5) and PM(2.5) precursors (NO(x) and SO(2)) on a per kilowatt hour basis. We provide figures based on state electricity profiles, national averages and fossil fuel type. We find that the economic value of improved human health associated with avoiding emissions from fossil fuel electricity in the United States ranges from a low of $0.005-$0.013/kWh in California to a high of $0.41-$1.01/kWh in Maryland. When accounting for the adverse health impacts of imported electricity, the California figure increases to $0.03-$0.07/kWh. Nationally, the average economic value of health impacts associated with fossil fuel usage is $0.14-$0.35/kWh. For coal, oil, and natural gas, respectively, associated economic values of health impacts are $0.19-$0.45/kWh, $0.08-$0.19/kWh, and $0.01-$0.02/kWh. For coal and oil, these costs are larger than the typical retail price of electricity, demonstrating the magnitude of the externality. When the economic value of health impacts resulting from air emissions is considered, our analysis suggests that on average, U.S. consumers of electricity should be willing to pay $0.24-$0.45/kWh for alternatives such as energy efficiency investments or emission-free renewable sources that avoid fossil fuel combustion. The economic value of health impacts is approximately an order of magnitude larger than estimates of the social cost of carbon for fossil fuel electricity. In total, we estimate that the economic value of health impacts from fossil fuel electricity in the United States is $361.7-886.5 billion annually, representing 2.5-6.0% of the national GDP. Published by Elsevier Ltd.
A Miniaturized On-Chip Colorimeter for Detecting NPK Elements
Liu, Rui-Tao; Tao, Lu-Qi; Liu, Bo; Tian, Xiang-Guang; Mohammad, Mohammad Ali; Yang, Yi; Ren, Tian-Ling
2016-01-01
Recently, precision agriculture has become a globally attractive topic. As one of the most important factors, the soil nutrients play an important role in estimating the development of precision agriculture. Detecting the content of nitrogen, phosphorus and potassium (NPK) elements more efficiently is one of the key issues. In this paper, a novel chip-level colorimeter was fabricated to detect the NPK elements for the first time. A light source–microchannel photodetector in a sandwich structure was designed to realize on-chip detection. Compared with a commercial colorimeter, all key parts are based on MEMS (Micro-Electro-Mechanical System) technology so that the volume of this on-chip colorimeter can be minimized. Besides, less error and high precision are achieved. The cost of this colorimeter is two orders of magnitude less than that of a commercial one. All these advantages enable a low-cost and high-precision sensing operation in a monitoring network. The colorimeter developed herein has bright prospects for environmental and biological applications. PMID:27527177
NASA Astrophysics Data System (ADS)
Marrero, J. M.; García, A.; Llinares, A.; Rodriguez-Losada, J. A.; Ortiz, R.
2012-03-01
One of the critical issues in managing volcanic crises is making the decision to evacuate a densely populated region. To take a decision of such importance it is essential to estimate the cost in lives for each of the expected eruptive scenarios. One of the tools that assist in estimating the number of potential fatalities for such decision-making is the calculation of FN-curves. Here the FN-curve is a graphical representation that relates the frequency of the different hazards to be expected for a particular volcano or volcanic area to the number of potential fatalities expected for each event if the zone of impact is not evacuated. In this study we propose a method for assessing the impact that a possible eruption from the Tenerife Central Volcanic Complex (CVC) would have on the population at risk. Factors taken into account include the spatial probability of the eruptive scenarios (susceptibility) and the temporal probability of the magnitudes of the eruptive scenarios. For each point or cell of the susceptibility map with the greatest probability, a series of probability-scaled hazard maps is constructed for the whole range of magnitudes expected. The number of potential fatalities is obtained from the intersection of the hazard maps with the spatial map of population distribution. The results show that the Emergency Plan for Tenerife must provide for the evacuation of more than 100,000 persons.
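The FN-curve itself is just an exceedance-frequency curve over event fatalities; a minimal construction with fabricated scenario data:

```python
import numpy as np

events = [
    # (annual frequency of scenario, potential fatalities if not evacuated)
    (1e-2, 50), (5e-3, 300), (1e-3, 2000), (2e-4, 20000), (5e-5, 120000),
]
fatal = np.array([n for _, n in events], dtype=float)
freq = np.array([f for f, _ in events])

order = np.argsort(fatal)
# cumulative frequency of events causing N or more fatalities
F = np.cumsum(freq[order][::-1])[::-1]
for n, f in zip(fatal[order], F):
    print(f"N >= {n:>8.0f}: F = {f:.2e} per year")
```

In the study, each scenario's fatality count would come from intersecting the probability-scaled hazard maps with the population distribution rather than from a fixed list.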
Bousefsaf, F; Maaoui, C; Pruski, A
2016-11-25
Vasoconstriction and vasodilation phenomena reflect the relative changes in the vascular bed and induce characteristic modifications in the pulse wave magnitude. Webcams are remote sensors that can be employed to measure the pulse wave and thereby compute the pulse frequency. Our objective was to record and analyze the pulse wave signal with a low-cost webcam, extract the amplitude information, and assess the vasomotor activity of the participant. Photoplethysmographic signals obtained from a webcam are analyzed through a continuous wavelet transform. The performance of the proposed filtering technique was evaluated against approved contact probes on a set of 12 healthy subjects after they performed a short but intense physical exercise. During the rest period, a cutaneous vasodilation is observable. High degrees of correlation between the webcam and a reference sensor were obtained. Webcams are low-cost and non-contact devices that can be used to reliably estimate both heart rate and peripheral vasomotor activity, notably during physical exertion.
Planets as background noise sources in free space optical communications
NASA Technical Reports Server (NTRS)
Katz, J.
1986-01-01
Background noise generated by planets is the dominant noise source in most deep space direct detection optical communications systems. Earlier approximate analyses of this problem are based on simplified blackbody calculations and can yield results that may be inaccurate by up to an order of magnitude. Various other factors that need to be taken into consideration in order to obtain a more accurate estimate of the noise magnitude, such as the phase angle and the actual spectral dependence of the planet albedo, are examined.
Slight, Sarah P; Seger, Diane L; Franz, Calvin; Wong, Adrian; Bates, David W
2018-06-22
To estimate the national cost of ADEs resulting from inappropriate medication-related alert overrides in the U.S. inpatient setting. We used three different regression models (Basic, Model 1, Model 2) with model inputs taken from the medical literature. A random sample of 40,990 adult inpatients at the Brigham and Women's Hospital (BWH) in Boston with a total of 1,639,294 medication orders was taken. We extrapolated BWH medication orders using 2014 National Inpatient Sample (NIS) data. Using three regression models, we estimated that 29.7 million adult inpatient discharges in 2014 resulted in between 1.02 billion and 1.07 billion medication orders, which in turn generated between 75.1 million and 78.8 million medication alerts, respectively. Taking the basic model (78.8 million), we estimated that 5.5 million medication-related alerts might have been inappropriately overridden, resulting in approximately 196,600 ADEs nationally. This was projected to cost between $871 million and $1.8 billion for treating preventable ADEs. We also estimated that clinicians and pharmacists would have jointly spent 175,000 hours responding to 78.8 million alerts with an opportunity cost of $16.9 million. These data suggest that further optimization of hospitals' computerized provider order entry systems and their associated clinical decision support is needed and would result in substantial savings. We have erred on the side of caution in developing this range, taking two conservative cost estimates for a preventable ADE that did not include malpractice or litigation costs, or costs of injuries to patients.
Hanford Tank Farm Vapors Abatement Technology and Vendor Proposals Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burns, H. H.; Farrar, M. E.; Fink, S. D.
2016-09-20
Suspected chemical vapor releases from the Hanford nuclear waste tank system pose concerns for worker exposure. Washington River Protection Solutions (WRPS) contracted the Savannah River National Laboratory (SRNL) to explore abatement technologies and strategies to remediate the vapors emitted through the ventilation system. In response, SRNL conducted an evaluation of technologies to abate, or reduce, vapor emissions to below 10% of the recognized occupational exposure limits (OELs). The evaluation included a review of published literature and a broadly communicated Request for Information to commercial vendors through a Federal Business Opportunities (Fed Biz Opps) web posting. In addition, SRNL conducted a workshop and post-workshop conference calls with interested suppliers (vendors) to assess proposals of relevant technologies. This report reviews applicable technologies and summarizes the approaches proposed by the vendors who participated in the workshop and teleconference interviews. In addition, the report evaluates the estimated performance of the individual technologies for the various classes of chemical compounds present in the Hanford Chemicals of Potential Concern (COPCs) list. Similarly, the report provides a relative evaluation of the vendor proposed approaches against criteria of: technical feasibility (and maturity), design features, operational considerations, secondary waste generation, safety/regulatory, and cost / schedule. These rough order-of-magnitude (ROM) cost estimates are intended to provide a comparison basis between technologies and are not intended to be actual project estimates.
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1988-01-01
Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical data base of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.
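A minimal example of the weight-cost relationship at the heart of such parametric methods is a power-law cost estimating relationship (CER) fitted in log-log space; the data below are fabricated, and the report's other drivers (quantity, culture, inheritance, time) are omitted.

```python
import numpy as np

weight = np.array([120.0, 450, 900, 2200, 5000])   # kg
cost = np.array([15.0, 38, 60, 110, 200])          # $M, historical programs

# fit log(cost) = b*log(weight) + log(a), i.e. cost = a * weight**b
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)
print(f"cost ~ {a:.2f} * weight^{b:.2f}")
print("predicted cost for 1500 kg: $%.0f M" % (a * 1500**b))
```

Testing such a fitted relationship statistically against a historical database of programs, as the abstract describes, is what separates a usable CER from a curve drawn through noise.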
Sousa, F A; da Silva, J A
2000-04-01
The purpose of this study was to verify the relationship between professional prestige scaled through magnitude estimation and professional prestige scaled through estimation of the number of minimum salaries attributed to professions as a function of their prestige in society. Results showed: (1) the relationship between the magnitude estimates and the estimates of the number of minimum salaries attributed to the professions as a function of their prestige is characterized by a power function with an exponent lower than 1.0; (2) the orderings of the professions by degree of prestige resulting from different experiments involving different samples of subjects are highly concordant (W = 0.85; p < 0.001), considering the modality used as a number (magnitude estimation of minimum salaries).
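The reported relation is a Stevens-type power function; fitting its exponent by log-log regression looks like this (values fabricated for illustration):

```python
import numpy as np

# psi = k * phi**beta, relating the two response modalities:
# magnitude estimates of prestige vs estimated minimum salaries.
prestige = np.array([5.0, 12, 20, 35, 60, 100])       # magnitude estimates
salaries = np.array([2.1, 3.8, 5.2, 7.0, 9.8, 13.5])  # minimum salaries

beta, log_k = np.polyfit(np.log(prestige), np.log(salaries), 1)
print(f"exponent beta = {beta:.2f} (< 1.0, a compressive relation)")
```

An exponent below 1.0, as the study reports, means salary judgments grow more slowly than the prestige judgments they track.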
Subramanian, Sujha; Hoover, Sonja; Wagner, Joann L; Donovan, Jennifer L; Kanaan, Abir O; Rochon, Paula A; Gurwitz, Jerry H; Field, Terry S
2012-01-01
In a randomized trial of a clinical decision support system for drug prescribing for residents with renal insufficiency in a large long-term care facility, analyses were conducted to estimate the system's immediate, direct financial impact. We determined the costs that would have been incurred if drug orders that triggered the alert system had actually been completed compared to the costs of the final submitted orders and then compared intervention units to control units. The costs incurred by additional laboratory testing that resulted from alerts were also estimated. Drug orders were conservatively assigned a duration of 30 days of use for a chronic drug and 10 days for antibiotics. It was determined that there were modest reductions in drug costs, partially offset by an increase in laboratory-related costs. Overall, there was a reduction in direct costs (US$1391.43, net 7.6% reduction). However, sensitivity analyses based on alternative estimates of duration of drug use suggested a reduction as high as US$7998.33 if orders for non-antibiotic drugs were assumed to be continued for 180 days. The authors conclude that the immediate and direct financial impact of a clinical decision support system for medication ordering for residents with renal insufficiency is modest and that the primary motivation for such efforts must be to improve the quality and safety of medication ordering.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mauk, F.J.; Christensen, D.H.
1980-09-01
Probabilistic estimations of earthquake detection and location capabilities for the states of Illinois, Indiana, Kentucky, Ohio and West Virginia are presented in this document. The algorithm used in these epicentrality and minimum-magnitude estimations is a version of the program NETWORTH by Wirth, Blandford, and Husted (DARPA Order No. 2551, 1978) which was modified for local array evaluation at the University of Michigan Seismological Observatory. Estimations of earthquake detection capability for the years 1970 and 1980 are presented in four regional minimum m/sub b/ magnitude contour maps. Regional 90% confidence error ellipsoids are included for m/sub b/ magnitude events from 2.0 through 5.0 at 0.5 m/sub b/ unit increments. The close agreement between these predicted epicentral 90% confidence estimates and the calculated error ellipses associated with actual earthquakes within the studied region suggest that these error determinations can be used to estimate the reliability of epicenter location. 8 refs., 14 figs., 2 tabs.
What is Swanson's Law & why Should you Care?
NASA Astrophysics Data System (ADS)
Hansen, S. F.; Partain, L.; Hansen, R. T.
2015-12-01
For 40 years the cost of Solar Photovoltaics (PV) has decreased by a factor of 2 for every 10X increase in its cumulative-installed electric-generating capacity (CC). The straight line, log-log, experimental and historical data fit of cost versus CC is called Swanson's Law for its accurate fit of the rapid decrease in cost over 6 orders of magnitude increase in CC with time. Now Solar PV is cost competitive with coal and natural gas in some regions and provides 1% of the world's electric generating capacity. The Law can next be tested to predict the future. With 2 more orders of magnitude increase in CC, Solar PV could provide 10% and then 100% of the world's current electric capacity, as the Law projects costs falling by another factor of 4. For the last 10 years CC has doubled every 2 years under strong public policy support. If this doubling and policy support are extended, an order-of-magnitude increase (10X) will occur every 6.6 yrs and installed solar PV capacity could reach 100% of the current world's consumption in 13 years or by 2028. The world's solar resource, accessible indefinitely and yearly to PV, is over 1000 times current consumption while coal, uranium, petroleum and natural gas are finite, limited resources, destined to be depleted within our lifetimes or the lives of our children or grandchildren. In 2015 a 56 MW fossil fueled power plant was shut down at Stanford University and replaced with Solar PV and geothermal to save money and eliminate greenhouse gas emissions. If more such shut downs could follow this same 2 year doubling time as Solar PV, then the replacements could exceed 14,000 within 26 years or by 2041, including all 7000 current coal-fired plants plus an equivalent number fueled by uranium, petroleum and natural gas. These shut-downs, including all current fossil-fueled-power plants, could start reversing the human-generated, greenhouse-gas-induced, global climate changes by 2041.
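The arithmetic of the law as stated can be checked directly: a cost halving per 10x increase in cumulative capacity (CC) means cost scales as CC^(-log10 2), and a 2-year doubling time implies one order of magnitude every 2*log2(10) ≈ 6.6 years.

```python
import math

def cost_after_growth(cost0, cc_multiple):
    """Swanson's law as stated: cost halves for each 10x growth in CC."""
    return cost0 * cc_multiple ** (-math.log10(2))

print("years per 10x of CC:", 2 * math.log2(10))                  # ~6.6
print("cost factor after 100x CC:", cost_after_growth(1.0, 100))  # 0.25
# From 1% of world capacity, two more orders of magnitude reach 100%:
print("years to 100%:", 2 * math.log2(100))                       # ~13.3
```

These three lines reproduce the abstract's projections: a factor-of-4 cost drop over two more orders of magnitude of CC, and roughly 13 years from 1% to 100% of current consumption under sustained 2-year doubling.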
A first-order seismotectonic regionalization of Mexico for seismic hazard and risk estimation
NASA Astrophysics Data System (ADS)
Zúñiga, F. Ramón; Suárez, Gerardo; Figueroa-Soto, Ángel; Mendoza, Avith
2017-11-01
The purpose of this work is to define a seismic regionalization of Mexico for seismic hazard and risk analyses. This seismic regionalization is based on seismic, geologic, and tectonic characteristics. To this end, a seismic catalog was compiled using the most reliable sources available. The catalog was made homogeneous in magnitude in order to avoid differences in the way this parameter is reported by various agencies. Instead of using a linear regression to convert from m b and M d to M s or M w using only events for which estimates of both magnitudes are available (i.e., paired data), we used the frequency-magnitude relations, relying on the a and b values of the Gutenberg-Richter relation. The seismic regions are divided into three main categories: seismicity associated with the subduction process along the Pacific coast of Mexico, in-slab events within the down-going COC and RIV plates, and crustal seismicity associated with various geologic and tectonic regions. In total, 18 seismic regions were identified and delimited. For each, the a and b values of the Gutenberg-Richter relation were determined using maximum likelihood estimation. The a and b parameters were repeatedly estimated as a function of time for each region in order to confirm their reliability and stability. The recurrence times predicted by the resulting Gutenberg-Richter relations are compared with the observed recurrence times of the larger events in each region for both historical and instrumental earthquakes.
Prioritizing Chemicals and Data Requirements for Screening-Level Exposure and Risk Assessment
Brown, Trevor N.; Wania, Frank; Breivik, Knut; McLachlan, Michael S.
2012-01-01
Background: Scientists and regulatory agencies strive to identify chemicals that may cause harmful effects to humans and the environment; however, prioritization is challenging because of the large number of chemicals requiring evaluation and limited data and resources. Objectives: We aimed to prioritize chemicals for exposure and exposure potential and obtain a quantitative perspective on research needs to better address uncertainty in screening assessments. Methods: We used a multimedia mass balance model to prioritize > 12,000 organic chemicals using four far-field human exposure metrics. The propagation of variance (uncertainty) in key chemical information used as model input for calculating exposure metrics was quantified. Results: Modeled human concentrations and intake rates span approximately 17 and 15 orders of magnitude, respectively. Estimates of exposure potential using human concentrations and a unit emission rate span approximately 13 orders of magnitude, and intake fractions span 7 orders of magnitude. The actual chemical emission rate contributes the greatest variance (uncertainty) in exposure estimates. The human biotransformation half-life is the second greatest source of uncertainty in estimated concentrations. In general, biotransformation and biodegradation half-lives are greater sources of uncertainty in modeled exposure and exposure potential than chemical partition coefficients. Conclusions: Mechanistic exposure modeling is suitable for screening and prioritizing large numbers of chemicals. By including uncertainty analysis and uncertainty in chemical information in the exposure estimates, these methods can help identify and address the important sources of uncertainty in human exposure and risk assessment in a systematic manner. PMID:23008278
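The variance-propagation idea can be illustrated with a Monte Carlo pass through a one-box steady-state model, a stand-in for the paper's multimedia mass balance model; the distributions and constants below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# lognormal inputs: emission rate (kg/h), degradation half-life (h)
emission = rng.lognormal(mean=np.log(10.0), sigma=1.5, size=n)
halflife = rng.lognormal(mean=np.log(500.0), sigma=1.0, size=n)

V = 1e9                     # box volume, m^3 (fixed for illustration)
k = np.log(2) / halflife    # first-order loss rate, 1/h
conc = emission / (V * k)   # steady-state concentration, kg/m^3

logc = np.log10(conc)
print(f"concentration spans {logc.max() - logc.min():.1f} orders of magnitude")
# variance contributions on the log scale (inputs independent here)
print("var from emission :", np.var(np.log10(emission)))
print("var from half-life:", np.var(np.log10(halflife)))
print("total log-variance:", np.var(logc))
```

Because the model is multiplicative, the log-variances of independent inputs add, which is why the emission rate (the widest distribution here, as in the paper) dominates the output uncertainty.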
Connected, disconnected and strange quark contributions to HVP
NASA Astrophysics Data System (ADS)
Bijnens, Johan; Relefors, Johan
2016-11-01
We calculate all neutral vector two-point functions in Chiral Perturbation Theory (ChPT) to two-loop order and use these to estimate the ratio of disconnected to connected contributions as well as contributions involving the strange quark. We extend the ratio of -1/10 derived earlier in two flavour ChPT at one-loop order to a large part of the higher order contributions and discuss corrections to it. Our final estimate of the ratio disconnected to connected is negative and a few % in magnitude.
The value of knowing better - Losses from natural hazards
NASA Astrophysics Data System (ADS)
Mysiak, J.; Galarraga, I.,; Garrido, A.; Interwies, E.; van Bers, C.; Vandenberghe, V.; Farinosi, F.; Foudi, S.; Görlitz, S.; Hernández-Mora, N.; Gil, M.; Grambow, C.
2012-04-01
In a highly emotional speech delivered last year after a series of disaster strikes, Julia Gillard, the Australian PM, noted that Australia had watched in horror as day after day a new chapter in natural disaster history was written. And so did the whole world. 2011 went on to become the costliest year in terms of natural hazard losses in recent history, with total costs topping 380 billion US dollars. Almost half of the insured losses were caused by a single event - the Fukushima Dai'ichi nuclear power plant accident triggered by a tsunami that followed an earthquake of moment magnitude (Mw) 9.0. The Fukushima disaster has taught a costly lesson, once again: what you least expect, happens. The estimates of losses inflicted by natural hazards are, to put it mildly, incomplete and hardly representative of the ripple effects on the regional and global economy, and of the wider effects on social fabric, wellbeing and ecosystems that are notoriously difficult to monetise. The knowledge of the full magnitude of losses is not an end in itself. The economics of disasters is an emerging academic field, struggling to uncover the patterns of vulnerability to natural hazards and provide insights useful for designing effective disaster risk reduction measures and policies. Yet the costly lessons learned are often neglected. In this paper we analyse selected significantly damaging events caused by hydrometeorological and climatological hazards (floods and droughts) in four river basins/countries: Ebro/Spain, Po/Italy, Weser/Germany and Scheldt/Flanders-Belgium. Our analysis is focussed on identifying the gaps in reported damage estimates, and on conducting additional original research and assessment that contribute to filling those gaps. In the case of drought, all the reference cases except the Ebro refer to the exceptionally hot and dry summer of 2003. The drought event examined in the Ebro river basin is the prolonged period of deficient precipitation between 2004 and 2008. The flood reference cases are more uniformly distributed both intra- and interannually. They include the Jan-Feb 2003 and Mar-Apr 2007 floods in the Ebro basin, the Oct 2000 flood in the Po basin, the Jul 2002 flood in the Weser basin and the Nov 2010 flood in the Scheldt. We have identified significant knowledge gaps in the current accounts of the impacts inflicted by these disasters. Almost no information is available about intangible, indirect and environmental costs. The structural damage is only partly examined. The existing assessment studies are based either on self-reported losses of the affected subjects or on methodologies yielding divergent results about the extent (or even order of magnitude) of the losses suffered. The studies are rarely subjected to a critical analysis and quality check. Uncertainty surrounding the damage estimates is either omitted or reported only as a range of the likely magnitude of the disaster costs. Our analysis offers a systematic review of the damage across the affected sectors and communities. A number of assessment techniques were applied and their pros and cons discussed. The paper highlights the value of an in-depth assessment of significantly damaging events for a better understanding of vulnerability, which is likely to be amplified as a result of anthropogenic climate change and economic development in hazard-prone areas.
Hanigan, Ivan C; Williamson, Grant J; Knibbs, Luke D; Horsley, Joshua; Rolfe, Margaret I; Cope, Martin; Barnett, Adrian G; Cowie, Christine T; Heyworth, Jane S; Serre, Marc L; Jalaludin, Bin; Morgan, Geoffrey G
2017-11-07
Exposure to traffic-related nitrogen dioxide (NO2) air pollution is associated with adverse health outcomes. Average pollutant concentrations from fixed monitoring sites are often used to estimate exposures for health studies; however, these can be imprecise due to the difficulty and cost of spatial modeling at the resolution of neighborhoods (a scale of tens of meters) rather than at a coarse scale (several kilometers). The objective of this study was to derive improved estimates of neighborhood NO2 concentrations by blending measurements with modeled predictions in Sydney, Australia (a low pollution environment). We implemented the Bayesian maximum entropy approach to blend data with uncertainty defined using informative priors. We compiled NO2 data from fixed-site monitors, chemical transport models, and satellite-based land use regression models to estimate neighborhood annual average NO2. The spatial model produced a posterior probability density function of estimated annual average concentrations that spanned an order of magnitude, from 3 to 35 ppb. Validation using independent data showed improvement, with a root mean squared error improvement of 6% compared with the land use regression model and 16% over the chemical transport model. These estimates will be used in studies of health effects and should minimize misclassification bias.
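A much-simplified stand-in for the blending step is precision-weighted averaging of the candidate estimates; the full Bayesian maximum entropy method additionally models space-time covariance. All numbers below are illustrative.

```python
import numpy as np

# combine NO2 estimates for one neighborhood: chemical transport model,
# land-use regression, and a monitor-informed prior, each with an assumed
# 1-sigma error; inverse-variance weights give the blended posterior
estimates = np.array([14.0, 9.0, 11.0])   # ppb: CTM, LUR, monitor prior
sigmas = np.array([4.0, 2.0, 1.5])        # assumed uncertainties

w = 1.0 / sigmas**2
post_mean = np.sum(w * estimates) / np.sum(w)
post_sd = np.sqrt(1.0 / np.sum(w))
print(f"blended NO2 = {post_mean:.1f} ppb (sd {post_sd:.1f})")
```

The blended uncertainty is smaller than any single input's, which is the mechanism behind the RMSE improvements reported over either model alone.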
Michaelidis, Constantinos I; Fine, Michael J; Lin, Chyongchiou Jeng; Linder, Jeffrey A; Nowalk, Mary Patricia; Shields, Ryan K; Zimmerman, Richard K; Smith, Kenneth J
2016-11-08
Ambulatory antibiotic prescribing contributes to the development of antibiotic resistance and increases societal costs. Here, we estimate the hidden societal cost of antibiotic resistance per antibiotic prescribed in the United States. In an exploratory analysis, we used published data to develop point and range estimates for the hidden societal cost of antibiotic resistance (SCAR) attributable to each ambulatory antibiotic prescription in the United States. We developed four estimation methods that focused on the antibiotic-resistance attributable costs of hospitalization, second-line inpatient antibiotic use, second-line outpatient antibiotic use, and antibiotic stewardship, then summed the estimates across all methods. The total SCAR attributable to each ambulatory antibiotic prescription was estimated to be $13 (range: $3-$95). The greatest contributor to the total SCAR was the cost of hospitalization ($9; 69 % of the total SCAR). The costs of second-line inpatient antibiotic use ($1; 8 % of the total SCAR), second-line outpatient antibiotic use ($2; 15 % of the total SCAR) and antibiotic stewardship ($1; 8 % of the total SCAR) were modest contributors to the total SCAR. Assuming an average antibiotic cost of $20, the total SCAR attributable to each ambulatory antibiotic prescription would increase antibiotic costs by 65 % (range: 15-475 %) if incorporated into antibiotic costs paid by patients or payers. Each ambulatory antibiotic prescription is associated with a hidden SCAR that substantially increases the cost of an antibiotic prescription in the United States. This finding raises concerns regarding the magnitude of misalignment between individual and societal antibiotic costs.
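The summation method lends itself to a short worked example. The sketch below simply reproduces the paper's reported point estimates and percentage arithmetic; the $20 average antibiotic cost is the assumption stated in the abstract.

```python
# Reproducing the arithmetic of the summed point estimates reported
# above (US dollars per ambulatory antibiotic prescription).
scar_components = {
    "hospitalization": 9,
    "second-line inpatient antibiotics": 1,
    "second-line outpatient antibiotics": 2,
    "antibiotic stewardship": 1,
}
total_scar = sum(scar_components.values())  # $13
for name, cost in scar_components.items():
    print(f"{name}: ${cost} ({cost / total_scar:.0%} of total SCAR)")

avg_antibiotic_cost = 20  # assumed average cost, as in the study
print(f"total SCAR ${total_scar} adds {total_scar / avg_antibiotic_cost:.0%} "
      "to the average antibiotic cost")
```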
Daily accumulation rates of marine debris on sub-Antarctic island beaches.
Eriksson, Cecilia; Burton, Harry; Fitch, Stuart; Schulz, Martin; van den Hoff, John
2013-01-15
The world's oceans contain a large but unknown amount of plastic debris. We made daily collections of marine debris stranded at two sub-Antarctic islands to establish (a) physical causes of strandings, and (b) a sampling protocol to better estimate the oceans' plastic loading. Accumulation rates at some beaches were dependent on tide and onshore winds. Most of the 6389 items collected were plastic (Macquarie 95%, Heard 94%) and discarded or lost fishing gear comprised 22% of those plastic items. Stalked barnacles (Lepas spp.) were a regular attachment on Macquarie debris but not at Heard Island. The daily accumulation rate of plastic debris on Macquarie Island was an order of magnitude higher than that estimated from monthly surveys during the same 4 months in the previous 5 years. This finding suggests that estimates of the oceans' plastic loading are an order of magnitude too low. Copyright © 2012 Elsevier Ltd. All rights reserved.
Dewey, H M; Thrift, A G; Mihalopoulos, C; Carter, R; Macdonell, R A; McNeil, J J; Donnan, G A
2001-10-01
Accurate information about resource use and costs of stroke is necessary for informed health service planning. The purpose of this study was to determine the patterns of resource use among stroke patients and to estimate the total costs (direct service use and indirect production losses) of stroke (excluding SAH) in Australia for 1997. An incidence-based cost-of-illness model was developed, incorporating data obtained from the North East Melbourne Stroke Incidence Study (NEMESIS). The costs of stroke during the first year after stroke and the present value of total lifetime costs of stroke were estimated. The total first-year costs of all first-ever-in-a-lifetime strokes (SAH excluded) that occurred in Australia during 1997 were estimated to be A$555 million (US$420 million), and the present value of lifetime costs was estimated to be A$1.3 billion (US$985 million). The average costs per case during the first 12 months and over a lifetime were A$18 956 (US$14 361) and A$44 428 (US$33 658), respectively. The most important categories of cost during the first year were acute hospitalization (A$154 million), inpatient rehabilitation (A$150 million), and nursing home care (A$63 million). The present value of lifetime indirect costs was estimated to be A$34 million. Similar to other studies, hospital and nursing home costs contributed most to the total cost of stroke (excluding SAH) in Australia. Inpatient rehabilitation accounts for approximately 27% of total first-year costs. Given the magnitude of these costs, investigation of the cost-effectiveness of rehabilitation services should become a priority in this community.
NASA Astrophysics Data System (ADS)
Sendzimir, Jan; Dubel, Anna; Linnerooth-Bayer, Joanne; Damurski, Jakub; Schroeter, Dagmar
2014-05-01
Historically, large reservoirs have been the dominant strategy to counter flood and drought risk in Europe. However, a number of smaller-scale approaches have emerged as alternative strategies. To compare the cost effectiveness of reservoirs and these alternatives, we calculated the investment and maintenance costs, in euros per m3 of water stored or of annual runoff reduced, for five different strategies: large reservoirs (1.68 euros), large on-farm ponds (5.88 euros), small on-farm ponds (558.00 euros), shelterbelts (6.86 euros), and switching to conservation tillage (-9.20 euros). The most cost effective measure for reducing runoff is switching to conservation tillage practices, because this switch reduces machinery and labor costs in addition to reducing water runoff. Although shelterbelts that reduce annual runoff cannot be directly compared to ponds and reservoirs that store water, our estimates show that they likely compare favorably as a natural water retention measure, especially when taking account of their co-benefits in terms of erosion control, biodiversity and pollination. Another useful result is our demonstration of the economies of scale among reservoirs and ponds for storing water. Small ponds are two orders of magnitude more costly to construct and maintain as a flood and drought prevention measure than large reservoirs. Here, again, there are large co-benefits that should be factored into the cost-benefit equation, especially the value of small ponds in promoting corridors for migration. This analysis shows the importance of carrying out more extensive cost-benefit estimates across on-farm and off-farm measures for tackling drought and flood risk in the context of a changing climate. While concrete recommendations for supporting water retention measures will depend on a more detailed investigation of their costs and benefits, this research highlights the potential of natural water retention measures as a complement to conventional investments in large reservoirs.
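For readers who want to reproduce the comparison, a minimal sketch ranking the five strategies by the quoted cost figures follows; the numbers are the point estimates above, and the scale gap between ponds and reservoirs is computed directly from them.

```python
import math

# Reported investment-and-maintenance costs per m3 of water stored or
# annual runoff reduced (euros/m3), ranked from most to least cost effective.
cost_per_m3 = {
    "large reservoirs": 1.68,
    "large on-farm ponds": 5.88,
    "small on-farm ponds": 558.00,
    "shelterbelts": 6.86,
    "conservation tillage": -9.20,  # negative: net savings on machinery and labor
}
for strategy, cost in sorted(cost_per_m3.items(), key=lambda kv: kv[1]):
    print(f"{strategy:>22s}: {cost:8.2f} euros/m3")

scale_gap = math.log10(cost_per_m3["small on-farm ponds"]
                       / cost_per_m3["large reservoirs"])
print(f"ponds vs reservoirs: ~{scale_gap:.1f} orders of magnitude")
```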
Material and shape perception based on two types of intensity gradient information
Nishida, Shin'ya
2018-01-01
Visual estimation of the material and shape of an object from a single image involves a hard, ill-posed computational problem. However, in our daily life we feel we can estimate both reasonably well. The neural computation underlying this ability remains poorly understood. Here we propose that the human visual system uses different aspects of object images to separately estimate the contributions of the material and shape. Specifically, material perception relies mainly on the intensity gradient magnitude information, while shape perception relies mainly on the intensity gradient order information. A clue to this hypothesis was provided by the observation that luminance-histogram manipulation, which changes luminance gradient magnitudes but not the luminance-order map, effectively alters the material appearance but not the shape of an object. In agreement with this observation, we found that the simulated physical material changes do not significantly affect the intensity order information. A series of psychophysical experiments further indicate that human surface shape perception is robust against intensity manipulations provided they do not disturb the intensity order information. In addition, we show that the two types of gradient information can be utilized for the discrimination of albedo changes from highlights. These findings suggest that the visual system relies on these diagnostic image features to estimate physical properties in a distal world. PMID:29702644
Building a kinetic Monte Carlo model with a chosen accuracy.
Bhute, Vijesh J; Chatterjee, Abhijit
2013-06-28
The kinetic Monte Carlo (KMC) method is a popular modeling approach for reaching large materials length and time scales. The KMC dynamics is erroneous when atomic processes that are relevant to the dynamics are missing from the KMC model. Recently, we developed the first error measure for KMC [Bhute and Chatterjee, J. Chem. Phys. 138, 084103 (2013)]. The error measure, which is given in terms of the probability that a missing process will be selected in the correct dynamics, requires estimation of the missing rate. In this work, we present an improved procedure for estimating the missing rate. The estimate found using the new procedure is within an order of magnitude of the correct missing rate, unlike our previous approach, where the estimate was larger by orders of magnitude. This enables one to find the error in the KMC model more accurately. In addition, we find the time for which the KMC model can be used before a maximum error in the dynamics has been reached.
Stey, Anne M; Brook, Robert H; Needleman, Jack; Hall, Bruce L; Zingmond, David S; Lawson, Elise H; Ko, Clifford Y
2015-02-01
This study aims to describe the magnitude of hospital costs among patients undergoing elective colectomy, cholecystectomy, and pancreatectomy, determine whether these costs relate as expected to duration of care, patient case-mix severity and comorbidities, and whether risk-adjusted costs vary significantly by hospital. Correctly estimating the cost of production of surgical care may help decision makers design mechanisms to improve the efficiency of surgical care. Patient data from 202 hospitals in the ACS-NSQIP were linked to Medicare inpatient claims. Patient charges were mapped to cost center cost-to-charge ratios in the Medicare cost reports to estimate costs. The association of patient case-mix severity and comorbidities with cost was analyzed using mixed effects multivariate regression. Cost variation among hospitals was quantified by estimating risk-adjusted hospital cost ratios and 95% confidence intervals from the mixed effects multivariate regression. There were 21,923 patients from 202 hospitals who underwent an elective colectomy (n = 13,945), cholecystectomy (n = 5,569), or pancreatectomy (n = 2,409). Median cost was lowest for cholecystectomy ($15,651) and highest for pancreatectomy ($37,745). Room and board costs accounted for the largest proportion (49%) of costs and were correlated with length of stay, R = 0.89, p < 0.001. The patient case-mix severity and comorbidity variables most associated with cost were American Society of Anesthesiologists (ASA) class IV (estimate 1.72, 95% CI 1.57 to 1.87) and fully dependent functional status (estimate 1.63, 95% CI 1.53 to 1.74). After risk-adjustment, 66 hospitals had significantly lower costs than the average hospital and 57 hospitals had significantly higher costs. The hospital costs estimates appear to be consistent with clinical expectations of hospital resource use and differ significantly among 202 hospitals after risk-adjustment for preoperative patient characteristics and procedure type. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Societal costs of underage drinking.
Miller, Ted R; Levy, David T; Spicer, Rebecca S; Taylor, Dexter M
2006-07-01
Despite minimum-purchase-age laws, young people regularly drink alcohol. This study estimated the magnitude and costs of problems resulting from underage drinking by category (traffic crashes, violence, property crime, suicide, burns, drownings, fetal alcohol syndrome, high-risk sex, poisonings, psychoses, and dependency treatment) and compared those costs with associated alcohol sales. Previous studies did not break out costs of alcohol problems by age. For each category of alcohol-related problems, we estimated fatal and nonfatal cases attributable to underage alcohol use. We multiplied alcohol-attributable cases by estimated costs per case to obtain total costs for each problem. Underage drinking accounted for at least 16% of alcohol sales in 2001. It led to 3,170 deaths and 2.6 million other harmful events. The estimated $61.9 billion bill (relative SE = 18.5%) included $5.4 billion in medical costs, $14.9 billion in work loss and other resource costs, and $41.6 billion in lost quality of life. Quality-of-life costs, which accounted for 67% of total costs, required challenging indirect measurement. Alcohol-attributable violence and traffic crashes dominated the costs. Leaving aside quality of life, the societal harm of $1 per drink consumed by an underage drinker exceeded the average purchase price of $0.90 or the associated $0.10 in tax revenues. Recent attention has focused on problems resulting from youth use of illicit drugs and tobacco. In light of the associated substantial injuries, deaths, and high costs to society, youth drinking behaviors merit the same kind of serious attention.
Economic burden of acute pesticide poisoning in South Korea.
Choi, Yeongchull; Kim, Younhee; Ko, Yousun; Cha, Eun S; Kim, Jaeyoung; Lee, Won J
2012-12-01
To investigate the magnitude and characteristics of the economic burden resulting from acute pesticide poisoning (APP) in South Korea. The total costs of APP from a societal perspective were estimated by summing the direct medical and non-medical costs together with the indirect costs. Direct medical costs for patients assigned a disease code of pesticide poisoning were extracted from the Korean National Health Insurance Reimbursement Data. Direct non-medical costs were estimated using the average transportation and caregiving costs from the Korea Health Panel Survey. Indirect costs, incurred by pre-mature deaths and work loss, were obtained using 2009 Life Tables for Korea and other relevant literature. In 2009, a total of 11,453 patients were treated for APP and 1311 died, corresponding to an incidence of 23.1 per 100,000 population and a mortality rate of 2.6 per 100,000 population in South Korea. The total costs of APP were estimated at approximately US$ 150 million, 0.3% of the costs of total diseases. Costs due to pre-mature mortality accounted for 90.6% of the total costs, whereas the contribution of direct medical costs was relatively small. Costs from APP demonstrate a unique characteristic of a large proportion of the indirect costs originating from pre-mature mortality. This finding suggests policy implications for restrictions on lethal pesticides and safe storage to reduce fatality and cost due to APP. © 2012 Blackwell Publishing Ltd.
Pathogen and indicator concentrations normally vary by several orders of magnitude in raw waters, and to an even greater extent during hazardous event periods. This variation in concentration typically dominates the estimate of infection generated in a quantitative microbial risk ...
Esperato, Alexo; Bishai, David; Hyder, Adnan A
2012-01-01
The Road Safety in 10 Countries (RS-10) project will implement 12 different road safety interventions at specific sites within 10 low- and middle-income countries (LMICs). This evaluation reports the number of lives that RS-10 is projected to save in those locations, the economic value of the risk reduction, and the maximum level of investment that a public health intervention of this magnitude would be able to incur before its costs outweigh its health benefits. We assumed a 5-year implementation horizon corresponding to the duration of RS-10. Based on a preliminary literature review, we estimated the effectiveness of each of the RS-10 interventions. Applying these effectiveness estimates to the size of the population at risk at RS-10 sites, we calculated the number of lives and life years saved (LYS) by RS-10. We projected the value of a statistical life (VSL) in each RS-10 country based on gross national income (GNI) and estimated the value of the lives saved using each country's VSL. Sensitivity analysis addressed robustness to assumptions about elasticity, discount rates, and intervention effectiveness. From the evidence base reviewed, only 13 studies met our selection criteria. Such a limited base presents uncertainties about the potential impact of the modeled interventions. We tried to account for these uncertainties by allowing effectiveness to vary ± 20 percent for each intervention. Despite this variability, RS-10 remains likely to be worth the investment. RS-10 is expected to save 10,310 lives over 5 years (discounted at 3%). VSL and $/LYS methods provide concordant results. Based on our estimates of each country's VSL, the respective countries would be willing to pay $2.45 billion to lower these fatality risks (varying intervention effectiveness by ± 20 percent, the corresponding range is $2.0-$2.9 billion). Analysis based on $/LYS shows that the RS-10 project will be cost-effective as long as its costs do not exceed $5.14 billion (under ± 20% intervention effectiveness, the range = $4.1-$6.2 billion). Even at low efficacy, these estimates are well over an order of magnitude above the $125 million projected investment. RS-10 is likely to yield high returns for invested resources. The study's chief limitation was reliance on the world's limited evidence base on how effective the road safety interventions will be. Planned evaluation of RS-10 will enhance planners' ability to conduct economic assessments of road safety in developing countries.
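The VSL projection step can be sketched as a standard benefit-transfer calculation: a reference VSL is scaled by relative income raised to an elasticity, then multiplied by lives saved. The reference VSL, GNI figures, and unit elasticity below are hypothetical placeholders rather than the RS-10 inputs, so the output will not match the study's $2.45 billion aggregate.

```python
# Benefit-transfer sketch: project a country's VSL from income, then
# value the projected lives saved under +/- 20% effectiveness.
# All economic inputs are hypothetical placeholders.

def project_vsl(gni_target, gni_ref, vsl_ref, elasticity=1.0):
    """VSL transfer: reference VSL scaled by relative income to an elasticity."""
    return vsl_ref * (gni_target / gni_ref) ** elasticity

vsl = project_vsl(gni_target=4_000, gni_ref=50_000, vsl_ref=9_000_000)
lives_saved = 10_310  # RS-10 projection over 5 years (discounted at 3%)
for eff in (0.8, 1.0, 1.2):  # +/- 20% intervention effectiveness
    value = vsl * lives_saved * eff
    print(f"effectiveness x{eff:.1f}: ${value / 1e9:.2f} billion")
```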
Rapid solidification of metallic particulates
NASA Technical Reports Server (NTRS)
Grant, N. J.
1982-01-01
In order to maximize the heat transfer coefficient, the most important variable in rapid solidification is the powder particle size: the finer the particle size, the higher the solidification rate. Efforts to decrease the particle diameter offer the greatest payoff in attained quench rate. The velocity of the liquid droplet in the atmosphere is the second most important variable. Unfortunately, the choices of gas atmospheres are sharply limited both because of conductivity and cost. Nitrogen and argon stand out as the preferred gases: nitrogen where reactions are unimportant and argon where reaction with nitrogen may be important. In gas atomization, helium offers up to an order of magnitude increase in solidification rate over argon and nitrogen. By contrast, atomization in vacuum drops the quench rate several orders of magnitude.
GPU computing of compressible flow problems by a meshless method with space-filling curves
NASA Astrophysics Data System (ADS)
Ma, Z. H.; Wang, H.; Pu, S. H.
2014-04-01
A graphic processing unit (GPU) implementation of a meshless method for solving compressible flow problems is presented in this paper. A least-squares fit is used to discretize the spatial derivatives of the Euler equations, and an upwind scheme is applied to estimate the flux terms. The compute unified device architecture (CUDA) C programming model is employed to efficiently and flexibly port the meshless solver from CPU to GPU. Considering the data locality of randomly distributed points, space-filling curves are adopted to re-number the points in order to improve the memory performance. Detailed evaluations are first carried out to assess the accuracy and conservation property of the underlying numerical method. Then the GPU-accelerated flow solver is used to solve external steady flows over aerodynamic configurations. Representative results are validated through extensive comparisons with experimental, finite volume or other available reference solutions. Performance analysis reveals that the running time of simulations is significantly reduced, with impressive (more than an order of magnitude) speedups achieved.
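A minimal sketch of the re-numbering idea follows, assuming a Morton (Z-order) curve on 2-D points; the paper's own choice of space-filling curve and its implementation details may differ.

```python
import random

# Re-number scattered 2-D points along a Morton (Z-order) space-filling
# curve so that points close in space end up close in memory.

def interleave16(v: int) -> int:
    """Spread the lower 16 bits of v apart so another coordinate can interleave."""
    v &= 0xFFFF
    v = (v | (v << 8)) & 0x00FF00FF
    v = (v | (v << 4)) & 0x0F0F0F0F
    v = (v | (v << 2)) & 0x33333333
    v = (v | (v << 1)) & 0x55555555
    return v

def morton_key(x: float, y: float, bits: int = 16) -> int:
    """Quantize (x, y) in [0, 1)^2 and interleave the bits into a Morton key."""
    scale = (1 << bits) - 1
    ix, iy = int(x * scale), int(y * scale)
    return (interleave16(iy) << 1) | interleave16(ix)

points = [(random.random(), random.random()) for _ in range(10)]
reordered = sorted(points, key=lambda p: morton_key(*p))  # new point numbering
```

Sorting by the interleaved key gives spatial neighbors nearby array indices, which is what improves cache and GPU memory-coalescing behavior.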
A Multidisciplinary, Science-Based Approach to the Economics of Climate Change
Carlin, Alan
2011-01-01
Economic analyses of environmental mitigation and other interdisciplinary public policy issues can be much more useful if they critically examine what other disciplines have to say, insist on using the most relevant observational data and the scientific method, and examine lower cost alternatives to the change proposed. These general principles are illustrated by applying them to the case of climate change mitigation, one of the most interdisciplinary of public policy issues. The analysis shows how use of these principles leads to quite different conclusions than those of most previous such economic analyses, as follows: The economic benefits of reducing CO2 emissions may be about two orders of magnitude less than those estimated by most economists, because the climate sensitivity factor (CSF) is much lower than assumed by the United Nations, because feedback is negative rather than positive, and because the effects of CO2 emissions reductions on atmospheric CO2 appear to be short rather than long lasting. The costs of CO2 emissions reductions are very much higher than usually estimated because of technological and implementation problems recently identified. Geoengineering such as solar radiation management is a controversial alternative to CO2 emissions reductions that offers opportunities to greatly decrease these large costs, change global temperatures with far greater assurance of success, and eliminate the possibility of low probability, high consequence risks of rising temperatures, but has been largely ignored by economists. CO2 emissions reductions are economically unattractive, since the very modest benefits remaining after the corrections for the above effects are quite unlikely to economically justify the much higher costs unless much lower cost geoengineering is used. The risk of catastrophic anthropogenic global warming appears to be so low that it is not currently worth doing anything to try to control it, including geoengineering. PMID:21695026
Carvalho, Natalie; Gutiérrez-Delgado, Cristina; Orozco, Ricardo; Mancuso, Anna; Hogan, Daniel R; Lee, Diana; Murakami, Yuki; Sridharan, Lakshmi; Medina-Mora, María Elena; González-Pier, Eduardo
2012-01-01
Objective: To inform decision making regarding intervention strategies against non-communicable diseases in Mexico, in the context of health reform. Design: Cost effectiveness analysis based on epidemiological modelling. Interventions: 101 intervention strategies relating to nine major clusters of non-communicable disease: depression, heavy alcohol use, tobacco use, cataracts, breast cancer, cervical cancer, chronic obstructive pulmonary disease, cardiovascular disease, and diabetes. Data sources: Mexican data sources were used for most key input parameters, including administrative registries; disease burden and population estimates; household surveys; and drug price databases. These sources were supplemented as needed with estimates for Mexico from the WHO-CHOICE unit cost database or with estimates extrapolated from the published literature. Main outcome measures: Population health outcomes, measured in disability adjusted life years (DALYs); costs in 2005 international dollars ($Int); and costs per DALY. Results: Across the 101 intervention strategies examined in this study, average yearly costs at the population level would range from around $Int1m or less (such as for cataract surgeries) to more than $Int1bn for certain strategies for primary prevention in cardiovascular disease. Wide variation also appeared in total population health benefits, from <1000 DALYs averted a year (for some components of cancer treatments or aspirin for acute ischaemic stroke) to >300 000 DALYs averted (for aggressive combinations of interventions to deal with alcohol use or cardiovascular risks). Interventions in this study spanned a wide range of average cost effectiveness ratios, differing by more than three orders of magnitude between the lowest and highest ratios. Overall, community and public health interventions such as non-personal interventions for alcohol use, tobacco use, and cardiovascular risks tended to have lower cost effectiveness ratios than many clinical interventions (of varying complexity). Even within the community and public health interventions, however, there was a 200-fold difference between the most and least cost effective strategies examined. Likewise, several clinical interventions appeared among the strategies with the lowest average cost effectiveness ratios, for example cataract surgeries. Conclusions: Wide variations in costs and effects exist within and across intervention categories. For every major disease area examined, at least some strategies provided excellent value for money, including both population based and personal interventions. PMID:22389335
Darvasi, A.; Soller, M.
1994-01-01
Selective genotyping is a method to reduce costs in marker-quantitative trait locus (QTL) linkage determination by genotyping only those individuals with extreme, and hence most informative, quantitative trait values. The DNA pooling strategy (termed "selective DNA pooling") takes this one step further by pooling DNA from the selected individuals at each of the two phenotypic extremes, and basing the test for linkage on marker allele frequencies as estimated from the pooled samples only. This can reduce genotyping costs of marker-QTL linkage determination by up to two orders of magnitude. Theoretical analysis of selective DNA pooling shows that for experiments involving backcross, F(2) and half-sib designs, the power of selective DNA pooling for detecting genes with large effect can be the same as that obtained by individual selective genotyping. Power for detecting genes with small effect, however, was found to decrease strongly with increase in the technical error of estimating allele frequencies in the pooled samples. The effect of technical error, however, can be markedly reduced by replication of technical procedures. It is also shown that a proportion selected of 0.1 at each tail will be appropriate for a wide range of experimental conditions. PMID:7896115
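The replication effect can be illustrated with a simple variance decomposition: the standard error of a pooled allele-frequency estimate combines a sampling term, fixed by the number of pooled individuals, with a technical term that shrinks as replicate measurements are averaged. This is an illustrative model with made-up numbers, not the paper's exact derivation.

```python
import math

def se_pooled_freq(p, n_pooled, tech_sd, n_replicates):
    """SE of a pooled allele-frequency estimate (illustrative model)."""
    sampling_var = p * (1 - p) / (2 * n_pooled)  # 2 alleles per diploid individual
    technical_var = tech_sd ** 2 / n_replicates  # technical error averaged over replicates
    return math.sqrt(sampling_var + technical_var)

# Hypothetical pool of 200 individuals, allele frequency 0.5,
# technical SD of 0.03 per measurement.
for k in (1, 2, 4, 8):
    print(f"{k} replicate(s): SE = {se_pooled_freq(0.5, 200, 0.03, k):.4f}")
```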
Sutton, Patrick T; Ginn, Timothy R
2014-12-15
A sustainable in-well vapor stripping system is designed as a cost-effective alternative for remediation of shallow chlorinated solvent groundwater plumes. A solar-powered air compressor is used to inject air bubbles into a monitoring well to strip volatile organic compounds from the liquid to the vapor phase while simultaneously inducing groundwater circulation around the well screen. An analytical model of the remediation process is developed to estimate contaminant mass flow and removal rates. The model was calibrated based on a one-day pilot study conducted in an existing monitoring well at a former dry cleaning site. According to the model, induced groundwater circulation at the study site increased the contaminant mass flow rate into the well by approximately two orders of magnitude relative to ambient conditions. Modeled estimates for 5 h of pulsed air injection per day at the pilot study site indicated that the average effluent concentrations of dissolved tetrachloroethylene and trichloroethylene can be reduced by over 90% relative to the ambient concentrations. The results indicate that the system could be used cost-effectively as either a single- or multi-well point technology to substantially reduce the mass of dissolved chlorinated solvents in groundwater. Copyright © 2014 Elsevier B.V. All rights reserved.
Plumes in the mantle. [free air and isostatic gravity anomalies for geophysical interpretation
NASA Technical Reports Server (NTRS)
Khan, M. A.
1973-01-01
Free air and isostatic gravity anomalies for the purposes of geophysical interpretation are presented. Evidence for the existence of hotspots in the mantle is reviewed. The proposed locations of these hotspots are not always associated with positive gravity anomalies. Theoretical analysis based on simplified flow models for the plumes indicates that unless the frictional viscosities are several orders of magnitude smaller than present estimates of mantle viscosity or, alternatively, the vertical flows are reduced by about two orders of magnitude, the plume flow will generate implausibly high temperatures.
Uncertainty in gridded CO2 emissions estimates
Hogue, Susannah; Marland, Eric; Andres, Robert J.; ...
2016-05-19
We are interested in the spatial distribution of fossil-fuel-related emissions of CO2 for both geochemical and geopolitical reasons, but it is important to understand the uncertainty that exists in spatially explicit emissions estimates. Working from one of the widely used gridded data sets of CO2 emissions, we examine the elements of uncertainty, focusing on gridded data for the United States at the scale of 1° latitude by 1° longitude. Uncertainty is introduced in the magnitude of total United States emissions, the magnitude and location of large point sources, the magnitude and distribution of non-point sources, and from the use of proxy data to characterize emissions. For the United States, we develop estimates of the contribution of each component of uncertainty. At 1° resolution, in most grid cells, the largest contribution to uncertainty comes from how well the distribution of the proxy (in this case population density) represents the distribution of emissions. In other grid cells, the magnitude and location of large point sources make the major contribution to uncertainty. Uncertainty in population density can be important where a large gradient in population density occurs near a grid cell boundary. Uncertainty is strongly scale-dependent with uncertainty increasing as grid size decreases. In conclusion, uncertainty for our data set with 1° grid cells for the United States is typically on the order of ±150%, but this is perhaps not excessive in a data set where emissions per grid cell vary over 8 orders of magnitude.
Public Policy and Economic Efficiency in Ontario's Electricity Market: 2002 to 2011
NASA Astrophysics Data System (ADS)
Olmstead, Derek E. H.
A competitive wholesale electricity market began operation in Ontario in 2002. The institutional features and development process are described, and the outcomes associated with certain features are assessed. First, a six-equation model of the market is specified and estimated. The results are used to analyse the province's renewable energy program. The impacts of the program on consumers' and producers' surplus, as well as the resulting degree of carbon dioxide (CO2) emission abatement, are estimated. These results are used to infer the per-unit cost of CO2 abatement resulting from the program. Under the assumption that the renewable-fuelled energy displaces coal-fuelled energy from the market, the estimated cost is approximately $93/tonne of CO2; under the alternative assumption that natural gas-fuelled generation is displaced, the estimated cost is $207/tonne of CO2. Comparison with costs observed in other markets and jurisdictions reveals the program to cost approximately an order of magnitude more than elsewhere. It is concluded that Ontario pays substantially more for emission abatement than is necessary or, alternatively, that Ontario achieves substantially less abatement than is feasible for each dollar of economic resources expended. Second, the market model is also used to assess the treatment of electricity exports with respect to the so-called global adjustment charge. The analysis reveals that the current practise of exempting exports from the charge is not socially optimal from a total surplus-maximisation standpoint. That objective would be achieved if the global adjustment were allocated to exports at approximately 32% of the rate at which it is applied to Ontario-based consumers, a result consistent with a Ramsey-type inverse elasticity rule. Third, the forward market unbiasedness hypothesis is assessed in the context of the market for financial transmission rights (FTR). Issues related to left-censoring of payouts at $0 and overlapping observations are dealt with. The analysis reveals little evidence in favour of the hypothesis, but finds less biasedness in long-term rights as compared to short-term rights. Analysis of bidder behaviour reveals greater levels of participation in auctions of FTRs that link Ontario to similarly competitive neighbouring jurisdictions as opposed to non-competitive jurisdictions.
Grewal, Simrun; Ramsey, Scott; Balu, Sanjeev; Carlson, Josh J
2018-05-18
Biosimilars can directly reduce the cost of treating patients for whom a reference biologic is indicated by offering a highly similar, lower priced alternative. We examine factors related to biosimilar regulatory approval, uptake, pricing, and financing and the potential impact on drug expenditures in the U.S. We developed a framework to illustrate how key factors including regulatory policies, provider and patient perception, pricing, and payer policies impact biosimilar cost-savings. Further, we developed a budget impact cost model to estimate savings from filgrastim biosimilars under various scenarios. The model uses publicly available data on disease incidence, treatment patterns, market share, and drug prices to estimate the cost-savings over a 5-year time horizon. We estimate five-year cost savings of $256 million, of which 18% ($47 million) are from reduced patient out-of-pocket costs, 34% ($86 million) are savings to commercial payers, and 48% ($123 million) are savings for Medicare. Additional scenarios demonstrate the impact of uncertain factors, including price, uptake, and financing policies. A variety of interrelated factors influence the development, uptake, and cost-savings for biosimilar use in the U.S. The filgrastim case is a useful example that illustrates these factors and the potential magnitude of cost savings.
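The budget-impact logic reduces to yearly savings of treated patients times biosimilar uptake times the price difference, summed over the horizon. The sketch below uses hypothetical inputs, not the filgrastim model's actual parameters.

```python
# Minimal budget-impact sketch: savings per year = patients x uptake x
# price discount, summed over a 5-year horizon. All inputs hypothetical.

def five_year_savings(patients_per_year, uptake_by_year, ref_price, biosim_price):
    discount = ref_price - biosim_price  # savings per biosimilar-treated patient
    return sum(patients_per_year * u * discount for u in uptake_by_year)

savings = five_year_savings(
    patients_per_year=50_000,
    uptake_by_year=[0.10, 0.20, 0.30, 0.40, 0.50],  # growing market share
    ref_price=1_500,
    biosim_price=1_200,
)
print(f"5-year savings: ${savings / 1e6:.0f} million")
```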
Super-Resolution Image Reconstruction Applied to Medical Ultrasound
NASA Astrophysics Data System (ADS)
Ellis, Michael
Ultrasound is the preferred imaging modality for many diagnostic applications due to its real-time image reconstruction and low cost. Nonetheless, conventional ultrasound is not used in many applications because of limited spatial resolution and soft tissue contrast. Most commercial ultrasound systems reconstruct images using a simple delay-and-sum architecture on receive, which is fast and robust but does not utilize all information available in the raw data. Recently, more sophisticated image reconstruction methods have been developed that make use of far more information in the raw data to improve resolution and contrast. One such method is the Time-Domain Optimized Near-Field Estimator (TONE), which employs maximum a posteriori estimation to solve a highly underdetermined problem, given a well-defined system model. TONE has been shown to significantly improve both the contrast and resolution of ultrasound images when compared to conventional methods. However, TONE's lack of robustness to variations from the system model and extremely high computational cost hinder it from being readily adopted in clinical scanners. This dissertation aims to reduce the impact of TONE's shortcomings, transforming it from an academic construct to a clinically viable image reconstruction algorithm. By altering the system model from a collection of individual hypothetical scatterers to a collection of weighted, diffuse regions, dTONE is able to achieve much greater robustness to modeling errors. A method for efficient parallelization of dTONE is presented that reduces reconstruction time by more than an order of magnitude with little loss in image fidelity. An alternative reconstruction algorithm, called qTONE, is also developed and is able to reduce reconstruction times by another two orders of magnitude while simultaneously improving image contrast. Each of these methods for improving TONE is presented, their limitations are explored, and all are used in concert to reconstruct in vivo images of a human testicle. In all instances, the methods presented here outperform conventional image reconstruction methods by a significant margin. As TONE and its variants are general image reconstruction techniques, the theories and research presented here have the potential to significantly improve not only ultrasound's clinical utility, but that of other imaging modalities as well.
NASA Astrophysics Data System (ADS)
Soh, I.; Chang, C.
2017-12-01
The techniques for estimating present-day stress states by inverting multiple earthquake focal mechanism solutions (FMS) provide orientations of the three principal stresses and their relative magnitudes. In order to estimate absolute magnitudes of the stresses, which are generally required to analyze faulting mechanics, we combine the relative stress magnitude parameter (R-value) derived from the inversion process with the concept of a frictionally equilibrated stress state defined by the Coulomb friction law. The stress inversion in the Korean Peninsula using 152 FMS data (magnitude ≥ 2.5), conducted at regularly spaced grid points, yields a consistent strike-slip faulting regime in which the maximum (S1) and minimum (S3) principal stresses act in horizontal planes (with an ENE-WSW S1 azimuth) and the intermediate principal stress (S2) is close to vertical. However, the R-value varies from 0.28 to 0.75 depending on location, systematically increasing eastward. Based on the assumptions that the vertical stress is lithostatic, pore pressure is hydrostatic, and the maximum differential stress (S1-S3) is limited by Byerlee friction on optimally oriented faults, we estimate absolute magnitudes of the two horizontal principal stresses using the R-value. As the R-value increases, so do the magnitudes of the horizontal stresses. Our estimation shows that the maximum horizontal principal stress (S1) normalized by the vertical stress tends to increase from 1.3 in the west to 1.8 in the east. The estimated variation of stress magnitudes is compatible with the distinct clustering of faulting types in different regions: normal faulting events are densely populated in the west, where the horizontal stress is relatively low, whereas numerous reverse faulting events prevail in the east offshore, where the horizontal stress is relatively high. Such a characteristic distribution of distinct faulting types in different regions can only be explained in terms of stress magnitude variation.
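A hedged sketch of the stress-magnitude step follows, under explicitly stated assumptions: a strike-slip regime with S2 = Sv lithostatic, hydrostatic pore pressure, R = (S1 - S2)/(S1 - S3), and the Coulomb frictional cap (S1 - Pp)/(S3 - Pp) = (sqrt(mu^2 + 1) + mu)^2 with Byerlee's mu = 0.6. The study's exact conventions may differ, so the printed ratios only approximate the 1.3-1.8 range quoted above.

```python
import math

def stress_magnitudes(depth_m, R, mu=0.6, rho_rock=2600.0, rho_water=1000.0, g=9.81):
    """Absolute S1, Sv, S3 (Pa) for a strike-slip regime at frictional equilibrium.

    Assumptions (not necessarily the study's): R = (S1 - S2)/(S1 - S3),
    S2 = Sv lithostatic, Pp hydrostatic, Coulomb cap on (S1 - Pp)/(S3 - Pp).
    """
    sv = rho_rock * g * depth_m           # lithostatic vertical stress, = S2
    pp = rho_water * g * depth_m          # hydrostatic pore pressure
    q = (math.sqrt(mu**2 + 1) + mu) ** 2  # frictional limit on (S1-Pp)/(S3-Pp)
    # Solve S1, S3 from: S1 - Pp = q (S3 - Pp) and Sv = S1 - R (S1 - S3)
    s3 = pp + (sv - pp) / (q - R * (q - 1))
    s1 = pp + q * (s3 - pp)
    return s1, sv, s3

for R in (0.28, 0.75):  # westernmost and easternmost R-values reported above
    s1, sv, s3 = stress_magnitudes(5000.0, R)
    print(f"R = {R}: S1/Sv = {s1 / sv:.2f}, S3/Sv = {s3 / sv:.2f}")
```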
Exploring Discretization Error in Simulation-Based Aerodynamic Databases
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian
2010-01-01
This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and adaptive mesh refinement is used to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database of 120 cases computed for a NACA 0012 airfoil. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or stiffness in the governing equations near the incompressible limit is shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
Preconditioned conjugate gradient wave-front reconstructors for multiconjugate adaptive optics
NASA Astrophysics Data System (ADS)
Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.
2003-09-01
Multiconjugate adaptive optics (MCAO) systems with 10^4-10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wave-front control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of adaptive optics degrees of freedom. We develop scalable open-loop iterative sparse matrix implementations of minimum variance wave-front reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method with an efficient preconditioner, whose block structure is defined by the atmospheric turbulent layers, very much like the layer-oriented MCAO algorithms of current interest. Two cost-effective preconditioners are investigated: a multigrid solver and a simpler block symmetric Gauss-Seidel (BSGS) sweep. Both options require off-line sparse Cholesky factorizations of the diagonal blocks of the matrix system. The cost to precompute these factors scales approximately as the three-halves power of the number of estimated phase grid points per atmospheric layer, and their average update rate is typically of the order of 10^-2 Hz, i.e., 4-5 orders of magnitude lower than the typical 10^3 Hz temporal sampling rate. All other computations scale almost linearly with the total number of estimated phase grid points. We present numerical simulation results to illustrate algorithm convergence. Convergence rates of both preconditioners are similar, regardless of measurement noise level, indicating that the layer-oriented BSGS sweep is as effective as the more elaborate multiresolution preconditioner.
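For reference, the core iteration is the textbook preconditioned conjugate gradient loop, with the preconditioner supplied as a callable. The sketch below substitutes a simple Jacobi (diagonal) preconditioner and a small dense test matrix for the paper's sparse multigrid and BSGS options.

```python
import numpy as np

def pcg(A, b, precond, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = precond(r)                # preconditioned residual
    p = z.copy()                  # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # conjugate direction update
        rz = rz_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)      # symmetric positive definite test matrix
b = rng.standard_normal(50)
x = pcg(A, b, precond=lambda r: r / np.diag(A))  # Jacobi stand-in preconditioner
print("residual norm:", np.linalg.norm(A @ x - b))
```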
NASA Astrophysics Data System (ADS)
Saavedra, Juan Alejandro
Quality Control (QC) and Quality Assurance (QA) strategies vary significantly across industries in the manufacturing sector depending on the product being built. Such strategies range from simple statistical analysis and process controls to the decision-making process of reworking, repairing, or scrapping defective product. This study proposes an optimal QC methodology for including rework stations in the manufacturing process by identifying the number and location of these workstations. The factors considered to optimize these stations are cost, cycle time, reworkability and rework benefit. The goal is to minimize the cost and cycle time of the process while increasing reworkability and rework benefit. The specific objectives of this study are: (1) to propose a cost estimation model that includes energy consumption, and (2) to propose an optimal QC methodology to identify the quantity and location of rework workstations. The cost estimation model includes energy consumption as part of the product direct cost and allows the user to calculate product direct cost as the quality sigma level of the process changes. This provides a benefit because a complete cost estimation does not need to be performed every time the process yield changes. This cost estimation model is then used for the QC strategy optimization. In order to propose a methodology that provides an optimal QC strategy, the possible factors that affect QC were evaluated. A screening design of experiments (DOE) was performed on seven initial factors and identified three significant factors; it also showed that one response variable was not required for the optimization process. A full factorial DOE was then performed to verify the significant factors obtained previously. The QC strategy optimization is performed through a genetic algorithm (GA), which evaluates many candidate solutions in order to obtain feasible optimal ones. The GA scores possible solutions on cost, cycle time, reworkability and rework benefit, and, because this is a multi-objective optimization problem, it returns several possible solutions. The solutions are presented as chromosomes that clearly state the number and location of the rework stations; a sketch of this encoding follows. The user analyzes these solutions and selects one by deciding which of the four factors is most important for the product being manufactured or the company's objective. The major contribution of this study is a methodology for identifying an effective and optimal QC strategy that incorporates the number and location of rework substations in order to minimize direct product cost and cycle time, and maximize reworkability and rework benefit.
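A toy sketch of that chromosome encoding and a scalarized fitness, with hypothetical cost and time penalties; the study's actual GA operators and multi-objective handling are not reproduced.

```python
import random

# A binary chromosome marks which of N_STATIONS workstations get a
# rework station. Cost/time penalties and benefit curve are hypothetical.
N_STATIONS = 8
COST_PER_REWORK_STATION = 1200.0   # hypothetical direct-cost penalty
TIME_PER_REWORK_STATION = 3.5      # hypothetical cycle-time penalty (min)

def fitness(chromosome):
    n = sum(chromosome)
    cost = n * COST_PER_REWORK_STATION
    cycle_time = n * TIME_PER_REWORK_STATION
    reworkability = 1 - 0.7 ** n           # diminishing returns on recovery
    rework_benefit = 5000.0 * reworkability
    # Scalarized for brevity; the study treats this as multi-objective.
    return rework_benefit - cost - 100.0 * cycle_time

population = [[random.randint(0, 1) for _ in range(N_STATIONS)]
              for _ in range(20)]
best = max(population, key=fitness)
print("best chromosome:", best, "fitness:", round(fitness(best), 1))
```

Each 1 in the chromosome reads directly as "place a rework station at this workstation," which is what lets the decision maker compare candidate layouts at a glance.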
LANDSAT D user data processing study
NASA Technical Reports Server (NTRS)
1976-01-01
The major expected users of the LANDSAT D system and a preliminary system design of their required facilities are investigated. This system design will then be costed in order to provide an estimate of the incremental user costs necessitated by LANDSAT D. One major use of these cost estimates is as part of an overall economic cost/benefit argument being developed for the LANDSAT D system. The implication of this motive is key; the system design (and corresponding cost estimates) must be a credible one, but not necessarily an optimum one.
Barnabe, Cheryl; Thanh, Nguyen Xuan; Ohinmaa, Arto; Homik, Joanne; Barr, Susan G; Martin, Liam; Maksymowych, Walter P
2014-08-01
Sustained remission in rheumatoid arthritis (RA) results in healthcare utilization cost savings. We evaluated the variation in estimates of savings when different definitions of remission [2011 American College of Rheumatology/European League Against Rheumatism Boolean Definition, Simplified Disease Activity Index (SDAI) ≤ 3.3, Clinical Disease Activity Index (CDAI) ≤ 2.8, and Disease Activity Score-28 (DAS28) ≤ 2.6] are applied. The annual mean healthcare service utilization costs were estimated from provincial physician billing claims, outpatient visits, and hospitalizations, with linkage to clinical data from the Alberta Biologics Pharmacosurveillance Program (ABioPharm). Cost savings in patients who had a 1-year continuous period of remission were compared to those who did not, using 4 definitions of remission. In 1086 patients, sustained remission rates were 16.1% for DAS28, 8.8% for Boolean, 5.5% for CDAI, and 4.2% for SDAI. The estimated mean annual healthcare cost savings per patient achieving remission (relative to not) were SDAI $1928 (95% CI 592, 3264), DAS28 $1676 (95% CI 987, 2365), and Boolean $1259 (95% CI 417, 2100). The annual savings by CDAI remission per patient were not significant at $423 (95% CI -1757, 2602). For patients in DAS28, Boolean, and SDAI remission, savings were seen both in costs directly related to RA and its comorbidities, and in costs for non-RA-related conditions. The magnitude of the healthcare cost savings varies according to the remission definition used in classifying patient disease status. The highest point estimate for cost savings was observed in patients attaining SDAI remission and the least with the CDAI; confidence intervals for these estimates do overlap. Future pharmacoeconomic analyses should employ all response definitions in assessing the influence of treatment.
A USEPA-sponsored field demonstration program was conducted to gather technically reliable cost and performance information on the electro-scan (FELL -41) pipeline condition assessment technology. Electro-scan technology can be used to estimate the magnitude and location of pote...
NASA Astrophysics Data System (ADS)
Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.
2017-03-01
We provide a detailed estimate for the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009) including the recently described elaborations and application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680 beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width of 340 and a circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient advanced quantum-computation techniques are developed, they nevertheless provide a valid baseline for research targeting a reduction of the algorithmic-level resource requirements, implying that a reduction by many orders of magnitude is necessary for the algorithm to become practical.
Modeling of the dispersion of depleted uranium aerosol.
Mitsakou, C; Eleftheriadis, K; Housiadas, C; Lazaridis, M
2003-04-01
Depleted uranium is a low-cost radioactive material that, in addition to other applications, is used by the military in kinetic energy weapons against armored vehicles. During the Gulf and Balkan conflicts, concern was raised about the potential health hazards arising from the toxic and radioactive material released. The aerosol produced during impact and combustion of depleted uranium munitions can potentially contaminate wide areas around the impact sites or can be inhaled by civilians and military personnel. Attempts to estimate the extent and magnitude of the dispersion were until now performed with complex modeling tools employing unclear assumptions and input parameters of high uncertainty. An analytical puff model accommodating diffusion with simultaneous deposition is developed, which can provide a reasonable estimate of the dispersion of the released depleted uranium aerosol. Furthermore, the period of exposure for a given point downwind of the release can be estimated (as opposed to when using a plume model). The main result is that the depleted uranium mass is deposited very close to the release point: the deposition flux a couple of kilometers from the release point is more than one order of magnitude lower than that a few meters from the release point. The effects due to uncertainties in the key input variables are addressed. The most influential parameters are found to be atmospheric stability, height of release, and wind speed, whereas the aerosol size distribution is less significant. The output from the analytical model developed was tested against the numerical model RPM-AERO. Results display satisfactory agreement between the two models.
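A minimal Gaussian puff sketch in the spirit of the model described above: ground-level concentration of an instantaneous release with total ground reflection, plus the local dry-deposition flux F = v_d * C. The dispersion parameters and inputs are illustrative, and puff depletion by deposition, which the paper's model accommodates, is omitted here.

```python
import math

def puff_ground_conc(q_kg, x, y, t, u, h, sx, sy, sz):
    """Ground-level (z = 0) concentration of an instantaneous puff released
    at the origin at height h, advected downwind at speed u, with total
    ground reflection (kg/m^3)."""
    norm = q_kg / ((2 * math.pi) ** 1.5 * sx * sy * sz)
    along = math.exp(-((x - u * t) ** 2) / (2 * sx ** 2))
    cross = math.exp(-(y ** 2) / (2 * sy ** 2))
    vert = 2 * math.exp(-(h ** 2) / (2 * sz ** 2))  # reflection doubles the term
    return norm * along * cross * vert

v_d = 0.01  # deposition velocity (m/s), illustrative
c = puff_ground_conc(q_kg=1.0, x=500.0, y=0.0, t=100.0, u=5.0, h=2.0,
                     sx=50.0, sy=50.0, sz=25.0)
print(f"concentration: {c:.3e} kg/m^3, deposition flux: {v_d * c:.3e} kg/m^2/s")
```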
Detection of Catalysis by Taste.
ERIC Educational Resources Information Center
Richman, Robert M.; Villaescusa, Warren
1998-01-01
Outlines the development of a kinetic study using the enzyme Lactaid to hydrolyze the synthetic substrate of lactose into an undergraduate laboratory experiment. Provides students with experience doing order-of-magnitude estimates. (DDR)
Limiting technologies for particle beams and high energy physics
NASA Astrophysics Data System (ADS)
Panofsky, W. K. H.
1985-07-01
Since 1930 the energy of accelerators has grown by an order of magnitude roughly every 7 years. Like all exponential growths, be they human population, the size of computers, or anything else, this eventually will have to come to an end. When will this happen to the growth of the energy of particle accelerators and colliders? Fortunately, as the energy of accelerators has grown, the cost per unit energy has decreased almost as fast as the energy has increased. The result is that while the energy has increased so dramatically, the cost per new installation has increased only by roughly an order of magnitude since the 1930s (corrected for inflation), while the number of accelerators operating at the frontier of the field has shrunk. As the by now familiar Livingston chart shows, this dramatic decrease in cost has been achieved largely by a succession of new technologies, in addition to the more moderate gains in efficiency due to improved design, economies of scale, etc. We are therefore facing two questions: (1) Is there good reason scientifically to maintain the exponential growth, and (2) Are there new technologies in sight which promise continued decreases in unit costs? The answer to the first question is definitely yes; the answer to the second question is maybe.
Empirical Assessment of Spatial Prediction Methods for Location Cost Adjustment Factors
Migliaccio, Giovanni C.; Guindani, Michele; D'Incognito, Maria; Zhang, Linlin
2014-01-01
In the feasibility stage, the correct prediction of construction costs ensures that budget requirements are met from the start of a project's lifecycle. A very common approach for performing quick order-of-magnitude estimates is based on using Location Cost Adjustment Factors (LCAFs) that compute historically based costs by project location. Nowadays, numerous LCAF datasets are commercially available in North America, but, obviously, they do not include all locations. Hence, LCAFs for un-sampled locations need to be inferred through spatial interpolation or prediction methods. Currently, practitioners tend to select the value for a location using only one variable, namely the nearest linear distance between two sites. However, construction costs could be affected by socio-economic variables, as suggested by macroeconomic theories. Using a commonly used set of LCAFs, the City Cost Indexes (CCI) by RSMeans, and the socio-economic variables included in the ESRI Community Sourcebook, this article provides several contributions to the body of knowledge. First, the accuracy of various spatial prediction methods in estimating LCAF values for un-sampled locations was evaluated and assessed with respect to spatial interpolation methods. Two regression-based prediction models were selected: a global regression analysis and a geographically weighted regression (GWR) analysis. Once these models were compared against interpolation methods, the results showed that GWR is the most appropriate way to model CCI as a function of multiple covariates. The outcome of GWR, for each covariate, was studied for all 48 states in the contiguous US. As a direct consequence of spatial non-stationarity, it was possible to discuss the influence of each covariate differently from state to state. In addition, the article includes a first attempt to determine whether the observed variability in cost index values could be, at least partially, explained by independent socio-economic variables. PMID:25018582
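As a baseline for comparison with the regression-based predictors the study recommends, the nearest-location practice generalizes to inverse-distance weighting over several sampled neighbors; the coordinates and index values below are hypothetical.

```python
import math

def idw(target, samples, power=2.0):
    """Inverse-distance-weighted estimate at target.

    samples: list of ((x, y), value) pairs for sampled locations.
    """
    num = den = 0.0
    for (x, y), value in samples:
        d = math.hypot(target[0] - x, target[1] - y)
        if d == 0:
            return value  # exact hit on a sampled location
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Hypothetical sampled cities with cost index values (not real CCI data).
sampled_cities = [((0.0, 0.0), 95.2), ((3.0, 1.0), 102.7), ((1.0, 4.0), 88.9)]
print(f"interpolated cost index: {idw((1.5, 1.5), sampled_cities):.1f}")
```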
NASA Technical Reports Server (NTRS)
1973-01-01
Flight tests of an avionics system that aids the pilot in making two-segment approaches for noise abatement are evaluated. The implications are discussed of equipping United's fleet of Boeing 727-200 aircraft with two-segment avionics for use down to Category 2 weather operating minima. The experience of incorporating two-segment approach avionics systems on two different aircraft is reported. The cost of installing dual two-segment approach systems is estimated to be $37,015 per aircraft, including parts, labor, and spares. This is based on the assumption that incremental out-of-service and training costs could be minimized by incorporating the system at the airframe overhaul cycle and including training in regular recurrent training. Accelerating the modification schedule could add up to 50 percent to the modification costs. Recurring costs of maintaining the installation are estimated to be of about the same magnitude as the potential recurring financial benefits due to fuel savings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collin, Blaise P.; Petti, David A.; Demkowicz, Paul A.
2016-04-07
Safety tests were conducted on fuel compacts from AGR-1, the first irradiation experiment of the Advanced Gas Reactor (AGR) Fuel Development and Qualification program, at temperatures ranging from 1600 to 1800 °C to determine fission product release at temperatures that bound reactor accident conditions. The PARFUME (PARticle FUel ModEl) code was used to predict the release of the fission products silver, cesium, strontium, and krypton from fuel compacts containing tristructural isotropic (TRISO) coated particles during 15 of these safety tests. Comparisons between PARFUME predictions and post-irradiation examination results of the safety tests were conducted on two types of AGR-1 compacts: compacts containing only intact particles and compacts containing one or more particles whose SiC layers failed during safety testing. In both cases, PARFUME globally over-predicted the experimental release fractions by several orders of magnitude: more than three (intact) and two (failed SiC) orders of magnitude for silver, more than three and up to two orders of magnitude for strontium, and up to two and more than one order of magnitude for krypton. The release of cesium from intact particles was also largely over-predicted (by up to five orders of magnitude), but its release from particles with failed SiC was only over-predicted by a factor of about 3. These over-predictions can be largely attributed to an over-estimation of the diffusivities used in the modeling of fission product transport in TRISO-coated particles. The integral release nature of the data makes it difficult to estimate the individual over-estimations in the kernel or each coating layer. Nevertheless, a tentative assessment of correction factors to these diffusivities was performed to enable a better match between the modeling predictions and the safety testing results. The method could only be successfully applied to silver and cesium. In the case of strontium, correction factors could not be assessed because potential release during the safety tests could not be distinguished from matrix content released during irradiation. Furthermore, in the case of krypton, all the coating layers are partly retentive and the available data did not allow the level of retention in individual layers to be determined, hence preventing derivation of any correction factors.
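Release calculations of the kind PARFUME performs are driven by diffusivities, which is why over-estimated diffusivities translate directly into over-predicted release fractions. The sketch below is not the PARFUME model; it is a classical equivalent-sphere (Booth-type) diffusion release fraction with assumed illustrative parameter values, included only to show how sensitively the release depends on the diffusion coefficient D.

```python
import numpy as np

def booth_release_fraction(D, a, t, n_terms=1000):
    """Fractional release from an equivalent sphere of radius a after time t.

    f(t) = 1 - (6/pi^2) * sum_n (1/n^2) exp(-n^2 pi^2 D t / a^2)
    """
    n = np.arange(1, n_terms + 1)
    tau = D * t / a**2
    return 1.0 - (6.0 / np.pi**2) * np.sum(np.exp(-n**2 * np.pi**2 * tau) / n**2)

a = 250e-6          # assumed kernel radius, m
t = 300 * 3600.0    # assumed 300 h heating test, s
for D in (1e-18, 1e-17, 1e-16):   # illustrative diffusivities, m^2/s
    f = booth_release_fraction(D, a, t)
    print(f"D = {D:.0e} m^2/s -> release fraction {f:.3e}")
```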
The impact of land use on estimates of pesticide leaching potential: Assessments and uncertainties
NASA Astrophysics Data System (ADS)
Loague, Keith
1991-11-01
This paper illustrates the magnitude of uncertainty that can exist in pesticide leaching assessments, due to data uncertainties, both between soil orders and within a single soil order. The current work differs from previous efforts because the impact of uncertainty in recharge estimates is considered. The examples are for diuron leaching in the Pearl Harbor Basin. The results clearly indicate that land use has a significant impact on both estimates of pesticide leaching potential and the uncertainties associated with those estimates. It appears that the regulation of agricultural chemicals in the future should include consideration of changing land use.
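Screening assessments of this kind often rank leaching potential with an attenuation factor (AF) index in which recharge appears explicitly, so uncertainty in recharge propagates directly into the index. The sketch below is a generic AF calculation, not code from this paper, and every parameter value is an assumption chosen for illustration.

```python
import numpy as np

def attenuation_factor(depth_m, q_m_per_yr, half_life_yr,
                       theta_fc, bulk_density, f_oc, koc):
    """Generic attenuation-factor (AF) leaching index.

    RF = 1 + rho_b * f_oc * Koc / theta_FC      (retardation factor)
    AF = exp(-0.693 * d * RF * theta_FC / (q * t_half))
    """
    rf = 1.0 + bulk_density * f_oc * koc / theta_fc
    return np.exp(-0.693 * depth_m * rf * theta_fc
                  / (q_m_per_yr * half_life_yr))

# Illustrative diuron-like parameters (assumed, not from the paper).
for label, q in [("low", 0.076), ("medium", 0.229), ("high", 0.356)]:  # m/yr
    af = attenuation_factor(depth_m=1.0, q_m_per_yr=q, half_life_yr=0.9,
                            theta_fc=0.3, bulk_density=1.3, f_oc=0.02,
                            koc=480.0)
    print(f"{label} recharge: AF = {af:.2e}")
```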
Cost Risk Analysis Based on Perception of the Engineering Process
NASA Technical Reports Server (NTRS)
Dean, Edwin B.; Wood, Darrell A.; Moore, Arlene A.; Bogart, Edward H.
1986-01-01
In most cost estimating applications at the NASA Langley Research Center (LaRC), it is desirable to present predicted cost as a range of possible costs rather than a single predicted cost. A cost risk analysis generates a range of cost for a project and assigns a probability level to each cost value in the range. Constructing a cost risk curve requires a good estimate of the expected cost of a project; it must also include a good estimate of the expected variance of the cost. Many cost risk analyses are based upon an expert's knowledge of the cost of similar projects in the past. In a common scenario, a manager or engineer, asked to estimate the cost of a project in his area of expertise, will gather historical cost data from a similar completed project. The cost of the completed project is adjusted using the perceived technical and economic differences between the two projects. This allows errors from at least three sources: the historical cost data may be in error by some unknown amount; the manager's evaluation of the new project and its similarity to the old project may be in error; and the factors used to adjust the cost of the old project may not correctly reflect the differences. Some risk analyses are based on untested hypotheses about the form of the statistical distribution that underlies the distribution of possible cost. The usual problem is not just to come up with an estimate of the cost of a project, but to predict the range of values into which the cost may fall and the level of confidence with which the prediction is made. Risk analysis techniques that assume the shape of the underlying cost distribution and derive the risk curve from a single estimate plus and minus some amount usually fail to take into account the actual magnitude of the uncertainty in cost due to technical factors in the project itself. This paper addresses a cost risk method that is based on parametric estimates of the technical factors involved in the project being costed. The engineering process parameters are elicited from the engineer/expert on the project and are based on that expert's technical knowledge. These are converted by a parametric cost model into a cost estimate. The method discussed makes no assumptions about the form of the distribution of possible costs, and is not tied to the analysis of previous projects, except through the expert calibrations performed by the parametric cost analyst.
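The mechanics of a cost risk curve can be illustrated with a small Monte Carlo sketch. This is not LaRC's parametric model; the triangular distributions for the technical drivers and the cost relation itself are stand-in assumptions, used only to show how sampled engineering parameters map into a cumulative cost-versus-confidence curve.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Assumed technical drivers, elicited as (low, mode, high) triangles.
mass_kg = rng.triangular(800, 1000, 1400, n)        # subsystem mass
complexity = rng.triangular(0.9, 1.0, 1.3, n)        # relative complexity
labor_rate = rng.triangular(90, 110, 140, n)         # $/hour

# Stand-in parametric cost relation (illustrative only).
hours = 50.0 * mass_kg**0.8 * complexity
cost_musd = hours * labor_rate / 1e6

# Cost risk curve: cost value vs. probability of not exceeding it.
for p in (0.1, 0.5, 0.8, 0.95):
    print(f"P(cost <= {np.quantile(cost_musd, p):6.2f} M$) = {p:.0%}")
```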
NASA Astrophysics Data System (ADS)
Tabak, M.
2016-10-01
There is a need to develop alternate energy sources in the coming century because fossil fuels will become depleted and their use may lead to global climate change. Inertial fusion can become such an energy source, but significant progress must be made before its promise is realized. The high-density approach to inertial fusion suggested by Nuckolls et al. leads to reaction chambers compatible with civilian power production. Methods to achieve the good control of hydrodynamic stability and implosion symmetry required to achieve these high fuel densities will be discussed. Fast Ignition, a technique that achieves fusion ignition by igniting fusion fuel after it is assembled, will be described along with its gain curves. Fusion costs of energy for conventional hotspot ignition will be compared with those of Fast Ignition, and their capital costs compared with those of advanced fission plants. Finally, techniques that may improve possible Fast Ignition gains by an order of magnitude and reduce driver scales by an order of magnitude below conventional ignition requirements are described.
Quality of Life and Cost of Care at the End of Life: The Role of Advance Directives
Garrido, Melissa M.; Balboni, Tracy A.; Maciejewski, Paul K.; Bao, Yuhua; Prigerson, Holly G.
2014-01-01
Context Advance directives (ADs) are expected to improve patients’ end-of-life outcomes, but retrospective analyses, surrogate recall of patients’ preferences, and selection bias have hampered efforts to determine ADs’ effects on patient outcomes. Objectives To examine associations among ADs, quality of life, and estimated costs of care in the week before death. Methods We used prospective data from interviews of 336 patients with advanced cancer and their caregivers, and analyzed patient baseline interview and caregiver and provider post-mortem evaluation data from the Coping with Cancer study. Cost estimates were from the Healthcare Cost and Utilization Project Nationwide Inpatient Sample and published Medicare payment rates and cost estimates. Outcomes were quality of life (range 0-10) and estimated costs of care received in the week before death. Because patient end-of-life care preferences influence both AD completion and care use, analyses were stratified by preferences regarding heroic end-of-life measures (everything possible to remain alive). Results Most patients did not want heroic measures (76%). Do-not-resuscitate (DNR) orders were associated with higher quality of life (β=0.75, standard error=0.30, P=0.01) across the entire sample. There were no statistically significant relationships between DNR orders and outcomes among patients when we stratified by patient preference, or between living wills/durable powers of attorney and outcomes in any of the patient groups. Conclusion The associations between DNR orders and better quality of life in the week before death indicate that documenting preferences against resuscitation in medical orders may be beneficial to many patients. PMID:25498855
Hanly, Paul; Timmons, Aileen; Walsh, Paul M; Sharp, Linda
2012-05-01
Productivity costs constitute a substantial proportion of the total societal costs associated with cancer. We compared the results of applying two different analytical methods--the traditional human capital approach (HCA) and the emerging friction cost approach (FCA)--to estimate breast and prostate cancer productivity costs in Ireland in 2008. Data from a survey of breast and prostate cancer patients were combined with population-level survival estimates and a national wage data set to calculate costs of temporary disability (cancer-related work absence), permanent disability (workforce departure, reduced working hours), and premature mortality. For breast cancer, productivity costs per person using the HCA were € 193,425 and those per person using the FCA were € 8,103; for prostate cancer, the comparable estimates were € 109,154 and € 8,205, respectively. The HCA generated higher costs for younger patients (breast cancer) because of greater lifetime earning potential. In contrast, the FCA resulted in higher productivity costs for older male patients (prostate cancer) commensurate with higher earning capacity over a shorter time period. Reduced working hours post-cancer was a key driver of total HCA productivity costs. HCA costs were sensitive to assumptions about discount and growth rates. FCA costs were sensitive to assumptions about the friction period. The magnitude of the estimates obtained in this study illustrates the importance of including productivity costs when considering the economic impact of illness. Vastly different results emerge from the application of the HCA and the FCA, and this finding emphasizes the importance of choosing the study perspective carefully and being explicit about assumptions that underpin the methods. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
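The gap between the two methods comes from what counts as lost production: the human capital approach (HCA) values earnings lost until expected retirement, while the friction cost approach (FCA) counts only earnings over the period needed to replace the worker. A minimal sketch with assumed wages, exit ages, discount rate, and friction period (none taken from the Irish data) follows.

```python
def hca_cost(annual_wage, age_at_exit, retirement_age=65, discount=0.04):
    """Human capital approach: discounted earnings to retirement."""
    years = max(0, retirement_age - age_at_exit)
    return sum(annual_wage / (1 + discount) ** t for t in range(years))

def fca_cost(annual_wage, friction_period_years=0.25):
    """Friction cost approach: earnings over the replacement period only."""
    return annual_wage * friction_period_years

wage = 35_000.0   # assumed annual wage, EUR
for age in (45, 60):
    print(f"exit at {age}: HCA = {hca_cost(wage, age):9.0f} EUR, "
          f"FCA = {fca_cost(wage):7.0f} EUR")
```

The HCA figure shrinks as the exit age approaches retirement, while the FCA figure depends only on the wage and the friction period, which is the pattern reported above.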
Facile fabrication of CNT-based chemical sensor operating at room temperature
NASA Astrophysics Data System (ADS)
Sheng, Jiadong; Zeng, Xian; Zhu, Qi; Yang, Zhaohui; Zhang, Xiaohua
2017-12-01
This paper describes a simple, low cost and effective route to fabricate CNT-based chemical sensors, which operate at room temperature. Firstly, the incorporation of silk fibroin in vertically aligned CNT arrays (CNTA) obtained through a thermal chemical vapor deposition (CVD) method makes feasible the direct removal of CNT arrays from substrates without any rigorous acid or sonication treatment. Through a simple one-step in situ polymerization of anilines, the functionalization of CNT arrays with polyaniline (PANI) significantly improves the sensing performance of CNT-based chemical sensors in detecting ammonia (NH3) and hydrogen chloride (HCl) vapors. Chemically modified CNT arrays also show responses to organic vapors like menthol, ethyl acetate and acetone. Although the detection limits of chemically modified CNT-based chemical sensors are of the same order of magnitude as those reported in previous studies, these CNT-based chemical sensors show advantages of simplicity, low cost and energy efficiency in preparation and fabrication of devices. Additionally, a linear relationship between the relative sensitivity and the concentration of analyte makes precise estimation of the concentrations of trace chemical vapors possible.
Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments
NASA Astrophysics Data System (ADS)
Munsky, Brian; Shepherd, Douglas
2014-03-01
Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.
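Full-distribution analyses of mRNA counts typically mean solving a chemical master equation rather than matching only means and variances. The sketch below is a generic birth-death (constitutive transcription and first-order degradation) model with assumed rates, not the THP1 measurements; it builds a truncated master-equation generator and propagates the full copy-number distribution with a matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

k_tx, k_deg = 10.0, 0.5        # assumed transcription/degradation rates, 1/min
n_max = 60                     # truncation of the copy-number state space

# Generator A of the birth-death chemical master equation, dp/dt = A p.
A = np.zeros((n_max + 1, n_max + 1))
for n in range(n_max + 1):
    if n < n_max:
        A[n + 1, n] += k_tx          # birth: n -> n+1
        A[n, n] -= k_tx
    if n > 0:
        A[n - 1, n] += k_deg * n     # death: n -> n-1
        A[n, n] -= k_deg * n

p0 = np.zeros(n_max + 1)
p0[0] = 1.0                          # start with zero transcripts
p_t = expm(A * 10.0) @ p0            # full distribution after 10 minutes

print("mean copies:", (np.arange(n_max + 1) * p_t).sum())
print("P(n = 20):", p_t[20])
```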
DeFelice, Nicholas B; Johnston, Jill E; Gibson, Jacqueline MacDonald
2015-08-18
The magnitude and spatial variability of acute gastrointestinal illness (AGI) cases attributable to microbial contamination of U.S. community drinking water systems are not well characterized. We compared three approaches (drinking water attributable risk, quantitative microbial risk assessment, and population intervention model) to estimate the annual number of emergency department visits for AGI attributable to microorganisms in North Carolina community water systems. All three methods used 2007-2013 water monitoring and emergency department data obtained from state agencies. The drinking water attributable risk method, which was the basis for previous U.S. Environmental Protection Agency national risk assessments, estimated that 7.9% of annual emergency department visits for AGI are attributable to microbial contamination of community water systems. However, the other methods' estimates were more than 2 orders of magnitude lower, each attributing 0.047% of annual emergency department visits for AGI to community water system contamination. The differences in results between the drinking water attributable risk method, which has been the main basis for previous national risk estimates, and the other two approaches highlight the need to improve methods for estimating endemic waterborne disease risks, in order to prioritize investments to improve community drinking water systems.
Nielsen, Martha G.
2002-01-01
In 2002, the U.S. Geological Survey, in cooperation with the town of Bar Harbor, Maine, and the National Park Service, conducted a study to assess the quantity of water in the bedrock units underlying Mt. Desert Island, and to estimate water use, recharge, and dilution of nutrients from domestic septic systems overlying the bedrock units in several watersheds in rural Bar Harbor. Water quantity was calculated as the static volume of water in the top 600 feet of saturated thickness of the bedrock units. Volumes of water were estimated on the basis of effective fracture porosities for the five different rock types found on Mt. Desert Island. Values of porosity for the various bedrock units from the literature range over more than five orders of magnitude, although the possible range in porosity for most individual rock types is on the order of three orders of magnitude. The static volume of water in the various units may range from a low of 4,000 gallons per acre for intrusive igneous rocks (primarily granites) to 20 million gallons per acre for the Cranberry Island Volcanics, but given the range in porosity estimates, these numbers can vary by orders of magnitude. Water-use data for the municipal water supply in the Town of Bar Harbor (1998-2000) indicate that residential usage averages 225 gallons per household per day. Recharge to the bedrock units in rural Bar Harbor was bracketed using low, medium, and high estimates of 3, 9, and 14 inches per year, respectively. Water use in 2001 was about 2.5 percent of the total estimated medium recharge (9 inches per year) in the study area. Dilution of nitrogen in septic effluent discharging to the bedrock aquifer was evaluated for the development density in 2001. On the basis of an assumed concentration of 47 mg/L of nitrogen in septic system discharge, dilution factors in populated rural Bar Harbor watersheds ranged from 4 to 151 for the housing density in 2001. Given that ground water in this fractured bedrock system mixes slowly, the fully mixed average nitrate-nitrogen concentrations in ground water estimated for the watersheds ranged from 0.1 to 11 mg/L.
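The dilution factors above follow from a simple water balance: watershed recharge dilutes the nitrogen load discharged by septic systems. The sketch below uses the abstract's 47 mg/L effluent concentration and 225 gal/household/day usage, but the household count, watershed area, and unit-conversion bookkeeping are assumptions for illustration.

```python
GAL_TO_L = 3.785
IN_TO_M = 0.0254
ACRE_TO_M2 = 4046.9

def mixed_nitrate(households, watershed_acres, recharge_in_per_yr,
                  effluent_mg_per_l=47.0, use_gal_per_day=225.0):
    """Fully mixed nitrate-N concentration from septic effluent dilution."""
    effluent_l_yr = households * use_gal_per_day * GAL_TO_L * 365.0
    recharge_l_yr = (watershed_acres * ACRE_TO_M2
                     * recharge_in_per_yr * IN_TO_M * 1000.0)  # m^3 -> L
    dilution = (recharge_l_yr + effluent_l_yr) / effluent_l_yr
    return effluent_mg_per_l / dilution, dilution

conc, df = mixed_nitrate(households=100, watershed_acres=500,
                         recharge_in_per_yr=9)   # medium recharge estimate
print(f"dilution factor ~{df:.0f}, mixed nitrate-N ~{conc:.2f} mg/L")
```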
Fast Plane Wave 2-D Vector Flow Imaging Using Transverse Oscillation and Directional Beamforming.
Jensen, Jonas; Villagomez Hoyos, Carlos Armando; Stuart, Matthias Bo; Ewertsen, Caroline; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt
2017-07-01
Several techniques can estimate the 2-D velocity vector in ultrasound. Directional beamforming (DB) estimates blood flow velocities with a higher precision and accuracy than transverse oscillation (TO), but at the cost of a high beamforming load when estimating the flow angle. In this paper, it is proposed to use TO to estimate an initial flow angle, which is then refined in a DB step. Velocity magnitude is estimated along the flow direction using cross correlation. It is shown that the suggested TO-DB method can improve the performance of velocity estimates compared with TO, with a beamforming load that is 4.6 times larger than for TO and seven times smaller than for conventional DB. Steered plane wave transmissions are employed for high frame rate imaging, and parabolic flow with a peak velocity of 0.5 m/s is simulated in straight vessels at beam-to-flow angles from 45° to 90°. The TO-DB method estimates the angle with a bias and standard deviation (SD) less than 2°, and the SD of the velocity magnitude is less than 2%. When using only TO, the SD of the angle ranges from 2° to 17° and that of the velocity magnitude up to 7%. Bias of the velocity magnitude is within 2% for TO and slightly larger but within 4% for TO-DB. The same trends are observed in measurements, although with a slightly larger bias. Simulations of realistic flow in a carotid bifurcation model provide visualization of complex flow, and the spread of velocity magnitude estimates is 7.1 cm/s for TO-DB, while it is 11.8 cm/s using only TO. However, velocities for TO-DB are underestimated at peak systole, as indicated by a regression value of 0.97 for TO and 0.85 for TO-DB. An in vivo scan of the carotid bifurcation is used for vector velocity estimation using TO and TO-DB. The SD of the velocity profile over a cardiac cycle is 4.2% for TO and 3.2% for TO-DB.
Power, Propulsion, and Communications for Microspacecraft Missions
NASA Technical Reports Server (NTRS)
deGroot, W. A.; Maloney, T. M.; Vanderaar, M. J.
1998-01-01
The development of small-sized, low-weight spacecraft should lead to reduced scientific mission costs by lowering fabrication and launch costs. An order of magnitude reduction in spacecraft size can be obtained by miniaturizing components. Additional reductions in spacecraft weight, size, and cost can be obtained by utilizing the synergy that exists between different spacecraft systems. The state of the art of three major systems (spacecraft power, propulsion, and communications) is discussed. Potential strategies to exploit the synergy between these systems and/or the payload are identified. Benefits of several of these synergies are discussed.
Gear materials for high-production light-duty service
NASA Technical Reports Server (NTRS)
Townsend, D. P.
1973-01-01
The selection of a material for high volume, low cost gears requires careful consideration of all the requirements and the processes used to manufacture the gears. The wrong choice in material selection could very well mean the difference between success and failure. A summary of the costs that might be expected for different materials and processes is presented; it can be seen that the cost can span nearly three orders of magnitude, from the molded plastic gear to the machined gear, with stamped and powder metal gears falling between these extremes.
Applying stochastic small-scale damage functions to German winter storms
NASA Astrophysics Data System (ADS)
Prahl, B. F.; Rybski, D.; Kropp, J. P.; Burghoff, O.; Held, H.
2012-03-01
Analyzing insurance-loss data, we derive stochastic storm-damage functions for residential buildings. On the district level we fit power-law relations between daily loss and maximum wind speed, typically spanning more than 4 orders of magnitude. The estimated exponents for 439 German districts roughly range from 8 to 12. In addition, we find correlations among the parameters and socio-demographic data, which we employ in a simplified parametrization of the damage function with just 3 independent parameters for each district. A Monte Carlo method is used to generate loss estimates and confidence bounds of daily and annual storm damages in Germany. Our approach reproduces the annual progression of winter storm losses and enables estimation of daily losses over a wide range of magnitudes.
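Because the damage functions are power laws, loss ∝ v^β, the exponent can be estimated by ordinary least squares in log-log space. The numpy sketch below does this on synthetic data; the true exponent of 10 and the noise level are assumptions chosen to resemble the quoted range of 8 to 12, not the German insurance data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily (max wind speed, loss) pairs following loss ~ v^beta.
beta_true = 10.0
v = rng.uniform(15, 40, size=300)                    # m/s
loss = 1e-8 * v**beta_true * rng.lognormal(0, 0.5, 300)

# Fit log(loss) = log(c) + beta * log(v) by ordinary least squares.
A = np.column_stack([np.ones_like(v), np.log(v)])
coef, *_ = np.linalg.lstsq(A, np.log(loss), rcond=None)
print(f"estimated exponent beta = {coef[1]:.1f} (true value {beta_true})")
```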
ERIC Educational Resources Information Center
Hill, M. Anne
1989-01-01
Looks at the simultaneous labor force participation and hours of work decisions for Japanese wives, both employees and family workers. Although the estimated aggregate wage and income fluctuations for employees are somewhat higher than previous estimates for the United States, they are of the same order of magnitude. (JOW)
de Lima, Camila; Salomão Helou, Elias
2018-01-01
Iterative methods for tomographic image reconstruction have the computational cost of each iteration dominated by the computation of the (back)projection operator, which takes roughly O(N^3) floating point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations to achieve acceptable images, making these techniques impractical for high-resolution images. Techniques have been developed in the literature to reduce the computational cost of the (back)projection operator to O(N^2 log N) flops. Also, incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N^2 log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron-light-illuminated data.
Mould-Quevedo, Joaquín; Contreras-Hernández, Iris; Verduzco, Wáscar; Mejía-Aranguré, Juan Manuel; Garduño-Espinosa, Juan
2009-07-01
Estimation of the economic costs of schizophrenia is a fundamental tool for a better understanding of the magnitude of this health problem. The aim of this study was to estimate the costs and effectiveness of five antipsychotic treatments (ziprasidone, olanzapine, risperidone, haloperidol, and clozapine), which are included in the national formulary at the Instituto Mexicano del Seguro Social, through a simulation model. Type of economic evaluation: complete cost-effectiveness economic evaluation. Costs considered: direct medical costs. Time horizon: 1 year. Effectiveness measure: number of months free of psychotic symptoms. Methods: to estimate cost-effectiveness, a Markov model was constructed and a Monte Carlo simulation was carried out. Effectiveness: the results of the Markov model showed that the antipsychotic with the highest number of months free of psychotic symptoms was ziprasidone (mean 9.2 months). The median annual costs for patients using ziprasidone included in the hypothetical cohort were 194,766.6 Mexican pesos (MXP) (95% CI, 26,515.6-363,017.6 MXP), with an exchange rate of 1 € = 17.36 MXP. The highest costs in the probabilistic analysis were estimated for clozapine treatment (260,236.9 MXP). Through a probabilistic analysis, ziprasidone showed the lowest costs and the highest number of months free of psychotic symptoms, and was also the most cost-effective antipsychotic observed in acceptability curves and net monetary benefits. Copyright © 2009 Sociedad Española de Psiquiatría and Sociedad Española de Psiquiatría Biológica. Published by Elsevier Espana. All rights reserved.
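A Markov cost-effectiveness model of this type tracks a cohort through health states month by month, accumulating costs and months free of psychotic symptoms. The sketch below is a generic two-treatment, three-state illustration; the transition probabilities and state costs are invented for demonstration and are not the IMSS data.

```python
import numpy as np

# States: 0 = symptom-free, 1 = psychotic relapse, 2 = hospitalized.
def simulate(P, monthly_cost, months=12, start=0):
    """Cohort Markov model: expected cost and symptom-free months."""
    dist = np.zeros(3)
    dist[start] = 1.0
    cost = free_months = 0.0
    for _ in range(months):
        dist = dist @ P                  # advance the cohort one month
        cost += dist @ monthly_cost
        free_months += dist[0]
    return cost, free_months

# Assumed monthly transition matrices for two hypothetical antipsychotics.
P_a = np.array([[0.90, 0.07, 0.03],
                [0.40, 0.45, 0.15],
                [0.30, 0.30, 0.40]])
P_b = np.array([[0.85, 0.10, 0.05],
                [0.35, 0.45, 0.20],
                [0.25, 0.30, 0.45]])
cost_state = np.array([6_000.0, 12_000.0, 45_000.0])   # MXP/month, assumed

for name, P in (("drug A", P_a), ("drug B", P_b)):
    c, f = simulate(P, cost_state)
    print(f"{name}: annual cost {c:,.0f} MXP, symptom-free months {f:.1f}")
```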
Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students
ERIC Educational Resources Information Center
Malone, Amelia Schneider; Fuchs, Lynn S.
2015-01-01
The 3 purposes of this study were to: (a) describe fraction ordering errors among at-risk 4th-grade students; (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors; and (c) examine the effect of students' ability to explain comparing problems on the probability of…
Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students
ERIC Educational Resources Information Center
Malone, Amelia S.; Fuchs, Lynn S.
2017-01-01
The three purposes of this study were to (a) describe fraction ordering errors among at-risk fourth grade students, (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors, and (c) examine the effect of students' ability to explain comparing problems on the probability…
2016-09-01
Thanks to the elegant reciprocal geometry of the Sagnac interferometer, many sources of drift that would be present in other polarimetry techniques were... interferometers, and is 2 orders of magnitude better than competing polarimetry-based Faraday techniques. Couple a Rb vapor cell to the Sagnac interferometer
New Frontiers in Networking with Emphasis on Defense Applications
2016-06-01
through its own experiences. d. The presence of “elephants” in the network is triggered partly by big data and partly by high-resolution videos... transporting elephants and found even for this very limited mode of operation, good architectures with orders of magnitude rate and cost improvements are
Near field Rayleigh wave on soft porous layers.
Geebelen, N; Boeckx, L; Vermeir, G; Lauriks, W; Allard, J F; Dazel, O
2008-03-01
Simulations performed for a typical semi-infinite reticulated plastic foam saturated by air show that, at distances less than three Rayleigh wavelengths from the area of mechanical excitation by a circular source, the normal frame velocity is close to the Rayleigh pole contribution. Simulated measurements show that a good order of magnitude estimate of the phase speed and damping can be obtained at small distances from the source. Simulations are also performed for layers of finite thickness, where the phase velocity and damping depend on frequency. They indicate that the normal frame velocity at small distances from the source is always close to the Rayleigh pole contribution and that a good order of magnitude estimate of the phase speed of the Rayleigh wave can be obtained at small distances from the source. Furthermore, simulations show that precise measurements of the damping of the Rayleigh wave need larger distances. Measurements performed on a layer of finite thickness confirm these trends.
ERIC Educational Resources Information Center
Sunal, Dennis W., Ed.; Tracy, Dyanne M., Ed.
1992-01-01
Presents activities to supplement lessons on length and mass measurement or as part of a unit on atoms or orders of magnitude. Provides a lesson plan using aluminum foil to estimate unit measures, calculate the foil's thickness, and do an atom count. (MDH)
Reggeti, Mariana; Romero, Emilse; Eblen-Zajjur, Antonio
2016-06-01
There is a risk of an avian influenza AH5N1 virus pandemic. To estimate the magnitude and impact of an AH5N1 pandemic in areas of Latin America, in order to design interventions and reduce morbidity and mortality, the InfluSim program was used to simulate a highly pathogenic avian AH5N1 virus epidemic outbreak with human-to-human transmission in Valencia, Venezuela. We estimated the day of the maximal number of cases, the number of moderately and severely ill patients, exposed individuals, deaths, and associated costs for 5 different interventions: absence of any intervention; implementation of antiviral treatment; reduction of 20% in general population contacts; closure of 20% of educational institutions; and reduction of 50% in massive public gatherings. Simulation parameters were: population 829,856 persons; infection risk 6-47%; contagiousness index R0 of 2.5; relative contagiousness 90%; overall lethality 64.1%; and costs according to the official basic budget. For an outbreak lasting 200 days, direct and indirect deaths under these intervention strategies would be 29,907; 29,900; 9,701; 29,295; and 14,752, respectively. Costs would follow a similar trend. A reduction of 20% in general population contacts results in a significant reduction of up to 68% of cases. The outbreak would collapse the health care system. Antiviral treatment would not be efficient during the outbreak. Interpersonal contact reduction proved to be the best sanitary measure to control a theoretical AH5N1 epidemic outbreak.
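The effect of a contact-reduction intervention of the kind evaluated here can be illustrated with a minimal SIR sketch. This is not InfluSim, which is a much more detailed compartment model; the population size and an R0 of 2.5 are taken loosely from the abstract, while the infectious period and everything else are assumptions.

```python
def sir_peak_and_total(r0, pop=829_856, inf_days=4.0, days=200):
    """Daily-step SIR model; returns peak infectious count and total cases."""
    gamma = 1.0 / inf_days
    beta = r0 * gamma
    s, i, r = pop - 1.0, 1.0, 0.0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i / pop
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak, pop - s

for label, r0 in (("no intervention", 2.5),
                  ("20% contact reduction", 2.5 * 0.8)):
    peak, total = sir_peak_and_total(r0)
    print(f"{label}: peak infectious ~{peak:,.0f}, total cases ~{total:,.0f}")
```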
Bonilla, M.G.; Mark, R.K.; Lienkaemper, J.J.
1984-01-01
In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which necessarily make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation with the variance resulting from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Subdivision of the data results in too few data for some fault types and regions, and for these only regressions using all of the data as a group are reported. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating Ms with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set. Regression of Ms on rupture area did not result in a marked improvement over regressions that did not involve rupture area. Because no subduction-zone earthquakes are included in this study, the reported results do not apply to such zones.
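Correlations of this kind are ordinary least squares fits of magnitude against the logarithm of rupture length or displacement, e.g. Ms = a + b log10(L). The sketch below fits such a relation to synthetic data; the coefficients, scatter, and sample are assumptions for illustration, not the values tabulated by Bonilla and others.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic surface rupture lengths (km) and magnitudes, Ms = a + b*log10(L).
a_true, b_true, sigma = 6.0, 1.1, 0.3          # assumed, for illustration
L_km = 10 ** rng.uniform(0.5, 2.5, size=58)    # ~3 km to ~300 km
Ms = a_true + b_true * np.log10(L_km) + rng.normal(0, sigma, L_km.size)

X = np.column_stack([np.ones_like(L_km), np.log10(L_km)])
(a_hat, b_hat), res, *_ = np.linalg.lstsq(X, Ms, rcond=None)
s = np.sqrt(res[0] / (L_km.size - 2))          # residual standard deviation

print(f"Ms = {a_hat:.2f} + {b_hat:.2f} log10(L), sd = {s:.2f}")
print(f"predicted Ms for a 100 km rupture: {a_hat + b_hat * 2:.1f} +/- {s:.1f}")
```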
Preconditioned conjugate gradient wave-front reconstructors for multiconjugate adaptive optics.
Gilles, Luc; Ellerbroek, Brent L; Vogel, Curtis R
2003-09-10
Multiconjugate adaptive optics (MCAO) systems with 10^4 to 10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of adaptive optics degrees of freedom. We develop scalable open-loop iterative sparse matrix implementations of minimum variance wave-front reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method with an efficient preconditioner, whose block structure is defined by the atmospheric turbulent layers, very much like the layer-oriented MCAO algorithms of current interest. Two cost-effective preconditioners are investigated: a multigrid solver and a simpler block symmetric Gauss-Seidel (BSGS) sweep. Both options require off-line sparse Cholesky factorizations of the diagonal blocks of the matrix system. The cost to precompute these factors scales approximately as the three-halves power of the number of estimated phase grid points per atmospheric layer, and their average update rate is typically of the order of 10^-2 Hz, i.e., 4-5 orders of magnitude lower than the typical 10^3 Hz temporal sampling rate. All other computations scale almost linearly with the total number of estimated phase grid points. We present numerical simulation results to illustrate algorithm convergence. Convergence rates of both preconditioners are similar, regardless of measurement noise level, indicating that the layer-oriented BSGS sweep is as effective as the more elaborate multiresolution preconditioner.
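The core of the method is conjugate gradient iteration accelerated by a preconditioner. The sketch below uses a generic 2-D Laplacian as a stand-in system matrix (an assumption; it is not a wavefront reconstruction matrix) and a pointwise symmetric Gauss-Seidel preconditioner in the spirit of the paper's block symmetric Gauss-Seidel sweep, to show how preconditioning cuts the CG iteration count.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D Laplacian as a stand-in for a wavefront reconstruction system matrix.
n = 64
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsr()
b = np.random.default_rng(3).standard_normal(n * n)

def cg_iterations(M=None):
    it = {"n": 0}
    def cb(_):
        it["n"] += 1
    spla.cg(A, b, M=M, maxiter=5000, callback=cb)
    return it["n"]

# Symmetric Gauss-Seidel preconditioner, M = (D+L) D^-1 (D+U),
# applied via two sparse triangular solves per CG iteration.
D = sp.diags(A.diagonal())
lower = sp.tril(A).tocsr()          # D + L
upper = sp.triu(A).tocsr()          # D + U

def sgs_solve(r):
    y = spla.spsolve_triangular(lower, r, lower=True)
    return spla.spsolve_triangular(upper, D @ y, lower=False)

M = spla.LinearOperator(A.shape, sgs_solve)
print("CG iterations, unpreconditioned:", cg_iterations())
print("CG iterations, SGS-preconditioned:", cg_iterations(M))
```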
Pissadaki, Eleftheria K.; Bolam, J. Paul
2013-01-01
Dopamine neurons of the substantia nigra pars compacta (SNc) are uniquely sensitive to degeneration in Parkinson's disease (PD) and its models. Although a variety of molecular characteristics have been proposed to underlie this sensitivity, one possible contributory factor is their massive, unmyelinated axonal arbor that is orders of magnitude larger than other neuronal types. We suggest that this puts them under such a high energy demand that any stressor that perturbs energy production leads to energy demand exceeding supply and subsequent cell death. One prediction of this hypothesis is that those dopamine neurons that are selectively vulnerable in PD will have a higher energy cost than those that are less vulnerable. We show here, through the use of a biology-based computational model of the axons of individual dopamine neurons, that the energy cost of action potential propagation and recovery of the membrane potential increases with the size and complexity of the axonal arbor according to a power law. Thus SNc dopamine neurons, particularly in humans, whose axons we estimate to give rise to more than 1 million synapses and have a total length exceeding 4 m, are at a distinct disadvantage with respect to energy balance, which may be a factor in their selective vulnerability in PD. PMID:23515615
Bonilla, Manuel G.; Mark, Robert K.; Lienkaemper, James J.
1984-01-01
In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation in which the variance results primarily from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating Ms with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set. Regression of Ms on rupture area did not result in a marked improvement over regressions that did not involve rupture area. Because no subduction-zone earthquakes are included in this study, the reported results do not apply to such zones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Xiaoliang; Nie, Zimin; Luo, Qingtao
Driven by the motivation of searching for low-cost membrane alternatives, a novel nanoporous polytetrafluoroethylene/silica composite separator has been prepared and evaluated for its use in the all-vanadium mixed-acid redox flow battery. This separator, consisting of silica particles enmeshed in a polytetrafluoroethylene fibril matrix, has no ion exchange capacity and is featured with unique nanoporous structures, which function as the ion transport channels in redox flow battery operation, with an average pore size of 38 nm and a porosity of 48%. This separator has produced excellent electrochemical performance in the all-vanadium mixed-acid system, with energy efficiency comparable to the Nafion membrane and superior rate capability and temperature tolerance. The separator also demonstrates an exceptional capacity retention capability over extended cycling, offering additional operational latitude towards conveniently mitigating the capacity decay that is inevitable for Nafion. Because of the inexpensive raw materials and simple preparation protocol, the separator is particularly low-cost, estimated to be at least an order of magnitude less expensive than Nafion. Combined with the proven chemical stability due to the same backbone material as Nafion, this separator possesses a good combination of critical membrane requirements and shows great potential to promote market penetration of the all-vanadium redox flow battery by enabling significant reduction of capital and cycle costs.
Cost of photovoltaic energy systems as determined by balance-of-system costs
NASA Technical Reports Server (NTRS)
Rosenblum, L.
1978-01-01
The effect of the balance-of-system (BOS), i.e., the total system less the modules, on photovoltaic energy system costs is discussed for multikilowatt, flat-plate systems. Present BOS costs are in the range of 10 to 16 dollars per peak watt (1978 dollars). BOS costs represent approximately 50% of total system cost. The possibility of future BOS cost reduction is examined. It is concluded that, given the nature of BOS costs and the lack of a comprehensive national effort focused on cost reduction, it is unlikely that BOS costs will decline greatly in the next several years. This prognosis is contrasted with the expectations of the Department of Energy National Photovoltaic Program goals and pending legislation in the Congress, which would require a BOS cost reduction of an order of magnitude or more by the mid-1980s.
Toward the 1,000 dollars human genome.
Bennett, Simon T; Barnes, Colin; Cox, Anthony; Davies, Lisa; Brown, Clive
2005-06-01
Revolutionary new technologies, capable of transforming the economics of sequencing, are providing an unparalleled opportunity to analyze human genetic variation comprehensively at the whole-genome level within a realistic timeframe and at affordable costs. Current estimates suggest that it would cost somewhere in the region of 30 million US dollars to sequence an entire human genome using Sanger-based sequencing, and on one machine it would take about 60 years. Solexa is widely regarded as a company with the necessary disruptive technology to be the first to achieve the ultimate goal of the so-called 1,000 dollars human genome - the conceptual cost-point needed for routine analysis of individual genomes. Solexa's technology is based on completely novel sequencing chemistry capable of sequencing billions of individual DNA molecules simultaneously, a base at a time, to enable highly accurate, low cost analysis of an entire human genome in a single experiment. When applied over a large enough genomic region, these new approaches to resequencing will enable the simultaneous detection and typing of known, as well as unknown, polymorphisms, and will also offer information about patterns of linkage disequilibrium in the population being studied. Technological progress, leading to the advent of single-molecule-based approaches, is beginning to dramatically drive down costs and increase throughput to unprecedented levels, each being several orders of magnitude better than that which is currently available. A new sequencing paradigm based on single molecules will be faster, cheaper and more sensitive, and will permit routine analysis at the whole-genome level.
Large-scale terrestrial solar cell power generation cost: A preliminary assessment
NASA Technical Reports Server (NTRS)
Spakowski, A. E.; Shure, L. I.
1972-01-01
A cost study was made to assess the potential of the large-scale use of solar cell power for terrestrial applications. The incentive is the attraction of a zero-pollution source of power for wide-scale use. Unlike many other concepts for low-pollution power generation, even thermal pollution is avoided since only the incident solar flux is utilized. To provide a basis for comparison and a perspective for evaluation, the pertinent technology was treated in two categories: current and optimistic. Factors considered were solar cells, array assembly, power conditioning, site preparation, buildings, maintenance, and operation. The capital investment was assumed to be amortized over 30 years. The useful life of the solar cell array was assumed to be 10 years, and the cases of zero and 50-percent performance degradation were considered. Land costs, taxes, and profits were not included in this study because it was found too difficult to provide good generalized estimates of these items. On the basis of the factors considered, it is shown that even for optimistic projections of technology, electric power from large-scale terrestrial use of solar cells is approximately two to three orders of magnitude more costly than current electric power generation from either fossil or nuclear fuel powerplants. For solar cell power generation to be a viable competitor on a cost basis, technological breakthroughs would be required in both solar cell and array fabrication and in site preparation.
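The study's framing, capital amortized over 30 years with a 10-year array life, lends itself to a simple capital-recovery calculation. The sketch below is generic; the costs, discount rate, and capacity factor are assumed illustrative values, not figures from the 1972 study.

```python
def capital_recovery_factor(rate, years):
    """Annual payment per dollar of capital amortized over `years`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

# Assumed illustrative inputs (not from the study).
array_cost = 2_000_000.0      # $, replaced every 10 years
bos_cost = 1_000_000.0        # $, lasts the full 30 years
rate = 0.08                   # assumed discount rate
peak_kw = 1_000.0
capacity_factor = 0.2         # average output as a fraction of peak

annual_cost = (bos_cost * capital_recovery_factor(rate, 30)
               + array_cost * capital_recovery_factor(rate, 10))
annual_kwh = peak_kw * capacity_factor * 8760.0
print(f"levelized cost: {annual_cost / annual_kwh:.2f} $/kWh")
```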
Alignment-stabilized interference filter-tuned external-cavity quantum cascade laser.
Kischkat, Jan; Semtsiv, Mykhaylo P; Elagin, Mikaela; Monastyrskyi, Grygorii; Flores, Yuri; Kurlov, Sergii; Peters, Sven; Masselink, W Ted
2014-12-01
A passively alignment-stabilized external cavity quantum cascade laser (EC-QCL) employing a "cat's eye"-type retroreflector and an ultra-narrowband transmissive interference filter for wavelength selection is demonstrated and experimentally investigated. Compared with conventional grating-tuned EC-QCLs, the setup is nearly two orders of magnitude more stable against misalignment of the components, and spectral fluctuation is reduced by one order of magnitude, allowing for a simultaneously lightweight and fail-safe construction suitable for applications outdoors and in space. It also allows for a substantially greater level of miniaturization and cost reduction. These advantages fit in well with the general properties of modern QCLs in the promise to deliver useful and affordable mid-infrared light sources for a variety of spectroscopic and imaging applications.
Measurements of a Lee Wave in the Southern Ocean: Energy and Momentum Fluxes and Mixing
NASA Astrophysics Data System (ADS)
Cusack, J. M.; Naveira Garabato, A.; Smeed, D.; Girton, J. B.
2016-02-01
Lee waves, internal waves generated by stratified flow over topographic features, are thought to break and generate a significant proportion of the turbulent mixing required to close the abyssal overturning circulation. A lack of observations means that there is large uncertainty in the magnitude of the contribution that lee waves make to turbulent transformations, as well as in their importance in local and global momentum and energy budgets. Two EM-APEX profiling floats deployed in the Drake Passage during the Diapycnal and Isopycnal Mixing Experiment (DIMES) independently measured a large lee wave over the Shackleton Fracture Zone. A model for steady EM-APEX motion is presented and used to calculate absolute vertical water velocity in addition to the horizontal velocity measurements made by the floats. The wave is observed to have velocity fluctuations in all three directions of over 15 cm s-1 and a frequency close to the local buoyancy frequency. Furthermore, the wave has a measured peak vertical flux of horizontal momentum of 6 N m-2, a value two orders of magnitude larger than the time-mean wind forcing on the Southern Ocean. Linear internal wave theory was used to estimate wave energy density and fluxes, while a mixing parameterisation was used to estimate the magnitude of turbulent kinetic energy dissipation, which was found to be elevated above typical background levels by two orders of magnitude. This work provides the first direct measurement of a lee wave generated by ACC flow over topography with simultaneous estimates of energy fluxes and mixing.
The Short-Term Effects of Lying, Sitting and Standing on Energy Expenditure in Women
POPP, COLLIN J.; BRIDGES, WILLIAM C.; JESCH, ELLIOT D.
2018-01-01
The deleterious health effects of too much sitting have been associated with an increased risk for overweight and obesity. Replacing sitting with standing is the proposed intervention to increase daily energy expenditure (EE). The purpose of this study was to determine the short-term effects of lying, sitting, and standing postures on EE, and to determine the magnitude of the effect each posture has on EE using indirect calorimetry (IC). Twenty-eight healthy females performed three separate positions (lying, sitting, standing) in random order. Inspired and expired gases were collected for 45 minutes (15 minutes for each position) using breath-by-breath indirect calorimetry. Oxygen consumption (VO2) and carbon dioxide production (VCO2) were measured to estimate EE. Statistical analyses used repeated-measures ANOVA for all variables, with post hoc t-tests. Based on the ANOVA, the individual, time-period, and order terms were not statistically significant. Lying EE and sitting EE were not different from each other (P = 0.56). However, standing EE (kcal/min) was 9.0% greater than lying EE (kcal/min) (P = 0.003), and 7.1% greater than sitting EE (kcal/min) (P = 0.02). The energetic cost of standing was higher compared to lying and sitting. While this is statistically significant, the magnitude of the effect of standing when compared to sitting was small (Cohen's d = 0.31). Short-term standing does not offer an energetic advantage when compared to sitting.
Predictions of first passage times in sparse discrete fracture networks using graph-based reductions
NASA Astrophysics Data System (ADS)
Hyman, Jeffrey D.; Hagberg, Aric; Srinivasan, Gowri; Mohd-Yusof, Jamaludin; Viswanathan, Hari
2017-07-01
We present a graph-based methodology to reduce the computational cost of obtaining first passage times through sparse fracture networks. We derive graph representations of generic three-dimensional discrete fracture networks (DFNs) using the DFN topology and flow boundary conditions. Subgraphs corresponding to the union of the k shortest paths between the inflow and outflow boundaries are identified and transport on their equivalent subnetworks is compared to transport through the full network. The number of paths included in the subgraphs is based on the scaling behavior of the number of edges in the graph with the number of shortest paths. First passage times through the subnetworks are in good agreement with those obtained in the full network, both for individual realizations and in distribution. Accurate estimates of first passage times are obtained with an order of magnitude reduction of CPU time and mesh size using the proposed method.
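The graph reduction itself can be sketched with networkx: enumerate the k shortest source-to-sink paths, take the union of their nodes, and keep the induced subnetwork. The sketch below uses a random connected graph as a stand-in for a DFN-derived graph; the graph, edge weights, and k are assumptions for illustration.

```python
import itertools
import networkx as nx

# Random connected weighted graph as a stand-in for a DFN-derived graph.
G = nx.connected_watts_strogatz_graph(200, 6, 0.3, seed=5)
for u, v in G.edges:
    G[u][v]["weight"] = 1.0 + (u * v) % 7   # arbitrary positive weights
source, target = 0, 100                      # inflow / outflow stand-ins

# Union of the k shortest simple paths between the boundaries.
k = 10
paths = itertools.islice(
    nx.shortest_simple_paths(G, source, target, weight="weight"), k)
nodes = set(itertools.chain.from_iterable(paths))
subnetwork = G.subgraph(nodes)

print(f"full network: {G.number_of_nodes()} nodes, "
      f"{G.number_of_edges()} edges")
print(f"k={k} shortest-path subnetwork: {subnetwork.number_of_nodes()} nodes, "
      f"{subnetwork.number_of_edges()} edges")
```

Transport would then be simulated only on the much smaller subnetwork, which is where the order-of-magnitude CPU and mesh savings reported above come from.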
A Comparison of Atomic Oxygen Degradation in Low Earth Orbit and in a Plasma Etcher
NASA Technical Reports Server (NTRS)
Townsend, Jacqueline A.; Park, Gloria
1997-01-01
In low Earth orbit (LEO) significant degradation of certain materials occurs from exposure to atomic oxygen (AO). Orbital opportunities to study this degradation for specific materials are limited and expensive. While plasma etchers are commonly used in ground-based studies because of their low cost and convenience, the environment produced in an etcher chamber differs greatly from the LEO environment. Because of the differences in environment, the validity of using etcher data has remained an open question. In this paper, degradation data for 22 materials from the orbital experiment Evaluation of Oxygen Interaction with Materials (EOIM-3) are compared with data from EOIM-3 control specimens exposed in a typical plasma etcher. This comparison indicates that, when carefully considered, plasma etcher results can produce order-of-magnitude estimates of orbital degradation. This allows the etcher to be used to screen unacceptable materials from further, more expensive tests.
Measurement of the PPN parameter γ by testing the geometry of near-Earth space
NASA Astrophysics Data System (ADS)
Luo, Jie; Tian, Yuan; Wang, Dian-Hong; Qin, Cheng-Gang; Shao, Cheng-Gang
2016-06-01
The Beyond Einstein Advanced Coherent Optical Network (BEACON) mission was designed to achieve an accuracy of 10^{-9} in measuring the Eddington parameter γ, which is perhaps the most fundamental Parameterized Post-Newtonian parameter. However, this ideal accuracy was estimated simply as the ratio of the measurement accuracy of the inter-spacecraft distances to the magnitude of the departure from Euclidean geometry. Based on the BEACON concept, we construct a measurement model to estimate the parameter γ with the least squares method. Influences of the measurement noise and the out-of-plane error on the estimation accuracy are evaluated based on a white noise model. Though the BEACON mission does not require expensive drag-free systems and avoids physical dynamical models of the spacecraft, the relatively low accuracy of the initial inter-spacecraft distances poses a great challenge, reducing the estimation accuracy by about two orders of magnitude. Thus the noise requirements may need to be more stringent in the design in order to achieve the target accuracy, as demonstrated in this work. Considering this, we give limits on the power spectral density of both noise sources for the accuracy of 10^{-9}.
Feng, Shijie; Chen, Qian; Zuo, Chao; Tao, Tianyang; Hu, Yan; Asundi, Anand
2017-01-23
Fringe projection is an extensively used technique for high-speed three-dimensional (3-D) measurement of dynamic objects. To retrieve a moving object precisely at the pixel level, researchers typically project a sequence of fringe images onto its surface. However, motion often leads to artifacts in the reconstructions because the set of patterns is recorded sequentially. To reduce the adverse impact of movement, we present a novel high-speed 3-D scanning technique combining fringe projection and stereo. First, a promising measurement speed is achieved by modifying the traditional aperiodic sinusoidal patterns so that the fringe images can be cast at kilohertz rates with the widely used defocusing strategy. Next, a temporal intensity-tracing algorithm is developed to further alleviate the influence of motion by accurately tracing the ideal intensity for stereo matching. Then, a combined cost measure is suggested to robustly estimate the cost for each pixel, and lastly a three-step refinement framework follows, both to eliminate outliers caused by the motion and to obtain sub-pixel disparity results for 3-D reconstruction. In comparison with the traditional method, in which the effect of motion is not considered, experimental results show that the reconstruction accuracy for dynamic objects can be improved by an order of magnitude with the proposed method.
Nonlinear Symplectic Attitude Estimation for Small Satellites
2006-08-01
...demonstrate orders of magnitude improvement in state and constants of motion estimation when compared to extended and iterative Kalman methods... satellites have fallen into the former category, including the ubiquitous Extended Kalman Filter (EKF)...
Morais, Sérgio Alberto; Delerue-Matos, Cristina; Gabarrell, Xavier
2013-03-15
In life cycle impact assessment (LCIA) models, the sorption of the ionic fraction of dissociating organic chemicals is not adequately modeled because conventional non-polar partitioning models are applied. Therefore, high uncertainties are expected when modeling the mobility, as well as the bioavailability for uptake by exposed biota and degradation, of dissociating organic chemicals. Alternative regressions that account for the ionized fraction of a molecule when estimating fate parameters were applied to the USEtox model. The most sensitive model parameters in the estimation of ecotoxicological characterization factors (CFs) of micropollutants were evaluated by Monte Carlo analysis in both the default USEtox model and the alternative approach. For direct emissions to the freshwater compartment, differences in CF values and 95% confidence limits between the two approaches were negligible; for emissions to the agricultural soil compartment, however, the default USEtox model overestimates the CFs and the 95% confidence limits of basic compounds by up to three and four orders of magnitude, respectively, relative to the alternative approach. For three emission scenarios, LCIA results show that the default USEtox model overestimates freshwater ecotoxicity impacts for the emission scenarios to agricultural soil by one order of magnitude, with larger confidence limits, relative to the alternative approach.
Whyte, Sophie; Harnan, Susan
2014-06-01
A campaign to increase awareness of the signs and symptoms of colorectal cancer (CRC) and encourage self-presentation to a GP was piloted in two regions of England in 2011. Short-term data from the pilot evaluation on campaign cost and changes in GP attendances/referrals, CRC incidence, and CRC screening uptake were available. The objective was to estimate the effectiveness and cost-effectiveness of a CRC awareness campaign using a mathematical model that extrapolates short-term outcomes to predict long-term impacts on cancer mortality, quality-adjusted life-years (QALYs), and costs. A mathematical model representing England (aged 30+) over a lifetime horizon was developed. Long-term changes to cancer incidence, cancer stage distribution, cancer mortality, and QALYs were estimated. Costs were estimated incorporating the costs of delivering the campaign, additional GP attendances, and changes in CRC treatment. Data from the pilot campaign suggested that the awareness campaign caused a 10% increase in presentation rates lasting one month. On this basis, the model predicted that the campaign would cost £5.5 million, prevent 66 CRC deaths, and gain 404 QALYs. The incremental cost-effectiveness ratio compared to "no campaign" was £13,496 per QALY. Results were sensitive to the magnitude and duration of the increase in presentation rates and to disease stage. The effectiveness and cost-effectiveness of a cancer awareness campaign can thus be estimated from short-term data. Such predictions will aid policy makers in prioritizing between cancer control strategies. Future cost-effectiveness studies would benefit from campaign evaluations reporting the following: data completeness, duration of impact, impact on emergency presentations, and comparison with non-intervention regions.
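As a hedged back-of-envelope check of the headline figures above (arithmetic on the stated numbers, not the model itself): dividing the £5.5 million campaign cost by the 404 QALYs gained gives roughly £13.6k per QALY; the reported £13,496 per QALY additionally nets in the changes to GP attendance and CRC treatment costs, which are not broken out in the abstract.

```python
# ICER arithmetic: incremental cost over incremental QALYs vs "no campaign".
campaign_cost_gbp = 5.5e6  # from the abstract
qalys_gained = 404
print(campaign_cost_gbp / qalys_gained)  # ~13614 GBP/QALY, close to the reported 13496
```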
Regional characterisation of hydraulic properties of rock using air-lift data
NASA Astrophysics Data System (ADS)
Wladis, David; Gustafson, Gunnar
Hydrogeologic studies are commonly data-intense. In particular, estimation of the hydraulic properties of hard rock often requires large amounts of data. In many countries, large quantities of hydrogeologic data have been collected and archived over the years. The use of existing data may therefore provide a cost-efficient alternative to collecting new data in the early stages of hydrogeologic studies, although the available data may be imprecise. Initially, however, the potential usefulness, i.e., the expected accuracy, of the available data in each specific case must be carefully examined. This study investigates the possibility of obtaining estimates of transmissivity from hard-rock air-lift data in Sweden within an order of magnitude of results obtained from high-quality injection-test data. The expected accuracy of the results was examined analytically and by means of statistical methods. The results were also evaluated by comparison with injection-test data. The results indicate that air-lift data produce estimates of transmissivity within an order of magnitude of the injection-test data in the studied examples. The study also shows that partial penetration and hydrofracturing may affect the estimates by only approximately half an order of magnitude. Thus, existing data may provide a cost-efficient alternative to the collection of new data in the early stages of hydrogeologic studies.
Nano Icy Moons Propellant Harvester
NASA Technical Reports Server (NTRS)
VanWoerkom, Michael (Principal Investigator)
2017-01-01
As one of just a few bodies identified in the solar system with a liquid ocean, Europa has become a top priority in the search for life outside of Earth. However, cost estimates for exploring Europa have been prohibitively expensive, with estimates for a NASA Flagship-class orbiter and lander approaching $5 billion. ExoTerra's NIMPH offers an affordable solution that can not only land, but return a sample from the surface to Earth. NIMPH combines solar electric propulsion (SEP) technologies being developed for the asteroid redirect mission with microsatellite electronics to reduce the cost of a full sample return mission below $500 million. A key to achieving this order-of-magnitude cost reduction is minimizing the initial mass of the system, since the cost of any mission is directly proportional to its mass. By keeping the mission within the constraints of an Atlas V 551 launch vehicle rather than an SLS, we can significantly reduce launch costs. To achieve this we reduce the landed mass of the sample return lander, which is the largest multiplier of mission mass, and shrink propellant mass through high-efficiency SEP and gravity assists. The NIMPH project's first step in reducing landed mass focuses on development of a micro-In Situ Resource Utilization (micro-ISRU) system. ISRU minimizes the landed mass of a sample return mission by converting local ice into propellants. The project reduces the ISRU system to a CubeSat-scale package that weighs just 1.74 kg and consumes just 242 W of power. We estimate that use of this ISRU versus an identical micro-lander without ISRU reduces fuel mass by 45 kg. As the dry mass of the lander grows for larger missions, these savings scale exponentially. Taking full advantage of the micro-ISRU system requires the development of a micro liquid oxygen-liquid hydrogen engine, tailored for the mission by scaling it to match the scale of the micro-lander and the low gravity of the target moon. We also tailor the engine for a near-stoichiometric mixture ratio of 7.5. Most high-performance liquid oxygen-liquid hydrogen engines inject extra liquid hydrogen to lower the average molecular weight of the exhaust, which improves specific impulse. However, this extra liquid hydrogen requires additional power and processing time on the surface for the ISRU to create; this increases mission cost and, on missions within high-radiation environments such as Europa, increases radiation shielding mass. The resulting engine weighs just 1.36 kg and produces 71.5 newtons of thrust at 364 s specific impulse. Finally, the mission reduces landed mass by taking advantage of the SEP module's solar power to beam energy to the surface using a collimated laser. This allows us to replace a 45 kg MMRTG with a 2.5 kg resonant array. By using the combination of ISRU, a liquid oxygen-liquid hydrogen engine, and beamed power, we reduce the initial mass of the lander to just 51.5 kg. When combined with an SEP module to ferry the lander to Europa, the initial mission mass is just 6397 kg - low enough to be placed on an Earth escape trajectory using an Atlas V 551 launch vehicle. By comparison, we estimate that a duplicate lander using an MMRTG and semi-storable propellants such as liquid oxygen-methane would see an order-of-magnitude increase in initial lander mass, to 445 kg. Attempting to perform the trajectory with a 450 s liquid oxygen-liquid hydrogen engine would increase the initial mass to approximately 135,000 kg. At an Atlas V U.S.-dollar-per-kilogram rate to Earth escape of $27.7k per kg, the launch savings alone are over $3.5 billion.
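The quoted launch savings follow directly from the figures given in the abstract; the check below is only arithmetic on those stated numbers, not an independent cost estimate.

```python
# Launch-savings check at the stated $27.7k/kg Atlas V rate to Earth escape.
rate_usd_per_kg = 27_700
mass_nimph_kg = 6_397   # ISRU + beamed-power architecture
mass_alt_kg = 135_000   # 450 s LOX/LH2 alternative
savings = (mass_alt_kg - mass_nimph_kg) * rate_usd_per_kg
print(f"${savings / 1e9:.2f} billion")  # ~ $3.56 billion, i.e. "over $3.5 billion"
```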
Short-term spatial and temporal variability in greenhouse gas fluxes in riparian zones.
Vidon, P; Marchese, S; Welsh, M; McMillan, S
2015-08-01
Recent research indicates that riparian zones can contribute significant amounts of greenhouse gases (GHG: N2O, CO2, CH4) to the atmosphere. Yet, the short-term spatial and temporal variability of GHG emissions in these systems is poorly understood. Using two transects of three static chambers at two North Carolina agricultural riparian zones (one restored, one unrestored), we show that estimates of the average GHG flux at the site scale can vary by one order of magnitude depending on whether the mean or the median is used as the measure of central tendency. Because the median tends to mute the effect of outlier points (hot spots and hot moments), we propose that both be reported or that other, more advanced spatial averaging techniques (e.g., kriging, area-weighted averages) be used to estimate GHG fluxes at the site scale. Results also indicate that short-term temporal variability in GHG fluxes (a few days) under seemingly constant temperature and hydrological conditions can be as large as spatial variability at the site scale, suggesting that the scientific community should rethink sampling protocols for GHG at the soil-atmosphere interface to include repeated measures over short periods of time at select chambers to estimate GHG emissions in the field. Although recent advances in technology provide tools to address these challenges, their cost is often too high for widespread implementation. Until technology improves, sampling designs will need to be carefully considered to balance cost, time, and the spatial and temporal representativeness of measurements.
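A toy illustration of the mean-versus-median point made above; the flux values are invented to show how a single hot-spot chamber can pull the site mean an order of magnitude above the median.

```python
import numpy as np

# Hypothetical chamber fluxes; the last chamber is a "hot spot".
fluxes = np.array([0.8, 1.1, 0.9, 1.3, 1.0, 95.0])
print(np.mean(fluxes))    # ~16.7: dominated by the hot spot
print(np.median(fluxes))  # ~1.05: mutes the outlier, an order of magnitude lower
```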
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lybarger, J.A.; Spengler, R.F.; Brown, D.R.
1998-10-01
This paper estimates the health costs at Superfund sites for conditions associated with volatile organic compounds (VOCs) in drinking water. Health conditions were identified from published literature and registry information as occurring at excess rates in VOC-exposed populations. These health conditions were: (1) some categories of birth defects, (2) urinary tract disorders, (3) diabetes, (4) eczema and skin conditions, (5) anemia, (6) speech and hearing impairments in children under 10 years of age, and (7) stroke. Excess rates were used to estimate the excess number of cases occurring among the total population living within one-half mile of 258 Superfund sites. These sites had evidence of completed human exposure pathways for VOCs in drinking water. For each type of medical condition, an individual's expected medical costs, long-term care costs, and lost work time due to illness or premature mortality were estimated. Costs were calculated to be approximately $330 million per year, in the absence of any remediation or public health intervention programs. The results indicate the general magnitude of the economic burden associated with a limited number of contaminants at a portion of all Superfund sites, thus suggesting that the burden would be greater than that estimated in this study if all contaminants at all Superfund sites could be taken into account.
Cost model validation: a technical and cultural approach
NASA Technical Reports Server (NTRS)
Hihn, J.; Rosenberg, L.; Roust, K.; Warfield, K.
2001-01-01
This paper summarizes how JPL's parametric mission cost model (PMCM) has been validated using both formal statistical methods and a variety of peer and management reviews in order to establish organizational acceptance of the cost model estimates.
Economic burden of asthma in Korea.
Lee, Yo-Han; Yoon, Seok-Jun; Kim, Eun-Jung; Kim, Young-Ae; Seo, Hye-Young; Oh, In-Hwan
2011-01-01
Understanding the magnitude of the economic impact of an illness on society is fundamental to planning and implementing relevant policies. South Korea operates a compulsory universal health insurance system providing favorable conditions for evaluating the nationwide economic burden of illnesses. The aim of this study was to estimate the economic costs of asthma imposed on Korean society. The Korean National Health Insurance claims database was used for determining the health care services provided to asthma patients defined as having at least one inpatient or outpatient claim(s) with a primary diagnosis of asthma in 2008. Both direct and indirect costs were included. Direct costs were those associated directly with treatment, medication, and transportation. Indirect costs were assessed in terms of the loss of productivity in asthma patients and their caregivers and consisted of morbidity cost, mortality cost, and caregivers' time cost. The estimated cost for 2,273,290 asthma patients in 2008 was $831 million, with an average per capita cost of $336. Among the cost components, outpatient and medication costs represented the largest cost burden. Although the costs for children accounted for the largest proportion of the total cost, the per capita cost was highest among patients ≥50 years old. The economic burden of asthma in Korea is considerable. Considering that the burden will increase with the rising prevalence, implementation of effective national prevention approaches aimed at the appropriate target populations is imperative.
Characterizing reduced sulfur compounds emissions from a swine concentrated animal feeding operation
NASA Astrophysics Data System (ADS)
Rumsey, Ian C.; Aneja, Viney P.; Lonneman, William A.
2014-09-01
Reduced sulfur compound (RSC) emissions from concentrated animal feeding operations (CAFOs) have become a potential environmental and human health concern as a result of changes in livestock production methods. RSC emissions were determined from a swine CAFO in North Carolina. RSC measurements were made over a period of ≈1 week from both the barn and the lagoon during each of the four seasonal periods from June 2007 to April 2008. During sampling, meteorological and other environmental parameters were measured continuously. Seasonal hydrogen sulfide (H2S) barn concentrations ranged from 72 to 631 ppb. Seasonal dimethyl sulfide (DMS; CH3SCH3) and dimethyl disulfide (DMDS; CH3S2CH3) concentrations were 2-3 orders of magnitude lower, ranging from 0.18 to 0.89 ppb and 0.47 to 1.02 ppb, respectively. The overall average barn emission rate was 3.3 g day^-1 AU^-1 (AU (animal unit) = 500 kg of live animal weight) for H2S, which was approximately two orders of magnitude higher than the DMS and DMDS overall average emission rates, determined as 0.017 g day^-1 AU^-1 and 0.036 g day^-1 AU^-1, respectively. The overall average lagoon flux was 1.33 μg m^-2 min^-1 for H2S, which was approximately an order of magnitude higher than the overall average DMS (0.12 μg m^-2 min^-1) and DMDS (0.09 μg m^-2 min^-1) lagoon fluxes. The overall average lagoon emission for H2S (0.038 g day^-1 AU^-1) was also approximately an order of magnitude higher than the overall average DMS (0.0034 g day^-1 AU^-1) and DMDS (0.0028 g day^-1 AU^-1) emissions. H2S, DMS, and DMDS have offensive odors and low odor thresholds. Over all four sampling seasons, 77% of 15-min averaged H2S barn concentrations were an order of magnitude above the average odor threshold. During these sampling periods, however, DMS and DMDS concentrations did not exceed their odor thresholds. The overall average barn and lagoon emissions from this study were used to help estimate barn, lagoon, and total (barn + lagoon) RSC emissions from swine CAFOs in North Carolina. Total (barn + lagoon) H2S emissions from swine CAFOs in North Carolina were estimated to be 1.22×10^6 kg yr^-1. The barns had significantly higher H2S emissions than the lagoons, contributing ≈98% of total North Carolina H2S swine CAFO emissions. Total (barn + lagoon) emissions for DMS and DMDS were 1-2 orders of magnitude lower, with barns contributing ≈86% and ≈93% of total emissions, respectively. H2S swine CAFO emissions were estimated to contribute ≈18% of North Carolina H2S emissions.
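The statewide scaling step can be reconstructed from the per-AU rates. In the sketch below, the swine inventory (in 500 kg animal units) is back-solved from the reported total rather than taken from the abstract, so treat it as an assumption.

```python
# Scaling per-AU emission rates to a statewide annual total.
h2s_barn = 3.3        # g day^-1 AU^-1
h2s_lagoon = 0.038    # g day^-1 AU^-1
animal_units = 1.0e6  # assumed NC swine inventory in 500 kg AU (not from the abstract)
total_kg_per_yr = (h2s_barn + h2s_lagoon) * animal_units * 365 / 1000
print(f"{total_kg_per_yr:.2e} kg/yr")  # ~1.22e6, matching the reported estimate
```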
Development of magnitude scaling relationship for earthquake early warning system in South Korea
NASA Astrophysics Data System (ADS)
Sheen, D.
2011-12-01
Seismicity in South Korea is low, and the magnitudes of recent earthquakes are mostly less than 4.0. However, the historical record reveals that many damaging earthquakes have occurred in the Korean Peninsula. To mitigate the potential seismic hazard, an earthquake early warning (EEW) system is being installed and will be operated in South Korea in the near future. In order to deliver early warnings successfully, it is very important to develop stable magnitude scaling relationships. In this study, two empirical magnitude relationships are developed from 350 events ranging in magnitude from 2.0 to 5.0 recorded by the KMA and the KIGAM. A total of 1606 vertical-component seismograms with epicentral distances within 100 km were chosen. The peak amplitude and the maximum predominant period of the initial P wave are used to derive the magnitude relationships. The peak displacement of broadband seismograms shows less scatter than the peak velocity, while the scatters of the peak displacement and peak velocity of accelerograms are similar to each other. Because the peak displacement of seismograms differs from that of accelerograms, separate magnitude relationships should be developed for each type of data. The maximum predominant period of the initial P wave was estimated after applying two low-pass filters, at 3 Hz and 10 Hz; the 10 Hz low-pass filter yields a better estimate than the 3 Hz filter. Most of the peak amplitudes and maximum predominant periods are obtained within 1 s after triggering.
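A common functional form for such EEW scaling relationships is M = a log10(Pd) + b log10(R) + c, with Pd the peak initial-P displacement and R the epicentral distance. The sketch below fits this form by least squares to hypothetical observations; both the form and the data are assumptions for illustration, not the coefficients derived in the study.

```python
import numpy as np

def fit_scaling(Pd, R, M):
    # Least-squares fit of M = a*log10(Pd) + b*log10(R) + c.
    A = np.column_stack([np.log10(Pd), np.log10(R), np.ones_like(M)])
    coeffs, *_ = np.linalg.lstsq(A, M, rcond=None)
    return coeffs

Pd = np.array([1e-6, 5e-6, 2e-5, 1e-4])  # peak P displacement (m), hypothetical
R = np.array([30.0, 60.0, 45.0, 80.0])   # epicentral distance (km), hypothetical
M = np.array([2.5, 3.2, 3.9, 4.8])       # catalog magnitudes, hypothetical
a, b, c = fit_scaling(Pd, R, M)
```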
Lives Saved Tool (LiST) costing: a module to examine costs and prioritize interventions.
Bollinger, Lori A; Sanders, Rachel; Winfrey, William; Adesina, Adebiyi
2017-11-07
Achieving the Sustainable Development Goals will require careful allocation of resources in order to achieve the highest impact. The Lives Saved Tool (LiST) has been used widely to calculate the impact of maternal, neonatal and child health (MNCH) interventions for program planning and multi-country estimation in several Lancet Series commissions. As use of the LiST model increases, many have expressed a desire to cost interventions within the model, in order to support budgeting and prioritization of interventions by countries. A limited LiST costing module was introduced several years ago, but with gaps in cost types. Updates to inputs have now been added to make the module fully functional for a range of uses. This paper builds on previous work that developed an initial version of the LiST costing module to provide costs for MNCH interventions using an ingredients-based costing approach. Here, we update the previous (2013) econometric estimates with data newly available in 2016 and also include above-facility-level costs such as program management. The updated econometric estimates inform the percentages of intervention-level costs for some direct costs and for indirect costs. These estimates add to the direct cost requirements for items such as drugs and supplies and required provider time, which were already available in LiST Costing. Results generated by the LiST costing module include costs for each intervention, as well as costs disaggregated by intervention into drug and supply costs, labor costs, other recurrent costs, capital costs, and above-service-delivery costs. These results can be combined with mortality estimates to support prioritization of interventions by countries. The LiST costing module provides an option for countries to identify resource requirements for scaling up a maternal, neonatal, and child health program, and to examine the financial impact of different resource allocation strategies. It can be a useful tool for countries as they seek to identify the best investments for scarce resources. The purpose of the LiST model is to provide a tool to make resource allocation decisions in a strategic planning process through prioritizing interventions based on the resulting impact on maternal and child mortality and morbidity.
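A minimal sketch of ingredients-based costing in the spirit described above, assuming placeholder quantities, prices, and an indirect-cost percentage; none of these values come from LiST itself.

```python
# Ingredients-based cost of one intervention contact: direct costs
# (drugs/supplies plus provider time) plus indirect costs as a percentage.
drugs_supplies = [(2, 0.35), (1, 1.20)]  # (quantity, unit price in USD), placeholders
provider_minutes = 15
wage_per_minute = 0.10

direct = sum(q * p for q, p in drugs_supplies) + provider_minutes * wage_per_minute
indirect_share = 0.45  # program management and other above-facility costs, assumed
total_per_contact = direct * (1 + indirect_share)
```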
Electrolysis Propulsion Provides High-Performance, Inexpensive, Clean Spacecraft Propulsion
NASA Technical Reports Server (NTRS)
deGroot, Wim A.
1999-01-01
An electrolysis propulsion system consumes electrical energy to decompose water into hydrogen and oxygen. These gases are stored in separate tanks and used when needed in gaseous bipropellant thrusters for spacecraft propulsion. The propellant and combustion products are clean and nontoxic. As a result, costs associated with testing, handling, and launching can be an order of magnitude lower than for conventional propulsion systems, making electrolysis a cost-effective alternative to state-of-the-art systems. The electrical conversion efficiency is high (>85 percent), and maximum thrust-to-power ratios of 0.2 newtons per kilowatt (N/kW) at a 370-sec specific impulse can be obtained. A further advantage of the water rocket is its dual-mode potential: for relatively high-thrust applications, the system can be used as a bipropellant engine; for low thrust levels and/or small impulse-bit requirements, cold-gas oxygen can be used alone. An added innovation is that the same hardware, with modest modifications, can be converted into an energy-storage and power-generation fuel cell, reducing the spacecraft power and propulsion system weight by an order of magnitude.
NASA Technical Reports Server (NTRS)
Maynard, O. E.; Brown, W. C.; Edwards, A.; Haley, J. T.; Meltz, G.; Howell, J. M.; Nathan, A.
1975-01-01
The microwave rectifier technology, approaches to the receiving antenna, the topology of rectenna circuits, assembly and construction, and ROM cost estimates are discussed. Analyses and cost estimates are presented for the equipment required to transmit the ground power to an external user. Noise and harmonic considerations are presented for both the amplitron and the klystron, and interference limits are identified and evaluated. A risk assessment is presented in which technology risks are rated and ranked with regard to their importance in impacting the microwave power transmission system. The system analyses and evaluation include parametric studies of system relationships pertaining to geometry, materials, specific cost, specific weight, efficiency, converter packing, frequency selection, power distribution, power density, power output magnitude, power source, transportation, and assembly. Capital costs per kW and energy costs are presented as functions of rate of return, power source and transportation costs, and build cycle time. The critical technology and ground test program are discussed along with ROM costs and schedule, as is the orbital test program, with its associated critical technology and ground-based program, based on full implementation of the defined objectives.
NASA Astrophysics Data System (ADS)
Dittmann, Jason A.; Irwin, Jonathan M.; Charbonneau, David; Newton, Elisabeth R.
2016-02-01
The MEarth Project is a photometric survey systematically searching the smallest stars near the Sun for transiting rocky planets. Since 2008, MEarth has taken approximately two million images of 1844 stars suspected to be mid-to-late M dwarfs. We have augmented this survey by taking nightly exposures of photometric standard stars and have utilized this data to photometrically calibrate the MEarth system, identify photometric nights, and obtain an optical magnitude with 1.5% precision for each M dwarf system. Each optical magnitude is an average over many years of data, and therefore should be largely immune to stellar variability and flaring. We combine this with trigonometric distance measurements, spectroscopic metallicity measurements, and 2MASS infrared magnitude measurements in order to derive a color-magnitude-metallicity relation across the mid-to-late M dwarf spectral sequence that can reproduce spectroscopic metallicity determinations to a precision of 0.1 dex. We release optical magnitudes and metallicity estimates for 1567 M dwarfs, many of which did not have an accurate determination of either prior to this work. For an additional 277 stars without a trigonometric parallax, we provide an estimate of the distance, assuming solar neighborhood metallicity. We find that the median metallicity for a volume-limited sample of stars within 20 pc of the Sun is [Fe/H] = -0.03 ± 0.008, and that 29/565 of these stars have a metallicity of [Fe/H] = -0.5 or lower, similar to the low-metallicity distribution of nearby G dwarfs. When combined with the results of ongoing and future planet surveys targeting these objects, the metallicity estimates presented here will be important for assessing the significance of any putative planet-metallicity correlation.
Curran, Janet H.; Meyer, David F.; Tasker, Gary D.
2003-01-01
Estimates of the magnitude and frequency of peak streamflow are needed across Alaska for floodplain management, cost-effective design of floodway structures such as bridges and culverts, and other water-resource management issues. Peak-streamflow magnitudes for the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were computed for 301 streamflow-gaging and partial-record stations in Alaska and 60 stations in conterminous basins of Canada. Flows were analyzed from data through the 1999 water year using a log-Pearson Type III analysis. The State was divided into seven hydrologically distinct streamflow analysis regions for this analysis, in conjunction with a concurrent study of low and high flows. New generalized skew coefficients were developed for each region using station skew coefficients for stations with at least 25 years of systematic peak-streamflow data. Equations for estimating peak streamflows at ungaged locations were developed for Alaska and conterminous basins in Canada using a generalized least-squares regression model. A set of predictive equations for estimating the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year peak streamflows was developed for each streamflow analysis region from peak-streamflow magnitudes and physical and climatic basin characteristics. These equations may be used for unregulated streams without flow diversions, dams, periodically releasing glacial impoundments, or other streamflow conditions not correlated to basin characteristics. Basin characteristics should be obtained using methods similar to those used in this report to preserve the statistical integrity of the equations.
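A log-Pearson Type III frequency analysis of the kind used in the report can be sketched with scipy. The annual peaks below are invented, and a production analysis would also weight the station skew with the regional generalized skew, which is omitted here.

```python
import numpy as np
from scipy.stats import pearson3

peaks = np.array([120, 310, 95, 440, 210, 180, 520, 260, 330, 150], float)  # annual peaks, m^3/s (hypothetical)
logq = np.log10(peaks)
skew, loc, scale = pearson3.fit(logq)  # fit the Pearson III distribution in log space

for T in (2, 5, 10, 25, 50, 100, 200, 500):
    q = 10 ** pearson3.ppf(1 - 1 / T, skew, loc=loc, scale=scale)
    print(f"{T}-year peak ~ {q:.0f} m^3/s")
```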
A prevalence-based approach to societal costs occurring in consequence of child abuse and neglect
2012-01-01
Background Traumatization in childhood can result in lifelong health impairment and may have a negative impact on other areas of life, such as education, social contacts, and employment. Despite the frequent occurrence of traumatization, reflected in a 14.5 percent prevalence rate of severe child abuse and neglect, the economic burden of the consequences is hardly known. The objective of this prevalence-based cost-of-illness study is to show how impairment of the individual is reflected in economic trauma follow-up costs borne by society as a whole in Germany and to compare the results with other countries' costs. Methods From a societal perspective, trauma follow-up costs were estimated using a bottom-up approach. The literature-based prevalence rate includes emotional, physical, and sexual abuse as well as physical and emotional neglect in Germany. Costs are derived from individual case scenarios of child endangerment presented in a German cost-benefit analysis. A comparison with trauma follow-up costs in Australia, Canada, and the USA is based on purchasing power parity. Results The annual trauma follow-up costs range from EUR 11.1 billion (lower bound) to EUR 29.8 billion (upper bound), equal to EUR 134.84 and EUR 363.58 per capita, respectively, for the German population. These results are in line with cost studies conducted in Australia (lower bound) and Canada (upper bound), whereas the result for the United States is much lower. Conclusion Child abuse and neglect result in trauma follow-up costs of economically relevant magnitude for German society. Although the result is well in line with other countries' costs, the general lack of data should be addressed in order to enable more detailed future studies. Creating a reliable cost data basis in the first place can pave the way for long-term cost savings. PMID:23158382
A prevalence-based approach to societal costs occurring in consequence of child abuse and neglect.
Habetha, Susanne; Bleich, Sabrina; Weidenhammer, Jörg; Fegert, Jörg M
2012-11-16
Traumatization in childhood can result in lifelong health impairment and may have a negative impact on other areas of life, such as education, social contacts, and employment. Despite the frequent occurrence of traumatization, reflected in a 14.5 percent prevalence rate of severe child abuse and neglect, the economic burden of the consequences is hardly known. The objective of this prevalence-based cost-of-illness study is to show how impairment of the individual is reflected in economic trauma follow-up costs borne by society as a whole in Germany and to compare the results with other countries' costs. From a societal perspective, trauma follow-up costs were estimated using a bottom-up approach. The literature-based prevalence rate includes emotional, physical, and sexual abuse as well as physical and emotional neglect in Germany. Costs are derived from individual case scenarios of child endangerment presented in a German cost-benefit analysis. A comparison with trauma follow-up costs in Australia, Canada, and the USA is based on purchasing power parity. The annual trauma follow-up costs range from EUR 11.1 billion (lower bound) to EUR 29.8 billion (upper bound), equal to EUR 134.84 and EUR 363.58 per capita, respectively, for the German population. These results are in line with cost studies conducted in Australia (lower bound) and Canada (upper bound), whereas the result for the United States is much lower. Child abuse and neglect result in trauma follow-up costs of economically relevant magnitude for German society. Although the result is well in line with other countries' costs, the general lack of data should be addressed in order to enable more detailed future studies. Creating a reliable cost data basis in the first place can pave the way for long-term cost savings.
Uncertainty and sensitivity assessment of flood risk assessments
NASA Astrophysics Data System (ADS)
de Moel, H.; Aerts, J. C.
2009-12-01
Floods are among the most frequent and costly natural disasters. In order to protect human lives and valuable assets from the effects of floods, many defensive structures have been built. Despite these efforts, economic losses due to catastrophic flood events have risen substantially during the past couple of decades because of continuing economic development in flood-prone areas. On top of that, climate change is expected to affect the magnitude and frequency of flood events. Because these ongoing trends are expected to continue, a transition can be observed in various countries from a protective flood management approach to a more risk-based flood management approach. In a risk-based approach, flood risk assessments play an important role in supporting decision making. Most flood risk assessments express flood risk in monetary terms (damage estimated for specific situations, or expected annual damage) in order to feed cost-benefit analyses of management measures. Such flood risk assessments contain, however, considerable uncertainties, resulting from uncertainties in the many different input parameters that propagate through the risk assessment and accumulate in the final estimate. Whilst common in some other disciplines, as with integrated assessment models, full uncertainty and sensitivity analyses of flood risk assessments are not so common. Various studies have addressed uncertainties in flood risk assessments, but have mainly focused on the hydrological conditions. However, uncertainties in other components of the risk assessment, like the relation between water depth and monetary damage, can be substantial as well. This research therefore assesses the uncertainties of all components of monetary flood risk assessments, using a Monte Carlo based approach. Furthermore, the total uncertainty is attributed to the different input parameters using a variance-based sensitivity analysis. Assessing and visualizing the uncertainty of the final risk estimate will help decision makers make better-informed decisions, and attributing this uncertainty to the input parameters helps identify which parameters matter most for uncertainty in the final estimate and therefore deserve additional attention in further research.
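A minimal Monte Carlo sketch of the approach described: uncertain inputs are sampled, propagated to an expected-annual-damage estimate, and summarized as an uncertainty band. All distributions and parameter values below are illustrative assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
p_flood = rng.triangular(1/200, 1/100, 1/50, n)  # annual flood probability
exposure = rng.normal(2.0e9, 0.3e9, n)           # value at risk (EUR)
damage_frac = rng.beta(2, 8, n)                  # depth-damage fraction

ead = p_flood * exposure * damage_frac           # expected annual damage (EUR)
print(np.percentile(ead, [5, 50, 95]))           # uncertainty band on the risk estimate
```

Attributing the variance of `ead` to each input (e.g., by resampling one parameter at a time) would correspond to the variance-based sensitivity step.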
Software for Estimating Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Hines, Merlon M.
2004-01-01
A high-level parametric mathematical model for estimating the costs of testing rocket engines and components at Stennis Space Center has been implemented as a Microsoft Excel program that generates multiple spreadsheets. The model and the program are both denoted, simply, the Cost Estimating Model (CEM). The inputs to the CEM are the parameters that describe particular tests, including test types (component or engine test), numbers and duration of tests, thrust levels, and other parameters. The CEM estimates anticipated total project costs for a specific test. Estimates are broken down into testing categories based on a work-breakdown structure and a cost-element structure. A notable historical assumption incorporated into the CEM is that total labor times depend mainly on thrust levels. As a result of a recent modification of the CEM to increase the accuracy of predicted labor times, the dependence of labor time on thrust level is now embodied in third- and fourth-order polynomials.
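The labor-time relationship described above can be sketched as a polynomial in thrust level; the coefficients below are placeholders for illustration only, not the CEM's values.

```python
import numpy as np

# Hypothetical 4th-order labor-time model: hours as a polynomial in thrust (lbf).
coeffs = [2.1e-20, -4.0e-15, 3.5e-10, 2.0e-3, 120.0]  # highest power first, placeholders
labor_hours = np.polyval(coeffs, 250_000)  # evaluate at a 250,000 lbf test
```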
Software for Estimating Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Hines, Merion M.
2002-01-01
A high-level parametric mathematical model for estimating the costs of testing rocket engines and components at Stennis Space Center has been implemented as a Microsoft Excel program that generates multiple spreadsheets. The model and the program are both denoted, simply, the Cost Estimating Model (CEM). The inputs to the CEM are the parameters that describe particular tests, including test types (component or engine test), numbers and duration of tests, thrust levels, and other parameters. The CEM estimates anticipated total project costs for a specific test. Estimates are broken down into testing categories based on a work-breakdown structure and a cost-element structure. A notable historical assumption incorporated into the CEM is that total labor times depend mainly on thrust levels. As a result of a recent modification of the CEM to increase the accuracy of predicted labor times, the dependence of labor time on thrust level is now embodied in third- and fourth-order polynomials.
Software for Estimating Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Hines, Merlon M.
2003-01-01
A high-level parametric mathematical model for estimating the costs of testing rocket engines and components at Stennis Space Center has been implemented as a Microsoft Excel program that generates multiple spreadsheets. The model and the program are both denoted, simply, the Cost Estimating Model (CEM). The inputs to the CEM are the parameters that describe particular tests, including test types (component or engine test), numbers and duration of tests, thrust levels, and other parameters. The CEM estimates anticipated total project costs for a specific test. Estimates are broken down into testing categories based on a work-breakdown structure and a cost-element structure. A notable historical assumption incorporated into the CEM is that total labor times depend mainly on thrust levels. As a result of a recent modification of the CEM to increase the accuracy of predicted labor times, the dependence of labor time on thrust level is now embodied in third- and fourth-order polynomials.
Unbalanced and Minimal Point Equivalent Estimation Second-Order Split-Plot Designs
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey
2007-01-01
Restricting the randomization of hard-to-change factors in industrial experiments is often performed by employing a split-plot design structure. From an economic perspective, these designs minimize the experimental cost by reducing the number of resets of the hard-to-change factors. In this paper, unbalanced designs are considered for cases where the subplots are relatively expensive and the experimental apparatus accommodates an unequal number of runs per whole-plot. We provide construction methods for unbalanced second-order split-plot designs that possess the equivalence estimation optimality property, providing best linear unbiased estimates of the parameters, independent of the variance components. Unbalanced versions of the central composite and Box-Behnken designs are developed. For cases where the subplot cost approaches the whole-plot cost, minimal point designs are proposed and illustrated with a split-plot Notz design.
The Cost of Youth Suicide in Australia.
Kinchin, Irina; Doran, Christopher M
2018-04-04
Suicide is the leading cause of death among Australians between 15 and 24 years of age. This study seeks to estimate the economic cost of youth suicide (15–24 years old) for Australia using 2014 as a reference year. The main outcome measure is the monetized burden of youth suicide. Costs, in 2014 AU$, are measured and valued as direct costs, such as coronial inquiry, police, ambulance, and funeral expenses; indirect costs, such as lost economic productivity; and intangible costs, such as bereavement. In 2014, 307 young Australians lost their lives to suicide (82 females and 225 males). The average age at time of death was 20.4 years, representing an average loss of 62 years of life and close to 46 years of productive capacity. The average cost per youth suicide is valued at $2,884,426, including $9721 in direct costs, $2,788,245 as the value of lost productivity, and $86,460 as the cost of bereavement. The total economic loss of youth suicide in Australia is estimated at $22 billion a year (equivalent to US$ 17 billion), ranging from $20 to $25 billion. These findings can help decision-makers understand the magnitude of adverse outcomes associated with youth suicide and the potential benefits to be achieved by investing in effective suicide prevention strategies.
The Cost of Youth Suicide in Australia
Doran, Christopher M.
2018-01-01
Suicide is the leading cause of death among Australians between 15 and 24 years of age. This study seeks to estimate the economic cost of youth suicide (15–24 years old) for Australia using 2014 as a reference year. The main outcome measure is the monetized burden of youth suicide. Costs, in 2014 AU$, are measured and valued as direct costs, such as coronial inquiry, police, ambulance, and funeral expenses; indirect costs, such as lost economic productivity; and intangible costs, such as bereavement. In 2014, 307 young Australians lost their lives to suicide (82 females and 225 males). The average age at time of death was 20.4 years, representing an average loss of 62 years of life and close to 46 years of productive capacity. The average cost per youth suicide is valued at $2,884,426, including $9721 in direct costs, $2,788,245 as the value of lost productivity, and $86,460 as the cost of bereavement. The total economic loss of youth suicide in Australia is estimated at $22 billion a year (equivalent to US$ 17 billion), ranging from $20 to $25 billion. These findings can help decision-makers understand the magnitude of adverse outcomes associated with youth suicide and the potential benefits to be achieved by investing in effective suicide prevention strategies. PMID:29617305
One Step Biomass Gas Reforming-Shift Separation Membrane Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, Michael J.; Souleimanova, Razima
2012-12-28
GTI developed a plan where efforts were concentrated in 4 major areas: membrane material development, membrane module development, membrane process development, and membrane gasifier scale-up. GTI assembled a team of researchers to work in each area. Task 1.1, Ceramic Membrane Synthesis and Testing, was conducted by Arizona State University (ASU); Task 1.2, Metallic Membrane Synthesis and Testing, was conducted by the U.S. National Energy Technology Laboratory (NETL); Task 1.3 was conducted by SCHOTT; and GTI was to test all membranes that showed potential. The initial focus of the project was concentrated on membrane material development. Metallic and glass-based membranes were identified as hydrogen-selective membranes under the conditions of biomass gasification: temperatures above 700°C and pressures up to 30 atmospheres. Membranes were synthesized by arc-rolling for metallic membranes and by incorporating Pd into a glass matrix for glass membranes. Testing for hydrogen permeability was completed, and the effects of hydrogen sulfide and carbon monoxide were investigated for prospective membranes. The initial candidate membrane of Pd80Cu20, chosen in 2008, was selected for preliminary reactor design and cost estimates. Although the H2A analysis results indicated a $1.96 cost per gge H2 based on a 5-micron-thick PdCu membrane, there was no long-term operation at the required flux to satisfy the go/no-go decision. Since the future PSA case yielded $2.00/gge H2, DOE decided that there were insufficient savings compared with the already proven PSA technology to further pursue the membrane reactor design. All ceramic membranes synthesized by ASU during the project showed low hydrogen flux compared with metallic membranes. The best ceramic membrane showed a hydrogen permeation flux of 0.03 SCFH/ft^2 at the required process conditions, while the metallic membrane, Pd80Cu20, showed a flux of 47.2 SCFH/ft^2 (3 orders of magnitude difference). Results from NETL showed Pd80Cu20 with the highest flux; therefore it was chosen as the initial and, eventually, final candidate membrane. The criteria for the choice were high hydrogen flux, long-term stability, and H2S tolerance. Results from SCHOTT using glass membranes showed a maximum of 0.25 SCFH/ft^2, an order of magnitude better than the ceramic membrane but still two orders of magnitude lower than the metallic membrane. A membrane module was designed to be tested with an actual biomass gasifier. Some parts of the module were ordered, but the work was stopped when a no-go decision was made by the DOE.
NASA Astrophysics Data System (ADS)
Papagiannaki, K.; Lagouvardos, K.; Kotroni, V.; Papagiannakis, G.
2014-09-01
The objective of this study is the analysis of damaging frost events in agriculture, by examining the relationship between the daily minimum temperature in the lower atmosphere (at an isobaric level of 850 hPa) and crop production losses. Furthermore, the study suggests a methodological approach for estimating agriculture risk due to frost events, with the aim of estimating the short-term probability and magnitude of frost-related financial losses for different levels of 850 hPa temperature. Compared with near-surface temperature forecasts, temperature forecasts at the level of 850 hPa are less influenced by varying weather conditions or by local topographical features; thus, they constitute a more consistent indicator of the forthcoming weather conditions. The analysis of the daily monetary compensations for insured crop losses caused by weather events in Greece shows that, during the period 1999-2011, frost caused more damage to crop production than any other meteorological phenomenon. Two regions of different geographical latitudes are examined further, to account for the differences in the temperature ranges developed within their ecological environment. Using a series of linear and logistic regressions, we found that minimum temperature (at an 850 hPa level), grouped into three categories according to its magnitude, and seasonality, are significant variables when trying to explain crop damage costs, as well as to predict and quantify the likelihood and magnitude of damaging frost events.
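The regression step can be sketched as follows, assuming scikit-learn and an invented coding of temperature category and season; the data and encoding are illustrative, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: 850 hPa minimum-temperature category (0=mild, 1=cold, 2=very cold)
# and a season indicator (0=winter, 1=spring); rows are hypothetical events.
X = np.array([[0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1], [2, 1], [0, 0]])
y = np.array([0, 0, 1, 0, 1, 1, 1, 0])  # 1 = compensated crop damage occurred

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[2, 1]])[:, 1])  # P(damage | very cold, spring)
```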
Chen, Chia-Chi; Hsiao, Fei-Yuan; Shen, Li-Jiuan; Wu, Chien-Chih
2017-08-01
Medication errors may lead to adverse drug events (ADEs), which endanger patient safety and increase healthcare-related costs. The on-ward deployment of clinical pharmacists has been shown to reduce preventable ADEs and save costs. The purpose of this study was to evaluate the ADE-prevention and cost-saving effects of clinical pharmacist deployment in a nephrology ward. This was a retrospective study, which compared the number of pharmacist interventions 1 year before and after a clinical pharmacist was deployed in a nephrology ward. The clinical pharmacist attended ward rounds, reviewed and revised all medication orders, and gave active recommendations on medication use. For the intervention analysis, the numbers and types of the pharmacist's interventions in medication orders and the active recommendations were compared. For the cost analysis, both estimated cost savings and cost avoidance were calculated and compared. The total numbers of pharmacist interventions in medication orders were 824 in 2012 (preintervention) and 1977 in 2013 (postintervention). The numbers of active recommendations were 40 in 2012 and 253 in 2013. The estimated cost savings in 2012 and 2013 were NT$52,072 and NT$144,138, respectively. The estimated cost avoidance from preventable ADEs in 2012 and 2013 was NT$3,383,700 and NT$7,342,200, respectively. The benefit/cost ratio increased from 4.29 to 9.36, and average admission days decreased by 2 days after the on-ward deployment of a clinical pharmacist. The number of pharmacist interventions increased dramatically after the on-ward deployment, and this service could reduce medication errors, preventable ADEs, and the costs of both medications and potential ADEs.
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-01-01
Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems, and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289
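For orientation, a very small scatter-search-style loop is sketched below: maintain a reference set of good solutions, combine pairs along line segments, and keep the best. This is a generic illustration of the methodology's flavor, not the authors' algorithm, which adds diversity management, local search, and other refinements.

```python
import numpy as np

def scatter_search(f, bounds, ref_size=10, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    # Diverse initial population; keep the best points as the reference set.
    pop = rng.uniform(lo, hi, size=(10 * ref_size, len(lo)))
    ref = pop[np.argsort([f(x) for x in pop])[:ref_size]]
    for _ in range(iters):
        # Combine reference pairs along (extrapolated) line segments.
        trials = [np.clip(ref[i] + rng.uniform(-0.5, 1.5) * (ref[j] - ref[i]), lo, hi)
                  for i in range(ref_size) for j in range(i + 1, ref_size)]
        candidates = np.vstack([ref, trials])
        ref = candidates[np.argsort([f(x) for x in candidates])[:ref_size]]
    return ref[0]  # best solution found

best = scatter_search(lambda x: float(np.sum(x ** 2)), bounds=[(-5, 5)] * 3)
```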
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-11-02
We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems, and we make a critical comparison with respect to the previous (above mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.
NASA Astrophysics Data System (ADS)
Papagiannaki, K.; Lagouvardos, K.; Kotroni, V.; Papagiannakis, G.
2014-01-01
The objective of this study is to analyze frost damaging events in agriculture by examining the relationship between the daily minimum temperature in the lower atmosphere (at the pressure level of 850 hPa) and crop production losses. Furthermore, the study suggests a methodological approach for estimating agricultural risk due to frost events, with the aim of estimating the short-term probability and magnitude of frost-related financial losses for different levels of 850 hPa temperature. Compared with near-surface temperature forecasts, the temperature forecast at the 850 hPa level is less influenced by varying weather conditions, as well as by local topographical features, and thus constitutes a more consistent indicator of the forthcoming weather conditions. The analysis of the daily monetary compensations for insured crop losses caused by weather events in Greece during the period 1999-2011 shows that frost is the major meteorological phenomenon with adverse effects on crop productivity in the largest part of the country. Two regions of different geographical latitude are further examined, to account for the differences in the temperature ranges developed within their ecological environments. Using a series of linear and logistic regressions, we found that minimum temperature (at the 850 hPa level), grouped in three categories according to its magnitude, and seasonality are significant variables for explaining crop damage costs, as well as for predicting and quantifying the likelihood and magnitude of frost damaging events.
NASA Astrophysics Data System (ADS)
Tsumune, Daisuke; Aoyama, Michio; Tsubono, Takaki; Misumi, Kazuhiro; Tateda, Yutaka
2017-04-01
A series of accidents at the Fukushima Dai-ichi Nuclear Power Plant (1F NPP) following the earthquake and tsunami of 11 March 2011 resulted in the release of radioactive materials to the ocean by two major pathways: direct release from the accident site and atmospheric deposition. Additional pathways, river input and runoff from the 1F NPP site with precipitation, also contributed to the coastal zone in the specific period before the direct release began on 26 March 2011; from one year after the accident onward, direct release from the 1F NPP site was the dominant pathway. We estimated the direct release rates of 137Cs and 90Sr for more than five and a half years after the accident with the Regional Ocean Modeling System (ROMS), by comparing simulated results with activities measured adjacent to the 1F NPP site (at the unit 5 and 6 discharge and the south discharge). The direct release rate of 137Cs was estimated to be on the order of 10^14 Bq/day shortly after the accident and decreased exponentially with time to the order of 10^9 Bq/day by the end of September 2016. The estimated direct release rate has declined exponentially at a constant rate since November 2011, with an apparent half-life of 346 days. The estimated total amount of directly released 137Cs over the five and a half years was 3.7±0.7 PBq. Simulated 137Cs activities attributable to direct release were in good agreement with observed activities, which implies that the estimated direct release rate is reasonable. The simulated 137Cs activity also affected waters off the coast of Fukushima prefecture. For the estimation of the direct release we used 137Cs activities measured by the Tokyo Electric Power Company (TEPCO), whose seawater samples were collected from the coast; the averaged 137Cs activities from November 2013 to June 2016 were 391 and 383 Bq/m3 at the unit 5 and 6 discharge and the south discharge, respectively. The averaged 137Cs activities measured by the Nuclear Regulation Agency (NRA) are about five times smaller than those measured by TEPCO, because the NRA collected seawater samples 300-500 m offshore by ship. The horizontal resolution of the model was 1 km x 1 km, so it is important to consider sub-grid-scale differences in activity for detailed estimates of the direct release. The 90Sr/137Cs activity ratio measured adjacent to the 1F NPP varies with time: it was 0.62 before the accident, reflecting global fallout; it decreased to 0.01 shortly after the accident (before April 2011); it increased to about 1 by September 2013 and then fell to 0.1-1; and after October 2015 it decreased to 0.1-0.2. The direct release rate of 90Sr was estimated to be on the order of 10^12 Bq/day and decreased to the order of 10^8 Bq/day by the end of September 2016. The estimated total amount of directly released 90Sr was 35±7 TBq.
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-06-01
Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of the high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate their usefulness.
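The two-stage idea is easy to demonstrate on a model whose right-hand side is linear in the parameters. The sketch below makes several assumptions not taken from the paper: a logistic-type model dx/dt = a*x - b*x^2, generic spline smoothing in place of the penalized splines, and an invented noise level. It smooths the data first, then solves the trapezoidal estimating equation by least squares.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import UnivariateSpline

# Simulate noisy data from dx/dt = a*x - b*x^2 (true a=0.8, b=0.1).
a_true, b_true = 0.8, 0.1
sol = solve_ivp(lambda t, x: a_true * x - b_true * x**2,
                (0, 10), [0.5], t_eval=np.linspace(0, 10, 51))
rng = np.random.default_rng(1)
t, y = sol.t, sol.y[0] + 0.05 * rng.standard_normal(51)

# Stage 1: smooth the measured state variable.
xs = UnivariateSpline(t, y, s=0.1)(t)

# Stage 2: trapezoidal estimating equation
#   x_{i+1} - x_i = (h/2) * [f(x_i; theta) + f(x_{i+1}; theta)],
# which is linear in theta = (a, b) for this model.
h = t[1] - t[0]
lhs = xs[1:] - xs[:-1]
X = np.column_stack([(h / 2) * (xs[:-1] + xs[1:]),
                     -(h / 2) * (xs[:-1]**2 + xs[1:]**2)])
theta, *_ = np.linalg.lstsq(X, lhs, rcond=None)
print("estimated (a, b):", theta)   # should land near (0.8, 0.1)
```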
NASA Astrophysics Data System (ADS)
Zhang, Jiaxin; Shields, Michael D.
2018-01-01
This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities, along with the probability that each is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model encapsulating the full uncertainty caused by lack of data, and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty, achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied to uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
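A stripped-down sketch of the reweighting idea, not the paper's method: Akaike weights stand in for the Kullback-Leibler-based model probabilities, the importance density is the probability-weighted mixture of the fitted models, and one common set of samples is reweighted under each candidate model. Data, candidate families, and sample sizes are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = stats.lognorm(0.4, scale=10).rvs(15, random_state=rng)  # scarce data

# Candidate probability models, MLE fits, and Akaike weights
# (a stand-in for the KL-based multimodel probabilities).
cands, fits, aic = [stats.norm, stats.lognorm, stats.gamma], [], []
for d in cands:
    p = d.fit(data)
    fits.append(d(*p))
    aic.append(2 * len(p) - 2 * np.sum(d.logpdf(data, *p)))
w = np.exp(-0.5 * (np.array(aic) - min(aic)))
w /= w.sum()                                  # model probabilities

# Sample once from the mixture importance density, then reweight
# the same samples under each plausible model.
comp = rng.choice(len(fits), size=5000, p=w)
x = np.array([fits[k].rvs(random_state=rng) for k in comp])
q = sum(wk * fk.pdf(x) for wk, fk in zip(w, fits))   # mixture density
for wk, fk in zip(w, fits):
    iw = fk.pdf(x) / q                        # importance weights
    mean_k = np.average(x, weights=iw)        # self-normalized estimate
    print(f"model weight {wk:.2f}, reweighted mean {mean_k:.2f}")
```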
Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach
NASA Technical Reports Server (NTRS)
Warner, James E.; Hochhalter, Jacob D.
2016-01-01
This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
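The core of the approach, a cheap surrogate inside an MCMC loop, can be sketched in a few lines. The fragment below is illustrative only: a regular-grid interpolant replaces the sparse-grid surrogate, plain random-walk Metropolis replaces DRAM, and the "finite element model", sensor count, and noise level are invented stand-ins.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def expensive_model(xy):
    # Stand-in for the finite element strain prediction at one sensor
    # as a function of damage coordinates (x, y).
    return np.sin(3 * xy[..., 0]) * np.cos(2 * xy[..., 1])

# Surrogate built on a coarse grid (the paper uses sparse-grid
# interpolation; a regular grid keeps the sketch short).
g = np.linspace(0, 1, 21)
G = np.stack(np.meshgrid(g, g, indexing="ij"), axis=-1)
surrogate = RegularGridInterpolator((g, g), expensive_model(G))

truth = np.array([0.3, 0.7])
obs = expensive_model(truth) + 0.01 * np.random.default_rng(3).standard_normal()

def log_post(xy, sigma=0.01):
    if np.any(xy < 0) or np.any(xy > 1):
        return -np.inf                      # uniform prior on the plate
    r = obs - surrogate(xy)[0]
    return -0.5 * (r / sigma) ** 2          # Gaussian likelihood

# Plain random-walk Metropolis; DRAM adds delayed rejection and
# adaptive proposal scaling on top of this core loop.
rng = np.random.default_rng(4)
x, lp, chain = np.array([0.5, 0.5]), -np.inf, []
for _ in range(5000):
    xp = x + 0.05 * rng.standard_normal(2)
    lpp = log_post(xp)
    if np.log(rng.random()) < lpp - lp:
        x, lp = xp, lpp
    chain.append(x)
print("posterior mean damage location:", np.mean(chain, axis=0))
```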
Ohsfeldt, Robert L.; Ward, Marcia M.; Schneider, John E.; Jaana, Mirou; Miller, Thomas R.; Lei, Yang; Wakefield, Douglas S.
2005-01-01
Objective The aim of this study was to estimate the costs of implementing computerized physician order entry (CPOE) systems in hospitals in a rural state and to evaluate the financial implications of statewide CPOE implementation. Methods A simulation model was constructed using estimates of initial and ongoing CPOE costs mapped onto all general hospitals in Iowa by bed quantity and current clinical information system (CIS) status. CPOE cost estimates were obtained from a leading CPOE vendor. Current CIS status was determined through a mail survey of Iowa hospitals. Patient care revenue and operating cost data published by the Iowa Hospital Association were used to simulate the financial impact of CPOE adoption on hospitals. Results CPOE implementation would dramatically increase operating costs for rural and critical access hospitals in the absence of substantial cost savings associated with improved efficiency or improved patient safety. For urban and rural referral hospitals, the cost impact is less dramatic but still substantial. However, relatively modest benefits in the form of patient care cost savings or revenue enhancement would be sufficient to offset CPOE costs for these larger hospitals. Conclusion Implementation of CPOE in rural or critical access hospitals may result in a net increase in operating costs. Adoption of CPOE may be financially infeasible for these small hospitals in the absence of increases in hospital payments or ongoing subsidies from third parties. PMID:15492033
Order-of-magnitude physics of neutron stars. Estimating their properties from first principles
NASA Astrophysics Data System (ADS)
Reisenegger, Andreas; Zepeda, Felipe S.
2016-03-01
We use basic physics and simple mathematics accessible to advanced undergraduate students to estimate the main properties of neutron stars. We set the stage and introduce relevant concepts by discussing the properties of "everyday" matter on Earth, degenerate Fermi gases, white dwarfs, and scaling relations of stellar properties with polytropic equations of state. Then, we discuss various physical ingredients relevant for neutron stars and how they can be combined in order to obtain a couple of different simple estimates of their maximum mass, beyond which they would collapse, turning into black holes. Finally, we use the basic structural parameters of neutron stars to briefly discuss their rotational and electromagnetic properties.
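For readers who want the flavor of the argument, here is the classic Landau-style estimate in LaTeX form; this is an assumption-laden sketch (ideal nonrelativistic neutron gas balanced against Newtonian gravity, order-unity constants dropped), not the paper's more careful treatment.

```latex
% Total energy of N neutrons confined to radius R:
E(R) \sim \underbrace{\frac{\hbar^{2} N^{5/3}}{m_{n} R^{2}}}_{\text{degeneracy}}
        - \underbrace{\frac{G m_{n}^{2} N^{2}}{R}}_{\text{gravity}}
\quad\Longrightarrow\quad
R_{\star} \sim \frac{\hbar^{2}}{G\, m_{n}^{3} N^{1/3}}
\approx 10\ \mathrm{km} \quad (N \sim 10^{57}).

% When E_F \sim m_n c^2 the degeneracy term steepens to \hbar c N^{4/3}/R,
% scaling as 1/R like gravity, so equilibrium disappears above
N_{\max} \sim \left( \frac{\hbar c}{G m_{n}^{2}} \right)^{3/2}
\approx 2 \times 10^{57},
\qquad
M_{\max} \sim N_{\max}\, m_{n} = \mathcal{O}(1)\, M_{\odot}.
```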
The social cost of rheumatoid arthritis in Italy: the results of an estimation exercise.
Turchetti, G; Bellelli, S; Mosca, M
2014-03-14
The objective of this study is to estimate the mean annual social cost per adult person and the total social cost of rheumatoid arthritis (RA) in Italy. A literature review was performed by searching primary economic studies on adults in order to collect cost data for RA in Italy over the last decade. The review results were merged with data from institutional sources to estimate, following the methodological steps of cost-of-illness analysis, the social cost of RA in Italy. The mean annual social cost of RA was €13,595 per adult patient in Italy. Affecting 259,795 persons, RA generates a total social cost of €3.5 billion in Italy. Non-medical direct costs and indirect costs represent the main cost items (48% and 31%, respectively) of the total social cost of RA in Italy. Based on these results, it appears evident that assessing the economic burden of RA solely on the basis of direct medical costs gives a limited view of the phenomenon.
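As a quick arithmetic check of the headline figures, the per-patient cost times the reported prevalence reproduces the quoted total (a trivial sketch in Python):

```python
# Mean annual cost per adult patient times prevalence should match the
# reported total social cost of ~EUR 3.5 billion.
per_patient_eur = 13_595
patients = 259_795
total = per_patient_eur * patients
print(f"EUR {total / 1e9:.2f} billion")   # ~3.53 billion
```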
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, J.R.; Marshall, M.E.; Barker, B.W.
In situations where cavity decoupling of underground nuclear explosions is a plausible evasion scenario, comprehensive seismic monitoring of any eventual CTBT will require the routine identification of many small seismic events with magnitudes in the range 2.0 < m_b < 3.5. However, since such events are not expected to be detected teleseismically, their magnitudes will have to be estimated from regional recordings using seismic phases and frequency bands which are different from those employed in the teleseismic m_b scale which is generally used to specify monitoring capability. Therefore, it is necessary to establish the m_b equivalences of any selected regional magnitude measures in order to estimate the expected detection statistics and thresholds of proposed CTBT seismic monitoring networks. In the investigations summarized in this report, this has been accomplished through analyses of synthetic data obtained by theoretically scaling observed regional seismic data recorded in Scandinavia and Central Asia from various tamped nuclear tests to obtain estimates of the corresponding seismic signals to be expected from small cavity decoupled nuclear tests at those same source locations.
Effect of Magnitude Estimation of Pleasantness and Intensity on fMRI Activation to Taste
Cerf-Ducastel, B.; Haase, L.; Murphy, C.
2012-01-01
The goal of the present study was to investigate whether the psychophysical evaluation of taste stimuli using magnitude estimation influences the pattern of cortical activation observed with neuroimaging, that is, whether different brain areas are involved in the magnitude estimation of pleasantness relative to the magnitude estimation of intensity. fMRI was utilized to examine the patterns of cortical activation involved in magnitude estimation of pleasantness and intensity during hunger in response to taste stimuli. During scanning, subjects were administered taste stimuli orally and were asked to evaluate the perceived pleasantness or intensity using the general Labeled Magnitude Scale (Green 1996, Bartoshuk et al. 2004). Image analysis was conducted using AFNI. Magnitude estimation of intensity and pleasantness shared common activations in the insula, rolandic operculum, and the mediodorsal nucleus of the thalamus. Globally, magnitude estimation of pleasantness produced significantly more activation than magnitude estimation of intensity. Areas differentially activated during magnitude estimation of pleasantness versus intensity included, e.g., the insula, the anterior cingulate gyrus, and the putamen, suggesting that different brain areas were recruited when subjects made magnitude estimates of intensity and pleasantness. These findings demonstrate significant differences in brain activation during magnitude estimation of intensity and pleasantness to taste stimuli. An appreciation for the complexity of brain response to taste stimuli may facilitate a clearer understanding of the neural mechanisms underlying eating behavior and overconsumption. PMID:23227271
Critical role for mesoscale eddy diffusion in supplying oxygen to hypoxic ocean waters
NASA Astrophysics Data System (ADS)
Gnanadesikan, Anand; Bianchi, Daniele; Pradal, Marie-Aude
2013-10-01
Estimates of the oceanic lateral eddy diffusion coefficient Aredi vary by more than an order of magnitude, ranging from less than a few hundred m2/s to thousands of m2/s. This uncertainty has first-order implications for the intensity of oceanic hypoxia, which is poorly simulated by the current generation of Earth System Models. Using satellite-based estimates of oxygen consumption in hypoxic waters to estimate the required diffusion coefficient for these waters gives a value of order 1000 m2/s. Varying Aredi across a suite of Earth System Models yields a broadly consistent result, given a thermocline diapycnal diffusion coefficient of 1 × 10^-5 m2/s.
77 FR 65607 - Internal Revenue Service
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-29
... Form 720 returns a systemic way to order additional tax forms and informational publications. Current... forms of information technology; and (e) estimates of capital or start-up costs and costs of operation...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Cycle Cost Analyses § 436.10 Purpose. This subpart establishes a methodology and procedures for estimating and comparing the life cycle costs of Federal buildings, for determining the life cycle cost effectiveness of energy conservation measures and water conservation measures, and for rank ordering life cycle...
Hendry, Gordon J; Turner, Debbie E; Gardner-Medwin, Janet; Lorgelly, Paula K; Woodburn, James
2014-02-06
An increased awareness of patients' and parents' care preferences regarding foot care is desirable from a clinical perspective, as such information may be utilised to optimise care delivery. The aim of this study was to examine parents' preferences for, and valuations of, foot care and foot-related outcomes in juvenile idiopathic arthritis (JIA). A discrete choice experiment (DCE) incorporating willingness-to-pay (WTP) questions was conducted by surveying 42 parents of children with JIA who were enrolled in a randomised-controlled trial of multidisciplinary foot care at a single UK paediatric rheumatology outpatients department. Attributes explored were: levels of pain; mobility; ability to perform activities of daily living (ADL); waiting time; referral route; and footwear. The DCE was administered at trial baseline. DCE data were analysed using a multinomial-logit-regression model to estimate preferences and the relative importance of attributes of foot care. A stated-preference WTP question was presented to estimate parents' monetary valuation of health and service improvements. Every attribute in the DCE was statistically significant (p < 0.01) except that of cost (p = 0.118), suggesting that all attributes, except cost, have an impact on parents' preferences for foot care for their child. The magnitudes of the coefficients indicate that the strength of preference for each attribute was (in descending order): improved ability to perform ADL, reductions in foot pain, improved mobility, improved ability to wear desired footwear, multidisciplinary foot care route, and reduced waiting time. Parents' estimated mean annual WTP for a multidisciplinary foot care service was £1,119.05. In terms of foot care service provision for children with JIA, parents appear to prefer improvements in health outcomes over non-health outcomes and service process attributes. Cost was relatively less important than other attributes, suggesting that it does not appear to impact on parents' preferences. PMID:24502508
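In DCE analyses of this kind, the marginal willingness-to-pay for an attribute is usually computed as the negative ratio of the attribute coefficient to the cost coefficient from the logit model. The sketch below uses entirely hypothetical coefficient values (the study's estimates are not reproduced here; note also that its cost coefficient was non-significant, which makes such ratios fragile in practice):

```python
# Hypothetical conditional-logit coefficients for illustration only.
beta = {"adl": 1.10, "pain_reduction": 0.92, "mobility": 0.61,
        "footwear": 0.38, "mdt_referral": 0.33, "waiting_time": -0.04}
beta_cost = -0.002   # per GBP; must be significant for WTP to be meaningful

wtp = {k: -v / beta_cost for k, v in beta.items()}
for k, v in sorted(wtp.items(), key=lambda kv: -kv[1]):
    print(f"marginal WTP for {k}: £{v:,.0f}")
```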
ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.
Earthquake source parameters underpin several aspects of nuclear explosion monitoring. Such aspects are: calibration of moment magnitudes (including coda magnitudes) and magnitude and distance amplitude corrections (MDAC); source depths; discrimination by isotropic moment tensor components; and waveform modeling for structure (including waveform tomography). This project seeks to improve methods for, and broaden the applicability of, estimating source parameters from broadband waveforms using the Cut-and-Paste (CAP) methodology. The CAP method uses a library of Green's functions for a one-dimensional (1D, depth-varying) seismic velocity model. The method separates the main arrivals of the regional waveform into 5 windows: Pnl (vertical and radial components), Rayleigh (vertical and radial components) and Love (transverse component). Source parameters are estimated by grid search over strike, dip, rake and depth, and the seismic moment, or equivalently the moment magnitude MW, is adjusted to fit the amplitudes. Key to the CAP method is allowing the synthetic seismograms to shift in time relative to the data in order to account for path-propagation errors (delays) in the 1D seismic velocity model used to compute the Green's functions. The CAP method has been shown to improve estimates of source parameters, especially when delay and amplitude biases are calibrated using high signal-to-noise data from moderate earthquakes, CAP+.
Reed, Shelby D; Li, Yanhong; Kamble, Shital; Polsky, Daniel; Graham, Felicia L; Bowers, Margaret T; Samsa, Gregory P; Paul, Sara; Schulman, Kevin A; Whellan, David J; Riegel, Barbara J
2012-01-01
Patient-centered health care interventions, such as heart failure disease management programs, are under increasing pressure to demonstrate good value. Variability in costing methods and assumptions in economic evaluations of such interventions limits the comparability of cost estimates across studies. Valid cost estimation is critical to conducting economic evaluations and for program budgeting and reimbursement negotiations. Using sound economic principles, we developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure (TEAM-HF) Costing Tool, a spreadsheet program that can be used by researchers and health care managers to systematically generate cost estimates for economic evaluations and to inform budgetary decisions. The tool guides users on data collection and cost assignment for associated personnel, facilities, equipment, supplies, patient incentives, miscellaneous items, and start-up activities. The tool generates estimates of total program costs, cost per patient, and cost per week and presents results using both standardized and customized unit costs for side-by-side comparisons. Results from pilot testing indicated that the tool was well-formatted, easy to use, and followed a logical order. Cost estimates of a 12-week exercise training program in patients with heart failure were generated with the costing tool and were found to be consistent with estimates published in a recent study. The TEAM-HF Costing Tool could prove to be a valuable resource for researchers and health care managers to generate comprehensive cost estimates of patient-centered interventions in heart failure or other conditions for conducting high-quality economic evaluations and making well-informed health care management decisions. PMID:22147884
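The tool's outputs can be mimicked with a trivial roll-up; the sketch below follows the cost categories named in the abstract, but every number is an invented placeholder:

```python
# Toy roll-up in the spirit of the TEAM-HF Costing Tool's outputs.
costs = {"personnel": 38_500.0, "facilities": 6_200.0, "equipment": 4_100.0,
         "supplies": 1_800.0, "patient_incentives": 2_400.0,
         "miscellaneous": 900.0, "start_up": 5_000.0}
n_patients, n_weeks = 40, 12          # hypothetical program size/duration
total = sum(costs.values())
print(f"total: ${total:,.0f}; per patient: ${total / n_patients:,.0f}; "
      f"per week: ${total / n_weeks:,.0f}")
```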
Thorngren, Linnea; Dunér Holthuis, Thomas; Lindegarth, Susanne; Lindegarth, Mats
2017-01-01
Due to large-scale habitat losses and increasing pressures, benthic habitats in general, and perhaps oyster beds in particular, are commonly in decline and severely threatened on regional and global scales. Appropriate and cost-efficient methods for mapping and monitoring the distribution, abundance and quality of remaining oyster populations are fundamental for sustainable management and conservation of these habitats and their associated values. Towed video has emerged as a promising method for surveying benthic communities in a way that is both non-destructive and cost-efficient. Here we examine its use as a tool for quantification and monitoring of oyster populations by (i) analysing how well abundances can be estimated and how living Ostrea edulis individuals can be distinguished from dead ones, (ii) estimating the variability within and among observers as well as the spatial variability at a number of scales, and finally (iii) evaluating the precision of estimated abundances under different scenarios for monitoring. Overall, the results show that the method can be used to quantify abundance and occurrence of Ostrea edulis in heterogeneous environments. There was a strong correlation between abundances determined in the field and abundances estimated by video-analyses (r2 = 0.93), even though video analyses underestimated the total abundance of living oysters by 20%. Additionally, the method was largely repeatable within and among observers and revealed no evident bias in identification of living and dead oysters. We also concluded that the spatial variability was an order of magnitude larger than that due to observer errors. Subsequent modelling of precision showed that the total area sampled was the main determinant of precision and provided a general method for determining precision. This study provides a thorough validation of the application of towed video to quantitative estimation of live oysters. The results suggest that the method can indeed be very useful for this purpose, and we therefore recommend it for future monitoring of oysters and other threatened habitats and species. PMID:29141028
Estimating preferences for modes of drug administration: The case of US healthcare professionals.
Tetteh, Ebenezer K; Morris, Steve; Titchener-Hooker, Nigel
2018-01-01
There are hidden drug administration costs that arise from a mismatch between end-user preferences and how manufacturers choose to formulate their drug products for delivery to patients. The corollary of this is that there are "intangible benefits" from considering end-user preferences in manufacturing patient-friendly medicines. It is important, then, to have some idea of what pharmaceutical manufacturers should consider in making patient-friendly medicines and of the magnitude of the indirect benefits from doing so. This study aimed to evaluate the preferences of healthcare professionals in the US for the non-monetary attributes of different modes of drug administration, and it uses these preference orderings to compute a monetary valuation of the indirect benefits from making patient-friendly medicines. A survey collected the choice preferences of a sample of 210 healthcare professionals in the US for two unlabelled drug options. These drugs were identical except in the levels of the attributes of drug administration. Using the choice data collected, statistical models were estimated to compute gross welfare benefits, measured by the expected compensating variation, from making drugs in a more patient-friendly manner. The monetary value of end-user benefits from developing patient-friendly drug delivery systems is: (1) as large as the annual acquisition costs per full treatment episode for some biologic drugs; and (2) likely to fall in the "high end" of the distribution of the direct monetary costs of drug administration. An examination of end-user preferences should help manufacturers make more effective and efficient use of limited resources for innovations in drug delivery systems, or manufacturing research in general.
GEOSTATISTICAL INTERPOLATION OF CHEMICAL CONCENTRATION. (R825689C037)
Measurements of contaminant concentration at a hazardous waste site typically vary over many orders of magnitude and have highly skewed distributions. This work presents a practical methodology for the estimation of solute concentration contour maps and volume...
NASA Astrophysics Data System (ADS)
Orans, Ren
1990-10-01
Existing procedures used to develop marginal costs for electric utilities were not designed for applications in an increasingly competitive market for electric power. The utility's value of receiving power, or the costs of selling power, however, depend on the exact location of the buyer or seller, the magnitude of the power and the period of time over which the power is used. Yet no electric utility in the United States has disaggregate marginal costs that reflect differences in costs due to the time, size or location of the load associated with their power or energy transactions. The existing marginal costing methods used by electric utilities were developed in response to the Public Utilities Regulatory Policy Act (PURPA) in 1978. The "ratemaking standards" (Title 1) established by PURPA were primarily concerned with the appropriate segmentation of total revenues to various classes-of-service, designing time-of-use rating periods, and the promotion of efficient long-term resource planning. By design, the methods were very simple and inexpensive to implement. Now, more than a decade later, the costing issues facing electric utilities are becoming increasingly complex, and the benefits of developing more specific marginal costs will outweigh the costs of developing this information in many cases. This research develops a framework for estimating total marginal costs that vary by the size, timing, and the location of changes in loads within an electric distribution system. To complement the existing work at the Electric Power Research Institute (EPRI) and Pacific Gas and Electric Company (PG&E) on estimating disaggregate generation and transmission capacity costs, this dissertation focuses on the estimation of distribution capacity costs. While the costing procedure is suitable for the estimation of total (generation, transmission and distribution) marginal costs, the empirical work focuses on the geographic disaggregation of marginal costs related to electric utility distribution investment. The study makes use of data from an actual distribution planning area, located within PG&E's service territory, to demonstrate the important characteristics of this new costing approach. The most significant result of this empirical work is that geographic differences in the cost of capacity in distribution systems can be as much as four times larger than the current system average utility estimates. Furthermore, lumpy capital investment patterns can lead to significant cost differences over time.
Space Station Freedom advanced photovoltaics and battery technology development planning
NASA Technical Reports Server (NTRS)
Brender, Karen D.; Cox, Spruce M.; Gates, Mark T.; Verzwyvelt, Scott A.
1993-01-01
Space Station Freedom (SSF) usable electrical power is planned to be built up incrementally during assembly phase to a peak of 75 kW end-of-life (EOL) shortly after Permanently Manned Capability (PMC) is achieved in 1999. This power will be provided by planar silicon (Si) arrays and nickel-hydrogen (NiH2) batteries. The need for power is expected to grow from 75 kW to as much as 150 kW EOL during the evolutionary phase of SSF, with initial increases beginning as early as 2002. Providing this additional power with current technology may not be as cost effective as using advanced technology arrays and batteries expected to develop prior to this evolutionary phase. A six-month study sponsored by NASA Langley Research Center and conducted by Boeing Defense and Space Group was initiated in Aug. 1991. The purpose of the study was to prepare technology development plans for cost effective advanced photovoltaic (PV) and battery technologies with application to SSF growth, SSF upgrade after its arrays and batteries reach the end of their design lives, and other low Earth orbit (LEO) platforms. Study scope was limited to information available in the literature, informal industry contacts, and key representatives from NASA and Boeing involved in PV and battery research and development. Ten battery and 32 PV technologies were examined and their performance estimated for SSF application. Promising technologies were identified based on performance and development risk. Rough order of magnitude cost estimates were prepared for development, fabrication, launch, and operation. Roadmaps were generated describing key issues and development paths for maturing these technologies with focus on SSF application.
Smoking, healthcare cost, and loss of productivity in Sweden 2001.
Bolin, Kristian; Lindgren, Björn
2007-01-01
Objectives were (a) to estimate healthcare cost and productivity losses due to smoking in Sweden 2001 and (b) to compare the results with studies for Sweden 1980, Canada 1991, Germany 1996, and the USA 1998. Published estimates of relative risks and Swedish smoking patterns were used to calculate attributable risks for smokers and former smokers. These were applied to cost estimates for smoking-related diseases based on data from public Swedish registers. The estimated total cost for Sweden 2001 was US$804 million; COPD and cancer of the lung accounted for 43%. Healthcare cost accounted for 26% of the total cost. The estimated costs per smoker were US$3,200 in the USA 1998; US$1,600 in Canada 1991; US$1,100 in Germany 1996; US$600 in Sweden 2001; and US$300 in Sweden 1980 (all in 2001 US dollar prices). To reduce the prevalence of smoking is an issue worth pursuing in its own right. In order to reduce the cost of smoking, however, policy-makers should also explore and influence the factors that determine the cost per smoker. Sweden seems to have been more successful than comparable countries in pursuing both these objectives.
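Cost-of-smoking studies of this type typically combine prevalence and relative-risk data through a smoking-attributable fraction. Below is a minimal sketch using Levin's attributable-fraction formula extended to current and former smokers; all prevalences, relative risks, and costs are illustrative placeholders, not the study's Swedish inputs.

```python
# Levin's attributable fraction with current and former smokers:
#   AF = [p1*(RR1 - 1) + p2*(RR2 - 1)] / [1 + p1*(RR1 - 1) + p2*(RR2 - 1)]
def attributable_fraction(p_current, rr_current, p_former, rr_former):
    excess = p_current * (rr_current - 1) + p_former * (rr_former - 1)
    return excess / (1 + excess)

af = attributable_fraction(p_current=0.18, rr_current=9.0,
                           p_former=0.22, rr_former=4.0)
disease_cost_musd = 150.0          # hypothetical total COPD cost, US$ millions
print(f"attributable fraction: {af:.2f}")
print(f"smoking-attributable cost: US${af * disease_cost_musd:.0f}M")
```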
Effect of Common Cryoprotectants on Critical Warming Rates and Ice Formation in Aqueous Solutions
Hopkins, Jesse B.; Badeau, Ryan; Warkentin, Matthew; Thorne, Robert E.
2012-01-01
Ice formation on warming is of comparable or greater importance to ice formation on cooling in determining survival of cryopreserved samples. Critical warming rates required for ice-free warming of vitrified aqueous solutions of glycerol, dimethyl sulfoxide, ethylene glycol, polyethylene glycol 200, and sucrose have been measured for warming rates of order 10 to 10^4 K/s. Critical warming rates are typically one to three orders of magnitude larger than critical cooling rates. Warming rates vary strongly with cooling rates, perhaps due to the presence of small ice fractions in nominally vitrified samples. Critical warming and cooling rate data spanning orders of magnitude in rates provide rigorous tests of ice nucleation and growth models and their assumed input parameters. Current models with current best estimates for input parameters provide a reasonable account of critical warming rates for glycerol solutions at high concentrations/low rates, but overestimate both critical warming and cooling rates by orders of magnitude at lower concentrations and larger rates. In vitrification protocols, minimizing concentrations of potentially damaging cryoprotectants while minimizing ice formation will require ultrafast warming rates, as well as fast cooling rates to minimize the required warming rates. PMID:22728046
Estimating future flood frequency and magnitude in basins affected by glacier wastage.
DOT National Transportation Integrated Search
2014-10-01
Infrastructure, such as bridge crossings, requires informed structural designs in order to be effective and reliable for decades. A typical bridge is intended to operate for 75 years or more, a period of time anticipated to exhibit a warming clim...
Reverse time migration by Krylov subspace reduced order modeling
NASA Astrophysics Data System (ADS)
Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali
2018-04-01
Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations on the performance of reverse time migration are the intensive computation of the forward and backward simulations, the time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm that addresses all of the aforementioned factors. Our method uses the Krylov subspace method to compute certain mode shapes of the velocity model, which serve as an orthogonal basis for the reduced order model. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirements of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude compared with reverse time migration by the finite element method.
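The essential mechanism, projecting wave propagation onto a small orthonormal Krylov basis, can be sketched for a 1-D constant-velocity wave equation. This is an illustration of the idea only, not the authors' algorithm; the grid size, basis dimension, and source are arbitrary choices.

```python
import numpy as np

n = 200
c = 2.0                                   # constant velocity (assumption)
h = 1.0 / (n - 1)
# Symmetric spatial operator of the 1-D wave equation u_tt = c^2 u_xx.
A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * (c / h) ** 2

def lanczos(A, b, m):
    """Orthonormal Krylov basis V of span{b, Ab, ..., A^(m-1) b}."""
    V = np.zeros((len(b), m))
    V[:, 0] = b / np.linalg.norm(b)
    beta = 0.0
    for j in range(m - 1):
        w = A @ V[:, j] - (beta * V[:, j - 1] if j > 0 else 0)
        alpha = V[:, j] @ w
        w -= alpha * V[:, j]
        beta = np.linalg.norm(w)
        V[:, j + 1] = w / beta
    return V

src = np.exp(-((np.linspace(0, 1, n) - 0.5) / 0.05) ** 2)  # initial wavefield
V = lanczos(A, src, m=30)                 # 30 modes instead of 200 unknowns
Ar = V.T @ A @ V                          # reduced operator

# Leapfrog time stepping of the reduced system u_r'' = Ar u_r.
dt = 0.5 * h / c
ur_prev = ur = V.T @ src
for _ in range(400):
    ur_prev, ur = ur, 2 * ur - ur_prev + dt**2 * (Ar @ ur)
u = V @ ur                                # lift back to the full grid
print("reduced-order wavefield norm:", np.linalg.norm(u))
```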
Stress Drop Estimates from Induced Seismic Events in the Fort Worth Basin, Texas
NASA Astrophysics Data System (ADS)
Jeong, S. J.; Stump, B. W.; DeShon, H. R.
2017-12-01
Since the beginning of Barnett shale oil and gas production in the Fort Worth Basin, there have been earthquake sequences, including multiple magnitude 3.0+ events near the DFW International Airport, Azle, Irving-Dallas, and throughout Johnson County (Cleburne and Venus). These shallow-depth earthquakes (2 to 8 km) have not exceeded magnitude 4.0 but have been widely felt; the close proximity of these earthquakes to a large population center motivates an assessment of the kinematics of the events in order to provide more accurate ground motion predictions. Previous studies have estimated average stress drops for the DFW airport and Cleburne earthquakes at 10 and 43 bars, respectively. Here, we calculate stress drops for the Azle, Irving-Dallas and Venus earthquakes using seismic data from local (≤25 km) and regional (>25 km) seismic networks. Events with magnitudes above 2.5 are chosen to ensure adequate signal-to-noise. Stress drops are estimated by fitting the Brune earthquake model to the observed source spectrum, with corrections for propagation path effects and a local site effect using a high-frequency decay parameter, κ, estimated from the acceleration spectrum. We find that regional average stress drops are similar to those estimated using local data, supporting the appropriateness of the propagation path and site corrections. The average stress drop estimate is 72 bars, with a range from 7 to 240 bars. The results are consistent with global averages of 10 to 100 bars for intra-plate earthquakes and compatible with the stress drops of the DFW airport and Cleburne earthquakes. The stress drops show a slight breakdown in self-similarity with increasing moment magnitude. The breakdown of self-similarity for these events requires further study because of the limited magnitude range of the data. These results suggest that strong motions and seismic hazard from an injection-induced earthquake can be expected to be similar to those for tectonic events, taking into account the shallow depth of induced earthquakes.
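A common recipe for such stress-drop estimates is sketched below under simplifying assumptions: fit the Brune omega-squared model to a path- and site-corrected displacement spectrum, convert the corner frequency to a source radius via r = 2.34*beta/(2*pi*fc) (Brune 1970), and apply the stress-drop relation delta_sigma = 7*M0/(16*r^3). The spectrum, shear velocity, and seismic moment here are synthetic placeholders, not values from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune(f, omega0, fc):
    # Brune (1970) omega-squared source spectrum.
    return omega0 / (1 + (f / fc) ** 2)

# Synthetic displacement spectrum standing in for a path- and
# site-corrected observation (kappa already removed).
f = np.logspace(-1, 1.5, 120)
rng = np.random.default_rng(5)
obs = brune(f, 1e-4, 4.0) * np.exp(0.1 * rng.standard_normal(f.size))

(omega0, fc), _ = curve_fit(brune, f, obs, p0=[obs[0], 1.0])

beta = 3500.0                        # shear velocity, m/s (assumed)
M0 = 1.1e14                          # seismic moment, N*m (assumed; derived
                                     # from omega0 in practice via distance
                                     # and radiation-pattern terms)
r = 2.34 * beta / (2 * np.pi * fc)   # Brune source radius, m
stress_drop = 7 * M0 / (16 * r**3)   # Pa; 1 bar = 1e5 Pa
print(f"fc = {fc:.2f} Hz, radius = {r:.0f} m, "
      f"stress drop = {stress_drop / 1e5:.0f} bars")
```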
Improved first-order uncertainty method for water-quality modeling
Melching, C.S.; Anmangandla, S.
1992-01-01
Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output because of their simplicity. Each method has its drawbacks: for Monte Carlo simulation, mainly computational time; for first-order analysis, mainly accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, where the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distributions of the critical dissolved-oxygen deficit and critical dissolved oxygen, using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probabilities estimated by Monte Carlo simulation while using two orders of magnitude less computer time, regardless of the probability distributions assumed for the uncertain model parameters.
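The contrast the paper draws is easiest to see from the classical first-order (FOSM) computation itself, sketched below for the Streeter-Phelps critical deficit with invented means and variances; the advanced variant differs by moving the linearization point to the output level of interest rather than keeping it at the central values.

```python
import numpy as np

def critical_deficit(kd, ka, L0, D0=0.0):
    # Streeter-Phelps critical dissolved-oxygen deficit (mg/L).
    tc = np.log((ka / kd) * (1 - D0 * (ka - kd) / (kd * L0))) / (ka - kd)
    return (kd / ka) * L0 * np.exp(-kd * tc)

# Means and variances of the uncertain inputs (illustrative values only).
mu = np.array([0.3, 0.7, 20.0])          # kd (1/d), ka (1/d), L0 (mg/L)
var = np.array([0.003, 0.01, 4.0])

# Classical first-order analysis: linearize at the central values.
g0 = critical_deficit(*mu)
grad = np.empty(3)
for i in range(3):
    d = np.zeros(3); d[i] = 1e-5 * mu[i]
    grad[i] = (critical_deficit(*(mu + d))
               - critical_deficit(*(mu - d))) / (2 * d[i])
var_Dc = grad @ (var * grad)             # first-order output variance
print(f"Dc mean ~ {g0:.2f} mg/L, std ~ {np.sqrt(var_Dc):.2f} mg/L")
# The advanced method instead re-linearizes at each sought exceedance level.
```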
Ramos-Scharrón, Carlos E; Figueroa-Sánchez, Yasiel
2017-11-01
The combination of a topographically abrupt wet-tropical setting with the high level of soil exposure that typifies many sun-grown coffee farms represents optimal conditions for high erosion rates. Although traditionally considered a main cause of water resource degradation, limited empirical evidence has existed to document its true contribution. This study relies on plot-scale experimental results from western Puerto Rico to assess the impact of cultivated surfaces and farm access roads on runoff and sediment production from the plot to the farm and watershed scales. Results show that unsurfaced and graveled road surfaces produce one to two orders of magnitude more runoff per unit area than cultivated lands. Similarly, erosion rates from unsurfaced roads are about 10^2 g m^-2 per cm of rainfall, two orders of magnitude greater than from actively cultivated surfaces. Mitigation practices such as decompacting road surfaces by ripping and gravel application reduce onsite erosion rates to 0.6% and 8% of unsurfaced conditions, respectively. At the farm scale, coffee farms are estimated to produce sediment at a rate of 12-18 Mg ha^-1 yr^-1, and roads are undoubtedly the dominant sediment source, responsible for 59-95% of the total sediment produced. The costs associated with ameliorating erosion problems through road graveling are high. Therefore, a combined approach that treats road erosion onsite and traps sediment before it reaches river networks is the viable solution to this problem.
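A quick unit check relates the plot-scale road erosion rate to farm-scale figures (a sketch; the annual rainfall value is an invented placeholder):

```python
# 10^2 g m^-2 per cm of rain, converted to Mg ha^-1 per cm of rain:
rate_g_m2_per_cm = 100.0
rate_mg_ha_per_cm = rate_g_m2_per_cm * 1e4 / 1e6   # 10^4 m^2/ha, 10^6 g/Mg
annual_rain_cm = 150.0        # hypothetical wet-tropical annual rainfall
print(rate_mg_ha_per_cm, "Mg/ha per cm of rain")   # 1.0
print(rate_mg_ha_per_cm * annual_rain_cm, "Mg/ha/yr from bare road surface")
```

Note this is per unit of road area; the farm-scale 12-18 Mg ha^-1 yr^-1 averages road and cultivated surfaces over the whole farm.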
Systematic review of drug administration costs and implications for biopharmaceutical manufacturing.
Tetteh, Ebenezer; Morris, Stephen
2013-10-01
The acquisition costs of biologic drugs are often considered to be relatively high compared with those of nonbiologics. However, the total costs of delivering these drugs also depend on the cost of administration. Ignoring drug administration costs may distort resource allocation decisions because these affect cost effectiveness. The objectives of this systematic review were to develop a framework of drug administration costs that considers both the costs of physical administration and the associated proximal costs and, as a case example, to use this framework to evaluate administration costs for biologics within the UK National Health Service (NHS). We reviewed literature that reported estimates of administration costs for biologics within the UK NHS to identify how these costs were quantified and to examine how differences in dosage forms and regimens influenced administration costs. The literature reviewed was identified by searching the Centre for Reviews and Dissemination databases (DARE, NHS EED and HTA); EMBASE (The Excerpta Medica Database); MEDLINE (using the OVID interface); EconLit (EBSCO); the Tufts Medical Center Cost-Effectiveness Analysis (CEA) Registry; and Google Scholar. We identified 4,344 potentially relevant studies, of which 43 were selected for this systematic review. We extracted estimates of the administration costs of biologics from these studies. We found evidence of variation in the way that administration costs were measured, and this affected the magnitude of costs reported, which could then influence cost effectiveness. Our findings suggest that manufacturers of biologic medicines should pay attention to formulation issues and their impact on administration costs, because these affect the total costs of healthcare delivery and cost effectiveness.
A quality-based cost model for new electronic systems and products
NASA Astrophysics Data System (ADS)
Shina, Sammy G.; Saigal, Anil
1998-04-01
This article outlines a method for developing a quality-based cost model for the design of new electronic systems and products. The model incorporates a methodology for determining a cost-effective design margin allocation for electronic products and systems and its impact on manufacturing quality and cost. A spreadsheet-based cost estimating tool was developed to help implement this methodology in order for the system design engineers to quickly estimate the effect of design decisions and tradeoffs on the quality and cost of new products. The tool was developed with automatic spreadsheet connectivity to current process capability and with provisions to consider the impact of capital equipment and tooling purchases to reduce the product cost.
Scanning Tunneling Microscopic Characterization of an Engineered Organic Molecule
2011-08-01
attachment and wide-band MCT detector, was used. Figure 3 shows the spectra obtained for a SAM of PMNBT (top), which was compared to raw crystal PMNBT... averaged in order to reduce random noise, especially in the high bias region. Figure 4d shows the average second-order STM I-V curves of each molecule... done to avoid the low signal-to-noise ratio regime of the STM (18). Our estimated value of go for dDT is about two orders of magnitude smaller than
A white paper: Operational efficiency. New approaches to future propulsion systems
NASA Technical Reports Server (NTRS)
Rhodes, Russel; Wong, George
1991-01-01
Advanced launch systems for the next generation of space transportation systems (1995 to 2010) must deliver large payloads (125,000 to 500,000 lbs) to low earth orbit (LEO) at one tenth of today's cost, or 300 to 400 $/lb of payload. This cost represents an order of magnitude reduction from the cost of delivering payload to orbit with the Titan unmanned vehicle. To achieve this sizable reduction, both the operations cost and the engine cost must be lower than those of current engine systems. The Advanced Launch System (ALS) is studying advanced engine designs, such as the Space Transportation Main Engine (STME), which has achieved a notable reduction in cost. The results are presented of a current study wherein another level of cost reduction can be achieved by designing the propulsion module utilizing these advanced engines for enhanced operations efficiency and reduced operations cost.
Developing a lower-cost atmospheric CO2 monitoring system using commercial NDIR sensor
NASA Astrophysics Data System (ADS)
Arzoumanian, E.; Bastos, A.; Gaynullin, B.; Laurent, O.; Vogel, F. R.
2017-12-01
Cities release to the atmosphere about 44% of global energy-related CO2. It is clear that accurate estimates of the magnitude of anthropogenic and natural urban emissions are needed to assess their influence on the carbon balance. A dense ground-based CO2 monitoring network in cities would potentially allow retrieving sector-specific CO2 emission estimates when combined with an atmospheric inversion framework using reasonably accurate observations (ca. 1 ppm for hourly means). One major barrier to denser observation networks can be the high cost of high-precision instruments or the high calibration cost of cheaper, unstable instruments. We have developed and tested a novel inexpensive NDIR sensor for CO2 measurements which fulfils the cost and performance requirements (i.e. signal stability, efficient handling, and connectivity) necessary for this task. Such sensors are essential for emission estimates in cities from continuous monitoring networks, as well as for leak detection in MRV (monitoring, reporting, and verification) services for industrial sites. We conducted extensive laboratory tests (short- and long-term repeatability, cross-sensitivities, etc.) on a series of prototypes, and the final versions were also tested in a climatic chamber. On four final HPP prototypes, the sensitivities to pressure and temperature were precisely quantified and correction and calibration strategies developed. Furthermore, we fully integrated these HPP sensors into a Raspberry Pi platform containing the CO2 sensor, additional sensors (pressure, temperature and humidity), a gas supply pump and a fully automated data acquisition unit. This platform was deployed in parallel with Picarro G2401 instruments at the peri-urban site Saclay, next to Paris, and at the urban site Jussieu, in Paris, France. These measurements were conducted over several months in order to characterize the long-term drift of our HPP instruments and the ability of the correction and calibration scheme to provide bias-free observations. From the lessons learned in the laboratory tests and field measurements, we developed a specific correction and calibration strategy for our NDIR sensors. The latest results and calibration strategies will be shown.
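A correction-and-calibration chain for such a sensor typically looks like the following hypothetical sketch: subtract empirically determined temperature and pressure sensitivities, then apply a two-point span calibration against reference gases. All coefficients and readings below are invented placeholders, not the values derived for the HPP prototypes.

```python
# Hypothetical empirical correction and two-point calibration for a raw
# NDIR CO2 reading; coefficients are placeholders for illustration.
def correct(raw_ppm, T_K, p_hPa, aT=0.30, ap=0.05):
    # Remove linear temperature and pressure sensitivities determined in
    # climate-chamber tests (reference: 298.15 K, 1013.25 hPa).
    return raw_ppm - aT * (T_K - 298.15) - ap * (p_hPa - 1013.25)

# Two-point span calibration against reference cylinders.
low, high = 380.0, 450.0                  # assigned cylinder values, ppm
m_low = correct(377.1, 296.4, 1009.0)     # corrected readings of cylinders
m_high = correct(447.9, 296.2, 1008.7)
gain = (high - low) / (m_high - m_low)
offset = low - gain * m_low

def calibrated(raw_ppm, T_K, p_hPa):
    return gain * correct(raw_ppm, T_K, p_hPa) + offset

print(f"{calibrated(402.5, 295.8, 1010.2):.1f} ppm")
```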
Sá, Luísa; Costa-Santos, Cristina; Teixeira, Andreia; Couto, Luciana; Costa-Pereira, Altamiro; Hespanhol, Alberto; Santos, Paulo; Martins, Carlos
2015-01-01
Background Physicians' ability to make cost-effective decisions has been shown to be affected by their knowledge of health care costs. This study assessed whether Portuguese family physicians are aware of the costs of the most frequently prescribed diagnostic and laboratory tests. Methods A cross-sectional study was conducted in a representative sample of Portuguese family physicians, using computer-assisted telephone interviews for data collection. A Likert scale was used to assess physicians' level of agreement with four statements about health care costs. Family physicians were also asked to estimate the costs of diagnostic and laboratory tests. Each physician's cost estimate was compared with the true cost and the absolute error was calculated. Results One-quarter (24%; 95% confidence interval: 23%-25%) of all cost estimates were accurate to within 25% of the true cost, with 55% (95% CI: 53-56) overestimating and 21% (95% CI: 20-22) underestimating the true cost. The majority (76%) of family physicians thought they did not have, or were uncertain as to whether they had, adequate knowledge of diagnostic and laboratory test costs, and only 7% reported receiving adequate education. The majority of the family physicians (82%) said that they had adequate access to information about diagnostic and laboratory test costs. Thirty-three percent thought that costs did not influence their decision to order tests, while 27% were uncertain. Conclusions Portuguese family physicians have limited awareness of diagnostic and laboratory test costs, and our results demonstrate a need for improved education in this area. Further research should focus on identifying whether interventions in cost knowledge actually change ordering behavior, on identifying optimal methods to disseminate cost information, and on improving the cost-effectiveness of care. PMID:26356625
NASA Astrophysics Data System (ADS)
Hamilton, Joel; Whittlesey, Norman K.; Robison, M. Henry; Willis, David
2002-08-01
This analysis addresses three important conceptual problems in the measurement of direct and indirect costs and benefits: (1) the distribution of impacts between a regional economy and the encompassing state economy; (2) the distinction between indirect impacts and indirect costs (IC), focusing on the dynamic time path unemployed resources follow to find alternative employment; and (3) the distinction among the affected firms' microeconomic categories of fixed and variable costs as they are used to compute regional direct and indirect costs. It uses empirical procedures that reconcile the usual measures of economic impact provided by input/output models with the estimates of economic costs and benefits required for analysis of welfare changes. The paper illustrates the relationships and magnitudes involved in the context of water policy issues facing the Pecos River Basin of New Mexico.
The Asia-Pacific effects of a megatsunami along the Tonga Trench
NASA Astrophysics Data System (ADS)
Schaefer, Andreas; Daniell, James; Wenzel, Friedemann
2015-04-01
A megatsunami generated by a M>9.0 earthquake along the Tonga Trench would have far-reaching consequences for four continents, with exposure ranging from the cities of Sydney and Brisbane to the coastlines of Japan, Canada, the USA, and South America, not to mention the Pacific Islands. Using the TSUDAT software of Geoscience Australia, relevant scenarios are selected for the location. Fault mechanics and the possible regime are then examined to create the scenario. In this study, the effects of a megatsunami scenario are investigated, including the run-up heights in coastal regions on these four continents in addition to other hazard effects. Global-level DEM and bathymetry data are used to provide a first estimate of the exposed population, built infrastructure (capital stock), and GDP in the tsunami inundation area. The uncertainties of such a study are taken into account by adjusting the scenario via source mechanism, magnitude range, and directivity effects. This is combined with basic vulnerability functions from historical tsunamis in order to give an estimate of exposure, loss, and reconstruction cost across the Pacific Rim. Notes on warning times, country preparedness, and tsunami evacuation plans are also made, given the long lead times in some cases.
Electroencephalography in ellipsoidal geometry with fourth-order harmonics.
Alcocer-Sosa, M; Gutierrez, D
2016-08-01
We present a solution to the electroencephalographic (EEG) forward problem of computing the scalp electric potentials for the case when the head's geometry is modeled using a four-shell ellipsoidal geometry and the brain sources with an equivalent current dipole (ECD). The proposed solution includes terms up to the fourth-order ellipsoidal harmonics, and we compare this new approximation against those that consider only up to second- and third-order harmonics. Our comparisons use as reference a solution in which a tessellated volume approximates the head and the forward problem is solved through the boundary element method (BEM). We also assess the solution to the inverse problem of estimating the magnitude of an ECD through the different harmonic approximations. Our results show that the fourth-order solution provides a better estimate of the ECD in comparison to lower-order ones.
The Value of Enhanced NEO Surveys
NASA Astrophysics Data System (ADS)
Harris, Alan W.
2012-10-01
NEO surveys have now achieved, more or less, the “Spaceguard Goal” of cataloging 90% of NEAs larger than 1 km in diameter, and thereby have reduced the short-term hazard from cosmic impacts by about an order of magnitude, from an actuarial estimate of 1,000 deaths per year (actually about a billion every million years, with very little in between), to about 100 deaths per year, with a shift toward smaller but more frequent events accounting for the remaining risk. It is fair to ask, then, what the value is of a next-generation accelerated survey to “retire” much of the remaining risk. The curve of completion of survey versus size of NEA is remarkably similar for any survey, ground- or space-based, visible light or thermal IR, so it is possible to integrate risk over all sizes, with a time-variable curve of completion, to evaluate the actuarial value of speeding up survey completion. I will present my latest estimates of the NEA population and the completion of surveys. From those I will estimate the “value” of accelerated surveys such as Pan-STARRS, LSST, or space-based surveys, versus continuing with current surveys. My tentative conclusion is that we may have already reached the point, in terms of cost-benefit, where accelerated surveys are not cost-effective in terms of reducing impact risk. If we have not reached it yet, we soon will. On the other hand, the surveys, which find and catalog main-belt and other classes of small bodies as well as NEOs, have provided a gold mine of good science. The scientific value of continued or accelerated surveys needs to be emphasized as the impact risk is increasingly “retired.”
NASA Technical Reports Server (NTRS)
Wilcox, Brian H.; Schneider, Evan G.; Vaughan, David A.; Hall, Jeffrey L.; Yu, Chi Yau
2011-01-01
As we have previously reported, it may be possible to launch payloads into low-Earth orbit (LEO) at a per-kilogram cost that is one to two orders of magnitude lower than current launch systems, using only a relatively small capital investment (comparable to a single large present-day launch). An attractive payload would be large quantities of high-performance chemical rocket propellant (e.g. Liquid Oxygen/Liquid Hydrogen (LO2/LH2)) that would greatly facilitate, if not enable, extensive exploration of the moon, Mars, and beyond.
Strain effects on oxygen migration in perovskites.
Mayeshiba, Tam; Morgan, Dane
2015-01-28
Fast oxygen transport materials are necessary for a range of technologies, including efficient and cost-effective solid oxide fuel cells, gas separation membranes, oxygen sensors, chemical looping devices, and memristors. Strain is often proposed as a method to enhance the performance of oxygen transport materials, but the magnitude of its effect and its underlying mechanisms are not well-understood, particularly in the widely-used perovskite-structured oxygen conductors. This work reports on an ab initio prediction of strain effects on migration energetics for nine perovskite systems of the form LaBO3, where B = [Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Ga]. Biaxial strain, as might be easily produced in epitaxial systems, is predicted to lead to approximately linear changes in migration energy. We find that tensile biaxial strain reduces the oxygen vacancy migration barrier across the systems studied by an average of 66 meV per percent strain for a single selected hop, with a low of 36 and a high of 89 meV decrease in migration barrier per percent strain across all systems. The estimated range for the change in migration barrier within each system is ±25 meV per percent strain when considering all hops. These results suggest that strain can significantly impact transport in these materials, e.g., a 2% tensile strain can increase the diffusion coefficient by about three orders of magnitude at 300 K (one order of magnitude at 500 °C or 773 K) for one of the most strain-responsive materials calculated here (LaCrO3). We show that a simple elasticity model, which assumes only dilative or compressive strain in a cubic environment and a fixed migration volume, can qualitatively but not quantitatively model the strain dependence of the migration energy, suggesting that factors not captured by continuum elasticity play a significant role in the strain response.
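As a rough illustration of the abstract's closing numbers, the sketch below (not from the paper; the Arrhenius form and constants are my assumptions) converts a strain-induced reduction in migration barrier into a multiplicative diffusion enhancement via D ∝ exp(-E_m/kT).

```python
# A minimal sketch of the Arrhenius estimate implied by the abstract: a strain-
# induced barrier reduction multiplies the diffusion coefficient.
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def diffusion_enhancement(d_barrier_mev_per_pct: float, strain_pct: float, temp_k: float) -> float:
    """Factor by which D increases when tensile strain lowers the migration barrier."""
    delta_e_ev = d_barrier_mev_per_pct * strain_pct / 1000.0  # total barrier reduction, eV
    return math.exp(delta_e_ev / (K_B * temp_k))

# Most strain-responsive system named in the abstract (~89 meV per % strain), at 2% strain:
for T in (300.0, 773.0):
    print(f"T = {T:.0f} K: enhancement ~ {diffusion_enhancement(89.0, 2.0, T):.0e}")
# ~1e3 at 300 K and ~1e1 at 773 K, consistent with the abstract's three and one
# orders of magnitude.
```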
May, Peter; Garrido, Melissa M; Cassel, J Brian; Morrison, R Sean; Normand, Charles
2016-10-01
To evaluate the sensitivity of treatment effect estimates when length of stay (LOS) is used to control for unobserved heterogeneity in estimating the treatment effect on cost of hospital admission with observational data. We used data from a prospective cohort study on the impact of palliative care consultation teams (PCCTs) on the direct cost of hospital care. Adult patients with an advanced cancer diagnosis admitted to five large medical and cancer centers in the United States between 2007 and 2011 were eligible for this study. Costs were modeled using generalized linear models with a gamma distribution and a log link. We compared variability in estimates of PCCT impact on hospitalization costs when LOS was used as a covariate, as a sample parameter, and as an outcome denominator. We used propensity scores to account for patient characteristics associated with both PCCT use and total direct hospitalization costs. We analyzed data from hospital cost databases, medical records, and questionnaires. Our propensity score weighted sample included 969 patients who were discharged alive. In analyses of hospitalization costs, treatment effect estimates are highly sensitive to methods that control for LOS, complicating interpretation. Both the magnitude and significance of results varied widely with the method of controlling for LOS. When we incorporated intervention timing into our analyses, results were robust to LOS-controls. Treatment effect estimates using LOS-controls are suboptimal not only in terms of reliability (given concerns over endogeneity and bias) and usefulness (given the need to validate the cost-effectiveness of an intervention using overall resource use for a sample defined at baseline) but also in terms of robustness (results depend on the approach taken, and there is little evidence to guide this choice). To derive results that minimize endogeneity concerns and maximize external validity, investigators should match and analyze treatment and comparison arms on baseline factors only. Incorporating intervention timing may deliver results that are more reliable, more robust, and more useful than those derived using LOS-controls. © Health Research and Educational Trust.
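For readers unfamiliar with the model class named above, here is a minimal sketch with synthetic data; the coefficients and variable names are hypothetical, and this is the LOS-as-covariate specification whose reliability the authors question, not their actual analysis.

```python
# A minimal sketch of a gamma GLM with a log link for hospitalization cost,
# with treatment and LOS as covariates (synthetic data, invented coefficients).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
treated = rng.integers(0, 2, n)                  # e.g., palliative care consultation
los = rng.gamma(shape=2.0, scale=4.0, size=n)    # length of stay, days
mu = np.exp(8.0 + 0.10 * los - 0.15 * treated)   # true mean cost (hypothetical)
cost = rng.gamma(shape=2.0, scale=mu / 2.0)      # gamma-distributed costs

X = sm.add_constant(np.column_stack([treated, los]))
model = sm.GLM(cost, X, family=sm.families.Gamma(link=sm.families.links.Log()))
result = model.fit()
print(result.params)  # exp(coef on treated) ~ multiplicative treatment effect on cost
```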
Keller, Virginie D J; Williams, Richard J; Lofthouse, Caryn; Johnson, Andrew C
2014-02-01
Dilution factors are a critical component in estimating concentrations of so-called "down-the-drain" chemicals (e.g., pharmaceuticals) in rivers. The present study estimated the temporal and spatial variability of dilution factors around the world using geographically referenced data sets at 0.5° × 0.5° resolution. Domestic wastewater effluents were derived from national per capita domestic water use estimates and gridded population. Monthly and annual river flows were estimated by accumulating runoff estimates using topographically derived flow directions. National statistics, including the median and interquartile range, were generated to quantify dilution factors. Spatial variability of the dilution factor was found to be considerable; for example, there are 4 orders of magnitude in annual median dilution factor between Canada and Morocco. Temporal variability within a country can also be substantial; in India, there are up to 9 orders of magnitude between median monthly dilution factors. These national statistics provide a global picture of the temporal and spatial variability of dilution factors and, hence, of the potential exposure to down-the-drain chemicals. The present methodology has potential for a wide international community (including decision makers and pharmaceutical companies) to assess relative exposure to down-the-drain chemicals released by human pollution in rivers and, thus, target areas of potentially high risk. © 2013 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC.
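A minimal sketch of the dilution-factor bookkeeping described above, under the assumption (mine, consistent with the abstract's description) that the factor is accumulated river flow divided by domestic wastewater effluent in each grid cell; all numbers are invented.

```python
# Per-cell dilution factors and national summary statistics (median, IQR),
# mirroring the statistics the abstract reports. Inputs are hypothetical.
import numpy as np

def dilution_factors(river_flow_m3s: np.ndarray, effluent_m3s: np.ndarray) -> np.ndarray:
    """Dimensionless dilution factor per grid cell: river flow / wastewater effluent."""
    return river_flow_m3s / effluent_m3s

def national_stats(df: np.ndarray) -> tuple:
    """Median and interquartile range of cell-level dilution factors."""
    q25, q50, q75 = np.percentile(df, [25, 50, 75])
    return q50, q25, q75

# Hypothetical cells: effluent derived from per-capita water use x gridded population.
flow = np.array([120.0, 35.0, 900.0, 4.2])   # accumulated runoff, m3/s
effluent = np.array([0.8, 0.5, 1.1, 0.9])    # domestic effluent, m3/s
median, q25, q75 = national_stats(dilution_factors(flow, effluent))
print(f"median {median:.0f}, IQR [{q25:.0f}, {q75:.0f}]")
```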
Entanglement is a costly life-history stage in large whales.
van der Hoop, Julie; Corkeron, Peter; Moore, Michael
2017-01-01
Individuals store energy to balance deficits in natural cycles; however, unnatural events can also lead to unbalanced energy budgets. Entanglement in fishing gear is one example of an unnatural but relatively common circumstance that imposes energetic demands of a similar order of magnitude and duration as life-history events such as migration and pregnancy in large whales. We present two complementary bioenergetic approaches to estimate the energy associated with entanglement in North Atlantic right whales, and compare these estimates to the natural energetic life history of individual whales. Differences in measured blubber thicknesses and estimated blubber volumes between normal whales and entangled, emaciated whales indicate that between 7.4 × 10^10 J and 1.2 × 10^11 J of energy are consumed over the course of a lethal entanglement. Increased thrust power requirements to overcome drag forces suggest that when entangled, whales require 3.95 × 10^9 to 4.08 × 10^10 J more energy to swim. Individuals who died from their entanglements performed significantly more work (energy expenditure × time) than those that survived; entanglement duration is therefore critical in determining whales' survival. Significant sublethal energetic impacts also occur, especially in reproductive females. Drag from fishing gear contributes up to 8% of the 4-year female reproductive energy budget, delaying the time of energetic equilibrium (to restore energy lost to a particular entanglement) for reproduction by months to years. In certain populations, chronic entanglement in fishing gear can be viewed as a costly unnatural life-history stage, rather than a rare or short-term incident.
NASA Astrophysics Data System (ADS)
Turner, D.
2014-12-01
Understanding the potential economic and physical impacts of climate change on coastal resources involves evaluating a number of distinct adaptive responses. This paper presents a tool for such analysis, a spatially-disaggregated optimization model for adaptation to sea level rise (SLR) and storm surge, the Coastal Impact and Adaptation Model (CIAM). This decision-making framework fills a gap between very detailed studies of specific locations and overly aggregate global analyses. While CIAM is global in scope, the optimal adaptation strategy is determined at the local level, evaluating over 12,000 coastal segments as described in the DIVA database (Vafeidis et al. 2006). The decision to pursue a given adaptation measure depends on local socioeconomic factors like income, population, and land values and how they develop over time, relative to the magnitude of potential coastal impacts, based on geophysical attributes like inundation zones and storm surge. For example, the model's decision to protect or retreat considers the costs of constructing and maintaining coastal defenses versus those of relocating people and capital to minimize damages from land inundation and coastal storms. Uncertain storm surge events are modeled with a generalized extreme value distribution calibrated to data on local surge extremes. Adaptation is optimized for the near-term outlook, in an "act then learn then act" framework that is repeated over the model time horizon. This framework allows the adaptation strategy to be flexibly updated, reflecting the process of iterative risk management. CIAM provides new estimates of the economic costs of SLR; moreover, these detailed results can be compactly represented in a set of adaptation and damage functions for use in integrated assessment models. Alongside the optimal result, CIAM evaluates suboptimal cases and finds that global costs could increase by an order of magnitude, illustrating the importance of adaptive capacity and coastal policy.
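One computational ingredient of CIAM lends itself to a short sketch: expected annual damage from storm surge drawn from a generalized extreme value distribution, compared across protection options. The GEV parameters, damage function, and dike heights below are hypothetical placeholders, not CIAM's calibrated values.

```python
# Monte Carlo estimate of expected annual overtopping damage under a GEV surge
# distribution, for a few hypothetical dike heights.
import numpy as np
from scipy.stats import genextreme

# Hypothetical GEV fit to local annual surge maxima (scipy's c = -xi convention).
surge = genextreme(c=-0.1, loc=1.2, scale=0.4)  # metres

def expected_damage(dike_height_m: float, damage_per_m: float = 2e6,
                    n: int = 100_000, seed: int = 0) -> float:
    """Mean annual damage from surge exceeding the dike crest (invented damage rate)."""
    rng = np.random.default_rng(seed)
    s = surge.rvs(size=n, random_state=rng)
    overtop = np.clip(s - dike_height_m, 0.0, None)
    return float(np.mean(overtop) * damage_per_m)

for h in (0.0, 1.5, 2.5):
    print(f"dike {h:.1f} m: expected annual damage ~ ${expected_damage(h):,.0f}")
# Comparing these damages with construction and maintenance costs is the kind of
# protect-versus-retreat trade-off the model optimizes per coastal segment.
```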
Sengupta, Biswa; Laughlin, Simon Barry; Niven, Jeremy Edward
2014-01-01
Information is encoded in neural circuits using both graded and action potentials, converting between them within single neurons and successive processing layers. This conversion is accompanied by information loss and a drop in energy efficiency. We investigate the biophysical causes of this loss of information and efficiency by comparing spiking neuron models, containing stochastic voltage-gated Na(+) and K(+) channels, with generator potential and graded potential models lacking voltage-gated Na(+) channels. We identify three causes of information loss in the generator potential that are the by-product of action potential generation: (1) the voltage-gated Na(+) channels necessary for action potential generation increase intrinsic noise and (2) introduce non-linearities, and (3) the finite duration of the action potential creates a 'footprint' in the generator potential that obscures incoming signals. These three processes reduce information rates by ∼50% in generator potentials, to ∼3 times that of spike trains. Both generator potentials and graded potentials consume almost an order of magnitude less energy per second than spike trains. Because of the lower information rates of generator potentials they are substantially less energy efficient than graded potentials. However, both are an order of magnitude more efficient than spike trains due to the higher energy costs and low information content of spikes, emphasizing that there is a two-fold cost of converting analogue to digital; information loss and cost inflation.
Spatial patterns of mixing in the Solomon Sea
NASA Astrophysics Data System (ADS)
Alberty, M. S.; Sprintall, J.; MacKinnon, J.; Ganachaud, A.; Cravatte, S.; Eldin, G.; Germineaud, C.; Melet, A.
2017-05-01
The Solomon Sea is a marginal sea in the southwest Pacific that connects subtropical and equatorial circulation, constricting transport of South Pacific Subtropical Mode Water and Antarctic Intermediate Water through its deep, narrow channels. Marginal sea topography inhibits internal waves from propagating out and into the open ocean, making these regions hot spots for energy dissipation and mixing. Data from two hydrographic cruises and from Argo profiles are employed to indirectly infer mixing from observations for the first time in the Solomon Sea. Thorpe and finescale methods indirectly estimate the rate of dissipation of kinetic energy (ɛ) and indicate that it is maximum in the surface and thermocline layers and decreases by 2-3 orders of magnitude by 2000 m depth. Estimates of diapycnal diffusivity from the observations and a simple diffusive model agree in magnitude but have different depth structures, likely reflecting the combined influence of both diapycnal mixing and isopycnal stirring. Spatial variability of ɛ is large, spanning at least 2 orders of magnitude within isopycnal layers. Seasonal variability of ɛ reflects regional monsoonal changes in large-scale oceanic and atmospheric conditions with ɛ increased in July and decreased in March. Finally, tide power input and topographic roughness are well correlated with mean spatial patterns of mixing within intermediate and deep isopycnals but are not clearly correlated with thermocline mixing patterns.
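The Thorpe method mentioned above reduces to a short calculation: re-sort a density profile into its gravitationally stable order, take the rms displacement as the Thorpe scale L_T, and apply the standard scaling ε ≈ (0.8 L_T)² N³. The sketch below is a generic textbook version with invented profile values, not the authors' processing chain.

```python
# Thorpe-scale dissipation estimate for one overturning patch of a density profile.
import numpy as np

def thorpe_epsilon(depth_m: np.ndarray, density: np.ndarray) -> float:
    """Dissipation rate (W/kg) via the Thorpe-Ozmidov scaling eps ~ (0.8*L_T)^2 * N^3."""
    order = np.argsort(density)                   # gravitationally stable re-sorting
    displacements = depth_m - depth_m[order]      # Thorpe displacements
    l_t = np.sqrt(np.mean(displacements**2))      # Thorpe scale: rms displacement
    drho_dz = np.gradient(density[order], depth_m)  # sorted-profile stratification
    n2 = np.clip(9.81 / 1025.0 * drho_dz, 1e-10, None)
    n = np.sqrt(np.mean(n2))                      # mean buoyancy frequency, 1/s
    return float((0.8 * l_t) ** 2 * n**3)

# Hypothetical 1-m-resolution patch containing a small density inversion:
z = np.arange(500.0, 510.0)
rho = np.array([1027.10, 1027.12, 1027.16, 1027.14, 1027.13,
                1027.18, 1027.20, 1027.21, 1027.23, 1027.25])
print(f"eps ~ {thorpe_epsilon(z, rho):.2e} W/kg")
```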
Evidence on the cost of breast cancer drugs is required for rational decision making.
Berghuis, Anne Margreet Sofie; Koffijberg, Hendrik; Terstappen, Leonardus Wendelinus Mathias Marie; Sleijfer, Stefan; IJzerman, Maarten Joost
2018-01-01
For rational decision making, assessing the cost-effectiveness and budget impact of new drugs and comparing the costs of drugs already on the market is required. In addition to value frameworks, such as the American Society of Clinical Oncology Value Framework and the European Society for Medical Oncology Magnitude of Clinical Benefit Scale, this also requires a transparent overview of actual drug prices. While list prices are available, evidence on treatment cost is not. This paper aims to synthesise evidence on the reimbursement and costs of high-cost breast cancer drugs in the Netherlands (NL). A literature review was performed to identify currently reimbursed breast cancer drugs in the NL. Treatment costs were determined by multiplying list prices with the average length of treatment and dosing schedule. Comparing list prices to the estimated treatment costs resulted in substantial differences in the ranking of the drugs by costliness. The average treatment length was unknown for 11/31 breast cancer drugs (26.2%). The differences among the 15 highest-cost drugs were largest for bevacizumab, lapatinib, and everolimus, with list prices of €541, €158, and €1,168 and estimated treatment costs of €174,400, €18,682, and €31,207, respectively. The lowest-cost (patented) targeted drug is €1,818 more expensive than the highest-cost (off-patent) generic drug according to the estimated drug treatment costs. A lack of evidence on the reimbursement and cost of high-cost breast cancer drugs complicates the rapid and transparent evidence synthesis necessary to focus strategies aiming to limit increasing healthcare costs. Interestingly, the findings show that off-patent generics (such as paclitaxel or doxorubicin), although substantially cheaper than patented drugs, are still relatively costly. Extending standardisation and increasing European and national regulation of how costs per cancer drug are presented is highly recommended.
A Framework for Automating Cost Estimates in Assembly Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calton, T.L.; Peters, R.R.
1998-12-09
When a product concept emerges, the manufacturing engineer is asked to sketch out a production strategy and estimate its cost. The engineer is given an initial product design, along with a schedule of expected production volumes. The engineer then determines the best approach to manufacturing the product, comparing a variety of alternative production strategies. The engineer must consider capital cost, operating cost, lead time, and other issues in an attempt to maximize profits. After making these basic choices and sketching the design of overall production, the engineer produces estimates of the required capital, operating costs, and production capacity. This process may iterate as the product design is refined in order to improve its performance or manufacturability. The focus of this paper is on the development of computer tools to aid manufacturing engineers in their decision-making processes. This computer software tool provides a framework in which accurate cost estimates can be seamlessly derived from design requirements at the start of any engineering project. The result is faster cycle times through first-pass success; lower life cycle cost due to requirements-driven design and accurate cost estimates derived early in the process.
Ploch, Caitlin C; Mansi, Chris S S A; Jayamohan, Jayaratnam; Kuhl, Ellen
2016-06-01
Three-dimensional (3D) printing holds promise for a wide variety of biomedical applications, from surgical planning, practicing, and teaching to creating implantable devices. The growth of this cheap and easy additive manufacturing technology in orthopedic, plastic, and vascular surgery has been explosive; however, its potential in the field of neurosurgery remains underexplored. A major limitation is that current technologies are unable to directly print ultrasoft materials like human brain tissue. In this technical note, the authors present a new technology to create deformable, personalized models of the human brain. The method combines 3D printing, molding, and casting to create a physiologically, anatomically, and tactilely realistic model based on magnetic resonance images. Created from soft gelatin, the model is easy to produce, cost-efficient, durable, and orders of magnitude softer than conventionally printed 3D models. The personalized brain model cost $50, and its fabrication took 24 hours. In mechanical tests, the model stiffness (E = 25.29 ± 2.68 kPa) was 5 orders of magnitude softer than common 3D printed materials, and less than an order of magnitude stiffer than mammalian brain tissue (E = 2.64 ± 0.40 kPa). In a multicenter surgical survey, model size (100.00%), visual appearance (83.33%), and surgical anatomy (81.25%) were perceived as very realistic. The model was perceived as very useful for patient illustration (85.00%), teaching (94.44%), learning (100.00%), surgical training (95.00%), and preoperative planning (95.00%). With minor refinements, personalized, deformable brain models created via 3D printing will improve surgical training and preoperative planning with the ultimate goal to provide accurate, customized, high-precision treatment. Copyright © 2016 Elsevier Inc. All rights reserved.
Systems engineering and integration: Cost estimation and benefits analysis
NASA Technical Reports Server (NTRS)
Dean, ED; Fridge, Ernie; Hamaker, Joe
1990-01-01
Space Transportation Avionics hardware and software cost has traditionally been estimated in Phase A and B using cost techniques which predict cost as a function of various cost predictive variables such as weight, lines of code, functions to be performed, quantities of test hardware, quantities of flight hardware, design and development heritage, complexity, etc. The output of such analyses has been life cycle costs, economic benefits and related data. The major objectives of Cost Estimation and Benefits analysis are twofold: (1) to play a role in the evaluation of potential new space transportation avionics technologies, and (2) to benefit from emerging technological innovations. Both aspects of cost estimation and technology are discussed here. The role of cost analysis in the evaluation of potential technologies should be one of offering additional quantitative and qualitative information to aid decision-making. The cost analyses process needs to be fully integrated into the design process in such a way that cost trades, optimizations and sensitivities are understood. Current hardware cost models tend to primarily use weights, functional specifications, quantities, design heritage and complexity as metrics to predict cost. Software models mostly use functionality, volume of code, heritage and complexity as cost descriptive variables. Basic research needs to be initiated to develop metrics more responsive to the trades which are required for future launch vehicle avionics systems. These would include cost estimating capabilities that are sensitive to technological innovations such as improved materials and fabrication processes, computer aided design and manufacturing, self checkout and many others. In addition to basic cost estimating improvements, the process must be sensitive to the fact that no cost estimate can be quoted without also quoting a confidence associated with the estimate. In order to achieve this, better cost risk evaluation techniques are needed as well as improved usage of risk data by decision-makers. More and better ways to display and communicate cost and cost risk to management are required.
NASA Astrophysics Data System (ADS)
Vaishnav, Parth; Horner, Nathaniel; Azevedo, Inês L.
2017-09-01
We estimate the lifetime magnitude and distribution of the private and public benefits and costs of currently installed distributed solar PV systems in the United States. Using data for recently-installed systems, we estimate the balance of benefits and costs associated with installing a non-utility solar PV system today. We also study the geographical distribution of the various subsidies that are made available to owners of rooftop solar PV systems, and compare it to the distributions of population and income. We find that, after accounting for federal subsidies and local rebates and assuming a discount rate of 7%, the private benefits of new installations will exceed private costs in only seven of the 19 states for which we have data, and only if customers can sell excess power to the electric grid at the retail price. These states are characterized by abundant sunshine (California, Texas and Nevada) or by high electricity prices (New York). Public benefits from reduced air pollution and climate change impact exceed the costs of the various subsidies offered to system owners for less than 10% of the systems installed, even assuming a 2% discount rate. Subsidies flowed disproportionately to counties with higher median incomes in 2006. In 2014, the distribution of subsidies was closer to the distributions of population and income, but subsidies still flowed disproportionately to the better-off. The total, upfront subsidy per kilowatt of installed capacity has fallen from $5200 in 2006 to $1400 in 2014, but the absolute magnitude of subsidy has soared as installed capacity has grown explosively. We see considerable differences in the balance of costs and benefits even within states, indicating that local factors such as system price and solar resource are important, and that policies (e.g. net metering) could be made more efficient by taking local conditions into account.
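The private cost-benefit test described above comes down to discounting: the same cash flows can fail at a 7% discount rate and pass at 2%. A minimal sketch with hypothetical system numbers (not the study's data):

```python
# Net present value of a rooftop PV system's avoided electricity purchases,
# compared at the two discount rates the abstract uses. Inputs are invented.
def npv(annual_cashflow: float, years: int, rate: float) -> float:
    """Present value of a constant annual cash flow over a fixed horizon."""
    return sum(annual_cashflow / (1 + rate) ** t for t in range(1, years + 1))

install_cost = 15_000.0    # net of subsidies, hypothetical
annual_benefit = 1_100.0   # avoided purchases at the retail price, hypothetical
for rate in (0.07, 0.02):
    print(f"discount {rate:.0%}: NPV = ${npv(annual_benefit, 25, rate) - install_cost:,.0f}")
# With these inputs the system fails at 7% but passes comfortably at 2%.
```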
Mammography screening: how important is cost as a barrier to use?
Urban, N; Anderson, G L; Peacock, S
1994-01-01
OBJECTIVES. Recent legislation will improve insurance coverage for screening mammography and effectively lower its cost to many women. Although cost has been cited as a barrier to use, evidence of the magnitude of its effect on use is limited. METHODS. Mammography use in the past 2 years among women aged 50 to 75 residing in four suburban or rural counties in Washington State was estimated from 1989 survey data. Logistic regression analysis was used to estimate the odds ratio of mammography use as a function of economic and other variables. Within a residential area, averages were used to measure the market price of mammography and the time cost to obtain a mammogram. RESULTS. Use was lower among women who faced a higher net price or who preferred to obtain a mammogram during weekend or evening hours and higher among women with higher incomes. Visiting no doctor regularly and smoking were predictors of failure to use mammography. CONCLUSION. The effects of economic variables on mammography use are important and stable across subsets of the population, but they are modest in size. PMID:8279611
Wong, Charlene A; Kulhari, Sajal; McGeoch, Ellen J; Jones, Arthur T; Weiner, Janet; Polsky, Daniel; Baker, Tom
2018-05-29
The design of the Affordable Care Act's (ACA) health insurance marketplaces influences complex health plan choices. To compare the choice environments of the public health insurance exchanges in the fourth (OEP4) versus third (OEP3) open enrollment period and to examine online marketplaces run by private companies, including a comparison of total cost estimates. In November-December 2016, we examined the public and private online health insurance exchanges. We navigated each site for "real-shopping" (personal information required) and "window-shopping" (no required personal information). Public (n = 13; 12 state-based marketplaces and HealthCare.gov) and private (n = 23) online health insurance exchanges. Features included consumer decision aids (e.g., total cost estimators, provider lookups) and plan display (e.g., order of plans). We examined private health insurance exchanges for notable features (i.e., those not found on public exchanges) and compared the total cost estimates on public versus private exchanges for a standardized consumer. Nearly all studied consumer decision aids saw increased deployment in the public marketplaces in OEP4 compared to OEP3. Over half of the public exchanges (n = 7 of 13) had total cost estimators (versus 5 of 14 in OEP3) in window-shopping and integrated provider lookups (window-shopping: 7; real-shopping: 8). The most common default plan orders were by premium or total cost estimate. Notable features on private health insurance exchanges were unique data presentation (e.g., infographics) and further personalized shopping (e.g., recommended plan flags). Health plan total cost estimates varied substantially between the public and private exchanges (average difference $1526). The ACA's public health insurance exchanges offered more tools in OEP4 to help consumers select a plan. While private health insurance exchanges presented notable features, the total cost estimates for a standardized consumer varied widely on public versus private exchanges.
Evaluation of the Scottsdale Loop 101 automated speed enforcement demonstration program.
Shin, Kangwon; Washington, Simon P; van Schalkwyk, Ida
2009-05-01
Speeding is recognized as a major contributing factor in traffic crashes. In order to reduce speed-related crashes, the city of Scottsdale, Arizona implemented the first fixed-camera photo speed enforcement program (SEP) on a limited access freeway in the US. The 9-month demonstration program spanning from January 2006 to October 2006 was implemented on a 6.5 mile urban freeway segment of Arizona State Route 101 running through Scottsdale. This paper presents the results of a comprehensive analysis of the impact of the SEP on speeding behavior, crashes, and the economic impact of crashes. The impact on speeding behavior was estimated using generalized least square estimation, in which the observed speeds and the speeding frequencies during the program period were compared to those during other periods. The impact of the SEP on crashes was estimated using 3 evaluation methods: a before-and-after (BA) analysis using a comparison group, a BA analysis with traffic flow correction, and an empirical Bayes BA analysis with time-variant safety. The analysis results reveal that speeding detection frequencies (speeds ≥ 76 mph) increased by a factor of 10.5 after the SEP was (temporarily) terminated. Average speeds in the enforcement zone were reduced by about 9 mph when the SEP was implemented, after accounting for the influence of traffic flow. All crash types were reduced except rear-end crashes, although the estimated magnitude of impact varies across estimation methods (and their corresponding assumptions). When considering Arizona-specific crash related injury costs, the SEP is estimated to yield about $17 million in annual safety benefits.
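Of the three evaluation methods named above, the comparison-group before-and-after analysis is the simplest to illustrate. The sketch below uses invented crash counts, not the study's data: the comparison sites supply the background trend, and the treated segment is judged against its trend-adjusted counterfactual.

```python
# Ratio-of-ratios (comparison-group BA) estimate of a crash modification factor.
def ba_comparison_effect(treat_before: float, treat_after: float,
                         comp_before: float, comp_after: float) -> float:
    """CMF = observed after-count / counterfactual after-count at the treated site."""
    trend = comp_after / comp_before          # background change at comparison sites
    expected_after = treat_before * trend     # counterfactual for the treated segment
    return treat_after / expected_after

cmf = ba_comparison_effect(treat_before=210, treat_after=160,
                           comp_before=400, comp_after=410)
print(f"crash modification factor ~ {cmf:.2f}")  # <1 suggests a crash reduction
```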
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dittmann, Jason A.; Irwin, Jonathan M.; Charbonneau, David
The MEarth Project is a photometric survey systematically searching the smallest stars near the Sun for transiting rocky planets. Since 2008, MEarth has taken approximately two million images of 1844 stars suspected to be mid-to-late M dwarfs. We have augmented this survey by taking nightly exposures of photometric standard stars and have utilized this data to photometrically calibrate the MEarth system, identify photometric nights, and obtain an optical magnitude with 1.5% precision for each M dwarf system. Each optical magnitude is an average over many years of data, and therefore should be largely immune to stellar variability and flaring. We combine this with trigonometric distance measurements, spectroscopic metallicity measurements, and 2MASS infrared magnitude measurements in order to derive a color–magnitude–metallicity relation across the mid-to-late M dwarf spectral sequence that can reproduce spectroscopic metallicity determinations to a precision of 0.1 dex. We release optical magnitudes and metallicity estimates for 1567 M dwarfs, many of which did not have an accurate determination of either prior to this work. For an additional 277 stars without a trigonometric parallax, we provide an estimate of the distance, assuming solar neighborhood metallicity. We find that the median metallicity for a volume-limited sample of stars within 20 pc of the Sun is [Fe/H] = −0.03 ± 0.008, and that 29/565 of these stars have a metallicity of [Fe/H] = −0.5 or lower, similar to the low-metallicity distribution of nearby G dwarfs. When combined with the results of ongoing and future planet surveys targeting these objects, the metallicity estimates presented here will be important for assessing the significance of any putative planet–metallicity correlation.
Jakobsen, Marie; Kolodziejczyk, Christophe; Klausen Fredslund, Eskild; Poulsen, Peter Bo; Dybro, Lars; Paaske Johnsen, Søren
2017-06-12
Use of oral anticoagulation therapy in patients with atrial fibrillation (AF) involves a trade-off between a reduced risk of ischemic stroke and an increased risk of bleeding events. Different anticoagulation therapies have different safety profiles, and data on the societal costs of both ischemic stroke and bleeding events are necessary for assessing the cost-effectiveness and budgetary impact of different treatment options. To our knowledge, no previous studies have estimated the societal costs of bleeding events in patients with AF. The objective of this study was to estimate the 3-year societal costs of first-incident intracranial, gastrointestinal, and other major bleeding events in Danish patients with AF. The study was an incidence-based cost-of-illness study carried out from a societal perspective and based on data from national Danish registries covering the period 2002-2012. Costs were estimated using propensity score matching and multivariable regression analysis (first-difference OLS) in a cohort design. Average 3-year societal costs attributable to intracranial, gastrointestinal, and other major bleeding events were 27,627, 17,868, and 12,384 EUR per patient, respectively (2015 prices). Existing evidence shows that the corresponding costs of ischemic stroke were 24,084 EUR per patient (2012 prices). The average costs of bleeding events did not differ between patients with AF who were on oral anticoagulation therapy prior to the event and patients who were not. The societal costs attributable to major bleeding events in patients with AF are significant. Intracranial haemorrhages are most costly to society, with average costs of similar magnitude to the costs of ischemic stroke. The average costs of gastrointestinal and other major bleeding events are lower than the costs of intracranial haemorrhages, but still substantial. Knowledge about the relative size of the costs of bleeding events compared to ischemic stroke in patients with AF constitutes valuable evidence for decision-makers in Denmark as well as in other countries.
Sonntag, Diana; Jarczok, Marc N; Ali, Shehzad
2017-09-01
The aim of this study was to quantify the magnitude of lifetime costs of overweight and obesity by socioeconomic status (SES). Differential Costs (DC)-Obesity is a new model that uses time-to-event simulation and the Markov modeling approach to compare lifetime excess costs of overweight and obesity among individuals with low, middle, and high SES. SES was measured by a multidimensional aggregated index based on level of education, occupational class, and income, using longitudinal data from the German Socioeconomic Panel (SOEP). Random-effects meta-analysis was applied to combine estimates of the direct and indirect costs of overweight and obesity. DC-Obesity reveals opposite socioeconomic gradients in the lifetime costs of obesity compared with overweight. Compared to individuals with obesity and high SES, individuals with obesity and low SES had lifetime excess costs that were two times higher (€8,526). In contrast, these costs were 20% higher in groups with overweight and high SES than in groups with overweight and low SES (€2,711). The results of this study indicate that SES may play a pivotal role in designing cost-effective and sustainable interventions to prevent and treat overweight and obesity. DC-Obesity may help public policy planners make informed decisions about obesity programs targeted at vulnerable SES groups. © 2017 The Obesity Society.
Constellation Program Life-cycle Cost Analysis Model (LCAM)
NASA Technical Reports Server (NTRS)
Prince, Andy; Rose, Heidi; Wood, James
2008-01-01
The Constellation Program (CxP) is NASA's effort to replace the Space Shuttle, return humans to the moon, and prepare for a human mission to Mars. The major elements of the Constellation Lunar sortie design reference mission architecture are shown. Unlike the Apollo Program of the 1960's, affordability is a major concern of United States policy makers and NASA management. To measure Constellation affordability, a total ownership cost life-cycle parametric cost estimating capability is required. This capability is being developed by the Constellation Systems Engineering and Integration (SE&I) Directorate, and is called the Lifecycle Cost Analysis Model (LCAM). The requirements for LCAM are based on the need to have a parametric estimating capability in order to do top-level program analysis, evaluate design alternatives, and explore options for future systems. By estimating the total cost of ownership within the context of the planned Constellation budget, LCAM can provide Program and NASA management with the cost data necessary to identify the most affordable alternatives. LCAM is also a key component of the Integrated Program Model (IPM), an SE&I developed capability that combines parametric sizing tools with cost, schedule, and risk models to perform program analysis. LCAM is used in the generation of cost estimates for system level trades and analyses. It draws upon the legacy of previous architecture level cost models, such as the Exploration Systems Mission Directorate (ESMD) Architecture Cost Model (ARCOM) developed for Simulation Based Acquisition (SBA), and ATLAS. LCAM is used to support requirements and design trade studies by calculating changes in cost relative to a baseline option cost. Estimated costs are generally low fidelity to accommodate available input data and available cost estimating relationships (CERs). LCAM is capable of interfacing with the Integrated Program Model to provide the cost estimating capability for that suite of tools.
Dewa, Carolyn S; Hoch, Jeffrey S
2014-06-01
This article estimates the net benefit for a company incorporating a collaborative care model into its return-to-work program for workers on short-term disability related to a mental disorder. Employing a simple decision model, the net benefit and uncertainty were explored. The breakeven point occurs when the average short-term disability episode is reduced by at least 7 days. In addition, 85% of the time, benefits could outweigh costs. Model results and sensitivity analyses indicate that organizational benefits can be greater than the costs of incorporating a collaborative care model into a return-to-work program for workers on short-term disability related to a mental disorder. The results also demonstrate how the probability of a program's effectiveness and the magnitude of its effectiveness are key factors that determine whether the benefits of a program outweigh its costs.
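The decision model described above can be captured in a few lines. The cost and effectiveness numbers below are hypothetical, not the study's parameters; the point is only that with a plausible cost per disability day, the breakeven sits near the 7-day reduction the abstract reports.

```python
# Expected employer net benefit per short-term disability episode under a
# simple decision model (all inputs invented for illustration).
def net_benefit(p_effective: float, days_avoided: float,
                cost_per_day: float, program_cost: float) -> float:
    """Expected net benefit = expected savings from avoided days minus program cost."""
    return p_effective * days_avoided * cost_per_day - program_cost

# Hypothetical: the program costs $1,400 per episode and a disability day costs $200.
for days in (5, 7, 10):
    print(f"{days} days avoided: net benefit = ${net_benefit(1.0, days, 200.0, 1400.0):,.0f}")
# Breakeven at 7 days with these inputs; a probability of effectiveness below 1
# shifts the breakeven upward, which is why both factors matter in the abstract.
```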
Development of weight and cost estimates for lifting surfaces with active controls
NASA Technical Reports Server (NTRS)
Anderson, R. D.; Flora, C. C.; Nelson, R. M.; Raymond, E. T.; Vincent, J. H.
1976-01-01
Equations and methodology were developed for estimating the weight and cost incrementals due to active controls added to the wing and horizontal tail of a subsonic transport airplane. The methods are sufficiently generalized to be suitable for preliminary design. Supporting methodology and input specifications for the weight and cost equations are provided. The weight and cost equations are structured to be flexible in terms of the active control technology (ACT) flight control system specification. In order to present a self-contained package, methodology is also presented for generating ACT flight control system characteristics for the weight and cost equations. Use of the methodology is illustrated.
Artemisinin resistance--modelling the potential human and economic costs.
Lubell, Yoel; Dondorp, Arjen; Guérin, Philippe J; Drake, Tom; Meek, Sylvia; Ashley, Elizabeth; Day, Nicholas P J; White, Nicholas J; White, Lisa J
2014-11-23
Artemisinin combination therapy is recommended as first-line treatment for falciparum malaria across the endemic world and is increasingly relied upon for treating vivax malaria where chloroquine is failing. Artemisinin resistance was first detected in western Cambodia in 2007, and is now confirmed in the Greater Mekong region, raising the spectre of a malaria resurgence that could undo a decade of progress in control, and threaten the feasibility of elimination. The magnitude of this threat has not been quantified. This analysis compares the health and economic consequences of two future scenarios occurring once artemisinin-based treatments are available with high coverage. In the first scenario, artemisinin combination therapy (ACT) is largely effective in the management of uncomplicated malaria and severe malaria is treated with artesunate, while in the second scenario ACT are failing at a rate of 30%, and treatment of severe malaria reverts to quinine. The model is applied to all malaria-endemic countries using their specific estimates for malaria incidence, transmission intensity and GDP. The model describes the direct medical costs for repeated diagnosis and retreatment of clinical failures as well as admission costs for severe malaria. For productivity losses, the conservative friction costing method is used, which assumes a limited economic impact for individuals that are no longer economically active until they are replaced from the unemployment pool. Using conservative assumptions and parameter estimates, the model projects an excess of 116,000 deaths annually in the scenario of widespread artemisinin resistance. The predicted medical costs for retreatment of clinical failures and for management of severe malaria exceed US$32 million per year. Productivity losses resulting from excess morbidity and mortality were estimated at US$385 million for each year during which failing ACT remained in use as first-line treatment. These 'ballpark' figures for the magnitude of the health and economic threat posed by artemisinin resistance add weight to the call for urgent action to detect the emergence of resistance as early as possible and contain its spread from known locations in the Mekong region to elsewhere in the endemic world.
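The friction-cost step of the model lends itself to a one-line sketch: productivity losses are counted only until a worker is replaced from the unemployment pool, which is why the method is described as conservative. All inputs below are hypothetical, not the study's parameters.

```python
# Friction-cost estimate of annual productivity losses from excess mortality.
def friction_cost(excess_deaths: int, friction_days: int,
                  daily_output_usd: float, employment_rate: float) -> float:
    """Annual productivity loss, counting output only over the friction period."""
    return excess_deaths * employment_rate * friction_days * daily_output_usd

# Hypothetical: 90-day friction period, $25/day output, 60% of victims economically active.
print(f"${friction_cost(116_000, 90, 25.0, 0.6):,.0f} per year")
```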
Estimating the lifetime risk of cancer associated with multiple CT scans.
Ivanov, V K; Kashcheev, V V; Chekin, S Yu; Menyaylo, A N; Pryakhin, E A; Tsyb, A F; Mettler, F A
2014-12-01
Multiple CT scans are often done on the same patient resulting in an increased risk of cancer. Prior publications have estimated risks on a population basis and often using an effective dose. Simply adding up the risks from single scans does not correctly account for the survival function. A methodology for estimating personal radiation risks attributed to multiple CT imaging using organ doses is presented in this article. The estimated magnitude of the attributable risk fraction for the possible development of radiation-induced cancer indicates the necessity for strong clinical justification when ordering multiple CT scans.
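To illustrate the point about the survival function (my illustration, not the authors' methodology), the sketch below weights each scan's risk by the probability of being alive at that scan, so later scans contribute less than a naive sum would suggest.

```python
# Survival-weighted combination of per-scan lifetime risks (hypothetical values).
def combined_lifetime_risk(per_scan_risks, survival_to_scan):
    """Return (naive sum, survival-weighted sum) of lifetime attributable risks.

    per_scan_risks   -- lifetime risk from each scan taken in isolation
    survival_to_scan -- probability the patient is alive at each scan date
    """
    naive = sum(per_scan_risks)
    weighted = sum(r * s for r, s in zip(per_scan_risks, survival_to_scan))
    return naive, weighted

naive, weighted = combined_lifetime_risk(
    per_scan_risks=[0.001, 0.001, 0.001],   # hypothetical organ-dose-based risks
    survival_to_scan=[1.00, 0.95, 0.85],    # declining survival between scans
)
print(f"naive sum: {naive:.4f}, survival-weighted: {weighted:.4f}")
```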
Chumney, Elinor C G; Biddle, Andrea K; Simpson, Kit N; Weinberger, Morris; Magruder, Kathryn M; Zelman, William N
2004-01-01
As cost-effectiveness analyses (CEAs) are increasingly used to inform policy decisions, there is a need for more information on how different cost determination methods affect cost estimates and the degree to which the resulting cost-effectiveness ratios (CERs) may be affected. The lack of specificity of diagnosis-related groups (DRGs) could mean that they are ill-suited for costing applications in CEAs. Yet, the implications of using International Classification of Diseases-9th edition (ICD-9) codes or a form of disease-specific risk group stratification instead of DRGs have yet to be clearly documented. To demonstrate the implications of different disease coding mechanisms on costs and the magnitude of error that could be introduced in head-to-head comparisons of resulting CERs. We based our analyses on a previously published Markov model for HIV/AIDS therapies. We used the Healthcare Cost and Utilization Project Nationwide Inpatient Sample (HCUP-NIS) data release 6, which contains all-payer data on hospital inpatient stays from selected states. We added costs for the mean number of hospitalisations, derived from analyses based on either DRG or ICD-9 codes or risk group stratification cost weights, to the standard outpatient and prescription drug costs to yield an estimate of total charges for each AIDS-defining illness (ADI). Finally, we estimated the Markov model three times with the appropriate ADI cost weights to obtain CERs specific to the use of either DRG or ICD-9 codes or risk group. Contrary to expectations, we found that the choice of coding/grouping assumptions that are disease-specific by either DRG codes, ICD-9 codes or risk group resulted in very similar CER estimates for highly active antiretroviral therapy. The large variation in the specific ADI cost weights across the three different coding approaches was especially interesting. However, because no one approach produced consistently higher estimates than the others, the Markov model's weighted cost per event and resulting CERs were remarkably close in value to one another. Although DRG codes are based on broader categories and contain less information than ICD-9 codes, in practice the choice of whether to use DRGs or ICD-9 codes may have little effect on the CEA results in heterogeneous conditions such as HIV/AIDS.
Dlubac, Katherine; Knight, Rosemary; Song, Yi-Qiao; Bachman, Nate; Grau, Ben; Cannia, Jim; Williams, John
2013-01-01
Hydraulic conductivity (K) is one of the most important parameters of interest in groundwater applications because it quantifies the ease with which water can flow through an aquifer material. Hydraulic conductivity is typically measured by conducting aquifer tests or wellbore flow (WBF) logging. Of interest in our research is the use of proton nuclear magnetic resonance (NMR) logging to obtain information about water-filled porosity and pore space geometry, the combination of which can be used to estimate K. In this study, we acquired a suite of advanced geophysical logs, aquifer tests, WBF logs, and sidewall cores at a field site in Lexington, Nebraska, which is underlain by the High Plains aquifer. We first used two empirical equations developed for petroleum applications to predict K from NMR logging data: the Schlumberger-Doll Research equation (KSDR) and the Timur-Coates equation (KT-C), with the standard empirical constants determined for consolidated materials. We upscaled our NMR-derived K estimates to the scale of the WBF-logging K estimates (KWBF-logging) for comparison. All the upscaled KT-C estimates were within an order of magnitude of KWBF-logging, and all of the upscaled KSDR estimates were within 2 orders of magnitude of KWBF-logging. We optimized the fit between the upscaled NMR-derived K and KWBF-logging estimates to determine a set of site-specific empirical constants for the unconsolidated materials at our field site. We conclude that reliable estimates of K can be obtained from NMR logging data, thus providing an alternate method for obtaining estimates of K at high levels of vertical resolution.
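The two empirical transforms named above have well-known textbook forms; the sketch below uses generic petroleum-industry-style constants and invented inputs, which is precisely what the study argues must be re-fit for unconsolidated aquifer materials.

```python
# Generic forms of the SDR and Timur-Coates permeability transforms.
# Constants b and c are illustrative placeholders, not the study's fitted values.
def k_sdr(phi: float, t2ml_s: float, b: float = 4.0e-3) -> float:
    """SDR-style estimate: K = b * phi**4 * T2ML**2 (T2ML = mean-log T2 relaxation time)."""
    return b * phi**4 * t2ml_s**2

def k_timur_coates(phi: float, ffi: float, bvi: float, c: float = 1.0e-2) -> float:
    """Timur-Coates-style estimate: K = c * phi**4 * (FFI/BVI)**2 (free/bound fluid ratio)."""
    return c * phi**4 * (ffi / bvi) ** 2

# Hypothetical interval: 30% water-filled porosity, 0.25 s mean-log T2,
# free/bound fluid volumes of 0.25 and 0.05.
print(k_sdr(phi=0.30, t2ml_s=0.25))
print(k_timur_coates(phi=0.30, ffi=0.25, bvi=0.05))
```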
System analysis of alcohol countermeasures
DOT National Transportation Integrated Search
1976-01-01
The purpose of the contract was to conduct a benefit/cost analysis of seven alcohol safety countermeasures in order to determine the potential for successful implementation in terms of the estimated cost/effectiveness of each countermeasure and to pr...
John B. Grantham; Eldon Estep; John M. Pierovich; Harold Tarkow; Thomas C. Adams
1974-01-01
Results are reported of a preliminary investigation of feasibility of using wood residue to meet energy and raw material needs in the Pacific Coast States. Magnitude of needs was examined and volume of logging-residue and unused mill residue was estimated. Costs of obtaining and preprocessing logging residue for energy and pulp and particle board raw material were...
Intertemporal consumption with directly measured welfare functions and subjective expectations
Kapteyn, Arie; Kleinjans, Kristin J.; van Soest, Arthur
2010-01-01
Euler equation estimation of intertemporal consumption models requires many, often unverifiable assumptions. These include assumptions on expectations and preferences. We aim at reducing some of these requirements by using direct subjective information on respondents’ preferences and expectations. The results suggest that individually measured welfare functions and expectations have predictive power for the variation in consumption across households. Furthermore, estimates of the intertemporal elasticity of substitution based on the estimated welfare functions are plausible and of a similar order of magnitude as other estimates found in the literature. The model favored by the data only requires cross-section data for estimation. PMID:20442798
Placebo effect of medication cost in Parkinson disease: a randomized double-blind study.
Espay, Alberto J; Norris, Matthew M; Eliassen, James C; Dwivedi, Alok; Smith, Matthew S; Banks, Christi; Allendorfer, Jane B; Lang, Anthony E; Fleck, David E; Linke, Michael J; Szaflarski, Jerzy P
2015-02-24
To examine the effect of cost, a traditionally "inactive" trait of intervention, as contributor to the response to therapeutic interventions. We conducted a prospective double-blind study in 12 patients with moderate to severe Parkinson disease and motor fluctuations (mean age 62.4 ± 7.9 years; mean disease duration 11 ± 6 years) who were randomized to a "cheap" or "expensive" subcutaneous "novel injectable dopamine agonist" placebo (normal saline). Patients were crossed over to the alternate arm approximately 4 hours later. Blinded motor assessments in the "practically defined off" state, before and after each intervention, included the Unified Parkinson's Disease Rating Scale motor subscale, the Purdue Pegboard Test, and a tapping task. Measurements of brain activity were performed using a feedback-based visual-motor associative learning functional MRI task. Order effect was examined using stratified analysis. Although both placebos improved motor function, benefit was greater when patients were randomized first to expensive placebo, with a magnitude halfway between that of cheap placebo and levodopa. Brain activation was greater upon first-given cheap but not upon first-given expensive placebo or by levodopa. Regardless of order of administration, only cheap placebo increased activation in the left lateral sensorimotor cortex and other regions. Expensive placebo significantly improved motor function and decreased brain activation in a direction and magnitude comparable to, albeit less than, levodopa. Perceptions of cost are capable of altering the placebo response in clinical studies. This study provides Class III evidence that perception of cost is capable of influencing motor function and brain activation in Parkinson disease. © 2015 American Academy of Neurology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1985-10-01
The purpose of this task is to determine if environmental contamination has resulted from waste disposal practices at Otis ANGB MA; to provide estimates of the magnitude and extent of contamination, should contamination be found; to identify potential environmental consequences of migrating pollutants; to identify any additional investigations and their attendant costs necessary to identify the magnitude, extent and direction of movement of discovered contaminants. Partial contents include: well and test pit logs; site safety plan; laboratory analytical methods and quality assurance; and Federal and state drinking water and human health standards applicable in Massachusetts.
Cumulus cloud model estimates of trace gas transports
NASA Technical Reports Server (NTRS)
Garstang, Michael; Scala, John; Simpson, Joanne; Tao, Wei-Kuo; Thompson, A.; Pickering, K. E.; Harris, R.
1989-01-01
Draft structures in convective clouds are examined with reference to the results of the NASA Amazon Boundary Layer Experiments (ABLE IIa and IIb) and calculations based on a multidimensional time dependent dynamic and microphysical numerical cloud model. It is shown that some aspects of the draft structures can be calculated from measurements of the cloud environment. Estimated residence times in the lower regions of the cloud based on surface observations (divergence and vertical velocities) are within the same order of magnitude (about 20 min) as model trajectory estimates.
The model for estimation production cost of embroidery handicraft
NASA Astrophysics Data System (ADS)
Nofierni; Sriwana, IK; Septriani, Y.
2017-12-01
The embroidery industry is a type of micro-industry that produces embroidery handicrafts. These industries are emerging in some rural areas of Indonesia. Embroidery products such as scarves and clothes display the cultural value of a particular region. The owner of an enterprise must calculate the cost of production before deciding how many product orders to accept from customers. A calculation approach to production cost analysis is needed to assess the feasibility of each incoming order. This study proposes an expert system (ES) design to improve production management in the embroidery industry, using a fuzzy inference system to estimate production cost. The research was conducted based on surveys and knowledge acquisition from stakeholders of the embroidery handicraft supply chain at Bukittinggi, West Sumatera, Indonesia. The fuzzy inputs are product quality, design complexity, and the working hours required; the model's output is useful for managing production cost in embroidery production.
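As a concrete illustration of the kind of fuzzy inference described above, the sketch below implements a zero-order Sugeno system over the three stated inputs. The membership breakpoints, rule base, and cost anchors are illustrative assumptions, not values from the study, which derives its rule base from expert knowledge acquisition.

```python
import itertools
import numpy as np

# Membership functions grading a normalized input on a 0-10 scale.
def low(x):    return float(np.clip((5 - x) / 5, 0, 1))
def medium(x): return float(np.clip(1 - abs(x - 5) / 5, 0, 1))
def high(x):   return float(np.clip((x - 5) / 5, 0, 1))

LEVELS = (low, medium, high)
# Illustrative cost anchors (arbitrary currency units) for the consequents
# "cheap", "standard", "premium" -- assumed values, not from the paper.
COST_ANCHORS = (50.0, 100.0, 200.0)

def estimate_cost(quality, complexity, hours):
    """Zero-order Sugeno fuzzy estimate of embroidery production cost.

    Each rule 'IF quality is A AND complexity is B AND hours is C' fires
    with strength min(mu_A, mu_B, mu_C); its consequent is the cost anchor
    of the rounded mean level index. The crisp output is the
    firing-strength-weighted average of the rule consequents.
    """
    num = den = 0.0
    for i, j, k in itertools.product(range(3), repeat=3):
        strength = min(LEVELS[i](quality), LEVELS[j](complexity), LEVELS[k](hours))
        if strength > 0.0:
            num += strength * COST_ANCHORS[round((i + j + k) / 3)]
            den += strength
    return num / den

# High quality, fairly complex design, moderate labor: skews toward "premium".
print(f"{estimate_cost(quality=8, complexity=7, hours=6):.1f}")
```

A Mamdani system with fuzzy consequents and defuzzification, as is common in expert-system tooling, would follow the same pattern.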
Dense motion estimation using regularization constraints on local parametric models.
Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein
2004-11-01
This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide the additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a regularization mean. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude, motion discontinuities, and produces accurate piecewise-smooth motion fields.
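Generically, the class of objectives described here couples a robust data term per patch with robust penalties between the model parameters of neighboring patches. A two-frame sketch of such an energy (the paper's three-frame formulation adds a backward data term balanced by the direction field) is:

```latex
E(\{\theta_i\}) \;=\; \sum_{i}\,\sum_{\mathbf{x} \in P_i}
\rho_d\!\left( I\big(\mathbf{x} + \mathbf{u}(\mathbf{x};\theta_i),\, t+1\big) - I(\mathbf{x}, t) \right)
\;+\; \lambda \sum_{(i,j) \in \mathcal{N}} \rho_s\!\left( \lVert \theta_i - \theta_j \rVert \right)
```

Here P_i are the intensity-segmented patches, u(x; θ_i) the parametric flow of patch i, N the patch adjacency relation, ρ_d and ρ_s robust functions, and λ a regularization weight; the notation is ours, and the paper's exact terms differ in detail.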
NASA Astrophysics Data System (ADS)
Gatti, Davide; Güttler, Andreas; Frohnapfel, Bettina; Tropea, Cameron
2015-05-01
In the present work, wall oscillations for turbulent skin-friction drag reduction are realized in an air turbulent duct flow by means of spanwise-oscillating active surfaces based on dielectric electroactive polymers. The actuator system produces spanwise wall velocity oscillations of 820 mm/s semi-amplitude at its resonance frequency of 65 Hz while consuming an active power of a few hundred milliwatts. The actuators achieved a maximum integral drag reduction of 2.4%. The maximum net power saving, i.e., the budget of the power benefit and the cost of the control, was measured for the first time with wall oscillations. Though negative, the net power saving is orders of magnitude higher than what has been estimated in previous studies. Two new direct numerical simulations of turbulent channel flow show that the finite size of the actuator only partially explains the lower values of integral drag reduction typically achieved in laboratory experiments compared to numerical simulations.
Predictions of first passage times in sparse discrete fracture networks using graph-based reductions
Hyman, Jeffrey De'Haven; Hagberg, Aric Arild; Mohd-Yusof, Jamaludin; ...
2017-07-10
Here, we present a graph-based methodology to reduce the computational cost of obtaining first passage times through sparse fracture networks. We also derive graph representations of generic three-dimensional discrete fracture networks (DFNs) using the DFN topology and flow boundary conditions. Subgraphs corresponding to the union of the k shortest paths between the inflow and outflow boundaries are identified and transport on their equivalent subnetworks is compared to transport through the full network. The number of paths included in the subgraphs is based on the scaling behavior of the number of edges in the graph with the number of shortest paths. First passage times through the subnetworks are in good agreement with those obtained in the full network, both for individual realizations and in distribution. We obtain accurate estimates of first passage times with an order of magnitude reduction of CPU time and mesh size using the proposed method.
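The subgraph-construction step lends itself to a compact sketch. The following is an illustrative implementation with networkx, assuming the DFN has already been mapped to a weighted graph with virtual inflow and outflow nodes; the toy graph, weights, and node names are assumptions for demonstration, not the authors' code.

```python
import itertools
import networkx as nx

def k_shortest_path_subgraph(G, source, target, k):
    """Union of the k shortest simple paths between source and target.

    shortest_simple_paths yields paths in order of increasing (weighted)
    length; transport would then be simulated only on the fracture
    subnetwork corresponding to this subgraph.
    """
    paths = itertools.islice(
        nx.shortest_simple_paths(G, source, target, weight="weight"), k)
    nodes = set(itertools.chain.from_iterable(paths))
    return G.subgraph(nodes).copy()

# Toy example: a weighted graph standing in for a DFN, with virtual
# "in"/"out" nodes representing the inflow and outflow boundaries.
G = nx.Graph()
G.add_weighted_edges_from([
    ("in", "a", 1.0), ("a", "out", 1.0),
    ("in", "b", 2.0), ("b", "out", 2.0),
    ("in", "c", 5.0), ("c", "b", 1.0),
])
H = k_shortest_path_subgraph(G, "in", "out", k=2)
print(sorted(H.nodes()))  # ['a', 'b', 'in', 'out']
```

In the paper, k is chosen from the scaling of edge count with the number of shortest paths; here it is simply fixed for illustration.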
NASA Astrophysics Data System (ADS)
Hannel, Mark D.; Abdulali, Aidan; O'Brien, Michael; Grier, David G.
2018-06-01
Holograms of colloidal particles can be analyzed with the Lorenz-Mie theory of light scattering to measure individual particles' three-dimensional positions with nanometer precision while simultaneously estimating their sizes and refractive indexes. Extracting this wealth of information begins by detecting and localizing features of interest within individual holograms. Conventionally approached with heuristic algorithms, this image analysis problem can be solved faster and more generally with machine-learning techniques. We demonstrate that two popular machine-learning algorithms, cascade classifiers and deep convolutional neural networks (CNN), can solve the feature-localization problem orders of magnitude faster than current state-of-the-art techniques. Our CNN implementation localizes holographic features precisely enough to bootstrap more detailed analyses based on the Lorenz-Mie theory of light scattering. The wavelet-based Haar cascade proves to be less precise, but is so computationally efficient that it creates new opportunities for applications that emphasize speed and low cost. We demonstrate its use as a real-time targeting system for holographic optical trapping.
Thermoelectric bolometers based on silicon membranes
NASA Astrophysics Data System (ADS)
Varpula, Aapo; Timofeev, Andrey V.; Shchepetov, Andrey; Grigoras, Kestutis; Ahopelto, Jouni; Prunnila, Mika
2017-05-01
State-of-the-art high-performance IR sensing and imaging systems utilize highly expensive photodetector technology, which requires exotic and toxic materials and cooling. Cost-effective alternatives, uncooled bolometer detectors, are widely used in commercial long-wave IR (LWIR) systems. Compared to the cooled detectors they are much slower and have approximately an order of magnitude lower detectivity in the LWIR. We present uncooled bolometer technology which is foreseen to be capable of narrowing the gap between the cooled and uncooled technologies. The proposed technology is based on ultra-thin silicon membranes, the thermal conductivity and electrical properties of which can be controlled by membrane thickness and doping, respectively. The thermal signal is transduced into electric voltage using a thermocouple consisting of highly doped n- and p-type Si beams. Reducing the thickness of the Si membrane improves the performance (i.e., sensitivity and speed), as the thermal conductivity and thermal mass of the Si membrane decrease with decreasing thickness. Based on experimental data we estimate the performance of these uncooled thermoelectric bolometers.
Evolving Markets for Commercial, Civil, and Military Services
NASA Astrophysics Data System (ADS)
Kaplan, Marshall H.
2003-01-01
Recent commercial failures in the LEO market, declining budgets for research, and other political factors have made it difficult for entrepreneurs and financial institutions to realize returns from investments in new space transportation systems and satellites. This paper explores the major factors impacting future markets that make use of our space infrastructure. At the top of the list is the high cost of space access. This has been extremely expensive, and will continue to be expensive as long as space access remains low on the nation's priority list. While launch prices have generally been reduced over the past several years, they remain well above the elastic range of supply and demand. Our best estimate is that it will take an order of magnitude reduction to significantly expand the market. Projections about market segments that will represent future winners in space and launch demand forecasts are presented. Future markets, outside of traditional strongholds, are explored, including a long-term view of new commercial space activities, conventional and ambitious future/futuristic activities, and related business aspects.
Communications Support for National Flight Data Center Information System.
1980-11-01
functions: establishment and termination, message transfer, and retransmission of blocks. Establishment and Termination: the establishment procedure... relate to hardware components, transmission facilities and cost relationships. The costs are grouped into one-time and recurring costs. L.2 HARDWARE... the NADIN switching center in Atlanta. The purchase and installation costs are estimated to be $1000. L.4 COST RELATIONSHIPS In order to accurately...
NASA Astrophysics Data System (ADS)
Hanna, Steven R.; Young, George S.
2017-01-01
What do the terms "top-down", "inverse", "backwards", "adjoint", "sensor data fusion", "receptor", "source term estimation (STE)", to name several appearing in the current literature, have in common? These varied terms are used by different disciplines to describe the same general methodology - the use of observations of air pollutant concentrations and knowledge of wind fields to identify air pollutant source locations and/or magnitudes. Academic journals are publishing increasing numbers of papers on this topic. Examples of scenarios related to this growing interest, ordered from small scale to large scale, are: use of real-time samplers to quickly estimate the location of a toxic gas release by a terrorist at a large public gathering (e.g., Haupt et al., 2009);
The direct and indirect costs of managing chronic obstructive pulmonary disease in Greece.
Souliotis, Kyriakos; Kousoulakou, Hara; Hillas, Georgios; Tzanakis, Nikos; Toumbis, Michalis; Vassilakopoulos, Theodoros
2017-01-01
COPD is associated with significant economic burden. The objective of this study was to explore the direct and indirect costs associated with COPD and identify the key cost drivers of disease management in Greece. A Delphi panel of Greek pulmonologists was convened, which aimed at eliciting local COPD treatment patterns and resource use. Resource use was translated into costs using official health insurance tariffs and Diagnosis-Related Groups (DRGs). In addition, absenteeism and caregivers' costs were recorded in order to quantify indirect COPD costs. The total cost of managing COPD per patient per year was estimated at €4,730, with direct (medical and nonmedical) and indirect costs accounting for 62.5% and 37.5%, respectively. COPD exacerbations were responsible for 32% of total costs (€1,512). Key exacerbation-related cost drivers were hospitalization (€830) and intensive care unit (ICU) admission costs (€454), jointly accounting for 85% of total exacerbation costs. Annual maintenance-phase costs were estimated at €835, with pharmaceutical treatment accounting for 77% (€639.9). Patient time costs were estimated at €146 per year. The average number of sick days per year was estimated at 16.9, resulting in productivity losses of €968. Caregivers' costs were estimated at €806 per year. The management of COPD in Greece is associated with intensive resource use and significant economic burden. Exacerbations and productivity losses are the key cost drivers. Cost containment policies should focus on prioritizing treatments that increase patient compliance, as these can lead to reduction of exacerbations, longer maintenance phases, and thus lower costs.
48 CFR 1352.216-76 - Placement of orders.
Code of Federal Regulations, 2010 CFR
2010-10-01
... price or estimated cost or fee; (4) Delivery or performance date; (5) Place of delivery or performance... contact information for the DOC task and delivery order ombudsman is ____. (End of clause) [75 FR 10570...
Economic burden made celiac disease an expensive and challenging condition for Iranian patients.
Pourhoseingholi, Mohamad Amin; Rostami-Nejad, Mohammad; Barzegar, Farnoush; Rostami, Kamran; Volta, Umberto; Sadeghi, Amir; Honarkar, Zahra; Salehi, Niloofar; Asadzadeh-Aghdaei, Hamid; Baghestani, Ahmad Reza; Zali, Mohammad Reza
2017-01-01
The aim of this study was to estimate the economic burden of celiac disease (CD) in Iran. The assessment of the burden of CD has become an important primary or secondary outcome measure in clinical and epidemiologic studies. Information regarding medical costs and gluten-free diet (GFD) costs was gathered using questionnaires and checklists offered to the selected patients with CD. The data included the direct medical cost (including doctor visits, hospitalization, clinical test examinations, endoscopies, etc.), GFD cost, and lost productivity cost (as the indirect cost) estimated for each CD patient. The factors used for cost estimation included the frequency of health resource utilization and the gluten-free diet basket. Purchasing Power Parity dollars (PPP$) were used in order to make inter-country comparisons. A total of 213 celiac patients entered this study. The mean (standard deviation) total cost per patient per year was 3377 (1853) PPP$, comprising direct medical cost, GFD cost, and lost productivity cost per patient per year. The mean (standard deviation) medical cost and GFD cost were 195 (128) PPP$ and 932 (734) PPP$, respectively. The total costs of CD were significantly higher for males; GFD cost and total cost were also higher for unmarried patients. In conclusion, our estimate of the economic burden of CD indicates that CD patients face substantial expenses that might not be affordable for many of them. The estimated economic burden may put these patients at high risk for dietary neglect, increasing the risk of long-term complications.
Retiree out-of-pocket healthcare spending: a study of consumer expectations and policy implications.
Hoffman, Allison K; Jackson, Howell E
2013-01-01
Even though most American retirees benefit from Medicare coverage, a mounting body of research predicts that many will face large and increasing out-of-pocket expenditures for healthcare costs in retirement and that many already struggle to finance these costs. It is unclear, however, whether the general population understands the likely magnitude of these out-of-pocket expenditures well enough to plan for them effectively. This study is the first comprehensive examination of Americans' expectations regarding their out-of-pocket spending on healthcare in retirement. We surveyed over 1700 near retirees and retirees to assess their expectations regarding their own spending and then compared their responses to experts' estimates. Our main findings are twofold. First, overall expectations of out-of-pocket spending are mixed. While a significant proportion of respondents estimated out-of-pocket costs in retirement at or above expert estimates of what the typical retiree will spend, a disproportionate number estimated their future spending substantially below what experts view as likely. Estimates by members of some demographic subgroups, including women and younger respondents, deviated relatively further from the experts' estimates. Second, respondents consistently misjudged spending uncertainty. In particular, respondents significantly underestimated how much individual health experience and changes in government policy can affect individual out-of-pocket spending. We discuss possible policy responses, including efforts to improve financial planning and ways to reduce unanticipated financial risk through reform of health insurance regulation.
Economic tools to promote transparency and comparability in the Paris Agreement
NASA Astrophysics Data System (ADS)
Aldy, Joseph; Pizer, William; Tavoni, Massimo; Reis, Lara Aleluia; Akimoto, Keigo; Blanford, Geoffrey; Carraro, Carlo; Clarke, Leon E.; Edmonds, James; Iyer, Gokul C.; McJeon, Haewon C.; Richels, Richard; Rose, Steven; Sano, Fuminori
2016-11-01
The Paris Agreement culminates a six-year transition towards an international climate policy architecture based on parties submitting national pledges every five years. An important policy task will be to assess and compare these contributions. We use four integrated assessment models to produce metrics of Paris Agreement pledges, and show differentiated effort across countries: wealthier countries pledge to undertake greater emission reductions with higher costs. The pledges fall in the lower end of the distributions of the social cost of carbon and the cost-minimizing path to limiting warming to 2 °C, suggesting insufficient global ambition in light of leaders’ climate goals. Countries’ marginal abatement costs vary by two orders of magnitude, illustrating that large efficiency gains are available through joint mitigation efforts and/or carbon price coordination. Marginal costs rise almost proportionally with income, but full policy costs reveal more complex regional patterns due to terms of trade effects.
Economic Burden of Smoking in Iran: A Prevalence-Based Annual Cost Approach
Rezaei, Satar; Matin, Behzad Karami; Hajizadeh, Mohammad; Bazyar, Mohammad; Sari, Ali Akbari
2017-01-01
Objectives: The burden of smoking on the health system and society is significant. The current study aimed to estimate the annual direct and indirect costs of smoking in Iran for the year 2014. Methods: A prevalence-based disease-specific approach was used to determine costs associated with the three most common smoking-related diseases: lung cancer (LC), chronic obstructive pulmonary disease (COPD) and ischaemic heart disease (IHD). Data on healthcare utilization were obtained from an original survey, hospital records and questionnaires. The number of deaths was extracted from the Global Burden of Disease study (GBD). The human capital approach was applied to estimate the costs of morbidity and mortality due to smoking-related diseases, classified as direct (hospitalization, outpatient and non-medical costs) and indirect (mortality and morbidity). Results: The total economic cost of the three most common smoking-attributable diseases in Iran was US$1.46 billion in 2014, including US$1.05 billion (71.7%) in indirect and US$0.41 billion (28.3%) in direct costs. Direct costs of the three smoking-related diseases accounted for 1.6% of total healthcare expenditures, and total costs were about 0.26% of Iran's gross domestic product (GDP) in 2014. Conclusions: Our study indicated that smoking places a substantial economic burden on Iranian society. Therefore, sustained smoking cessation interventions and tobacco control policies are required to reduce the magnitude and extent of smoking-attributable costs in Iran. PMID:29072438
Global economic impacts of climate variability and change during the 20th century.
Estrada, Francisco; Tol, Richard S J; Botzen, Wouter J W
2017-01-01
Estimates of the global economic impacts of observed climate change during the 20th century obtained by applying five impact functions of different integrated assessment models (IAMs) are separated into their main natural and anthropogenic components. The estimates of the costs that can be attributed to natural variability factors and to the anthropogenic intervention with the climate system in general tend to show that: 1) during the first half of the century, the amplitude of the impacts associated with natural variability is considerably larger than that produced by anthropogenic factors, and the effects of natural variability fluctuated between being negative and positive. These non-monotonic impacts are mostly determined by the low-frequency variability and the persistence of the climate system; 2) IAMs do not agree on the sign (nor on the magnitude) of the impacts of anthropogenic forcing but indicate that they steadily grew over the first part of the century, rapidly accelerated since the mid-1970s, and decelerated during the first decade of the 21st century. This deceleration is accentuated by the existence of interaction effects between natural variability and natural and anthropogenic forcing. The economic impacts of anthropogenic forcing range in the tenths of a percent of world GDP by the end of the 20th century; 3) the impacts of natural forcing are about one order of magnitude lower than those associated with anthropogenic forcing and are dominated by the solar forcing; 4) the interaction effects between natural and anthropogenic factors can importantly modulate how impacts actually occur, at least for moderate increases in external forcing. Human activities became dominant drivers of the estimated economic impacts at the end of the 20th century, producing larger impacts than those of low-frequency natural variability. Some of the uses and limitations of IAMs are discussed. PMID:28212384
Ginsberg, Gary M; Kaliner, Ehud; Grotto, Itamar
2016-01-01
Worldwide, ambient air pollution accounts for around 3.7 million deaths annually. Measuring the burden of disease is important not just for advocacy but is also a first step towards carrying out a full cost-utility analysis in order to prioritise technological interventions that are available to reduce air pollution (and subsequent morbidity and mortality) from industrial, power generating and vehicular sources. We calculated the average national exposure to particulate matter less than 2.5 μm in diameter (PM2.5) by weighting readings from 52 (non-roadside) monitoring stations by the population of the catchment area around each station. The PM2.5 exposure level was then multiplied by the gender- and cause-specific (Acute Lower Respiratory Infections, Asthma, Circulatory Diseases, Coronary Heart Failure, Chronic Obstructive Pulmonary Disease, Diabetes, Ischemic Heart Disease, Lung Cancer, Low Birth Weight, Respiratory Diseases and Stroke) relative risks and the national age-, cause- and gender-specific mortality (and hospital utilisation, which included neuro-degenerative disorders) rates to arrive at the estimated mortality and hospital days attributable to ambient PM2.5 pollution in Israel in 2015. We utilised a WHO spreadsheet model, which was expanded to include relative risks (based on more recent meta-analyses) of subsets of other diagnoses in two additional models. Mortality estimates from the three models were 1609, 1908 and 2253, respectively, in addition to 184,000, 348,000 and 542,000 days of hospitalisation in general hospitals. Total costs of PM2.5 pollution (including premature burial costs) amounted to $544 million, $1030 million and $1749 million respectively (or 0.18%, 0.35% and 0.59% of GNP). These estimates are subject to the caveat that they are based on exposure data from a limited number of non-randomly sited stations. Nevertheless, the mortality, morbidity and monetary burden of disease attributable to air pollution from particulate matter in Israel is of sufficient magnitude to warrant the consideration and prioritisation of technological interventions that are available to reduce air pollution from industrial, power generating and vehicular sources. The accuracy of our burden estimates would be improved if more precise estimates of population exposure become available in the future.
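The attributable-burden arithmetic in such spreadsheet models typically takes the following form (a generic sketch of the WHO-style calculation under a log-linear concentration-response assumption, not the exact equations of this study):

```latex
RR(c) = \exp\!\big(\beta\,(c - c_0)\big), \qquad
PAF = \frac{RR(c) - 1}{RR(c)}, \qquad
M_{\mathrm{attr}} = PAF \times M_{\mathrm{obs}}
```

Here c is the population-weighted PM2.5 exposure, c_0 a counterfactual (clean-air) concentration, β a cause-specific concentration-response coefficient from meta-analyses, and M_obs the observed age-, gender- and cause-specific mortality or hospital-day count.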
A Bayesian perspective on magnitude estimation.
Petzschner, Frederike H; Glasauer, Stefan; Stephan, Klaas E
2015-05-01
Our representation of the physical world requires judgments of magnitudes, such as loudness, distance, or time. Interestingly, magnitude estimates are often not veridical but subject to characteristic biases. These biases are strikingly similar across different sensory modalities, suggesting common processing mechanisms that are shared by different sensory systems. However, the search for universal neurobiological principles of magnitude judgments requires guidance by formal theories. Here, we discuss a unifying Bayesian framework for understanding biases in magnitude estimation. This Bayesian perspective enables a re-interpretation of a range of established psychophysical findings, reconciles seemingly incompatible classical views on magnitude estimation, and can guide future investigations of magnitude estimation and its neurobiological mechanisms in health and in psychiatric diseases, such as schizophrenia.
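In the simplest Gaussian instance of such a framework, the characteristic central-tendency bias falls out directly (a textbook sketch, not the paper's full model). With a prior μ ~ N(μ_p, σ_p²) over the magnitude and a noisy measurement m ~ N(μ, σ_m²), the posterior mean is

```latex
\hat{\mu} \;=\; \frac{\sigma_m^2\,\mu_p + \sigma_p^2\,m}{\sigma_p^2 + \sigma_m^2}
\;=\; w\,\mu_p + (1 - w)\,m, \qquad
w = \frac{\sigma_m^2}{\sigma_p^2 + \sigma_m^2}
```

so estimates are pulled toward the prior mean: small magnitudes are overestimated, large ones underestimated, and the bias grows with sensory noise σ_m², matching the regression effects observed across modalities.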
Limitations and opportunities for the social cost of carbon (Invited)
NASA Astrophysics Data System (ADS)
Rose, S. K.
2010-12-01
Estimates of the marginal value of carbon dioxide, the social cost of carbon (SCC), were recently adopted by the U.S. Government in order to satisfy requirements to value estimated GHG changes of new federal regulations. However, the development and use of SCC estimates of avoided climate change impacts come with significant challenges and controversial decisions. Fortunately, economics can provide some guidance for conceptually appropriate estimates. At the same time, economics defaults to a benefit-cost decision framework to identify socially optimal policies. However, not all current policy decisions are benefit-cost based, depend on monetized information, or even have the same threshold for information. While a conceptually appropriate SCC is a useful metric, how far can we take it? This talk discusses potential applications of the SCC, limitations based on the state of research and methods, as well as opportunities for, among other things, consistency with climate risk management and research and decision-making tools.
Propulsion Options for Primary Thrust and Attitude Control of Microspacecraft
NASA Technical Reports Server (NTRS)
deGroot, W. A.
1998-01-01
Order-of-magnitude decreases in the size of scientific satellites and spacecraft could provide concurrent decreases in mission costs because of lower launch and fabrication costs. Although many subsystems are amenable to dramatic size reductions, miniaturization of the propulsion subsystems is not straightforward. There is a range of requirements for both primary and attitude control propulsion, dictated by mission requirements, satellite size, and power restrictions. Many of the established propulsion technologies cannot currently be applied to microspacecraft. Because of this, micro-electromechanical systems (MEMS) fabrication technology is being explored as a path for miniaturization.
Wilking, Nils; Wilking, Ulla; Jönsson, Bengt
2014-06-01
Cancer is a major burden to the health care system, presently mainly in developed countries, but it is rapidly becoming a problem of similar magnitude in developing countries. Cancer ranks number two or three in Europe when measured in loss of "good years of life". The direct costs of cancer are estimated to be around 50% of total health care costs, and of these costs a major part is linked to cancer drugs. With the ongoing revolution in the understanding of cancer and the development of an increasing number of new, but often very costly, drugs, the health care systems in all parts of the world need a systematic way of evaluating new cancer drugs. Health technology assessment (HTA) now plays a major role in many parts of Europe. HTA focuses on determining the value of new innovations in order to balance the allocation of health care resources in a fair and equal way. This paper reviews the HTA process in general and for cancer drugs specifically. The key findings are that cancer drugs must be evaluated in a similar way as other health care technologies. One must, however, take into account that cancer drugs are often approved with a high level of uncertainty. Thus, it is of key importance that not only clinical efficacy, i.e., the effect in pivotal clinical trials, is taken into account, but also that follow-up studies are conducted after regulatory approval to properly measure population-based effects [clinical effectiveness (CLE)].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, Jochem W; Laird, Daniel; Costello, Ronan
This paper presents a comparative assessment of three fundamentally different wave energy converter technology development trajectories. The three technology development trajectories are expressed and visualised as a function of technology readiness levels and technology performance levels. The assessment shows that development trajectories that initially prioritize technology readiness over technology performance are likely to require twice the development time, consume three times the development cost, and carry a risk of technical or commercial failure one order of magnitude higher than development trajectories that initially prioritize technology performance over technology readiness.
Incorporating psychological influences in probabilistic cost analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kujawski, Edouard; Alvaro, Mariana; Edwards, William
2004-01-08
Today's typical probabilistic cost analysis assumes an "ideal" project that is devoid of the human and organizational considerations that heavily influence the success and cost of real-world projects. In the real world, "Money Allocated Is Money Spent" (MAIMS principle); cost underruns are rarely available to protect against cost overruns while task overruns are passed on to the total project cost. Realistic cost estimates therefore require a modified probabilistic cost analysis that simultaneously models the cost management strategy, including budget allocation. Psychological influences such as overconfidence in assessing uncertainties, and dependencies among cost elements and risks, are other important considerations that are generally not addressed. It should then be no surprise that actual project costs often exceed the initial estimates and projects are delivered late and/or with a reduced scope. This paper presents a practical probabilistic cost analysis model that incorporates recent findings in human behavior and judgment under uncertainty, dependencies among cost elements, the MAIMS principle, and project management practices. Uncertain cost elements are elicited from experts using the direct fractile assessment method and fitted with three-parameter Weibull distributions. The full correlation matrix is specified in terms of two parameters that characterize correlations among cost elements in the same and in different subsystems. The analysis is readily implemented using standard Monte Carlo simulation tools such as @Risk and Crystal Ball. The analysis of a representative design and engineering project substantiates that today's typical probabilistic cost analysis is likely to severely underestimate project cost for probability-of-success values of importance to contractors and procuring activities. The proposed approach provides a framework for developing a viable cost management strategy for allocating baseline budgets and contingencies. Given the scope and magnitude of the cost-overrun problem, the benefits are likely to be significant.
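A minimal numerical sketch of two ingredients described above, correlated cost elements and the MAIMS principle, is given below. The Weibull parameters, correlation value, and budget figures are illustrative assumptions rather than elicited values, and a Gaussian copula stands in for the paper's two-parameter correlation structure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sim = 100_000

# Three cost elements: (shape c, loc, scale) of three-parameter Weibulls,
# which in the approach above would be fitted to expert fractile
# assessments (values here are made up for illustration).
elements = [(1.8, 8.0, 4.0), (2.2, 15.0, 6.0), (1.5, 5.0, 3.0)]
budgets = np.array([10.0, 19.0, 7.0])   # allocated baseline budgets

# Correlated uniforms via a Gaussian copula, pairwise correlation 0.6.
rho = 0.6
cov = np.full((3, 3), rho) + (1 - rho) * np.eye(3)
z = rng.multivariate_normal(np.zeros(3), cov, size=n_sim)
u = stats.norm.cdf(z)

# Transform to Weibull marginals: the underlying cost of each element.
costs = np.column_stack([
    stats.weibull_min.ppf(u[:, i], c, loc=loc, scale=scale)
    for i, (c, loc, scale) in enumerate(elements)])

# MAIMS: money allocated is money spent, so underruns are not returned;
# each element costs at least its allocated budget.
total_maims = np.maximum(costs, budgets).sum(axis=1)
total_ideal = costs.sum(axis=1)

print(f"ideal mean total: {total_ideal.mean():6.2f}")
print(f"MAIMS mean total: {total_maims.mean():6.2f}")
print(f"P(overrun) ideal: {(total_ideal > budgets.sum()).mean():.2f}, "
      f"MAIMS: {(total_maims > budgets.sum()).mean():.2f}")
```

Clipping each element's left tail at its budget shifts the simulated total-cost distribution to the right of the "ideal" analysis, reproducing the underestimation effect the paper describes.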
Östensson, Ellinor; Fröberg, Maria; Leval, Amy; Hellström, Ann-Cathrin; Bäcklund, Magnus; Zethraeus, Niklas; Andersson, Sonia
2015-01-01
Objective Costs associated with HPV-related diseases such as cervical dysplasia, cervical cancer, and genital warts have not been evaluated in Sweden. These costs must be estimated in order to determine the potential savings if these diseases were eradicated and to assess the combined cost-effectiveness of HPV vaccination and cervical cancer screening. The present study aimed to estimate prevention, management, and treatment costs associated with cervical dysplasia, cervical cancer, and genital warts from a societal perspective in Sweden in 2009, 1 year before the quadrivalent HPV vaccination program was implemented. Methods and Materials Data from the Swedish cervical cancer screening program was used to calculate the costs associated with prevention (cytological cervical cancer screening), management (colposcopy and biopsy following inadequate/abnormal cytological results), and treatment of CIN. Swedish official statistics were used to estimate treatment costs associated with cervical cancer. Published epidemiological data were used to estimate the number of incident, recurrent, and persistent cases of genital warts; a clinical expert panel assessed management and treatment procedures. Estimated visits, procedures, and use of medications were used to calculate the annual cost associated with genital warts. Results From a societal perspective, total estimated costs associated with cervical cancer and genital warts in 2009 were €106.6 million, of which €81.4 million (76%) were direct medical costs. Costs associated with prevention, management, and treatment of CIN were €74 million; screening and management costs for women with normal and inadequate cytology alone accounted for 76% of this sum. The treatment costs associated with incident and prevalent cervical cancer and palliative care were €23 million. Estimated costs for incident, recurrent and persistent cases of genital warts were €9.8 million. Conclusion Prevention, management, and treatment costs associated with cervical dysplasia, cervical cancer, and genital warts are substantial. Defining these costs is important for future cost-effectiveness analyses of the quadrivalent HPV vaccination program in Sweden. PMID:26398189
Utility of Satellite Magnetic Observations for Estimating Near-Surface Magnetic Anomalies
NASA Technical Reports Server (NTRS)
Kim, Hyung Rae; vonFrese, Ralph R. B.; Taylor, Patrick T.; Kim, Jeong Woo; Park, Chan Hong
2003-01-01
Regional to continental scale magnetic anomaly maps are becoming increasingly available from airborne, shipborne, and terrestrial surveys. Satellite data are commonly considered to fill the coverage gaps in regional compilations of these near-surface surveys. For the near-surface Antarctic magnetic anomaly map being produced by the Antarctic Digital Magnetic Anomaly Project (ADMAP), we show that near-surface magnetic anomaly estimation is greatly enhanced by the joint inversion of the near-surface data with the satellite observations relative to conventional techniques such as minimum curvature. Orsted observations are especially advantageous relative to the Magsat data, which have order-of-magnitude greater measurement errors, albeit at much lower orbital altitudes. CHAMP is observing the geomagnetic field with the same measurement accuracy as the Orsted mission, but at the lower orbital altitudes covered by Magsat. Hence, additional significant improvement in predicting near-surface magnetic anomalies can result as these CHAMP data become available. Our analysis also suggests that considerable new insights into the magnetic properties of the lithosphere may be revealed by a further order-of-magnitude improvement in the accuracy of the magnetometer measurements at minimum orbital altitude.
1980-04-01
Adjusting the Trauma Scale for Frequency and Magnitude of Flooding. Little information is available on the duration of the psychic impairment... percents of psychic impairment which can readily be translated into monetary compensation amounts based on Veterans Administration awards for... alcoholic addiction, developmental deviation). The severity of the psychic impairment was determined, in part, by an estimate of the persistence and
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benemann, J.R.; Oswald, W.J.
There is growing evidence that global warming could become a major global environmental threat during the 21st century. The precautionary principle commands preventive action, at both national and international levels, to minimize this potential threat. Many near-term, relatively inexpensive, mitigation options are available. In addition, long-term research is required to evaluate and develop advanced, possibly more expensive, countermeasures, in the eventuality that they may be required. The utilization of power plant CO2 and its recycling into fossil fuel substitutes by microalgae cultures could be one such long-term technology. Microalgae production is an expanding industry in the U.S., with three commercial systems (of approximately 10 hectares each) producing nutriceuticals, specifically beta-carotene, extracted from Dunaliella, and Spirulina biomass. Microalgae are also used in wastewater treatment. Current production costs are high, about $10,000/ton of algal biomass, almost two orders of magnitude higher than acceptable for greenhouse gas mitigation. This report reviews the current state of the art, including algal cultivation and harvesting/processing, and outlines a technique for achieving very high productivities. Costs of CO2 mitigation with microalgae production of oils ("biodiesel") are estimated and future R&D needs outlined.
Feasibility study of a magnetic fusion production reactor
NASA Astrophysics Data System (ADS)
Moir, R. W.
1986-12-01
A magnetic fusion reactor can produce 10.8 kg of tritium at a fusion power of only 400 MW, an order of magnitude lower power than that of a fission production reactor. Alternatively, the same fusion reactor can produce 995 kg of plutonium. Either a tokamak or a tandem mirror production plant can be used for this purpose; the cost is estimated at about $1.4 billion (1982 dollars) in either case. (The direct costs are estimated at $1.1 billion.) The production cost is calculated to be $22,000/g for tritium and $260/g for plutonium of quite high purity (1% 240Pu). Because of the lack of demonstrated technology, such a plant could not be constructed today without significant risk. However, good progress is being made in fusion technology and, although success in magnetic fusion science and engineering is hard to predict with assurance, it seems possible that the physics basis and much of the needed technology could be demonstrated in facilities now under construction. Most of the remaining technology could be demonstrated in the early 1990s in a fusion test reactor of a few tens of megawatts. If the Magnetic Fusion Energy Program constructs a fusion test reactor of approximately 400 MW of fusion power as a next step in fusion power development, such a facility could be used later as a production reactor in a spinoff application. A construction decision in the late 1980s could result in an operating production reactor in the late 1990s. A magnetic fusion production reactor (MFPR) has four potential advantages over a fission production reactor: (1) no fissile material input is needed; (2) no fissioning exists in the tritium mode and very low fissioning exists in the plutonium mode, thus avoiding the meltdown hazard; (3) the cost will probably be lower because of the smaller thermal power required; and (4) no reprocessing plant is needed in the tritium mode. The MFPR also has two disadvantages: (1) it will be more costly to operate because it consumes rather than sells electricity, and (2) there is a risk of not meeting the design goals.
Discharge of debris from ice at the margin of the Greenland ice sheet
Knight, P.G.; Waller, R.I.; Patterson, C.J.; Jones, A.P.; Robinson, Z.P.
2002-01-01
Sediment production at a terrestrial section of the ice-sheet margin in West Greenland is dominated by debris released through the basal ice layer. The debris flux through the basal ice at the margin is estimated to be 12-45 m³ m⁻¹ a⁻¹. This is three orders of magnitude higher than that previously reported for East Antarctica, an order of magnitude higher than sites reported in Norway, Iceland and Switzerland, but an order of magnitude lower than values previously reported from tidewater glaciers in Alaska and other high-rate environments such as surging glaciers. At our site, only negligible amounts of debris are released through englacial, supraglacial or subglacial sediment transfer. Glaciofluvial sediment production is highly localized, and long sections of the ice-sheet margin receive no sediment from glaciofluvial sources. These findings differ from those of studies at more temperate glacial settings where glaciofluvial routes are dominant and basal ice contributes only a minor percentage of the debris released at the margin. These data on debris flux through the terrestrial margin of an outlet glacier contribute to our limited knowledge of debris production from the Greenland ice sheet.
Zhao, Hui; Chen, Chuansheng; Zhang, Hongchuan; Zhou, Xinlin; Mei, Leilei; Chen, Chunhui; Chen, Lan; Cao, Zhongyu; Dong, Qi
2012-01-01
Using an artificial-number learning paradigm and the ERP technique, the present study investigated neural mechanisms involved in the learning of magnitude and spatial order. 54 college students were divided into 2 groups matched in age, gender, and school major. One group was asked to learn the associations between magnitude (dot patterns) and the meaningless Gibson symbols, and the other group learned the associations between spatial order (horizontal positions on the screen) and the same set of symbols. Results revealed differentiated neural mechanisms underlying the learning processes of symbolic magnitude and spatial order. Compared to magnitude learning, spatial-order learning showed a later and reversed distance effect. Furthermore, an analysis of the order-priming effect showed that order was not inherent to the learning of magnitude. Results of this study showed a dissociation between magnitude and order, which supports the numerosity code hypothesis of mental representations of magnitude. PMID:23185363
D-Optimal Experimental Design for Contaminant Source Identification
NASA Astrophysics Data System (ADS)
Sai Baba, A. K.; Alexanderian, A.
2016-12-01
Contaminant source identification seeks to estimate the release history of a conservative solute given point concentration measurements at some time after the release. This can be mathematically expressed as an inverse problem, with a linear observation operator or a parameter-to-observation map, which we tackle using a Bayesian approach. Acquisition of experimental data can be laborious and expensive. The goal is to control the experimental parameters (in our case, the sparsity of the sensors) to maximize the information gain subject to physical or budget constraints. This is known as optimal experimental design (OED). D-optimal experimental design seeks to maximize the expected information gain, and has long been considered the gold standard in the statistics community. Our goal is to develop scalable methods for D-optimal experimental designs involving large-scale PDE-constrained problems with high-dimensional parameter fields. A major challenge for OED is that a nonlinear optimization algorithm for the D-optimality criterion requires repeated evaluation of the objective function and gradient involving the determinant of large and dense matrices; this cost can be prohibitively expensive for applications of interest. We propose novel randomized matrix techniques that bring down the computational costs of the objective function and gradient evaluations by several orders of magnitude compared to the naive approach. The effect of randomized estimators on the accuracy and the convergence of the optimization solver will be discussed. The features and benefits of our new approach will be demonstrated on a challenging model problem from contaminant source identification involving the inference of the initial condition from spatio-temporal observations in a time-dependent advection-diffusion problem.
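For the linear-Gaussian setting described in this abstract, the D-optimality objective is commonly written as below. This is a standard formulation assuming noise-whitened observations, a Gaussian prior with covariance Γ_pr, and a vector w of non-negative sensor weights whose sparsity is controlled by a penalty Φ; it is a sketch of the problem class, not the authors' exact objective.

```latex
\max_{w \,\ge\, 0}\;\; \log\det\!\Big( I + \Gamma_{\mathrm{pr}}^{1/2}\, F^{*}\, W(w)\, F\, \Gamma_{\mathrm{pr}}^{1/2} \Big) \;-\; \gamma\,\Phi(w)
```

Here F is the parameter-to-observable map, W(w) = diag(w) selects and weights candidate sensors, and γ > 0 trades expected information gain against sensor cost; the randomized estimators mentioned above approximate this log-determinant and its gradient without forming the large dense matrix.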
NASA Astrophysics Data System (ADS)
Park, J. H.; Park, Y. K.; Kim, T. S.; Kim, G.; Cho, C.; Kim, I.
2017-12-01
North Korea (NK) conducted its 6th Underground Nuclear Test (UNT), with a magnitude one order larger than the previous ones, on 3 September 2017. Using correlated waveform comparison, the epicenter of the 6th NK UNT was estimated at 41.3020N 129.0795E, located about 200 m north of the 5th NK UNT site. The body wave magnitude was calculated as mb 5.7 through our routine process, which measures the maximum amplitude of the P wave at frequencies above 1 Hz using stations around the Korean peninsula; however, this could be an underestimate if the source energy spectrum of the UNT radiated dominantly at frequencies below 1 Hz. Considering the source spectrum of the 6th NK UNT, we applied to the P wave a 2nd-order Butterworth bandpass filter between 0.1 and 1 Hz and measured the 6th/5th UNT amplitude ratio. Instead of the ratio of 6-7 obtained from the raw P waves, the filtered amplitude ratio was 10-12 at several stations. After cross-checking the bandpass-filtered amplitude ratios against the previous NK UNTs, we finalized the magnitude of the 6th NK UNT as mb 6.1. A collapse earthquake occurred about 8 minutes 32 seconds after the 6th NK UNT, with an epicenter estimated to be within 1 km of the UNT site. The similarity of its waveforms to those of two mine collapses in South Korea, together with moment tensor inversion, indicated a source mechanism very similar to a mine collapse. Three further earthquakes were detected and their locations and magnitudes analyzed; we consider these earthquakes to have been induced by tectonic stress accumulated around the NK UNT site.
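The band-pass step described above is straightforward to reproduce; the sketch below uses scipy, with the sampling rate, zero-phase filtering choice, and synthetic traces as illustrative assumptions (the abstract specifies only the filter order and band).

```python
import numpy as np
from scipy import signal

def bandpass_p_wave(trace, fs):
    """2nd-order Butterworth band-pass (0.1-1 Hz) applied to a P-wave trace.

    Zero-phase filtering (sosfiltfilt) is used so peak amplitudes are not
    shifted in time; fs is the sampling rate in Hz.
    """
    sos = signal.butter(2, [0.1, 1.0], btype="bandpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, trace)

# Synthetic example: the filtered peak-amplitude ratio of two traces,
# the quantity compared above for the 6th and 5th tests.
fs = 100.0
t = np.arange(0.0, 60.0, 1.0 / fs)
trace5 = np.sin(2 * np.pi * 0.5 * t) * np.exp(-t / 20)  # 0.5 Hz, in band
trace6 = 11.0 * trace5                                   # 11x larger event
ratio = (np.abs(bandpass_p_wave(trace6, fs)).max()
         / np.abs(bandpass_p_wave(trace5, fs)).max())
print(f"filtered amplitude ratio: {ratio:.1f}")
```

With the standard log10 amplitude dependence of mb, a filtered ratio of 10-12 corresponds to a difference of roughly 1.0-1.1 magnitude units between the two tests, which is the arithmetic behind the cross-check described above.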
The Empirical Analysis of Cigarette Tax Avoidance and Illicit Trade in Vietnam, 1998-2010
Nguyen, Minh Thac; Denniston, Ryan; Nguyen, Hien Thi Thu; Hoang, Tuan Anh; Ross, Hana; So, Anthony D.
2014-01-01
Illicit trade carries the potential to magnify existing tobacco-related health care costs through increased availability of untaxed and inexpensive cigarettes. What is known with respect to the magnitude of illicit trade for Vietnam is produced primarily by the industry, and the methodologies are typically opaque. Independent assessment of the illicit cigarette trade in Vietnam is vital to tobacco control policy. This paper measures the magnitude of illicit cigarette trade for Vietnam between 1998 and 2010 using two methods: discrepancies between legitimate domestic cigarette sales and domestic tobacco consumption estimated from surveys, and trade discrepancies as recorded by Vietnam and trade partners. The results indicate that Vietnam likely experienced net inward smuggling during the period studied. With the inclusion of adjustments for survey respondent under-reporting, inward illicit trade likely occurred in three of the four years for which surveys were available. Discrepancies in trade records indicate that the value of smuggled cigarettes into Vietnam ranged from $100 million to $300 million between 2000 and 2010 and that these cigarettes primarily originated in Singapore, Hong Kong, Macao, Malaysia, and Australia. Notable differences in trends over time exist between the two methods, but by comparison, the industry estimates consistently place the magnitude of illicit trade at the upper bounds of what this study shows. The unavailability of annual, survey-based estimates of consumption may obscure the true annual trend over time. Second, as surveys changed over time, estimates relying on them may be inconsistent with one another. Finally, these two methods measure different components of illicit trade, specifically consumption of illicit cigarettes regardless of origin and smuggling of cigarettes into a particular market. However, absent a gold standard, comparisons of different approaches to illicit trade measurement serve efforts to refine and improve measurement approaches and estimates. PMID:24489886
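Schematically, the two gap methods reduce to the following (our notation, for illustration; the paper's adjustments are more detailed):

```latex
\text{Illicit}_t \;\approx\; (1+u)\,C^{\text{survey}}_t \;-\; S^{\text{legal}}_t,
\qquad
\text{Smuggling}_t \;\approx\; X^{\,\text{partners}\rightarrow\text{VN}}_t \;-\; M^{\,\text{VN}}_t
```

Here C^survey is survey-estimated consumption, u an under-reporting adjustment, S^legal tax-paid domestic sales, X partner-recorded exports to Vietnam, and M Vietnam-recorded imports; the first difference captures consumption of illicit cigarettes regardless of origin, the second smuggling into the market.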
Utilizing Expert Knowledge in Estimating Future STS Costs
NASA Technical Reports Server (NTRS)
Fortner, David B.; Ruiz-Torres, Alex J.
2004-01-01
A method of estimating the costs of future space transportation systems (STSs) involves classical activity-based cost (ABC) modeling combined with systematic utilization of the knowledge and opinions of experts to extend the process-flow knowledge of existing systems to systems that involve new materials and/or new architectures. The expert knowledge is particularly helpful in filling gaps that arise in computational models of processes because of inconsistencies in historical cost data. Heretofore, the costs of planned STSs have been estimated following a "top-down" approach that tends to force the architectures of new systems to incorporate process flows like those of the space shuttles. In this ABC-based method, one makes assumptions about the processes, but otherwise follows a "bottom-up" approach that does not force the new system architecture to incorporate a space-shuttle-like process flow. Prototype software has been developed to implement this method. Through further development of the software, it should be possible to extend the method beyond the space program to almost any setting in which there is a need to estimate the costs of a new system and to extend the applicable knowledge base in order to make the estimate.
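At its core, the ABC roll-up behind such an estimate is the textbook identity below (a sketch; the structure of the prototype software is not described here):

```latex
C_{\text{system}} \;=\; \sum_{a \in \mathcal{A}} r_a\, q_a
```

where A is the set of process-flow activities, q_a the driver quantity of activity a implied by the assumed process flow, and r_a its cost rate; the expert elicitation described above supplies q_a and r_a where historical cost data are inconsistent.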
NASA Astrophysics Data System (ADS)
Zakharchenko, V. D.; Kovalenko, I. G.
2014-05-01
A new method for estimating the line-of-sight velocity of a high-speed near-Earth object (asteroid, meteorite) is suggested. The method is based on the use of a fractional, one-half-order derivative of the Doppler signal. The algorithm suggested is much simpler and more economical than the classical one, and it appears preferable for use in orbital weapon systems for threat response. Application of fractional differentiation to quick evaluation of the mean frequency location of the reflected Doppler signal is justified. The method allows an assessment of the mean frequency in the time domain without spectral analysis. An algorithm structure for real-time estimation is presented. Velocity resolution estimates are made for typical asteroids in the X-band. It is shown that the wait time can be shortened by orders of magnitude compared with the corresponding value for standard spectral processing.
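The principle behind the estimator can be seen from the action of the fractional derivative on a complex exponential (a sketch of the underlying identity, using the Weyl/Liouville definition; the paper's operational estimator is not reproduced here):

```latex
D^{1/2} e^{j\omega t} = (j\omega)^{1/2}\, e^{j\omega t}
\quad\Longrightarrow\quad
\left| \frac{D^{1/2} s(t)}{s(t)} \right|^{2} = |\omega|
\quad \text{for } s(t) \approx A\,e^{j\omega t}
```

so the mean Doppler frequency, and with it the line-of-sight velocity v = ωc/(4πf_0) of a monostatic radar with carrier frequency f_0, can be read off in the time domain without a Fourier transform.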
ERIC Educational Resources Information Center
Baker, Bruce D.
2011-01-01
This article applies the education cost function methodology in order to estimate additional costs associated with black student concentration and with alternative, race-neutral measures of urban poverty. Recent research highlights the continued importance of the role of race in educational outcomes, and how the intersection of peer group effects…
Verhaeghe, Nick; Lievens, Delfine; Annemans, Lieven; Vander Laenen, Freya; Putman, Koen
2016-01-01
The use of alcohol, tobacco, illicit drugs, and psychoactive pharmaceuticals is associated with a higher likelihood of developing several diseases and injuries and, as a consequence, with considerable health-care expenditures. There is as yet no consistent methodology for estimating the economic impact of addictive substances on society. The aim was to assess the methodological approaches applied in social cost studies estimating the economic impact of alcohol, tobacco, illicit drugs, and psychoactive pharmaceuticals. A systematic literature review of the electronic databases Medline (PubMed) and Web of Science was performed. Studies in English published from 1997 onward examining the social costs of alcohol, tobacco, illicit drugs, and psychoactive pharmaceuticals were eligible for inclusion. Twelve social cost studies met the inclusion criteria. In all studies, direct and indirect costs were measured, but intangible costs were seldom taken into account. A wide variety in the cost items included across studies was observed. Sensitivity analyses to address the uncertainty around certain cost estimates were conducted in eight of the studies considered in the review. Differences in the cost items included in cost-of-illness studies limit comparison across studies. It is clear that it is difficult to deal with all consequences of substance use in cost-of-illness studies. Future social cost studies should be based on sound methodological principles in order to produce more reliable estimates of the economic burden of substance use.
NASA Astrophysics Data System (ADS)
Brenning, A.; Schwinn, M.; Ruiz-Páez, A. P.; Muenchow, J.
2014-03-01
Mountain roads in developing countries are known to increase landslide occurrence due to often inadequate drainage systems and mechanical destabilization of hillslopes by undercutting and overloading. This study empirically investigates landslide initiation frequency along two paved interurban highways in the tropical Andes of southern Ecuador across different climatic regimes. Generalized additive models (GAM) and generalized linear models (GLM) were used to analyze the relationship between mapped landslide initiation points and distance to highway while accounting for topographic, climatic and geological predictors as possible confounders. A spatial block bootstrap was used to obtain non-parametric confidence intervals for the odds ratio of landslide occurrence near the highways (25 m distance) compared to a 200 m distance. The estimated odds ratio was 18-21 with lower 95% confidence bounds > 13 in all analyses. Spatial bootstrap estimation using the GAM supports the higher odds ratio estimate of 21.2 (95% confidence interval: 15.5-25.3). The highway-related effects were observed to fade at about 150 m distance. Road effects appear to be enhanced in geological units characterized by Holocene gravels and Laramide andesite/basalt. Overall, landslide susceptibility was found to be more than one order of magnitude higher in close proximity to paved interurban highways in the Andes of southern Ecuador.
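A compact sketch of the modeling strategy on synthetic data: a logistic GLM for landslide initiation versus distance to the highway, with a spatial block bootstrap for the 25 m versus 200 m odds ratio. The paper additionally used GAMs; the data, coefficients, and block structure here are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical grid cells along a highway corridor: a binary landslide
# indicator, distance to highway (m), slope (deg), and a spatial block id.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "dist": rng.uniform(0, 300, n),
    "slope": rng.uniform(10, 45, n),
    "block": rng.integers(0, 50, n),
})
logit = -1.5 - 0.012*df.dist + 0.05*(df.slope - 25)
df["slide"] = rng.binomial(1, 1/(1 + np.exp(-logit)))

def odds_ratio(d):
    """OR of landslide initiation at 25 m vs 200 m from the highway."""
    fit = smf.glm("slide ~ dist + slope", data=d,
                  family=sm.families.Binomial()).fit()
    return np.exp(fit.params["dist"] * (25 - 200))

# Spatial block bootstrap: resample whole blocks to respect autocorrelation.
blocks = df.block.unique()
boot = []
for _ in range(200):
    pick = rng.choice(blocks, size=len(blocks), replace=True)
    boot.append(odds_ratio(pd.concat([df[df.block == b] for b in pick])))
print(odds_ratio(df), np.percentile(boot, [2.5, 97.5]))
```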
High-resolution wavefront control of high-power laser systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brase, J; Brown, C; Carrano, C
1999-07-08
Nearly every new large-scale laser system application at LLNL has requirements for beam control which exceed the current level of available technology. For applications such as inertial confinement fusion, laser isotope separation, and laser machining, the ability to transport significant power to a target while maintaining good beam quality is critical. There are many ways that laser wavefront quality can be degraded. Thermal effects due to the interaction of high-power laser or pump light with the internal optical components or with the ambient gas are common causes of wavefront degradation. For many years, adaptive optics based on thin deformable glass mirrors with piezoelectric or electrostrictive actuators have been used to remove the low-order wavefront errors from high-power laser systems. These adaptive optics systems have successfully improved laser beam quality, but have also generally revealed additional high-spatial-frequency errors, both because the low-order errors have been reduced and because deformable mirrors have often introduced some high-spatial-frequency components due to manufacturing errors. Many current and emerging laser applications fall into the high-resolution category, with an increased need for correction of high-spatial-frequency aberrations that requires correctors with thousands of degrees of freedom. The largest deformable mirrors currently available have fewer than one thousand degrees of freedom at a cost of approximately $1M, so a deformable mirror capable of meeting these high-spatial-resolution requirements would be cost prohibitive. Therefore a new approach using a different wavefront control technology is needed. One such approach is the use of liquid-crystal (LC) spatial light modulator (SLM) technology for controlling the phase of linearly polarized light. Current LC SLM technology provides high-spatial-resolution wavefront control, with hundreds of thousands of degrees of freedom, more than two orders of magnitude greater than the best deformable mirrors currently made. Even with the increased spatial resolution, the cost of these devices is nearly two orders of magnitude less than the cost of the largest deformable mirror.
NASA Astrophysics Data System (ADS)
Sone, B. T.; Nkosi, S. S.; Nkosi, M. M.; Coetsee-Hugo, E.; Swart, H. C.; Maaza, M.
2018-05-01
Application of thin film technology is increasing in many areas such as energy production, energy saving, telecommunications, and protective and smart coatings. This increased application creates a need for simple, cost-effective methods for the synthesis of highly multifunctional metal oxide thin films. The technique of Aqueous Chemical Growth is presented in this paper as a simple, inexpensive means of producing WO3 thin films that find applications in gas sensing, electrochromism, and photocatalysis. We demonstrate, through this technique, that heterogeneous nucleation and growth of WO3 thin films on plain glass substrates take place at low pH and low temperatures (75-95 °C) without the use of surfactants or template-directing methods. The substrates used needed no surface modification. On the plain glass substrates (soda-lime silicates) a variety of micro- and nanostructures could be observed, the most important of which were nanoplatelets that acted as the basic building block for the self-assembly of more hierarchical 3D microspheres and thin films. X-ray diffraction analysis showed the dominant crystallographic phases to be hexagonal WO3 and monoclinic WO3. The thin films produced showed a fair degree of porosity. Some of the thin films on glass showed the ability to sense H2, unaided, at 250 °C, with sensor responses of 1-2 orders of magnitude. The films also demonstrated potential to sense CO2, although this could only be achieved using high concentrations of CO2 gas at temperatures of 300 °C and above; the sensor responses at 300 °C were estimated to be less than 1 order of magnitude.
NASA Astrophysics Data System (ADS)
Beckingham, L. E.; Mitnick, E. H.; Zhang, S.; Voltolini, M.; Yang, L.; Steefel, C. I.; Swift, A.; Cole, D. R.; Sheets, J.; Kneafsey, T. J.; Landrot, G.; Anovitz, L. M.; Mito, S.; Xue, Z.; Ajo Franklin, J. B.; DePaolo, D.
2015-12-01
CO2 sequestration in deep sedimentary formations is a promising means of reducing atmospheric CO2 emissions, but the rate and extent of mineral trapping remains difficult to predict. Reactive transport models provide predictions of mineral trapping based on laboratory mineral reaction rates, which have been shown to have large discrepancies with field rates. This, in part, may be due to poor quantification of mineral reactive surface area in natural porous media. Common estimates of mineral reactive surface area are ad hoc and typically based on grain size, adjusted several orders of magnitude to account for surface roughness and reactivity. This results in orders of magnitude discrepancies in estimated surface areas that directly translate into orders of magnitude discrepancies in model predictions. Additionally, natural systems can be highly heterogeneous and contain abundant nano- and micro-porosity, which can limit connected porosity and access to mineral surfaces. In this study, mineral-specific accessible surface areas are computed for a sample from the reservoir formation at the Nagaoka pilot CO2 injection site (Japan). Accessible mineral surface areas are determined from a multi-scale image analysis including X-ray microCT, SEM QEMSCAN, XRD, SANS, and SEM-FIB. Powder and flow-through column laboratory experiments are performed and the evolution of solutes in the aqueous phase is tracked. Continuum-scale reactive transport models are used to evaluate the impact of reactive surface area on predictions of experimental reaction rates. Evaluated reactive surface areas include geometric and specific surface areas (e.g., BET) in addition to their reactive-site weighted counterparts. The most accurate predictions of observed powder mineral dissolution rates were obtained through use of grain-size specific surface areas computed from a BET-based correlation. Effectively, this surface area reflects the grain-fluid contact area, or accessible surface area, in the powder dissolution experiment. In the model of the flow-through column experiment, the accessible mineral surface area, computed from the multi-scale image analysis, is evaluated in addition to the traditional surface area estimates.
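The sensitivity to surface area assumptions is easy to illustrate: for smooth spherical grains the geometric specific surface area is 6/(ρd), and ad hoc roughness multipliers propagate linearly into predicted rates. The grain size, density, rate constant, and roughness factors below are assumed values for illustration.

```python
# Geometric specific surface area of ideal spherical grains, and the spread
# introduced by ad hoc roughness corrections (all inputs assumed).
rho = 2650.0     # quartz density, kg/m^3
d = 100e-6       # grain diameter, m
k = 1e-12        # assumed rate constant, mol/(m^2 s)
ssa_geom = 6.0 / (rho * d)               # m^2/kg for smooth spheres
for roughness in (1, 10, 100, 1000):
    area = ssa_geom * roughness          # "corrected" surface area
    rate = k * area                      # predicted bulk rate, mol/(kg s)
    print(f"roughness x{roughness:>4}: A = {area:9.3f} m^2/kg, r = {rate:.2e} mol/(kg s)")
```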
Integrating Low-Cost Mems Accelerometer Mini-Arrays (mama) in Earthquake Early Warning Systems
NASA Astrophysics Data System (ADS)
Nof, R. N.; Chung, A. I.; Rademacher, H.; Allen, R. M.
2016-12-01
Current operational Earthquake Early Warning Systems (EEWS) acquire data with networks of single seismic stations and compute source parameters assuming earthquakes to be point sources. For large events, the point-source assumption leads to an underestimation of magnitude, and the use of single stations leads to large uncertainties in the locations of events outside the network. We propose the use of mini-arrays to improve EEWS. Mini-arrays have the potential to: (a) estimate reliable hypocentral locations by beamforming (FK-analysis) techniques; (b) characterize the rupture dimensions and account for finite-source effects, leading to more reliable estimates for large magnitudes. Previously, the high price of multiple seismometers has made creating arrays cost-prohibitive. However, we propose setting up mini-arrays of new seismometers based on low-cost (<$150), high-performance MEMS accelerometers around conventional seismic stations. The expected benefits of such an approach include decreasing alert times, improving real-time shaking predictions, and mitigating false alarms. We use low-resolution 14-bit Quake Catcher Network (QCN) data collected during the Rapid Aftershock Mobilization Program (RAMP) in Christchurch, NZ, following the M7.1 Darfield earthquake in September 2010. As the QCN network was so dense, we were able to use small sub-arrays of up to ten sensors spread over an area of at most 1.7 × 2.2 km² to demonstrate our approach and to solve for the back azimuth (BAZ) of two events (Mw 4.7 and Mw 5.1) with less than ±10° error. We will also present the new 24-bit device details, benchmarks, and real-time measurements.
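A toy version of the FK/beamforming step: grid-search the horizontal slowness that maximizes stacked power over a synthetic mini-array, then read the back azimuth from its direction. The array geometry, wavelet, and the x = east, y = north convention are all assumptions of this sketch.

```python
import numpy as np

# Plane-wave beamforming over a synthetic mini-array (invented geometry).
rng = np.random.default_rng(1)
fs = 100.0
xy = rng.uniform(-1000, 1000, (8, 2))       # sensor coordinates, m
s_true = np.array([-2.0e-4, 1.5e-4])        # slowness (s/m), ~4 km/s wave
t = np.arange(0, 4, 1/fs)
wavelet = lambda tau: np.exp(-((t - 1.5 - tau)/0.05)**2)
data = np.array([wavelet(xy[i] @ s_true) for i in range(len(xy))])

best = (None, -np.inf)
for sx in np.linspace(-5e-4, 5e-4, 101):
    for sy in np.linspace(-5e-4, 5e-4, 101):
        shifts = (xy @ np.array([sx, sy]) * fs).round().astype(int)
        stack = sum(np.roll(data[i], -shifts[i]) for i in range(len(xy)))
        p = np.sum(stack**2)                # stacked power for this slowness
        if p > best[1]:
            best = ((sx, sy), p)
(sx, sy), _ = best
baz = (np.degrees(np.arctan2(sx, sy)) + 180) % 360   # direction back to source
print(f"estimated back azimuth: {baz:.1f} deg")
```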
A learning framework for age rank estimation based on face images with scattering transform.
Chang, Kuang-Yu; Chen, Chu-Song
2015-03-01
This paper presents a cost-sensitive ordinal hyperplanes ranking algorithm for human age estimation based on face images. The proposed approach exploits relative-order information among the age labels for rank prediction. In our approach, the age rank is obtained by aggregating a series of binary classification results, where cost sensitivities among the labels are introduced to improve the aggregation performance. In addition, we give a theoretical analysis of the design of the cost of each individual binary classifier, so that the misranking cost is bounded by the total misclassification cost. An efficient descriptor, the scattering transform, which scatters the Gabor coefficients and pools them with Gaussian smoothing in multiple layers, is evaluated for facial feature extraction. We show that this descriptor is a generalization of conventional bioinspired features and is more effective for face-based age inference. Experimental results demonstrate that our method outperforms state-of-the-art age estimation approaches.
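A sketch of the ordinal-ranking idea with cost weighting, using plain logistic classifiers and toy features in place of the scattering-transform descriptors; the weighting scheme below is a simplified stand-in for the paper's cost design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Ordinal age rank by aggregating binary "is age > k?" classifiers; sample
# weights stand in for the per-classifier misclassification costs.
rng = np.random.default_rng(0)
age = rng.integers(15, 56, 600)                       # ages 15..55
X = age[:, None] * 0.1 + rng.normal(0, 1, (600, 8))   # age-correlated features

ks = np.arange(15, 55)                                # thresholds 15..54
clfs = []
for k in ks:
    y = (age > k).astype(int)
    w = 1.0 + np.abs(age - k) / 10.0                  # cost grows away from k
    clfs.append(LogisticRegression(max_iter=500).fit(X, y, sample_weight=w))

def predict_age(x):
    """Aggregate the binary decisions into a rank, then map back to years."""
    votes = sum(int(c.predict(x.reshape(1, -1))[0]) for c in clfs)
    return 15 + votes

print("true:", age[0], "predicted:", predict_age(X[0]))
```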
On strain and stress in living cells
NASA Astrophysics Data System (ADS)
Cox, Brian N.; Smith, David W.
2014-11-01
Recent theoretical simulations of amelogenesis and network formation and new, simple analyses of the basic multicellular unit (BMU) allow estimation of the order of magnitude of the strain energy density in populations of living cells in their natural environment. A similar simple calculation translates recent measurements of the force-displacement relation for contacting cells (cell-cell adhesion energy) into equivalent volume energy densities, which are formed by averaging the changes in contact energy caused by a cell's migration over the cell's volume. The rates of change of these mechanical energy densities (energy density rates) are then compared to the order of magnitude of the metabolic activity of a cell, expressed as a rate of production of metabolic energy per unit volume. The mechanical energy density rates are 4-5 orders of magnitude smaller than the metabolic energy density rate in amelogenesis or bone remodeling in the BMU, which involve modest cell migration velocities, and 2-3 orders of magnitude smaller for innervation of the gut or angiogenesis, where migration rates are among the highest for all cell types. For representative cell-cell adhesion gradients, the mechanical energy density rate is 6 orders of magnitude smaller than the metabolic energy density rate. The results call into question the validity of using simple constitutive laws to represent living cells. They also imply that cells need not migrate as inanimate objects down gradients in an energy field, but are better regarded as self-powered automata that may elect to be guided by such gradients or move otherwise. The elastic energy density rate is $\dot{G}_{\mathrm{el}} = \frac{d}{dt}\,\tfrac{1}{2}\left[(C_{11}+C_{12})\varepsilon_0^2 + 2\mu\gamma_0^2\right] = (C_{11}+C_{12})\varepsilon_0\dot{\varepsilon}_0 + 2\mu\gamma_0\dot{\gamma}_0$, or $\dot{G}_{\mathrm{el}} = \eta E\varepsilon_0\dot{\varepsilon}_0 + \eta' E\gamma_0\dot{\gamma}_0$, with $1.4 \le \eta \le 3.4$ and $0.7 \le \eta' \le 0.8$ for Poisson's ratio in the range $0.2 \le \nu \le 0.4$, and $\eta = 1.95$ and $\eta' = 0.75$ for $\nu = 0.3$. The spatial distribution of shear strains arising within an individual cell as cells slide past one another during amelogenesis is not known in detail. However, estimates can be inferred from the known relative velocities of the cells' centers of mass. When averaged over a volume comparable to the cell size, representative values of the strain are, to order of magnitude, $\varepsilon_0 \approx 0.1$ and $\gamma_0 \approx 0.1$. The shape distortions of cells seen, for example, in Fig. 1c imply peak strains in minor segments of a cell of magnitude unity, $\varepsilon_0 \approx 1$ and $\gamma_0 \approx 1$; these values represent the upper bound of plausible values and are included for discussion of the extremes of attainable strain energy rates. Given the strain magnitudes, the strain rates follow from the fact that a cell switches from one contacting neighbor in the adjacent row to the next in approximately 0.25 d, during which motion the strains might vary from zero to their maximum values and back again. Thus the most probable shear strain rate is inferred to be $\dot{\gamma}_0 \approx 10^{-6}\,\mathrm{s}^{-1}$ and the most probable tensile strain rate $\dot{\varepsilon}_0 \approx 10^{-6}\,\mathrm{s}^{-1}$, with upper bounds $\dot{\gamma}_0 = 10^{-5}\,\mathrm{s}^{-1}$ and $\dot{\varepsilon}_0 = 10^{-5}\,\mathrm{s}^{-1}$.
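Plugging the representative values quoted above into the second form of the elastic energy density rate gives a quick order-of-magnitude check; the cell stiffness and metabolic power density below are assumptions for illustration, not the paper's inputs.

```python
# Order-of-magnitude check of the elastic energy density rate against a
# representative metabolic power density (stiffness and metabolic values assumed).
E = 1.0e3                   # cell stiffness, Pa (~1 kPa, assumed)
eta, eta_p = 1.95, 0.75     # factors for Poisson's ratio nu = 0.3 (from text)
eps0 = gamma0 = 0.1         # representative strains (from text)
eps_dot = gamma_dot = 1e-6  # most probable strain rates, 1/s (from text)
G_el_dot = eta*E*eps0*eps_dot + eta_p*E*gamma0*gamma_dot   # W/m^3
metabolic = 100.0           # W/m^3, assumed metabolic power density
print(G_el_dot, metabolic / G_el_dot)   # ratio spans several orders of magnitude
```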
Estimates of the magnitudes of major marine mass extinctions in earth history.
Stanley, Steven M
2016-10-18
Procedures introduced here make it possible, first, to show that background (piecemeal) extinction is recorded throughout geologic stages and substages (not all extinction has occurred suddenly at the ends of such intervals); second, to separate out background extinction from mass extinction for a major crisis in earth history; and third, to correct for clustering of extinctions when using the rarefaction method to estimate the percentage of species lost in a mass extinction. Also presented here is a method for estimating the magnitude of the Signor-Lipps effect, which is the incorrect assignment of extinctions that occurred during a crisis to an interval preceding the crisis because of the incompleteness of the fossil record. Estimates for the magnitudes of mass extinctions presented here are in most cases lower than those previously published. They indicate that only ∼81% of marine species died out in the great terminal Permian crisis, whereas levels of 90-96% have frequently been quoted in the literature. Calculations of the latter numbers were incorrectly based on combined data for the Middle and Late Permian mass extinctions. About 90 orders and more than 220 families of marine animals survived the terminal Permian crisis, and they embodied an enormous amount of morphological, physiological, and ecological diversity. Life did not nearly disappear at the end of the Permian, as has often been claimed.
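For reference, the classical rarefaction calculation that the clustering correction modifies, on invented abundance counts:

```python
import numpy as np
from scipy.special import gammaln

# Classical rarefaction: expected species richness in a subsample of n
# individuals (the abundance counts below are invented).
def log_comb(a, b):
    return gammaln(a + 1) - gammaln(b + 1) - gammaln(a - b + 1)

def rarefied_richness(counts, n):
    counts = np.asarray(counts, dtype=float)
    N = counts.sum()
    # P(species i missing from the subsample) = C(N - N_i, n) / C(N, n)
    p_absent = np.where(N - counts >= n,
                        np.exp(log_comb(N - counts, n) - log_comb(N, n)),
                        0.0)
    return float((1.0 - p_absent).sum())

abund = [120, 80, 40, 20, 10, 5, 2, 1, 1]   # individuals per species
for n in (10, 50, 100):
    print(n, round(rarefied_richness(abund, n), 1))
```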
ERIC Educational Resources Information Center
Nienhusser, H. Kenny; Oshio, Toko
2017-01-01
High school students' accuracy in estimating the cost of college (AECC) was examined by utilizing a new methodological approach, the absolute-deviation-continuous construct. This study used the High School Longitudinal Study of 2009 (HSLS:09) data and examined 10,530 11th grade students in order to measure their AECC for 4-year public and private…
Crystal growth and magnetic anisotropy in the spin-chain ruthenate Na2RuO4
NASA Astrophysics Data System (ADS)
Balodhi, Ashiwini; Singh, Yogesh
2018-02-01
We report single-crystal growth, electrical resistivity ρ, anisotropic magnetic susceptibility χ, and heat capacity Cp measurements on the one-dimensional spin-chain ruthenate Na2RuO4. We observe variable-range-hopping (VRH) behavior in ρ(T). The magnetic susceptibility with magnetic field perpendicular (χ⊥) and parallel (χ∥) to the spin chains is reported. The magnetic properties are anisotropic, with χ⊥ > χ∥ over the temperature range of measurement T ≈ 2-305 K and χ⊥/χ∥ ≈ 1.4 at 305 K. From an analysis of the χ(T) data we attempt to estimate the anisotropy in the g factor and the Van Vleck paramagnetic contribution. An anomaly in χ(T) and a corresponding step-like anomaly in Cp at TN = 37 K confirm long-range antiferromagnetic ordering. This temperature is an order of magnitude smaller than the Weiss temperature θ ≈ -250 K and points to suppression of long-range magnetic order due to low dimensionality. A fit of the experimental χ(T) by a one-dimensional spin-chain model gives an estimate of the intrachain exchange interaction 2J ≈ -85 K and the magnitude of the interchain coupling |2J⊥| ≈ 3 K.
Chai, Huamin; Guerriere, Denise N; Zagorski, Brandon; Coyte, Peter C
2014-01-01
With increasing emphasis on the provision of home-based palliative care in Canada, economic evaluation is warranted, given its tremendous demands on family caregivers. Despite this, very little is known about the economic outcomes associated with home-based unpaid care-giving at the end of life. The aims of this study were to (i) assess the magnitude and share of unpaid care costs in total healthcare costs for home-based palliative care patients, from a societal perspective, and (ii) examine the sociodemographic and clinical factors that account for variations in this share. One hundred and sixty-nine caregivers of patients with a malignant neoplasm were interviewed from the time of referral to a home-based palliative care programme provided by the Temmy Latner Centre for Palliative Care at Mount Sinai Hospital, Toronto, Canada, until death. Information regarding palliative care resource utilisation and costs, time devoted to care-giving, and sociodemographic and clinical characteristics was collected between July 2005 and September 2007. Over the last 12 months of life, the average monthly cost was $14,924 (2011 CDN$) per patient. Unpaid care-giving costs were the largest component ($11,334, accounting for 77% of total palliative care expenses), followed by public costs ($3,211; 21%) and out-of-pocket expenditures ($379; 2%). In all cost categories, monthly costs increased exponentially with proximity to death. Seemingly unrelated regression estimation suggested that the share of unpaid care costs in total costs was driven by patients' and caregivers' sociodemographic characteristics. Results suggest that the overwhelming proportion of palliative care costs is unpaid care-giving. This share of costs requires urgent attention to identify interventions aimed at alleviating the heavy financial burden and to ultimately ensure the viability of home-based palliative care in the future. © 2013 John Wiley & Sons Ltd.
Performance of US teaching hospitals: a panel analysis of cost inefficiency.
Rosko, Michael D
2004-02-01
This research summarizes an analysis of the impact of environmental pressures on hospital inefficiency during the period 1990-1999. The panel design included 616 hospitals. Of these, 211 were academic medical centers and 415 were hospitals with smaller teaching programs. The primary sources of data were the American Hospital Association's Annual Survey of Hospitals and Medicare Cost Reports. Hospital inefficiency was estimated by a regression technique called stochastic frontier analysis (SFA). This technique estimates a "best practice" cost frontier for each hospital based on the hospital's outputs and input prices. The cost efficiency of each hospital was defined as the ratio of the stochastic frontier total costs to observed total costs. Average inefficiency declined from 14.35% in 1990 to 11.42% in 1998; it increased to 11.78% in 1999. Decreases in inefficiency were associated with the HMO penetration rate and time. Increases in inefficiency were associated with for-profit ownership status and Medicare share of admissions. The implementation of the provisions of the Balanced Budget Act of 1997 was followed by a small decrease in average hospital inefficiency. Analysis found that the SFA results were moderately sensitive to the specification of the teaching output variable. Thus, although the SFA technique can be useful for detecting differences in inefficiency between groups of hospitals (i.e., those with high versus those with low Medicare shares, or for-profit versus not-for-profit hospitals), its relatively low precision indicates it should not be used for exact estimates of the magnitude of differences associated with inefficiency-effects variables.
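A minimal sketch of the stochastic frontier idea: a cost frontier with half-normal inefficiency estimated by maximum likelihood on synthetic data. The likelihood follows the standard composed-error form for cost frontiers; it is not the paper's exact specification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Stochastic-frontier cost model: log cost = b0 + b1*x + v + u, with noise
# v ~ N(0, sv^2) and inefficiency u ~ |N(0, su^2)| (synthetic data).
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                        # log output (toy)
u = np.abs(rng.normal(0, 0.3, n))             # inefficiency >= 0
v = rng.normal(0, 0.2, n)                     # statistical noise
y = 1.0 + 0.8*x + v + u                       # observed log cost

def negll(p):
    b0, b1, ls_v, ls_u = p
    sv, su = np.exp(ls_v), np.exp(ls_u)       # log-parameterized for positivity
    s, lam = np.hypot(sv, su), su / sv
    e = y - b0 - b1*x                         # composed error v + u
    # density of e: (2/s) * phi(e/s) * Phi(e*lam/s)  (cost-frontier sign)
    return -np.sum(np.log(2/s) + norm.logpdf(e/s) + norm.logcdf(e*lam/s))

p = minimize(negll, [0.0, 0.0, -1.0, -1.0], method="Nelder-Mead").x
print("beta:", p[:2], "sigma_v, sigma_u:", np.exp(p[2:]))
```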
NASA Astrophysics Data System (ADS)
Cheek, Kim A.
2017-08-01
Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude linearly. Magnitude estimation errors can be explained by confusion about the structure of the decimal number system, particularly in terms of how powers of ten are related to one another. Indonesian children regularly use large currency units. This study investigated if they estimate long time periods accurately and if they estimate those time periods the same way they estimate analogous currency units. Thirty-nine children from a private International Baccalaureate school estimated temporal magnitudes up to 10,000,000,000 years in a two-part study. Artifacts children created were compared to theoretical model predictions previously used in number magnitude estimation studies as reported by Landy et al. (Cognitive Science 37:775-799, 2013). Over one third estimated the magnitude of time periods up to 10,000,000,000 years linearly, exceeding what would be expected based upon prior research with children this age who lack daily experience with large quantities. About half treated successive powers of ten as a count sequence instead of multiplicatively related when estimating magnitudes of time periods. Children generally estimated the magnitudes of long time periods and familiar, analogous currency units the same way. Implications for ways to improve the teaching and learning of this crosscutting concept/overarching idea are discussed.
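The two response patterns contrasted in the study can be illustrated with a toy mapping of values onto a 0 to 10^10 line: a linear placement versus one that spaces successive powers of ten equally (the count-sequence pattern). The values below are arbitrary.

```python
import numpy as np

# Linear placement vs treating powers of ten as an equally spaced count
# sequence, for a 0 .. 10^10 "timeline" (illustrative values only).
values = np.array([1e3, 1e6, 1e9, 1e10])
linear = values / 1e10
count_seq = np.log10(values) / 10.0
for v, a, b in zip(values, linear, count_seq):
    print(f"{v:>12.0e}   linear: {a:10.7f}   power-of-ten count: {b:4.2f}")
```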
Singapore’s willingness to pay for mitigation of transboundary forest-fire haze from Indonesia
NASA Astrophysics Data System (ADS)
Lin, Yuan; Wijedasa, Lahiru S.; Chisholm, Ryan A.
2017-02-01
Haze pollution over the past four decades in Southeast Asia is mainly a result of forest and peatland fires in Indonesia. The economic impacts of haze include adverse health effects and disruption to transport and tourism. Previous studies have used a variety of approaches to assess the economic impacts of haze and the forest fires more generally, but no study has used contingent valuation to assess non-market impacts of haze on individuals. Here we apply contingent valuation to estimate the impacts of haze on Singapore, one of the most severely affected countries. We used a double-bounded dichotomous-choice survey design and the Kaplan-Meier-Turnbull method to infer the distribution of Singaporeans' willingness to pay (WTP) for haze mitigation. Our estimate of mean individual WTP was 0.97% of annual income (n = 390). To calculate total national WTP, we stratified by income, the demographic variable most strongly related to individual WTP. The total WTP estimate was 643.5 million per year (95% CI [527.7 million, 765.0 million]). This estimate is comparable in magnitude to previously estimated impacts of Indonesia's fires and also to the estimated costs of peatland protection and restoration. We recommend that our results be incorporated into future cost-benefit analyses of the fires and mitigation strategies.
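The Turnbull estimator named above is easy to sketch. The following toy computes the nonparametric lower-bound mean WTP from single-bounded responses; the study used a double-bounded design, and the bid set and counts here are invented.

```python
import numpy as np

# Turnbull lower-bound mean WTP from dichotomous-choice data (synthetic).
bids = np.array([10, 50, 100, 200, 400])     # bid amounts per year (assumed)
n    = np.array([80, 80, 80, 75, 75])        # respondents offered each bid
yes  = np.array([70, 55, 40, 20,  8])        # "would pay" counts

F = 1 - yes / n                 # empirical P(WTP < bid)
F = np.maximum.accumulate(F)    # crude monotonization (pooling violators)
# Probability mass between successive bids, valued at the lower bid.
edges = np.concatenate([[0.0], F, [1.0]])
mass = np.diff(edges)
points = np.concatenate([[0.0], bids])
print("lower-bound mean WTP:", (mass * points).sum())
```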
Asbestos exposure--quantitative assessment of risk
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, J.M.; Weill, H.
Methods for deriving quantitative estimates of asbestos-associated health risks are reviewed and their numerous assumptions and uncertainties described. These methods involve extrapolation of risks observed at past, relatively high asbestos concentration levels down to the usually much lower concentration levels of interest today, in some cases orders of magnitude lower. These models are used to calculate estimates of the potential risk to workers manufacturing asbestos products and to students enrolled in schools containing asbestos products. The potential risk to workers exposed for 40 yr to 0.5 fibers per milliliter (f/ml) of mixed asbestos fiber types (a permissible workplace exposure limit under consideration by the Occupational Safety and Health Administration (OSHA)) is estimated as 82 lifetime excess cancers per 10,000 exposed. The risk to students exposed to an average asbestos concentration of 0.001 f/ml of mixed asbestos fiber types for an average enrollment period of 6 school years is estimated as 5 lifetime excess cancers per one million exposed. If the school exposure is to chrysotile asbestos only, the estimated risk is 1.5 lifetime excess cancers per million. Risks from other causes are presented for comparison; e.g., annual rates (per million) of 10 deaths from high school football, 14 from bicycling (10-14 yr of age), and 5 to 20 from whooping cough vaccination. Decisions concerning asbestos products require participation of all parties involved and should only be made after a scientifically defensible estimate of the associated risk has been obtained. In many cases to date, such decisions have been made without adequate consideration of the level of risk or the cost-effectiveness of attempts to lower the potential risk. 73 references.
O'Hanlon, Claire E; Parthan, Anju; Kruse, Morgan; Cartier, Shannon; Stollenwerk, Bjorn; Jiang, Yawen; Caloyeras, John P; Crittenden, Daria B; Barron, Richard
2017-07-01
The goal of this study was to assess and compare the potential clinical and economic value of emerging bone-forming agents using the only currently available agent, teriparatide, as a reference case in patients at high, near-term (imminent, 1- to 2-year) risk of osteoporotic fractures, extending to a lifetime horizon with sequenced antiresorptive agents for maintenance treatment. Analyses were performed by using a Markov cohort model accounting for time-specific fracture protection effects of bone-forming agents followed by antiresorptive treatment with denosumab. The alternative bone-forming agent profiles were defined by using assumptions regarding the onset and total magnitude of protection against fractures with teriparatide. The model cohort comprised 70-year-old female patients with T scores below -2.5 and a previous vertebral fracture. Outcomes included clinical fractures, direct costs, and quality-adjusted life years. The simulated treatment strategies were compared by calculating their incremental "value" (net monetary benefit). Improvements in the onset and magnitude of fracture protection (vs the teriparatide reference case) produced a net monetary benefit of $17,000,000 per 10,000 treated patients during the (1.5-year) bone-forming agent treatment period and $80,000,000 over a lifetime horizon that included 3.5 years of maintenance treatment with denosumab. Incorporating time-specific fracture effects in the Markov cohort model allowed for estimation of a range of cost savings, quality-adjusted life years gained, and clinical fractures avoided at different levels of fracture protection onset and magnitude. Results provide a first estimate of the potential "value" new bone-forming agents (romosozumab and abaloparatide) may confer relative to teriparatide. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
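A Markov cohort model of the kind described can be sketched compactly. The states, cycle length, transition probabilities, costs, utilities, and willingness-to-pay threshold below are all illustrative assumptions, not the study's inputs; discounting is omitted for brevity.

```python
import numpy as np

# Three-state Markov cohort (well / post-fracture / dead), 6-month cycles,
# comparing a fracture-protective treatment with no treatment (toy inputs).
P_treat = np.array([[0.93, 0.05, 0.02],
                    [0.20, 0.75, 0.05],
                    [0.00, 0.00, 1.00]])
P_none  = np.array([[0.88, 0.10, 0.02],
                    [0.15, 0.79, 0.06],
                    [0.00, 0.00, 1.00]])
u = np.array([0.80, 0.60, 0.0])       # utility weight per state
c = np.array([300.0, 4000.0, 0.0])    # cost per cycle per state, $

def run(P, cycles=40):
    x = np.array([1.0, 0.0, 0.0])     # cohort starts in "well"
    qaly = cost = 0.0
    for _ in range(cycles):
        x = x @ P
        qaly += (x * u).sum() * 0.5   # half-year cycles
        cost += (x * c).sum()
    return qaly, cost

q1, c1 = run(P_treat)
q0, c0 = run(P_none)
wtp = 100_000                          # $ per QALY (assumed threshold)
print("incremental net monetary benefit per patient:", wtp*(q1 - q0) - (c1 - c0))
```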
Revisiting the Cooling Flow Problem in Galaxies, Groups, and Clusters of Galaxies
NASA Astrophysics Data System (ADS)
McDonald, M.; Gaspari, M.; McNamara, B. R.; Tremblay, G. R.
2018-05-01
We present a study of 107 galaxies, groups, and clusters spanning ∼3 orders of magnitude in mass, ∼5 orders of magnitude in central galaxy star formation rate (SFR), ∼4 orders of magnitude in the classical cooling rate ($\dot{M}_{\rm cool} \equiv M_{\rm gas}(r < r_{\rm cool})/t_{\rm cool}$) of the intracluster medium (ICM), and ∼5 orders of magnitude in the central black hole accretion rate. For each system in this sample, we measure the ICM cooling rate, $\dot{M}_{\rm cool}$, using archival Chandra X-ray data and acquire the SFR and systematic uncertainty in the SFR by combining over 330 estimates from dozens of literature sources. With these data, we estimate the efficiency with which the ICM cools and forms stars, finding $\epsilon_{\rm cool} \equiv {\rm SFR}/\dot{M}_{\rm cool} = 1.4\% \pm 0.4\%$ for systems with $\dot{M}_{\rm cool} > 30\,M_\odot\,{\rm yr}^{-1}$. For these systems, we measure a slope in the SFR-$\dot{M}_{\rm cool}$ relation greater than unity, suggesting that the systems with the strongest cool cores are also cooling more efficiently. We propose that this may be related to, on average, higher black hole accretion rates in the strongest cool cores, which could influence the total amount (saturating near the Eddington rate) and dominant mode (mechanical versus radiative) of feedback. For systems with $\dot{M}_{\rm cool} < 30\,M_\odot\,{\rm yr}^{-1}$, we find that the SFR and $\dot{M}_{\rm cool}$ are uncorrelated, and we show that this is consistent with star formation being fueled at a low (but dominant) level by recycled ISM gas in these systems. We find an intrinsic log-normal scatter in SFR at fixed $\dot{M}_{\rm cool}$ of 0.52 ± 0.06 dex (1σ rms), suggesting that cooling is tightly self-regulated over very long timescales but can vary dramatically on short timescales. There is weak evidence that this scatter may be related to the feedback mechanism, with the scatter being minimized (∼0.4 dex) for systems in which the mechanical feedback power is within a factor of two of the cooling luminosity.
Roebuck, M Christopher; Liberman, Joshua N
2009-06-01
To study the impact of various elements of pharmacy benefit design on both the absolute and relative utilization of generics, brands, retail pharmacy, and mail service. Panel data on 1,074 plan sponsors covering 21.6 million individuals over 12 calendar quarters (2005-2007). A retrospective analysis of pharmacy claims. To control for potential endogeneity, linear fixed effects models were estimated for each of six dependent variables: the generic utilization rate, the brand utilization rate, the generic dispensing rate (GDR), the retail pharmacy utilization rate, the mail service utilization rate, and the mail distribution rate. Most member cost-share variables were nonlinearly associated with changes in prescription drug utilization. Marginal effects were generally greater in magnitude for brand out-of-pocket costs than for generic out-of-pocket costs. Time dummies, as well as other pharmacy benefit design elements, also yielded significant results. Prior estimates of the effect of member cost sharing on prescription drug utilization may be biased if complex benefit designs, mail service fulfillment, and unmeasured factors such as pharmaceutical pipelines are not accounted for. Commonly cited relative utilization metrics, such as GDR, may be misleading if not examined alongside absolute prescription drug utilization.
Residential building codes, affordability, and health protection: a risk-tradeoff approach.
Hammitt, J K; Belsky, E S; Levy, J I; Graham, J D
1999-12-01
Residential building codes intended to promote health and safety may produce unintended countervailing risks by adding to the cost of construction. Higher construction costs increase the price of new homes and may increase health and safety risks through "income" and "stock" effects. The income effect arises because households that purchase a new home have less income remaining for spending on other goods that contribute to health and safety. The stock effect arises because suppression of new-home construction leads to slower replacement of less safe housing units. These countervailing risks are not presently considered in code debates. We demonstrate the feasibility of estimating the approximate magnitude of countervailing risks by combining the income effect with three relatively well understood and significant home-health risks. We estimate that a code change that increases the nationwide cost of constructing and maintaining homes by $150 (0.1% of the average cost to build a single-family home) would induce offsetting risks yielding between 2 and 60 premature fatalities or, including morbidity effects, between 20 and 800 lost quality-adjusted life years (both discounted at 3%) each year the code provision remains in effect. To provide a net health benefit, the code change would need to reduce risk by at least this amount. Future research should refine these estimates, incorporate quantitative uncertainty analysis, and apply a full risk-tradeoff approach to real-world case studies of proposed code changes.
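A back-of-envelope version of the income-effect logic, with assumed values (the paper's analysis is far more detailed): dividing the total added construction cost by an assumed income loss per statistical death yields an induced-fatality figure inside the 2-60 range quoted.

```python
# Income-effect arithmetic with assumed parameters (not the paper's inputs).
cost_per_home = 150.0        # $ added per home by the code change (from text)
homes_per_year = 1.0e6       # new homes affected per year (assumed)
income_per_death = 10.0e6    # $ of income loss associated with one statistical
                             # death (assumed; literature spans a wide range)
induced_deaths = cost_per_home * homes_per_year / income_per_death
print(f"~{induced_deaths:.0f} induced premature fatalities per year")
```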
Region-specific S-wave attenuation for earthquakes in northwestern Iran
NASA Astrophysics Data System (ADS)
Heidari, Reza; Mirzaei, Noorbakhsh
2017-11-01
In this study, the continuous wavelet transform (CWT) is applied to estimate the frequency-dependent quality factor of shear waves, Q_S, in northwestern Iran. The dataset includes velocigrams of more than 50 events with magnitudes between 4.0 and 6.5 that occurred in the study area. The CWT-based method provides a high-resolution technique for estimating S-wave frequency-dependent attenuation. The quality factor values are determined in the form of a power law as Q_S(f) = (147 ± 16) f^(0.71 ± 0.02) and (126 ± 12) f^(0.73 ± 0.02) for vertical and horizontal components, respectively, where f is between 0.9 and 12 Hz. Furthermore, in order to verify the reliability of the suggested Q_S estimator, an additional test is performed using accelerograms of the Ahar-Varzaghan dual earthquakes of August 11, 2012 (moment magnitudes 6.4 and 6.3) and their aftershocks. Results indicate that the estimated Q_S values from the CWT-based method are not very sensitive to the numbers and types of waveforms used (velocity or acceleration).
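Fitting the power-law form Q_S(f) = Q0 f^n is a one-line log-log regression; the per-frequency Q values below are synthetic stand-ins for the CWT-derived measurements.

```python
import numpy as np

# Fit Q_S(f) = Q0 * f**n to per-frequency Q estimates (synthetic values).
f = np.array([1, 2, 3, 4, 6, 8, 12.0])
Q = 147 * f**0.71 * np.exp(np.random.default_rng(0).normal(0, 0.05, f.size))
n, logQ0 = np.polyfit(np.log(f), np.log(Q), 1)   # slope = n, intercept = ln Q0
print(f"Q_S(f) ≈ {np.exp(logQ0):.0f} f^{n:.2f}")
```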
NASA Technical Reports Server (NTRS)
Bozyan, Elizabeth P.; Hemenway, Paul D.; Argue, A. Noel
1990-01-01
Observations of a set of 89 extragalactic objects (EGOs) will be made with the Hubble Space Telescope Fine Guidance Sensors and Planetary Camera in order to link the HIPPARCOS Instrumental System to an extragalactic coordinate system. Most of the sources chosen for observation contain compact radio sources and stellarlike nuclei; 65 percent are optical variables beyond a 0.2 mag limit. To ensure proper exposure times, accurate mean magnitudes are necessary. In many cases, the average magnitudes listed in the literature were not adequate. The literature was searched for all relevant photometric information for the EGOs, and photometric parameters were derived, including mean magnitude, maximum range, and timescale of variability. This paper presents the results of that search and the parameters derived. The results will allow exposure times to be estimated such that an observed magnitude different from the tabular magnitude by 0.5 mag in either direction will not degrade the astrometric centering ability on a Planetary Camera CCD frame.
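The exposure-time stakes are simple to quantify: in the source-noise-limited case (an assumption of this sketch), the exposure needed for a fixed signal-to-noise ratio scales as 10^(0.4 Δm), so a source 0.5 mag fainter than its tabulated mean needs roughly 60% more exposure.

```python
# Exposure factor vs photometric error: photon rate scales as 10**(-0.4*dm),
# so exposure for fixed S/N (source-noise-limited, assumed) scales inversely.
for dm in (-0.5, 0.0, 0.5):   # observed minus tabulated magnitude
    print(f"dm = {dm:+.1f} mag -> relative exposure {10**(0.4*dm):.2f}")
```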
Hospital costs estimation and prediction as a function of patient and admission characteristics.
Ramiarina, Robert; Almeida, Renan Mvr; Pereira, Wagner Ca
2008-01-01
The present work analyzed the association between hospital costs and patient admission characteristics in a general public hospital in the city of Rio de Janeiro, Brazil. The unit costs method was used to estimate inpatient day costs associated with specific hospital clinics. With this aim, three "cost centers" were defined in order to group direct and indirect expenses pertaining to the clinics. After the costs were estimated, a standard linear regression model was developed for correlating cost units and their putative predictors (the patient's gender and age, the admission type (urgency/elective), ICU admission (yes/no), blood transfusion (yes/no), the admission outcome (death/no death), the complexity of the medical procedures performed, and a risk-adjustment index). Data were collected for 3,100 patients between January 2001 and January 2003. Average inpatient costs across clinics ranged from US$1,135 [Orthopedics] to US$3,101 [Cardiology]. Costs increased according to increases in the risk-adjustment index in all clinics, and the index was statistically significant in all clinics except Urology, General surgery, and Clinical medicine. The occupation rate was inversely correlated with costs, and age had no association with costs. The (adjusted) percentage of explained variance varied between 36.3% [Clinical medicine] and 55.1% [Thoracic surgery clinic]. The estimates are an important step towards the standardization of hospital cost calculation, especially for countries that lack formal hospital accounting systems.
The thermodynamic efficiency of computations made in cells across the range of life
NASA Astrophysics Data System (ADS)
Kempes, Christopher P.; Wolpert, David; Cohen, Zachary; Pérez-Mercader, Juan
2017-11-01
Biological organisms must perform computation as they grow, reproduce and evolve. Moreover, ever since Landauer's bound was proposed, it has been known that all computation has some thermodynamic cost, and that the same computation can be achieved with greater or smaller thermodynamic cost depending on how it is implemented. Accordingly, an important issue concerning the evolution of life is assessing the thermodynamic efficiency of the computations performed by organisms. This issue is interesting both from the perspective of how close life has come to maximally efficient computation (presumably under the pressure of natural selection), and from the practical perspective of what efficiencies we might hope that engineered biological computers might achieve, especially in comparison with current computational systems. Here we show that the computational efficiency of translation, defined as free energy expended per amino acid operation, outperforms the best supercomputers by several orders of magnitude, and is only about an order of magnitude worse than the Landauer bound. However, this efficiency depends strongly on the size and architecture of the cell in question. In particular, we show that the useful efficiency of an amino acid operation, defined as the bulk energy per amino acid polymerization, decreases for increasing bacterial size and converges to the polymerization cost of the ribosome. This cost of the largest bacteria does not change in cells as we progress through the major evolutionary shifts to both single- and multicellular eukaryotes. However, the rates of total computation per unit mass are non-monotonic in bacteria with increasing cell size, and also change across different biological architectures, including the shift from unicellular to multicellular eukaryotes. This article is part of the themed issue 'Reconceptualizing the origins of life'.
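A rough orientation on the translation-versus-Landauer comparison, using textbook-style assumptions (about 4 ATP equivalents per residue and log2(20) bits per amino acid choice) rather than the paper's own accounting:

```python
import math

# Landauer bound vs an assumed per-amino-acid free energy cost (all inputs
# are textbook-style assumptions, so treat the ratio as orientation only).
k_B, T = 1.380649e-23, 310.0               # J/K; physiological temperature
landauer_bit = k_B * T * math.log(2)       # J per bit erased
atp = 5.0e-20                              # J per ATP hydrolysis (~30 kJ/mol)
per_aa = 4 * atp                           # assumed ~4 ATP equivalents/residue
bits_per_aa = math.log2(20)                # bits to select 1 of 20 residues
print(per_aa / (landauer_bit * bits_per_aa))   # roughly an order of magnitude
```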
Reducing Contingency through Sampling at the Luckey FUSRAP Site - 13186
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frothingham, David; Barker, Michelle; Buechi, Steve
2013-07-01
Typically, the greatest risk in developing accurate cost estimates for the remediation of hazardous, toxic, and radioactive waste sites is the uncertainty in the estimated volume of contaminated media requiring remediation. Efforts to address this risk in the remediation cost estimate can result in large cost contingencies that are often considered unacceptable when budgeting for site cleanups. Such was the case for the Luckey Formerly Utilized Sites Remedial Action Program (FUSRAP) site near Luckey, Ohio, which had significant uncertainty surrounding the estimated volume of site soils contaminated with radium, uranium, thorium, beryllium, and lead. Funding provided by the American Recovery and Reinvestment Act (ARRA) allowed the U.S. Army Corps of Engineers (USACE) to conduct additional environmental sampling and analysis at the Luckey Site between November 2009 and April 2010, with the objective of further delineating the horizontal and vertical extent of contaminated soils in order to reduce the uncertainty in the soil volume estimate. Investigative work included radiological, geophysical, and topographic field surveys, subsurface borings, and soil sampling. Results from the investigative sampling were used in conjunction with Argonne National Laboratory's Bayesian Approaches for Adaptive Spatial Sampling (BAASS) software to update the contaminated soil volume estimate for the site. This updated volume estimate was then used to update the project cost-to-complete estimate using the USACE Cost and Schedule Risk Analysis process, which develops cost contingencies based on project risks. An investment of $1.1M of ARRA funds for additional investigative work resulted in a reduction of 135,000 in-situ cubic meters (177,000 in-situ cubic yards) in the estimated base volume of contaminated soil. This refinement of the estimated soil volume resulted in a $64.3M reduction in the estimated project cost-to-complete, through a reduction in the uncertainty in the contaminated soil volume estimate and the associated contingency costs.
Moral-Vico, Javier; Barallat, Jaume; Abad, Llibertat; Olivé-Monllau, Rosa; Muñoz-Pascual, Francesc Xavier; Galán Ortega, Amparo; del Campo, F Javier; Baldrich, Eva
2015-07-15
In this work we report on the production of a low-cost microfluidic device for the multiplexed electrochemical detection of magneto bioassays. As a proof of concept, the device has been used to detect myeloperoxidase (MPO), a cardiovascular biomarker. For this purpose, two bioassays have been optimized in parallel on magnetic beads (MBs) for the simultaneous detection of MPO endogenous peroxidase activity and quantification of total MPO. Since the two bioassays produced signals of different magnitude for each concentration of MPO tested, two detection strategies have been compared: registering steady-state currents (Iss) under substrate flow, and measuring the peak currents (Ip) produced in a stopped-flow approach. As shown, appropriate tuning of the detection and flow conditions can provide extremely sensitive detection, and also allows simultaneous detection of assays or parameters that would produce signals of different orders of magnitude if measured by a single detection strategy. To demonstrate the feasibility of the detection strategy reported, a dual MPO mass and activity assay has finally been applied to the study of 10 real plasma samples, allowing patient classification according to the risk of suffering a cardiovascular event. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
1983-01-01
An assessment was made of the impact of developments in computational fluid dynamics (CFD) on the traditional role of aerospace ground test facilities over the next fifteen years. With improvements in CFD and more powerful scientific computers projected over this period, it is expected that the flow over a complete aircraft will be computable at a unit cost three orders of magnitude lower than presently possible. Over the same period, improvements in ground test facilities will progress through the application of computational techniques, including CFD, to data acquisition, facility operational efficiency, and simulation of the flight envelope; however, no dramatic change in unit cost is expected, as greater efficiency will be countered by higher energy and labor costs.
Parameter estimation in Probabilistic Seismic Hazard Analysis: current problems and some solutions
NASA Astrophysics Data System (ADS)
Vermeulen, Petrus
2017-04-01
A typical Probabilistic Seismic Hazard Analysis (PSHA) comprises identification of seismic source zones, determination of hazard parameters for these zones, selection of an appropriate ground motion prediction equation (GMPE), and integration over probabilities according to the Cornell-McGuire procedure. Determination of hazard parameters often does not receive the attention it deserves, and problems therein are therefore often overlooked. Here, many of these problems are identified, and some of them addressed. The parameters that need to be identified are those associated with the frequency-magnitude law, those associated with the earthquake recurrence law in time, and the parameters controlling the GMPE. This study is concerned with the frequency-magnitude law and the temporal distribution of earthquakes, not with GMPEs. The Gutenberg-Richter frequency-magnitude law is usually adopted for the frequency-magnitude law, and a Poisson process for earthquake recurrence in time. Accordingly, the parameters that need to be determined are the slope parameter of the Gutenberg-Richter frequency-magnitude law, i.e. the b-value; the maximum magnitude to which the Gutenberg-Richter law applies, m_max; and the mean recurrence frequency, λ, of earthquakes. If, instead of the Cornell-McGuire procedure, the "Parametric-Historic procedure" is used, these parameters do not have to be known before the PSHA computations; they are estimated directly during the PSHA computation. The resulting relation for the frequency of ground motion vibration parameters has a functional form analogous to the frequency-magnitude law, described by a parameter γ (analogous to the b-value of the Gutenberg-Richter law) and the maximum possible ground motion, a_max (analogous to m_max). Originally, the approach could be applied only to simple GMPEs; recently, however, the method was extended to incorporate more complex forms of GMPEs. With regard to the parameter m_max, there are numerous methods of estimation, none of which is accepted as the standard one, and much controversy surrounds this parameter. In practice, when estimating the above-mentioned parameters from a seismic catalogue, the magnitude m_min above which the catalogue is complete becomes important. Thus, the parameter m_min must also be estimated in practice. Several methods are discussed in the literature, and no specific method is preferred. Methods usually aim at identifying the point where a frequency-magnitude plot starts to deviate from linearity due to data loss. Parameter estimation is clearly a rich field which deserves much attention and, possibly, standardization of methods. These methods should be sound and efficient, and a query into which methods are to be used - and, for that matter, which ones are not - is in order.
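As one concrete example of the parameter estimation discussed, a minimal sketch of the standard Aki/Utsu maximum-likelihood b-value estimator on a synthetic catalogue (the abstract does not prescribe this particular implementation):

```python
import numpy as np

# Aki/Utsu maximum-likelihood b-value estimate from magnitudes above m_min,
# on a synthetic Gutenberg-Richter catalogue.
rng = np.random.default_rng(0)
m_min, b_true = 3.0, 1.0
beta = b_true * np.log(10)
mags = m_min + rng.exponential(1/beta, 5000)       # G-R magnitudes above m_min
b_hat = np.log10(np.e) / (mags.mean() - m_min)     # Aki (1965) estimator
se = b_hat / np.sqrt(len(mags))                    # first-order standard error
print(f"b = {b_hat:.3f} ± {se:.3f}")
```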
Beyond statistical inference: a decision theory for science.
Killeen, Peter R
2006-08-01
Traditional null hypothesis significance testing does not yield the probability of the null or its alternative and, therefore, cannot logically ground scientific decisions. The decision theory proposed here calculates the expected utility of an effect on the basis of (1) the probability of replicating it and (2) a utility function on its size. It takes significance tests--which place all value on the replicability of an effect and none on its magnitude--as a special case, one in which the cost of a false positive is revealed to be an order of magnitude greater than the value of a true positive. More realistic utility functions credit both replicability and effect size, integrating them for a single index of merit. The analysis incorporates opportunity cost and is consistent with alternate measures of effect size, such as r2 and information transmission, and with Bayesian model selection criteria. An alternate formulation is functionally equivalent to the formal theory, transparent, and easy to compute.
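A sketch of the proposed index: expected utility combines the probability of replication with a utility on effect size. The replication-probability approximation below follows the general form described for two equal groups; the utility numbers are arbitrary assumptions.

```python
import math
from statistics import NormalDist

def p_rep(d, n):
    """Approximate probability of a same-sign replication for observed effect
    size d with n subjects per group (small-d sampling approximation)."""
    sigma_d = math.sqrt(2.0 / n)            # sampling s.d. of d, two groups
    return NormalDist().cdf(d / (sigma_d * math.sqrt(2)))

def expected_utility(d, n, value_per_unit=10.0, false_pos_cost=100.0):
    """Credit replicability and effect size; penalize likely non-replication."""
    p = p_rep(d, n)
    return p * value_per_unit * d - (1 - p) * false_pos_cost

for d in (0.1, 0.3, 0.8):
    print(d, round(p_rep(d, 20), 3), round(expected_utility(d, 20), 2))
```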
Ion plated electronic tube device
Meek, T.T.
1983-10-18
An electronic tube and associated circuitry produced by ion plating techniques. The process is automated, producing both active and passive devices at very low cost. The circuitry is extremely reliable and is capable of functioning in both high-radiation and high-temperature environments. The electronic tubes produced are more than an order of magnitude smaller than conventional electronic tubes.
Hueth, Kyle D; Jackson, Brian R; Schmidt, Robert L
2018-05-31
To evaluate the prevalence of potentially unnecessary repeat testing (PURT) and the associated economic burden for an inpatient population at a large academic medical facility. We evaluated all inpatient test orders during 2016 for PURT by comparing the intertest times to published recommendations. Potential cost savings were estimated using the Centers for Medicare & Medicaid Services maximum allowable reimbursement rate. We evaluated result positivity as a determinant of PURT through logistic regression. Of the 4,242 repeated target tests evaluated, 1,849 (44%) were identified as PURT, representing an estimated cost-savings opportunity of $37,376. Collectively, the association of result positivity and PURT was statistically significant (relative risk, 1.2; 95% confidence interval, 1.1-1.3; P < .001). PURT contributes to unnecessary health care costs. We found that a small percentage of providers account for the majority of PURT, and that PURT is positively associated with result positivity.
Sobocki, Patrik; Jönsson, Bengt; Angst, Jules; Rehnberg, Clas
2006-06-01
Depression is one of the most disabling diseases, and causes a significant burden both to the individual and to society. WHO data suggest that depression causes 6% of the burden of all diseases in Europe in terms of disability-adjusted life years (DALYs). Yet the economic impact of depression in Europe has been relatively little researched. The present study aims at estimating the total cost of depression in Europe based on published epidemiologic and economic evidence. A model was developed to combine epidemiological and economic data on depression in Europe to estimate the cost. The model was populated with data collected from extensive literature reviews of the epidemiology and economic burden of depression in Europe. The cost data were calculated as annual cost per patient, and epidemiologic data were reported as 12-month prevalence estimates. National and international statistics for the model were retrieved from the OECD and Eurostat databases. The aggregated annual cost estimates are presented in Euro for 2004. In 28 countries with a population of 466 million, at least 21 million were affected by depression. The total annual cost of depression in Europe was estimated at Euro 118 billion in 2004, which corresponds to a cost of Euro 253 per inhabitant. Direct costs alone totalled Euro 42 billion, comprising outpatient care (Euro 22 billion), drug cost (Euro 9 billion) and hospitalization (Euro 10 billion). Indirect costs due to morbidity and mortality were estimated at Euro 76 billion. This makes depression the most costly brain disorder in Europe, accounting for 33% of the total cost. The cost of depression corresponds to 1% of the total economy of Europe (GDP). Our cost results are in good agreement with previous research findings. The cost estimates in the present study are based on model simulations for countries where no data were available. The predictability of our model is limited by the accuracy of the input data employed. As no earlier cost-of-illness study has been conducted on depression in Europe, it is difficult to evaluate the validity of our results for individual countries, and thus further research is needed. The cost of depression poses a significant economic burden to European society. The simulation model employed shows good predictability of the cost of depression in Europe and is a novel approach to estimating the cost-of-illness in Europe. IMPLICATIONS FOR HEALTH CARE PROVISION AND POLICIES: Health and social care policy and commissioning must be evidence-based. The empirical results from this study confirm previous findings that depression is a major concern for economic welfare in Europe, with consequences for both healthcare providers and policy makers. One important way to stem this growth in cost is through increased research efforts in the field. Moreover, better detection, prevention, treatment and patient management are imperative to reduce the burden of depression and its costs. Mental healthcare policies and better access to healthcare for the mentally ill are other challenges for Europe to address. This study has identified several research gaps which are of interest for future research. In order to better understand the impact of depression on European society, long-term prospective epidemiology and cost-of-illness studies are needed. In particular, data are lacking for Central European countries.
On the basis of our findings, further economic evaluations of treatments for depression are necessary in order to ensure a cost-effective use of European healthcare budgets.
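A toy reproduction of the aggregation behind the headline figures, using the rounded numbers quoted in the abstract; the small gaps from the reported totals are rounding, and nothing here is additional data:

    # Figures from the abstract (2004, Euro)
    population = 466e6          # 28 countries
    patients = 21e6             # 12-month prevalence of depression
    direct = 22e9 + 9e9 + 10e9  # outpatient + drugs + hospitalization
    indirect = 76e9             # morbidity and mortality
    total = direct + indirect

    print(f"total cost:       {total/1e9:.0f} billion Euro")  # abstract: 118
    print(f"per inhabitant:   {total/population:.0f} Euro")   # abstract: 253
    print(f"per patient/year: {total/patients:.0f} Euro")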
Fiedler, John L; Lividini, Keith; Kabaghe, Gladys; Zulu, Rodah; Tehinse, John; Bermudez, Odilia I; Jallier, Vincent; Guyondet, Christophe
2013-12-01
Background. Since fortification of sugar with vitamin A was mandated in 1998, Zambia's fortification program has not changed, while the country remains plagued by high rates of micronutrient deficiencies. Objective. To provide evidence-based fortification options with the hope of reinvigorating the Zambian fortification program. Methods. Zambia's 2006 Living Conditions Monitoring Survey is used to estimate the apparent intakes of vitamin A, iron, and zinc, as well as the apparent consumption levels and coverage of four fortification vehicles. Fourteen alternative food fortification portfolios are modeled, and their costs, impacts, average cost-effectiveness, and incremental cost-effectiveness are calculated using three alternative impact measures. Results. Alternative impact measures result in different rank orderings of the portfolios. The most cost-effective portfolio is vegetable oil, which has a cost per disability-adjusted life-year (DALY) saved ranging from 12% to 25% of that of sugar, depending on the impact measure used. The public health impact of fortified vegetable oil, however, is relatively modest. Additional criteria beyond cost-effectiveness are introduced and used to rank order the portfolios. The size of the public health impact, the total cost, and the incremental cost-effectiveness of phasing in multiple vehicle portfolios over time are analyzed. Conclusions. Assessing fortification portfolios by measuring changes in the prevalence of inadequate intakes underestimates impact. A more sensitive measure, which also takes into account the change in the Estimated Average Requirement (EAR) gap, is provided by a dose-response-based approach to estimating the number of DALYs saved. There exist highly cost-effective fortification intervention portfolios with substantial public health impacts and variable price tags that could help improve Zambians' nutrition status.
DOE Office of Scientific and Technical Information (OSTI.GOV)
HAASS, C.C.
1999-10-14
Identifies, evaluates and recommends interim measures for reducing or eliminating water sources and preferential pathways within the vadose zone of the single-shell tank farms. Features studied: surface water infiltration and leaking water lines that provide recharge moisture, and wells that could provide pathways for contaminant migration. An extensive data base, maps, recommended mitigations, and rough order of magnitude costs are included.
Economic and Environmental Impacts of Harmful Non-Indigenous Species in Southeast Asia
Nghiem, Le T. P.; Soliman, Tarek; Yeo, Darren C. J.; Tan, Hugh T. W.; Evans, Theodore A.; Mumford, John D.; Keller, Reuben P.; Baker, Richard H. A.; Corlett, Richard T.; Carrasco, Luis R.
2013-01-01
Harmful non-indigenous species (NIS) impose great economic and environmental impacts globally, but little is known about their impacts in Southeast Asia. Lack of knowledge of the magnitude of the problem hinders the allocation of appropriate resources for NIS prevention and management. We used benefit-cost analysis embedded in a Monte-Carlo simulation model and analysed economic and environmental impacts of NIS in the region to estimate the total burden of NIS in Southeast Asia. The total annual loss caused by NIS to agriculture, human health and the environment in Southeast Asia is estimated to be US$33.5 billion (5th and 95th percentile US$25.8–39.8 billion). Losses and costs to the agricultural sector are estimated to be nearly 90% of the total (US$23.4–33.9 billion), while the annual costs associated with human health and the environment are US$1.85 billion (US$1.4–2.5 billion) and US$2.1 billion (US$0.9–3.3 billion), respectively, although these estimates are based on conservative assumptions. We demonstrate that the economic and environmental impacts of NIS in low and middle-income regions can be considerable and that further measures, such as the adoption of regional risk assessment protocols to inform decisions on prevention and control of NIS in Southeast Asia, could be beneficial. PMID:23951120
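A stripped-down sketch of the kind of Monte-Carlo aggregation used to put percentile bounds on a total-burden estimate; the distribution families and spreads are placeholders chosen so the medians roughly echo the abstract's point estimates, not the study's fitted inputs:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000  # Monte-Carlo draws

    # Placeholder sectoral loss distributions (billion US$/yr)
    agriculture = rng.lognormal(mean=np.log(29.0), sigma=0.12, size=n)
    health      = rng.lognormal(mean=np.log(1.85), sigma=0.18, size=n)
    environment = rng.lognormal(mean=np.log(2.1),  sigma=0.35, size=n)

    total = agriculture + health + environment
    lo, mid, hi = np.percentile(total, [5, 50, 95])
    print(f"total burden: {mid:.1f} (5th-95th: {lo:.1f}-{hi:.1f}) billion US$/yr")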
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel scheme for estimating Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
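A bare-bones sketch of the generic error-reduction (Gerchberg-Saxton-type) iteration this method builds on, assuming the target Fourier magnitude is already known; the authors' patch-selection and magnitude-estimation steps are not reproduced, and the random test image is only there to exercise the code:

    import numpy as np

    def error_reduction(patch, known_mask, target_mag, n_iter=200):
        """Reconstruct missing intensities by alternating two constraints:
        (1) Fourier domain: impose the target magnitude, keep the phase;
        (2) spatial domain: restore the known pixels, keep estimates elsewhere."""
        x = np.where(known_mask, patch, patch[known_mask].mean())  # crude init
        for _ in range(n_iter):
            F = np.fft.fft2(x)
            F = target_mag * np.exp(1j * np.angle(F))   # magnitude constraint
            x = np.real(np.fft.ifft2(F))
            x = np.where(known_mask, patch, x)          # data constraint
        return x

    # Exercise the code: hide an 8x8 block and borrow the true magnitude
    rng = np.random.default_rng(0)
    original = rng.random((32, 32))
    mask = np.ones((32, 32), dtype=bool)
    mask[12:20, 12:20] = False
    recon = error_reduction(original, mask, np.abs(np.fft.fft2(original)))
    print(f"RMS error in hole: {np.sqrt(((recon - original)[~mask]**2).mean()):.3f}")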
Investigation of low-cost ablative heat shield fabrication for space shuttles
NASA Technical Reports Server (NTRS)
Chandler, H. H.
1972-01-01
Improvements in the processes and design to reduce the manufacturing costs for low density ablative panels for the space shuttle are discussed. The areas that were studied included methods of loading honeycomb core, alternative reinforcement concepts, and the use of reusable subpanels. A review of previous studies on the fabrication of low-cost ablative panels and on permissible defects that do not affect thermal performance was conducted. Considerable differences were discovered in the quoted prices for ablative panels, even though the various contractors had reported similar fabrication times. How these cost differences arise from different estimating criteria, and which estimating assumptions and other costs must be included in order to arrive at a realistic price, are discussed.
Farther on down the road: transport costs, trade and urban growth in sub-Saharan Africa
2018-01-01
This paper investigates the role of inter-city transport costs in determining the income of sub-Saharan African cities. In particular, focusing on fifteen countries whose largest city is a port, I find that an oil price increase of the magnitude experienced between 2002 and 2008 induces the income of cities near that port to increase by 7 percent relative to otherwise identical cities 500 kilometers farther away. Combined with external estimates, this implies an elasticity of city economic activity with respect to transport costs of −0.28 at 500 kilometers from the port. Moreover, the effect differs by the surface of roads between cities. Cities connected to the port by paved roads are chiefly affected by transport costs to the port, while cities connected to the port by unpaved roads are more affected by connections to secondary centers. PMID:29743731
Energy-Systems Economic Analysis
NASA Technical Reports Server (NTRS)
Doane, J.; Slonski, M. L.; Borden, C. S.
1982-01-01
The Energy Systems Economic Analysis (ESEA) program is a flexible analytical tool for rank ordering alternative energy systems. The basic ESEA approach derives an estimate of the costs incurred as a result of purchasing, installing and operating an energy system. These costs, suitably aggregated into yearly costs over the lifetime of the system, are divided by the expected yearly energy output to determine busbar energy costs. ESEA, developed in 1979, is written in FORTRAN IV for batch execution.
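A minimal sketch of the busbar-cost arithmetic the abstract describes; the capital-recovery levelization and the example numbers are illustrative assumptions, not ESEA's internal model:

    def busbar_cost(capital, annual_om, annual_energy_kwh, lifetime_yr, rate=0.08):
        """Levelized (busbar) energy cost: yearly-equivalent costs / yearly output."""
        # Capital recovery factor spreads purchase+install cost over the lifetime
        crf = rate * (1 + rate) ** lifetime_yr / ((1 + rate) ** lifetime_yr - 1)
        yearly_cost = capital * crf + annual_om
        return yearly_cost / annual_energy_kwh   # $/kWh

    # Hypothetical system: $2.4M installed, $60k/yr O&M, 3 GWh/yr, 20-yr life
    print(f"busbar cost: ${busbar_cost(2.4e6, 6.0e4, 3.0e6, 20):.3f}/kWh")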
Estimating the costs of landslide damage in the United States
Fleming, Robert W.; Taylor, Fred A.
1980-01-01
Landslide damages are one of the most costly natural disasters in the United States. A recent estimate of the total annual cost of landslide damage is in excess of $1 billion (Schuster, 1978). The damages can be significantly reduced, however, through the combined action of technical experts, government, and the public. Before they can be expected to take action, local governments need to have an appreciation of costs of damage in their areas of responsibility and of the reductions in losses that can be achieved. Where studies of cost of landslide damages have been conducted, it is apparent that (1) costs to the public and private sectors of our economy due to landslide damage are much larger than anticipated; (2) taxpayers and public officials generally are unaware of the magnitude of the cost, owing perhaps to the lack of any centralization of data; and (3) incomplete records and unavailability of records result in lower reported costs than actually were incurred. The U.S. Geological Survey has developed a method to estimate the cost of landslide damages in regional and local areas and has applied the method in three urban areas and one rural area. Costs are for different periods and are unadjusted for inflation; therefore, strict comparisons of data from different years should be avoided. Estimates of the average annual cost of landslide damage for the urban areas studied are $5,900,000 in the San Francisco Bay area; $4,000,000 in Allegheny County, Pa.; and $5,170,000 in Hamilton County, Ohio. Adjusting these figures for the population of each area, the annual cost of damages per capita are $1.30 in the nine-county San Francisco Bay region; $2.50 in Allegheny County, Pa.; and $5.80 in Hamilton County, Ohio. On the basis of data from other sources, the estimated annual damages on a per capita basis for the City of Los Angeles, Calif., are about $1.60. If the costs were available for the damages from landslides in Los Angeles in 1977-78 and 1979-80, the annual per capita costs probably would be much larger. The landslide near the rural community of Manti, Utah, caused an expenditure of about $1,800,000 or about $1,000 per person during the period 1974-76. Because a recurrence for such a landslide cannot be established, it is not possible to develop a meaningful estimate of annual per capita damages. Communities are urged to examine their costs of landslide damage and to evaluate the feasibility of several alternative programs that, for a modest investment, could significantly reduce these losses.
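The per-capita figures are simple ratios of annual damage to population, so the populations can be back-computed as a consistency check; these implied populations are derived from the abstract's numbers, not reported in it:

    areas = {
        # area: (annual damage $, damage per capita $)
        "San Francisco Bay region": (5_900_000, 1.30),
        "Allegheny County, Pa.":    (4_000_000, 2.50),
        "Hamilton County, Ohio":    (5_170_000, 5.80),
    }
    for name, (damage, per_capita) in areas.items():
        print(f"{name}: implied population ~ {damage / per_capita:,.0f}")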
Residual acceleration data on IML-1: Development of a data reduction and dissemination plan
NASA Technical Reports Server (NTRS)
Rogers, Melissa J. B.; Alexander, J. Iwan D.
1993-01-01
The research performed consisted of three stages: (1) identification of sensitive IML-1 experiments and sensitivity ranges by order of magnitude estimates, numerical modeling, and investigator input; (2) research and development towards reduction, supplementation, and dissemination of residual acceleration data; and (3) implementation of the plan on existing acceleration databases.
Bias in Magnitude Estimation Following Left Hemisphere Injury
Woods, Adam J.; Mennemeier, Mark; Garcia-Rill, Edgar; Meythaler, Jay; Mark, Victor W.; Jewel, George R.; Murphy, Heather
2015-01-01
There is a growing interest both in identifying the neural mechanisms of magnitude estimation and in identifying forms of bias that can explain aspects of behavioral syndromes like unilateral neglect. Magnitude estimation is associated with activation of temporo-parietal cortex in both cerebral hemispheres of normal subjects; however, it is unclear if and how left hemisphere lesions bias magnitude estimation, because the infrequency of neglect and the presence of aphasia in these subjects confound examination. In contrast, we examined magnitude estimation using 12 different types of sensory stimuli that spanned five sensory domains in two patients with very different clinical presentations following unilateral left hemisphere stroke. One patient had neglect sub-acutely without aphasia. The other had aphasia chronically after a temporo-parietal lesion but not neglect. The neglect patient was re-examined 48 hours after being treated with modafinil (Provigil) for decreased arousal. Both patients demonstrated bias in magnitude estimation relative to normal subjects (n=83). Alertness improved in the neglect patient after taking modafinil. His neglect also resolved and his magnitude estimates more closely resembled those of normal subjects. This is the first evidence, to our knowledge, that left hemisphere injury can bias magnitude estimation in a manner similar but not identical to that associated with right hemisphere injury. PMID:16434066
The cost-effectiveness of etanercept in patients with severe ankylosing spondylitis in the UK.
Ara, R M; Reynolds, A V; Conway, P
2007-08-01
To examine the costs and benefits associated with long-term etanercept (ETN) treatment in patients with severe ankylosing spondylitis (AS) in the UK in accordance with the BSR guidelines. A mathematical model was constructed to estimate the costs and benefits associated with ETN plus non-steroidal anti-inflammatory drugs (NSAIDs) compared with NSAIDs alone. Individual patient data from Phase III RCTs were used to inform the proportion and magnitude of initial response to treatment and changes in health-related quality of life. A retrospective costing exercise on patients attending a UK secondary care rheumatology unit was used to inform disease costs. Published evidence on long-term disease progression was extrapolated over a 25-yr horizon. Uncertainty was examined using probabilistic sensitivity analyses. Over a 25-yr horizon, ETN plus NSAIDs gave 1.58 more QALYs at an additional cost of 35,978 pounds when compared with NSAID treatment alone. This equates to a central estimate of 22,700 pounds per QALY. The incremental costs per QALY over shorter time horizons were 27,600 pounds, 23,600 pounds and 22,600 pounds at 2, 5 and 15 yrs, respectively. Using a 25-yr horizon, 93% of results from the probabilistic analyses fall below a threshold of 25,000 pounds per QALY. This study demonstrates the potential cost-effectiveness of ETN plus NSAIDs compared with NSAIDs alone in patients with severe AS treated according to the BSR guidelines in the UK.
NASA Astrophysics Data System (ADS)
Herrington, C.; Gonzalez-Pinzon, R.; Covino, T. P.; Mortensen, J.
2015-12-01
Solute transport studies in streams and rivers often begin with the introduction of conservative and reactive tracers into the water column. Information on the transport of these substances is then captured within tracer breakthrough curves (BTCs) and used to estimate, for instance, travel times and dissolved nutrient and carbon dynamics. Traditionally, these investigations have been limited to systems with small discharges (< 200 L/s) and small reach lengths (< 500 m), partly due to the need for a priori information on the reach's hydraulic characteristics (e.g., channel geometry, resistance and dispersion coefficients) to predict arrival times, times to peak solute concentration, and mean travel times. Current techniques for acquiring these channel characteristics through preliminary tracer injections become cost prohibitive at higher stream orders, and the use of semi-continuous water quality sensors for collecting real-time information may suffer from erroneous readings in water with high turbidity (e.g., nitrate signals with SUNA instruments or fluorescence measures) and/or high total dissolved solids (e.g., making the use of salt tracers such as NaCl prohibitively expensive) in larger systems. Additionally, a successful time-of-travel study is valid only for a single discharge and river stage. We have developed a method to predict tracer BTCs, and thereby inform sampling frequencies at small and large stream orders, using empirical relationships developed from multiple tracer injections spanning several orders of magnitude in discharge and reach length. This method was successfully tested in 1st to 8th order systems along the Middle Rio Grande River Basin in New Mexico, USA.
NASA Technical Reports Server (NTRS)
Alexander, J. Iwan D.
1991-01-01
Work was completed on all aspects of the following tasks: order of magnitude estimates; thermo-capillary convection - two-dimensional (fixed planar surface); thermo-capillary convection - three-dimensional and axisymmetric; liquid bridge/floating zone sensitivity; transport in closed containers; interaction: design and development stages; interaction: testing flight hardware; and reporting. Results are included in the Appendices.
Reduced Order Modeling in General Relativity
NASA Astrophysics Data System (ADS)
Tiglio, Manuel
2014-03-01
Reduced Order Modeling is an emerging yet fast-developing field in gravitational wave physics. The main goals are to enable fast modeling and parameter estimation of any detected signal, along with rapid matched-filter detection. I will focus on the first two. Some accomplishments include being able to replace, with essentially no loss of physical accuracy, the original models with surrogate ones (which are not effective models, that is, they do not simplify the physics but go on a very different track, exploiting the particulars of the waveform family under consideration and state-of-the-art dimensional reduction techniques) which are very fast to evaluate. For example, for EOB models the surrogates are at least around 3 orders of magnitude faster than solving the original equations, with physically equivalent results. For numerical simulations the speedup is at least 11 orders of magnitude. For parameter estimation our current numbers are about bringing ~100 days for a single SPA inspiral binary neutron star Bayesian parameter estimation analysis to under a day. More recently, it has been shown that the full precessing problem for, say, 200 cycles, can be represented, through some new ideas, by a remarkably compact set of carefully chosen reduced basis waveforms (~10-100, depending on the accuracy requirements). I will highlight what I personally believe are the challenges to face next in this subarea of GW physics and where efforts should be directed. This talk will summarize work in collaboration with: Harbir Antil (GMU), Jonathan Blackman (Caltech), Priscila Canizares (IoA, Cambridge, UK), Sarah Caudill (UWM), Jonathan Gair (IoA, Cambridge, UK), Scott Field (UMD), Chad R. Galley (Caltech), Frank Herrmann (Germany), Han Hestahven (EPFL, Switzerland), Jason Kaye (Brown, Stanford & Courant), Evan Ochsner (UWM), Ricardo Nochetto (UMD), Vivien Raymond (LIGO, Caltech), Rory Smith (LIGO, Caltech), Bela Ssilagyi (Caltech) and MT (UMD & Caltech).
Magnitude Estimation for Large Earthquakes from Borehole Recordings
NASA Astrophysics Data System (ADS)
Eshaghi, A.; Tiampo, K. F.; Ghofrani, H.; Atkinson, G.
2012-12-01
We present a simple and fast magnitude determination technique for earthquake and tsunami early warning systems, based on strong ground motion prediction equations (GMPEs) in Japan. The method incorporates borehole strong motion records provided by the Kiban Kyoshin network (KiK-net) stations. We analyzed strong ground motion data from large magnitude earthquakes (5.0 ≤ M ≤ 8.1) with focal depths < 50 km and epicentral distances of up to 400 km from 1996 to 2010. Using both peak ground acceleration (PGA) and peak ground velocity (PGV), we derived GMPEs for Japan. These GMPEs are used as the basis for regional magnitude determination. Predicted magnitudes from PGA values (Mpga) and from PGV values (Mpgv) were defined. Mpga and Mpgv correlate strongly with the moment magnitude of the event, provided sufficient records for each event are available. The results show that Mpgv has a smaller standard deviation than Mpga when compared with the estimated magnitudes and provides a more accurate early assessment of earthquake magnitude. We applied this new method to estimate the magnitude of the 2011 Tohoku earthquake: PGA and PGV from borehole recordings allow us to estimate the magnitude of this event 156 s and 105 s after the earthquake onset, respectively. We demonstrate that the incorporation of borehole strong ground-motion records immediately available after the occurrence of large earthquakes significantly increases the accuracy of earthquake magnitude estimation and improves the performance of earthquake and tsunami early warning systems.
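A schematic sketch of the inversion step: with a GMPE of the form log10(PGV) = c0 + c1·M − c2·log10(R), each station observation yields a magnitude estimate and the estimates are averaged. The coefficients and observations below are invented placeholders, not the study's regression values:

    import numpy as np

    # Placeholder GMPE coefficients: log10(PGV) = C0 + C1*M - C2*log10(R)
    C0, C1, C2 = -2.0, 0.65, 1.3

    def m_pgv(pgv_cm_s, hypo_dist_km):
        """Invert the GMPE for magnitude, one estimate per station."""
        return (np.log10(pgv_cm_s) - C0 + C2 * np.log10(hypo_dist_km)) / C1

    # Observations (PGV in cm/s, distance in km) from a handful of stations
    pgv  = np.array([12.0, 4.1, 1.9, 0.8])
    dist = np.array([40.0, 95.0, 160.0, 290.0])

    est = m_pgv(pgv, dist)
    print(f"station estimates: {np.round(est, 2)}, Mpgv = {est.mean():.2f}")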
Isoprene emissions from oak trees in the eastern USA play an important role in tropospheric ozone pollution. Oak trees (Quercus) emit an order of magnitude more isoprene than most other emitting tree species and are by far the largest source of biogenic isoprene in the eastern US...
Copley, Vicky R; Cavill, Nick; Wolstenholme, Jane; Fordham, Richard; Rutter, Harry
2017-08-22
Adult obesity is linked to a greater need for social care because of its association with the development of long term conditions and because obese adults can have physical and social difficulties which inhibit daily living. Obesity thus has considerable social care cost implications, but the magnitude of these costs is currently unknown. This paper outlines an approach to estimating obesity-related social care costs in adults aged over 65 in England. We used univariable and multivariable logistic regression models to investigate the relation between the self-reported need for social care and potential determinants, including body mass index (BMI), using data from the Health Survey for England. We combined these modelled estimates of need for social care with the mean hours of help received, conditional on receiving any help, to calculate the expected hours of social care received per adult by BMI. BMI is positively associated with self-reported need for social care. A one unit (i.e. 1 kg/m²) increase in BMI is on average associated with a 5% increase in the odds of need for help with social care (odds ratio 1.05, 95% CI 1.04 to 1.07) in an unadjusted model. Adjusting for long term illness and sociodemographic characteristics, we estimate the annual cost of local authority funded care for those who receive it is £599 at a BMI of 23 but £1086 at a BMI of 40. BMI is positively associated with self-reported need for social care after adjustment for sociodemographic factors and limiting long term illness. The increase in need for care with BMI gives rise to additional costs in social care provision which should be borne in mind when calculating the cost-effectiveness of interventions aimed at reducing obesity.
ESTIMATING TREATMENT EFFECTS ON HEALTHCARE COSTS UNDER EXOGENEITY: IS THERE A ‘MAGIC BULLET’?
Polsky, Daniel; Manning, Willard G.
2011-01-01
Methods for estimating average treatment effects, under the assumption of no unmeasured confounders, include regression models; propensity score adjustments using stratification, weighting, or matching; and doubly robust estimators (a combination of both). Researchers continue to debate the best estimator for outcomes such as health care cost data, as they are usually characterized by an asymmetric distribution and heterogeneous treatment effects. Challenges in finding the right specifications for regression models are well documented in the literature. Propensity score estimators are proposed as alternatives for overcoming these challenges. Using simulations, we find that in moderate size samples (n = 5000), balancing on propensity scores that are estimated from saturated specifications can balance the covariate means across treatment arms but fails to balance higher-order moments and covariances amongst covariates. Therefore, even though a formal model for outcomes is not required, unlike regression models, propensity score estimators can be inefficient at best and biased at worst for health care cost data. Our simulation study, designed to take a 'proof by contradiction' approach, proves that no one estimator can be considered the best under all data generating processes for outcomes such as costs. The inverse-propensity weighted estimator is most likely to be unbiased under alternate data generating processes but is prone to bias under misspecification of the propensity score model and is inefficient compared to an unbiased regression estimator. Our results show that there are no 'magic bullets' when it comes to estimating treatment effects in health care costs. Care should be taken before naively applying any one estimator to estimate average treatment effects in these data. We illustrate the performance of alternative methods in a cost dataset on breast cancer treatment. PMID:22199462
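A compact sketch of the inverse-propensity-weighted estimator under discussion, run on simulated skewed cost data; the data-generating process is invented for illustration and is not the authors' simulation design:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 5000

    # One confounder x drives both treatment assignment and skewed (lognormal) costs
    x = rng.normal(size=n)
    t = rng.binomial(1, 1 / (1 + np.exp(-0.5 * x)))
    cost = np.exp(7.5 + 0.3 * x + 0.2 * t + rng.normal(0, 0.8, size=n))

    # Propensity scores from a logistic model, then the IPW mean-difference estimator
    p_hat = LogisticRegression().fit(x[:, None], t).predict_proba(x[:, None])[:, 1]
    ate_ipw = np.mean(t * cost / p_hat - (1 - t) * cost / (1 - p_hat))
    print(f"IPW estimate of the average treatment effect on cost: ${ate_ipw:,.0f}")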
NASA Astrophysics Data System (ADS)
Arno, Matthew Gordon
Texas is investigating building a long-term waste storage facility, also known as an Assured Isolation Facility. This is an above-ground low-level radioactive waste storage facility that is actively maintained and from which waste may be retrieved. A preliminary, scoping-level analysis has been extended to consider more complex scenarios of radiation streaming and skyshine by using the computer code Monte Carlo N-Particle (MCNP) to model the facility in greater detail. Accidental release scenarios have been studied in more depth to better assess the potential dose to off-site individuals. Using bounding source term assumptions, the projected radiation doses and dose rates are estimated to exceed applicable limits by an order of magnitude. By altering the facility design to fill in the hollow cores of the prefabricated concrete slabs used in the roof over the "high-gamma rooms," where the waste with the highest concentration of gamma-emitting radioactive material is stored, dose rates outside the facility decrease by an order of magnitude. With the modified design, the annual dose at the site fenceline is estimated at 86 mrem, below the 100 mrem annual limit for exposure of the public. Within the site perimeter, the dose rates are lowered sufficiently that it is not necessary to categorize many workers and contractor personnel as radiation workers, which saves costs and is advisable under ALARA principles. A detailed analysis of bounding accidents incorporating information on the local meteorological conditions indicates that the maximum committed effective dose equivalent from the passage of a plume of material released in an accident, at any of the cities near the facility, is 59 μrem, in the city of Eunice, NM, based on the combined day and night meteorological conditions. Using the daytime meteorological conditions, the maximum dose at any city is 7 μrem, also in the city of Eunice. The maximum dose at the site boundary was determined to be 230 mrem using the combined day and night meteorological conditions and 33 mrem using the daytime conditions.
Revised Estimates for the Number of Human and Bacteria Cells in the Body.
Sender, Ron; Fuchs, Shai; Milo, Ron
2016-08-01
Reported values in the literature on the number of cells in the body differ by orders of magnitude and are very seldom supported by any measurements or calculations. Here, we integrate the most up-to-date information on the number of human and bacterial cells in the body. We estimate the total number of bacteria in the 70 kg "reference man" to be 3.8·10^13. For human cells, we identify the dominant role of the hematopoietic lineage to the total count (≈90%) and revise past estimates to 3.0·10^13 human cells. Our analysis also updates the widely-cited 10:1 ratio, showing that the number of bacteria in the body is actually of the same order as the number of human cells, and their total mass is about 0.2 kg.
Social cost impact assessment of pipeline infrastructure projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, John C., E-mail: matthewsj@battelle.org; Allouche, Erez N., E-mail: allouche@latech.edu; Sterling, Raymond L., E-mail: sterling@latech.edu
A key advantage of trenchless construction methods compared with traditional open-cut methods is their ability to install or rehabilitate underground utility systems with limited disruption to the surrounding built and natural environments. The equivalent monetary values of these disruptions are commonly called social costs. Social costs are often ignored by engineers or project managers during project planning and design phases, partially because they cannot be calculated using standard estimating methods. In recent years some approaches for estimating social costs were presented. Nevertheless, the cost data needed for validation of these estimating methods is lacking. Development of such social cost databases can be accomplished by compiling relevant information reported in various case histories. This paper identifies the eight most important social cost categories, presents mathematical methods for calculating them, and summarizes the social cost impacts for two pipeline construction projects. The case histories are analyzed in order to identify trends for the various social cost categories. The effectiveness of the methods used to estimate these values is also discussed. These findings are valuable for pipeline infrastructure engineers making renewal technology selection decisions by providing a more accurate process for the assessment of social costs and impacts. - Highlights: • Identified the eight most important social cost factors for pipeline construction • Presented mathematical methods for calculating those social cost factors • Summarized social cost impacts for two pipeline construction projects • Analyzed those projects to identify trends for the social cost factors.
Substantial inorganic carbon sink in closed drainage basins globally
NASA Astrophysics Data System (ADS)
Li, Yu; Zhang, Chengqi; Wang, Naiang; Han, Qin; Zhang, Xinzhong; Liu, Yuan; Xu, Lingmei; Ye, Wangting
2017-07-01
Arid and semi-arid ecosystems are increasingly recognized as important carbon storage sites. In these regions, extensive sequestration of dissolved inorganic carbon can occur in the terminal lakes of endorheic basins--basins that do not drain to external bodies of water. However, the global magnitude of this dissolved inorganic carbon sink is uncertain. Here we present isotopic, radiocarbon, and chemical analyses of groundwater, river water, and sediments from the terminal region of the endorheic Shiyang River drainage basin, in arid northwest China. We estimate that 0.13 Pg of dissolved inorganic carbon was stored in the basin during the mid-Holocene. Pollen-based reconstructions of basin-scale productivity suggest that the mid-Holocene dissolved inorganic carbon sink was two orders of magnitude smaller than terrestrial productivity in the basin. We use estimates of dissolved inorganic carbon storage based on sedimentary data from 11 terminal lakes of endorheic basins around the world as the basis for a global extrapolation of the sequestration of dissolved inorganic carbon in endorheic basins. We estimate that 0.152 Pg of dissolved inorganic carbon is buried per year today, compared to about 0.211 Pg C per year during the mid-Holocene. We conclude that endorheic basins represent an important carbon sink on the global scale, with a magnitude similar to deep ocean carbon burial.
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2015-01-01
A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function fitted to the measured cross-power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data were obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude-squared coherence data. The procedure also provides an estimate of the cross-spectrum phase offset.
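A simplified sketch of the core idea: for two coherent signals, the cross-power spectrum phase is linear in frequency with slope −2πτ, so fitting that slope recovers the delay τ. The iterative cost-function refinement described in the abstract is not reproduced, and the signals are synthetic:

    import numpy as np

    fs, n, true_delay = 1000.0, 4096, 0.012          # Hz, samples, seconds
    rng = np.random.default_rng(0)

    # Two coherent signals: y is x delayed by true_delay, plus independent noise
    x = rng.normal(size=n)
    y = np.roll(x, int(round(true_delay * fs))) + 0.3 * rng.normal(size=n)

    # Cross-power spectrum and its phase; for y(t) = x(t - tau),
    # the phase of conj(X)*Y is -2*pi*f*tau
    cross = np.fft.rfft(x).conj() * np.fft.rfft(y)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    # Fit the phase slope over a band where coherence is high
    band = (freqs > 5) & (freqs < 100)
    phase = np.unwrap(np.angle(cross[band]))
    slope = np.polyfit(freqs[band], phase, 1)[0]
    print(f"estimated delay: {-slope / (2 * np.pi) * 1e3:.2f} ms (true 12.00 ms)")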
Argon nucleation in a cryogenic supersonic nozzle
NASA Astrophysics Data System (ADS)
Sinha, Somnath; Bhabhe, Ashutosh; Laksmono, Hartawan; Wölk, Judith; Strey, Reinhard; Wyslouzil, Barbara
2010-02-01
We have measured pressures p and temperatures T corresponding to the maximum nucleation rate of argon in a cryogenic supersonic nozzle apparatus where the estimated nucleation rates are J = 10^(17±1) cm^-3 s^-1. As T increases from 34 to 53 K, p increases from 0.47 to 8 kPa. Under these conditions, classical nucleation theory predicts nucleation rates 11-13 orders of magnitude lower than the observed rates, while mean field kinetic nucleation theory predicts the observed rates within 1 order of magnitude. The current data set appears consistent with the measurements of Iland et al. [J. Chem. Phys. 127, 154506 (2007)] in the cryogenic nucleation pulse chamber. Combining the two data sets suggests that classical nucleation theory fails because it overestimates both the critical cluster size and the excess internal energy of the critical clusters.
Public transit, obesity, and medical costs: assessing the magnitudes.
Edwards, Ryan D
2008-01-01
This paper assesses the potential benefits of increased walking and reduced obesity associated with taking public transit in terms of dollars of medical costs saved and disability avoided. I conduct a new analysis of a nationally representative U.S. transportation survey to gauge the net increase in walking associated with public transit usage. I translate minutes spent walking into energy expenditures and reductions in obesity prevalence, estimating the present value of costs and disability that may be avoided. Taking public transit is associated with walking 8.3 more minutes per day on average, or an additional 25.7-39.0 kcal. Hill et al. [Hill, J.O., Wyatt, H.R., Reed, G.W., Peters, J.C., 2003. Obesity and the environment: Where do we go from here? Science 299 (5608), 853-855] estimate that an increase in net expenditure of 100 kcal/day can stop the increase in obesity in 90% of the population. Additional walking associated with public transit could save $5500 per person in present value by reducing obesity-related medical costs. Savings in quality-adjusted life years could be even higher. While no silver bullet, walking associated with public transit can have a substantial impact on obesity, costs, and well-being. Further research is warranted on the net impact of transit usage on all behaviors, including caloric intake and other types of exercise, and on whether policies can promote transit usage at acceptable cost.
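A back-of-envelope sketch of the energy arithmetic in the abstract; the kcal-per-minute band is implied by the quoted numbers rather than taken directly from the paper:

    minutes_per_day = 8.3                 # extra walking with transit use
    kcal_per_min = (3.1, 4.7)             # implied band: 25.7-39.0 kcal / 8.3 min

    low, high = (minutes_per_day * k for k in kcal_per_min)
    print(f"extra expenditure: {low:.1f}-{high:.1f} kcal/day")

    # Hill et al. (2003): +100 kcal/day halts obesity growth for ~90% of people;
    # transit walking alone supplies roughly a quarter to two-fifths of that
    print(f"fraction of 100 kcal target: {low/100:.0%}-{high/100:.0%}")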
Shifts in deep-sea community structure linked to climate and food supply.
Ruhl, Henry A; Smith, Kenneth L
2004-07-23
A major change in the community structure of the dominant epibenthic megafauna was observed at 4100 meters depth in the northeast Pacific, synchronous with a major El Niño/La Niña event that occurred between 1997 and 1999. Photographic abundance estimates of epibenthic megafauna from 1989 to 2002 show that two taxa decreased in abundance after 1998 by 2 to 3 orders of magnitude, whereas several other species increased in abundance by 1 to 2 orders of magnitude. These faunal changes are correlated with climate fluctuations dominated by El Niño/La Niña. Megafauna even in remote marine areas appear to be affected by contemporary climatic fluctuations. Such faunal changes highlight the importance of an adequate temporal perspective in describing biodiversity, ecology, and anthropogenic impacts in deep-sea communities.
Golestaneh, S Alireza; Karam, Lina
2016-08-24
Perceptual image quality assessment (IQA) attempts to use computational models to estimate image quality in accordance with subjective evaluations. Reduced-reference (RR) IQA methods make use of partial information or features extracted from the reference image for estimating the quality of distorted images. Finding a balance between the number of RR features and the accuracy of the estimated image quality is essential and important in IQA. In this paper we propose a training-free, low-cost RR IQA method that requires a very small number of RR features (6 RR features). The proposed RR IQA algorithm is based on the discrete wavelet transform (DWT) of locally weighted gradient magnitudes. We apply the human visual system's contrast sensitivity and neighborhood gradient information to weight the gradient magnitudes in a locally adaptive manner. The RR features are computed by measuring the entropy of each DWT subband, for each scale, and pooling the subband entropies along all orientations, resulting in L RR features (one average entropy per scale) for an L-level DWT. Extensive experiments performed on seven large-scale benchmark databases demonstrate that the proposed RR IQA method delivers highly competitive performance as compared to state-of-the-art RR IQA models as well as full-reference ones, for both natural and texture images. The MATLAB source code of REDLOG and the evaluation results are publicly available online at http://lab.engineering.asu.edu/ivulab/software/redlog/.
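A rough sketch (in Python rather than the authors' released MATLAB) of the feature-extraction pipeline described above: gradient magnitudes, an L-level DWT, and one pooled subband entropy per scale. The contrast-sensitivity weighting is omitted for brevity, and the wavelet and bin count are arbitrary assumptions:

    import numpy as np
    import pywt  # PyWavelets

    def rr_features(image, levels=3, wavelet='db2', bins=64):
        """One pooled subband-entropy feature per DWT scale of the gradient magnitude."""
        gy, gx = np.gradient(image.astype(float))
        grad_mag = np.hypot(gx, gy)                 # local gradient magnitude
        coeffs = pywt.wavedec2(grad_mag, wavelet, level=levels)
        features = []
        for detail in coeffs[1:]:                   # (cH, cV, cD), coarse to fine
            entropies = []
            for band in detail:
                hist, _ = np.histogram(band, bins=bins)
                p = hist / hist.sum()
                p = p[p > 0]
                entropies.append(-(p * np.log2(p)).sum())
            features.append(np.mean(entropies))     # pool across orientations
        return np.array(features)                   # L features for an L-level DWT

    # Example on a random "image"
    img = np.random.default_rng(0).random((128, 128))
    print(rr_features(img))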
Defining Tsunami Magnitude as Measure of Potential Impact
NASA Astrophysics Data System (ADS)
Titov, V. V.; Tang, L.
2016-12-01
Tsunami forecasting, as a system for predicting the potential impact of a tsunami at coastlines, requires a quick estimate of tsunami magnitude. This goal has been recognized since the beginning of tsunami research. The work of Kajiura, Soloviev, Abe, Murty, and many others discussed several scales for tsunami magnitude based on estimates of tsunami energy. However, estimating tsunami energy from available measurements at coastal sea-level stations has carried significant uncertainties and has been virtually impossible in real time, before a tsunami impacts coastlines. The slow process of estimating tsunami magnitude, including collection of vast amounts of available coastal sea-level data from affected coastlines, made it impractical to use any tsunami magnitude scale in tsunami warning operations. The uncertainties of the estimates also made tsunami magnitudes difficult to use as a universal scale for tsunami analysis. Historically, the earthquake magnitude has been used as a proxy for tsunami impact estimates, since seismic data is available for real-time processing and ample seismic data is available for elaborate post-event analysis. This measure of tsunami impact carries significant uncertainties in quantitative tsunami impact estimates, since the relation between the earthquake and the generated tsunami energy varies from case to case. In this work, we argue that current tsunami measurement capabilities and real-time modeling tools allow for establishing a robust tsunami magnitude that will be useful for tsunami warning, as a quick estimate of tsunami impact, and for post-event analysis, as a universal scale for inter-comparing tsunamis. We present a method for estimating the tsunami magnitude based on tsunami energy and present an application of the magnitude analysis for several historical events, for inter-comparison with existing methods.
Cost of fetal alcohol spectrum disorder diagnosis in Canada.
Popova, Svetlana; Lange, Shannon; Burd, Larry; Chudley, Albert E; Clarren, Sterling K; Rehm, Jürgen
2013-01-01
Fetal Alcohol Spectrum Disorder (FASD) is underdiagnosed in Canada. The diagnosis of FASD is not simple, and currently the recommendation is that a comprehensive, multidisciplinary assessment of the individual be done. The purpose of this study was to estimate the annual cost of FASD diagnosis to Canadian society. The diagnostic process breakdown was based on recommendations from the Fetal Alcohol Spectrum Disorder Canadian Guidelines for Diagnosis. The per person cost of diagnosis was calculated based on the number of hours (estimated based on expert opinion) required by each specialist involved in the diagnostic process. The average rate per hour for each respective specialist was estimated based on hourly costs across Canada. Based on the existing clinical capacity of all FASD multidisciplinary clinics in Canada, obtained from the 2005 and 2011 surveys conducted by the Canada Northwest FASD Research Network, the number of FASD cases diagnosed per year in Canada was estimated. The per person cost of FASD diagnosis was then applied to the number of cases diagnosed per year in Canada in order to calculate the overall annual cost. Using the most conservative approach, it was estimated that an FASD evaluation requires 32 to 47 hours for one individual to be screened, referred, admitted, and given an FASD diagnosis, which results in a total cost of $3,110 to $4,570 per person. The total cost of FASD diagnostic services in Canada ranges from $3.6 to $5.2 million (lower estimate), up to $5.0 to $7.3 million (upper estimate) per year. As a result of using the most conservative approach, the cost of FASD diagnostic services presented in the current study is most likely underestimated. The reasons for this likelihood and the limitations of the study are discussed.
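The per-person figures follow from hours multiplied by hourly rates, and the national totals from per-person cost multiplied by annual caseload; a small sketch using the abstract's bounds, in which the blended rate and implied caseload are back-computed rather than reported:

    hours = (32, 47)                 # per-person diagnostic effort (low, high)
    cost_pp = (3110, 4570)           # CAD per person (low, high)

    for h, c in zip(hours, cost_pp):
        print(f"{h} h -> ${c:,} per diagnosis (blended rate ~${c / h:.0f}/h)")

    # National totals divided by per-person costs imply the annual caseload
    for label, (total_lo, total_hi) in [("lower", (3.6e6, 5.2e6)),
                                        ("upper", (5.0e6, 7.3e6))]:
        cases = (total_lo / cost_pp[0] + total_hi / cost_pp[1]) / 2
        print(f"{label} estimate: ~{cases:,.0f} diagnoses/yr implied")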
Salkever, David
2013-02-01
A recent policy analysis argued that expanding access to evidence-based supported employment can provide savings in major components of social costs. This article extends the scope of this policy analysis by placing the argument within a recently developed economic framework for social cost-effectiveness analysis that defines a program's social cost impact as its effect on net consumption of all goods and services. A total of 27 studies over the past two decades are reviewed to synthesize evidence of the social cost impacts of expanding access to the individual placement and support model of supported employment (IPS-SE). Most studies have focused primarily on agency costs of providing IPS-SE services, cost offsets when clients shift from "traditional" rehabilitation to IPS-SE, and impacts on clients' earnings. Because costs and cost offsets are similar in magnitude, incremental costs of expanding services to persons who would otherwise receive traditional services are probably small or even negative. The population served by an expansion could be sizable, but the feasibility of a policy targeting IPS-SE expansion in this way has yet to be demonstrated. IPS-SE has positive impacts on competitive job earnings, but these may not fully translate into social cost offsets. Additional empirical support is needed for the argument that large-scale expansion would yield substantial mental health treatment cost offsets. Other gaps in evidence of policy impacts include take-up rate estimates, cost impact estimates from longer-term studies (exceeding two years), and longer-term studies of whether IPS-SE prevents younger clients from becoming recipients of Supplemental Security Income or Social Security Disability Insurance.
Cost-Consciousness of Anesthesia Physicians: An awareness survey.
Hakimoglu, Sedat; Hancı, Volkan; Karcıoglu, Murat; Tuzcu, Kasım; Davarcı, Isıl; Kiraz, Hasan Ali; Turhanoglu, Selim
2015-01-01
Increasing competitive pressure and the health performance system in hospitals result in pressure to reduce the resources allocated. The aim of this study was to evaluate anesthesiology and intensive care physicians' awareness of the cost of the materials they use and to determine the factors that influence it. This survey was conducted between September 2012 and September 2013 after approval by the local ethics committee. Overall, 149 anesthetists were included in the study. Participants were asked to estimate the cost of 30 products used by anesthesiology and intensive care units. One hundred forty-nine doctors, 45% female and 55% male, participated in this study. Across the 30 questions, the average rates were 5.8% accurate estimation, 35.13% underestimation, and 59.16% overestimation. When the participants were divided into groups by institution, duration of work in the profession, and sex, there were no statistically significant differences regarding accurate estimation. However, there was a statistically significant difference in underestimation. In underestimation, there was no significant difference between the 16-20 year group and the >20 year group, but these two groups showed more overestimation of prices than the other groups (p=0.031). Furthermore, when all the participants were evaluated, there was no significant relationship between age and accurate cost estimation or between time in the profession and accurate cost estimation. The anesthesiology and intensive care physicians in this survey have an insufficient awareness of the cost of the drugs and materials that they use. Institution and experience are not effective factors for accurate estimation. Programs to improve health workers' knowledge and awareness of costs should be planned in order to use resources more efficiently and cost-effectively.
Etcheverrigaray, F; Bulteau, S; Machon, L O; Riche, V P; Mauduit, N; Tricot, R; Sellal, O; Sauvaget, A
2015-08-01
Repetitive transcranial magnetic stimulation (rTMS) is an effective and well-tolerated treatment in resistant depression of mild to moderate intensity. This indication has not yet been approved in France, and the cost and medico-economic value of rTMS in psychiatry remain unknown. The aim of this preliminary study was to assess the production cost of rTMS as an in-hospital treatment for depression. The methodology, derived from analytical accounting, was validated by a multidisciplinary task force (clinicians, public health doctors, pharmacists, administrative officials and a health economist). It was pragmatic, based on official and institutional documentary sources and on field practice. It included equipment, staff, and structure costs, to obtain an estimate as close to reality as possible. First, we estimated the production cost of an rTMS session, based on our annual activity. We then estimated the cost of a cure, which includes 15 sessions. A sensitivity analysis was also performed. The hospital production cost of a cure for treating depression was estimated at € 1932.94 (€ 503.55 for equipment, € 1082.75 for staff, and € 346.65 for structural expenses). This cost estimate results from an innovative, pragmatic, and cooperative approach. It is slightly higher but more comprehensive than the costs estimated by the few international studies. However, it is limited by structure-specific characteristics and activity levels. This work could be repeated in other settings in order to obtain a more general estimate, potentially helpful for determining an official price within the French health care system. Moreover, budgetary constraints and public health choices should be taken into consideration.
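The reported cure cost decomposes additively, so the figures can be checked in a couple of lines; the per-session division is shown only for illustration:

    equipment, staff, structure = 503.55, 1082.75, 346.65  # EUR per 15-session cure
    total = equipment + staff + structure
    print(f"cost per cure:    EUR {total:.2f}")   # 1932.95; abstract reports 1932.94
    print(f"cost per session: EUR {total / 15:.2f}")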
The evaluation of cost-of-illness due to the use of chemicals.
Hong, Jiyeon; Lee, Yongjin; Lee, Geonwoo; Lee, Hanseul; Yang, Jiyeon
2015-01-01
This study was conducted to estimate the cost paid by the public suffering from diseases possibly caused by chemicals and to examine the effect on public health. Cost-benefit analysis is an important factor in analysis and decision-making and is an important policy decision tool in many countries. Cost-of-illness (COI), a kind of scale-based analysis method, estimates the potential value lost as a result of illness as a monetary unit and calculates the cost in terms of direct, indirect and psychological costs. This study estimates direct medical costs, transportation fees for hospitalization and outpatient treatment, and nursing fees for a number of patients suffering from diseases possibly caused by chemicals in order to analyze COI, taking the cost of productivity loss into account as an indirect cost. The total yearly cost of the diseases studied in 2012 is calculated as 77 million Korean won (KRW) per person, with direct and indirect costs of 52 million KRW and 23 million KRW, respectively. Within the total cost of illness, costs for mental and behavioral disorders amounted to 16 million KRW, costs related to blood and immunological parameters were 7.4 million KRW, and costs for diseases of the nervous system were 6.7 million KRW. This study reports on a survey conducted by experts regarding diseases possibly caused by chemicals and estimates the cost to the general public. The results can be used as a basic report for a socio-economic evaluation of the permitted use of chemicals and limits of usage.
Local magnitude calibration of the Hellenic Unified Seismic Network
NASA Astrophysics Data System (ADS)
Scordilis, E. M.; Kementzetzidou, D.; Papazachos, B. C.
2016-01-01
A new relation is proposed for the accurate determination of local magnitudes in Greece. This relation is based on a large number of synthetic Wood-Anderson (SWA) seismograms corresponding to 782 regional shallow earthquakes which occurred during the period 2007-2013 and were recorded by 98 digital broad-band stations. These stations are installed and operated by the following: (a) the National Observatory of Athens (HL), (b) the Department of Geophysics of the Aristotle University of Thessaloniki (HT), (c) the Seismological Laboratory of the University of Athens (HA), and (d) the Seismological Laboratory of the University of Patras (HP). The seismological networks of the above institutions constitute the recently (2004) established Hellenic Unified Seismic Network (HUSN). These records are used to calculate a refined geometrical spreading factor and an anelastic attenuation coefficient, representative of Greece and the surrounding areas, suitable for accurate calculation of local magnitudes in this region. Individual station corrections, which depend on crustal structure variations in the vicinity of each station and on possible inconsistencies in instrument responses, are also considered in order to further improve the accuracy of magnitude estimation. Comparison of the local magnitudes calculated in this way with the corresponding original moment magnitudes, based on an independent dataset, revealed that these magnitude scales are equivalent over a wide range of values.
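A generic sketch of the local-magnitude form being calibrated: ML = log10(A) plus geometrical-spreading and anelastic-attenuation terms plus a station correction. The coefficients below are the widely used Hutton-Boore (1987) California values, standing in purely as placeholders for the Greek calibration derived in the paper:

    import math

    def local_magnitude(amp_mm, hypo_dist_km, station_corr=0.0):
        """ML from a synthetic Wood-Anderson amplitude (mm) at distance r (km).
        Hutton-Boore (1987) California coefficients as placeholders."""
        r = hypo_dist_km
        return (math.log10(amp_mm)
                + 1.110 * math.log10(r / 100.0)   # geometrical spreading
                + 0.00189 * (r - 100.0)           # anelastic attenuation
                + 3.0                             # anchor: 1 mm at 100 km -> ML 3
                + station_corr)

    # One station: 10 mm SWA amplitude at 60 km, with a -0.1 station correction
    print(f"ML = {local_magnitude(10.0, 60.0, -0.1):.2f}")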
NASA Astrophysics Data System (ADS)
Garrote, J.; Alvarenga, F. M.; Díez-Herrero, A.
2016-10-01
The village of Pajares de Pedraza (Segovia, Spain) is located in the floodplain of the Cega River, a left bank tributary of the Douro River. Repeated flash flood events occur in this small village because its upstream catchment is mountainous and underlain by impermeable lithology, which reduces the concentration time to just a few hours. River overbank flow has frequently caused flooding and property damage to homes and rural properties, most notably in 1927, 1991, 1996, 2001, 2013 and 2014. Consequently, a detailed analysis was carried out to quantify the economic risk of flash floods in peri-urban and rural areas. Magnitudes and exceedance probabilities were obtained from a flood frequency analysis of maximum discharges. To determine the extent and characteristics of the flooded area, we performed 2D hydraulic modeling (Iber 2.0 software) based on LIDAR (1 m) topography and considering three different scenarios associated with the initial construction (1997) and subsequent extension (2013) of a linear defense structure (rockfill dike or levee) built to protect the population. Specific stage-damage functions were expressly developed using in situ data collection for exposed elements, with special emphasis on urban-type categories. The average number of elements and their unit value were established. The relationship between water depth and the height at which electric outlets, furniture, household goods, etc. were located was analyzed because of its effect on the form of the function. Other, nonspecific magnitude-damage functions were used in order to compare the two economic estimates. The results indicate that the use of non-specific magnitude-damage functions leads to a significant overestimation of economic losses, partly linked to the use of general economic cost data. Furthermore, a detailed classification and financial assessment of exposed assets is the most important step to ensure a correct estimate of financial losses. In both cases, this should include a consideration of the socio-economic and cultural conditions prevailing in the area, as well as the types of flood that affect it.
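A minimal sketch of how a stage-damage function is applied in practice: a piecewise-linear curve maps modeled water depth to a damage fraction that scales each asset's exposed value. The depths, fractions, and asset values below are invented, not the curves fitted in the study:

    import numpy as np

    # Illustrative stage-damage curve for one urban asset class:
    # water depth (m) -> fraction of exposed value damaged
    depths = np.array([0.0, 0.3, 0.6, 1.0, 2.0, 3.0])
    damage_frac = np.array([0.00, 0.10, 0.30, 0.55, 0.80, 0.95])

    def damage(depth_m, exposed_value_eur):
        """Piecewise-linear interpolation of the curve, clipped at its ends."""
        return np.interp(depth_m, depths, damage_frac) * exposed_value_eur

    # Modeled depths at three exposed buildings (from a 2D hydraulic run)
    for d, value in [(0.25, 80_000), (0.9, 120_000), (1.7, 60_000)]:
        print(f"depth {d:.2f} m -> EUR {damage(d, value):,.0f} of EUR {value:,}")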
Rapid Source Characterization of the 2011 Mw 9.0 off the Pacific coast of Tohoku Earthquake
Hayes, Gavin P.
2011-01-01
On March 11th, 2011, a moment magnitude 9.0 earthquake struck off the coast of northeast Honshu, Japan, generating what may well turn out to be the most costly natural disaster ever. In the hours following the event, the U.S. Geological Survey National Earthquake Information Center led a rapid response to characterize the earthquake in terms of its location, size, faulting source, shaking and slip distributions, and population exposure, in order to place the disaster in a framework necessary for timely humanitarian response. As part of this effort, fast finite-fault inversions using globally distributed body- and surface-wave data were used to estimate the slip distribution of the earthquake rupture. Models generated within 7 hours of the earthquake origin time indicated that the event ruptured a fault up to 300 km long, roughly centered on the earthquake hypocenter, and involved peak slips of 20 m or more. Updates since this preliminary solution improve the details of this inversion solution and thus our understanding of the rupture process. However, significant observations such as the up-dip nature of rupture propagation and the along-strike length of faulting did not significantly change, demonstrating the usefulness of rapid source characterization for understanding the first order characteristics of major earthquakes.
Sari, Nazmi; Rotter, Thomas; Goodridge, Donna; Harrison, Liz; Kinsman, Leigh
2017-08-03
The costs of investing in health care reform initiatives to improve quality and safety have been underreported and are often underestimated. This paper reports direct and indirect cost estimates for the initial phase of the province-wide implementation of Lean activities in Saskatchewan, Canada. In order to obtain detailed information about each type of Lean event, as well as the total number of corresponding Lean events, we used the Provincial Kaizen Promotion Office (PKPO) Kaizen database. While the indirect cost of Lean implementation has been estimated using the corresponding wage rate for the event participants, the direct cost has been estimated using the fees paid to the consultant and other relevant expenses. The total cost for implementation of Lean over two years (2012-2014), including consultants and new hires, ranged from $44 million CAD to $49.6 million CAD, depending upon the assumptions used. Consultant costs accounted for close to 50% of the total. The estimated cost of Lean events alone ranged from $16 million CAD to $19.5 million CAD, with Rapid Process Improvement Workshops requiring the highest input of resources. Recognizing the substantial financial and human investments required to undertake reforms designed to improve quality and contain cost, policy makers must carefully consider whether and how these efforts result in the desired transformations. Evaluation of the outcomes of these investments must be part of the accountability framework, even prior to implementation.
Are cooler surfaces a cost-effect mitigation of urban heat islands?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pomerantz, Melvin
2017-04-20
Much research has gone into technologies to mitigate urban heat islands by making urban surfaces cooler by increasing their albedos. To be practical, the benefit of the technology must be greater than its cost. This report provides simple methods for quantifying the maxima of some benefits that albedo increases may provide. The method used is an extension of an earlier paper that estimated the maximum possible electrical energy saving achievable in an entire city in a year by a change of albedo of its surfaces. The present report estimates the maximum amounts and monetary savings of avoided CO2 emissions and the decreases in peak power demands. As examples, for several warm cities in California, a 0.2 increase in albedo of pavements is found to reduce CO2 emissions by < 1 kg per m² per year. At the current price of CO2 reduction in California, the monetary saving is < US$0.01 per year per m² modified. The resulting maximum peak-power reductions are estimated to be < 7% of the base power of the city. In conclusion, the magnitudes of the savings are such that decision-makers should choose carefully which urban heat island mitigation techniques are cost effective.
A cost-effectiveness evaluation of hospital discharge counseling by pharmacists.
Chinthammit, Chanadda; Armstrong, Edward P; Warholak, Terri L
2012-04-01
This study estimated the cost-effectiveness of pharmacist discharge counseling on medication-related morbidity in both the high-risk elderly and the general US population. A cost-effectiveness decision analytic model was developed using a health care system perspective based on published clinical trials. Costs included direct medical costs, and the effectiveness unit was patients discharged without suffering a subsequent adverse drug event. A systematic review of published studies was conducted to estimate variable probabilities in the cost-effectiveness model. To test the robustness of the results, a second-order probabilistic sensitivity analysis (Monte Carlo simulation) was used to run 10,000 cases through the model, sampling across all distributions simultaneously. Pharmacist counseling at hospital discharge provided a small, but statistically significant, clinical improvement at a similar overall cost. Pharmacist counseling was cost saving in approximately 48% of scenarios; in the remaining scenarios, the willingness-to-pay threshold required for counseling to be cost-effective was low. In addition, discharge counseling was more cost-effective in the high-risk elderly population than in the general population. This cost-effectiveness analysis suggests that discharge counseling by pharmacists is quite cost-effective and estimated to be cost saving in over 48% of cases. High-risk elderly patients appear to especially benefit from these pharmacist services.
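A minimal sketch of the second-order Monte Carlo procedure described above, assuming hypothetical input distributions; the study's actual probabilities and costs are not given here, so every parameter below is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # second-order Monte Carlo draws, as in the study

# Hypothetical input distributions -- placeholders, not the study's values.
p_ade_usual = rng.beta(30, 170, N)                 # P(adverse drug event), usual care
rr_counsel  = rng.lognormal(np.log(0.8), 0.1, N)   # relative risk with counseling
p_ade_couns = np.clip(p_ade_usual * rr_counsel, 0.0, 1.0)

cost_counsel = rng.gamma(shape=25, scale=7.0, size=N)    # counseling cost per patient, $
cost_ade     = rng.gamma(shape=4, scale=1500.0, size=N)  # cost of one ADE, $

ades_avoided = p_ade_usual - p_ade_couns                 # per patient
d_cost = cost_counsel - ades_avoided * cost_ade          # incremental cost per patient

print(f"cost saving in {np.mean(d_cost < 0):.0%} of simulations")
print(f"mean cost per ADE avoided: ${np.mean(d_cost) / np.mean(ades_avoided):,.0f}")
```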
Parsa, Behnoosh; Terekhov, Alexander; Zatsiorsky, Vladimir M; Latash, Mark L
2017-02-01
We address the nature of unintentional changes in performance in two papers. This first paper tested a hypothesis that unintentional changes in performance variables during continuous tasks without visual feedback are due to two processes. First, there is a drift of the referent coordinate for the salient performance variable toward the actual coordinate of the effector. Second, there is a drift toward a minimum of a cost function. We tested this hypothesis in four-finger isometric pressing tasks that required the accurate production of a combination of total moment and total force with natural and modified finger involvement. Subjects performed accurate force-moment production tasks under visual feedback, and then visual feedback was removed for some or all of the salient variables. Analytical inverse optimization was used to compute a cost function. Without visual feedback, both force and moment drifted slowly toward lower absolute magnitudes. Over 15 s, the force drop could reach 20% of its initial magnitude while the moment drop could reach 30% of its initial magnitude. Individual finger forces could show drifts toward both higher and lower forces. The cost function estimated using the analytical inverse optimization reduced its value as a consequence of the drift. We interpret the results within the framework of hierarchical control with referent spatial coordinates for salient variables at each level of the hierarchy combined with synergic control of salient variables. The force drift is discussed as a natural relaxation process toward states with lower potential energy in the physical (physiological) system involved in the task.
Manns, Braden; McKenzie, Susan Q; Au, Flora; Gignac, Pamela M; Geller, Lawrence Ian
2017-01-01
Many working-age individuals with advanced chronic kidney disease (CKD) are unable to work, or are only able to work at a reduced capacity and/or with a reduction in time at work, and receive disability payments, either from the Canadian government or from private insurers, but the magnitude of those payments is unknown. The objective of this study was to estimate Canada Pension Plan Disability Benefit and private disability insurance benefits paid to Canadians with advanced kidney failure, and how feasible improvements in prevention, identification, and early treatment of CKD and increased use of kidney transplantation might mitigate those costs. This study used an analytical model combining Canadian data from various sources. This study included all patients with advanced CKD in Canada, including those with estimated glomerular filtration rate (eGFR) <30 mL/min/m² and those on dialysis. We combined disability estimates from a provincial kidney care program with the prevalence of advanced CKD and estimated disability payments from the Canada Pension Plan and private insurance plans to estimate overall disability benefit payments for Canadians with advanced CKD. We estimate that Canadians with advanced kidney failure are receiving disability benefit payments of at least Can$217 million annually. These estimates are sensitive to the proportion of individuals with advanced kidney disease who are unable to work, and plausible variation in this estimate could mean patients with advanced kidney disease are receiving up to Can$260 million per year. Feasible strategies to reduce the proportion of individuals with advanced kidney disease, either through prevention, delay or reduction in severity, or increasing the rate of transplantation, could result in reductions in the cost of Canada Pension Plan and private disability insurance payments by Can$13.8 million per year within 5 years. This study does not estimate how CKD prevention or increasing the rate of kidney transplantation might influence health care cost savings more broadly, and does not include the cost to provincial governments for programs that provide income for individuals without private insurance and who do not qualify for Canada Pension Plan disability payments. Private disability insurance providers and federal government programs incur high costs related to individuals with advanced kidney failure, highlighting the significance of kidney disease not only to patients, and their families, but also to these other important stakeholders. Improvements in care of individuals with kidney disease could reduce these costs.
An ultrasensitive and low-cost graphene sensor based on layer-by-layer nano self-assembly
NASA Astrophysics Data System (ADS)
Zhang, Bo; Cui, Tianhong
2011-02-01
The flexible cancer sensor based on layer-by-layer self-assembled graphene reported in this letter demonstrates ultrahigh sensitivity and low cost, owing to the intrinsic material properties of graphene, the self-assembly technique, and the polyethylene terephthalate substrate. Based on the conductance change of the self-assembled graphene, the label-free and labeled graphene sensors are capable of detecting very low concentrations of prostate specific antigen, down to 4 fg/ml (0.11 fM) and 0.4 pg/ml (11 fM), respectively, which are three orders of magnitude lower than carbon nanotube sensors under the same conditions of design, manufacture, and measurement.
Impact-Actuated Digging Tool for Lunar Excavation
NASA Technical Reports Server (NTRS)
Wilson, Jak; Chu, Philip; Craft, Jack; Zacny, Kris; Santoro, Chris
2013-01-01
NASA's plans for a lunar outpost require extensive excavation. The Lunar Surface Systems Project Office projects that thousands of tons of lunar soil will need to be moved. Conventional excavators dig through soil by brute force, and depend upon their substantial weight to react to the forces generated. This approach will not be feasible on the Moon for two reasons: (1) gravity is 1/6th that on Earth, which means that a kilogram on the Moon will supply 1/6 the down force that it does on Earth, and (2) transportation costs (at the time of this reporting) of $50K to $100K per kg make massive excavators economically unattractive. A percussive excavation system was developed for use in vacuum or near-vacuum environments. It reduces the down force needed for excavation by an order of magnitude by using percussion to assist in soil penetration and digging. The novelty of this excavator is that it incorporates a percussive mechanism suited to sustained operation in a vacuum environment. A percussive digger breadboard was designed, built, and successfully tested under both ambient and vacuum conditions. The breadboard was run in vacuum to more than twice the lifetime of the Apollo Lunar Surface Drill, throughout which the mechanism performed and held up well. The percussive digger was demonstrated to reduce the force necessary for digging in lunar soil simulant by an order of magnitude, providing reductions as high as 45:1. This is an enabling technology for lunar site preparation and ISRU (In Situ Resource Utilization) mining activities. At transportation costs of $50K to $100K per kg, reducing digging forces by an order of magnitude translates into billions of dollars saved by not launching heavier systems to accomplish excavation tasks necessary to the establishment of a lunar outpost. Applications on the lunar surface include excavation for habitats, construction of roads, landing pads, berms, foundations, habitat shielding, and ISRU.
43 CFR 11.84 - Damage determination phase-implementation guidance.
Code of Federal Regulations, 2012 CFR
2012-10-01
... expected present value of the costs of restoration, rehabilitation, replacement, and/or acquisition of... be estimated in the form of an expected present value dollar amount. In order to perform this... estimate is the expected present value of uses obtained through restoration, rehabilitation, replacement...
43 CFR 11.84 - Damage determination phase-implementation guidance.
Code of Federal Regulations, 2013 CFR
2013-10-01
... expected present value of the costs of restoration, rehabilitation, replacement, and/or acquisition of... be estimated in the form of an expected present value dollar amount. In order to perform this... estimate is the expected present value of uses obtained through restoration, rehabilitation, replacement...
43 CFR 11.84 - Damage determination phase-implementation guidance.
Code of Federal Regulations, 2014 CFR
2014-10-01
... expected present value of the costs of restoration, rehabilitation, replacement, and/or acquisition of... be estimated in the form of an expected present value dollar amount. In order to perform this... estimate is the expected present value of uses obtained through restoration, rehabilitation, replacement...
On-line estimation of error covariance parameters for atmospheric data assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick P.
1995-01-01
A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
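A toy version of the single-sample maximum-likelihood idea, assuming a known correlation shape and two tunable variance parameters; all names, values, and the correlation model are illustrative, not the scheme's operational configuration.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, v, C):
    """-log p(v | theta) for one batch of innovations v ~ N(0, S),
    with S = sb2*C + so2*I (background + observation error)."""
    sb2, so2 = np.exp(theta)          # log-parameterization keeps variances > 0
    S = sb2 * C + so2 * np.eye(len(v))
    _, logdet = np.linalg.slogdet(S)
    return 0.5 * (logdet + v @ np.linalg.solve(S, v))

# Toy setup: an assumed correlation shape C and one simulated batch of
# innovations; 200 observations vs. 2 parameters, so a single sample suffices.
n = 200
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)
rng = np.random.default_rng(1)
L = np.linalg.cholesky(2.0 * C + 0.5 * np.eye(n))
v = L @ rng.standard_normal(n)        # truth: sigma_b^2 = 2.0, sigma_o^2 = 0.5

res = minimize(neg_log_likelihood, x0=np.zeros(2), args=(v, C))
print("estimated (sigma_b^2, sigma_o^2):", np.round(np.exp(res.x), 2))
```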
78 FR 50143 - Proposed Collection; Comment Request for Announcement 2004-38
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-16
... made in order for certain employers to take advantage of the alternative deficit reduction contribution... this time. Type of Review: Extension of a currently approved collection. Affected Public: Business or... forms of information technology; and (e) estimates of capital or start-up costs and costs of operation...
Demystifying the Cost Estimation Process
ERIC Educational Resources Information Center
Obi, Samuel C.
2010-01-01
In manufacturing today, nothing is more important than giving a customer a clear and straight-forward accounting of what their money has purchased. Many potentially promising return business orders are lost because of unclear, ambiguous, or improper billing. One of the best ways of resolving cost bargaining conflicts is by providing a…
NASA Astrophysics Data System (ADS)
Juszczyk, Michał; Leśniak, Agnieszka; Zima, Krzysztof
2013-06-01
Conceptual cost estimation is important for construction projects. Either underestimation or overestimation of the cost of raising a building may lead to failure of a project. In this paper, the authors present an application of multicriteria comparative analysis (MCA) to select the factors influencing the cost of raising a residential building. The aim of the analysis is to indicate key factors useful in conceptual cost estimation in the early design stage. The key factors are investigated on the basis of elementary information about the function, form and structure of the building, and the primary assumptions about the technological and organizational solutions applied in the construction process. These factors are treated as variables of a model whose aim is to make fast conceptual cost estimation possible with satisfactory accuracy. The analysis comprised three steps: preliminary research, the choice of a set of potential variables, and the reduction of this set to the final set of variables. Multicriteria comparative analysis is applied to solve the problem. The analysis made it possible to select a group of factors, defined well enough at the conceptual stage of the design process, to be used as the describing variables of the model.
Redondo, Jonatan Pajares; González, Lisardo Prieto; Guzman, Javier García; Boada, Beatriz L; Díaz, Vicente
2018-02-06
Modern vehicles incorporate control systems in order to improve their stability and handling. These control systems need to know the vehicle dynamics through variables (lateral acceleration, roll rate, roll angle, sideslip angle, etc.) that are obtained or estimated from sensors. To this end, vehicles must be fitted not only with low-cost sensors but also with low-cost embedded systems that can acquire sensor data and execute the estimation and control algorithms at sufficient computing speed. All these devices have to be integrated in an adequate architecture with enough performance in terms of accuracy, reliability and processing time. In this article, an architecture to carry out the estimation and control of vehicle dynamics has been developed. The architecture was designed considering the basic principles of IoT and integrates low-cost sensors and embedded hardware for orchestrating the experiments. A comparison of two different low-cost systems in terms of accuracy, acquisition time and reliability has been carried out. Both devices were compared with the VBOX device from Racelogic, which was used as the ground truth. The comparison was based on tests carried out in a real vehicle. The lateral acceleration and roll rate were analyzed in order to quantify the error of these devices.
dc-plasma-sprayed electronic-tube device
Meek, T.T.
1982-01-29
An electronic tube and associated circuitry produced by dc plasma arc spraying techniques are described. Fabrication is carried out in a single-step automated process whereby both active and passive devices are produced at very low cost. The circuitry is extremely reliable and is capable of functioning in both high-radiation and high-temperature environments. The electronic tubes produced are more than an order of magnitude smaller than conventional electronic tubes.
Stability of individual loudness functions obtained by magnitude estimation and production
NASA Technical Reports Server (NTRS)
Hellman, R. P.
1981-01-01
A correlational analysis of individual magnitude estimation and production exponents at the same frequency is performed, as is an analysis of individual exponents produced in different sessions by the same procedure across frequency (250, 1000, and 3000 Hz). Taken as a whole, the results show that individual exponent differences do not decrease by counterbalancing magnitude estimation with magnitude production and that individual exponent differences remain stable over time despite changes in stimulus frequency. Further results show that although individual magnitude estimation and production exponents do not necessarily obey the .6 power law, it is possible to predict the slope of an equal-sensation function averaged for a group of listeners from individual magnitude estimation and production data. On the assumption that individual listeners with sensorineural hearing also produce stable and reliable magnitude functions, it is also shown that the slope of the loudness-recruitment function measured by magnitude estimation and production can be predicted for individuals with bilateral losses of long duration. Results obtained in normal and pathological ears thus suggest that individual listeners can produce loudness judgements that reveal, although indirectly, the input-output characteristic of the auditory system.
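A small sketch of how an individual loudness exponent can be recovered from magnitude-estimation data: under Stevens' power law, the log of the numeric judgment is linear in sound pressure level divided by 20, with slope equal to the exponent. The data points below are invented for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical magnitude-estimation data: SPL (dB) vs. one listener's
# numeric loudness judgments (assumed values, not the paper's data).
spl_db = np.array([40.0, 50.0, 60.0, 70.0, 80.0, 90.0])
judgments = np.array([2.1, 4.3, 8.5, 18.0, 34.0, 72.0])

# Stevens' law: psi = k * p^beta, with p the sound pressure.
# Since SPL = 20*log10(p/p0), log10(psi) is linear in SPL/20 with slope beta.
beta, log_k = np.polyfit(spl_db / 20.0, np.log10(judgments), 1)
print(f"individual loudness exponent: {beta:.2f}")  # ~0.6 for the average listener
```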
Advanced Radioisotope Power Systems Segmented Thermoelectric Research
NASA Technical Reports Server (NTRS)
Caillat, Thierry
2004-01-01
Flight times are long, requiring power systems with more than 15 years of life. Mass is at an absolute premium, requiring power systems with high specific power and scalability. Solar irradiance drops by 3 orders of magnitude from Earth to Pluto, making nuclear power sources preferable. The overall objective is to develop a low-mass, high-efficiency, low-cost Advanced Radioisotope Power System with double the specific power and efficiency of state-of-the-art Radioisotope Thermoelectric Generators (RTGs).
Advanced UV Source for Biological Agent Destruction
2006-01-01
...protection against chemical agents. The AUVS can be inserted into HVAC air ducts to eliminate BW agents, used to purify water, and/or used to reduce... Operating costs are very low. The technology has been shown to be very effective for destroying Bacillus pumilus endospores, which are significantly more resistant to UV than anthrax spores. Up to 7 orders of magnitude (7 logs) kill of B. pumilus spores has been demonstrated with the AUVS technology.
Overview of SDCM - The Spacecraft Design and Cost Model
NASA Technical Reports Server (NTRS)
Ferebee, Melvin J.; Farmer, Jeffery T.; Andersen, Gregory C.; Flamm, Jeffery D.; Badi, Deborah M.
1988-01-01
The Spacecraft Design and Cost Model (SDCM) is a computer-aided design and analysis tool for synthesizing spacecraft configurations, integrating their subsystems, and generating information concerning on-orbit servicing and costs. SDCM uses a bottom-up method in which the cost and performance parameters for subsystem components are first calculated; the model then sums the contributions from individual components in order to obtain an estimate of sizes and costs for each candidate configuration within a selected spacecraft system. An optimum spacecraft configuration can then be selected.
Some insight on censored cost estimators.
Zhao, H; Cheng, Y; Bang, H
2011-08-30
Censored survival data analysis has been studied for many years. Yet the analysis of censored mark variables, such as medical cost, quality-adjusted lifetime, and repeated events, faces a unique challenge that makes standard survival analysis techniques invalid. Because of the 'informative' censorship embedded in censored mark variables, the use of the Kaplan-Meier estimator (Journal of the American Statistical Association 1958; 53:457-481), as an example, will produce biased estimates. Innovative estimators have been developed in the past decade in order to handle this issue. Even though consistent estimators have been proposed, the formulations and interpretations of some estimators are less intuitive to practitioners. On the other hand, more intuitive estimators have been proposed, but their mathematical properties have not been established. In this paper, we prove the analytic identity between some estimators (a statistically motivated estimator and an intuitive estimator) for censored cost data. Efron (1967) made a similar investigation for censored survival data (between the Kaplan-Meier estimator and the redistribute-to-the-right algorithm). Therefore, we view our study as an extension of Efron's work to informatively censored data, so that our findings can be applied to other marked variables.
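For concreteness, here is a sketch of one well-known consistent estimator for censored cost data, the inverse-probability-of-censoring weighted ("simple weighted") mean; the paper's specific estimators and identity are not reproduced here, and the toy data are assumed.

```python
import numpy as np

def weighted_mean_cost(cost, time, died):
    """Inverse-probability-of-censoring weighted mean cost: each complete
    (uncensored) case is weighted by 1/K(T_i), where K is the Kaplan-Meier
    survival function of the censoring distribution (left limit at T_i)."""
    order = np.argsort(time)
    cost, time, died = cost[order], time[order], died[order]
    n = len(cost)
    K = np.ones(n)
    surv = 1.0
    for i in range(n):
        K[i] = surv                    # censoring KM just before T_i
        if not died[i]:                # this subject is a censoring event
            surv *= 1.0 - 1.0 / (n - i)
    return float(np.sum(died * cost / K) / n)

# Toy data (assumed): cost accrues at $100 per unit time until death,
# with uniform censoring; the true mean cost is 100 * E[T] = 400.
rng = np.random.default_rng(2)
t_death = rng.uniform(0.0, 8.0, 500)
t_cens = rng.uniform(0.0, 10.0, 500)
died = t_death <= t_cens
time = np.minimum(t_death, t_cens)
print(f"weighted mean cost: {weighted_mean_cost(100.0 * time, time, died):.0f}")
```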
A Study of the Characteristics, Costs, and Magnitude of Interlibrary Loans in Academic Libraries.
ERIC Educational Resources Information Center
Palmour, Vernon E., Comp.; And Others
A national probability sample was made to survey the costs, the characteristics of materials loaned and borrowed, and the present and future magnitude of interlibrary loans for academic libraries. From a sample of 80 libraries it was found that the cost of a filled loan request varied between two and seven dollars and that lending cost per…
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.
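To make the two matrices concrete, the toy sketch below forms the mixed-model equations for a tiny evaluation and reads off both the prediction error (co)variance block of the breeding values and the variance-covariance matrix of the estimated contemporary-group effects; the paper's correction linking the two is not reproduced here, and every number is illustrative.

```python
import numpy as np

# Tiny univariate demo: y = Xb + Zu + e, with b the contemporary-group (CG)
# fixed effects and u the breeding values; A = I and sigma_e^2 = 1 for brevity.
rng = np.random.default_rng(3)
n_cg, n_anim, n_obs = 3, 12, 36
X = np.zeros((n_obs, n_cg))
Z = np.zeros((n_obs, n_anim))
for i in range(n_obs):                   # random CG and animal per record
    X[i, rng.integers(n_cg)] = 1.0
    Z[i, rng.integers(n_anim)] = 1.0
lam = 2.0                                # variance ratio sigma_e^2 / sigma_u^2

# Mixed-model equations; the inverse yields both blocks of interest.
C = np.block([[X.T @ X, X.T @ Z],
              [Z.T @ X, Z.T @ Z + lam * np.eye(n_anim)]])
Cinv = np.linalg.pinv(C)
pev_u = Cinv[n_cg:, n_cg:]   # prediction error (co)variances of u (large block)
var_b = Cinv[:n_cg, :n_cg]   # var-cov of estimated CG effects (small block)

# The paper's point: a correction applied to var_b recovers the CG-averaged
# pev_u without ever forming the large block in a real-scale evaluation.
print(np.round(var_b, 3))
```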
Cratering time scales for the Galilean satellites
NASA Technical Reports Server (NTRS)
Shoemaker, E. M.; Wolfe, R. F.
1982-01-01
An attempt is made to estimate the present cratering rate for each Galilean satellite within the correct order of magnitude and to extend the cratering rates back into the geologic past on the basis of evidence from the earth-moon system. For collisions with long and short period comets, the magnitudes and size distributions of the comet nuclei, the distribution of their perihelion distances, and the completeness of discovery are addressed. The diameters and masses of cometary nuclei are assessed, as are crater diameters and cratering rates. The dynamical relations between long period and short period comets are discussed, and the population of Jupiter-crossing asteroids is assessed. Estimated present cratering rates on the Galilean satellites are compared and variations of cratering rate with time are considered. Finally, the consistency of derived cratering time scales with the cratering record of the icy Galilean satellites is discussed.
Maximum magnitude in the Lower Rhine Graben
NASA Astrophysics Data System (ADS)
Vanneste, Kris; Merino, Miguel; Stein, Seth; Vleminckx, Bart; Brooks, Eddie; Camelbeeck, Thierry
2014-05-01
Estimating Mmax, the assumed magnitude of the largest future earthquakes expected on a fault or in an area, involves large uncertainties. No theoretical basis exists to infer Mmax because even where we know the long-term rate of motion across a plate boundary fault, or the deformation rate across an intraplate zone, neither predicts how strain will be released. As a result, quite different estimates can be made based on the assumptions used. All one can say with certainty is that Mmax is at least as large as the largest earthquake in the available record. However, because catalogs are often short relative to the average recurrence time of large earthquakes, larger earthquakes than anticipated often occur. Estimating Mmax is especially challenging within plates, where deformation rates are poorly constrained, large earthquakes are rarer and variable in space and time, and often occur on previously unrecognized faults. We explore this issue for the Lower Rhine Graben seismic zone where the largest known earthquake, the 1756 Düren earthquake, has magnitude 5.7 and should occur on average about every 400 years. However, paleoseismic studies suggest that earthquakes with magnitudes up to 6.7 occurred during the Late Pleistocene and Holocene. What to assume for Mmax is crucial for critical facilities like nuclear power plants that should be designed to withstand the maximum shaking in 10,000 years. Using the observed earthquake frequency-magnitude data, we generate synthetic earthquake histories, and sample them over shorter intervals corresponding to the real catalog's completeness. The maximum magnitudes appearing most often in the simulations tend to be those of earthquakes with mean recurrence time equal to the catalog length. Because catalogs are often short relative to the average recurrence time of large earthquakes, we expect larger earthquakes than observed to date to occur. In a next step, we will compute hazard maps for different return periods based on the synthetic catalogs, in order to determine the influence of underestimating Mmax.
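A minimal sketch of the synthetic-history experiment, assuming an illustrative doubly-truncated Gutenberg-Richter law; the parameters below are placeholders, not the Lower Rhine Graben fit.

```python
import numpy as np

# Sample synthetic earthquake catalogs from a doubly-truncated
# Gutenberg-Richter law and record the largest magnitude seen in a
# window matching the real catalog's length.
rng = np.random.default_rng(4)
b, m_min, m_max = 1.0, 2.0, 7.0     # illustrative G-R b-value and bounds
rate = 5.0                           # events/yr above m_min (assumed)
catalog_years, n_sim = 300, 2000

beta = b * np.log(10.0)
trunc = 1.0 - np.exp(-beta * (m_max - m_min))
observed_max = np.empty(n_sim)
for i in range(n_sim):
    n_ev = rng.poisson(rate * catalog_years)
    u = rng.random(n_ev)
    m = m_min - np.log(1.0 - u * trunc) / beta   # inverse-CDF sampling
    observed_max[i] = m.max()

# The typical "largest observed" magnitude sits near the magnitude whose
# mean recurrence time equals the catalog length -- so short catalogs
# systematically understate Mmax.
print(f"median largest observed magnitude: {np.median(observed_max):.2f}")
```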
NASA Astrophysics Data System (ADS)
Cupola, F.; Tanda, M. G.; Zanini, A.
2014-12-01
The interest in approaches that allow the estimation of pollutant source release in groundwater has increased greatly over the last decades. This is due to the large number of groundwater reclamation procedures that have been carried out: remediation is expensive, and the costs can more easily be shared among the different actors if the release history is known. Moreover, a reliable release history can be a useful tool for predicting the plume evolution and for minimizing the harmful effects of the contamination. In this framework, Woodbury and Ulrych (1993, 1996) adopted and improved the minimum relative entropy (MRE) method to solve linear inverse problems for the recovery of the pollutant release history in an aquifer. In this work, the MRE method has been improved to detect the source release history in a 2-D aquifer characterized by a non-uniform flow field. The approach has been tested on two cases: a 2-D homogeneous conductivity field and a strongly heterogeneous one (the hydraulic conductivity varies over three orders of magnitude). In the latter case the transfer function could not be described with an analytical formulation; thus, the transfer functions were estimated by means of the method developed by Butera et al. (2006). To demonstrate the scope of the method, it was applied with two different datasets: observations collected at the same time at 20 different monitoring points, and observations collected at 2 monitoring points at different times (15-25 monitoring points). The observations were considered to be affected by a random error. These study cases were carried out considering a boxcar and a Gaussian function as the expected value of the prior distribution of the release history. The agreement between the true and the estimated release history was evaluated through the normalized root mean square error (nRMSE); this showed the ability of the method to recover the release history even in the most severe cases. Finally, a forward simulation was carried out using the estimated release history in order to compare the true data with the estimates: the best agreement was obtained in the homogeneous case, although the nRMSE is acceptable in the heterogeneous case as well.
Rawlins, B G; Scheib, C; Tyler, A N; Beamish, D
2012-12-01
Regulatory authorities need ways to estimate natural terrestrial gamma radiation dose rates (nGy h⁻¹) across the landscape accurately, to assess its potential deleterious health effects. The primary method for estimating outdoor dose rate is to use an in situ detector supported 1 m above the ground, but such measurements are costly and cannot capture the landscape-scale variation in dose rates which are associated with changes in soil and parent material mineralogy. We investigate the potential for improving estimates of terrestrial gamma dose rates across Northern Ireland (13,542 km²) using measurements from 168 sites and two sources of ancillary data: (i) a map based on a simplified classification of soil parent material, and (ii) dose estimates from a national-scale, airborne radiometric survey. We used the linear mixed modelling framework in which the two ancillary variables were included in separate models as fixed effects, plus a correlation structure which captures the spatially correlated variance component. We used a cross-validation procedure to determine the magnitude of the prediction errors for the different models. We removed a random subset of 10 terrestrial measurements and formed the model from the remainder (n = 158), and then used the model to predict values at the other 10 sites. We repeated this procedure 50 times. The measurements of terrestrial dose vary between 1 and 103 (nGy h⁻¹). The median absolute model prediction errors (nGy h⁻¹) for the three models declined in the following order: no ancillary data (10.8) > simple geological classification (8.3) > airborne radiometric dose (5.4) as a single fixed effect. Estimates of airborne radiometric gamma dose rate can significantly improve the spatial prediction of terrestrial dose rate.
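A schematic version of the cross-validation loop described above, with a plain linear fit on synthetic data standing in for the paper's linear mixed model (which additionally carries a spatially correlated variance component); the numbers are illustrative only.

```python
import numpy as np

# Sketch of the validation procedure: drop 10 random sites, fit on the
# remaining 158, predict the held-out terrestrial dose, repeat 50 times,
# and report the median absolute prediction error.
rng = np.random.default_rng(5)
n = 168
airborne = rng.uniform(5.0, 95.0, n)                     # nGy/h, synthetic
terrestrial = 0.9 * airborne + rng.normal(0.0, 8.0, n)   # nGy/h, synthetic

abs_err = []
for _ in range(50):
    test = rng.choice(n, 10, replace=False)
    train = np.setdiff1d(np.arange(n), test)
    slope, intercept = np.polyfit(airborne[train], terrestrial[train], 1)
    pred = intercept + slope * airborne[test]
    abs_err.extend(np.abs(terrestrial[test] - pred))

print(f"median absolute prediction error: {np.median(abs_err):.1f} nGy/h")
```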
Team X Report #1401: Exoplanet Coronagraph STDT Study 2013-06
NASA Technical Reports Server (NTRS)
Warfield, Keith
2013-01-01
This document is intended to stimulate discussion of the topic described. All technical and cost analyses are preliminary. This document is not a commitment to work, but is a precursor to a formal proposal if it generates sufficient mutual interest. The data contained in this document may not be modified in any way. Cost estimates described or summarized in this document were generated as part of a preliminary, first-order cost class identification as part of an early trade space study, are based on JPL-internal parametric cost modeling, assume a JPL in-house build, and do not constitute a commitment on the part of JPL or Caltech. JPL and Team X add cost reserves for development and operations. Unadjusted estimate totals and cost reserve allocations would be revised as needed in future more-detailed studies as appropriate for the specific cost-risks for a given mission concept.
SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows
NASA Astrophysics Data System (ADS)
Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu
2017-12-01
A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives a LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional and three-dimensional topography, and we compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational costs is expected, relative to traditional existing solvers.
NASA Technical Reports Server (NTRS)
Thomas, Dale; Smith, Charles; Thomas, Leann; Kittredge, Sheryl
2002-01-01
The overall goal of the 2nd Generation RLV Program is to substantially reduce technical and business risks associated with developing a new class of reusable launch vehicles. NASA's specific goals are to improve the safety of a 2nd-generation system by 2 orders of magnitude - equivalent to a crew risk of 1-in-10,000 missions - and decrease the cost tenfold, to approximately $1,000 per pound of payload launched. Architecture definition is being conducted in parallel with the maturating of key technologies specifically identified to improve safety and reliability, while reducing operational costs. An architecture broadly includes an Earth-to-orbit reusable launch vehicle, on-orbit transfer vehicles and upper stages, mission planning, ground and flight operations, and support infrastructure, both on the ground and in orbit. The systems engineering approach ensures that the technologies developed - such as lightweight structures, long-life rocket engines, reliable crew escape, and robust thermal protection systems - will synergistically integrate into the optimum vehicle. To best direct technology development decisions, analytical models are employed to accurately predict the benefits of each technology toward potential space transportation architectures as well as the risks associated with each technology. Rigorous systems analysis provides the foundation for assessing progress toward safety and cost goals. The systems engineering review process factors in comprehensive budget estimates, detailed project schedules, and business and performance plans, against the goals of safety, reliability, and cost, in addition to overall technical feasibility. This approach forms the basis for investment decisions in the 2nd Generation RLV Program's risk-reduction activities. Through this process, NASA will continually refine its specialized needs and identify where Defense and commercial requirements overlap those of civil missions.
Estimation of Local Bone Loads for the Volume of Interest.
Kim, Jung Jin; Kim, Youkyung; Jang, In Gwun
2016-07-01
Computational bone remodeling simulations have recently received significant attention with the aid of state-of-the-art high-resolution imaging modalities. They have been performed using localized finite element (FE) models rather than full FE models due to the excessive computational costs of full FE models. However, these localized bone remodeling simulations remain to be investigated in more depth. In particular, applying simplified loading conditions (e.g., uniform and unidirectional loads) to localized FE models has a severe limitation for reliable subject-specific assessment. In order to effectively determine the physiological local bone loads for the volume of interest (VOI), this paper proposes a novel method of estimating the local loads when the global musculoskeletal loads are given. The proposed method is verified for three VOIs in a proximal femur in terms of force equilibrium, displacement field, and strain energy density (SED) distribution. The effect of global load deviation on the local load estimation is also investigated by perturbing a hip joint contact force (HCF) in the femoral head. Deviation in force magnitude, being itself the largest deviation, produces the greatest absolute changes in the SED distribution, whereas angular deviation perpendicular to the HCF produces the greatest relative change. With further in vivo force measurements and high-resolution clinical imaging modalities, the proposed method will contribute to the development of reliable patient-specific localized FE models, which can provide enhanced computational efficiency for iterative computing processes such as bone remodeling simulations.
Robust Tracking of Small Displacements with a Bayesian Estimator
Dumont, Douglas M.; Byram, Brett C.
2016-01-01
Radiation-force-based elasticity imaging describes a group of techniques that use acoustic radiation force (ARF) to displace tissue in order to obtain qualitative or quantitative measurements of tissue properties. Because ARF-induced displacements are on the order of micrometers, tracking these displacements in vivo can be challenging. Previously, it has been shown that Bayesian-based estimation can overcome some of the limitations of a traditional displacement estimator like normalized cross-correlation (NCC). In this work, we describe a Bayesian framework that combines a generalized Gaussian-Markov random field (GGMRF) prior with an automated method for selecting the prior's width. We then evaluate its performance in the context of tracking the micrometer-order displacements encountered in an ARF-based method like acoustic radiation force impulse (ARFI) imaging. The results show that bias, variance, and mean-square error performance vary with prior shape and width, and that an almost one order-of-magnitude reduction in mean-square error can be achieved by the estimator at the automatically-selected prior width. Lesion simulations show that the proposed estimator has a higher contrast-to-noise ratio but lower contrast than NCC, median-filtered NCC, and the previous Bayesian estimator, with a non-Gaussian prior shape having better lesion-edge resolution than a Gaussian prior. In vivo results from a cardiac, radiofrequency ablation ARFI imaging dataset show quantitative improvements in lesion contrast-to-noise ratio over NCC as well as the previous Bayesian estimator.
ERIC Educational Resources Information Center
Peterson, Cora
2009-01-01
Schools that participate in the National School Lunch Program receive a portion of their federal funding as commodity foods rather than cash payments. This research compared the product costs and estimated total procurement costs of commodity and commercial foods from the school district perspective using data from 579 Minnesota ordering sites in…
NASA Astrophysics Data System (ADS)
Gavin, D. G.; Colombaroli, D.; Morey, A. E.
2015-12-01
The inclusion of paleo-flood events greatly affects estimates of peak magnitudes (e.g., Q100) in flood-frequency analysis. Likewise, peak events also are associated with certain synoptic climatic patterns that vary on all time scales. Geologic records preserved in lake sediments have the potential to capture the non-stationarity in frequency-magnitude relationships, but few such records preserve a continuous history of event magnitudes. We present a 10-meter 2000-yr record from Upper Squaw Lake, Oregon, that contains finely laminated silt layers that reflect landscape erosion events from the 40 km2 watershed. CT-scans of the core (<1 mm resolution) and a 14C-dated chronology yielded a pseudo-annual time series of erosion magnitudes. The most recent 80 years of the record correlates strongly with annual peak stream discharge and road construction. We examined the frequency-magnitude relationship for the entire pre-road period and show that the seven largest events fall above a strongly linear relationship, suggesting a distinct process (e.g., severe fires or earthquakes) operating at low-frequency to generate large-magnitude events. Expressing the record as cumulative sediment accumulation anomalies showed the importance of the large events in "returning the system" to the long-term mean rate. Applying frequency-magnitude analysis in a moving window showed that the Q100 and Q10 of watershed erosion varied by 1.7 and 1.0 orders of magnitude, respectively. The variations in watershed erosion are weakly correlated with temperature and precipitation reconstructions at the decadal to centennial scale. This suggests that dynamics both internal (i.e., sediment production) and external (i.e., earthquakes) to the system, as well as more stochastic events (i.e., single severe wildfires) can at least partially over-ride external climate forcing of watershed erosion at decadal to centennial time scales.
Predicting First Traversal Times for Virions and Nanoparticles in Mucus with Slowed Diffusion
Erickson, Austen M.; Henry, Bruce I.; Murray, John M.; Klasse, Per Johan; Angstmann, Christopher N.
2015-01-01
Particle-tracking experiments focusing on virions or nanoparticles in mucus have measured mean-square displacements and reported diffusion coefficients that are orders of magnitude smaller than the diffusion coefficients of such particles in water. Accurate description of this subdiffusion is important to properly estimate the likelihood of virions traversing the mucus boundary layer and infecting cells in the epithelium. However, there are several candidate models for diffusion that can fit experimental measurements of mean-square displacements. We show that these models yield very different estimates for the time taken for subdiffusive virions to traverse through a mucus layer. We explain why fits of subdiffusive mean-square displacements to standard diffusion models may be misleading. Relevant to human immunodeficiency virus infection, using computational methods for fractional subdiffusion, we show that subdiffusion in normal acidic mucus provides a more effective barrier against infection than previously thought. By contrast, the neutralization of the mucus by alkaline semen, after sexual intercourse, allows virions to cross the mucus layer and reach the epithelium in a short timeframe. The computed barrier protection from fractional subdiffusion is some orders of magnitude greater than that derived by fitting standard models of diffusion to subdiffusive data. PMID:26153713
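The model-fitting pitfall described above can be made concrete with a small sketch: fitting a power law MSD(t) = 4 D_α t^α on log-log axes recovers an anomalous exponent, but, as the abstract stresses, several diffusion models can reproduce the same MSD while implying very different traversal times. The 2D convention and the synthetic data below are illustrative.

```python
# Minimal sketch: estimating an anomalous-diffusion exponent from a
# mean-square-displacement curve, assuming the 2D convention
# MSD(t) = 4 * D_alpha * t**alpha. A good fit here does NOT by itself
# distinguish between candidate subdiffusion models.
import numpy as np

def fit_msd_powerlaw(t, msd):
    """Return (alpha, D_alpha) from a log-log least-squares fit."""
    slope, intercept = np.polyfit(np.log(t), np.log(msd), 1)
    return slope, np.exp(intercept) / 4.0

# Synthetic subdiffusive data with alpha = 0.6
t = np.logspace(-2, 1, 50)
msd = 4 * 0.05 * t**0.6
print(fit_msd_powerlaw(t, msd))  # ~(0.6, 0.05)
```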
High-magnitude flooding across Britain since AD 1750
NASA Astrophysics Data System (ADS)
Macdonald, Neil; Sangster, Heather
2017-03-01
The last decade has witnessed severe flooding across much of the globe, but have these floods really been exceptional? Globally, relatively few instrumental river flow series extend beyond 50 years, with short records presenting significant challenges in determining flood risk from high-magnitude floods. A perceived increase in extreme floods in recent years has decreased public confidence in conventional flood risk estimates; the results affect society (insurance costs), individuals (personal vulnerability) and companies (e.g. water resource managers). Here, we show how historical records from Britain have improved understanding of high-magnitude floods, by examining past spatial and temporal variability. The findings identify that whilst recent floods are notable, several comparable periods of increased flooding are identifiable historically, with periods of greater frequency (flood-rich periods). Statistically significant relationships between the British flood index, the Atlantic Meridional Oscillation and the North Atlantic Oscillation Index are identified. The use of historical records identifies that the largest floods often transcend single catchments affecting regions and that the current flood-rich period is not unprecedented.
The human and economic cost of hidden hunger.
Stein, Alexander J; Qaim, Matin
2007-06-01
Micronutrient malnutrition is a public health problem in many developing countries. Its negative impact on income growth is recognized in principle, but there are widely varying estimates of the related economic cost. The aims of this study were to discuss available studies that quantify the cost of micronutrient malnutrition and to develop an alternative framework, applying it to India. Detailed burden of disease calculations are used to estimate the economic cost of micronutrient malnutrition based on disability-adjusted life years (DALYs) lost. The short-term economic cost of micronutrient malnutrition in India amounts to 0.8% to 2.5% of the gross domestic product. Although the results confirm that micronutrient malnutrition is a huge economic problem, the estimates are lower than those of most previous studies. The differences may be due to differences in underlying assumptions, quality of data, and precision of calculation, but also to dynamic interactions between nutrition, health, and economic productivity, which are difficult to capture. Clear explanation of all calculation details would be desirable for future studies in order to increase credibility and transparency.
Buja, Alessandra; Sartor, Gino; Scioni, Manuela; Vecchiato, Antonella; Bolzan, Mario; Rebba, Vincenzo; Sileni, Vanna Chiarion; Palozzo, Angelo Claudio; Montesco, Maria; Del Fiore, Paolo; Baldo, Vincenzo; Rossi, Carlo Riccardo
2018-02-07
Cutaneous melanoma is a major concern in terms of healthcare systems and economics. The aim of this study was to estimate the direct costs of melanoma by disease stage, phase of diagnosis, and treatment according to the pre-set clinical guidelines drafted by the AIOM (Italian Medical Oncological Association). Based on the AIOM guidelines for malignant cutaneous melanoma, a highly detailed decision-making model was developed describing the patient's pathway from diagnosis through the subsequent phases of disease staging, surgical and medical treatment, and follow-up. The model associates each phase potentially involving medical procedures with a likelihood measure and a cost, thus enabling an estimation of the expected costs by disease stage and clinical phase of melanoma diagnosis and treatment according to the clinical guidelines. The mean per-patient cost of the whole melanoma pathway (including one year of follow-up) ranged from €149 for stage 0 disease to €66,950 for stage IV disease. The costs relating to each phase of the disease's diagnosis and treatment depended on disease stage. It is essential to calculate the direct costs of managing malignant cutaneous melanoma according to clinical guidelines in order to estimate the economic burden of this disease and to enable policy-makers to allocate appropriate resources.
Variation in the costs of delivering routine immunization services in Peru.
Walker, D; Mosqueira, N R; Penny, M E; Lanata, C F; Clark, A D; Sanderson, C F B; Fox-Rushby, J A
2004-09-01
Estimates of vaccination costs usually provide only point estimates at national level with no information on cost variation. In practice, however, such information is necessary for programme managers. This paper presents information on the variations in costs of delivering routine immunization services in three diverse districts of Peru: Ayacucho (a mountainous area), San Martin (a jungle area) and Lima (a coastal area). We consider the impact of variability on predictions of cost and reflect on the likely impact on expected cost-effectiveness ratios, policy decisions and future research practice. All costs are in 2002 prices in US dollars and include the costs of providing vaccination services incurred by 19 government health facilities during the January-December 2002 financial year. Vaccine wastage rates have been estimated using stock records. The cost per fully vaccinated child ranged from US$16.63 to US$24.52 in Ayacucho, from US$21.79 to US$36.69 in San Martin, and from US$9.58 to US$20.31 in Lima. The volume of vaccines administered and wastage rates are determinants of the variation in costs of delivering routine immunization services. This study shows there is considerable variation in the costs of providing vaccines across geographical regions and different types of facilities. Information on how costs vary can be used as a basis from which to generalize to other settings and provide more accurate estimates for decision-makers who do not have disaggregated data on local costs. Future studies should include sufficiently large sample sizes and ensure that regions are carefully selected in order to maximize the interpretation of cost variation.
A 1400-MHz survey of 1478 Abell clusters of galaxies
NASA Technical Reports Server (NTRS)
Owen, F. N.; White, R. A.; Hilldrup, K. C.; Hanisch, R. J.
1982-01-01
Observations of 1478 Abell clusters of galaxies with the NRAO 91-m telescope at 1400 MHz are reported. The measured beam shape was deconvolved from the measured source Gaussian fits in order to estimate the source size and position angle. All detected sources within 0.5 corrected Abell cluster radii are listed, including the cluster number, richness class, distance class, magnitude of the tenth brightest galaxy, redshift estimate, corrected cluster radius in arcmin, right ascension and error, declination and error, total flux density and error, and angular structure for each source.
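For Gaussian profiles, deconvolving the beam from a fitted source reduces to subtracting widths in quadrature; a minimal sketch follows (illustrative numbers, and omitting the position-angle bookkeeping that a full two-dimensional deconvolution requires).

```python
# Minimal sketch of Gaussian beam deconvolution: convolution of Gaussians
# adds widths in quadrature, so the intrinsic source FWHM follows from the
# measured fit and the beam FWHM. Values are illustrative.
import numpy as np

def deconvolved_size(theta_measured, theta_beam):
    """Intrinsic source FWHM (same units as inputs); NaN if unresolved."""
    diff = theta_measured**2 - theta_beam**2
    return np.sqrt(diff) if diff > 0 else float("nan")

print(deconvolved_size(12.0, 10.0))  # arcmin-scale example -> ~6.6
```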
Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models
NASA Astrophysics Data System (ADS)
Field, Scott E.; Galley, Chad R.; Hesthaven, Jan S.; Kaye, Jason; Tiglio, Manuel
2014-07-01
We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL + m c_fit) online operations, where c_fit denotes the fitting function operation count and, typically, m ≪ L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10⁵M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in generating new waveforms with a surrogate. As waveform generation is one of the dominant costs in parameter estimation algorithms and parameter space exploration, surrogate models offer a new and practical way to dramatically accelerate such studies without impacting accuracy. Surrogates built in this paper, as well as others, are available from GWSurrogate, a publicly available Python package.
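A minimal sketch of the online stage only, assuming the offline steps (greedy basis selection, empirical interpolation, parameter fits) have already produced m basis vectors and m fitting functions. The toy basis and polynomial fits below are placeholders, but the final combination has the O(mL + m c_fit) operation count quoted in the abstract.

```python
# Minimal sketch of evaluating a reduced-order surrogate online: m basis
# vectors (shape m x L) are combined with m parameter-space fits evaluated
# at lambda. Basis and fits here are illustrative stand-ins.
import numpy as np

def evaluate_surrogate(basis, fits, lam):
    """basis: (m, L) array; fits: list of m callables lambda -> coefficient."""
    coeffs = np.array([f(lam) for f in fits])  # m fit evaluations: O(m c_fit)
    return coeffs @ basis                      # linear combination: O(m L)

# Toy example: m = 3 basis vectors, L = 1000 time samples
L, m = 1000, 3
t = np.linspace(0.0, 1.0, L)
basis = np.array([np.sin((k + 1) * np.pi * t) for k in range(m)])
fits = [lambda lam, k=k: lam**k for k in range(m)]
waveform = evaluate_surrogate(basis, fits, 0.5)
print(waveform.shape)  # (1000,)
```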
Manso, J; García-Barrera, T; Gómez-Ariza, J L; González, A G
2014-02-01
The present paper describes a method based on the extraction of analytes by multiple hollow fibre liquid-phase microextraction and detection by ion-trap mass spectrometry and electron capture detectors after gas chromatographic separation. The limits of detection are in the range of 0.13-0.67 μg kg⁻¹, five orders of magnitude lower than those reached with the European Commission official method of analysis, with three orders of magnitude of linear range (from the quantification limits to 400 μg kg⁻¹ for all the analytes) and recoveries in fortified olive oils in the range of 78-104%. The main advantages of the analytical method are the absence of sample carryover (due to the disposable nature of the membranes), high enrichment factors in the range of 79-488, high throughput and low cost. The repeatability of the analytical method ranged from 8 to 15% for all the analytes, showing a good performance.
NASA Astrophysics Data System (ADS)
Colby, Eric R.; Len, L. K.
Most particle accelerators today are expensive devices found only in the largest laboratories, industries, and hospitals. Using techniques developed nearly a century ago, the limiting performance of these accelerators is often traceable to material limitations, power source capabilities, and the cost tolerance of the application. Advanced accelerator concepts aim to increase the gradient of accelerators by orders of magnitude, using new power sources (e.g. lasers and relativistic beams) and new materials (e.g. dielectrics, metamaterials, and plasmas). Worldwide, research in this area has grown steadily in intensity since the 1980s, resulting in demonstrations of accelerating gradients that are orders of magnitude higher than for conventional techniques. While research is still in the early stages, these techniques have begun to demonstrate the potential to radically change accelerators, making them much more compact, and extending the reach of these tools of science into the angstrom and attosecond realms. Maturation of these techniques into robust, engineered devices will require sustained interdisciplinary, collaborative R&D and coherent use of test infrastructure worldwide. The outcome can potentially transform how accelerators are used.
Large low-field magnetoresistance in Fe3O4/molecule nanoparticles at room temperature
NASA Astrophysics Data System (ADS)
Yue, F. J.; Wang, S.; Lin, L.; Zhang, F. M.; Li, C. H.; Zuo, J. L.; Du, Y. W.; Wu, D.
2011-01-01
Acetic acid molecule-coated Fe3O4 nanoparticles, 450-650 nm in size, have been synthesized using a chemical solvothermal reduction method. Fourier transform infrared spectroscopy measurements confirm that a monolayer of acetic acid molecules chemically bonds to the Fe3O4 nanoparticles. A low-field magnetoresistance (LFMR) of more than -10% at room temperature and -23% at 140 K is achieved with a saturation field of less than 2 kOe. In comparison, the resistivity of cold-pressed bare Fe3O4 nanoparticles is six orders of magnitude smaller than that of Fe3O4/molecule nanoparticles, and the LFMR ratio is one order of magnitude smaller. Our results indicate that the large LFMR in Fe3O4/molecule nanoparticles is associated with spin-polarized electrons tunnelling through molecules instead of direct nanoparticle contacts. These results suggest that magnetic oxide-molecule hybrid materials are an alternative class of materials for developing spin-based devices via a simple, low-cost approach.
Accelerating MP2C dispersion corrections for dimers and molecular crystals
NASA Astrophysics Data System (ADS)
Huang, Yuanhang; Shao, Yihan; Beran, Gregory J. O.
2013-06-01
The MP2C dispersion correction of Pitonak and Hesselmann [J. Chem. Theory Comput. 6, 168 (2010); doi:10.1021/ct9005882] substantially improves the performance of second-order Møller-Plesset perturbation theory for non-covalent interactions, albeit with non-trivial computational cost. Here, the MP2C correction is computed in a monomer-centered basis instead of a dimer-centered one. When applied to a single dimer MP2 calculation, this change accelerates the MP2C dispersion correction several-fold while introducing only trivial new errors. More significantly, in the context of fragment-based molecular crystal studies, combination of the new monomer basis algorithm and the periodic symmetry of the crystal reduces the cost of computing the dispersion correction by two orders of magnitude. This speed-up reduces the MP2C dispersion correction calculation from a significant computational expense to a negligible one in crystals like aspirin or oxalyl dihydrazide, without compromising accuracy.
Economic valuation of subsistence harvest of wildlife in Madagascar.
Golden, Christopher D; Bonds, Matthew H; Brashares, Justin S; Rasolofoniaina, B J Rodolph; Kremen, Claire
2014-02-01
Wildlife consumption can be viewed as an ecosystem provisioning service (the production of a material good through ecological functioning) because of wildlife's ability to persist under sustainable levels of harvest. We used the case of wildlife harvest and consumption in northeastern Madagascar to identify the distribution of these services to local households and communities to further our understanding of local reliance on natural resources. We inferred these benefits from demand curves built with data on wildlife sales transactions. On average, the value of wildlife provisioning represented 57% of annual household cash income in local communities from the Makira Natural Park and Masoala National Park, and harvested areas produced an economic return of US$0.42 ha⁻¹ yr⁻¹. Variability in value of harvested wildlife was high among communities and households, with an approximate 2 orders of magnitude difference in the proportional value of wildlife to household income. The imputed price of harvested wildlife and its consumption were strongly associated (p < 0.001), and increases in price led to reduced harvest for consumption. Heightened monitoring and enforcement of hunting could increase the costs of harvesting and thus elevate the price and reduce consumption of wildlife. Increased enforcement would therefore be beneficial to biodiversity conservation but could limit local people's food supply. Specifically, our results provide an estimate of the cost of offsetting economic losses to local populations from the enforcement of conservation policies. By explicitly estimating the welfare effects of consumed wildlife, our results may inform targeted interventions by public health and development specialists as they allocate sparse funds to support regions, households, or individuals most vulnerable to changes in access to wildlife.
Sensitivity to experimental data of pollutant site mean concentration in stormwater runoff.
Mourad, M; Bertrand-Krajewski, J L; Chebbo, G
2005-01-01
Urban wet weather discharges are known to be a great source of pollutants for receiving waters, whose protection requires the estimation of long-term discharged pollutant loads. Pollutant loads can be estimated by multiplying a site mean concentration (SMC) by the total runoff volume during a given period of time. The estimation of the SMC value as a weighted mean value with event runoff volumes as weights is affected by uncertainties due to the variability of event mean concentrations and to the number of events used. This study, carried out on 13 catchments, gives orders of magnitude of these uncertainties and shows the limitations of usual practices using few measured events. The results obtained show that it is not possible to propose a standard minimal number of events to be measured on any catchment in order to evaluate the SMC value with a given uncertainty.
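A minimal sketch of the SMC definition used above, with a bootstrap to illustrate how strongly the estimate depends on the number of measured events; the event volumes and concentrations are synthetic.

```python
# Minimal sketch: site mean concentration (SMC) as a runoff-volume-weighted
# mean of event mean concentrations, plus a bootstrap showing the spread
# obtained from only 12 measured events. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

def smc(volumes, concentrations):
    volumes, concentrations = np.asarray(volumes), np.asarray(concentrations)
    return np.sum(volumes * concentrations) / np.sum(volumes)

v = rng.lognormal(7.0, 1.0, size=12)   # event runoff volumes (m3)
c = rng.lognormal(4.0, 0.8, size=12)   # event mean concentrations (mg/L)
print("SMC:", smc(v, c))

idx = rng.integers(0, 12, size=(2000, 12))            # bootstrap resamples
boot = [smc(v[i], c[i]) for i in idx]
print("90% interval:", np.percentile(boot, [5, 95]))  # wide with few events
```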
NASA Astrophysics Data System (ADS)
Watkinson, Catherine A.; Majumdar, Suman; Pritchard, Jonathan R.; Mondal, Rajesh
2017-12-01
In this paper, we establish the accuracy and robustness of a fast estimator for the bispectrum - the 'FFT-bispectrum estimator'. The implementation of the estimator presented here offers speed and simplicity benefits over a direct-measurement approach. We also generalize the derivation so it may easily be applied to any order of polyspectra, such as the trispectrum, at the cost of only a handful of fast Fourier transforms (FFTs). All lower-order statistics can also be calculated simultaneously for little extra cost. To test the estimator, we make use of a non-linear density field, and for a more strongly non-Gaussian test case, we use a toy model of reionization in which ionized bubbles at a given redshift are all of equal size and are randomly distributed. Our tests find that the FFT-estimator remains accurate over a wide range of k, and so should be extremely useful for the analysis of 21-cm observations. The speed of the FFT-bispectrum estimator makes it suitable for sampling applications, such as Bayesian inference. The algorithm we describe should prove valuable in the analysis of simulations and observations, and whilst we apply it within the field of cosmology, the estimator is useful in any field that deals with non-Gaussian data.
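A minimal sketch of an FFT-based bispectrum estimator of this general kind: shell-filtered fields are inverse transformed, and the real-space product sums over all closed triangles, with the triangle count obtained by running the identical operation on the shell indicator. Grid size and shell width are illustrative, and this is a generic textbook construction, not the authors' implementation.

```python
# Minimal sketch of a shell-averaged bispectrum measured with FFTs only.
import numpy as np

def shell_maps(delta_k, kmag, kc, dk):
    """Inverse-transform the modes with |k| in (kc - dk/2, kc + dk/2]."""
    mask = (kmag > kc - dk / 2) & (kmag <= kc + dk / 2)
    d = np.fft.ifftn(np.where(mask, delta_k, 0.0)).real
    n = np.fft.ifftn(np.where(mask, 1.0 + 0.0j, 0.0)).real
    return d, n

def bispectrum(delta, k1, k2, k3, dk=0.1):
    """Shell-averaged bispectrum of a real 3D field (unnormalized units)."""
    N = delta.shape[0]
    freq = 2 * np.pi * np.fft.fftfreq(N)
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    delta_k = np.fft.fftn(delta)
    (d1, n1), (d2, n2), (d3, n3) = (shell_maps(delta_k, kmag, k, dk)
                                    for k in (k1, k2, k3))
    # Real-space products sum over all closed triangles; the indicator maps
    # provide the triangle count for normalization.
    return np.sum(d1 * d2 * d3) / np.sum(n1 * n2 * n3)

rng = np.random.default_rng(1)
field = rng.standard_normal((32, 32, 32))
print(bispectrum(field, 0.6, 0.6, 0.6))  # near zero for a Gaussian field
```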
Cost Of Compliance On Munitions Consolidation From Lualualei To West Loch
2017-12-01
from the perspective of the Department of Defense in order to capture all costs and benefits associated with the Army and Navy, the main stakeholders...weaknesses of the available alternative options. The model identifies tangible costs and benefits to estimate a net present value for each option. To...the robustness of the average net present value and to show the probability of net costs exceeding the net benefits. The analysis conducted in this...
Economic impact of medication non-adherence by disease groups: a systematic review
Fernandez-Llimos, Fernando; Frommer, Michael; Benrimoj, Charlie; Garcia-Cardenas, Victoria
2018-01-01
Objective: To determine the economic impact of medication non-adherence across multiple disease groups. Design: Systematic review. Evidence review: A comprehensive literature search was conducted in PubMed and Scopus in September 2017. Studies quantifying the cost of medication non-adherence in relation to economic impact were included. Relevant information was extracted and quality assessed using the Drummond checklist. Results: Seventy-nine individual studies assessing the cost of medication non-adherence across 14 disease groups were included. Wide-ranging cost variations were reported, with lower levels of adherence generally associated with higher total costs. The annual adjusted disease-specific economic cost of non-adherence per person ranged from $949 to $44,190 (in 2015 US$). Costs attributed to ‘all causes’ non-adherence ranged from $5,271 to $52,341. Medication possession ratio was the metric most used to calculate patient adherence, with varying cut-off points defining non-adherence. The main indicators used to measure the cost of non-adherence were total cost or total healthcare cost (83% of studies), pharmacy costs (70%), inpatient costs (46%), outpatient costs (50%), emergency department visit costs (27%), medical costs (29%) and hospitalisation costs (18%). The Drummond quality assessment yielded 10 studies of high quality, with all studies performing partial economic evaluations to varying extents. Conclusion: Medication non-adherence places a significant cost burden on healthcare systems. Current research assessing the economic impact of medication non-adherence is limited and of varying quality, failing to provide adaptable data to influence health policy. The correlation between increased non-adherence and higher disease prevalence should be used to inform policymakers to help circumvent avoidable costs to the healthcare system. Differences in methods make comparison among studies challenging and an accurate estimation of the true magnitude of the cost impossible. Standardisation of the metric measures used to estimate medication non-adherence and development of a streamlined approach to quantify costs are required. PROSPERO registration number CRD42015027338. PMID:29358417
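A minimal sketch of the medication possession ratio (MPR), the adherence metric the review found most used. The 0.8 cut-off shown is a common convention; as the review notes, cut-off choices vary across studies.

```python
# Minimal sketch of the medication possession ratio: days of medication
# supplied divided by days in the observation period, capped at 1.0.
from datetime import date

def mpr(fills, period_start, period_end):
    """fills: list of (fill_date, days_supplied); returns capped MPR."""
    period_days = (period_end - period_start).days
    supplied = sum(days for d, days in fills if period_start <= d <= period_end)
    return min(supplied / period_days, 1.0)

fills = [(date(2015, 1, 5), 30), (date(2015, 2, 10), 30), (date(2015, 4, 1), 30)]
ratio = mpr(fills, date(2015, 1, 1), date(2015, 6, 30))
print(f"MPR = {ratio:.2f}, non-adherent at 0.8 cut-off: {ratio < 0.8}")
```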
Deep uncertainty and broad heterogeneity in country-level social cost of carbon
NASA Astrophysics Data System (ADS)
Ricke, K.; Drouet, L.; Caldeira, K.; Tavoni, M.
2017-12-01
The social cost of carbon (SCC) is a commonly employed metric of the expected economic damages from carbon dioxide (CO2) emissions. Recent estimates of SCC range from approximately $10 per tonne of CO2 to as much as $1000/tCO2, but these have been computed at the global level. While useful in an optimal policy context, a world-level approach obscures the heterogeneous geography of climate damages and vast differences in country-level contributions to global SCC, as well as climate and socio-economic uncertainties, which are much larger at the regional level. For the first time, we estimate country-level contributions to SCC using recent climate and carbon-cycle model projections, empirical climate-driven economic damage estimations, and information from the Shared Socio-economic Pathways. Central specifications show high global SCC values (median: $417/tCO2; 66% confidence interval: $168-$793/tCO2), with country-level contributions ranging from -$11 (-$8 to -$14)/tCO2 to $86 ($50-$158)/tCO2. We quantify climate-, scenario- and economic-damage-driven uncertainties associated with the calculated values of SCC. We find that while the magnitude of the country-level social cost of carbon is highly uncertain, the relative positioning among countries is consistent. Countries incurring large fractions of the global cost include India, China, and the United States. The share of SCC distributed among countries is robust, indicating climate change winners and losers from a geopolitical perspective.
Vargas-Martínez, Ana Magdalena; Trapero-Bertran, Marta; Gil-García, Eugenia; Lima-Serrano, Marta
2018-04-15
Nowadays, one of the most prevalent patterns of alcohol consumption is binge drinking (BD). In 2015, the European School Survey Project on Alcohol and Drugs (ESPAD) Group estimated that about 35% of adolescents aged 15-16 years had had at least one BD occasion in the past 30 days, while at the national level, the series of surveys on drug use among secondary-education adolescents (ESTUDES, 2014-2015) determined that 32.2% of adolescents stated having performed BD in the last month. The aim of this editorial was to update the context of adolescent drinking and to analyse the impact of BD by age, including the health and social costs derived from it. Once the magnitude of the problem was established, some research and action lines were set out in order to guide future work on the prevention of alcohol misuse and on establishing future preventive policies on alcohol. Finally, the need to evaluate these interventions from the efficiency point of view was discussed and assessed.
A Study of the Utilization of Advanced Composites in Fuselage Structures of Commercial Aircraft
NASA Technical Reports Server (NTRS)
Watts, D. J.; Sumida, P. T.; Bunin, B. L.; Janicki, G. S.; Walker, J. V.; Fox, B. R.
1985-01-01
A study was conducted to define the technology and data needed to support the introduction of advanced composites in the future production of fuselage structure in large transport aircraft. Fuselage structures of six candidate airplanes were evaluated for the baseline component. The MD-100 was selected on the basis of its representation of 1990s fuselage structure, an available data base, its impact on the schedule and cost of the development program, and its availability and suitability for flight service evaluation. Acceptance criteria were defined, technology issues were identified, and a composite fuselage technology development plan, including full-scale tests, was identified. The plan was based on composite materials to be available in the mid to late 1980s. Program resources required to develop composite fuselage technology are estimated at a rough order of magnitude to be 877 man-years exclusive of the bird strike and impact dynamic test components. A conceptual composite fuselage was designed, retaining the basic MD-100 structural arrangement for doors, windows, wing, wheel wells, cockpit enclosure, major bulkheads, etc., resulting in a 32 percent weight savings.
Environmental Co-Benefit Opportunities of Solar Energy
NASA Astrophysics Data System (ADS)
Hernandez, R. R.; Armstrong, A.; Burney, J. A.; Easter, S. B.; Hoffacker, M. K.; Moore, K. A.
2015-12-01
Solar energy reduces greenhouse gas emissions by an order of magnitude when substituted for fossil fuels. Nonetheless, the strategic deployment of solar energy—from single rooftop modules to utility-scale solar energy power plants—can confer additional environmental co-benefits beyond its immediate use as a low-carbon energy source. In this study, we identify a diverse portfolio of environmental co-benefit opportunities of solar energy technologies resulting from synergistic innovations in land, food, energy, and water systems. For each opportunity, we provide a demonstrative, quantitative framework for environmental co-benefit valuation—including equations, models, or case studies for estimating the carbon dioxide equivalent (CO2-eq) and cost savings (US$) averted by environmental co-benefit opportunities of solar energy—and imminent research questions to improve the certainty of valuations. As land-energy-food-water nexus issues become increasingly exigent in the 21st century, we show that environmental co-benefit opportunities of solar energy are feasible in numerous environments and at a wide range of spatial scales, and are thereby able to contribute to local and regional environmental goals and to the mitigation of climate change.
SSL Pricing and Efficacy Trend Analysis for Utility Program Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuenge, J. R.
2013-10-01
Report to help utilities and energy efficiency organizations forecast the order in which important SSL applications will become cost-effective and estimate when each "tipping point" will be reached. Includes performance trend analysis from DOE's LED Lighting Facts® and CALiPER programs plus cost analysis from various sources.
Schelle, E; Rawlins, B G; Lark, R M; Webster, R; Staton, I; McLeod, C W
2008-09-01
We investigated the use of metals accumulated on tree bark for mapping their deposition across metropolitan Sheffield by sampling 642 trees of three common species. Mean concentrations of metals were generally an order of magnitude greater than in samples from a remote uncontaminated site. We found trivially small differences among tree species with respect to metal concentrations on bark, and in subsequent statistical analyses did not discriminate between them. We mapped the concentrations of As, Cd and Ni by lognormal universal kriging using parameters estimated by residual maximum likelihood (REML). The concentrations of Ni and Cd were greatest close to a large steel works, their probable source, and declined markedly within 500 m of it and from there more gradually over several kilometres. Arsenic was much more evenly distributed, probably as a result of locally mined coal burned in domestic fires for many years. Tree bark seems to integrate airborne pollution over time, and our findings show that sampling and analysing it are cost-effective means of mapping and identifying sources.
Progress in the Development of a Continuous Adiabatic Demagnetization Refrigerator
NASA Technical Reports Server (NTRS)
Shirron, Peter; Canavan, Edgar; DiPirro, Michael; Jackson, Michael; King, Todd; Tuttle, James; Krebs, Carolyn A. (Technical Monitor)
2002-01-01
We report on recent progress in the development of a continuous adiabatic demagnetization refrigerator (CADR). Continuous operation avoids the constraints of long hold times and short recycle times that lead to the generally large mass of single-shot ADRs, allowing us to achieve an order of magnitude larger cooling power per unit mass. Our current design goal is 10 μW of cooling at 50 mK using a 6-10 K heat sink. The estimated mass is less than 10 kg, including magnetic shielding of each stage. The relatively high heat rejection capability allows it to operate with a mechanical cryocooler as part of a cryogen-free, low temperature cooling system. This has the advantages of long mission life and reduced complexity and cost. We have assembled a three-stage CADR and have demonstrated continuous cooling using a superfluid helium bath as the heat sink. The temperature stability is 8 μK rms or better over the entire cycle, and the cooling power is 2.5 μW at 60 mK rising to 10 μW at 100 mK.
Regional Variation in Gravel Riverbed Mobility, Controlled by Hydrologic Regime and Sediment Supply
NASA Astrophysics Data System (ADS)
Pfeiffer, Allison M.; Finnegan, Noah J.
2018-04-01
The frequency and intensity of riverbed mobility are of paramount importance to the inhabitants of river ecosystems as well as to the evolution of bed surface structure. Because sediment supply varies by orders of magnitude across North America, the intensity of bedload transport varies by over an order of magnitude. Climate also varies widely across the continent, yielding a range of flood timing, duration, and intermittency. Together, the differences in sediment supply and hydroclimate result in diverse regimes of bed surface stability. To quantitatively characterize this regional variation, we calculate multidecadal time series of estimated bed surface mobility for 29 rivers using sediment transport equations. We use these data to compare predicted bed mobility between rivers and regions. There are statistically significant regional differences in the (a) exceedance probability of bed-mobilizing flows (W* > 0.002), (b) maximum bed mobility, and (c) number of discrete bed-mobilizing events in a year.
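A minimal sketch of one statistic from this comparison, the exceedance probability of bed-mobilizing flows: the fraction of days on which the dimensionless transport rate W* exceeds the 0.002 reference threshold. The synthetic series below stands in for W* values computed by applying a transport equation to a multidecadal discharge record.

```python
# Minimal sketch: exceedance probability of bed-mobilizing flows, defined
# here as days with dimensionless transport W* > 0.002. The lognormal
# series is a synthetic stand-in for a 30-year daily W* record.
import numpy as np

rng = np.random.default_rng(2)
w_star = rng.lognormal(mean=-9.0, sigma=2.5, size=30 * 365)  # daily values

THRESHOLD = 0.002
exceedance = np.mean(w_star > THRESHOLD)
days_per_year = np.sum(w_star > THRESHOLD) / 30.0
print(f"P(W* > {THRESHOLD}) = {exceedance:.4f}, ~{days_per_year:.1f} d/yr")
```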
A Continuous Adiabatic Demagnetization Refrigerator for Far-IR/Sub-mm Astronomy
NASA Technical Reports Server (NTRS)
Shirron, Peter; Canavan, Edgar; DiPirro, Michael; Jackson, Michael; King, Todd; Tuttle, James
2004-01-01
We report on recent progress in the development of a continuous adiabatic demagnetization refrigerator (CADR). Continuous operation avoids the constraints of long hold times and short recycle times that lead to the generally large mass of single-shot ADRs, allowing us to achieve an order of magnitude larger cooling power per unit mass. Our current design goal is 10 μW of cooling at 50 mK using a 6-10 K heat sink. The estimated mass is less than 10 kg, including magnetic shielding of each stage. The relatively high heat rejection capability allows it to operate with a mechanical cryocooler as part of a cryogen-free, low temperature cooling system. This has the advantages of long mission life and reduced complexity and cost. We have assembled a three-stage CADR and have demonstrated continuous cooling using a superfluid helium bath as the heat sink. The temperature stability is 8 μK rms or better over the entire cycle, and the cooling power is 2.5 μW at 60 mK rising to 10 μW at 100 mK.
NASA Astrophysics Data System (ADS)
Yeghikyan, Ararat
2018-04-01
Based on the analogy between interacting stellar winds of planetary nebulae and WR-nebulae, on the one hand, and the heliosphere and the expanding envelopes of supernovae, on the other, an attempt is made to calculate the differential intensity of the energetic protons accelerated to energies of 100 MeV by the shock wave. The proposed one-parameter formula for estimating the intensity at 1-100 MeV, when applied to the heliosphere, shows good agreement with the Voyager-1 data, to within a factor of less than 2. The same estimate for planetary (and WR-) nebulae yields a value 7-8 (3-4) orders of magnitude higher than the mean galactic intensity value. The resulting estimate of the intensity of energetic protons in these kinds of nebulae was used to estimate irradiation doses for certain substances, in order to show that such accelerated particles play an important role in the radiation-chemical transformations occurring there.
Brodin, Nina; Lohela-Karlsson, Malin; Swärdh, Emma; Opava, Christina H
2015-01-01
To describe the cost-effectiveness of the Physical Activity in Rheumatoid Arthritis (PARA) study intervention. Costs were collected and estimated retrospectively. Cost-effectiveness was calculated based on the intervention cost per patient with respect to change in health status (EuroQol global visual analogue scale, EQ-VAS, and EuroQol EQ-5D) and activity limitation (Health Assessment Questionnaire, HAQ) using cost-effectiveness and cost-minimization analyses. The total cost of the one-year intervention program was estimated to be €67,317, or €716 per participant. The estimated difference in total societal cost between the intervention (IG) and control (CG) groups was €580 per participant. The incremental cost-effectiveness ratio (ICER) for one point (1/100) of improvement in EQ-VAS was estimated to be €116. By offering the intervention to more affected participants in the IG compared to less affected participants, 15.5 extra points of improvement in EQ-VAS and 0.13 points of improvement on HAQ were gained at the same cost. "Ordinary physiotherapy" was most cost-effective with regard to EQ-5D. The intervention resulted in an improved effect on health status for the IG at a cost of €116 per extra point on VAS. The intervention was cost-effective if targeted towards a subgroup of more affected patients when evaluating the effect using VAS and HAQ. The physical activity coaching intervention resulted in an improved effect on VAS for the intervention group, at a higher cost. In order to maximize cost-effectiveness, this type of physical activity coaching intervention should be targeted towards patients largely affected by their RA. The intervention is cost-effective from the patients' point of view, but not from that of the general population.
Contoyannis, Paul; Hurley, Jeremiah; Grootendorst, Paul; Jeon, Sung-Hee; Tamblyn, Robyn
2005-09-01
The price elasticity of demand for prescription drugs is a crucial parameter of interest in designing pharmaceutical benefit plans. Estimating the elasticity using micro-data, however, is challenging because insurance coverage that includes deductibles, co-insurance provisions and maximum expenditure limits creates a non-linear price schedule, making price endogenous (a function of drug consumption). In this paper we exploit an exogenous change in cost-sharing within the Quebec (Canada) public Pharmacare program to estimate the price elasticity of expenditure for drugs using IV methods. This approach corrects for the endogeneity of price and incorporates the concept of a 'rational' consumer who factors into consumption decisions the price they expect to face at the margin given their expected needs. The IV method is adapted from an approach developed in the public finance literature used to estimate income responses to changes in tax schedules. The instrument is based on the price an individual would face under the new cost-sharing policy if their consumption remained at the pre-policy level. Our preferred specification leads to expenditure elasticities that are in the low range of previous estimates (between -0.12 and -0.16). Naïve OLS estimates are between 1 and 4 times these magnitudes.
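A minimal numpy sketch of the two-stage least squares idea described above, with the endogenous observed price instrumented by a simulated "policy price" that is independent of unobserved demand shocks. The data-generating values are illustrative, chosen only so that the naive OLS slope comes out a few times the true elasticity, in the direction the abstract reports.

```python
# Minimal sketch of 2SLS with one endogenous regressor and an intercept.
import numpy as np

def two_sls(y, x_endog, z_instr):
    """Return the 2SLS coefficient on the endogenous regressor."""
    n = len(y)
    Z = np.column_stack([np.ones(n), z_instr])
    X = np.column_stack([np.ones(n), x_endog])
    x_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
    beta = np.linalg.lstsq(x_hat, y, rcond=None)[0]    # second stage
    return beta[1]

rng = np.random.default_rng(3)
n = 5000
u = rng.standard_normal(n)                 # unobserved health/need shock
z = rng.standard_normal(n)                 # policy price at pre-policy use
log_p = 0.8 * z - 0.3 * u + 0.3 * rng.standard_normal(n)   # endogenous price
log_q = -0.14 * log_p + u + 0.3 * rng.standard_normal(n)   # true elasticity -0.14

print("2SLS elasticity:", two_sls(log_q, log_p, z))        # ~ -0.14
print("naive OLS slope:", np.polyfit(log_p, log_q, 1)[0])  # ~ -0.5, inflated
```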
76 FR 9806 - Agency Information Collection Activities: Notice of Detention
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-22
... keepers from the collection of information (a total capital/startup costs and operations and maintenance... in order to facilitate the determination for admissibility or may ask for an extension of time to... of Responses per Respondent: 1. Estimated Number of Total Annual Responses: 1,350. Estimated Time per...
New, national bottom-up estimate for tree-based biological ...
Nitrogen is a limiting nutrient in many ecosystems, but is also a chief pollutant from human activity. Quantifying human impacts on the nitrogen cycle and investigating natural ecosystem nitrogen cycling both require an understanding of the magnitude of nitrogen inputs from biological nitrogen fixation (BNF). A bottom-up approach to estimating BNF—scaling rates up from measurements to broader scales—is attractive because it is rooted in actual BNF measurements. However, bottom-up approaches have been hindered by scaling difficulties, and a recent top-down approach suggested that the previous bottom-up estimate was much too large. Here, we used a bottom-up approach for tree-based BNF, overcoming scaling difficulties with the systematic, immense (>70,000 N-fixing trees) Forest Inventory and Analysis (FIA) database. We employed two approaches to estimate species-specific BNF rates: published ecosystem-scale rates (kg N ha⁻¹ yr⁻¹) and published estimates of the percent of N derived from the atmosphere (%Ndfa) combined with FIA-derived growth rates. Species-specific rates can vary for a variety of reasons, so for each approach we examined how different assumptions influenced our results. Specifically, we allowed BNF rates to vary with stand age, N-fixer density, and canopy position (since N-fixation is known to require substantial light). Our estimates from this bottom-up technique are several orders of magnitude lower than previous estimates indicating
Marcellusi, Andrea; Viti, Raffaella; Incorvaia, Cristoforo; Mennini, Francesco Saverio
2015-10-01
The respiratory allergies, including allergic rhinitis and allergic asthma, represent a substantial medical and economic burden worldwide. Despite their dimension and huge economic-social burden, no data are available on the costs associated with the management of respiratory allergic diseases in Italy. The objective of this study was to estimate the average annual cost incurred by the National Health Service (NHS), as well as by society, due to respiratory allergies and their main co-morbidities in Italy. A probabilistic prevalence-based cost-of-illness model was developed to estimate an aggregate measure of the economic burden associated with respiratory allergies and their main co-morbidities in terms of direct and indirect costs. A systematic literature review was performed in order to identify both the cost per case (expressed in present value) and the number of affected patients, by applying an incidence-based estimation method. Direct costs were estimated by multiplying the hospitalization, drug and management costs derived from the literature by the Italian epidemiological data. Indirect costs were calculated based on lost productivity according to the human capital approach. Furthermore, one-way and probabilistic sensitivity analyses with 5,000 Monte Carlo simulations were performed in order to test the robustness of the results and define the proper 95% confidence intervals (CI). Overall, the total economic burden associated with respiratory allergies and their main co-morbidities was €7.33 billion (95% CI: €5.99-€8.82 billion). A percentage of 27.5% was associated with indirect costs (€2.02 billion; 95% CI: €1.72-€2.34 billion) and 72.5% with direct costs (€5.32 billion; 95% CI: €4.04-€6.77 billion). For allergic asthma, allergic rhinitis, combined allergic rhinitis and asthma, turbinate hypertrophy, and allergic conjunctivitis, the model estimates an average annual economic burden of €1.35 billion (95% CI: €1.14-€1.58), €1.72 billion (95% CI: €1.14-€2.43), €1.62 billion (95% CI: €0.91-€2.53), €0.12 billion (95% CI: €0.07-€0.17), and €0.46 billion (95% CI: €0.16-€0.92), respectively. To our knowledge, this is the first study in which direct costs (incurred by the NHS) and indirect costs (incurred by society) were taken into account to estimate the overall burden associated with respiratory allergies and their main co-morbidities in Italy. In conclusion, this work may be considered an efficient tool for public decision-makers to correctly understand the economic aspects involved in the management and treatment of respiratory allergy-induced diseases in Italy.
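A minimal sketch of the probabilistic structure described above: per-case direct and indirect costs and the number of affected patients are drawn from assumed distributions and aggregated over Monte Carlo iterations to give a mean burden with a 95% CI. All distributions and parameter values below are placeholders, not the study's inputs.

```python
# Minimal sketch of a probabilistic (Monte Carlo) cost-of-illness model:
# draw inputs from assumed distributions, aggregate, report mean and CI.
import numpy as np

rng = np.random.default_rng(4)
N_SIM = 5000

cases = rng.normal(9.0e6, 0.8e6, N_SIM)    # prevalent cases (hypothetical)
direct = rng.gamma(4.0, 150.0, N_SIM)      # direct cost per case, EUR
indirect = rng.gamma(3.0, 75.0, N_SIM)     # indirect cost per case, EUR

total = cases * (direct + indirect) / 1e9  # EUR billions per year
lo, hi = np.percentile(total, [2.5, 97.5])
print(f"burden: EUR {total.mean():.2f}bn (95% CI {lo:.2f}-{hi:.2f}bn)")
```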
Planning Inmarsat's second generation of spacecraft
NASA Astrophysics Data System (ADS)
Williams, W. P.
1982-09-01
Studies for the next generation of the Inmarsat service are outlined, covering traffic forecasting, communications capacity estimates, space segment design, cost estimates, and financial analysis. Traffic forecasting requires future demand estimates, and a computer model has been developed which estimates demand over the Atlantic, Pacific, and Indian Ocean regions. Communications estimates are based on traffic estimates, as a model converts traffic demand into a required capacity figure for a given area. The Erlang formula is used, requiring additional data such as peak-hour ratios and distribution estimates. Basic space segment technical requirements are outlined (communications payload, transponder arrangements, etc.), and further design studies involve such areas as space segment configuration, launcher and spacecraft studies, transmission planning, and earth segment configurations. Cost estimates of proposed design parameters will be performed, but options must be reduced to make construction feasible. Finally, a financial analysis will be carried out in order to calculate financial returns.
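Since the abstract names the Erlang formula, here is a minimal sketch of the Erlang B calculation that converts offered traffic into a required circuit count, using the standard numerically stable recurrence; the traffic load and the 1% grade of service are illustrative.

```python
# Minimal sketch: Erlang B blocking probability and circuit dimensioning.
# Recurrence: B(0) = 1; B(k) = A*B(k-1) / (k + A*B(k-1)).
def erlang_b(a, n):
    """Blocking probability for offered load `a` erlangs on `n` circuits."""
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

def circuits_needed(a, grade_of_service=0.01):
    """Smallest circuit count meeting the blocking target."""
    n = 1
    while erlang_b(a, n) > grade_of_service:
        n += 1
    return n

print(erlang_b(20.0, 30))     # blocking with 20 erlangs on 30 circuits (~0.9%)
print(circuits_needed(20.0))  # circuits for <=1% blocking (~30)
```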
Cost/Benefit considerations for recent saltcedar control, Middle Pecos River, New Mexico.
Barz, Dave; Watson, Richard P; Kanney, Joseph F; Roberts, Jesse D; Groeneveld, David P
2009-02-01
Major benefits were weighed against major costs associated with recent saltcedar control efforts along the Middle Pecos River, New Mexico. The area of study was restricted to both sides of the channel and excluded tributaries along the 370 km between Sumner and Brantley dams. Direct costs (helicopter spraying, dead tree removal, and revegetation) within the study area were estimated to be $2.2 million but possibly rising to $6.4 million with the adoption of an aggressive revegetation program. Indirect costs associated with increased potential for erosion and reservoir sedimentation would raise the costs due to increased evaporation from more extensive shallows in the Pecos River as it enters Brantley Reservoir. Actions such as dredging are unlikely given the conservative amount of sediment calculated (about 1% of the reservoir pool). The potential for water salvage was identified as the only tangible benefit likely to be realized under the current control strategy. Estimates of evapotranspiration (ET) using Landsat TM data allowed estimation of potential water salvage as the difference in ET before and after treatment, an amount totaling 7.41 million m(3) (6010 acre-ft) per year. Previous saltcedar control efforts of roughly the same magnitude found that salvaged ET recharged groundwater and no additional flows were realized within the river. Thus, the value of this recharge is probably less than the lowest value quoted for actual in-channel flow, and estimated to be <$63,000 per year. Though couched in terms of costs and benefits, this paper is focused on what can be considered the key trade-off under a complete eradication strategy: water salvage vs. erosion and sedimentation. It differs from previous efforts by focusing on evaluating the impacts of actual control efforts within a specific system. Total costs (direct plus potential indirect) far outweighed benefits in this simple comparison and are expected to be ongoing. Problems induced by saltcedar control may permanently reduce reservoir capacity and increase reservoir evaporation rates, which could further deplete supplies on this water short system. These potential negative consequences highlight that such costs and benefits need to be considered before initiating extensive saltcedar control programs on river systems of the western United States.
Estimation of ballistic block landing energy during 2014 Mount Ontake eruption
NASA Astrophysics Data System (ADS)
Tsunematsu, Kae; Ishimine, Yasuhiro; Kaneko, Takayuki; Yoshimoto, Mitsuhiro; Fujii, Toshitsugu; Yamaoka, Koshun
2016-05-01
The 2014 Mount Ontake eruption started just before noon on September 27, 2014. It killed 58 people, and five are still missing (as of January 1, 2016). The casualties were mainly caused by the impact of ballistic blocks around the summit area. It is necessary to know the magnitude of the block velocity and energy to construct a hazard map of ballistic projectiles and design effective shelters and mountain huts. The ejection velocities of the ballistic projectiles were estimated by comparing the observed distribution of the ballistic impact craters on the ground with simulated distributions of landing positions under various sets of conditions. A three-dimensional numerical multiparticle ballistic model adapted to account for topographic effects was used to estimate the ejection angles. From these simulations, we obtained an ejection angle of γ = 20° from vertical toward horizontal and α = 20° from north to east. With these ejection angle conditions, the ejection speed was estimated to be between 145 and 185 m/s for a previously obtained range of drag coefficients of 0.62-1.01. The order of magnitude of the mean landing energy obtained using our numerical simulation was 10⁴ J.
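A minimal single-particle sketch of this kind of ballistic calculation: a point mass with quadratic drag launched at the estimated speed and inclination, ignoring topography and the multiparticle machinery of the actual model. Block size, density, and drag coefficient are illustrative values consistent with the ranges quoted above.

```python
# Minimal sketch: 2D ballistic trajectory with quadratic air drag, reporting
# range and landing kinetic energy. All block properties are illustrative.
import numpy as np

RHO_AIR, G = 1.0, 9.81    # air density near the summit (kg/m3), gravity (m/s2)
CD = 0.8                  # within the 0.62-1.01 range quoted above
DIAM, MASS = 0.15, 4.4    # ~15 cm block, density ~2500 kg/m3 (assumed)
AREA = np.pi * (DIAM / 2) ** 2

def fly(speed, angle_from_vertical_deg, dt=0.01):
    """Integrate until the block returns to launch height; returns (x, |v|)."""
    ang = np.radians(angle_from_vertical_deg)
    pos = np.array([0.0, 0.0])
    vel = speed * np.array([np.sin(ang), np.cos(ang)])  # (horizontal, vertical)
    while pos[1] >= 0.0:
        drag = -0.5 * RHO_AIR * CD * AREA * np.linalg.norm(vel) * vel / MASS
        vel = vel + (drag + np.array([0.0, -G])) * dt
        pos = pos + vel * dt
    return pos[0], np.linalg.norm(vel)

x, v_land = fly(165.0, 20.0)  # mid-range ejection speed, 20 deg from vertical
print(f"range ~{x:.0f} m, landing energy ~{0.5 * MASS * v_land**2:.0f} J")
# Landing energy comes out of order 1e4 J, consistent with the abstract.
```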
NASA Astrophysics Data System (ADS)
Gross, K.; Prías Barragán, J. J.; Sangiao, S.; De Teresa, J. M.; Lajaunie, L.; Arenal, R.; Ariza Calderón, H.; Prieto, P.
2016-09-01
The large-scale production of graphene and reduced graphene oxide (rGO) requires low-cost and eco-friendly synthesis methods. We employed a new, simple, cost-effective pyrolytic method to synthesize oxidized-graphenic nanoplatelets (OGNP) using bamboo pyroligneous acid (BPA) as a source. Thorough analyses via high-resolution transmission electron microscopy and electron energy-loss spectroscopy provide a complete structural and chemical description of these samples at the local scale. In particular, we found that at the highest carbonization temperature the OGNP-BPA are mainly in an sp² bonding configuration (sp² fraction of 87%). To determine the electrical properties of single nanoplatelets, these were contacted by Pt nanowires deposited through focused-ion-beam-induced deposition techniques. Conductivity increases by two orders of magnitude as the oxygen content decreases from 17% to 5%, reaching a value of 2.3 × 10³ S m⁻¹ at the lowest oxygen content. Temperature-dependent conductivity reveals semiconducting transport behavior, described by the Mott three-dimensional variable-range-hopping mechanism. From the localization length, we estimate a band-gap value of 0.22(2) eV for an oxygen content of 5%. This investigation demonstrates the great potential of the OGNP-BPA for technological applications, given that their structural and electrical behavior is similar to that of highly reduced rGO sheets obtained by more sophisticated conventional synthesis methods.
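A minimal sketch of the Mott analysis mentioned above: conductivity obeying σ(T) = σ₀ exp[-(T₀/T)^(1/4)] is linear in ln σ versus T^(-1/4), so T₀ follows from the slope of that fit. The synthetic data and parameter values below are illustrative, not the paper's measurements.

```python
# Minimal sketch: extracting the Mott 3D variable-range-hopping parameters
# from sigma(T) = sigma0 * exp(-(T0/T)**0.25) via a linearized fit.
import numpy as np

T = np.linspace(100.0, 300.0, 40)      # temperatures (K)
sigma0, T0 = 5.0e4, 2.0e6              # illustrative S/m and K
sigma = sigma0 * np.exp(-(T0 / T) ** 0.25)

slope, intercept = np.polyfit(T ** -0.25, np.log(sigma), 1)
print("T0 estimate:", slope**4, "K")            # slope = -T0**(1/4)
print("sigma0 estimate:", np.exp(intercept), "S/m")
```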
Shin, Hyeong-Moo; McKone, Thomas E.; Nishioka, Marcia G.; Fallin, M. Daniele; Croen, Lisa A.; Hertz-Picciotto, Irva; Newschaffer, Craig J.; Bennett, Deborah H.
2014-01-01
Consumer products and building materials emit a number of semivolatile organic compounds (SVOCs) in the indoor environment. Because indoor SVOCs accumulate in dust, we explore the use of dust to determine source strength and report here on analysis of dust samples collected in 30 U.S. homes for six phthalates, four personal care product ingredients, and five flame retardants. We then use a fugacity-based indoor mass-balance model to estimate the whole house emission rates of SVOCs that would account for the measured dust concentrations. Di-2-ethylhexyl phthalate (DEHP) and di-iso-nonyl phthalate (DiNP) were the most abundant compounds in these dust samples. On the other hand, the estimated emission rate of diethyl phthalate (DEP) is the largest among phthalates, although its dust concentration is over two orders of magnitude smaller than DEHP and DiNP. The magnitude of the estimated emission rate that corresponds to the measured dust concentration is found to be inversely correlated with the vapor pressure of the compound, indicating that dust concentrations alone cannot be used to determine which compounds have the greatest emission rates. The combined dust-assay modeling approach shows promise for estimating indoor emission rates for SVOCs. PMID:24118221
Hospital treatment for fluid overload in the Medicare hemodialysis population.
Arneson, Thomas J; Liu, Jiannong; Qiu, Yang; Gilbertson, David T; Foley, Robert N; Collins, Allan J
2010-06-01
Fluid overload in hemodialysis patients sometimes requires emergent dialysis, but the magnitude of this care has not been characterized. This study aimed to estimate the magnitude of fluid overload treatment episodes for the Medicare hemodialysis population in hospital settings, including emergency departments. Point-prevalent hemodialysis patients were identified from the Centers for Medicare and Medicaid Renal Management Information System and Standard Analytical Files. Fluid overload treatment episodes were defined by claims for care in inpatient, hospital observation, or emergency department settings with primary discharge diagnoses of fluid overload, heart failure, or pulmonary edema, and dialysis performed on the day of or after admission. Exclusion criteria included stays >5 days. Cost was defined as total Medicare allowable costs for identified episodes. Associations between patient characteristics and episode occurrence and cost were analyzed. For 25,291 patients (14.3%), 41,699 care episodes occurred over a mean follow-up time of 2 years: 86% inpatient, 9% emergency department, and 5% hospital observation. Heart failure was the primary diagnosis in 83% of episodes, fluid overload in 11%, and pulmonary edema in 6%. Characteristics associated with more frequent events included age <45 years, female sex, African-American race, causes of ESRD other than diabetes, dialysis duration of 1 to 3 years, fewer dialysis sessions per week at baseline, hospitalizations during baseline, and most comorbid conditions. Average cost was $6,372 per episode; total costs were approximately $266 million. Among U.S. hemodialysis patients, fluid overload treatment is common and expensive. Further study is necessary to identify prevention opportunities.
Diffractive interference optical analyzer (DiOPTER)
NASA Astrophysics Data System (ADS)
Sasikumar, Harish; Prasad, Vishnu; Pal, Parama; Varma, Manoj M.
2016-03-01
This report demonstrates a method for high-resolution refractometric measurements using what we have termed a Diffractive Interference Optical Analyzer (DiOpter). The setup consists of a laser, a polarizer, a transparent diffraction grating, and Si photodetectors. The sensor is based on the differential response of diffracted orders to bulk refractive index changes. In these setups, the differential read-out of the diffracted orders suppresses signal drifts and enables time-resolved determination of refractive index changes in the sample cell. A remarkable feature of this device is that under appropriate conditions, the measurement sensitivity of the sensor can be enhanced by more than two orders of magnitude due to interference between multiply reflected diffracted orders. A noise-equivalent limit of detection (LoD) of 6×10⁻⁷ RIU was achieved in glass. This work focuses on devices with an integrated sample well, made on low-cost PDMS. As the detection methodology is experimentally straightforward, it can be used across a wide array of applications, ranging from detecting changes in surface adsorbates via binding reactions to estimating refractive index (and hence concentration) variations in bulk samples. An exciting prospect of this technique is the potential integration of this device with smartphones using a simple interface based on a transmission-mode configuration. In a transmission configuration, we were able to achieve an LoD of 4×10⁻⁴ RIU, which is sufficient to explore several applications in food quality testing and related fields. We envision the future of this platform as a personal handheld optical analyzer for applications ranging from environmental sensing to healthcare and quality testing of food products.
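A minimal sketch of the differential read-out principle described above: two diffracted orders that respond with opposite sign to an index change are combined as (I1 - I2)/(I1 + I2), which cancels common-mode laser power drift. The responsivity and noise figures below are illustrative, not the device's specifications.

```python
# Minimal sketch: differential read-out of two diffracted orders suppresses
# common-mode drift, leaving a signal proportional to the index change.
import numpy as np

rng = np.random.default_rng(5)
n_samples = 1000
dn = 1e-5 * np.linspace(0, 1, n_samples)        # index change ramp (RIU)
drift = 1.0 + 0.02 * np.sin(np.linspace(0, 3, n_samples))  # laser power drift

I1 = drift * (1.0 + 250.0 * dn) + rng.normal(0, 1e-4, n_samples)
I2 = drift * (1.0 - 250.0 * dn) + rng.normal(0, 1e-4, n_samples)

signal = (I1 - I2) / (I1 + I2)                  # ~250*dn, drift cancels
print("responsivity ~", np.polyfit(dn, signal, 1)[0], "per RIU")
```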
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-01
... respondent in understanding the types of information the OCC needs in order to process a filing. An applicant... system. Type of Review: Regular. Affected Public: Individuals or households; Businesses or other for... information technology; and (e) Estimates of capital or startup costs and costs of operation, maintenance, and...
Zoni Berisso, M; Landolina, M; Ermini, G; Parretti, D; Zingarini, G L; Degli Esposti, L; Cricelli, C; Boriani, G
2017-01-01
Atrial fibrillation (AF) is a relevant item of expenditure for national healthcare systems. The aim of the study was to estimate the annual costs of AF in Italy. The Italian Survey of Atrial Fibrillation Management (ISAF) Study enrolled 6,036 patients with AF among 295,906 subjects representative of the Italian population. Data were collected by 233 General Practitioners (GPs) distributed across Italy. Quantities of resources used during the 5 years preceding the ISAF screening were inferred from the survey data and multiplied by the current Italian unit costs of 2015 in order to estimate the mean per-patient annual cumulative cost of AF. Patients were subdivided, on the basis of the number of hospitalizations, invasive/non-invasive diagnostic tests and invasive therapeutic procedures, into three clinical subsets: "low cost", "medium cost" and "high cost". The estimated mean costs per patient per year were €613, €891 and €1,213 for the "low cost", "medium cost" and "high cost" clinical scenarios, respectively. Hospitalizations and inpatient interventional procedures accounted for more than 80% of the cumulative annual costs. The mean annual cost among patients pursuing a "rhythm control" strategy was €956. In Italy, the estimated costs of AF per patient per year are lower than those reported in other developed countries and vary widely in relation to the different characteristics of AF patients. Hospitalizations and interventional procedures are the main drivers of costs. The mean annual cost of AF is mainly influenced by the duration of the period of observation and the patients' characteristics. Measures to reduce hospitalizations are needed.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
NASA Astrophysics Data System (ADS)
Hall, Alex; Taylor, Andy
2017-06-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.