Sample records for reduction estimating methodology

  1. Reduction of CO2 Emissions Due to Wind Energy - Methods and Issues in Estimating Operational Emission Reductions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holttinen, Hannele; Kiviluoma, Juha; McCann, John

    2015-10-05

    This paper presents ways of estimating the CO2 reductions of wind power using different methodologies. Estimates based on historical data have more methodological pitfalls than estimates based on dispatch simulations. Taking into account the exchange of electricity with neighboring regions is challenging for all methods. Results for CO2 emission reductions are shown for several countries. Wind power reduces emissions by about 0.3-0.4 tCO2/MWh when replacing mainly gas-fired generation and by up to 0.7 tCO2/MWh when replacing mainly coal-fired generation. The paper focuses on CO2 emissions from the power system operation phase; long-term impacts are briefly discussed.
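
    The displacement-factor arithmetic behind such estimates is simple; a minimal Python sketch using the per-MWh factors quoted above is given below. The annual wind generation figure is purely hypothetical.

    ```python
    # Illustrative back-of-the-envelope use of the displacement factors quoted above.
    # The annual wind generation figure is hypothetical; the factors (tCO2/MWh) are
    # the ranges reported in the abstract.

    def co2_reduction_mt(wind_generation_mwh: float, displacement_t_per_mwh: float) -> float:
        """Return avoided emissions in million tonnes of CO2 (Mt)."""
        return wind_generation_mwh * displacement_t_per_mwh / 1e6

    annual_wind_mwh = 10_000_000          # hypothetical: 10 TWh of wind generation
    for label, factor in [("gas-dominated system", 0.35), ("coal-dominated system", 0.7)]:
        print(f"{label}: ~{co2_reduction_mt(annual_wind_mwh, factor):.1f} MtCO2 avoided per year")
    ```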

  2. COST OF SELECTIVE CATALYTIC REDUCTION (SCR) APPLICATION FOR NOX CONTROL ON COAL-FIRED BOILERS

    EPA Science Inventory

    The report provides a methodology for estimating budgetary costs associated with retrofit applications of selective catalytic reduction (SCR) technology on coal-fired boilers. SCR is a postcombustion nitrogen oxides (NOx) control technology capable of providing NOx reductions >90...

  3. Multi-point estimation of total energy expenditure: a comparison between zinc-reduction and platinum-equilibration methodologies.

    PubMed

    Sonko, Bakary J; Miller, Leland V; Jones, Richard H; Donnelly, Joseph E; Jacobsen, Dennis J; Hill, James O; Fennessey, Paul V

    2003-12-15

    Reducing water to hydrogen gas with zinc or uranium metal for determining the D/H ratio is both tedious and time consuming. This has forced most energy metabolism investigators to use the "two-point" technique rather than the "multi-point" technique for estimating total energy expenditure (TEE). Recently, we purchased a new platinum (Pt)-equilibration system that significantly reduces both the time and labor required for D/H ratio determination. In this study, we compared TEE in nine overweight but healthy subjects estimated using the traditional Zn-reduction method with that obtained from the new Pt-equilibration system. Rate constants, pool spaces, and CO2 production rates obtained with the two methodologies were not significantly different. Correlation analysis demonstrated that TEEs estimated using the two methods were significantly correlated (r=0.925, p=0.0001). Sample equilibration time was reduced by 66% compared with that of similar methods. The data demonstrate that the Zn-reduction method can be replaced by the Pt-equilibration method when TEE is estimated using the multi-point technique. Furthermore, D equilibration time was significantly reduced.

  4. COST OF SELECTIVE CATALYTIC REDUCTION (SCR) APPLICATION FOR NOX CONTROL ON COAL-FIRED BOILERS

    EPA Science Inventory

    The report provides a methodology for estimating budgetary costs associated with retrofit applications of selective catalytic reduction (SCR) technology on coal-fired boilers. SCR is a post-combustion nitrogen oxides (NOx) control technology capable of providing NOx reductions...

  5. Potential and Limitations of an Improved Method to Produce Dynamometric Wheels

    PubMed Central

    García de Jalón, Javier

    2018-01-01

    A new methodology for the estimation of tyre-contact forces is presented. The new procedure is an evolution of a previous method based on harmonic elimination techniques, developed with the aim of producing low-cost dynamometric wheels. While the original method required stress measurement along many radial lines of the rim and the fulfillment of some rigid symmetry conditions, the new methodology described in this article significantly reduces the number of required measurement points and greatly relaxes the symmetry constraints, without compromising the estimation error level. Reducing the number of measured radial lines increases the ripple of the demodulated signals due to non-eliminated higher-order harmonics, so the calibration procedure must be adapted to this new scenario. A new calibration procedure that takes into account the angular position of the wheel is described in full. The new methodology is tested on a standard commercial five-spoke car wheel. The results are compared qualitatively to those obtained with the former methodology, leading to the conclusion that the new method is both simpler and more robust thanks to the reduction in the number of measuring points, while the contact-force estimation error remains at an acceptable level. PMID:29439427

  6. Quantitative health impact assessment of transport policies: two simulations related to speed limit reduction and traffic re-allocation in the Netherlands.

    PubMed

    Schram-Bijkerk, D; van Kempen, E; Knol, A B; Kruize, H; Staatsen, B; van Kamp, I

    2009-10-01

    Few quantitative health impact assessments (HIAs) of transport policies have been published so far, and a common methodology for such assessments is lacking. The aim was to evaluate the usability of existing HIA methodology for quantifying the health effects of transport policies at the local level. The health impact of two simulated but realistic transport interventions - speed limit reduction and traffic re-allocation - was quantified by selecting traffic-related exposures and health endpoints, modelling population exposure, selecting exposure-effect relations and estimating the number of local traffic-related cases and the disease burden, expressed in disability-adjusted life-years (DALYs), before and after the intervention. Exposure information was difficult to retrieve because of the local scale of the interventions, and exposure-effect relations for subgroups and combined effects were missing. Given the uncertainty in the outcomes originating from this kind of missing information, the simulated changes in population health from the two local traffic interventions were estimated to be small (<5%), except for the estimated reduction in DALYs from fewer traffic accidents (60%) due to speed limit reduction. Quantitative HIA of transport policies at a local scale is possible, provided that data on exposures, the exposed population and their baseline health status are available. The HIA results should be interpreted in the context of the quality of the input data and the assumptions and uncertainties of the analysis.

  7. Educational Inequality in the United States: Methodology and Historical Estimation of Education Gini Coefficients

    ERIC Educational Resources Information Center

    Bennett, Daniel L.

    2011-01-01

    This paper estimates historical measures of equality in the distribution of education in the United States by age group and sex. Using educational attainment data for the population, the EduGini measure indicates that educational inequality in the U.S. declined significantly between 1950 and 2009. Reductions in educational inequality were more…

  8. Projecting effects of improvements in passive safety of the New Zealand light vehicle fleet.

    PubMed

    Keall, Michael; Newstead, Stuart; Jones, Wayne

    2007-09-01

    In the year 2000, as part of the process for setting New Zealand road safety targets, a projection was made for a reduction in social cost of 15.5 percent associated with improvements in crashworthiness, which is a measure of the occupant protection of the light passenger vehicle fleet. Since that document was produced, new estimates of crashworthiness have become available, allowing for a more accurate projection. The objective of this paper is to describe a methodology for projecting changes in casualty rates associated with passive safety features and to apply this methodology to produce a new prediction. The shape of the age distribution of the New Zealand light passenger vehicle fleet was projected to 2010. Projected improvements in crashworthiness and associated reductions in social cost were also modeled based on historical trends. These projections of changes in the vehicle fleet age distribution and of improvements in crashworthiness together provided a basis for estimating the future performance of the fleet in terms of secondary safety. A large social cost reduction of about 22 percent for 2010 compared to the year 2000 was predicted due to the expected huge impact of improvements in passive vehicle features on road trauma in New Zealand. Countries experiencing improvements in their vehicle fleets can also expect significant reductions in road injury compared to a less crashworthy passenger fleet. Such road safety gains can be analyzed using some of the methodology described here.

  9. Methods to assess geological CO2 storage capacity: Status and best practice

    USGS Publications Warehouse

    Heidug, Wolf; Brennan, Sean T.; Holloway, Sam; Warwick, Peter D.; McCoy, Sean; Yoshimura, Tsukasa

    2013-01-01

    To understand the emission reduction potential of carbon capture and storage (CCS), decision makers need to understand the amount of CO2 that can be safely stored in the subsurface and the geographical distribution of storage resources. Estimates of storage resources need to be made using reliable and consistent methods. Previous estimates of CO2 storage potential for a range of countries and regions have been based on a variety of methodologies, resulting in a correspondingly wide range of estimates. Consequently, there has been uncertainty about which of the methodologies were most appropriate in given settings, and whether the estimates produced by these methods were useful to policy makers trying to determine the appropriate role of CCS. In 2011, the IEA convened two workshops which brought together experts from six national survey organisations to review CO2 storage assessment methodologies and make recommendations on how to harmonise CO2 storage estimates worldwide. This report presents the findings of these workshops and an internationally shared guideline for quantifying CO2 storage resources.

  10. A comparison of the poverty impact of transfers, taxes and market income across five OECD countries.

    PubMed

    Bibi, Sami; Duclos, Jean-Yves

    2010-01-01

    This paper compares the poverty reduction impact of income sources, taxes and transfers across five OECD countries. Since the estimation of that impact can depend on the order in which the various income sources are introduced into the analysis, it is done by using the Shapley value. Estimates of the poverty reduction impact are presented in a normalized and unnormalized fashion, in order to take into account the total as well as the per dollar impacts. The methodology is applied to data from the Luxembourg Income Study database.
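
    A rough sketch of a Shapley decomposition of this kind is given below: the poverty-reduction contribution of each income component is its marginal effect on the headcount ratio, averaged over all orders in which components can be added. The components, poverty line and synthetic microdata are invented for illustration and are not from the Luxembourg Income Study.

    ```python
    import itertools
    import numpy as np

    # Minimal sketch of a Shapley decomposition of the poverty-reduction impact of
    # income components (market income, transfers, taxes). All data below are
    # invented; the paper applies the idea to Luxembourg Income Study microdata.

    rng = np.random.default_rng(0)
    n = 1_000
    market = rng.lognormal(mean=9.5, sigma=0.8, size=n)
    components = {
        "market_income": market,
        "transfers":     rng.gamma(shape=2.0, scale=1_500.0, size=n),
        "taxes":         -0.2 * market,
    }
    poverty_line = 10_000.0

    def headcount(income):
        """Poverty headcount ratio: share of individuals below the poverty line."""
        return float(np.mean(income < poverty_line))

    def poverty_with(subset):
        """Headcount when only the components in `subset` are counted as income."""
        total = np.zeros(n)
        for name in subset:
            total = total + components[name]
        return headcount(total)

    names = list(components)
    perms = list(itertools.permutations(names))
    shapley = {name: 0.0 for name in names}
    for order in perms:
        included = []
        for name in order:
            before = poverty_with(included)
            included.append(name)
            after = poverty_with(included)
            shapley[name] += (after - before) / len(perms)   # negative = reduces poverty

    print("headcount with all components:", poverty_with(names))
    for name, value in shapley.items():
        print(f"{name}: Shapley contribution to the headcount = {value:+.3f}")
    ```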

  11. Software for Probabilistic Risk Reduction

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Michel, Thierry; Madsen, Soren; Chapin, Elaine; Rodriguez, Ernesto

    2004-01-01

    A computer program implements a methodology, denoted probabilistic risk reduction, that is intended to aid in planning the development of complex software and/or hardware systems. This methodology integrates two complementary prior methodologies: (1) that of probabilistic risk assessment and (2) a risk-based planning methodology, implemented in a prior computer program known as Defect Detection and Prevention (DDP), in which multiple requirements and the beneficial effects of risk-mitigation actions are taken into account. The present methodology and the software are able to accommodate both process knowledge (notably of the efficacy of development practices) and product knowledge (notably of the logical structure of a system, the development of which one seeks to plan). Estimates of the costs and benefits of a planned development can be derived. Functional and non-functional aspects of software can be taken into account, and trades made among them. It becomes possible to optimize the planning process in the sense that it becomes possible to select the best suite of process steps and design choices to maximize the expectation of success while remaining within budget.

  12. Potential uncertainty reduction in model-averaged benchmark dose estimates informed by an additional dose study.

    PubMed

    Shao, Kan; Small, Mitchell J

    2011-10-01

    A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose-response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose-response models (logistic and quantal-linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5-10%. The results demonstrate that dose selection for studies that subsequently inform dose-response models can benefit from consideration of how these models will be fit, combined, and interpreted. © 2011 Society for Risk Analysis.
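
    The sketch below illustrates the model-averaging idea on invented quantal bioassay data. It is a deliberate simplification: the two dose-response forms named in the abstract (logistic and quantal-linear) are fitted by maximum likelihood and combined with AIC weights, rather than with the Bayesian MCMC/BMA machinery the study actually uses.

    ```python
    import numpy as np
    from scipy.optimize import minimize, brentq

    # Simplified sketch of model-averaged benchmark-dose (BMD) estimation for quantal
    # bioassay data. Dose groups, group sizes, and responder counts are invented.

    dose = np.array([0.0, 1.0, 3.0, 10.0, 30.0])   # hypothetical dose groups
    n    = np.array([50,  50,  50,  50,   50])     # animals per group
    k    = np.array([2,   4,   8,   20,   40])     # responders per group
    bmr  = 0.10                                    # benchmark response (extra risk)

    def logistic_p(d, a, b):
        return 1.0 / (1.0 + np.exp(-(a + b * d)))

    def qlinear_p(d, g, b):
        return g + (1.0 - g) * (1.0 - np.exp(-b * d))

    def nll(params, prob_fn):
        """Negative binomial log-likelihood of the dose-response model."""
        p = np.clip(prob_fn(dose, *params), 1e-9, 1 - 1e-9)
        return -np.sum(k * np.log(p) + (n - k) * np.log(1.0 - p))

    fits = {
        "logistic":       minimize(nll, x0=[-3.0, 0.1], args=(logistic_p,), method="Nelder-Mead"),
        "quantal-linear": minimize(nll, x0=[0.05, 0.05], args=(qlinear_p,), method="Nelder-Mead"),
    }

    def bmd_logistic(a, b):
        p0 = logistic_p(0.0, a, b)
        target = p0 + bmr * (1.0 - p0)             # response giving 10% extra risk
        return brentq(lambda d: logistic_p(d, a, b) - target, 1e-9, 1e4)

    def bmd_qlinear(g, b):
        return -np.log(1.0 - bmr) / b

    bmds = {
        "logistic":       bmd_logistic(*fits["logistic"].x),
        "quantal-linear": bmd_qlinear(*fits["quantal-linear"].x),
    }
    aic = {m: 2 * len(fit.x) + 2 * fit.fun for m, fit in fits.items()}
    w = np.array([np.exp(-0.5 * (aic[m] - min(aic.values()))) for m in fits])
    w = w / w.sum()

    bmd_avg = sum(weight * bmds[m] for weight, m in zip(w, fits))
    for m in fits:
        print(f"{m}: BMD ~ {bmds[m]:.2f}, AIC = {aic[m]:.1f}")
    print(f"model-averaged BMD ~ {bmd_avg:.2f}")
    ```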

  13. Benzene Case Study Final Report - Second Prospective Report Study Science Advisory Board Review, July 2009

    EPA Pesticide Factsheets

    EPA developed a methodology for estimating the health benefits of benzene reductions and has applied it in a metropolitan-scale case study of the benefits of CAA controls on benzene emissions to accompany the main 812 analysis.

  14. Draft Benzene Case Study Review - Second Prospective Report Study Science Advisory Board Review, March 2008

    EPA Pesticide Factsheets

    EPA developed a methodology for estimating the health benefits of benzene reductions and has applied it in a metropolitan-scale case study of the benefits of CAA controls on benzene emissions to accompany the main 812 analysis.

  15. Use of operating room information system data to predict the impact of reducing turnover times on staffing costs.

    PubMed

    Dexter, Franklin; Abouleish, Amr E; Epstein, Richard H; Whitten, Charles W; Lubarsky, David A

    2003-10-01

    Potential benefits of reducing turnover times are both quantitative (e.g., completing more cases and reducing staffing costs) and qualitative (e.g., improving professional satisfaction). Analyses have shown the quantitative arguments to be unsound except for reducing staffing costs. We describe a methodology by which each surgical suite can use its own numbers to calculate its individual potential reduction in staffing costs from reducing its turnover times. The calculations estimate the optimal allocated operating room (OR) time (based on maximizing OR efficiency) before and after reducing the maximum and average turnover times. At four academic tertiary hospitals, reductions in average turnover times of 3 to 9 min would result in 0.8% to 1.8% reductions in staffing cost. Reductions in average turnover times of 10 to 19 min would result in 2.5% to 4.0% reductions in staffing costs. These reductions in staffing cost are achieved predominantly by reducing allocated OR time, not by reducing the hours that staff work late. Heads of anesthesiology groups often serve on OR committees that are fixated on turnover times. Rather than having to argue based on scientific studies, this methodology provides the ability to show the specific quantitative effects (small decreases in staffing costs and allocated OR time) of reducing turnover time using a surgical suite's own data. Many anesthesiologists work at hospitals where surgeons and/or operating room (OR) committees focus repeatedly on turnover time reduction. We developed a methodology by which the reductions in staffing cost as a result of turnover time reduction can be calculated for each facility using its own data. Staffing cost reductions are generally very small and would be achieved predominantly by reducing allocated OR time to the surgeons.

  16. Methodology to predict a maximum follow-up period for breast cancer patients without significantly reducing the chance of detecting a local recurrence

    NASA Astrophysics Data System (ADS)

    Mould, Richard F.; Asselain, Bernard; DeRycke, Yann

    2004-03-01

    For breast cancer, where the prognosis of early-stage disease is very good and local recurrences, when they do occur, can present several years after treatment, the hospital resources required for annual follow-up examinations of what can be several hundred patients are financially significant. If, therefore, there is some method to estimate a maximum necessary length of follow-up Tmax, then savings of physicians' time as well as reductions in outpatient workload can be achieved. In modern oncology, where expenses continue to increase due to staff salaries and the cost of chemotherapy drugs and of new treatment and imaging technology, the economic situation can no longer be ignored. The methodology of parametric modelling based on the lognormal distribution is described, showing that useful estimates of Tmax can be made by making a trade-off between Tmax and the fraction of patients who will experience a delay in the detection of their local recurrence. This trade-off depends on the chosen tail of the lognormal distribution. The methodology is described for stage T1 and T2 breast cancer, and it is found that Tmax = 4 years, which is a significant reduction on the usual maximum of 10 years of follow-up employed by many hospitals for breast cancer patients. The methodology is equally applicable to cancers at other sites where the prognosis is good and some local recurrences may not occur until several years post-treatment.
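
    A minimal sketch of the Tmax trade-off is shown below, assuming the time to local recurrence follows a lognormal distribution; the median and shape parameters are invented and would in practice be fitted to institutional follow-up data.

    ```python
    from scipy.stats import lognorm

    # Minimal sketch of the Tmax trade-off described above, assuming a lognormal
    # time to local recurrence. Both parameters below are invented.

    median_years = 1.5          # hypothetical median time to local recurrence
    sigma = 0.8                 # hypothetical lognormal shape parameter
    recurrence_time = lognorm(s=sigma, scale=median_years)

    for missed_fraction in (0.10, 0.05, 0.01):
        t_max = recurrence_time.ppf(1.0 - missed_fraction)
        print(f"follow-up of {t_max:.1f} years detects all but "
              f"{missed_fraction:.0%} of local recurrences")
    ```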

  17. Multidimensional Poverty in China: Findings Based on the CHNS

    ERIC Educational Resources Information Center

    Yu, Jiantuo

    2013-01-01

    This paper estimates multidimensional poverty in China by applying the Alkire-Foster methodology to the China Health and Nutrition Survey 2000-2009 data. Five dimensions are included: income, living standard, education, health and social security. Results suggest that rapid economic growth has resulted not only in a reduction in income poverty but…

  18. Integrating risk assessment and life cycle assessment: a case study of insulation.

    PubMed

    Nishioka, Yurika; Levy, Jonathan I; Norris, Gregory A; Wilson, Andrew; Hofstetter, Patrick; Spengler, John D

    2002-10-01

    Increasing residential insulation can decrease energy consumption and provide public health benefits, given changes in emissions from fuel combustion, but it also has cost implications and ancillary risks and benefits. Risk assessment or life cycle assessment can be used to calculate the net impacts and determine whether more stringent energy codes or other conservation policies would be warranted, but few analyses have combined the critical elements of both methodologies. In this article, we present the first portion of a combined analysis, with the goal of estimating the net public health impacts of increasing residential insulation for new housing from current practice to the latest International Energy Conservation Code (IECC 2000). We model state-by-state residential energy savings and evaluate reductions in emissions of particulate matter less than 2.5 μm in diameter (PM2.5), NOx, and SO2. We use past dispersion modeling results to estimate reductions in exposure, and we apply concentration-response functions for premature mortality and selected morbidity outcomes using current epidemiological knowledge of the effects of PM2.5 (primary and secondary). We find that an insulation policy shift would save 3 × 10^14 British thermal units (3 × 10^17 J) over a 10-year period, resulting in reduced emissions of 1,000 tons of PM2.5, 30,000 tons of NOx, and 40,000 tons of SO2. These emission reductions yield an estimated 60 fewer fatalities during this period, with the geographic distribution of health benefits differing from the distribution of energy savings because of differences in energy sources, population patterns, and meteorology. We discuss the methodology to be used to integrate life cycle calculations, which can ultimately yield estimates that can be compared with costs to determine the influence of external costs on benefit-cost calculations.

  19. SRB ascent aerodynamic heating design criteria reduction study, volume 1

    NASA Technical Reports Server (NTRS)

    Crain, W. K.; Frost, C. L.; Engel, C. D.

    1989-01-01

    An independent set of solid rocket booster (SRB) convective ascent design environments was produced to serve as a check on the Rockwell IVBC-3 environments used to design for the ascent phase of flight. In addition, support was provided for lowering the design environments so that Thermal Protection System (TPS) material, applied on the basis of conservative estimates, could be removed, leading to a reduction in SRB refurbishment time and cost. Ascent convective heating rates and loads were generated at locations on the SRB where lowering the thermal environment would impact the TPS design. The ascent thermal environments are documented along with the wind tunnel/flight test data base used, as well as the trajectory and environment generation methodology. The methodology, together with environment summaries compared to the 1980 Design and Rockwell IVBC-3 Design Environments, is presented in this volume (Volume 1).

  20. Multi crop model climate risk country-level management design: case study on the Tanzanian maize production system

    NASA Astrophysics Data System (ADS)

    Chavez, E.

    2015-12-01

    Future climate projections indicate that a very serious consequence of post-industrial anthropogenic global warming is the likely greater frequency and intensity of extreme hydrometeorological events such as heat waves, droughts, storms, and floods. The design of national and international policies aimed at building more resilient and environmentally sustainable food systems needs to rely on access to robust and reliable data, which are largely absent. In this context, improving the modelling of current and future agricultural production losses using the unifying language of risk is paramount. In this study, we use a methodology that integrates the current understanding of the interacting climate, agro-environmental, crop, and economic systems to determine short- to long-term estimates of crop production loss risk under different environmental, climate, and adaptation scenarios. This methodology is applied to Tanzania to assess optimum risk reduction and maize production increase paths under different climate scenarios. The simulations use inputs from three crop models (DSSAT, APSIM, WRSI) run under different technological scenarios, which allows crop-model-driven bias in the risk exposure estimates to be assessed. The results also distinguish region-specific optimum climate risk reduction policies under historical as well as RCP2.6 and RCP8.5 climate scenarios. The region-specific risk profiles obtained provide a simple framework for determining cost-effective risk management policies for Tanzania and allow investments in risk reduction and risk transfer to be combined optimally.

  1. High-global warming potential F-gas emissions in California: comparison of ambient-based versus inventory-based emission estimates, and implications of refined estimates.

    PubMed

    Gallagher, Glenn; Zhan, Tao; Hsu, Ying-Kuang; Gupta, Pamela; Pederson, James; Croes, Bart; Blake, Donald R; Barletta, Barbara; Meinardi, Simone; Ashford, Paul; Vetter, Arnie; Saba, Sabine; Slim, Rayan; Palandre, Lionel; Clodic, Denis; Mathis, Pamela; Wagner, Mark; Forgie, Julia; Dwyer, Harry; Wolf, Katy

    2014-01-21

    To provide information for greenhouse gas reduction policies, the California Air Resources Board (CARB) inventories annual emissions of high-global-warming potential (GWP) fluorinated gases, the fastest growing sector of greenhouse gas (GHG) emissions globally. Baseline 2008 F-gas emissions estimates for selected chlorofluorocarbons (CFC-12), hydrochlorofluorocarbons (HCFC-22), and hydrofluorocarbons (HFC-134a) made with an inventory-based methodology were compared to emissions estimates made by ambient-based measurements. Significant discrepancies were found, with the inventory-based emissions methodology resulting in a systematic 42% under-estimation of CFC-12 emissions from older refrigeration equipment and older vehicles, and a systematic 114% overestimation of emissions for HFC-134a, a refrigerant substitute for phased-out CFCs. Initial, inventory-based estimates for all F-gas emissions had assumed that equipment is no longer in service once it reaches its average lifetime of use. Revised emission estimates using improved models for equipment age at end-of-life, inventories, and leak rates specific to California resulted in F-gas emissions estimates in closer agreement to ambient-based measurements. The discrepancies between inventory-based estimates and ambient-based measurements were reduced from -42% to -6% for CFC-12, and from +114% to +9% for HFC-134a.

  2. Methodologies for Verification and Validation of Space Launch System (SLS) Structural Dynamic Models

    NASA Technical Reports Server (NTRS)

    Coppolino, Robert N.

    2018-01-01

    Responses to challenges associated with verification and validation (V&V) of Space Launch System (SLS) structural dynamics models are presented in this paper. Four methodologies addressing specific requirements for V&V are discussed. (1) Residual Mode Augmentation (RMA), which has gained acceptance by various principals in the NASA community, defines efficient and accurate FEM modal sensitivity models that are useful in test-analysis correlation and reconciliation and parametric uncertainty studies. (2) Modified Guyan Reduction (MGR) and Harmonic Reduction (HR, introduced in 1976), developed to remedy difficulties encountered with the widely used Classical Guyan Reduction (CGR) method, are presented. MGR and HR are particularly relevant for estimation of "body dominant" target modes of shell-type SLS assemblies that have numerous "body", "breathing" and local component constituents. Realities associated with configuration features and "imperfections" cause "body" and "breathing" mode characteristics to mix resulting in a lack of clarity in the understanding and correlation of FEM- and test-derived modal data. (3) Mode Consolidation (MC) is a newly introduced procedure designed to effectively "de-feature" FEM and experimental modes of detailed structural shell assemblies for unambiguous estimation of "body" dominant target modes. Finally, (4) Experimental Mode Verification (EMV) is a procedure that addresses ambiguities associated with experimental modal analysis of complex structural systems. Specifically, EMV directly separates well-defined modal data from spurious and poorly excited modal data employing newly introduced graphical and coherence metrics.
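
    For readers unfamiliar with the baseline method, the sketch below shows Classical Guyan Reduction (static condensation) on a toy 3-DOF system; the matrices and the choice of master DOFs are invented and are unrelated to any SLS model.

    ```python
    import numpy as np

    # Minimal sketch of Classical Guyan Reduction (static condensation), the
    # baseline the abstract says MGR and HR were developed to improve upon.
    # The stiffness/mass matrices below are invented; `masters` are retained DOFs.

    def guyan_reduce(K, M, masters):
        """Condense stiffness K and mass M onto the master (retained) DOFs."""
        n = K.shape[0]
        slaves = [i for i in range(n) if i not in masters]
        cols = np.arange(len(masters))
        Kss = K[np.ix_(slaves, slaves)]
        Ksm = K[np.ix_(slaves, masters)]
        # static transformation: x_slave = -Kss^{-1} Ksm x_master
        T = np.zeros((n, len(masters)))
        T[masters, cols] = 1.0
        T[np.ix_(slaves, cols)] = -np.linalg.solve(Kss, Ksm)
        return T.T @ K @ T, T.T @ M @ T

    # 3-DOF spring-mass chain; keep DOFs 0 and 2, condense out DOF 1
    K = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])
    M = np.diag([1.0, 1.0, 1.0])
    K_red, M_red = guyan_reduce(K, M, masters=[0, 2])
    print("reduced stiffness:\n", K_red)
    print("reduced mass:\n", M_red)
    ```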

  3. A method for the analysis of the benefits and costs for aeronautical research and technology

    NASA Technical Reports Server (NTRS)

    Williams, L. J.; Hoy, H. H.; Anderson, J. L.

    1978-01-01

    A relatively simple, consistent, and reasonable methodology for performing cost-benefit analyses which can be used to guide, justify, and explain investments in aeronautical research and technology is presented. The elements of this methodology (labeled ABC-ART for the Analysis of the Benefits and Costs of Aeronautical Research and Technology) include estimation of aircraft markets; manufacturer costs and return on investment versus aircraft price; airline costs and return on investment versus aircraft price and passenger yield; and potential system benefits--fuel savings, cost savings, and noise reduction. The application of this methodology is explained using the introduction of an advanced turboprop powered transport aircraft in the medium range market in 1978 as an example.

  4. Applying the conservativeness principle to REDD to deal with the uncertainties of the estimates

    NASA Astrophysics Data System (ADS)

    Grassi, Giacomo; Monni, Suvi; Federici, Sandro; Achard, Frederic; Mollicone, Danilo

    2008-07-01

    A common paradigm when the reduction of emissions from deforestation is estimated for the purpose of promoting it as a mitigation option in the context of the United Nations Framework Convention on Climate Change (UNFCCC) is that high uncertainties in input data—i.e., area change and C stock change/area—may seriously undermine the credibility of the estimates and therefore of reduced deforestation as a mitigation option. In this paper, we show how a series of concepts and methodological tools—already existing in UNFCCC decisions and IPCC guidance documents—may greatly help to deal with the uncertainties of the estimates of reduced emissions from deforestation.
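
    One simple way to act conservatively, sketched below on invented numbers, is to propagate the relative uncertainties of the two inputs and credit only the lower bound of the resulting interval. This is an illustration of the general principle, not the specific procedure recommended in the paper or in IPCC guidance.

    ```python
    import math

    # Illustrative conservativeness adjustment: combine the relative uncertainties
    # of area change and carbon stock change per area by error propagation for a
    # product, then credit only the lower bound. All numbers are invented.

    area_change_ha = 10_000.0        # estimated reduction in deforested area (ha)
    u_area = 0.20                    # relative half-width of its 95% CI
    c_stock_t_per_ha = 150.0         # carbon stock change per hectare (tC/ha)
    u_stock = 0.30                   # relative half-width of its 95% CI

    emission_reduction_tc = area_change_ha * c_stock_t_per_ha
    u_combined = math.sqrt(u_area**2 + u_stock**2)        # propagation for a product
    conservative_credit = emission_reduction_tc * (1.0 - u_combined)

    print(f"central estimate : {emission_reduction_tc:,.0f} tC")
    print(f"combined uncert. : +/-{u_combined:.0%}")
    print(f"conservative credit (lower bound): {conservative_credit:,.0f} tC")
    ```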

  5. A method for estimating Dekkera/Brettanomyces populations in wines.

    PubMed

    Benito, S; Palomero, F; Morata, A; Calderón, F; Suárez-Lepe, J A

    2009-05-01

    The formation of ethylphenols in wines, a consequence of Dekkera/Brettanomyces metabolism, can affect their quality. The main aims of this work were to further our knowledge of Dekkera/Brettanomyces with respect to ethylphenol production, and to develop a methodology for detecting this spoilage yeast and for estimating its population size in wines using differential-selective media and high performance liquid chromatography (HPLC). This work examines the reduction of p-coumaric acid and the formation of 4-vinylphenol and 4-ethylphenol (recorded by HPLC-DAD) in a prepared medium because of the activities of different yeast species and populations. A regression model was constructed for estimating the population of Dekkera/Brettanomyces at the beginning of fermentation via the conversion of hydroxycinnamic acids into ethylphenols. The proposed methodology allows the populations of Dekkera/Brettanomyces at the beginning of fermentation to be estimated in problem wines. Moreover, it avoids false positives because of yeasts resistant to the effects of the selective elements of the medium. This may help prevent the appearance of organoleptic anomalies in wines at the winery level.

  6. Power fluctuation reduction methodology for the grid-connected renewable power systems

    NASA Astrophysics Data System (ADS)

    Aula, Fadhil T.; Lee, Samuel C.

    2013-04-01

    This paper presents a new methodology for eliminating the influence of the power fluctuations of renewable power systems. Renewable energy, which must be considered an uncertain and uncontrollable resource, can only provide irregular electrical power to the power grid. This irregularity creates fluctuations in the power generated by renewable power systems. These fluctuations cause instability in the power system and affect the operation of conventional power plants. Overall, the power system is vulnerable to collapse if necessary actions are not taken to reduce the impact of these fluctuations. The methodology aims to reduce these fluctuations and ensure that the generated power is capable of covering the power consumption. This requires a prediction tool for estimating the generated power in advance, to provide the range and the time of occurrence of the fluctuations. Since most renewable energies are weather based, a weather forecast technique is used for predicting the generated power. Reducing the fluctuations also requires stabilizing facilities to maintain the output power at a desired level. In this study, a wind farm and a photovoltaic array are used as renewable power systems, and pumped storage and batteries as stabilizing facilities, since they are best suited to compensating for the fluctuations of these types of power supplier. As an illustrative example, a model of wind and photovoltaic power systems with battery energy and pumped hydro storage facilities for power fluctuation reduction is included, and its power fluctuation reduction is verified through simulation.

  7. Plate motions and deformations from geologic and geodetic data

    NASA Technical Reports Server (NTRS)

    Jordan, T. H.

    1986-01-01

    Research effort on behalf of the Crustal Dynamics Project focused on the development of methodologies suitable for the analysis of space-geodetic data sets for the estimation of crustal motions, in conjunction with results derived from land-based geodetic data, neo-tectonic studies, and other geophysical data. These methodologies were used to provide estimates of both global plate motions and intraplate deformation in the western U.S. Results from the satellite ranging experiment for the rate of change of the baseline length between San Diego and Quincy, California indicated that relative motion between the North American and Pacific plates over the observing period from 1972 to 1982 was consistent with estimates calculated from geologic data averaged over the past few million years. This result, when combined with other kinematic constraints on western U.S. deformation derived from land-based geodesy, neo-tectonic studies, and other geophysical data, places limits on the possible extension of the Basin and Range province, and implies that significant deformation is occurring west of the San Andreas fault. A new methodology was developed to analyze vector-position space-geodetic data to provide estimates of relative vector motions of the observing sites. The algorithm is suitable for the reduction of large, inhomogeneous data sets; it takes into account the full position covariances and errors due to poorly resolved Earth orientation parameters and vertical positions, and it reduces biases due to inhomogeneous sampling of the data. This methodology was applied to the problem of estimating the rate-scaling parameter of a global plate tectonic model using satellite laser ranging observations over a five-year interval. The results indicate that the mean rate of global plate motions for that interval is consistent with rates averaged over several million years, and is not consistent with quiescent or greatly accelerated plate motions. This methodology was also used to provide constraints on deformation in the western U.S. using very long baseline interferometry observations over a two-year period.

  8. Estimation of age at death from the pubic symphysis and the auricular surface of the ilium using a smoothing procedure.

    PubMed

    Martins, Rui; Oliveira, Paulo Eduardo; Schmitt, Aurore

    2012-06-10

    We discuss here the estimation of age at death from two indicators (the pubic symphysis and the sacro-pelvic surface of the ilium) based on four different osteological series from Portugal, Great Britain, South Africa and the USA (of European origin). These samples and the scoring system for the two indicators were used by Schmitt et al. (2002), applying the methodology proposed by Lucy et al. (1996). In the present work, the same data were processed using a modification of the empirical method proposed by Lucy et al. (2002). The various probability distributions are estimated from training data using kernel density procedures and a Jackknife methodology. Bayes's theorem is then used to produce the posterior distribution from which point and interval estimates may be made. This statistical approach reduces the bias of the estimates to less than 70% of that obtained with the initial method, and to 52% when the sex of the individual is known, and produces an age estimate for all individuals that improves the assessment of age at death. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
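
    The sketch below illustrates the kernel-density/Bayes idea on a fabricated reference sample: stage-specific kernel densities supply the likelihood of an observed indicator stage at each age, and Bayes's theorem combines it with a uniform prior to give a posterior age distribution. The scoring model and sample are invented and much simpler than the published method.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # Rough sketch of kernel-density estimation plus Bayes's theorem for age at
    # death, using an invented reference sample of (age, indicator stage) pairs.

    rng = np.random.default_rng(1)
    ref_age = rng.uniform(20, 90, size=300)
    # invented scoring model: older individuals tend to receive higher stages (1-5)
    ref_stage = np.clip(np.round(1 + 4 * (ref_age - 20) / 70 + rng.normal(0, 0.7, 300)), 1, 5)

    ages = np.linspace(20, 90, 281)          # evaluation grid
    prior = np.ones_like(ages)               # uniform prior on age at death
    prior /= np.trapz(prior, ages)

    def posterior_age(observed_stage: int) -> np.ndarray:
        """p(age | stage) via Bayes, with p(stage | age) built from stage-specific KDEs."""
        likelihoods = []
        for s in range(1, 6):
            in_stage = ref_age[ref_stage == s]
            dens = gaussian_kde(in_stage)(ages) if len(in_stage) > 2 else np.zeros_like(ages)
            likelihoods.append(dens * len(in_stage) / len(ref_age))   # p(age|s) * p(s)
        likelihoods = np.array(likelihoods)
        p_stage_given_age = likelihoods[observed_stage - 1] / likelihoods.sum(axis=0)
        post = p_stage_given_age * prior
        return post / np.trapz(post, ages)

    post = posterior_age(observed_stage=3)
    print(f"posterior mean age for stage 3: {np.trapz(ages * post, ages):.1f} years")
    ```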

  9. Nonlinear Time Series Analysis in the Absence of Strong Harmonics

    NASA Astrophysics Data System (ADS)

    Stine, Peter; Jevtic, N.

    2010-05-01

    Nonlinear time series analysis has successfully been used for noise reduction and for identifying long term periodicities in variable star light curves. It was thought that good noise reduction could be obtained when a strong fundamental and second harmonic are present. We show that, quite unexpectedly, this methodology for noise reduction can be efficient for data with very noisy power spectra without a strong fundamental and second harmonic. Not only can one obtain almost two orders of magnitude noise reduction of the white noise tail, insight can also be gained into the short time scale of organized behavior. Thus, we are able to obtain an estimate of this short time scale, which is on the order of 1.5 hours in the case of a variable white dwarf.

  10. Simulation-Based Probabilistic Tsunami Hazard Analysis: Empirical and Robust Hazard Predictions

    NASA Astrophysics Data System (ADS)

    De Risi, Raffaele; Goda, Katsuichiro

    2017-08-01

    Probabilistic tsunami hazard analysis (PTHA) is the prerequisite for rigorous risk assessment and thus for decision-making regarding risk mitigation strategies. This paper proposes a new simulation-based methodology for tsunami hazard assessment for a specific site of an engineering project along the coast, or, more broadly, for a wider tsunami-prone region. The methodology incorporates numerous uncertain parameters that are related to geophysical processes by adopting new scaling relationships for tsunamigenic seismic regions. Through the proposed methodology it is possible to obtain either a tsunami hazard curve for a single location, that is the representation of a tsunami intensity measure (such as inundation depth) versus its mean annual rate of occurrence, or tsunami hazard maps, representing the expected tsunami intensity measures within a geographical area, for a specific probability of occurrence in a given time window. In addition to the conventional tsunami hazard curve that is based on an empirical statistical representation of the simulation-based PTHA results, this study presents a robust tsunami hazard curve, which is based on a Bayesian fitting methodology. The robust approach allows a significant reduction of the number of simulations and, therefore, a reduction of the computational effort. Both methods produce a central estimate of the hazard as well as a confidence interval, facilitating the rigorous quantification of the hazard uncertainties.
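
    The empirical hazard-curve step can be sketched very compactly: with an annual occurrence rate attached to every simulated scenario, the mean annual rate of exceeding an intensity level is the sum of the rates of the scenarios that exceed it. The scenario set, rates and depths below are invented.

    ```python
    import numpy as np

    # Minimal sketch of an empirical tsunami hazard curve: annual rate of exceeding
    # an inundation depth, summed over simulated scenarios. All inputs are invented.

    rng = np.random.default_rng(7)
    n_scenarios = 500
    annual_rate = np.full(n_scenarios, 0.01 / n_scenarios)          # hypothetical: 1 event per 100 yr
    depth_m = rng.lognormal(mean=0.0, sigma=1.0, size=n_scenarios)  # depth at the site per scenario

    depths = np.linspace(0.0, 10.0, 101)
    exceedance_rate = np.array([annual_rate[depth_m > x].sum() for x in depths])

    for x, lam in zip(depths[::20], exceedance_rate[::20]):
        rp = 1.0 / lam if lam > 0 else float("inf")
        print(f"depth > {x:4.1f} m : rate = {lam:.2e} /yr (return period ~ {rp:,.0f} yr)")
    ```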

  11. Estimating 'lost heart beats' rather than reductions in heart rate during the intubation of critically-ill children.

    PubMed

    Jones, Peter; Ovenden, Nick; Dauger, Stéphane; Peters, Mark J

    2014-01-01

    Reductions in heart rate occur frequently in children during critical care intubation and are currently considered the gold standard for haemodynamic instability. Our objective was to estimate the loss of heart beats during intubation and compare this to the reduction in heart rate alone, whilst testing the impact of atropine pre-medication. Data were extracted from a prospective 2-year cohort study of intubation ECGs from critically ill children in PICU/Paediatric Transport. A three-step algorithm was established to exclude variation in pre-intubation heart rate (using a 95% CI limit derived from the pre-intubation heart rate variation of the children included), measure the heart rate over time and finally estimate the number of lost beats. 333 intubations in children were eligible for inclusion, of which 245 were available for analysis (74%). Intubations where the fall in heart rate was less than 50 bpm were accompanied almost exclusively by fewer than 25 lost beats (n = 175, median 0 [0-1]). When there was a reduction of >50 bpm there was a poor correlation with the number of lost beats (n = 70, median 42 [15-83]). During intubation the median number of lost beats was 8 [1-32] when atropine was not used compared to 0 [0-0] when atropine was used (p<0.001). A reduction in heart rate during intubation of <50 bpm reliably predicted a minimal loss of beats. When the reduction in heart rate was >50 bpm the heart rate was poorly predictive of lost beats. A study looking at the relationship between lost beats and cardiac output needs to be performed. Atropine reduces both the fall in heart rate and the loss of beats. Similar area-under-the-curve methodology may be useful for estimating risk when biological parameters deviate outside the normal range.
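
    A toy illustration of the area-under-the-curve idea is given below: lost beats are estimated by integrating the deficit between the baseline heart rate and the observed heart rate over the episode. The heart-rate trace is invented, and this is only one plausible reading of the methodology mentioned above.

    ```python
    import numpy as np

    # Toy illustration: lost beats as the area between the pre-intubation baseline
    # heart rate and the observed heart rate. The trace below is invented.

    time_s = np.arange(0, 121, 1.0)                      # 2-minute intubation window
    baseline_bpm = 140.0                                 # pre-intubation heart rate
    observed_bpm = np.full_like(time_s, baseline_bpm)
    observed_bpm[40:80] = 80.0                           # hypothetical 40 s bradycardia

    deficit_bpm = np.clip(baseline_bpm - observed_bpm, 0.0, None)
    lost_beats = np.trapz(deficit_bpm, time_s) / 60.0    # bpm * s -> beats

    print(f"estimated lost beats: {lost_beats:.0f}")
    ```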

  12. A solid waste audit and directions for waste reduction at the University of British Columbia, Canada.

    PubMed

    Felder, M A; Petrell, R J; Duff, S J

    2001-08-01

    A novel design for a solid waste audit was developed and applied to the University of British Columbia, Canada, in 1998. This audit was designed to determine the characteristics of the residual solid waste generated by the campus and provide directions for waste reduction. The methodology was constructed to address complications in solid waste sampling, including spatial and temporal variation in waste, extrapolation from the study area, and study validation. Accounting for spatial effects decreased the variation in calculating total waste loads. Additionally, collecting information on user flow provided a means to decrease daily variation in solid waste and allow extrapolation over time and space. The total annual waste estimated from the experimental design was compared to documented values and was found to differ by -18%. The majority of this discrepancy was likely attributable to the unauthorised disposal of construction and demolition waste. Several options were proposed to address waste minimisation goals. These included: enhancing the current recycling program, source reduction of plastic materials, and/or diverting organic material to composting (maximum diversion: approximately 320, approximately 270, and approximately 1510 t/yr, respectively). The greatest diversion by weight would be accomplished through the diversion of organic material, as it was estimated to comprise 70% of the projected waste stream. The audit methodology designed is most appropriate for facilities/regions that have a separate collection system for seasonal wastes and have a means for tracking user flow.

  13. Remedial Action Assessment System: A computer-based methodology for conducting feasibility studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, M.K.; Buelt, J.L.; Stottlemyre, J.A.

    1991-02-01

    Because of the complexity and number of potential waste sites facing the US Department of Energy (DOE) for potential cleanup, DOE is supporting the development of a computer-based methodology to streamline the remedial investigation/feasibility study process. The Remedial Action Assessment System (RAAS) can be used for screening, linking, and evaluating established technology processes in support of conducting feasibility studies. It is also intended to do the same in support of corrective measures studies. The user interface employs menus, windows, help features, and graphical information while RAAS is in operation. Object-oriented programming is used to link unit processes into sets of compatible processes that form appropriate remedial alternatives. Once the remedial alternatives are formed, the RAAS methodology can evaluate them in terms of effectiveness, implementability, and cost. RAAS will access a user-selected risk assessment code to determine the reduction of risk after remedial action by each recommended alternative. The methodology will also help determine the implementability of the remedial alternatives at a site and access cost estimating tools to provide estimates of capital, operating, and maintenance costs. This paper presents the characteristics of two RAAS prototypes currently being developed. These include the RAAS Technology Information System, which accesses graphical, tabular and textual information about technologies, and the main RAAS methodology, which screens, links, and evaluates remedial technologies. 4 refs., 3 figs., 1 tab.

  14. State Approaches to Demand Reduction Induced Price Effects: Examining How Energy Efficiency Can Lower Prices for All

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Colin; Hedman, Bruce; Goldberg, Amelie

    Several states have begun to recognize Demand Reduction Induced Price Effects (DRIPE) as a real, quantifiable benefit of energy efficiency and demand response programs. DRIPE is a measurement of the value of demand reductions in terms of the decrease in wholesale energy prices, resulting in lower total expenditures on electricity or natural gas across a given grid. Crucially for policymakers and consumer advocates, DRIPE savings accrue not only to the subset of customers who consume less, but to all consumers. Rate-paying customers realize DRIPE savings when price reductions across an electricity or natural gas system are passed on to all retail customers as lower rates (depending upon regulation and market structure, residual savings may be wholly or partially retained by utilities). DRIPE savings, though seemingly small in terms of percent price reductions or dollars per household, can amount to hundreds of millions of dollars per year across entire states or grids. Therefore, accurately assessing DRIPE benefits can help to ensure appropriate programs are designed and implemented for energy efficiency measures. This paper reviews the existing knowledge and experience from select U.S. states regarding DRIPE (including New York and Ohio), and the potential for expanded application of the concept of DRIPE by regulators. Policymakers and public utility commissions have a critical role to play in setting the methodology for determining DRIPE, encouraging its capture by utilities, and allocating DRIPE benefits among utilities, various groups of customers, and/or society at large. While the methodologies for estimating DRIPE benefits are still being perfected, policymakers can follow the examples of states such as Maryland and Vermont in including conservative DRIPE estimates in their resource planning.

  15. Using a relative health indicator (RHI) metric to estimate health risk reductions in drinking water.

    PubMed

    Alfredo, Katherine A; Seidel, Chad; Ghosh, Amlan; Roberson, J Alan

    2017-03-01

    When a new drinking water regulation is being developed, the USEPA conducts a health risk reduction and cost analysis to, in part, estimate the quantifiable and non-quantifiable costs and benefits of the various regulatory alternatives. Numerous methodologies are available for cumulative risk assessment, ranging from primarily qualitative to primarily quantitative. This research developed a summary metric of the relative cumulative health impacts resulting from drinking water, the relative health indicator (RHI). An intermediate level of quantification and modeling was chosen, one which retains the concept of an aggregated metric of public health impact and hence allows comparisons to be made across "cups of water," but avoids the need for development and use of complex models that are beyond the existing state of the science. Using the USEPA Six-Year Review data and available national occurrence surveys of drinking water contaminants, the metric is used to test risk reduction as it pertains to the implementation of the arsenic and uranium maximum contaminant levels and to quantify "meaningful" risk reduction. Uranium represented the threshold risk reduction against which national non-compliance risk reduction was compared for arsenic, nitrate, and radium. Arsenic non-compliance is the most significant, and efforts focused on bringing non-compliant utilities into compliance with the 10 μg/L maximum contaminant level would meet the threshold for meaningful risk reduction.

  16. Seismic design and engineering research at the U.S. Geological Survey

    USGS Publications Warehouse

    1988-01-01

    The Engineering Seismology Element of the USGS Earthquake Hazards Reduction Program is responsible for the coordination and operation of the National Strong Motion Network to collect, process, and disseminate earthquake strong-motion data; and, the development of improved methodologies to estimate and predict earthquake ground motion.  Instrumental observations of strong ground shaking induced by damaging earthquakes and the corresponding response of man-made structures provide the basis for estimating the severity of shaking from future earthquakes, for earthquake-resistant design, and for understanding the physics of seismologic failure in the Earth's crust.

  17. A methodology to derive Synthetic Design Hydrographs for river flood management

    NASA Astrophysics Data System (ADS)

    Tomirotti, Massimo; Mignosa, Paolo

    2017-12-01

    The design of flood protection measures requires, in many cases, not only the estimation of peak discharges but also of flood volumes and their time distribution. A typical solution to this kind of problem is the formulation of Synthetic Design Hydrographs (SDHs). In this paper a methodology to derive SDHs is proposed on the basis of the estimation of the Flow Duration Frequency (FDF) reduction curve and of a Peak-Duration (PD) relationship, furnishing respectively the quantiles of the maximum average discharge and the average peak position for each duration. The methodology is intended to synthesize the main features of the historical floods in a unique SDH for each return period. The shape of the SDH is not selected a priori but results from the behaviour of the FDF and PD curves, which accounts in a very convenient way for the variability of the shapes of the observed hydrographs at the local time scale. The validation of the methodology is performed with reference to flood routing problems in reservoirs, lakes and rivers. The results obtained demonstrate the capability of the SDHs to describe the effects of different hydraulic systems on the statistical regime of floods, even in the presence of strong modifications induced on the probability distribution of peak flows.
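
    A compact sketch of the FDF reduction-curve step is given below: for each duration, the annual maxima of the moving-average discharge are computed and a quantile is taken across years. The daily discharge record is synthetic, and an empirical quantile stands in for the fitted frequency law that would be used in practice.

    ```python
    import numpy as np

    # Small sketch of a Flow Duration Frequency (FDF) reduction curve: for each
    # duration D, the annual maximum of the D-day moving-average discharge is
    # computed, then a quantile is estimated across years. The record is synthetic.

    rng = np.random.default_rng(3)
    n_years, days = 40, 365
    daily_q = rng.gamma(shape=2.0, scale=50.0, size=(n_years, days))   # m3/s, invented

    durations = [1, 3, 7, 15, 30]          # days
    return_period = 100                    # years
    p = 1.0 - 1.0 / return_period

    for D in durations:
        kernel = np.ones(D) / D
        # annual maximum of the D-day moving average for every year
        annual_max = np.array([np.convolve(year, kernel, mode="valid").max() for year in daily_q])
        q_d = np.quantile(annual_max, p)   # empirical quantile; a fitted law would be used in practice
        print(f"duration {D:2d} d: 100-yr maximum average discharge ~ {q_d:.0f} m3/s")
    ```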

  18. Satellite vulnerability to space debris - an improved 3D risk assessment methodology

    NASA Astrophysics Data System (ADS)

    Grassi, Lilith; Tiboldo, Francesca; Destefanis, Roberto; Donath, Thérèse; Winterboer, Arne; Evans, Leanne; Janovsky, Rolf; Kempf, Scott; Rudolph, Martin; Schäfer, Frank; Gelhaus, Johannes

    2014-06-01

    The work described in the present paper, performed as a part of the P2 project, presents an enhanced method to evaluate satellite vulnerability to micrometeoroids and orbital debris (MMOD), using the ESABASE2/Debris tool (developed under ESA contract). Starting from the estimation of induced failures on spacecraft (S/C) components and from the computation of lethal impacts (with an energy leading to the loss of the satellite), and considering the equipment redundancies and interactions between components, the debris-induced S/C functional impairment is assessed. The developed methodology, illustrated through its application to a case study satellite, includes the capability to estimate the number of failures on internal components, overcoming the limitations of current tools which do not allow propagating the debris cloud inside the S/C. The ballistic limit of internal equipment behind a sandwich panel structure is evaluated through the implementation of the Schäfer Ryan Lambert (SRL) Ballistic Limit Equation (BLE). The analysis conducted on the case study satellite shows the S/C vulnerability index to be in the range of about 4% over the complete mission, with a significant reduction with respect to the results typically obtained with the traditional analysis, which considers as a failure the structural penetration of the satellite structural panels. The methodology has then been applied to select design strategies (additional local shielding, relocation of components) to improve S/C protection with respect to MMOD. The results of the analyses conducted on the improved design show a reduction of the vulnerability index of about 18%.

  19. Comment on Geoengineering with seagrasses: is credit due where credit is given?

    NASA Astrophysics Data System (ADS)

    Oreska, Matthew P. J.; McGlathery, Karen J.; Emmer, Igino M.; Needelman, Brian A.; Emmett-Mattox, Stephen; Crooks, Stephen; Megonigal, J. Patrick; Myers, Doug

    2018-03-01

    In their recent review, ‘Geoengineering with seagrasses: is credit due where credit is given?,’ Johannessen and Macdonald (2016) invoke the prospect of carbon offset-credit over-allocation by the Verified Carbon Standard as a pretense for their concerns about published seagrass carbon burial rate and global stock estimates. Johannessen and Macdonald (2016) suggest that projects seeking offset-credits under the Verified Carbon Standard methodology VM0033: Methodology for Tidal Wetland and Seagrass Restoration will overestimate long-term (100 yr) sediment organic carbon (SOC) storage because issues affecting carbon burial rates bias storage estimates. These issues warrant serious consideration by the seagrass research community; however, VM0033 does not refer to seagrass SOC ‘burial rates’ or ‘storage.’ Projects seeking credits under VM0033 must document greenhouse gas emission reductions over time, relative to a baseline scenario, in order to receive credits. Projects must also monitor changes in carbon pools, including SOC, to confirm that observed benefits are maintained over time. However, VM0033 allows projects to conservatively underestimate project benefits by citing default values for specific accounting parameters, including CO2 emissions reductions. We therefore acknowledge that carbon crediting methodologies such as VM0033 are sensitive to the quality of the seagrass literature, particularly when permitted default factors are based in part on seagrass burial rates. Literature-derived values should be evaluated based on the concerns raised by Johannessen and Macdonald (2016), but these issues should not lead to credit over-allocation in practice, provided VM0033 is rigorously followed. These issues may, however, affect the feasibility of particular seagrass offset projects.

  20. Product pricing in the Solar Array Manufacturing Industry - An executive summary of SAMICS

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1978-01-01

    Capabilities, methodology, and a description of the input data of the Solar Array Manufacturing Industry Costing Standards (SAMICS) are presented. SAMICS was developed to provide a standardized procedure and data base for comparing the manufacturing processes of Low-cost Solar Array (LSA) subcontractors, guiding the setting of research priorities, and assessing the progress of LSA toward its hundred-fold cost reduction goal. SAMICS can be used to estimate manufacturing costs and product prices and to determine the impact of inflation, taxes, and interest rates, but it is limited by ignoring the effects of market supply and demand and by the assumption that all factories operate in a production-line mode. The SAMICS methodology defines the industry structure, hypothetical supplier companies, and manufacturing processes, and maintains a body of standardized data which is used to compute the final product price. The input data include the product description, the process characteristics, the equipment cost factors, and production data for the preparation of detailed cost estimates. Activities validating that SAMICS produces realistic price estimates and cost breakdowns are described.

  1. Cooling energy savings potential of light-colored roofs for residential and commercial buildings in 11 US metropolitan areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konopacki, S.; Akbari, H.; Gartland, L.

    The U.S. Environmental Protection Agency (EPA) sponsored this project to estimate potential energy and monetary savings resulting from the implementation of light-colored roofs on residential and commercial buildings in major U.S. metropolitan areas. Light-colored roofs reflect more sunlight than dark roofs, so they keep buildings cooler and reduce air-conditioning demand. Typically, rooftops in the United States are dark, and thus there is a potential for saving energy and money by changing to reflective roofs. Naturally, the expected savings are higher in southern, sunny, and cloudless climates. In this study, we make quantitative estimates of reduction in peak power demand and annual cooling electricity use that would result from increasing the reflectivity of the roofs. Since light-colored roofs also reflect heat in the winter, the estimates of annual electricity savings are a net value corrected for the increased wintertime energy use. Savings estimates only include direct reduction in building energy use and do not account for the indirect benefit that would also occur from the reduction in ambient temperature, i.e. a reduction in the heat island effect. This analysis is based on simulations of building energy use, using the DOE-2 building energy simulation program. Our methodology starts with specifying 11 prototypical buildings: single-family residential (old and new), office (old and new), retail store (old and new), school (primary and secondary), health (hospital and nursing home), and grocery store. Most prototypes are simulated with two heating systems: gas furnace and heat pumps. We then perform DOE-2 simulations of the prototypical buildings, with light and dark roofs, in a variety of climates and obtain estimates of the energy use for air conditioning and heating.
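
    The savings reported here are net values: cooling electricity saved minus the wintertime heating penalty of a more reflective roof. A minimal sketch of that bookkeeping for one prototype/climate combination is shown below; the kWh figures and electricity price are hypothetical placeholders, not DOE-2 outputs from the study.

        # Illustrative net cooling-savings bookkeeping for a reflective-roof retrofit.
        # All figures are placeholders; real values come from DOE-2 simulations.

        def net_annual_savings(cooling_kwh_dark, cooling_kwh_light,
                               heating_kwh_dark, heating_kwh_light,
                               elec_price=0.10):
            """Return (net kWh saved, net dollar savings) for one prototype/climate."""
            cooling_saved = cooling_kwh_dark - cooling_kwh_light      # summer benefit
            heating_penalty = heating_kwh_light - heating_kwh_dark    # winter penalty
            net_kwh = cooling_saved - heating_penalty
            return net_kwh, net_kwh * elec_price

        # Hypothetical prototype: old single-family house in a sunny climate.
        kwh, dollars = net_annual_savings(cooling_kwh_dark=3200, cooling_kwh_light=2700,
                                          heating_kwh_dark=5100, heating_kwh_light=5180)
        print(f"net savings: {kwh:.0f} kWh/yr, ${dollars:.2f}/yr")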

  2. Estimating the Fiscal Effects of Public Pharmaceutical Expenditure Reduction in Greece

    PubMed Central

    Souliotis, Kyriakos; Papageorgiou, Manto; Politi, Anastasia; Frangos, Nikolaos; Tountas, Yiannis

    2015-01-01

    The purpose of the present study is to estimate the impact of pharmaceutical spending reduction on public revenue, based on data from the national health accounts as well as on reports of Greece’s organizations. The methodology of the analysis is structured in two basic parts. The first part presents the urgency for rapid cutbacks on public pharmaceutical costs due to the financial crisis and provides a conceptual framework for the contribution of the Greek pharmaceutical branch to the country’s economy. In the second part, we perform a quantitative analysis for the estimation of multiplier effects of public pharmaceutical expenditure reduction on main revenue sources, such as taxes and social contributions. We also fit projection models with multipliers as regressands for the evaluation of the efficiency of the particular fiscal measure in the short run. According to the results, nearly half of the gains from the measure’s application are offset by financially equivalent decreases in the government’s revenue, i.e., losses in tax revenues and social security contributions alone, not considering any other direct or indirect costs. The finding of high multiplier values with an increasing short-term trend implies that the measure will be inefficient henceforward and signals the risk of vicious circles that would deprive the economy of useful resources. PMID:26380249
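
    The offset logic in the second part can be illustrated with a toy calculation: a spending cut multiplied by revenue-loss multipliers gives the tax and social-contribution losses that claw back part of the gross saving. The multiplier values below are invented for illustration and are not the study's estimates.

        # Toy fiscal-offset calculation: how much of a spending cut is lost again
        # through reduced tax revenue and social contributions.
        spending_cut = 1_000.0          # million EUR of public pharmaceutical savings (hypothetical)
        tax_multiplier = 0.28           # EUR of tax revenue lost per EUR cut (illustrative)
        contribution_multiplier = 0.20  # EUR of social contributions lost per EUR cut (illustrative)

        revenue_loss = spending_cut * (tax_multiplier + contribution_multiplier)
        net_fiscal_gain = spending_cut - revenue_loss
        print(f"gross saving {spending_cut:.0f}, revenue offset {revenue_loss:.0f}, "
              f"net gain {net_fiscal_gain:.0f} (offset share {revenue_loss/spending_cut:.0%})")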

  3. Estimating the Fiscal Effects of Public Pharmaceutical Expenditure Reduction in Greece.

    PubMed

    Souliotis, Kyriakos; Papageorgiou, Manto; Politi, Anastasia; Frangos, Nikolaos; Tountas, Yiannis

    2015-01-01

    The purpose of the present study is to estimate the impact of pharmaceutical spending reduction on public revenue, based on data from the national health accounts as well as on reports of Greece's organizations. The methodology of the analysis is structured in two basic parts. The first part presents the urgency for rapid cutbacks on public pharmaceutical costs due to the financial crisis and provides a conceptual framework for the contribution of the Greek pharmaceutical branch to the country's economy. In the second part, we perform a quantitative analysis for the estimation of multiplier effects of public pharmaceutical expenditure reduction on main revenue sources, such as taxes and social contributions. We also fit projection models with multipliers as regressands for the evaluation of the efficiency of the particular fiscal measure in the short run. According to the results, nearly half of the gains from the measure's application are offset by financially equivalent decreases in the government's revenue, i.e., losses in tax revenues and social security contributions alone, not considering any other direct or indirect costs. The finding of high multiplier values with an increasing short-term trend implies that the measure will be inefficient henceforward and signals the risk of vicious circles that would deprive the economy of useful resources.

  4. Estimation of the state of solar activity type stars by virtual observations of CrAVO

    NASA Astrophysics Data System (ADS)

    Dolgov, A. A.; Shlyapnikov, A. A.

    2012-05-01

    This work presents the results of preprocessing negatives with direct images of the sky from the CrAO glass library, which became part of the on-line archive of the Crimean Astronomical Virtual Observatory (CrAVO). Based on the obtained data, parameters have been estimated for the dwarf stars included in the catalog "Stars with solar-type activity" (GTSh10). The following matters are considered: the methodology for searching negatives by the positions of the studied stars and by the calculated limiting magnitude; image viewing and reduction with the facilities of the International Virtual Observatory; and the preliminary results of the photometry of the studied objects.

  5. Mathematical modeling of elementary trapping-reduction processes in positron annihilation lifetime spectroscopy: methodology of Ps-to-positron trapping conversion

    NASA Astrophysics Data System (ADS)

    Shpotyuk, Ya; Cebulski, J.; Ingram, A.; Shpotyuk, O.

    2017-12-01

    Methodological possibilities of positron annihilation lifetime (PAL) spectroscopy applied to nanostructurized substances treated within a three-term fitting procedure are reconsidered to parameterize their atomic-deficient structural arrangement. In contrast to conventional three-term fitting analysis of the detected PAL spectra based on admixed positron trapping and positronium (Ps) decaying, the nanostructurization due to guest nanoparticles embedded in a host matrix is considered as producing modified trapping, which involves conversion between these channels. The developed approach, referred to as the x3-x2-coupling decomposition algorithm, allows estimation of the free volumes of interfacial voids responsible for positron trapping and of bulk lifetimes in nanoparticle-embedded substances. This methodology is validated using the experimental data of Chakraverty et al. [Phys. Rev. B 71 (2005) 024115] on a PAL study of composites formed by guest NiFe2O4 nanocrystals grown in a host SiO2 matrix.

  6. Modeling of GE Appliances: Cost Benefit Study of Smart Appliances in Wholesale Energy, Frequency Regulation, and Spinning Reserve Markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuller, Jason C.; Parker, Graham B.

    This report is the second in a series of three reports describing the potential of GE’s DR-enabled appliances to provide benefits to the utility grid. The first report described the modeling methodology used to represent the GE appliances in the GridLAB-D simulation environment and the estimated potential for peak demand reduction at various deployment levels. The third report will explore the technical capability of aggregated group actions to positively impact grid stability, including frequency and voltage regulation and spinning reserves, and the impacts on distribution feeder voltage regulation, including mitigation of fluctuations caused by high penetration of photovoltaic distributed generation. In this report, a series of analytical methods is presented to estimate the potential cost benefit of smart appliances while utilizing demand response. Previous work estimated the potential technical benefit (i.e., peak reduction) of smart appliances, while this report focuses on the monetary value of that participation. The effects on wholesale energy cost and possible additional revenue available by participating in frequency regulation and spinning reserve markets were explored.

  7. The public health benefits of insulation retrofits in existing housing in the United States

    PubMed Central

    Levy, Jonathan I; Nishioka, Yurika; Spengler, John D

    2003-01-01

    Background Methodological limitations make it difficult to quantify the public health benefits of energy efficiency programs. To address this issue, we developed a risk-based model to estimate the health benefits associated with marginal energy usage reductions and applied the model to a hypothetical case study of insulation retrofits in single-family homes in the United States. Methods We modeled energy savings with a regression model that extrapolated findings from an energy simulation program. Reductions of fine particulate matter (PM2.5) emissions and particle precursors (SO2 and NOx) were quantified using fuel-specific emission factors and marginal electricity analyses. Estimates of population exposure per unit emissions, varying by location and source type, were extrapolated from past dispersion model runs. Concentration-response functions for morbidity and mortality from PM2.5 were derived from the epidemiological literature, and economic values were assigned to health outcomes based on willingness to pay studies. Results In total, the insulation retrofits would save 800 TBTU (8 × 1014 British Thermal Units) per year across 46 million homes, resulting in 3,100 fewer tons of PM2.5, 100,000 fewer tons of NOx, and 190,000 fewer tons of SO2 per year. These emission reductions are associated with outcomes including 240 fewer deaths, 6,500 fewer asthma attacks, and 110,000 fewer restricted activity days per year. At a state level, the health benefits per unit energy savings vary by an order of magnitude, illustrating that multiple factors (including population patterns and energy sources) influence health benefit estimates. The health benefits correspond to $1.3 billion per year in externalities averted, compared with $5.9 billion per year in economic savings. Conclusion In spite of significant uncertainties related to the interpretation of PM2.5 health effects and other dimensions of the model, our analysis demonstrates that a risk-based methodology is viable for national-level energy efficiency programs. PMID:12740041
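
    The model follows a standard impact-pathway chain: energy savings, avoided emissions, change in population exposure, health outcomes, and finally monetized benefit. A compact sketch of that chain is given below; every factor is an illustrative placeholder of roughly plausible magnitude, not a value taken from the study.

        # Sketch of the risk-based impact pathway used to value energy-efficiency
        # retrofits: energy savings -> avoided emissions -> change in ambient PM2.5
        # exposure -> health outcomes -> monetized benefit.  Every factor below is an
        # illustrative placeholder, not a value from the study.

        energy_saved_tbtu = 1.0                 # hypothetical annual savings in one state
        pm25_tons_per_tbtu = 4.0                # fuel-specific emission factor (placeholder)
        ug_m3_per_ton = 2.0e-4                  # population-weighted concentration per ton (placeholder)
        population = 5.0e6
        deaths_per_ug_m3_per_person = 5.0e-5    # concentration-response slope (placeholder)
        value_per_death_usd = 7.0e6             # willingness-to-pay based value (placeholder)

        tons_avoided = energy_saved_tbtu * pm25_tons_per_tbtu
        delta_conc = tons_avoided * ug_m3_per_ton
        deaths_avoided = delta_conc * population * deaths_per_ug_m3_per_person
        benefit = deaths_avoided * value_per_death_usd
        print(f"{tons_avoided:.1f} tons PM2.5 avoided, "
              f"{deaths_avoided:.3f} deaths avoided, ${benefit:,.0f}/yr benefit")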

  8. Methodological Issues to Consider When Collecting Data to Estimate Poverty Impact in Economic Evaluations in Low-income and Middle-income Countries.

    PubMed

    Sweeney, Sedona; Vassall, Anna; Foster, Nicola; Simms, Victoria; Ilboudo, Patrick; Kimaro, Godfather; Mudzengi, Don; Guinness, Lorna

    2016-02-01

    Out-of-pocket spending is increasingly recognized as an important barrier to accessing health care, particularly in low-income and middle-income countries (LMICs) where a large portion of health expenditure comes from out-of-pocket payments. Emerging universal healthcare policies prioritize reduction of poverty impact such as catastrophic and impoverishing healthcare expenditure. Poverty impact is therefore increasingly evaluated alongside and within economic evaluations to estimate the impact of specific health interventions on poverty. However, data collection for these metrics can be challenging in intervention-based contexts in LMICs because of study design and practical limitations. Using a set of case studies, this letter identifies methodological challenges in collecting patient cost data in LMIC contexts. These components are presented in a framework to encourage researchers to consider the implications of differing approaches in data collection and to report their approach in a standardized and transparent way. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd.

  9. Methodological development for selection of significant predictors explaining fatal road accidents.

    PubMed

    Dadashova, Bahar; Arenas-Ramírez, Blanca; Mira-McWilliams, José; Aparicio-Izquierdo, Francisco

    2016-05-01

    Identification of the most relevant factors for explaining road accident occurrence is an important issue in road safety research, particularly for future decision-making processes in transport policy. However, model selection for this particular purpose is still an open research question. In this paper we propose a methodological development for model selection which addresses both the choice of explanatory variables and the selection of an adequate model. A variable selection procedure, the TIM (two-input model) method, is carried out by combining neural network design and statistical approaches. The error structure of the fitted model is assumed to follow an autoregressive process. All models are estimated using the Markov chain Monte Carlo method, with non-informative prior distributions assigned to the model parameters. The final model is built using the results of the variable selection. For the application of the proposed methodology, the number of fatal accidents in Spain during 2000-2011 was used. This indicator experienced the largest reduction internationally during those years, making it an interesting time series from a road safety policy perspective. Hence the identification of the variables that have affected this reduction is of particular interest for future decision making. The results of the variable selection process show that the selected variables are the main subjects of road safety policy measures. Published by Elsevier Ltd.
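
    A minimal sketch of the Bayesian estimation step described here, assuming a linear model with AR(1) errors, flat (non-informative) priors, and a plain random-walk Metropolis sampler on synthetic data. It is not the TIM variable-selection procedure itself, and the data, proposal scale, and iteration counts are placeholders.

        # Minimal random-walk Metropolis sketch for a regression with AR(1) errors and
        # flat (non-informative) priors.  Data and tuning constants are synthetic placeholders.
        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic time series: two selected predictors and a response.
        n = 48
        X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
        true_beta, true_rho = np.array([5.0, -0.8, 0.4]), 0.6
        e = np.zeros(n)
        for t in range(1, n):
            e[t] = true_rho * e[t - 1] + rng.normal(scale=0.5)
        y = X @ true_beta + e

        def log_post(theta):
            """Flat-prior log posterior for (beta0, beta1, beta2, rho, log_sigma)."""
            beta, rho, log_sigma = theta[:3], theta[3], theta[4]
            if not -1.0 < rho < 1.0:
                return -np.inf
            sigma = np.exp(log_sigma)
            r = y - X @ beta
            v = r[1:] - rho * r[:-1]              # AR(1) innovations
            return -0.5 * np.sum(v ** 2) / sigma ** 2 - (n - 1) * log_sigma

        theta = np.zeros(5)
        lp = log_post(theta)
        samples = []
        for _ in range(30_000):
            prop = theta + rng.normal(scale=0.05, size=theta.size)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(theta.copy())

        post = np.array(samples[10_000:])         # drop burn-in
        print("posterior means (beta0, beta1, beta2, rho, log_sigma):", post.mean(axis=0).round(2))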

  10. Sand/cement ratio evaluation on mortar using neural networks and ultrasonic transmission inspection.

    PubMed

    Molero, M; Segura, I; Izquierdo, M A G; Fuente, J V; Anaya, J J

    2009-02-01

    The quality and degradation state of building materials can be determined by nondestructive testing (NDT). These materials are composed of a cementitious matrix and particles or fragments of aggregates. The sand/cement (s/c) ratio determines the final material quality; however, the sand content can mask the matrix properties in a nondestructive measurement. Therefore, s/c ratio estimation is needed in the nondestructive characterization of cementitious materials. In this study, a methodology to classify the sand content in mortar is presented. The methodology is based on ultrasonic transmission inspection, data reduction and feature extraction by principal component analysis (PCA), and neural network classification. This evaluation is carried out with several mortar samples made with different cement types and s/c ratios. The s/c ratio is estimated from ultrasonic spectral attenuation measured with three different broadband transducers (0.5, 1, and 2 MHz). PCA has been applied to reduce the dimensionality of the captured traces. Feed-forward neural networks (NNs) are trained using the principal components (PCs), and their outputs are used to display the estimated s/c ratios in false-color images showing the s/c ratio distribution of the mortar samples.
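
    A compact sketch of the same processing chain, assuming synthetic attenuation spectra in place of the measured transmission data: standardize, reduce with PCA, and classify the s/c class with a feed-forward network (scikit-learn's MLPClassifier). The spectra, class labels, and network size are illustrative choices, not the paper's configuration.

        # Sketch of the chain described above: ultrasonic attenuation traces -> PCA
        # feature extraction -> feed-forward neural network classifier for s/c class.
        # The spectra below are synthetic stand-ins for measured transmission data.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n_per_class, n_bins = 60, 200
        classes = [1, 2, 3]                      # hypothetical s/c ratio labels
        freq = np.linspace(0.2, 2.0, n_bins)     # MHz

        X, y = [], []
        for c in classes:
            # Attenuation grows with frequency and with sand content (toy model).
            base = 2.0 * c * freq ** 1.5
            X.append(base + rng.normal(scale=0.8, size=(n_per_class, n_bins)))
            y += [c] * n_per_class
        X, y = np.vstack(X), np.array(y)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                                  stratify=y)
        model = make_pipeline(StandardScaler(),
                              PCA(n_components=10),          # data reduction step
                              MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                            random_state=0))
        model.fit(X_tr, y_tr)
        print("held-out accuracy:", round(model.score(X_te, y_te), 3))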

  11. Child Mortality Estimation: Accelerated Progress in Reducing Global Child Mortality, 1990–2010

    PubMed Central

    Hill, Kenneth; You, Danzhen; Inoue, Mie; Oestergaard, Mikkel Z.; Hill, Kenneth; Alkema, Leontine; Cousens, Simon; Croft, Trevor; Guillot, Michel; Pedersen, Jon; Walker, Neff; Wilmoth, John; Jones, Gareth

    2012-01-01

    Monitoring development indicators has become a central interest of international agencies and countries for tracking progress towards the Millennium Development Goals. In this review, which also provides an introduction to a collection of articles, we describe the methodology used by the United Nations Inter-agency Group for Child Mortality Estimation to track country-specific changes in the key indicator for Millennium Development Goal 4 (MDG 4), the decline of the under-five mortality rate (the probability of dying between birth and age five, also denoted in the literature as U5MR and 5q0). We review how relevant data from civil registration, sample registration, population censuses, and household surveys are compiled and assessed for United Nations member states, and how time series regression models are fitted to all points of acceptable quality to establish the trends in U5MR from which infant and neonatal mortality rates are generally derived. The application of this methodology indicates that, between 1990 and 2010, the global U5MR fell from 88 to 57 deaths per 1,000 live births, and the annual number of under-five deaths fell from 12.0 to 7.6 million. Although the annual rate of reduction in the U5MR accelerated from 1.9% for the period 1990–2000 to 2.5% for the period 2000–2010, it remains well below the 4.4% annual rate of reduction required to achieve the MDG 4 goal of a two-thirds reduction in U5MR from its 1990 value by 2015. Thus, despite progress in reducing child mortality worldwide, and an encouraging increase in the pace of decline over the last two decades, MDG 4 will not be met without greatly increasing efforts to reduce child deaths. PMID:22952441
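
    The annual rate of reduction quoted here is the usual log-scale rate. Using only the figures cited in this record, a short check of the observed 1990-2010 rate and of the rate required for the MDG 4 two-thirds reduction target looks like this:

        # Annual rate of reduction (ARR) for U5MR, computed on the usual log scale.
        import math

        def arr(u5mr_start, u5mr_end, years):
            """Average annual rate of reduction, in percent per year."""
            return 100.0 * math.log(u5mr_start / u5mr_end) / years

        print(round(arr(88, 57, 20), 1))      # observed 1990-2010 rate, about 2.2 %/yr
        # MDG 4 target: two-thirds reduction from the 1990 level by 2015.
        print(round(arr(88, 88 / 3, 25), 1))  # required rate, about 4.4 %/yr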

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Shaughnessy, Eric; Heeter, Jenny; Keyser, David

    Cities are increasingly taking actions such as building code enforcement, urban planning, and public transit expansion to reduce emissions of carbon dioxide in their communities and municipal operations. However, many cities lack the quantitative information needed to estimate policy impacts and prioritize city actions in terms of carbon abatement potential and cost effectiveness. This report fills this research gap by providing methodologies to assess the carbon abatement potential of a variety of city actions. The methodologies are applied to an energy use data set of 23,458 cities compiled for the U.S. Department of Energy’s City Energy Profile tool. The analysis estimates the national carbon abatement potential of the most commonly implemented actions in six specific policy areas. The results of this analysis suggest that, in aggregate, cities could reduce nationwide carbon emissions by about 210 million metric tons of carbon dioxide (MMT CO2) per year in a "moderate abatement scenario" by 2035 and 480 MMT CO2/year in a "high abatement scenario" by 2035 through these common actions typically within a city’s control in the six policy areas. The aggregate carbon abatement potential of these specific areas equates to a reduction of 3%-7% relative to 2013 U.S. emissions. At the city level, the results suggest the average city could reduce carbon emissions by 7% (moderate) to 19% (high) relative to current city-level emissions. City carbon abatement potential is sensitive to national and state policies that affect the carbon intensity of electricity and transportation. Specifically, the U.S. Clean Power Plan and further renewable energy cost reductions could reduce city carbon emissions overall, helping cities achieve their carbon reduction goals.

  13. Assessment of Integrated Pedestrian Protection Systems with Autonomous Emergency Braking (AEB) and Passive Safety Components.

    PubMed

    Edwards, Mervyn; Nathanson, Andrew; Carroll, Jolyon; Wisch, Marcus; Zander, Oliver; Lubbe, Nils

    2015-01-01

    Autonomous emergency braking (AEB) systems fitted to cars for pedestrians have been predicted to offer substantial benefit. On this basis, consumer rating programs-for example, the European New Car Assessment Programme (Euro NCAP)-are developing rating schemes to encourage fitment of these systems. One of the questions that needs to be answered to do this fully is how the assessment of the speed reduction offered by the AEB is integrated with the current assessment of the passive safety for mitigation of pedestrian injury. Ideally, this should be done on a benefit-related basis. The objective of this research was to develop a benefit-based methodology for assessment of integrated pedestrian protection systems with AEB and passive safety components. The method should include weighting procedures to ensure that it represents injury patterns from accident data and replicates an independently estimated benefit of AEB. A methodology has been developed to calculate the expected societal cost of pedestrian injuries, assuming that all pedestrians in the target population (i.e., pedestrians impacted by the front of a passenger car) are impacted by the car being assessed, taking into account the impact speed reduction offered by the car's AEB (if fitted) and the passive safety protection offered by the car's frontal structure. For rating purposes, the cost for the assessed car is normalized by comparing it to the cost calculated for a reference car. The speed reductions measured in AEB tests are used to determine the speed at which each pedestrian in the target population will be impacted. Injury probabilities for each impact are then calculated using the results from Euro NCAP pedestrian impactor tests and injury risk curves. These injury probabilities are converted into cost using "harm"-type costs for the body regions tested. These costs are weighted and summed. Weighting factors were determined using accident data from Germany and Great Britain and an independently estimated AEB benefit. German and Great Britain versions of the methodology are available. The methodology was used to assess cars with good, average, and poor Euro NCAP pedestrian ratings, in combination with a current AEB system. The fitment of a hypothetical A-pillar airbag was also investigated. It was found that the decrease in casualty injury cost achieved by fitting an AEB system was approximately equivalent to that achieved by increasing the passive safety rating from poor to average. Because the assessment was influenced strongly by the level of head protection offered in the scuttle and windscreen area, a hypothetical A-pillar airbag showed high potential to reduce overall casualty cost. A benefit-based methodology for assessment of integrated pedestrian protection systems with AEB has been developed and tested. It uses input from AEB tests and Euro NCAP passive safety tests to give an integrated assessment of the system performance, which includes consideration of effects such as the change in head impact location caused by the impact speed reduction given by the AEB.
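
    A stripped-down sketch of the cost roll-up described above: reduce each impact speed in the target population by the AEB test result, convert speed-dependent injury probabilities into harm-type costs, and normalize against a reference car without AEB. The risk curve, costs, weights, and speed distribution below are invented placeholders, not Euro NCAP data or the weighting factors derived from the German and British accident data.

        # Stripped-down sketch of the benefit-based integrated assessment: for every
        # pedestrian case in the target population, reduce the impact speed by the AEB
        # test result, turn injury probabilities into harm-type costs, and normalize by
        # a reference car.  All curves, weights and costs are placeholders.
        import math

        def injury_prob(speed_kph, a=0.15, b=40.0):
            """Toy logistic injury-risk curve vs impact speed (placeholder parameters)."""
            return 1.0 / (1.0 + math.exp(-a * (speed_kph - b)))

        def casualty_cost(cases, aeb_speed_reduction, head_cost=1.0e6, weight=1.0):
            total = 0.0
            for speed in cases:                              # accident-weighted impact speeds
                impact_speed = max(speed - aeb_speed_reduction, 0.0)
                total += weight * injury_prob(impact_speed) * head_cost
            return total

        cases = [20, 30, 40, 50, 60]                         # hypothetical target population (km/h)
        assessed = casualty_cost(cases, aeb_speed_reduction=15.0)
        reference = casualty_cost(cases, aeb_speed_reduction=0.0)
        print(f"normalized score (assessed / reference): {assessed / reference:.2f}")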

  14. A Methodological Approach for Conducting a Business Case Analysis (BCA) of Zephyr Joint Capability Technology Demonstration (JCTD)

    DTIC Science & Technology

    2008-12-01

    [Standard Form 298 report documentation boilerplate omitted.] ... Return on Investment (ROI) of the Zephyr system. This is achieved by (1) developing a model to carry out Business Case Analysis (BCA) of JCTDs, including ...

  15. Pesticide Environmental Accounting: a method for assessing the external costs of individual pesticide applications.

    PubMed

    Leach, A W; Mumford, J D

    2008-01-01

    The Pesticide Environmental Accounting (PEA) tool provides a monetary estimate of environmental and health impacts per hectare-application for any pesticide. The model combines the Environmental Impact Quotient method with a methodology for absolute estimates of external pesticide costs in the UK, USA and Germany. For many countries, resources are not available for intensive assessments of external pesticide costs. The model therefore transfers the external costs of a pesticide estimated for the UK, USA and Germany to Mediterranean countries. Economic and policy applications include estimating the impacts of pesticide reduction policies or the benefits of technologies replacing pesticides, such as the sterile insect technique. The system integrates disparate data and approaches into a single logical method. The assumptions in the system provide transparency and consistency, but at the cost of some specificity and precision, a reasonable trade-off for a method that provides both comparative estimates of pesticide impacts and area-based assessments of absolute impacts.

  16. The ISO 50001 Impact Estimator Tool (IET 50001 V1.1.4) - User Guide and Introduction to the ISO 50001 Impacts Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Therkelsen, Peter L.; Rao, Prakash; Aghajanzadeh, Arian

    ISO 50001-Energy management systems – Requirements with guidance for use, is an internationally developed standard that provides organizations with a flexible framework for implementing an energy management system (EnMS) with the goal of continual energy performance improvement. The ISO 50001 standard was first published in 2011 and has since seen growth in the number of certificates issued around the world, primarily in the industrial (agriculture, manufacturing, and mining) and service (commercial) sectors. Policy makers in many regions and countries are looking to or are already using ISO 50001 as a basis for energy efficiency, carbon reduction, and other energy performance improvement schemes. The Impact Estimator Tool 50001 (IET 50001 Tool) is a computational model developed to assist researchers and policy makers determine the potential impact of ISO 50001 implementation in the industrial and service (commercial) sectors for a given region or country. The IET 50001 Tool is based upon a methodology initially developed by the Lawrence Berkeley National Laboratory that has been improved upon and vetted by a group of international researchers. By using a commonly accepted and transparent methodology, users of the IET 50001 Tool can easily and clearly communicate the potential impact of ISO 50001 for a region or country.

  17. Robot-assisted hysterectomy for endometrial and cervical cancers: a systematic review.

    PubMed

    Nevis, Immaculate F; Vali, Bahareh; Higgins, Caroline; Dhalla, Irfan; Urbach, David; Bernardini, Marcus Q

    2017-03-01

    Total and radical hysterectomies are the most common treatment strategies for early-stage endometrial and cervical cancers, respectively. Surgical modalities include open surgery, laparoscopy, and more recently, minimally invasive robot-assisted surgery. We searched several electronic databases for randomized controlled trials and observational studies with a comparison group, published between 2009 and 2014. Our outcomes of interest included both perioperative and morbidity outcomes. We included 35 observational studies in this review. We did not find any randomized controlled trials. The quality of evidence for all reported outcomes was very low. For women with endometrial cancer, we found a reduction in estimated blood loss with robot-assisted surgery compared to both laparoscopy and open surgery. There was a reduction in length of hospital stay with robot-assisted surgery compared to open surgery but not laparoscopy. There was no difference in total lymph node removal between the three modalities. There was no difference in the rate of overall complications between the robot-assisted technique and laparoscopy. For women with cervical cancer, there were no differences in estimated blood loss or removal of lymph nodes between the robot-assisted and laparoscopic procedures. Compared to laparotomy, robot-assisted hysterectomy for cervical cancer showed an overall reduction in estimated blood loss. Although robot-assisted hysterectomy is clinically effective for the treatment of both endometrial and cervical cancers, methodologically rigorous studies are lacking to draw definitive conclusions.

  18. Response surface methodology as a tool for modeling and optimization of Bacillus subtilis spores inactivation by UV/ nano-Fe0 process for safe water production.

    PubMed

    Yousefzadeh, Samira; Matin, Atiyeh Rajabi; Ahmadi, Ehsan; Sabeti, Zahra; Alimohammadi, Mahmood; Aslani, Hassan; Nabizadeh, Ramin

    2018-04-01

    One of the most important aspects of environmental issues is the demand for clean and safe water. Meanwhile, the disinfection process is one of the most important steps in safe water production. The present study aims at estimating the performance of UV, nano zero-valent iron particles (nZVI, nano-Fe0), and UV treatment with the addition of nZVI (combined process) for the inactivation of Bacillus subtilis spores. Effects of different factors on inactivation, including contact time, initial nZVI concentration, UV irradiance and various aeration conditions, were investigated. Response surface methodology, based on a five-level, two-variable central composite design, was used to optimize target microorganism reduction and the experimental parameters. The results indicated that the disinfection time had the greatest positive impact on disinfection ability among the different selected independent variables. According to the results, it can be concluded that microbial reduction by UV alone was more effective than nZVI, while the combined UV/nZVI process demonstrated the maximum log reduction. The optimum reduction of about 4 logs was observed at 491 mg/L of nZVI and 60 min of contact time when spores were exposed to UV radiation under deaerated conditions. Therefore, the UV/nZVI process can be suggested as a reliable method for Bacillus subtilis spores inactivation. Copyright © 2018. Published by Elsevier Ltd.
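
    As an illustration of the response-surface step, the sketch below fits a full quadratic model to a two-factor central composite design (coded dose and contact-time levels) by least squares. The design responses are invented, not the study's measurements, and the model form is the generic second-order RSM polynomial rather than the authors' fitted equation.

        # Sketch of a response-surface fit for a two-factor central composite design:
        # quadratic model of log reduction vs nZVI dose and contact time.  The design
        # points and responses below are invented, not the study's measurements.
        import numpy as np

        # Coded CCD levels (-alpha, -1, 0, +1, +alpha) for dose (x1) and time (x2).
        pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                        [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],
                        [0, 0], [0, 0], [0, 0], [0, 0], [0, 0]])
        log_reduction = np.array([1.2, 2.0, 2.4, 3.6, 1.0, 3.1, 1.5, 3.3,
                                  2.8, 2.7, 2.9, 2.8, 2.8])   # hypothetical responses

        x1, x2 = pts[:, 0], pts[:, 1]
        # Full quadratic model: b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
        A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
        coef, *_ = np.linalg.lstsq(A, log_reduction, rcond=None)
        print("fitted coefficients:", coef.round(3))

        # Predicted log reduction at the coded factorial point (x1=+1, x2=+1).
        print("prediction at (+1,+1):", float(A[3] @ coef))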

  19. Assessment of NHTSA’s Report “Relationships Between Fatality Risk, Mass, and Footprint in Model Year 2004-2011 Passenger Cars and LTVs” (LBNL Phase 1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenzel, Tom P.

    In its 2012 report NHTSA simulated the effect four fleetwide mass reduction scenarios would have on the change in annual fatalities. NHTSA estimated that the most aggressive of these scenarios (reducing mass 5.2% in heavier light trucks and 2.6% in all other vehicle types except lighter cars) would result in a small reduction in societal fatalities. LBNL replicated the methodology NHTSA used to simulate six mass reduction scenarios, including the mass reductions recommended in the 2015 NRC committee report, and estimated in 2021 and 2025 by EPA in the TAR, using the updated data through 2012. The analysis indicates that the estimated change in fatalities under each scenario based on the updated analysis is comparable to that in the 2012 analysis, but less beneficial or more detrimental than that in the 2016 analysis. For example, an across the board 100-lb reduction in mass would result in an estimated 157 additional annual fatalities based on the 2012 analysis, but would result in only an estimated 91 additional annual fatalities based on the 2016 analysis, and an additional 87 fatalities based on the current analysis. The mass reductions recommended by the 2015 NRC committee report would result in a 224 increase in annual fatalities in the 2012 analysis, a 344 decrease in annual fatalities in the 2016 analysis, and a 141 increase in fatalities in the current analysis. The mass reductions EPA estimated for 2025 in the TAR would result in a 203 decrease in fatalities based on the 2016 analysis, but an increase of 39 fatalities based on the current analysis. These results support NHTSA’s conclusion from its 2012 study that, when footprint is held fixed, “no judicious combination of mass reductions in the various classes of vehicles results in a statistically significant fatality increase and many potential combinations are safety-neutral as point estimates.” Like the previous NHTSA studies, this updated report concludes that the estimated effect of mass reduction while maintaining footprint on societal U.S. fatality risk is small, and not statistically significant at the 95% or 90% confidence level for all vehicle types based on the jack-knife method NHTSA used. This report also finds that the estimated effects of other control variables, such as vehicle type, specific safety technologies, and crash conditions such as whether the crash occurred at night, in a rural county, or on a high-speed road, on risk are much larger, in some cases two orders of magnitude larger, than the estimated effect of mass or footprint reduction on risk. Finally, this report shows that after accounting for the many vehicle, driver, and crash variables NHTSA used in its regression analyses, there remains a wide variation in risk by vehicle make and model, and this variation is unrelated to vehicle mass. Although the purpose of the NHTSA and LBNL reports is to estimate the effect of vehicle mass reduction on societal risk, this is not exactly what the regression models are estimating. Rather, they are estimating the recent historical relationship between mass and risk, after accounting for most measurable differences between vehicles, drivers, and crash times and locations. In essence, the regression models are comparing the risk of a 2600-lb Dodge Neon with that of a 2500-lb Honda Civic, after attempting to account for all other differences between the two vehicles. The models are not estimating the effect of literally removing 100 pounds from the Neon, leaving everything else unchanged.
    In addition, the analyses are based on the relationship of vehicle mass and footprint on risk for recent vehicle designs (model year 2004 to 2011). These relationships may or may not continue into the future as manufacturers utilize new vehicle designs and incorporate new technologies, such as more extensive use of strong lightweight materials and specific safety technologies. Therefore, throughout this report we use the phrase “the estimated effect of mass (or footprint) reduction on risk” as shorthand for “the estimated change in risk as a function of its relationship to mass (or footprint) for vehicle models of recent design.”

  20. Methodological and hermeneutic reduction - a study of Finnish multiple-birth families.

    PubMed

    Heinonen, Kristiina

    2015-07-01

    To describe reduction as a method in methodological and hermeneutic reduction and the hermeneutic circle using van Manen's principles, with the empirical example of the lifeworlds of multiple-birth families in Finland. Reduction involves several levels that can be distinguished for their methodological usefulness. Researchers can use reduction in different ways and dimensions for their methodological needs. Open interviews with public health nurses, family care workers and parents of twins. The systematic literature and knowledge review shows there were no articles on multiple-birth families that used van Manen's method. This paper presents reduction as a method that uses the hermeneutic circle. The lifeworlds of multiple-birth families consist of three core themes: 'A state of constant vigilance'; 'Ensuring that they can continue to cope'; and 'Opportunities to share with other people'. Reduction allows us to perform deep phenomenological-hermeneutic research and understand people's lifeworlds. It helps to keep research stages separate but also enables a consolidated view. Social care and healthcare professionals have to hear parents' voices better to comprehensively understand their situation; they also need further tools and training to be able to empower parents of twins. The many variations in adapting reduction mean its use can be very complex and confusing. This paper adds to the discussion of phenomenology, hermeneutic study and reduction.

  1. An overview and methodological assessment of systematic reviews and meta-analyses of enhanced recovery programmes in colorectal surgery

    PubMed Central

    Chambers, Duncan; Paton, Fiona; Wilson, Paul; Eastwood, Alison; Craig, Dawn; Fox, Dave; Jayne, David; McGinnes, Erika

    2014-01-01

    Objectives To identify and critically assess the extent to which systematic reviews of enhanced recovery programmes for patients undergoing colorectal surgery differ in their methodology and reported estimates of effect. Design Review of published systematic reviews. We searched the Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effects (DARE) and Health Technology Assessment (HTA) Database from 1990 to March 2013. Systematic reviews of enhanced recovery programmes for patients undergoing colorectal surgery were eligible for inclusion. Primary and secondary outcome measures The primary outcome was length of hospital stay. We assessed changes in pooled estimates of treatment effect over time and how these might have been influenced by decisions taken by researchers as well as by the availability of new trials. The quality of systematic reviews was assessed using the Centre for Reviews and Dissemination (CRD) DARE critical appraisal process. Results 10 systematic reviews were included. Systematic reviews of randomised controlled trials have consistently shown a reduction in length of hospital stay with enhanced recovery compared with traditional care. The estimated effect tended to increase from 2006 to 2010 as more trials were published but has not altered significantly in the most recent review, despite the inclusion of several unique trials. The best estimate appears to be an average reduction of around 2.5 days in primary postoperative length of stay. Differences between reviews reflected differences in interpretation of inclusion criteria, searching and analytical methods or software. Conclusions Systematic reviews of enhanced recovery programmes show a high level of research waste, with multiple reviews covering identical or very similar groups of trials. Where multiple reviews exist on a topic, interpretation may require careful attention to apparently minor differences between reviews. Researchers can help readers by acknowledging existing reviews and through clear reporting of key decisions, especially on inclusion/exclusion and on statistical pooling. PMID:24879828

  2. Estimation of Recurrence of Colorectal Adenomas with Dependent Censoring Using Weighted Logistic Regression

    PubMed Central

    Hsu, Chiu-Hsieh; Li, Yisheng; Long, Qi; Zhao, Qiuhong; Lance, Peter

    2011-01-01

    In colorectal polyp prevention trials, estimation of the rate of recurrence of adenomas at the end of the trial may be complicated by dependent censoring, that is, time to follow-up colonoscopy and dropout may be dependent on time to recurrence. Assuming that the auxiliary variables capture the dependence between recurrence and censoring times, we propose to fit two working models with the auxiliary variables as covariates to define risk groups and then extend an existing weighted logistic regression method for independent censoring to each risk group to accommodate potential dependent censoring. In a simulation study, we show that the proposed method results in both a gain in efficiency and reduction in bias for estimating the recurrence rate. We illustrate the methodology by analyzing a recurrent adenoma dataset from a colorectal polyp prevention trial. PMID:22065985
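
    A simplified stand-in for the weighting idea, assuming a single auxiliary variable that drives both recurrence and the chance of receiving the follow-up colonoscopy: form risk groups from the auxiliary variable, weight observed subjects by the inverse of the within-group observation probability, and fit a weighted logistic model. The data are synthetic and the estimator is a sketch of the logic, not the paper's exact method.

        # Simplified stand-in for the weighted logistic-regression idea: form risk
        # groups from an auxiliary covariate, weight each observed subject by the
        # inverse of the within-group probability of being observed at follow-up, and
        # fit a weighted logistic model for recurrence.  Data are synthetic.
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n = 2000
        aux = rng.normal(size=n)                         # auxiliary variable (e.g. baseline risk score)
        recur = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.8 * aux))))
        # Follow-up colonoscopy depends on the same auxiliary variable, which induces
        # dependent censoring of the recurrence outcome.
        observed = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.0 * aux))))

        df = pd.DataFrame({"aux": aux, "recur": recur, "observed": observed})
        df["risk_group"] = pd.qcut(df["aux"], q=4, labels=False)   # working-model risk groups

        # Inverse-probability-of-observation weights, estimated within each risk group.
        p_obs = df.groupby("risk_group")["observed"].transform("mean")
        obs = df[df["observed"] == 1].copy()
        obs["w"] = 1.0 / p_obs[obs.index]

        model = LogisticRegression()
        model.fit(obs[["aux"]], obs["recur"], sample_weight=obs["w"])
        # Weighted estimate of the marginal recurrence rate at end of trial.
        print("weighted recurrence rate:", round(np.average(obs["recur"], weights=obs["w"]), 3))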

  3. Evaluation and design of a rain gauge network using a statistical optimization method in a severe hydro-geological hazard prone area

    NASA Astrophysics Data System (ADS)

    Fattoruso, Grazia; Longobardi, Antonia; Pizzuti, Alfredo; Molinara, Mario; Marocco, Claudio; De Vito, Saverio; Tortorella, Francesco; Di Francia, Girolamo

    2017-06-01

    Rainfall data collected continuously by a distributed rain gauge network are instrumental to more effective hydro-geological risk forecasting and management services, though the estimated rainfall fields used as input suffer from prediction uncertainty. Optimal rain gauge networks can generate accurate estimated rainfall fields. In this research work, a methodology has been investigated for evaluating an optimal rain gauge network aimed at robust hydrogeological hazard investigations. The rain gauge network of the Sarno River basin (Southern Italy) has been evaluated by optimizing a two-objective function that maximizes the estimation accuracy and minimizes the total metering cost, using the variance reduction algorithm along with the climatological (time-invariant) variogram. This problem has been solved by using an enumerative search algorithm, evaluating the exact Pareto front within an efficient computational time.
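
    A toy version of the two-objective enumerative search is sketched below: every subset of a small candidate gauge set is scored by total metering cost and by a crude accuracy proxy (mean semivariance from grid points to the nearest retained gauge under an exponential climatological variogram, standing in for the full variance-reduction calculation), and the non-dominated subsets form the Pareto front. Coordinates, costs, and variogram parameters are invented.

        # Toy enumerative two-objective search: for every subset of candidate gauges,
        # score (i) total metering cost and (ii) a proxy for estimation accuracy, then
        # keep the non-dominated (Pareto) subsets.  All inputs are invented.
        from itertools import combinations
        import numpy as np

        rng = np.random.default_rng(3)
        gauges = rng.uniform(0, 20, size=(8, 2))         # candidate gauge coordinates (km)
        cost_per_gauge = 1.0
        grid = np.array([[x, y] for x in range(0, 21, 2) for y in range(0, 21, 2)], float)

        def semivariance(h, sill=1.0, rng_km=10.0):
            return sill * (1.0 - np.exp(-h / rng_km))    # exponential climatological variogram

        def accuracy_penalty(subset):
            pts = gauges[list(subset)]
            d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2).min(axis=1)
            return semivariance(d).mean()                # lower = more accurate network

        solutions = []
        for k in range(1, len(gauges) + 1):
            for subset in combinations(range(len(gauges)), k):
                solutions.append((k * cost_per_gauge, accuracy_penalty(subset), subset))

        pareto = [s for s in solutions
                  if not any(o[0] <= s[0] and o[1] <= s[1] and (o[0] < s[0] or o[1] < s[1])
                             for o in solutions)]
        for cost, penalty, subset in sorted(pareto):
            print(f"cost={cost:.0f}  penalty={penalty:.3f}  gauges={subset}")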

  4. Large Area Crop Inventory Experiment (LACIE). Review of LACIE methodology, a project evaluation of technical acceptability

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The author has identified the following significant results. Results indicated that the LANDSAT data and the classification technology can estimate the small grains area within a sample segment accurately and reliably enough to meet the LACIE goals. Overall, the LACIE estimates in a 9 x 11 kilometer segment agree well with ground and aircraft determined area within these segments. The estimated c.v. of the random classification error was acceptably small. These analyses confirmed that bias introduced by various factors, such as LANDSAT spatial resolution, lack of spectral resolution, classifier bias, and repeatability, was not excessive in terms of the required performance criterion. Results of these tests did indicate a difficulty in differentiating wheat from other closely related small grains. However, satisfactory wheat area estimates were obtained through the reduction of the small grain area estimates in accordance with relative amounts of these crops as determined from historic data; these procedures are being further refined.

  5. Comparison of sampling methodologies for nutrient monitoring in streams: uncertainties, costs and implications for mitigation

    NASA Astrophysics Data System (ADS)

    Audet, J.; Martinsen, L.; Hasler, B.; de Jonge, H.; Karydi, E.; Ovesen, N. B.; Kronvang, B.

    2014-07-01

    Eutrophication of aquatic ecosystems caused by excess concentrations of nitrogen and phosphorus may have harmful consequences for biodiversity and poses a health risk to humans via the water supplies. Reduction of nitrogen and phosphorus losses to aquatic ecosystems involves implementation of costly measures, and reliable monitoring methods are therefore essential to select appropriate mitigation strategies and to evaluate their effects. Here, we compare the performances and costs of three methodologies for the monitoring of nutrients in rivers: grab sampling, time-proportional sampling and passive sampling using flow proportional samplers. Assuming time-proportional sampling to be the best estimate of the "true" nutrient load, our results showed that the risk of obtaining wrong total nutrient load estimates by passive samplers is high despite similar costs as the time-proportional sampling. Our conclusion is that for passive samplers to provide a reliable monitoring alternative, further development is needed. Grab sampling was the cheapest of the three methods and was more precise and accurate than passive sampling. We conclude that although monitoring employing time-proportional sampling is costly, its reliability precludes unnecessarily high implementation expenses.

  6. Comparison of sampling methodologies for nutrient monitoring in streams: uncertainties, costs and implications for mitigation

    NASA Astrophysics Data System (ADS)

    Audet, J.; Martinsen, L.; Hasler, B.; de Jonge, H.; Karydi, E.; Ovesen, N. B.; Kronvang, B.

    2014-11-01

    Eutrophication of aquatic ecosystems caused by excess concentrations of nitrogen and phosphorus may have harmful consequences for biodiversity and poses a health risk to humans via water supplies. Reduction of nitrogen and phosphorus losses to aquatic ecosystems involves implementation of costly measures, and reliable monitoring methods are therefore essential to select appropriate mitigation strategies and to evaluate their effects. Here, we compare the performances and costs of three methodologies for the monitoring of nutrients in rivers: grab sampling; time-proportional sampling; and passive sampling using flow-proportional samplers. Assuming hourly time-proportional sampling to be the best estimate of the "true" nutrient load, our results showed that the risk of obtaining wrong total nutrient load estimates by passive samplers is high despite similar costs as the time-proportional sampling. Our conclusion is that for passive samplers to provide a reliable monitoring alternative, further development is needed. Grab sampling was the cheapest of the three methods and was more precise and accurate than passive sampling. We conclude that although monitoring employing time-proportional sampling is costly, its reliability precludes unnecessarily high implementation expenses.

  7. Cigarette Graphic Warning Labels and Smoking Prevalence in Canada: A Critical Examination and Reformulation of the FDA Regulatory Impact Analysis

    PubMed Central

    Huang, Jidong; Chaloupka, Frank J.; Fong, Geoffrey T.

    2014-01-01

    Background The estimated effect of cigarette graphic warning labels (GWLs) on smoking rates is a key input to FDA's regulatory impact analysis (RIA), required by law as part of its rulemaking process. However, evidence on the impact of GWLs on smoking prevalence is scarce. Objective The goal of this paper is to critically analyze FDA's approach to estimating the impact of GWLs on smoking rates in its RIA, and to suggest a path forward to estimating the impact of the adoption of GWLs in Canada on Canadian national adult smoking prevalence. Methods A quasi-experimental methodology was employed to examine the impact of adoption of GWLs in Canada in 2000, using the U.S. as a control. Findings We found a statistically significant reduction in smoking rates after the adoption of GWLs in Canada in comparison to the U.S. Our analyses show that implementation of GWLs in Canada reduced smoking rates by 2.87 to 4.68 percentage points, a relative reduction of 12.1 to 19.6% — 33 to 53 times larger than FDA's estimates of a 0.088 percentage point reduction. We also demonstrated that FDA's estimate of the impact was flawed because it is highly sensitive to the changes in variable selection, model specification, and the time period analyzed. Conclusions Adopting GWLs on cigarette packages reduces smoking prevalence. Applying our analysis of the Canadian GWLs, we estimate that if the U.S. had adopted GWLs in 2012, the number of adult smokers in the U.S. would have decreased by 5.3 to 8.6 million in 2013. Our analysis demonstrates that FDA's approach to estimating the impact of GWLs on smoking rates is flawed. Rectifying these problems before this approach becomes the norm is critical for FDA's effective regulation of tobacco products. PMID:24218057
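
    The quasi-experimental design can be illustrated with a minimal difference-in-differences regression: Canada as the treated unit, the U.S. as the control, and an interaction term picking up the change in prevalence after GWL adoption. The prevalence figures below are synthetic placeholders, not the survey data used in the paper.

        # Minimal difference-in-differences sketch mirroring the quasi-experimental
        # design described above: Canada (treated, GWLs adopted) vs. the U.S. (control),
        # before and after adoption.  The prevalence numbers are synthetic placeholders.
        import numpy as np

        # (year, canada flag, smoking prevalence in %)
        rows = [
            (1999, 1, 25.1), (2000, 1, 24.4), (2001, 1, 21.9), (2002, 1, 21.0),
            (1999, 0, 23.3), (2000, 0, 23.1), (2001, 0, 22.6), (2002, 0, 22.4),
        ]
        data = np.array(rows, dtype=float)
        year, canada, prev = data[:, 0], data[:, 1], data[:, 2]
        post = (year >= 2001).astype(float)      # toy cutoff: GWLs on Canadian packs from 2001

        # OLS with treatment, period and interaction terms; the interaction coefficient
        # is the DiD estimate of the GWL effect in percentage points.
        X = np.column_stack([np.ones(len(prev)), canada, post, canada * post])
        beta, *_ = np.linalg.lstsq(X, prev, rcond=None)
        print(f"DiD estimate of GWL effect: {beta[3]:.2f} percentage points")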

  8. Contribution of Early Detection and Adjuvant Treatments to Breast Cancer Mortality Reduction in Catalonia, Spain

    PubMed Central

    Vilaprinyo, Ester; Puig, Teresa; Rue, Montserrat

    2012-01-01

    Background Reductions in breast cancer (BC) mortality in Western countries have been attributed to the use of screening mammography and adjuvant treatments. The goal of this work was to analyze the contributions of both interventions to the decrease in BC mortality between 1975 and 2008 in Catalonia. Methodology/Principal Findings A stochastic model was used to quantify the contribution of each intervention. Age standardized BC mortality rates for calendar years 1975–2008 were estimated in four hypothetical scenarios: 1) Only screening, 2) Only adjuvant treatment, 3) Both interventions, and 4) No intervention. For the 30–69 age group, observed Catalan BC mortality rates per 100,000 women-year rose from 29.4 in 1975 to 38.3 in 1993, and afterwards continuously decreased to 23.2 in 2008. If neither of the two interventions had been used, in 2008 the estimated BC mortality would have been 43.5, which, compared to the observed BC mortality rate, indicates a 46.7% reduction. In 2008 the reduction attributable to screening was 20.4%, to adjuvant treatments was 15.8% and to both interventions 34.1%. Conclusions/Significance Screening and adjuvant treatments similarly contributed to reducing BC mortality in Catalonia. Mathematical models have been useful to assess the impact of interventions addressed to reduce BC mortality that occurred over nearly the same periods. PMID:22272292

  9. A Probabilistic and Observation Based Methodology to Estimate Small Craft Harbor Vulnerability to Tsunami Events

    NASA Astrophysics Data System (ADS)

    Keen, A. S.; Lynett, P. J.; Ayca, A.

    2016-12-01

    Because of the damage resulting from the 2010 Chile and 2011 Japanese tele-tsunamis, the tsunami risk to the small craft marinas in California has become an important concern. The talk will outline an assessment tool which can be used to assess the tsunami hazard to small craft harbors. The methodology is based on the demand and structural capacity of the floating dock system, composed of floating docks/fingers and moored vessels. The structural demand is determined using a Monte Carlo methodology. Monte Carlo methodology is a probabilistic computational tool in which the governing equations might be well known, but the independent variables of the input (demand) as well as the resisting structural components (capacity) may not be completely known. The Monte Carlo approach assigns a distribution to each variable, draws a random value from each within the described parameters, and generates a single computation. The process then repeats hundreds or thousands of times. The numerical model "Method of Splitting Tsunamis" (MOST) has been used to determine the inputs for the small craft harbors within California. Hydrodynamic model results of current speed, direction and surface elevation were incorporated via the drag equations to provide the basis of the demand term. To determine the capacities, an inspection program was developed to identify common features of structural components. A total of six harbors have been inspected, ranging from Crescent City in Northern California to Oceanside Harbor in Southern California. Results from the inspection program were used to develop component capacity tables which incorporated the basic specifications of each component (e.g. bolt size and configuration) and a reduction factor (which accounts for the component reduction in capacity with age) to estimate in situ capacities. Like the demand term, these capacities are added probabilistically into the model. To date the model has been applied to Santa Cruz Harbor as well as Noyo River. Once calibrated, the model was able to hindcast the damage produced in Santa Cruz Harbor during the 2010 Chile and 2011 Japan events. Results of the Santa Cruz analysis will be presented and discussed.
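
    A minimal Monte Carlo sketch of the demand-versus-capacity comparison described above: sample a drag-equation demand on a mooring component from uncertain current speeds, sample the component capacity with an inspection-style age reduction factor, and count the fraction of draws in which demand exceeds capacity. All distributions and values are illustrative, not MOST model output or harbor inspection data.

        # Monte Carlo sketch of the demand-vs-capacity idea: sample hydrodynamic drag
        # demand on a dock component, sample capacity with an age reduction factor,
        # and count failures.  Distribution parameters are illustrative only.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 100_000

        rho = 1025.0                                   # seawater density, kg/m^3
        cd = rng.normal(1.2, 0.15, n)                  # drag coefficient (uncertain)
        area = rng.normal(8.0, 1.0, n)                 # projected vessel+dock area, m^2
        speed = rng.lognormal(mean=np.log(1.5), sigma=0.4, size=n)   # current speed, m/s

        demand = 0.5 * rho * cd * area * speed ** 2    # drag-equation demand, N

        nominal_capacity = 40_000.0                    # as-new component capacity, N
        age_factor = rng.uniform(0.6, 1.0, n)          # inspection-based reduction factor
        capacity = nominal_capacity * age_factor * rng.normal(1.0, 0.1, n)

        p_fail = np.mean(demand > capacity)
        print(f"estimated failure probability: {p_fail:.3f}")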

  10. Air quality impacts of distributed energy resources implemented in the northeastern United States.

    PubMed

    Carreras-Sospedra, Marc; Dabdub, Donald; Brouwer, Jacob; Knipping, Eladio; Kumar, Naresh; Darrow, Ken; Hampson, Anne; Hedman, Bruce

    2008-07-01

    Emissions from the potential installation of distributed energy resources (DER) in the place of current utility-scale power generators have been introduced into an emissions inventory of the northeastern United States. A methodology for predicting future market penetration of DER that considers economics and emission factors was used to estimate the most likely implementation of DER. The methodology results in spatially and temporally resolved emission profiles of criteria pollutants that are subsequently introduced into a detailed atmospheric chemistry and transport model of the region. The DER technology determined by the methodology includes 62% reciprocating engines, 34% gas turbines, and 4% fuel cells and other emerging technologies. The introduction of DER leads to retirement of 2625 MW of existing power plants for which emissions are removed from the inventory. The air quality model predicts maximum differences in air pollutant concentrations that are located downwind from the central power plants that were removed from the domain. Maximum decreases in hourly peak ozone concentrations due to DER use are 10 ppb and are located over the state of New Jersey. Maximum decreases in 24-hr average fine particulate matter (PM2.5) concentrations reach 3 microg/m3 and are located off the coast of New Jersey and New York. The main contribution to decreased PM2.5 is the reduction of sulfate levels due to significant reductions in direct emissions of sulfur oxides (SO(x)) from the DER compared with the central power plants removed. The scenario presented here represents an accelerated DER penetration case with aggressive emission reductions due to removal of highly emitting power plants. Such scenario provides an upper bound for air quality benefits of DER implementation scenarios.

  11. An integrative cross-design synthesis approach to estimate the cost of illness: an applied case to the cost of depression in Catalonia.

    PubMed

    Bendeck, Murielle; Serrano-Blanco, Antoni; García-Alonso, Carlos; Bonet, Pere; Jordà, Esther; Sabes-Figuera, Ramon; Salvador-Carulla, Luis

    2013-04-01

    Cost of illness (COI) studies are carried out under conditions of uncertainty and with incomplete information. There are concerns regarding their generalisability, accuracy and usability in evidence-informed care. A hybrid methodology is used to estimate the regional costs of depression in Catalonia (Spain) following an integrative approach. The cross-design synthesis included nominal groups and quantitative analysis of both top-down and bottom-up studies, and incorporated primary and secondary data from different sources of information in Catalonia. Sensitivity analysis used probabilistic Monte Carlo simulation modelling. A dissemination strategy was planned, including a standard form adapted from cost-effectiveness studies to summarise methods and results. The method used allows for a comprehensive estimate of the cost of depression in Catalonia. Health officers and decision-makers concluded that this methodology provided useful information and knowledge for evidence-informed planning in mental health. The mix of methods, combined with a simulation model, contributed to a reduction in data gaps and, in conditions of uncertainty, supplied more complete information on the costs of depression in Catalonia. This approach to COI should be differentiated from other COI designs to allow like-with-like comparisons. A consensus on COI typology, procedures and dissemination is needed.

  12. A Life-Cycle Cost Estimating Methodology for NASA-Developed Air Traffic Control Decision Support Tools

    NASA Technical Reports Server (NTRS)

    Wang, Jianzhong Jay; Datta, Koushik; Landis, Michael R. (Technical Monitor)

    2002-01-01

    This paper describes the development of a life-cycle cost (LCC) estimating methodology for air traffic control Decision Support Tools (DSTs) under development by the National Aeronautics and Space Administration (NASA), using a combination of parametric, analogy, and expert opinion methods. There is no one standard methodology and technique that is used by NASA or by the Federal Aviation Administration (FAA) for LCC estimation of prospective Decision Support Tools. Some of the frequently used methodologies include bottom-up, analogy, top-down, parametric, expert judgement, and Parkinson's Law. The developed LCC estimating methodology can be visualized as a three-dimensional matrix where the three axes represent coverage, estimation, and timing. This paper focuses on the three characteristics of this methodology that correspond to the three axes.

  13. CO2 storage capacity estimation: Methodology and gaps

    USGS Publications Warehouse

    Bachu, S.; Bonijoly, D.; Bradshaw, J.; Burruss, R.; Holloway, S.; Christensen, N.P.; Mathiassen, O.M.

    2007-01-01

    Implementation of CO2 capture and geological storage (CCGS) technology at the scale needed to achieve a significant and meaningful reduction in CO2 emissions requires knowledge of the available CO2 storage capacity. CO2 storage capacity assessments may be conducted at various scales-in decreasing order of size and increasing order of resolution: country, basin, regional, local and site-specific. Estimation of the CO2 storage capacity in depleted oil and gas reservoirs is straightforward and is based on recoverable reserves, reservoir properties and in situ CO2 characteristics. In the case of CO2-EOR, the CO2 storage capacity can be roughly evaluated on the basis of worldwide field experience or more accurately through numerical simulations. Determination of the theoretical CO2 storage capacity in coal beds is based on coal thickness and CO2 adsorption isotherms, and recovery and completion factors. Evaluation of the CO2 storage capacity in deep saline aquifers is very complex because four trapping mechanisms that act at different rates are involved and, at times, all mechanisms may be operating simultaneously. The level of detail and resolution required in the data make reliable and accurate estimation of CO2 storage capacity in deep saline aquifers practical only at the local and site-specific scales. This paper follows a previous one on issues and development of standards for CO2 storage capacity estimation, and provides a clear set of definitions and methodologies for the assessment of CO2 storage capacity in geological media. Notwithstanding the defined methodologies suggested for estimating CO2 storage capacity, major challenges lie ahead because of lack of data, particularly for coal beds and deep saline aquifers, lack of knowledge about the coefficients that reduce storage capacity from theoretical to effective and to practical, and lack of knowledge about the interplay between various trapping mechanisms at work in deep saline aquifers. ?? 2007 Elsevier Ltd. All rights reserved.
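
    As a simple illustration of the volumetric style of estimate discussed here, the sketch below computes a theoretical mass capacity from pore volume and in situ CO2 density and scales it down with an efficiency coefficient toward an effective capacity. All inputs are placeholders, and the formula is a generic simplification rather than any of the specific methodologies defined in the paper.

        # Back-of-the-envelope volumetric sketch: theoretical mass capacity as pore
        # volume times CO2 density, scaled down by an efficiency coefficient.
        # Every input value is an illustrative placeholder.

        area_m2 = 50.0e6           # areal extent, m^2 (hypothetical)
        thickness_m = 20.0         # net reservoir thickness, m
        porosity = 0.18
        sw_irr = 0.35              # irreducible water saturation
        rho_co2 = 650.0            # CO2 density at reservoir P,T, kg/m^3 (placeholder)
        efficiency = 0.05          # theoretical -> effective capacity coefficient (placeholder)

        pore_volume = area_m2 * thickness_m * porosity * (1.0 - sw_irr)
        theoretical_mt = pore_volume * rho_co2 / 1.0e9          # megatonnes CO2
        effective_mt = theoretical_mt * efficiency
        print(f"theoretical: {theoretical_mt:.0f} Mt CO2, effective: {effective_mt:.1f} Mt CO2")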

  14. Global Scenarios of Air Pollution until 2030: Combining Air Quality, Climate Change and Energy Access Policies

    NASA Astrophysics Data System (ADS)

    Rao, S.; Dentener, F. J.; Klimont, Z.; Riahi, K.

    2011-12-01

    Outdoor air pollution is increasingly recognized as a significant contributor to global health outcomes. This has led to the implementation of a number of air quality policies worldwide, with total air pollution control costs in 2005 estimated at US$195 billion. More than 80% of the world's population is still found to be exposed to PM2.5 concentrations exceeding WHO air quality guidelines, and health impacts resulting from these exposures are estimated at around 2-5% of the global disease burden. Key questions to answer are: 1) How will pollutant emissions evolve in the future given developments in the energy system, and how will energy and environmental policies influence such emission trends? 2) What implications will this have for resulting exposures and related health outcomes? In order to answer these questions, varying levels of stringency of air quality legislation are analyzed in combination with policies on universal access to clean cooking fuels and limiting global temperature change to 2°C in 2100. Bottom-up methodologies using energy emissions modeling are used to derive sector-based pollutant emission trajectories until 2030. Emissions are spatially downscaled and used in combination with a global transport chemistry model to derive ambient concentrations of PM2.5. Health impacts of these exposures are further estimated consistent with WHO data and methodology. The results indicate that currently planned air quality legislation combined with rising energy demand will be insufficient to control future emissions growth in developing countries. In order to achieve significant reductions in pollutant emissions of the order of more than 50% from 2005 levels and reduce exposures to levels consistent with WHO standards, it will be necessary to increase the stringency of such legislation and combine it with policies on energy access and climate change. Combined policies also result in reductions in air pollution control costs as compared to those associated with current legislation. Health-related co-benefits of combined policies are also found to be large, especially in developing countries: a reduction of more than 50% in pollution-related mortality impacts compared to today.

  15. ELER software - a new tool for urban earthquake loss assessment

    NASA Astrophysics Data System (ADS)

    Hancilar, U.; Tuzun, C.; Yenidogan, C.; Erdik, M.

    2010-12-01

    Rapid loss estimation after potentially damaging earthquakes is critical for effective emergency response and public information. A methodology and software package, ELER-Earthquake Loss Estimation Routine, for rapid estimation of earthquake shaking and losses throughout the Euro-Mediterranean region was developed under the Joint Research Activity-3 (JRA3) of the EC FP6 Project entitled "Network of Research Infrastructures for European Seismology-NERIES". Recently, a new version (v2.0) of ELER software has been released. The multi-level methodology developed is capable of incorporating regional variability and uncertainty originating from ground motion predictions, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships. Although primarily intended for quasi real-time estimation of earthquake shaking and losses, the routine is equally capable of incorporating scenario-based earthquake loss assessments. This paper introduces the urban earthquake loss assessment module (Level 2) of the ELER software, which makes use of the most detailed inventory databases of physical and social elements at risk in combination with the analytical vulnerability relationships and building damage-related casualty vulnerability models for the estimation of building damage and casualty distributions, respectively. Spectral capacity-based loss assessment methodology and its vital components are presented. The analysis methods of the Level 2 module, i.e. Capacity Spectrum Method (ATC-40, 1996), Modified Acceleration-Displacement Response Spectrum Method (FEMA 440, 2005), Reduction Factor Method (Fajfar, 2000) and Coefficient Method (ASCE 41-06, 2006), are applied to the selected building types for validation and verification purposes. The damage estimates are compared to the results obtained from the other studies available in the literature, i.e. SELENA v4.0 (Molina et al., 2008) and ATC-55 (Yang, 2005). An urban loss assessment exercise for a scenario earthquake for the city of Istanbul is conducted and physical and social losses are presented. Damage to the urban environment is compared to the results obtained from similar software, i.e. KOERILoss (KOERI, 2002) and DBELA (Crowley et al., 2004). The European rapid loss estimation tool is expected to help enable effective emergency response, at both the local and global levels, as well as public information.
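
    A generic spectral-displacement fragility evaluation of the kind used inside capacity-spectrum loss methods is sketched below; it is an illustration under assumed parameters, not ELER's actual fragility curves:

        import numpy as np
        from scipy.stats import norm

        def damage_state_probabilities(sd, medians, betas):
            """Discrete damage-state probabilities from lognormal fragility curves.

            sd:      spectral displacement demand at the performance point (e.g. cm)
            medians: median spectral displacements for Slight..Complete damage
            betas:   lognormal standard deviations for each damage state
            """
            # Exceedance probabilities P(DS >= ds_i | sd)
            p_exceed = norm.cdf(np.log(sd / np.asarray(medians)) / np.asarray(betas))
            # Convert exceedance probabilities into probabilities of each discrete state
            p_exceed = np.concatenate(([1.0], p_exceed, [0.0]))
            return -np.diff(p_exceed)   # [none, slight, moderate, extensive, complete]

        # Hypothetical fragility parameters for one building type
        probs = damage_state_probabilities(sd=6.0,
                                           medians=[2.0, 4.5, 9.0, 20.0],
                                           betas=[0.7, 0.7, 0.8, 0.9])
        print(dict(zip(["none", "slight", "moderate", "extensive", "complete"], probs.round(3))))

    Multiplying such probabilities by building counts and per-damage-state casualty rates gives the building damage and casualty distributions referred to above.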

  16. Influence of model reduction on uncertainty of flood inundation predictions

    NASA Astrophysics Data System (ADS)

    Romanowicz, R. J.; Kiczko, A.; Osuch, M.

    2012-04-01

    Derivation of flood risk maps requires an estimation of the maximum inundation extent for a flood with an assumed probability of exceedance, e.g. a 100- or 500-year flood. The results of numerical simulations of flood wave propagation are used to overcome the lack of relevant observations. In practice, deterministic 1-D models are used for flow routing, giving a simplified image of a flood wave propagation process. The solution of a 1-D model depends on the simplifications to the model structure, the initial and boundary conditions and the estimates of model parameters, which are usually identified by solving an inverse problem based on the available noisy observations. Therefore, there is a large uncertainty involved in the derivation of flood risk maps. In this study we examine the influence of model structure simplifications on estimates of flood extent for an urban river reach. As the study area we chose the Warsaw reach of the River Vistula, where nine bridges and several dikes are located. The aim of the study is to examine the influence of water structures on the derived model roughness parameters by comparing model variants with all bridges and dikes taken into account, with a reduced number of structures, and without any water infrastructure. The results indicate that roughness parameter values of a 1-D HEC-RAS model can be adjusted for the reduction in model structure. However, the price paid is reduced model robustness. Apart from a relatively simple question regarding reducing model structure, we also try to answer more fundamental questions regarding the relative importance of input, model structure simplification, parametric and rating curve uncertainty to the uncertainty of flood extent estimates. We apply pseudo-Bayesian methods of uncertainty estimation and Global Sensitivity Analysis as the main methodological tools. The results indicate that the uncertainties have a substantial influence on flood risk assessment. In the paper we present a simplified methodology allowing the influence of that uncertainty to be assessed. This work was supported by National Science Centre of Poland (grant 2011/01/B/ST10/06866).
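
    A minimal pseudo-Bayesian (GLUE-style) weighting sketch is given below; it is assumed for illustration rather than taken from the paper, and the placeholder model stands in for a full 1-D hydraulic run:

        import numpy as np

        rng = np.random.default_rng(0)

        def model_stage(roughness, discharge):
            # Placeholder stand-in for a 1-D hydraulic model run (e.g. HEC-RAS).
            return 2.0 + 0.8 * np.log(discharge) + 5.0 * roughness

        observed_stage, discharge = 8.6, 3000.0

        # Monte Carlo sample of the roughness parameter (assumed prior range)
        n_samples = 5000
        roughness = rng.uniform(0.02, 0.08, n_samples)
        simulated = model_stage(roughness, discharge)

        # Informal likelihood weights from the misfit to the observed stage
        errors = simulated - observed_stage
        weights = np.exp(-0.5 * (errors / 0.3) ** 2)
        weights /= weights.sum()

        # Weighted 5-95% uncertainty bounds for a prediction at a higher discharge
        pred = model_stage(roughness, 4500.0)
        order = np.argsort(pred)
        cdf = np.cumsum(weights[order])
        bounds = np.interp([0.05, 0.95], cdf, pred[order])
        print(f"5-95% predicted stage: {bounds.round(2)} m")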

  17. Flying qualities criteria for GA single pilot IFR operations

    NASA Technical Reports Server (NTRS)

    Bar-Gill, A.

    1982-01-01

    Flying qualities criteria intended to decrease accidents in general aviation (GA) are discussed. The following in-flight research topics are covered: (1) identification of key aerodynamic configurations; (2) implementation of an in-flight simulator; (3) mission matrix design; (4) experimental systems; (5) data reduction; and (6) optimal flight path reconstruction. Some of the accomplished work is reported: an integrated flight testing and flight path reconstruction methodology was developed, high accuracy in trajectory estimation was achieved with an experimental setup, and part of the flight test series was flown.

  18. Observing tectonic plate motions and deformations from satellite laser ranging

    NASA Technical Reports Server (NTRS)

    Christodoulidis, D. C.; Smith, D. E.; Kolenkiewicz, R.; Klosko, S. M.; Torrence, M. H.

    1985-01-01

    The scope of geodesy has been greatly affected by the advent of artificial near-earth satellites. The present paper provides a description of the results obtained from the reduction of data collected with the aid of satellite laser ranging. It is pointed out that dynamic reduction of satellite laser ranging (SLR) data provides very precise positions in three dimensions for the laser tracking network. The vertical components of the stations, through the tracking geometry provided by the global network and the accurate knowledge of orbital dynamics, are uniquely related to the center of mass of the earth. Attention is given to the observations, the methodologies for reducing satellite observations to estimate station positions, Lageos-observed tectonic plate motions, an improved temporal resolution of SLR plate motions, and the SLR vertical datum.

  19. Modeling the formation and properties of traditional and non-traditional secondary organic aerosol: problem formulation and application to aircraft exhaust

    NASA Astrophysics Data System (ADS)

    Jathar, S. H.; Miracolo, M. A.; Presto, A. A.; Donahue, N. M.; Adams, P. J.; Robinson, A. L.

    2012-10-01

    We present a methodology to model secondary organic aerosol (SOA) formation from the photo-oxidation of unspeciated low-volatility organics (semi-volatile and intermediate-volatility organic compounds) emitted by combustion systems. It is formulated using the volatility basis-set approach. Unspeciated low-volatility organics are classified by volatility and then allowed to react with the hydroxyl radical. The new methodology allows for larger reductions in volatility with each oxidation step than previous volatility basis set models, which is more consistent with the addition of common functional groups and similar to those used by traditional SOA models. The methodology is illustrated using data collected during two field campaigns that characterized the atmospheric evolution of dilute gas-turbine engine emissions using a smog chamber. In those experiments, photo-oxidation formed a significant amount of SOA, much of which could not be explained based on the emissions of traditional speciated precursors; we refer to the unexplained SOA as non-traditional SOA (NT-SOA). The NT-SOA can be explained by emissions of unspeciated low-volatility organics measured using sorbents. We show that the parameterization proposed by Robinson et al. (2007) is unable to explain the timing of the NT-SOA formation in the aircraft experiments because it assumes a very modest reduction in volatility of the precursors with every oxidation reaction. In contrast, the new method better reproduces the NT-SOA formation. The NT-SOA yields estimated for the unspeciated low-volatility organic emissions in aircraft exhaust are similar to literature data for large n-alkanes and other low-volatility organics. The estimated yields vary with fuel composition (Jet Propellant-8 versus Fischer-Tropsch) and engine load (ground idle versus non-ground idle). The framework developed here is suitable for modeling SOA formation from emissions from other combustion systems.
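
    A heavily simplified volatility-basis-set sketch is shown below; the bin spacing, rate constant, volatility shift and mass gain are assumptions for illustration, not the parameterization developed in the paper:

        import numpy as np

        # Volatility bins: saturation concentrations C* in ug/m3 (decades)
        c_star = 10.0 ** np.arange(-2, 7)
        mass = np.zeros_like(c_star)
        mass[-3:] = [20.0, 30.0, 50.0]           # assumed low-volatility emissions (ug/m3)

        k_oh, oh_conc, dt = 2e-11, 2e6, 3600.0   # cm3/s, molecules/cm3, s
        shift_decades = 2                        # assumed volatility drop per oxidation step
        mass_gain = 1.075                        # assumed mass increase from added oxygen

        def partition(mass, c_star, seed_oa=1.0):
            """Iteratively solve absorptive partitioning: F_i = (1 + C*_i / C_OA)^-1."""
            c_oa = seed_oa
            for _ in range(50):
                f = 1.0 / (1.0 + c_star / c_oa)
                c_oa = seed_oa + np.sum(f * mass)
            return f * mass, c_oa

        for _ in range(6):                       # six hourly oxidation steps
            reacted = mass * (1.0 - np.exp(-k_oh * oh_conc * dt))
            mass -= reacted
            shifted = np.zeros_like(mass)
            shifted[:-shift_decades] += reacted[shift_decades:] * mass_gain
            shifted[0] += reacted[:shift_decades].sum() * mass_gain  # pile up in lowest bin
            mass += shifted

        particle_mass, c_oa = partition(mass, c_star)
        print(f"organic aerosol after aging: {c_oa:.1f} ug/m3")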

  20. Direct potable reuse microbial risk assessment methodology: Sensitivity analysis and application to State log credit allocations.

    PubMed

    Soller, Jeffrey A; Eftim, Sorina E; Nappier, Sharon P

    2018-01-01

    Understanding pathogen risks is a critically important consideration in the design of water treatment, particularly for potable reuse projects. As an extension to our published microbial risk assessment methodology to estimate infection risks associated with Direct Potable Reuse (DPR) treatment train unit process combinations, herein, we (1) provide an updated compilation of pathogen density data in raw wastewater and dose-response models; (2) conduct a series of sensitivity analyses to consider potential risk implications using updated data; (3) evaluate the risks associated with log credit allocations in the United States; and (4) identify reference pathogen reductions needed to consistently meet currently applied benchmark risk levels. Sensitivity analyses illustrated changes in cumulative annual risk estimates, the significance of which depends on the pathogen group driving the risk for a given treatment train. For example, updates to norovirus (NoV) raw wastewater values and use of a NoV dose-response approach, capturing the full range of uncertainty, increased risks associated with one of the treatment trains evaluated, but not the other. Additionally, compared to traditional log-credit allocation approaches, our results indicate that the risk methodology provides more nuanced information about how consistently public health benchmarks are achieved. Our results indicate that viruses need to be reduced by 14 logs or more to consistently achieve currently applied benchmark levels of protection associated with DPR. The refined methodology, updated model inputs, and log credit allocation comparisons will be useful to regulators considering DPR projects and design engineers as they consider which unit treatment processes should be employed for particular projects. Published by Elsevier Ltd.
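
    A skeletal quantitative microbial risk calculation in the same spirit is sketched below; the densities, log reduction, consumption and dose-response parameter are all assumed placeholders, not the published model inputs:

        import numpy as np

        rng = np.random.default_rng(1)
        n_sims, n_days = 5000, 365

        raw_density = rng.lognormal(np.log(1e5), 1.0, (n_sims, n_days))  # organisms/L (assumed)
        log_reduction = 12.0      # treatment-train log credits (assumed)
        consumption_l = 2.0       # drinking water ingested per day (assumed)
        r = 0.09                  # exponential dose-response parameter (assumed)

        dose = raw_density * 10.0 ** (-log_reduction) * consumption_l
        p_daily = 1.0 - np.exp(-r * dose)
        p_annual = 1.0 - np.prod(1.0 - p_daily, axis=1)

        print(f"median annual infection risk: {np.median(p_annual):.2e}")
        print(f"fraction of simulations above 1e-4: {(p_annual > 1e-4).mean():.2f}")

    Sweeping the assumed log reduction in such a loop is one way to see how many logs are needed before a chosen benchmark is met consistently rather than only on average.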

  1. Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Anissipour, Amir A.; Benson, Russell A.

    1989-01-01

    The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow band notch filters. In order to obtain the required accuracy, a maximum likelihood estimation technique is employed to improve the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.

  2. A Novel Four-Node Quadrilateral Smoothing Element for Stress Enhancement and Error Estimation

    NASA Technical Reports Server (NTRS)

    Tessler, A.; Riggs, H. R.; Dambach, M.

    1998-01-01

    A four-node, quadrilateral smoothing element is developed based upon a penalized-discrete-least-squares variational formulation. The smoothing methodology recovers C1-continuous stresses, thus enabling effective a posteriori error estimation and automatic adaptive mesh refinement. The element formulation originates from a five-node macro-element configuration consisting of four triangular anisoparametric smoothing elements in a cross-diagonal pattern. This element pattern enables a convenient closed-form solution for the degrees of freedom of the interior node, obtained by explicitly enforcing a set of natural edge-wise penalty constraints. The degree-of-freedom reduction scheme leads to a very efficient formulation of a four-node quadrilateral smoothing element without any compromise in robustness and accuracy of the smoothing analysis. The application examples include stress recovery and error estimation in adaptive mesh refinement solutions for an elasticity problem and an aerospace structural component.

  3. Full dose reduction potential of statistical iterative reconstruction for head CT protocols in a predominantly pediatric population

    PubMed Central

    Mirro, Amy E.; Brady, Samuel L.; Kaufman, Robert A.

    2016-01-01

    Purpose To implement the maximum level of statistical iterative reconstruction that can be used to establish dose-reduced head CT protocols in a primarily pediatric population. Methods Select head examinations (brain, orbits, sinus, maxilla and temporal bones) were investigated. Dose-reduced head protocols using an adaptive statistical iterative reconstruction (ASiR) were compared for image quality with the original filtered back projection (FBP) reconstructed protocols in phantom using the following metrics: image noise frequency (change in perceived appearance of noise texture), image noise magnitude, contrast-to-noise ratio (CNR), and spatial resolution. Dose reduction estimates were based on computed tomography dose index (CTDIvol) values. Patient CTDIvol and image noise magnitude were assessed in 737 pre- and post-dose-reduction examinations. Results Image noise texture was acceptable up to 60% ASiR for the Soft reconstruction kernel (at both 100 and 120 kVp), and up to 40% ASiR for the Standard reconstruction kernel. Implementation of 40% and 60% ASiR led to an average reduction in CTDIvol of 43% for brain, 41% for orbits, 30% for maxilla, 43% for sinus, and 42% for temporal bone protocols for patients between 1 month and 26 years, while maintaining an average noise magnitude difference of 0.1% (range: −3% to 5%), improving CNR of low contrast soft tissue targets, and improving spatial resolution of high contrast bony anatomy, as compared to FBP. Conclusion This study demonstrates a methodology for maximizing patient dose reduction while maintaining image quality using statistical iterative reconstruction for a primarily pediatric population undergoing head CT examination. PMID:27056425

  4. QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.

    PubMed

    Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy

    We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method-named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)-for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interests. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.

  5. Policy implications of uncertainty in modeled life-cycle greenhouse gas emissions of biofuels.

    PubMed

    Mullins, Kimberley A; Griffin, W Michael; Matthews, H Scott

    2011-01-01

    Biofuels have received legislative support recently in California's Low-Carbon Fuel Standard and the Federal Energy Independence and Security Act. Both present new fuel types, but neither provides methodological guidelines for dealing with the inherent uncertainty in evaluating their potential life-cycle greenhouse gas emissions. Emissions reductions are based on point estimates only. This work demonstrates the use of Monte Carlo simulation to estimate life-cycle emissions distributions from ethanol and butanol from corn or switchgrass. Life-cycle emissions distributions for each feedstock and fuel pairing modeled span an order of magnitude or more. Using a streamlined life-cycle assessment, corn ethanol emissions range from 50 to 250 g CO2e/MJ, for example, and each feedstock-fuel pathway studied shows some probability of greater emissions than a distribution for gasoline. Potential GHG emissions reductions from displacing fossil fuels with biofuels are difficult to forecast given this high degree of uncertainty in life-cycle emissions. This uncertainty is driven by the importance and uncertainty of indirect land use change emissions. Incorporating uncertainty in the decision making process can illuminate the risks of policy failure (e.g., increased emissions), and a calculated risk of failure due to uncertainty can be used to inform more appropriate reduction targets in future biofuel policies.
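
    A minimal Monte Carlo life-cycle sketch of this kind is shown below; the stage-level distributions are invented for illustration and are not the study's parameters:

        import numpy as np

        rng = np.random.default_rng(7)
        n = 50_000

        # Assumed per-MJ contributions for a corn ethanol pathway (g CO2e/MJ)
        farming = rng.normal(35, 6, n)
        conversion = rng.normal(30, 5, n)
        land_use_change = rng.lognormal(np.log(40), 0.6, n)   # dominant, highly uncertain term
        coproduct_credit = rng.normal(-12, 3, n)

        biofuel = farming + conversion + land_use_change + coproduct_credit
        gasoline = 94.0   # assumed point value for the fossil baseline (g CO2e/MJ)

        print(f"biofuel 5th-95th percentile: {np.percentile(biofuel, [5, 95]).round(0)} g CO2e/MJ")
        print(f"P(biofuel emissions exceed gasoline) = {(biofuel > gasoline).mean():.2f}")

    Reporting the probability of exceeding the fossil baseline, rather than a single point estimate, is what allows a calculated risk of policy failure to inform reduction targets.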

  6. Definition of 1992 Technology Aircraft Noise Levels and the Methodology for Assessing Airplane Noise Impact of Component Noise Reduction Concepts

    NASA Technical Reports Server (NTRS)

    Kumasaka, Henry A.; Martinez, Michael M.; Weir, Donald S.

    1996-01-01

    This report describes the methodology for assessing the impact of component noise reduction on total airplane system noise. The methodology is intended to be applied to the results of individual study elements of the NASA-Advanced Subsonic Technology (AST) Noise Reduction Program, which will address the development of noise reduction concepts for specific components. Program progress will be assessed in terms of noise reduction achieved, relative to baseline levels representative of 1992 technology airplane/engine design and performance. In this report, the 1992 technology reference levels are defined for assessment models based on four airplane sizes - an average business jet and three commercial transports: a small twin, a medium sized twin, and a large quad. Study results indicate that component changes defined as program final goals for nacelle treatment and engine/airframe source noise reduction would achieve from 6-7 EPNdB reduction of total airplane noise at FAR 36 Stage 3 noise certification conditions for all of the airplane noise assessment models.

  7. Theoretical impacts of a range of major tobacco retail outlet reduction interventions: modelling results in a country with a smoke-free nation goal.

    PubMed

    Pearson, Amber L; van der Deen, Frederieke S; Wilson, Nick; Cobiac, Linda; Blakely, Tony

    2015-03-01

    To inform endgame strategies in tobacco control, this study aimed to estimate the impact of interventions that markedly reduced availability of tobacco retail outlets. The setting was New Zealand, a developed nation where the government has a smoke-free nation goal in 2025. Various legally mandated reductions in outlets that were phased in over 10 years were modelled. Geographic analyses using the road network were used to estimate the distance and time travelled from centres of small areas to the reduced number of tobacco outlets, and from there to calculate increased travel costs for each intervention. Age-specific price elasticities of demand were used to estimate future smoking prevalence. With a law that required a 95% reduction in outlets, the cost of a pack of 20 cigarettes (including travel costs) increased by 20% in rural areas and 10% elsewhere and yielded a smoking prevalence of 9.6% by 2025 (compared with 9.9% with no intervention). The intervention that permitted tobacco sales at only 50% of liquor stores resulted in the largest cost increase (∼$60/pack in rural areas) and the lowest prevalence (9.1%) by 2025. Elimination of outlets within 2 km of schools produced a smoking prevalence of 9.3%. This modelling merges geographic, economic and epidemiological methodologies in a novel way, but the results should be interpreted cautiously and further research is desirable. Nevertheless, the results still suggest that tobacco outlet reduction interventions could modestly contribute to an endgame goal. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
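
    A stylised sketch of the travel-cost and price-elasticity step follows; every value and the function name are assumptions for illustration, not the study's GIS or economic inputs:

        def prevalence_after_outlet_reduction(base_prevalence, pack_price, extra_travel_km,
                                              cost_per_km, time_cost, elasticity):
            """Apply a constant price elasticity to the travel-inclusive pack price."""
            effective_price = pack_price + extra_travel_km * cost_per_km + time_cost
            relative_price_change = (effective_price - pack_price) / pack_price
            return base_prevalence * (1.0 + elasticity * relative_price_change)

        # Hypothetical rural scenario: NZ$20 pack, 10 km extra round trip to the nearest outlet
        new_prev = prevalence_after_outlet_reduction(base_prevalence=0.099, pack_price=20.0,
                                                     extra_travel_km=10.0, cost_per_km=0.30,
                                                     time_cost=1.0, elasticity=-0.4)
        print(f"projected smoking prevalence: {new_prev:.3f}")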

  8. Estimating shallow groundwater availability in small catchments using streamflow recession and instream flow requirements of rivers in South Africa

    NASA Astrophysics Data System (ADS)

    Ebrahim, Girma Y.; Villholth, Karen G.

    2016-10-01

    Groundwater is an important resource for multiple uses in South Africa. Hence, setting limits to its sustainable abstraction while assuring basic human needs is required. Due to prevalent data scarcity related to groundwater replenishment, which is the traditional basis for estimating groundwater availability, the present article presents a novel method for determining allocatable groundwater in quaternary (fourth-order) catchments through information on streamflow. Using established methodologies for assessing baseflow, recession flow, and instream ecological flow requirements, the article develops a combined stepwise methodology to determine annual available groundwater storage volume using linear reservoir theory, essentially linking low flows proportionally to upstream groundwater storages. The approach was trialled for twenty-one perennial and relatively undisturbed catchments with long-term and reliable streamflow records. Using the Desktop Reserve Model, instream flow requirements necessary to meet the present ecological state of the streams were determined, and baseflows in excess of these flows were converted into conservative estimates of allocatable groundwater storages on an annual basis. Results show that groundwater development potential exists in fourteen of the catchments, with upper limits to allocatable groundwater volumes (including present uses) ranging from 0.02 to 3.54 × 10^6 m3 a-1 (0.10-11.83 mm a-1) per catchment. With these volumes secured in 75% of years, variability between years is assumed to be manageable. A significant (R2 = 0.88) correlation between baseflow index and the drainage time scale for the catchments underscores the physical basis of the methodology and also enables the reduction of the procedure by one step, omitting recession flow analysis. The method serves as an important complementary tool for the assessment of the groundwater part of the Reserve and the Groundwater Resource Directed Measures in South Africa and could be adapted and applied elsewhere.
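
    A condensed sketch of the stepwise logic, with assumed numbers, is given below: linear reservoir theory relates storage to baseflow through the drainage time scale, and only baseflow above the ecological flow requirement is treated as allocatable:

        import numpy as np

        def allocatable_groundwater(baseflow_m3s, efr_m3s, drainage_timescale_days):
            """Annual allocatable groundwater volume for one catchment (m3/a).

            Linear reservoir: storage S = k * Q, with k the drainage time scale.
            Only storage sustained by baseflow in excess of the instream
            ecological flow requirement (EFR) is counted as allocatable.
            """
            excess_baseflow = np.maximum(baseflow_m3s - efr_m3s, 0.0)
            return drainage_timescale_days * 86400.0 * excess_baseflow

        # Hypothetical catchment: baseflow 0.6 m3/s, EFR 0.45 m3/s, drainage time scale 45 days
        vol = allocatable_groundwater(0.6, 0.45, 45.0)
        print(f"allocatable storage: {vol / 1e6:.2f} million m3 per year")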

  9. Contribution of cooperative sector recycling to greenhouse gas emissions reduction: a case study of Ribeirão Pires, Brazil.

    PubMed

    King, Megan F; Gutberlet, Jutta

    2013-12-01

    Solid waste, including municipal waste and its management, is a major challenge for most cities and among the key contributors to climate change. Greenhouse gas emissions can be reduced through recovery and recycling of resources from the municipal solid waste stream. In São Paulo, Brazil, recycling cooperatives play a crucial role in providing recycling services including collection, separation, cleaning, stocking, and sale of recyclable resources. The present research attempts to measure the greenhouse gas emission reductions achieved by the recycling cooperative Cooperpires, as well as highlight its socioeconomic benefits. Methods include participant observation, structured interviews, questionnaire application, and greenhouse gas accounting of recycling using a Clean Development Mechanism methodology. The results show that recycling cooperatives can achieve important energy savings and reductions in greenhouse gas emissions, and suggest there is an opportunity for Cooperpires and other similar recycling groups to participate in the carbon credit market. Based on these findings, the authors created a simple greenhouse gas accounting calculator for recyclers to estimate their emissions reductions. Copyright © 2013 Elsevier Ltd. All rights reserved.
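
    A bare-bones version of such a calculator is sketched below; the avoided-emission factors are placeholders, not the Clean Development Mechanism values applied in the study:

        # Hypothetical avoided-emission factors (t CO2e avoided per tonne recycled),
        # i.e. virgin-material production minus recycling-process emissions.
        AVOIDED_FACTORS = {
            "paper": 0.9,
            "cardboard": 0.7,
            "PET": 1.6,
            "aluminium": 9.0,
            "glass": 0.3,
        }

        def avoided_emissions(tonnes_by_material):
            """Total avoided greenhouse gas emissions (t CO2e) for a recycling cooperative."""
            return sum(AVOIDED_FACTORS[m] * t for m, t in tonnes_by_material.items())

        monthly_recovery = {"paper": 12.0, "cardboard": 18.0, "PET": 3.5,
                            "aluminium": 0.4, "glass": 6.0}
        print(f"avoided emissions: {avoided_emissions(monthly_recovery):.1f} t CO2e per month")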

  10. Application of Autonomous Smart Inverter Volt-VAR Function for Voltage Reduction Energy Savings and Power Quality in Electric Distribution Systems: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Fei; Nagarajan, Adarsh; Baggu, Murali

    This paper evaluates the impact of the smart inverter Volt-VAR function on voltage reduction energy savings and power quality in electric power distribution systems. A methodology to implement voltage reduction optimization was developed by controlling the substation LTC and capacitor banks and having smart inverters participate through their autonomous Volt-VAR control. In addition, a power quality scoring methodology was proposed and used to quantify the effect on power distribution system power quality. All of these methodologies were applied to a utility distribution system model to evaluate the voltage reduction energy savings and power quality under various PV penetrations and smart inverter densities.
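
    A minimal autonomous Volt-VAR droop sketch is shown below; the curve breakpoints and inverter rating are assumptions, not the settings evaluated in the paper:

        import numpy as np

        # Piecewise-linear Volt-VAR curve: per-unit voltage -> reactive power as a
        # fraction of rated kVA; positive Q injects VARs, negative Q absorbs them.
        V_POINTS = [0.90, 0.96, 1.00, 1.04, 1.10]
        Q_POINTS = [0.44, 0.00, 0.00, 0.00, -0.44]   # deadband between 0.96 and 1.04 pu

        def volt_var_setpoint(v_pu, s_rated_kva):
            q_fraction = np.interp(v_pu, V_POINTS, Q_POINTS)
            return q_fraction * s_rated_kva

        for v in (0.93, 0.99, 1.06):
            print(f"V = {v:.2f} pu -> Q = {volt_var_setpoint(v, 7.6):+.2f} kVAR")

    In a feeder study, each simulated inverter would evaluate such a curve at its own terminal voltage every control step while the substation LTC and capacitor banks pursue the voltage reduction objective.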

  11. Effect of body composition methodology on heritability estimation of body fatness

    USDA-ARS?s Scientific Manuscript database

    Heritability estimates of human body fatness vary widely and the contribution of body composition methodology to this variability is unknown. The effect of body composition methodology on estimations of genetic and environmental contributions to body fatness variation was examined in 78 adult male ...

  12. Estimating unbiased economies of scale of HIV prevention projects: a case study of Avahan.

    PubMed

    Lépine, Aurélia; Vassall, Anna; Chandrashekar, Sudha; Blanc, Elodie; Le Nestour, Alexis

    2015-04-01

    Governments and donors are investing considerable resources in HIV prevention in order to scale up these services rapidly. Given the current economic climate, providers of HIV prevention services increasingly need to demonstrate that these investments offer good 'value for money'. One of the primary routes to achieve efficiency is to take advantage of economies of scale (a reduction in the average cost of a health service as provision scales up), yet empirical evidence on economies of scale is scarce. Methodologically, the estimation of economies of scale is hampered by several statistical issues that prevent causal inference and make estimation complex. In order to estimate unbiased economies of scale when scaling up HIV prevention services, we apply our analysis to one of the few HIV prevention programmes globally delivered at a large scale: the Indian Avahan initiative. We costed the project by collecting data from the 138 Avahan NGOs and the supporting partners in the first four years of its scale-up, between 2004 and 2007. We develop a parsimonious empirical model and apply a system Generalized Method of Moments (GMM) and fixed-effects Instrumental Variable (IV) estimators to estimate unbiased economies of scale. At the programme level, we find that, after controlling for the endogeneity of scale, the scale-up of Avahan has generated high economies of scale. Our findings suggest that average cost reductions per person reached are achievable when scaling up HIV prevention in low- and middle-income countries. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. The Impact of Variable Wind Shear Coefficients on Risk Reduction of Wind Energy Projects

    PubMed Central

    Thomson, Allan; Yoonesi, Behrang; McNutt, Josiah

    2016-01-01

    Estimation of wind speed at proposed hub heights is typically achieved using a wind shear exponent or wind shear coefficient (WSC), which describes the variation in wind speed as a function of height. The WSC is subject to temporal variation at low and high frequencies, ranging from diurnal and seasonal variations to disturbance caused by weather patterns; however, in many cases, it is assumed that the WSC remains constant. This assumption creates significant error in resource assessment, increasing uncertainty in projects and potentially significantly impacting the ability to control grid-connected wind generators. This paper contributes to the body of knowledge relating to the evaluation and assessment of wind speed, with particular emphasis on the development of techniques to improve the accuracy of estimated wind speed above measurement height. It presents an evaluation of a variable wind shear coefficient (VWSC) methodology based on a distribution of wind shear coefficients implemented in real time. The results indicate that a VWSC provides a more accurate estimate of wind speed at hub height, with a 41% to 4% reduction in root mean squared error (RMSE) between predicted and actual wind speeds at heights ranging from 33% to 100% above the highest actual wind measurement. PMID:27872898
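
    The power-law extrapolation at the heart of the comparison is easy to sketch; the synthetic data and exponent distribution below are assumptions, not the paper's measurements:

        import numpy as np

        rng = np.random.default_rng(3)
        z_meas, z_hub = 60.0, 100.0
        n = 24 * 365

        v_meas = rng.weibull(2.0, n) * 8.0                 # speeds at 60 m (assumed)
        alpha_true = rng.normal(0.20, 0.06, n)             # "true" time-varying shear (assumed)
        v_hub_true = v_meas * (z_hub / z_meas) ** alpha_true

        # Constant WSC: a single long-term mean exponent
        v_hub_const = v_meas * (z_hub / z_meas) ** alpha_true.mean()

        # Variable WSC: an exponent estimated per time step (mimicked here by adding noise)
        alpha_var = alpha_true + rng.normal(0.0, 0.02, n)
        v_hub_var = v_meas * (z_hub / z_meas) ** alpha_var

        rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
        print(f"RMSE, constant WSC: {rmse(v_hub_const, v_hub_true):.3f} m/s")
        print(f"RMSE, variable WSC: {rmse(v_hub_var, v_hub_true):.3f} m/s")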

  14. Discussion of and reply to "Waste-to-energy: The next step in the hierarchy after the 3-Rs"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niessen, W.R.; Hahn, J.L.; Jones, K.H.

    1995-11-01

    In their paper, Jeffrey L. Hahn and Kay H. Jones addressed the issue of what the next step should be in the hierarchy after reduction, reuse and recycling (the 3-Rs) with regard to communities managing their solid wastes. The author believes Mr. Hahn and Ms. Jones should provide literature citations or their estimation methodology and assumptions. The author questions the apparent assertion by Mr. Hahn and Ms. Jones that the greenhouse gas emissions of WTE are much lower than those of landfills. The relative magnitude of the maximum-year and average-year non-methane organic carbon emission estimates for landfills is questioned. This article also contains the original authors' reply to the comments and questions.

  15. Methodology for estimating soil carbon for the forest carbon budget model of the United States, 2001

    Treesearch

    L. S. Heath; R. A. Birdsey; D. W. Williams

    2002-01-01

    The largest carbon (C) pool in United States forests is the soil C pool. We present methodology and soil C pool estimates used in the FORCARB model, which estimates and projects forest carbon budgets for the United States. The methodology balances knowledge, uncertainties, and ease of use. The estimates are calculated using the USDA Natural Resources Conservation...

  16. Towards a Novel Integrated Approach for Estimating Greenhouse Gas Emissions in Support of International Agreements

    NASA Astrophysics Data System (ADS)

    Reimann, S.; Vollmer, M. K.; Henne, S.; Brunner, D.; Emmenegger, L.; Manning, A.; Fraser, P. J.; Krummel, P. B.; Dunse, B. L.; DeCola, P.; Tarasova, O. A.

    2016-12-01

    In the recently adopted Paris Agreement the community of signatory states has agreed to limit the future global temperature increase between +1.5 °C and +2.0 °C, compared to pre-industrial times. To achieve this goal, emission reduction targets have been submitted by individual nations (called Intended Nationally Determined Contributions, INDCs). Inventories will be used for checking progress towards these envisaged goals. These inventories are calculated by combining information on specific activities (e.g. passenger cars, agriculture) with activity-related, typically IPCC-sanctioned, emission factors - the so-called bottom-up method. These calculated emissions are reported on an annual basis and are checked by external bodies by using the same method. A second independent method estimates emissions by translating greenhouse gas measurements made at regionally representative stations into regional/global emissions using meteorologically-based transport models. In recent years this so-called top-down approach has been substantially advanced into a powerful tool and emission estimates at the national/regional level have become possible. This method is already used in Switzerland, in the United Kingdom and in Australia to estimate greenhouse gas emissions and independently support the national bottom-up emission inventories within the UNFCCC framework. Examples of the comparison of the two independent methods will be presented and the added-value will be discussed. The World Meteorological Organization (WMO) and partner organizations are currently developing a plan to expand this top-down approach and to expand the globally representative GAW network of ground-based stations and remote-sensing platforms and integrate their information with atmospheric transport models. This Integrated Global Greenhouse Gas Information System (IG3IS) initiative will help nations to improve the accuracy of their country-based emissions inventories and their ability to evaluate the success of emission reductions strategies. This could foster trans-national collaboration on methodologies for estimation of emissions. Furthermore, more accurate emission knowledge will clarify the value of emission reduction efforts and could encourage countries to strengthen their reduction pledges.

  17. K-Means Subject Matter Expert Refined Topic Model Methodology

    DTIC Science & Technology

    2017-01-01

    Report documentation fragments only: "K-Means Subject Matter Expert Refined Topic Model Methodology: Topic Model Estimation via K-Means", U.S. Army TRADOC Analysis Center-Monterey, 700 Dyer Road..., January 2017; authors include Theodore T. Allen, Ph.D. and Zhenhuan...; contract number W9124N-15-P-0022.

  18. A validated methodology for determination of laboratory instrument computer interface efficacy

    NASA Astrophysics Data System (ADS)

    1984-12-01

    This report is intended to provide a methodology for determining when, and for which instruments, direct interfacing of laboratory instruments and laboratory computers is beneficial. This methodology has been developed to assist the Tri-Service Medical Information Systems Program Office in making future decisions regarding laboratory instrument interfaces. We have calculated the time savings required to reach a break-even point for a range of instrument interface prices and corresponding average annual costs. The break-even analyses used empirical data to estimate the number of data points run per day that are required to meet the break-even point. The results indicate, for example, that at a purchase price of $3,000, an instrument interface will be cost-effective if the instrument is utilized for at least 154 data points per day if operated in the continuous mode, or 216 points per day if operated in the discrete mode. Although this model can help to ensure that instrument interfaces are cost effective, additional information should be considered in making interface decisions. A reduction in results transcription errors may be a major benefit of instrument interfacing.
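
    A minimal break-even sketch consistent with the analysis described is given below; the labor rate, time saved per result and amortisation period are assumptions, not the report's empirical inputs:

        def breakeven_points_per_day(interface_price, annual_maintenance, amortisation_years,
                                     seconds_saved_per_point, labor_rate_per_hour,
                                     working_days_per_year=260):
            """Data points per day at which an instrument interface pays for itself."""
            annual_cost = interface_price / amortisation_years + annual_maintenance
            value_per_point = seconds_saved_per_point / 3600.0 * labor_rate_per_hour
            return annual_cost / (value_per_point * working_days_per_year)

        # Hypothetical: $3,000 interface, 5-year life, $300/yr upkeep,
        # 20 s of transcription saved per result, $15/hr technician time
        print(f"break-even: {breakeven_points_per_day(3000, 300, 5, 20, 15):.0f} points/day")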

  19. Cost assessment and ecological effectiveness of nutrient reduction options for mitigating Phaeocystis colony blooms in the Southern North Sea: an integrated modeling approach.

    PubMed

    Lancelot, Christiane; Thieu, Vincent; Polard, Audrey; Garnier, Josette; Billen, Gilles; Hecq, Walter; Gypens, Nathalie

    2011-05-01

    Nutrient reduction measures have been already taken by wealthier countries to decrease nutrient loads to coastal waters, in most cases however, prior to having properly assessed their ecological effectiveness and their economic costs. In this paper we describe an original integrated impact assessment methodology to estimate the direct cost and the ecological performance of realistic nutrient reduction options to be applied in the Southern North Sea watershed to decrease eutrophication, visible as Phaeocystis blooms and foam deposits on the beaches. The mathematical tool couples the idealized biogeochemical GIS-based model of the river system (SENEQUE-RIVERSTRAHLER) implemented in the Eastern Channel/Southern North Sea watershed to the biogeochemical MIRO model describing Phaeocystis blooms in the marine domain. Model simulations explore how nutrient reduction options regarding diffuse and/or point sources in the watershed would affect the Phaeocystis colony spreading in the coastal area. The reference and prospective simulations are performed for the year 2000 characterized by mean meteorological conditions, and nutrient reduction scenarios include and compare upgrading of wastewater treatment plants and changes in agricultural practices including an idealized shift towards organic farming. A direct cost assessment is performed for each realistic nutrient reduction scenario. Further the reduction obtained for Phaeocystis blooms is assessed by comparison with ecological indicators (bloom magnitude and duration) and the cost for reducing foam events on the beaches is estimated. Uncertainty brought by the added effect of meteorological conditions (rainfall) on coastal eutrophication is discussed. It is concluded that the reduction obtained by implementing realistic environmental measures on the short-term is costly and insufficient to restore well-balanced nutrient conditions in the coastal area while the replacement of conventional agriculture by organic farming might be an option to consider in the nearby future. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Modeling and Reduction With Applications to Semiconductor Processing

    DTIC Science & Technology

    1999-01-01

    Only front matter and table-of-contents fragments are recoverable from this record: acknowledgements, a list-of-figures entry for a general state-space model reduction methodology, and a passage framing the reduction problem as one of finding a systematic methodology within a given mathematical framework to produce an efficient or optimal trade-off.

  1. The greenhouse gas intensity and potential biofuel production capacity of maize stover harvest in the US Midwest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Curtis D.; Zhang, Xuesong; Reddy, Ashwan D.

    Agricultural residues are important sources of feedstock for a cellulosic biofuels industry that is being developed to reduce greenhouse gas emissions and improve energy independence. While the US Midwest has been recognized as key to providing maize stover for meeting near-term cellulosic biofuel production goals, there is uncertainty that such feedstocks can produce biofuels that meet federal cellulosic standards. Here, we conducted extensive site-level calibration of the Environmental Policy Integrated Climate (EPIC) terrestrial ecosystems model and applied the model at high spatial resolution across the US Midwest to improve estimates of the maximum production potential and greenhouse gas emissions expected from continuous maize residue-derived biofuels. A comparison of methodologies for calculating the soil carbon impacts of residue harvesting demonstrates the large impact of study duration, depth of soil considered, and inclusion of litter carbon in soil carbon change calculations on the estimated greenhouse gas intensity of maize stover-derived biofuels. Using the most representative methodology for assessing long-term residue harvesting impacts, we estimate that only 5.3 billion liters per year (bly) of ethanol, or 8.7% of the near-term US cellulosic biofuel demand, could be met under common no-till farming practices. However, appreciably more feedstock becomes available at modestly higher emissions levels, with potential for 89.0 bly of ethanol production meeting US advanced biofuel standards. Adjustments to management practices, such as adding cover crops to no-till management, will be required to produce sufficient quantities of residue meeting the greenhouse gas emission reduction standard for cellulosic biofuels. Considering the rapid increase in residue availability with modest relaxations in GHG reduction level, it is expected that management practices with modest benefits to soil carbon would allow considerable expansion of potential cellulosic biofuel production.

  2. Impact of Biogenic Emission Uncertainties on the Simulated Response of Ozone and Fine Particulate Matter to Anthropogenic Emission Reductions

    PubMed Central

    Hogrefe, Christian; Isukapalli, Sastry S.; Tang, Xiaogang; Georgopoulos, Panos G.; He, Shan; Zalewsky, Eric E.; Hao, Winston; Ku, Jia-Yeong; Key, Tonalee; Sistla, Gopal

    2011-01-01

    The role of emissions of volatile organic compounds and nitric oxide from biogenic sources is becoming increasingly important in regulatory air quality modeling as levels of anthropogenic emissions continue to decrease and stricter health-based air quality standards are being adopted. However, considerable uncertainties still exist in the current estimation methodologies for biogenic emissions. The impact of these uncertainties on ozone and fine particulate matter (PM2.5) levels for the eastern United States was studied, focusing on biogenic emissions estimates from two commonly used biogenic emission models, the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and the Biogenic Emissions Inventory System (BEIS). Photochemical grid modeling simulations were performed for two scenarios: one reflecting present day conditions and the other reflecting a hypothetical future year with reductions in emissions of anthropogenic oxides of nitrogen (NOx). For ozone, the use of MEGAN emissions resulted in a higher ozone response to hypothetical anthropogenic NOx emission reductions compared with BEIS. Applying the current U.S. Environmental Protection Agency guidance on regulatory air quality modeling in conjunction with typical maximum ozone concentrations, the differences in estimated future year ozone design values (DVF) stemming from differences in biogenic emissions estimates were on the order of 4 parts per billion (ppb), corresponding to approximately 5% of the daily maximum 8-hr ozone National Ambient Air Quality Standard (NAAQS) of 75 ppb. For PM2.5, the differences were 0.1–0.25 μg/m3 in the summer total organic mass component of DVFs, corresponding to approximately 1–2% of the value of the annual PM2.5 NAAQS of 15 μg/m3. Spatial variations in the ozone and PM2.5 differences also reveal that the impacts of different biogenic emission estimates on ozone and PM2.5 levels are dependent on ambient levels of anthropogenic emissions. PMID:21305893

  3. Respondent Techniques for Reduction of Emotions Limiting School Adjustment: A Quantitative Review and Methodological Critique.

    ERIC Educational Resources Information Center

    Misra, Anjali; Schloss, Patrick J.

    1989-01-01

    The critical analysis of 23 studies using respondent techniques for the reduction of excessive emotional reactions in school children focuses on research design, dependent variables, independent variables, component analysis, and demonstrations of generalization and maintenance. Results indicate widespread methodological flaws that limit the…

  4. A new methodology for estimating nuclear casualties as a function of time.

    PubMed

    Zirkle, Robert A; Walsh, Terri J; Disraelly, Deena S; Curling, Carl A

    2011-09-01

    The Human Response Injury Profile (HRIP) nuclear methodology provides an estimate of casualties occurring as a consequence of nuclear attacks against military targets for planning purposes. The approach develops user-defined, time-based casualty and fatality estimates based on progressions of underlying symptoms and their severity changes over time. This paper provides a description of the HRIP nuclear methodology and its development, including inputs, human response and the casualty estimation process.

  5. Cost benefits of advanced software: A review of methodology used at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Joglekar, Prafulla N.

    1993-01-01

    To assist rational investments in advanced software, a formal, explicit, and multi-perspective cost-benefit analysis methodology is proposed. The methodology can be implemented through a six-stage process, which is described and explained. The current practice of cost-benefit analysis at KSC is reviewed in the light of this methodology. The review finds that there is a vicious circle operating. Unsound methods lead to unreliable cost-benefit estimates. Unreliable estimates convince management that cost-benefit studies should not be taken seriously. Then, given external demands for cost-benefit estimates, management encourages software engineers to somehow come up with the numbers for their projects. Lacking the expertise needed to do a proper study, courageous software engineers with vested interests use ad hoc and unsound methods to generate some estimates. In turn, these estimates are unreliable, and the cycle continues. The proposed methodology should help KSC to break out of this cycle.

  6. How to improve the standardization and the diagnostic performance of the fecal egg count reduction test?

    PubMed

    Levecke, Bruno; Kaplan, Ray M; Thamsborg, Stig M; Torgerson, Paul R; Vercruysse, Jozef; Dobson, Robert J

    2018-04-15

    Although various studies have provided novel insights into how to best design, analyze and interpret a fecal egg count reduction test (FECRT), it is still not straightforward to provide guidance that allows improving both the standardization and the analytical performance of the FECRT across a variety of both animal and nematode species. For example, it has been suggested to recommend a minimum number of eggs to be counted under the microscope (not eggs per gram of feces), but we lack the evidence to recommend any number of eggs that would allow a reliable assessment of drug efficacy. Other aspects that need further research are the methodology of calculating uncertainty intervals (UIs; confidence intervals in case of frequentist methods and credible intervals in case of Bayesian methods) and the criteria of classifying drug efficacy into 'normal', 'suspected' and 'reduced'. The aim of this study is to provide complementary insights into the current knowledge, and to ultimately provide guidance in the development of new standardized guidelines for the FECRT. First, data were generated using a simulation in which the 'true' drug efficacy (TDE) was evaluated by the FECRT under varying scenarios of sample size, analytic sensitivity of the diagnostic technique, and level of both intensity and aggregation of egg excretion. Second, the obtained data were analyzed with the aim (i) to verify which classification criteria allow for reliable detection of reduced drug efficacy, (ii) to identify the UI methodology that yields the most reliable assessment of drug efficacy (coverage of TDE) and detection of reduced drug efficacy, and (iii) to determine the required sample size and number of eggs counted under the microscope that optimizes the detection of reduced efficacy. Our results confirm that the currently recommended criteria for classifying drug efficacy are the most appropriate. Additionally, the UI methodologies we tested varied in coverage and ability to detect reduced drug efficacy, thus a combination of UI methodologies is recommended to assess the uncertainty across all scenarios of drug efficacy estimates. Finally, based on our model estimates we were able to determine the required number of eggs to count for each sample size, enabling investigators to optimize the probability of correctly classifying a theoretical TDE while minimizing both financial and technical resources. Copyright © 2018 Elsevier B.V. All rights reserved.
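
    A small simulation in this spirit is sketched below; the sample size, aggregation parameter, egg-count mean and bootstrap settings are all assumptions for illustration:

        import numpy as np

        rng = np.random.default_rng(11)

        def neg_binom(mean, k, size):
            """Negative binomial egg counts via a gamma-Poisson mixture (mean, aggregation k)."""
            lam = rng.gamma(shape=k, scale=mean / k, size=size)
            return rng.poisson(lam)

        n_animals, k = 15, 0.7                   # assumed sample size and aggregation
        true_efficacy = 0.90                     # assumed "true" drug efficacy
        pre = neg_binom(mean=300, k=k, size=n_animals)
        post = neg_binom(mean=300 * (1 - true_efficacy), k=k, size=n_animals)

        fecr = 100.0 * (1.0 - post.mean() / pre.mean())

        # Percentile bootstrap uncertainty interval on the reduction
        boot = []
        for _ in range(5000):
            idx = rng.integers(0, n_animals, n_animals)
            boot.append(100.0 * (1.0 - post[idx].mean() / pre[idx].mean()))
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"FECR = {fecr:.1f}% (95% UI {lo:.1f}-{hi:.1f}%)")

    Repeating such a simulation over grids of sample size and eggs counted, and checking how often the interval leads to the correct 'normal', 'suspected' or 'reduced' classification, is the kind of exercise the study formalises.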

  7. Assessment of the safety benefits of vehicles' advanced driver assistance, connectivity and low level automation systems.

    PubMed

    Yue, Lishengsa; Abdel-Aty, Mohamed; Wu, Yina; Wang, Ling

    2018-08-01

    Connected Vehicle (CV) technologies, together with other Driving Assistance (DA) technologies, are believed to have great effects on traffic operation and safety, and they are expected to impact the future of our cities. However, little research has estimated the exact safety benefits when all vehicles are equipped with these technologies. This paper seeks to fill the gap by using a general crash avoidance effectiveness framework for major CV&DA technologies to make a comprehensive crash reduction estimation. Twenty technologies that were tested in recent studies are summarized, and sensitivity analysis is used to estimate their total crash avoidance effectiveness. The results show that the crash avoidance effectiveness of CV&DA technology is significantly affected by vehicle type and the safety estimation methodology. A 70% crash avoidance rate seems to be the highest effectiveness for the CV&DA technologies operating in the real-world environment. Based on the 2005-2008 U.S. GES Crash Records, this research found that the CV&DA technologies could lead to the reduction of light vehicles' crashes and heavy trucks' crashes by at least 32.99% and 40.88%, respectively. Rear-end crashes for both light vehicles and heavy trucks show the largest expected benefits from the technologies. The paper also studies the effectiveness of Forward Collision Warning technology (FCW) under fog conditions, and the results show that FCW could reduce 35% of the near-crash events under fog conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Predicting the quantifiable impacts of ISO 50001 on climate change mitigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKane, Aimee; Therkelsen, Peter; Scodel, Anna

    The ISO 50001 energy management standard provides a continual improvement framework for organizations to reduce their energy consumption, which, in the industrial and commercial (service) sectors, accounts for nearly 40% of global greenhouse gas emissions. Reducing this energy consumption will be critical for countries to achieve their national greenhouse gas reduction commitments. Several national policies already support ISO 50001; however, there is no transparent, consistent process to estimate the potential impacts of its implementation. This paper presents the ISO 50001 Impacts Methodology, an internationally-developed methodology to calculate these impacts at a national, regional, or global scale suitable for use by policymakers. The recently-formed ISO 50001 Global Impacts Research Network provides a forum for policymakers to refine and encourage use of the methodology. Using this methodology, a scenario with 50% of projected global industrial and service sector energy consumption under ISO 50001 management by 2030 would generate cumulative primary energy savings of approximately 105 EJ, cost savings of nearly US $700 billion (discounted to 2016 net present value), and 6500 million metric tons (Mt) of avoided CO2 emissions. The avoided annual CO2 emissions in 2030 alone are equivalent to removing 210 million passenger vehicles from the road.
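
    A toy version of the scenario arithmetic that an impacts methodology of this kind formalises is sketched below; the coverage ramp, annual savings rate and emission factor are placeholders, not the published coefficients:

        def iso50001_scenario(baseline_energy_ej, coverage_by_year, annual_savings_rate,
                              co2_factor_mt_per_ej):
            """Cumulative primary energy savings (EJ) and avoided CO2 (Mt) for a coverage path.

            coverage_by_year:    fraction of sector energy under ISO 50001 management each year
            annual_savings_rate: assumed continual-improvement saving on covered energy per year
            """
            cumulative_savings_ej = 0.0
            achieved_rate = 0.0
            for coverage in coverage_by_year:
                achieved_rate += coverage * annual_savings_rate   # savings persist and accumulate
                cumulative_savings_ej += baseline_energy_ej * achieved_rate
            return cumulative_savings_ej, cumulative_savings_ej * co2_factor_mt_per_ej

        # Hypothetical: 250 EJ/yr of sector energy, coverage ramping linearly to 50% over 15 years
        coverage_ramp = [0.5 * (y + 1) / 15 for y in range(15)]
        savings_ej, avoided_mt = iso50001_scenario(250.0, coverage_ramp, 0.01, 60.0)
        print(f"cumulative savings: {savings_ej:.0f} EJ, avoided CO2: {avoided_mt:.0f} Mt")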

  9. Does Metformin Reduce Cancer Risks? Methodologic Considerations.

    PubMed

    Golozar, Asieh; Liu, Shuiqing; Lin, Joeseph A; Peairs, Kimberly; Yeh, Hsin-Chieh

    2016-01-01

    The substantial burden of cancer and diabetes and the association between the two conditions has been a motivation for researchers to look for targeted strategies that can simultaneously affect both diseases and reduce their overlapping burden. In the absence of randomized clinical trials, researchers have taken advantage of the availability and richness of administrative databases and electronic medical records to investigate the effects of drugs on cancer risk among diabetic individuals. The majority of these studies suggest that metformin could potentially reduce cancer risk. However, the validity of this purported reduction in cancer risk is limited by several methodological flaws either in the study design or in the analysis. Whether metformin use decreases cancer risk relies heavily on the availability of valid data sources with complete information on confounders, accurate assessment of drug use, appropriate study design, and robust analytical techniques. The majority of the observational studies assessing the association between metformin and cancer risk suffer from methodological shortcomings and efforts to address these issues have been incomplete. Future investigations on the association between metformin and cancer risk should clearly address the methodological issues due to confounding by indication, prevalent user bias, and time-related biases. Although the proposed strategies do not guarantee a bias-free estimate for the association between metformin and cancer, they will reduce synthesis of and reporting of erroneous results.

  10. QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION

    PubMed Central

    Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy

    2016-01-01

    We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method—named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)—for analyzing high-dimensional data. Unlike in the linear setting, where Rayleigh quotient optimization coincides with classification, these two problems are very different in nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tailed distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models, which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of the Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results. PMID:26778864

  11. Effective normalization for copy number variation detection from whole genome sequencing.

    PubMed

    Janevski, Angel; Varadan, Vinay; Kamalakaran, Sitharthan; Banerjee, Nilanjana; Dimitrova, Nevenka

    2012-01-01

    Whole genome sequencing enables a high resolution view of the human genome and provides unique insights into genome structure at an unprecedented scale. A number of tools exist to infer copy number variation (CNV) in the genome. These tools, while validated, also include a number of parameters that are configurable to the genome data being analyzed. The algorithms allow for normalization to account for individual and population-specific effects on individual genome CNV estimates, but the impact of these choices on the estimated CNVs is not well characterized. We evaluate in detail the effect of normalization methodologies in two CNV algorithms, FREEC and CNV-seq, using whole genome sequencing data from 8 individuals spanning four populations. We apply FREEC and CNV-seq to a sequencing data set consisting of 8 genomes. We use multiple configurations corresponding to different read-count normalization methodologies in FREEC, and statistically characterize the concordance of the CNV calls between FREEC configurations and the analogous output from CNV-seq. The normalization methodologies evaluated in FREEC are GC content, mappability, and control genome. We further stratify the concordance analysis within genic, non-genic, and a collection of validated variant regions. The GC content normalization methodology generates the highest number of altered copy number regions. Both mappability and control genome normalization reduce the total number and length of copy number regions. Relative to GC content normalization, mappability normalization yields Jaccard indices in the 0.07 - 0.3 range, whereas control genome normalization yields Jaccard index values around 0.4. The most critical impact of using mappability as a normalization factor is a substantial reduction of deletion CNV calls. The output of CNV-seq, which is based on control genome normalization, resulted in comparable CNV call profiles and substantial agreement in variable gene and CNV region calls. The choice of read-count normalization methodology has a substantial effect on CNV calls, and the use of genomic mappability or an appropriately chosen control genome can optimize the output of CNV analysis.
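    The concordance statistic used here, the Jaccard index over called regions, can be computed from base-pair overlap between two call sets. Below is a minimal sketch on hypothetical interval lists; it illustrates the metric only and is not the FREEC or CNV-seq code.

```python
# Minimal sketch: Jaccard index between two CNV call sets, measured in base pairs.
# The intervals are hypothetical; real calls would come from FREEC / CNV-seq output.
# (The per-base expansion is fine for a toy example, not for genome-scale data.)

def to_bases(regions):
    """Expand (chrom, start, end) half-open intervals into a set of (chrom, pos)."""
    return {(c, p) for c, s, e in regions for p in range(s, e)}

calls_a = [("chr1", 100, 200), ("chr2", 50, 80)]
calls_b = [("chr1", 150, 250), ("chr3", 10, 30)]

a, b = to_bases(calls_a), to_bases(calls_b)
jaccard = len(a & b) / len(a | b)
print(f"Jaccard index: {jaccard:.2f}")
```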

  12. Potential Occupant Injury Reduction in Pre-Crash System Equipped Vehicles in the Striking Vehicle of Rear-end Crashes.

    PubMed

    Kusano, Kristofer D; Gabler, Hampton C

    2010-01-01

    To mitigate the severity of rear-end and other collisions, Pre-Crash Systems (PCS) are being developed. These active safety systems use radar and/or video cameras to determine when a frontal crash, such as a front-to-back rear-end collision, is imminent and can brake autonomously, even with no driver input. Of these PCS features, the effects of autonomous pre-crash braking are estimated. To estimate the maximum potential for injury reduction due to autonomous pre-crash braking in the striking vehicle of rear-end crashes, a methodology is presented for determining 1) the reduction in vehicle crash change in velocity (ΔV) due to PCS braking and 2) the number of injuries that could be prevented due to the reduction in collision severity. Injury reduction was only evaluated for belted drivers, as unbelted drivers have an unknown risk of being thrown out of position. The study was based on 1,406 rear-end striking vehicles from NASS / CDS years 1993 to 2008. PCS parameters were selected from realistic values and varied to examine the effect on system performance. PCS braking authority was varied from 0.5 G to 0.8 G while time to collision (TTC) was held at 0.45 seconds. TTC was then varied from 0.3 seconds to 0.6 seconds while braking authority was held constant at 0.6 G. A constant braking pulse (step function) and a ramp-up braking pulse were used. The study found that autonomous PCS braking could reduce the crash ΔV in rear-end striking vehicles by an average of 12% - 50% and avoid 0% - 14% of collisions, depending on PCS parameters. Autonomous PCS braking could potentially reduce the number of injured belted drivers by 19% to 57%.
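    A minimal kinematic sketch of the first step of such an analysis: constant autonomous braking applied over the time to collision sheds speed before impact. The braking authority and TTC values mirror the ranges quoted above, but the closing speed and the example itself are illustrative, not the paper's NASS/CDS-based reconstruction.

```python
# Illustrative kinematics: speed shed by a constant (step) autonomous braking pulse
# that starts at a given time to collision (TTC). Not the paper's NASS/CDS analysis.

G = 9.81  # m/s^2

def speed_reduction(braking_authority_g, ttc_s):
    """Speed reduction (m/s) from constant braking applied over the TTC."""
    return braking_authority_g * G * ttc_s

closing_speed = 15.0                      # assumed pre-braking closing speed, m/s
dv_cut = speed_reduction(0.6, 0.45)       # 0.6 g of braking for 0.45 s
residual = max(closing_speed - dv_cut, 0.0)
print(f"speed shed: {dv_cut:.2f} m/s, residual closing speed: {residual:.2f} m/s")
```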

  13. Methane emission estimation from landfills in Korea (1978-2004): quantitative assessment of a new approach.

    PubMed

    Kim, Hyun-Sun; Yi, Seung-Muk

    2009-01-01

    Quantifying methane emissions from landfills is important for evaluating measures to reduce greenhouse gas (GHG) emissions. To quantify GHG emissions and identify parameters to which the estimates are sensitive, a new assessment approach consisting of six different scenarios was developed using the Tier 1 (mass balance method) and Tier 2 (first-order decay method) methodologies for GHG estimation from landfills suggested by the Intergovernmental Panel on Climate Change (IPCC). Methane emissions estimated using Tier 1 follow trends in the amount of disposed waste, whereas emissions from Tier 2 increase gradually as disposed waste decomposes over time. The results indicate that the amount of disposed waste and the decay rate for anaerobic decomposition were the decisive parameters for emission estimation using Tier 1 and Tier 2. Across the different scenarios, methane emissions were highest under Scope 1 (scenarios I and II), in which all landfills in Korea were regarded as one landfill. Methane emissions under scenarios III, IV, and V, which separated the dissimilated fraction of degradable organic carbon (DOC(F)) by waste type and/or revised the methane correction factor (MCF) by waste layer, were lower than those estimated under scenarios I and II. This indicates that the methodology of scenario I, which has been used in most previous studies, may lead to an overestimation of methane emissions. Additionally, a separate DOC(F) and a revised MCF were shown to be important parameters for methane emission estimation from landfills, and the MCF revised by waste layer played an important role in emission variations. Therefore, more precise information on each landfill and careful determination of parameter values and characteristics of disposed waste in Korea should be used to accurately estimate methane emissions from landfills.
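    A minimal sketch of an IPCC-style first-order decay (Tier 2) calculation, in which methane generated in a target year depends on waste disposed in earlier years. The parameter values and disposal amounts are illustrative placeholders, not those fitted for Korean landfills.

```python
import math

# Illustrative first-order decay (FOD, IPCC Tier 2-style) methane estimate.
# Parameter values are placeholders, not those used for the Korean landfill study.

DOC, DOC_F, MCF, F, K = 0.15, 0.5, 1.0, 0.5, 0.05  # assumed IPCC-style parameters

def ch4_generated(disposed_by_year, target_year):
    """Methane (same mass units as waste) generated in target_year from past disposal."""
    total = 0.0
    for year, waste in disposed_by_year.items():
        if year >= target_year:
            continue
        age = target_year - year
        decayed_fraction = math.exp(-K * (age - 1)) - math.exp(-K * age)
        total += waste * DOC * DOC_F * MCF * F * (16.0 / 12.0) * decayed_fraction
    return total

disposal = {1995: 1.0e6, 1996: 1.1e6, 1997: 1.2e6}  # hypothetical tonnes per year
print(f"CH4 generated in 2004: {ch4_generated(disposal, 2004):,.0f} tonnes")
```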

  14. A component modes projection and assembly model reduction methodology for articulated, multi-flexible body structures

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.; Tsuha, Walter S.

    1993-01-01

    A two-stage model reduction methodology, combining the classical Component Mode Synthesis (CMS) method and the newly developed Enhanced Projection and Assembly (EP&A) method, is proposed in this research. The first stage of this methodology, called the COmponent Modes Projection and Assembly model REduction (COMPARE) method, involves the generation of CMS mode sets, such as the MacNeal-Rubin mode sets. These mode sets are then used to reduce the order of each component model in the Rayleigh-Ritz sense. The resultant component models are then combined to generate reduced-order system models at various system configurations. A composite mode set which retains important system modes at all system configurations is then selected from these reduced-order system models. In the second stage, the EP&A model reduction method is employed to reduce further the order of the system model generated in the first stage. The effectiveness of the COMPARE methodology has been successfully demonstrated on a high-order, finite-element model of the cruise-configured Galileo spacecraft.

  15. Reduced order modeling and active flow control of an inlet duct

    NASA Astrophysics Data System (ADS)

    Ge, Xiaoqing

    Many aerodynamic applications require the modeling of compressible flows in or around a body, e.g., the design of aircraft, inlet or exhaust duct, wind turbines, or tall buildings. Traditional methods use wind tunnel experiments and computational fluid dynamics (CFD) to investigate the spatial and temporal distribution of the flows. Although they provide a great deal of insight into the essential characteristics of the flow field, they are not suitable for control analysis and design due to the high physical/computational cost. Many model reduction methods have been studied to reduce the complexity of the flow model. There are two main approaches: linearization based input/output modeling and proper orthogonal decomposition (POD) based model reduction. The former captures mostly the local behavior near a steady state, which is suitable to model laminar flow dynamics. The latter obtains a reduced order model by projecting the governing equation onto an "optimal" subspace and is able to model complex nonlinear flow phenomena. In this research we investigate various model reduction approaches and compare them in flow modeling and control design. We propose an integrated model-based control methodology and apply it to the reduced order modeling and active flow control of compressible flows within a very aggressive (length to exit diameter ratio, L/D, of 1.5) inlet duct and its upstream contraction section. The approach systematically applies reduced order modeling, estimator design, sensor placement and control design to improve the aerodynamic performance. The main contribution of this work is the development of a hybrid model reduction approach that attempts to combine the best features of input/output model identification and POD method. We first identify a linear input/output model by using a subspace algorithm. We next project the difference between CFD response and the identified model response onto a set of POD basis. This trajectory is fit to a nonlinear dynamical model to augment the linear input/output model. Thus, the full system is decomposed into a dominant linear subsystem and a low order nonlinear subsystem. The hybrid model is then used for control design and compared with other modeling methods in CFD simulations. Numerical results indicate that the hybrid model accurately predicts the nonlinear behavior of the flow for a 2D diffuser contraction section model. It also performs best in terms of feedback control design and learning control. Since some outputs of interest (e.g., the AIP pressure recovery) are not observable during normal operations, static and dynamic estimators are designed to recreate the information from available sensor measurements. The latter also provides a state estimation for feedback controller. Based on the reduced order models and estimators, different controllers are designed to improve the aerodynamic performance of the contraction section and inlet duct. The integrated control methodology is evaluated with CFD simulations. Numerical results demonstrate the feasibility and efficacy of the active flow control based on reduced order models. Our reduced order models not only generate a good approximation of the nonlinear flow dynamics over a wide input range, but also help to design controllers that significantly improve the flow response. The tools developed for model reduction, estimator and control design can also be applied to wind tunnel experiment.
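    The POD step described above is, in essence, a singular value decomposition of a mean-subtracted snapshot matrix followed by projection onto the leading modes. The following is a generic sketch on synthetic data, not the inlet-duct CFD model or the hybrid identification scheme.

```python
import numpy as np

# Generic POD sketch: extract an r-mode basis from flow snapshots and project.
# Synthetic random data stand in for CFD snapshots of the inlet duct.

rng = np.random.default_rng(0)
n_state, n_snapshots, r = 200, 50, 5
snapshots = rng.standard_normal((n_state, n_snapshots))   # columns = flow snapshots

mean_flow = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_flow, full_matrices=False)
basis = U[:, :r]                                           # leading POD modes

# Reduced-order coordinates of a snapshot, and its reconstruction from r modes.
x_new = snapshots[:, [0]]
a = basis.T @ (x_new - mean_flow)
x_rec = mean_flow + basis @ a

print("captured energy fraction:", float((s[:r] ** 2).sum() / (s ** 2).sum()))
print("reconstruction error:", float(np.linalg.norm(x_new - x_rec)))
```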

  16. Spatial indeterminacy and power sector carbon emissions accounting

    NASA Astrophysics Data System (ADS)

    Jiusto, J. Scott

    Carbon emission indicators are essential for understanding climate change processes, and for motivating and measuring the effectiveness of carbon reduction policy at multiple scales. Carbon indicators also play an increasingly important role in shaping cultural discourses and politics about nature-society relations and the roles of the state, markets and civil society in creating sustainable natural resource practices and just societies. The analytical and political significance of indicators is tied closely to their objective basis: how accurately they account for the places, people, and processes responsible for emissions. In the electric power sector, however, power-trading across geographic boundaries prevents a simple, purely objective spatial attribution of emissions. Using U.S. states as the unit of analysis, three alternative methods of accounting for carbon emissions from electricity use are assessed, each of which is conceptually sound and methodologically rigorous, yet produces radically different estimates of individual state emissions. Each method also implicitly embodies distinctly different incentive structures for states to enact carbon reduction policies. Because none of the three methods can be said to more accurately reflect "true" emissions levels, I argue the best method is that which most encourages states to reduce emissions. Energy and carbon policy processes are highly contested, however, and thus I examine competing interests and perspectives shaping state energy policy. I explore what it means, philosophically and politically, to predicate emissions estimates on both objectively verifiable past experience and subjectively debatable policy prescriptions for the future. Although developed here at the state scale, the issues engaged and the carbon accounting methodology proposed are directly relevant to carbon analysis and policy formation at scales ranging from the local to the international.

  17. Uncertainty quantification applied to the radiological characterization of radioactive waste.

    PubMed

    Zaffora, B; Magistris, M; Saporta, G; Chevalier, J-P

    2017-09-01

    This paper describes the process adopted at the European Organization for Nuclear Research (CERN) to quantify uncertainties affecting the characterization of very-low-level radioactive waste. Radioactive waste is a by-product of the operation of high-energy particle accelerators. Radioactive waste must be characterized to ensure its safe disposal in final repositories. Characterizing radioactive waste means establishing the list of radionuclides together with their activities. The estimated activity levels are compared to the limits given by the national authority of the waste disposal. The quantification of the uncertainty affecting the concentration of the radionuclides is therefore essential to estimate the acceptability of the waste in the final repository but also to control the sorting, volume reduction and packaging phases of the characterization process. The characterization method consists of estimating the activity of produced radionuclides either by experimental methods or statistical approaches. The uncertainties are estimated using classical statistical methods and uncertainty propagation. A mixed multivariate random vector is built to generate random input parameters for the activity calculations. The random vector is a robust tool to account for the unknown radiological history of legacy waste. This analytical technique is also particularly useful to generate random chemical compositions of materials when the trace element concentrations are not available or cannot be measured. The methodology was validated using a waste population of legacy copper activated at CERN. The methodology introduced here represents a first approach for the uncertainty quantification (UQ) of the characterization process of waste produced at particle accelerators. Copyright © 2017 Elsevier Ltd. All rights reserved.
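    A minimal sketch of the uncertainty-propagation idea: sample uncertain input parameters from assumed distributions, push them through an activity calculation, and read off percentiles of the result. The toy activity model and distributions below are hypothetical, not CERN's characterization code.

```python
import numpy as np

# Illustrative Monte Carlo propagation of input uncertainties to a computed activity.
# The toy activity model and parameter distributions are hypothetical placeholders.

rng = np.random.default_rng(42)
n = 100_000

flux = rng.lognormal(mean=np.log(1e10), sigma=0.3, size=n)   # assumed particle flux
cross_section = rng.normal(1.2e-27, 0.1e-27, size=n)         # assumed cm^2
trace_fraction = rng.uniform(1e-6, 5e-6, size=n)             # assumed element fraction

activity = flux * cross_section * trace_fraction * 1e23      # arbitrary toy scaling
low, med, high = np.percentile(activity, [2.5, 50, 97.5])
print(f"activity: {med:.3g} (95% interval {low:.3g} to {high:.3g})")
```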

  18. Evaluating the Population Impact on Racial/Ethnic Disparities in HIV in Adulthood of Intervening on Specific Targets: A Conceptual and Methodological Framework.

    PubMed

    Howe, Chanelle J; Dulin-Keita, Akilah; Cole, Stephen R; Hogan, Joseph W; Lau, Bryan; Moore, Richard D; Mathews, W Christopher; Crane, Heidi M; Drozd, Daniel R; Geng, Elvin; Boswell, Stephen L; Napravnik, Sonia; Eron, Joseph J; Mugavero, Michael J

    2018-02-01

    Reducing racial/ethnic disparities in human immunodeficiency virus (HIV) disease is a high priority. Reductions in HIV racial/ethnic disparities can potentially be achieved by intervening on important intermediate factors. The potential population impact of intervening on intermediates can be evaluated using observational data when certain conditions are met. However, using standard stratification-based approaches commonly employed in the observational HIV literature to estimate the potential population impact in this setting may yield results that do not accurately estimate quantities of interest. Here we describe a useful conceptual and methodological framework for using observational data to appropriately evaluate the impact on HIV racial/ethnic disparities of interventions. This framework reframes relevant scientific questions in terms of a controlled direct effect and estimates a corresponding proportion eliminated. We review methods and conditions sufficient for accurate estimation within the proposed framework. We use the framework to analyze data on 2,329 participants in the CFAR [Centers for AIDS Research] Network of Integrated Clinical Systems (2008-2014) to evaluate the potential impact of universal prescription of and ≥95% adherence to antiretroviral therapy on racial disparities in HIV virological suppression. We encourage the use of the described framework to appropriately evaluate the potential impact of targeted interventions in addressing HIV racial/ethnic disparities using observational data. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. Reference Model 5 (RM5): Oscillating Surge Wave Energy Converter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Y. H.; Jenne, D. S.; Thresher, R.

    This report is an addendum to SAND2013-9040: Methodology for Design and Economic Analysis of Marine Energy Conversion (MEC) Technologies. This report describes an Oscillating Surge Wave Energy Converter (OSWEC) reference model design in a manner complementary to Reference Models 1-4 contained in the above report. A conceptual design for a taut-moored oscillating surge wave energy converter was developed. The design had an annual average electrical power of 108 kilowatts (kW), a rated power of 360 kW, and an intended deployment at water depths between 50 m and 100 m. The study includes structural analysis, power output estimation, a hydraulic power conversion chain system, and mooring designs. The results were used to estimate device capital cost and annual operation and maintenance costs. The device performance and costs were used for the economic analysis, following the methodology presented in SAND2013-9040, which included costs for designing, manufacturing, deploying, and operating commercial-scale MEC arrays of up to 100 devices. The levelized cost of energy estimated for the Reference Model 5 OSWEC, presented in this report, was for a single device and arrays of 10, 50, and 100 units, which enabled the economic analysis to account for cost reductions associated with economies of scale. The baseline commercial levelized cost of energy estimate for the Reference Model 5 device in an array comprised of 10 units is $1.44/kilowatt-hour (kWh), and the value drops to approximately $0.69/kWh for an array of 100 units.
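    A minimal sketch of a levelized-cost-of-energy calculation of the kind applied to these arrays. The fixed charge rate, costs, availability, and per-device energy output below are hypothetical, not the SAND2013-9040 inputs, so the resulting number is illustrative only.

```python
# Illustrative LCOE calculation; all inputs are hypothetical placeholders,
# not the Reference Model 5 cost and performance data.

def lcoe(capex, annual_opex, annual_energy_kwh, fixed_charge_rate=0.10):
    """Levelized cost of energy in $/kWh under a simple fixed-charge-rate model."""
    return (capex * fixed_charge_rate + annual_opex) / annual_energy_kwh

n_devices = 10
capex_per_device = 4.0e6                 # assumed installed capital cost, $
opex_per_device = 2.5e5                  # assumed annual O&M cost, $
aep_per_device_kwh = 108 * 8760 * 0.9    # 108 kW average output, assumed 90% availability

array_lcoe = lcoe(n_devices * capex_per_device,
                  n_devices * opex_per_device,
                  n_devices * aep_per_device_kwh)
print(f"array LCOE: ${array_lcoe:.2f}/kWh")
```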

  20. Methodology for Estimating Total Automotive Manufacturing Costs

    DOT National Transportation Integrated Search

    1983-04-01

    A number of methodologies for estimating manufacturing costs have been developed. This report discusses the different approaches and shows that an approach to estimating manufacturing costs in the automobile industry based on surrogate plants is pref...

  1. Complementary nonparametric analysis of covariance for logistic regression in a randomized clinical trial setting.

    PubMed

    Tangen, C M; Koch, G G

    1999-03-01

    In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is an (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.

  2. Predictions of first passage times in sparse discrete fracture networks using graph-based reductions

    NASA Astrophysics Data System (ADS)

    Hyman, J.; Hagberg, A.; Srinivasan, G.; Mohd-Yusof, J.; Viswanathan, H. S.

    2017-12-01

    We present a graph-based methodology to reduce the computational cost of obtaining first passage times through sparse fracture networks. We derive graph representations of generic three-dimensional discrete fracture networks (DFNs) using the DFN topology and flow boundary conditions. Subgraphs corresponding to the union of the k shortest paths between the inflow and outflow boundaries are identified and transport on their equivalent subnetworks is compared to transport through the full network. The number of paths included in the subgraphs is based on the scaling behavior of the number of edges in the graph with the number of shortest paths. First passage times through the subnetworks are in good agreement with those obtained in the full network, both for individual realizations and in distribution. Accurate estimates of first passage times are obtained with an order of magnitude reduction of CPU time and mesh size using the proposed method.
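    A minimal sketch of the graph reduction described here, using networkx: take the union of the k shortest inflow-to-outflow paths and keep only that subgraph. The toy lattice graph stands in for the graph derived from the DFN topology and boundary conditions.

```python
import itertools
import networkx as nx

# Sketch of the k-shortest-paths reduction on a toy lattice graph; in the DFN setting
# nodes would represent fractures plus inflow/outflow boundaries derived from topology.

G = nx.grid_2d_graph(6, 6)               # stand-in for the DFN-derived graph
source, target = (0, 0), (5, 5)          # stand-ins for inflow / outflow boundary nodes
k = 5

paths = itertools.islice(nx.shortest_simple_paths(G, source, target), k)
kept_nodes = set().union(*map(set, paths))
subnetwork = G.subgraph(kept_nodes).copy()

print(f"full: {G.number_of_nodes()} nodes / {G.number_of_edges()} edges, "
      f"reduced: {subnetwork.number_of_nodes()} nodes / {subnetwork.number_of_edges()} edges")
```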

  3. Predictions of first passage times in sparse discrete fracture networks using graph-based reductions

    NASA Astrophysics Data System (ADS)

    Hyman, Jeffrey D.; Hagberg, Aric; Srinivasan, Gowri; Mohd-Yusof, Jamaludin; Viswanathan, Hari

    2017-07-01

    We present a graph-based methodology to reduce the computational cost of obtaining first passage times through sparse fracture networks. We derive graph representations of generic three-dimensional discrete fracture networks (DFNs) using the DFN topology and flow boundary conditions. Subgraphs corresponding to the union of the k shortest paths between the inflow and outflow boundaries are identified and transport on their equivalent subnetworks is compared to transport through the full network. The number of paths included in the subgraphs is based on the scaling behavior of the number of edges in the graph with the number of shortest paths. First passage times through the subnetworks are in good agreement with those obtained in the full network, both for individual realizations and in distribution. Accurate estimates of first passage times are obtained with an order of magnitude reduction of CPU time and mesh size using the proposed method.

  4. Self-reported hand washing behaviors and foodborne illness: a propensity score matching approach.

    PubMed

    Ali, Mir M; Verrill, Linda; Zhang, Yuanting

    2014-03-01

    Hand washing is a simple and effective but easily overlooked way to reduce cross-contamination and the transmission of foodborne pathogens. In this study, we used the propensity score matching methodology to account for potential selection bias to explore our hypothesis that always washing hands before food preparation tasks is associated with a reduction in the probability of reported foodborne illness. Propensity score matching can simulate random assignment to a condition so that pretreatment observable differences between a treatment group and a control group are homogenous on all the covariates except the treatment variable. Using the U.S. Food and Drug Administration's 2010 Food Safety Survey, we estimated the effect of self-reported hand washing behavior on the probability of self-reported foodborne illness. Our results indicate that reported washing of hands with soap always before food preparation leads to a reduction in the probability of reported foodborne illness.
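    A minimal sketch of 1:1 nearest-neighbor propensity score matching with scikit-learn on synthetic data; it illustrates the general technique on made-up covariates and outcomes, not the Food Safety Survey analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of 1:1 nearest-neighbor propensity score matching (with replacement).
# X: covariates, t: "always washes hands" indicator, y: reported illness indicator.
# All data are synthetic placeholders.

rng = np.random.default_rng(0)
n = 2000
X = rng.standard_normal((n, 3))
t = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)   # selection depends on X
y = (rng.random(n) < 0.15 - 0.05 * t + 0.02 * X[:, 1]).astype(int)

ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]     # propensity scores

treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
matches = [control[np.argmin(np.abs(ps[control] - ps[i]))] for i in treated]
att = y[treated].mean() - y[matches].mean()
print(f"matched difference in illness probability (ATT estimate): {att:.3f}")
```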

  5. Computational circular dichroism estimation for point-of-care diagnostics via vortex half-wave retarders

    NASA Astrophysics Data System (ADS)

    Haider, Shahid A.; Tran, Megan Y.; Wong, Alexander

    2018-02-01

    Observing the circular dichroism (CD) caused by organic molecules in biological fluids can provide powerful indicators of patient health and provide diagnostic clues for treatment. Methods for this kind of analysis involve tabletop devices that weigh tens of kilograms with costs on the order of tens of thousands of dollars, making them prohibitive in point-of-care diagnostic applications. In an effort to reduce the size, cost, and complexity of CD estimation systems for point-of-care diagnostics, we propose a novel method for CD estimation that leverages a vortex half-wave retarder in between two linear polarizers and a two-dimensional photodetector array to provide an overall complexity reduction in the system. This enables the measurement of polarization variations across multiple polarizations after they interact with a biological sample, simultaneously, without the need for mechanical actuation. We further discuss design considerations of this methodology in the context of practical applications to point-of-care diagnostics.

  6. Analysis of electron transfer processes across liquid/liquid interfaces: estimation of free energy of activation using diffuse boundary model.

    PubMed

    Harinipriya, S; Sangaranarayanan, M V

    2006-01-31

    The evaluation of the free energy of activation pertaining to the electron-transfer reactions occurring at liquid/liquid interfaces is carried out employing a diffuse boundary model. The interfacial solvation numbers are estimated using a lattice gas model under the quasichemical approximation. The standard reduction potentials of the redox couples, appropriate inner potential differences, dielectric permittivities, as well as the width of the interface are included in the analysis. The methodology is applied to the reaction between [Fe(CN)6](3-/4-) and [Lu(biphthalocyanine)](3+/4+) at water/1,2-dichloroethane interface. The rate-determining step is inferred from the estimated free energy of activation for the constituent processes. The results indicate that the solvent shielding effect and the desolvation of the reactants at the interface play a central role in dictating the free energy of activation. The heterogeneous electron-transfer rate constant is evaluated from the molar reaction volume and the frequency factor.

  7. Bayes and empirical Bayes methods for reduced rank regression models in matched case-control studies.

    PubMed

    Satagopan, Jaya M; Sen, Ananda; Zhou, Qin; Lan, Qing; Rothman, Nathaniel; Langseth, Hilde; Engel, Lawrence S

    2016-06-01

    Matched case-control studies are popular designs used in epidemiology for assessing the effects of exposures on binary traits. Modern studies increasingly enjoy the ability to examine a large number of exposures in a comprehensive manner. However, several risk factors often tend to be related in a nontrivial way, undermining efforts to identify the risk factors using standard analytic methods due to inflated type-I errors and possible masking of effects. Epidemiologists often use data reduction techniques by grouping the prognostic factors using a thematic approach, with themes deriving from biological considerations. We propose shrinkage-type estimators based on Bayesian penalization methods to estimate the effects of the risk factors using these themes. The properties of the estimators are examined using extensive simulations. The methodology is illustrated using data from a matched case-control study of polychlorinated biphenyls in relation to the etiology of non-Hodgkin's lymphoma. © 2015, The International Biometric Society.

  8. Determination of Time Dependent Virus Inactivation Rates

    NASA Astrophysics Data System (ADS)

    Chrysikopoulos, C. V.; Vogler, E. T.

    2003-12-01

    A methodology is developed for estimating temporally variable virus inactivation rate coefficients from experimental virus inactivation data. The methodology consists of a technique for slope estimation of normalized virus inactivation data in conjunction with a resampling parameter estimation procedure. The slope estimation technique is based on a relatively flexible geostatistical method known as universal kriging. Drift coefficients are obtained by nonlinear fitting of bootstrap samples and the corresponding confidence intervals are obtained by bootstrap percentiles. The proposed methodology yields more accurate time dependent virus inactivation rate coefficients than those estimated by fitting virus inactivation data to a first-order inactivation model. The methodology is successfully applied to a set of poliovirus batch inactivation data. Furthermore, the importance of accurate inactivation rate coefficient determination on virus transport in water saturated porous media is demonstrated with model simulations.
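    A minimal sketch of the resampling step, assuming a simple log-linear fit in place of the universal-kriging slope estimator: bootstrap the (time, log-survival) pairs and take percentile confidence intervals for the inactivation rate. The data are synthetic and the fit is deliberately simplified.

```python
import numpy as np

# Sketch: bootstrap percentile CI for a first-order inactivation rate from batch data.
# Synthetic data; the paper's approach uses universal kriging rather than this simple
# log-linear fit, so this only illustrates the resampling / percentile-CI step.

rng = np.random.default_rng(1)
t = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)      # days (hypothetical)
log_c = -0.35 * t + rng.normal(0, 0.1, t.size)          # ln(C/C0), synthetic

def rate(times, logs):
    return -np.polyfit(times, logs, 1)[0]               # first-order rate (1/day)

boot = [rate(t[idx], log_c[idx])
        for idx in (rng.integers(0, t.size, t.size) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"rate: {rate(t, log_c):.3f} /day, 95% CI ({lo:.3f}, {hi:.3f})")
```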

  9. Comparing estimates of EMEP MSC-W and UFORE models in air pollutant reduction by urban trees.

    PubMed

    Guidolotti, Gabriele; Salviato, Michele; Calfapietra, Carlo

    2016-10-01

    There is growing interest in identifying and quantifying the benefits provided by the presence of trees in the urban environment in order to improve environmental quality in cities. However, evaluating and estimating the efficiency of plants in removing atmospheric pollutants is rather complicated, because of the high number of factors involved and the difficulty of estimating the effect of the interactions between the different components. In this study, the EMEP MSC-W model was implemented to scale down to tree level, allowing its application to an industrial-urban green area in Northern Italy. Moreover, the annual outputs were compared with the outputs of UFORE (nowadays i-Tree), a leading model for urban forest applications. Although the EMEP MSC-W model and UFORE are semi-empirical models designed for different applications, the comparison, based on O3, NO2 and PM10 removal, showed good agreement in the estimates and highlights how the down-scaling methodology presented in this study may offer significant opportunities for further developments.

  10. Implications of raising cigarette excise taxes in Peru.

    PubMed

    Gonzalez-Rozada, Martin; Ramos-Carbajales, Alejandro

    2016-10-01

    To assess how raising cigarette excise taxes in Peru might impact cigarette consumption, and to determine whether higher taxes would be regressive. Total demand price elasticity was estimated by income group using two datasets: quarterly time-series data from 1993-2012 and data from a cross-sectional survey of income and expenses conducted in 2008-2009. A functional form of cigarette demand in Peru was specified using the quarterly dataset, and the demand price elasticity was estimated for the short and long run. Using the second dataset and the Deaton methodology, overall and group-specific elasticities were estimated in a two-step procedure. Demand price elasticity was -0.7, implying that a 10% price increase via a new tax would reduce consumption by 7%. Demand price elasticity estimates by income group suggested that poorer families are not more price sensitive than richer ones, which implies that increasing cigarette taxes could be regressive. Increasing cigarette taxes is the most efficient policy for inducing a reduction in smoking. However, in the case of Peru, an increase in cigarette taxes could be regressive.

  11. Expanded uncertainty estimation methodology in determining the sandy soils filtration coefficient

    NASA Astrophysics Data System (ADS)

    Rusanova, A. D.; Malaja, L. D.; Ivanov, R. N.; Gruzin, A. V.; Shalaj, V. V.

    2018-04-01

    A methodology for estimating the combined standard uncertainty in determining the filtration coefficient of sandy soils has been developed. Laboratory tests were carried out, resulting in determination of the filtration coefficient and an estimate of its combined uncertainty.

  12. The Development of a Methodology for Estimating the Cost of Air Force On-the-Job Training.

    ERIC Educational Resources Information Center

    Samers, Bernard N.; And Others

    The Air Force uses a standardized costing methodology for resident technical training schools (TTS); no comparable methodology exists for computing the cost of on-the-job training (OJT). This study evaluates three alternative survey methodologies and a number of cost models for estimating the cost of OJT for airmen training in the Administrative…

  13. Methodology to Estimate the Quantity, Composition, and ...

    EPA Pesticide Factsheets

    This report, Methodology to Estimate the Quantity, Composition and Management of Construction and Demolition Debris in the US, was developed to expand access to data on CDD in the US and to support research on CDD and sustainable materials management. Since past US EPA CDD estimates have been limited to building-related CDD, a goal in the development of this methodology was to use data originating from CDD facilities and contractors to better capture the current picture of total CDD management, including materials from roads, bridges and infrastructure.

  14. Methodology to estimate particulate matter emissions from certified commercial aircraft engines.

    PubMed

    Wayson, Roger L; Fleming, Gregg G; Lovinelli, Ralph

    2009-01-01

    Today, about one-fourth of U.S. commercial service airports, including 41 of the busiest 50, are either in nonattainment or maintenance areas per the National Ambient Air Quality Standards. U.S. aviation activity is forecasted to triple by 2025, while at the same time, the U.S. Environmental Protection Agency (EPA) is evaluating stricter particulate matter (PM) standards on the basis of documented human health and welfare impacts. Stricter federal standards are expected to impede capacity and limit aviation growth if regulatory mandated emission reductions occur as for other non-aviation sources (i.e., automobiles, power plants, etc.). In addition, strong interest exists as to the role aviation emissions play in air quality and climate change issues. These reasons underpin the need to quantify and understand PM emissions from certified commercial aircraft engines, which has led to the need for a methodology to predict these emissions. Standardized sampling techniques to measure volatile and nonvolatile PM emissions from aircraft engines do not exist. As such, a first-order approximation (FOA) was derived to fill this need based on available information. FOA1.0 only allowed prediction of nonvolatile PM. FOA2.0 was a change to include volatile PM emissions on the basis of the ratio of nonvolatile to volatile emissions. Recent collaborative efforts by industry (manufacturers and airlines), research establishments, and regulators have begun to provide further insight into the estimation of the PM emissions. The resultant PM measurement datasets are being analyzed to refine sampling techniques and progress towards standardized PM measurements. These preliminary measurement datasets also support the continued refinement of the FOA methodology. FOA3.0 disaggregated the prediction techniques to allow for independent prediction of nonvolatile and volatile emissions on a more theoretical basis. The Committee for Aviation Environmental Protection of the International Civil Aviation Organization endorsed the use of FOA3.0 in February 2007. Further commitment was made to improve the FOA as new data become available, until such time the methodology is rendered obsolete by a fully validated database of PM emission indices for today's certified commercial fleet. This paper discusses related assumptions and derived equations for the FOA3.0 methodology used worldwide to estimate PM emissions from certified commercial aircraft engines within the vicinity of airports.

  15. The effect of routine early amniotomy on spontaneous labor: a meta-analysis.

    PubMed

    Brisson-Carroll, G; Fraser, W; Bréart, G; Krauss, I; Thornton, J

    1996-05-01

    To obtain estimates of the effects of amniotomy on the risk of cesarean delivery and on other indicators of maternal and neonatal morbidity (Apgar score less than 7 at 5 minutes, admission to neonatal intensive care unit [NICU]). Published studies were identified through manual and computerized searches using Medline and the Cochrane Collaboration Pregnancy and Childbirth Database. Our search identified ten trials, all published in peer-reviewed journals. Trials were assigned a methodological quality score based on a standardized rating system. Three trials were excluded from the analysis for methodological limitations. Data were abstracted by two trained reviewers. Typical odds ratios (OR) were calculated. Amniotomy was associated with a reduction in labor duration varying from 0.8-2.3 hours. There was a nonstatistically significant increase in the risk of cesarean delivery; OR 1.2, 95% confidence interval (CI) 0.9-1.6. The risk of a 5-minute Apgar score less than 7 was reduced in association with early amniotomy (OR 0.5, 95% CI 0.3-0.9). Groups were similar with respect to other indicators of neonatal status (arterial cord pH, NICU admissions). Routine early amniotomy is associated with both benefits and risks. Benefits include a reduction in labor duration and a possible reduction in abnormal 5-minute Apgar scores. This meta-analysis provides no support for the hypothesis that routine early amniotomy reduces the risk of cesarean delivery. An association between early amniotomy and cesarean delivery for fetal distress was noted in one large trial, suggesting that amniotomy should be reserved for patients with abnormal labor progress.

  16. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  17. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  18. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  19. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  20. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  1. Health impact assessment of decreases in PM10 and ozone concentrations in the Mexico City Metropolitan Area: a basis for a new air quality management program.

    PubMed

    Riojas-Rodríguez, Horacio; Álamo-Hernández, Urinda; Texcalac-Sangrador, José Luis; Romieu, Isabelle

    2014-01-01

    To conduct a health impact assessment (HIA) to quantify health benefits for several PM and O3 air pollution reduction scenarios in the Mexico City Metropolitan Area (MCMA). Results from this HIA will contribute to the scientific support of the MCMA air quality management plan (PROAIRE) for the period 2011-2020. The HIA methodology consisted of four steps: 1) selection of the air pollution reduction scenarios, 2) identification of the at-risk population and health outcomes for the 2005 baseline scenario, 3) selection of concentration-response functions and 4) estimation of health impacts. Reductions of PM10 levels to 20 μg/m³ and O3 levels to 0.050 ppm (98 μg/m³) would prevent 2300 and 400 annual deaths, respectively. The greatest health impact was seen in the over-65 age group and in mortality due to cardiopulmonary and cardiovascular disease. Improved air quality in the MCMA could provide significant health benefits through focusing interventions by exposure zones.
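    A minimal sketch of step 4, combining a log-linear concentration-response function with a baseline mortality rate and an exposed population to get avoided deaths. The coefficient, baseline rate, population, and concentration change below are hypothetical placeholders, not the values used in the MCMA assessment.

```python
import math

# Illustrative health impact calculation for a pollutant reduction scenario.
# The concentration-response coefficient, baseline rate, population, and
# concentration change are hypothetical, not the MCMA assessment inputs.

beta = 0.0008            # assumed log-linear coefficient per ug/m3
baseline_rate = 0.006    # assumed annual all-cause mortality rate
population = 20_000_000  # assumed exposed population
delta_c = 25.0           # assumed reduction in annual mean PM10, ug/m3

attributable_fraction = 1.0 - math.exp(-beta * delta_c)
avoided_deaths = baseline_rate * population * attributable_fraction
print(f"avoided deaths per year: {avoided_deaths:,.0f}")
```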

  2. Integrated Path Differential Absorption Lidar Optimizations Based on Pre-Analyzed Atmospheric Data for ASCENDS Mission Applications

    NASA Technical Reports Server (NTRS)

    Pliutau, Denis; Prasad, Narasimha S.

    2012-01-01

    In this paper a modeling method based on data reductions is investigated, which includes pre-analyzed MERRA atmospheric fields for quantitative estimates of the uncertainties introduced in integrated path differential absorption methods for the sensing of various molecules including CO2. This approach extends our previously developed lidar modeling framework and allows effective on- and offline wavelength optimizations and weighting function analysis to minimize interference effects such as those due to temperature sensitivity and water vapor absorption. The new simulation methodology differs from the previous implementation in that it allows analysis of atmospheric effects over annual spans and the entire Earth coverage, which was achieved through the data reduction methods employed. The effectiveness of the proposed simulation approach is demonstrated with application to the mixing ratio retrievals for the future ASCENDS mission. Independent analysis of multiple accuracy-limiting factors, including the temperature and water vapor interferences and selected system parameters, is further used to identify favorable spectral regions as well as wavelength combinations facilitating the reduction of total errors in the retrieved XCO2 values.

  3. Decline in infection-related morbidities following drug-mediated reductions in the intensity of Schistosoma infection: A systematic review and meta-analysis

    PubMed Central

    Andrade, Gisele; Bertsch, David J.; Gazzinelli, Andrea

    2017-01-01

    Background Since 1984, WHO has endorsed drug treatment to reduce Schistosoma infection and its consequent morbidity. Cross-sectional studies suggest pre-treatment correlation between infection intensity and risk for Schistosoma-related pathology. However, evidence also suggests that post-treatment reduction in intensity may not reverse morbidity because some morbidities occur at all levels of infection, and some reflect permanent tissue damage. The aim of this project was to systematically review evidence on drug-based control of schistosomiasis and to develop a quantitative estimate of the impact of post-treatment reductions in infection intensity on prevalence of infection-associated morbidity. Methodology/Principal findings This review was registered at inception with PROSPERO (CRD42015026080). Studies that evaluated morbidity before and after treatment were identified by online searches and searches of private archives. Post-treatment odds ratios or standardized mean differences were calculated for each outcome, and these were correlated to treatment-related egg count reduction ratios (ERRs) by meta-regression. A greater ERR correlated with greater reduction in odds of most morbidities. Random effects meta-analysis was used to derive summary estimates: after treatment of S. mansoni and S. japonicum, left-sided hepatomegaly was reduced by 54%, right-sided hepatomegaly by 47%, splenomegaly by 37%, periportal fibrosis by 52%, diarrhea by 53%, and blood in stools by 75%. For S. haematobium, hematuria was reduced by 92%, proteinuria by 90%, bladder lesions by 86%, and upper urinary tract lesions by 72%. There were no consistent changes in portal dilation or hemoglobin levels. In sub-group analysis, age, infection status, region, parasite species, and interval to follow-up were associated with meaningful differences in outcome. Conclusion/Significance While there are challenges to implementing therapy for schistosomiasis, and praziquantel therapy is not fully curative, reductions in egg output are significantly correlated with decreased morbidity and can be used to project diminution in disease burden when contemplating more aggressive strategies to minimize infection intensity. PMID:28212414

  4. Methodologies for Estimating Cumulative Human Exposures to Current-Use Pyrethroid Pesticides

    EPA Science Inventory

    We estimated cumulative residential pesticide exposures for a group of nine young children (4–6 years) using three different methodologies developed by the US Environmental Protection Agency and compared the results with estimates derived from measured urinary metabolite concentr...

  5. CONCEPTUAL DESIGNS FOR A NEW HIGHWAY VEHICLE EMISSIONS ESTIMATION METHODOLOGY

    EPA Science Inventory

    The report discusses six conceptual designs for a new highway vehicle emissions estimation methodology and summarizes the recommendations of each design for improving the emissions and activity factors in the emissions estimation process. he complete design reports are included a...

  6. Methods for measuring denitrification: Diverse approaches to a difficult problem

    USGS Publications Warehouse

    Groffman, Peter M; Altabet, Mary A.; Böhlke, J.K.; Butterbach-Bahl, Klaus; David, Mary B.; Firestone, Mary K.; Giblin, Anne E.; Kana, Todd M.; Nielsen , Lars Peter; Voytek, Mary A.

    2006-01-01

    Denitrification, the reduction of the nitrogen (N) oxides, nitrate (NO3−) and nitrite (NO2−), to the gases nitric oxide (NO), nitrous oxide (N2O), and dinitrogen (N2), is important to primary production, water quality, and the chemistry and physics of the atmosphere at ecosystem, landscape, regional, and global scales. Unfortunately, this process is very difficult to measure, and existing methods are problematic for different reasons in different places at different times. In this paper, we review the major approaches that have been taken to measure denitrification in terrestrial and aquatic environments and discuss the strengths, weaknesses, and future prospects for the different methods. Methodological approaches covered include (1) acetylene-based methods, (2) 15N tracers, (3) direct N2 quantification, (4) N2:Ar ratio quantification, (5) mass balance approaches, (6) stoichiometric approaches, (7) methods based on stable isotopes, (8) in situ gradients with atmospheric environmental tracers, and (9) molecular approaches. Our review makes it clear that the prospects for improved quantification of denitrification vary greatly in different environments and at different scales. While current methodology allows for the production of accurate estimates of denitrification at scales relevant to water and air quality and ecosystem fertility questions in some systems (e.g., aquatic sediments, well-defined aquifers), methodology for other systems, especially upland terrestrial areas, still needs development. Comparison of mass balance and stoichiometric approaches that constrain estimates of denitrification at large scales with point measurements (made using multiple methods), in multiple systems, is likely to propel more improvement in denitrification methods over the next few years.

  7. Landslide Risk: Economic Valuation in The North-Eastern Zone of Medellin City

    NASA Astrophysics Data System (ADS)

    Vega, Johnny Alexander; Hidalgo, César Augusto; Johana Marín, Nini

    2017-10-01

    Natural disasters of a geodynamic nature can cause enormous economic and human losses. The economic costs of a landslide disaster include relocation of communities and physical repair of urban infrastructure. However, when performing a quantitative risk analysis, the indirect economic consequences of such an event are generally not taken into account. A probabilistic methodology is proposed that considers several scenarios of hazard and vulnerability to measure the magnitude of the landslide and to quantify the economic costs. With this approach, it is possible to carry out a quantitative evaluation of landslide risk, allowing the calculation of economic losses before a potential disaster in an objective, standardized and reproducible way, taking into account the uncertainty of the building costs in the study zone. The possibility of comparing different scenarios facilitates the urban planning process, the optimization of interventions to reduce risk to acceptable levels, and an assessment of economic losses according to the magnitude of the damage. For the development and explanation of the proposed methodology, a simple case study is presented, located in the north-eastern zone of the city of Medellín. This area has particular geomorphological characteristics and is also characterized by the presence of several buildings in poor structural condition. The proposed methodology makes it possible to estimate the probable economic losses from earthquake-induced landslides. The resulting estimate shows that structural intervention in the buildings reduces the total landslide risk by about 21%.
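    A minimal sketch of the probabilistic loss aggregation such an approach implies: weight scenario losses by their annual probabilities to obtain an expected annual loss, with and without intervention. All probabilities, losses, and the mitigation factor are hypothetical, not the Medellín case-study values.

```python
# Illustrative expected-loss aggregation over hazard/vulnerability scenarios.
# Probabilities, losses, and the mitigation factor are hypothetical placeholders.

scenarios = [
    {"annual_prob": 0.010, "loss_usd": 12e6},   # frequent, moderate landslide
    {"annual_prob": 0.002, "loss_usd": 60e6},   # rare, severe landslide
]

def expected_annual_loss(scens, mitigation_factor=1.0):
    """Sum of probability-weighted losses; mitigation_factor scales the losses."""
    return sum(s["annual_prob"] * s["loss_usd"] * mitigation_factor for s in scens)

base = expected_annual_loss(scenarios)
retrofit = expected_annual_loss(scenarios, mitigation_factor=0.79)  # assumed ~21% reduction
print(f"expected annual loss: {base:,.0f} USD; after intervention: {retrofit:,.0f} USD")
```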

  8. Adaptation to hydrological extremes through insurance: a financial fund simulation model under changing scenarios

    NASA Astrophysics Data System (ADS)

    Guzman, Diego; Mohor, Guilherme; Câmara, Clarissa; Mendiondo, Eduardo

    2017-04-01

    Research from around the world links global environmental change with increased vulnerability to extreme events, such as excess or scarce precipitation (floods and droughts). Hydrological disasters have caused increasing losses in recent years. Thus, risk transfer mechanisms, such as insurance, are being implemented to mitigate impacts, finance the recovery of the affected population, and promote the reduction of hydrological risks. The main problems in implementing these strategies are, first, the partial knowledge of natural and anthropogenic climate change in terms of intensity and frequency; second, that efficient risk reduction policies require accurate risk assessment, with careful consideration of costs; and third, the uncertainty associated with the numerical models and input data used. The objective of this document is to introduce and discuss the feasibility of applying Hydrological Risk Transfer Models (HRTMs) as a strategy for adaptation to global climate change. The article presents the development of a methodology for collective, multi-sectoral management of long-term hydrological vulnerability using an insurance fund simulator. The methodology estimates the optimized premium as a function of willingness to pay (WTP) and the potential direct loss derived from hydrological risk. The proposed methodology structures the watershed insurance scheme in three analysis modules. First, the hazard module characterizes the hydrological threat from recorded series or from series modelled under IPCC/RCM-generated scenarios. Second, the vulnerability module calculates the potential economic loss for each sector evaluated as a function of the return period "TR". Finally, the finance module determines the value of the optimal aggregate premium by evaluating equiprobable scenarios of water vulnerability, taking into account variables such as the maximum limit of coverage, deductible, reinsurance schemes, and incentives for risk reduction. The methodology, tested by members of the Integrated Nucleus of River Basins (NIBH) at the University of Sao Paulo (USP) School of Engineering of São Carlos (EESC), Brazil, presents an alternative for the analysis and planning of insurance funds aimed at mitigating the impacts of hydrological droughts and flash floods. The presented procedure is especially important when information relevant to the development and implementation of insurance funds is difficult to access and complex to evaluate. A series of academic applications has been carried out in Brazil, in a South American context where hydrological insurance has low market penetration compared with more established markets such as the United States and Europe, producing relevant information and demonstrating the potential of the methodology under development.

  9. A GIS-based model to estimate flood consequences and the degree of accessibility and operability of strategic emergency response structures in urban areas

    NASA Astrophysics Data System (ADS)

    Albano, R.; Sole, A.; Adamowski, J.; Mancusi, L.

    2014-11-01

    Efficient decision-making regarding flood risk reduction has become a priority for authorities and stakeholders in many European countries. Risk analysis methods and techniques are a useful tool for evaluating the costs and benefits of possible interventions. Within this context, a GIS-based methodology to estimate flood consequences was developed in this paper and integrated with a model that estimates the degree of accessibility and operability of strategic emergency response structures in an urban area. The majority of currently available approaches do not properly analyse road network connections and dependencies within systems, and as such the loss of roads can cause significant damage and problems for emergency services during flooding. The proposed model is unique in that it provides a maximum-impact estimation of flood consequences on the basis of the operability of the strategic emergency structures in an urban area, their accessibility, and their connection within the urban system of a city (i.e. the connection between aid centres and buildings at risk) in the emergency phase. The results of a case study in the Puglia region of southern Italy are described to illustrate the practical applications of this newly proposed approach. The main advantage of the proposed approach is that it allows a hierarchy to be defined among the different infrastructures in the urban area through the identification of particular components whose operation and efficiency are critical for emergency management. This information can be used by decision-makers to prioritize risk reduction interventions in flood emergencies in urban areas, given limited financial resources.
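
    The accessibility component described above can be pictured with a small graph-based sketch: a hypothetical road network is encoded as a weighted graph, flooded links are removed for a given scenario, and the shortest travel time between an aid centre and a building at risk is recomputed. This is a minimal illustration of the idea, not the paper's GIS implementation; the node names, weights and flooded edges are invented.

```python
import networkx as nx

# Hypothetical urban road graph: nodes are junctions/facilities, edge weights are travel times (min).
G = nx.Graph()
G.add_weighted_edges_from([
    ("hospital", "A", 3), ("A", "B", 2), ("B", "school", 4),
    ("hospital", "C", 5), ("C", "school", 6), ("A", "C", 2),
])

flooded_edges = [("A", "B")]          # edges made impassable by the flood scenario
G_flood = G.copy()
G_flood.remove_edges_from(flooded_edges)

def accessibility(graph, aid_centre, target):
    """Return the shortest travel time from an aid centre to a building at risk."""
    try:
        return nx.shortest_path_length(graph, aid_centre, target, weight="weight")
    except nx.NetworkXNoPath:
        return float("inf")          # target unreachable in this scenario

print("pre-flood :", accessibility(G, "hospital", "school"))
print("post-flood:", accessibility(G_flood, "hospital", "school"))
```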

  10. Reduced Risk of Importing Ebola Virus Disease because of Travel Restrictions in 2014: A Retrospective Epidemiological Modeling Study

    PubMed Central

    Otsuki, Shiori

    2016-01-01

    Background An epidemic of Ebola virus disease (EVD) from 2013–16 posed a serious risk of global spread during its early growth phase. A post-epidemic evaluation of the effectiveness of travel restrictions has yet to be conducted. The present study aimed to estimate the effectiveness of travel restrictions in reducing the risk of importation from mid-August to September, 2014, using a simple hazard-based statistical model. Methodology/Principal Findings The hazard rate was modeled as an inverse function of the effective distance, an excellent predictor of disease spread, which was calculated from the airline transportation network. By analyzing datasets of the date of EVD case importation from the 15th of July to the 15th of September 2014, and assuming that the network structure changed from the 8th of August 2014 because of travel restrictions, parameters that characterized the hazard rate were estimated. The absolute and relative risk reductions due to travel restrictions were estimated to be less than 1% and about 20%, respectively, for all models tested. Effectiveness estimates among African countries were greater than those for countries outside Africa. Conclusions The travel restrictions were not effective enough to prevent the global spread of Ebola virus disease. It is more efficient to control the spread of disease locally during an early phase of an epidemic than to attempt to control the epidemic at international borders. Capacity building for local containment and coordinated and expedited international cooperation are essential to reduce the risk of global transmission. PMID:27657544
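
    A minimal sketch of the kind of hazard model described above is shown below: the importation hazard is taken as inversely proportional to the effective distance, and the probability of at least one importation over an observation window is compared before and after a hypothetical increase of effective distances due to travel restrictions. The scale parameter and distances are invented and do not reproduce the paper's estimates.

```python
import numpy as np

# Hypothetical effective distances from the source region to three countries,
# before and after the flight reductions of August 2014.
d_eff_before = np.array([4.0, 6.5, 9.0])
d_eff_after = np.array([4.8, 7.8, 11.0])   # restrictions lengthen the effective distance

c = 0.01         # hypothetical scale parameter of the hazard model
days = 30.0      # observation window (mid-August to mid-September)

def importation_probability(d_eff, c, t):
    """Hazard rate inversely proportional to effective distance; exponential survival."""
    hazard = c / d_eff
    return 1.0 - np.exp(-hazard * t)

p_before = importation_probability(d_eff_before, c, days)
p_after = importation_probability(d_eff_after, c, days)

absolute_risk_reduction = p_before - p_after
relative_risk_reduction = absolute_risk_reduction / p_before
print("ARR:", np.round(absolute_risk_reduction, 4))
print("RRR:", np.round(relative_risk_reduction, 3))
```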

  11. Cross Time-Frequency Analysis for Combining Information of Several Sources: Application to Estimation of Spontaneous Respiratory Rate from Photoplethysmography

    PubMed Central

    Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.

    2013-01-01

    A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, which quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate from the photoplethysmographic (PPG) signal is estimated. The respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneous breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was considerably lower than the estimation error obtained without combining different PPG features related to respiration. PMID:24363777
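
    The gating idea above (estimate the respiratory rate only where the respiration-related features are coupled) can be illustrated with a much simpler tool than the paper's quadratic time-frequency distributions: ordinary Welch-averaged spectral coherence between two synthetic, respiration-modulated feature series. This is only a stationary stand-in for the cross time-frequency analysis; the signals, sampling rate and coherence threshold are assumptions.

```python
import numpy as np
from scipy.signal import coherence

fs = 4.0                       # Hz, typical resampling rate for beat-to-beat PPG features
t = np.arange(0, 120, 1 / fs)
f_resp = 0.25                  # hypothetical respiratory rate (15 breaths/min)

# Two hypothetical respiration-modulated PPG feature series (e.g. pulse amplitude and width)
rng = np.random.default_rng(0)
amp = np.sin(2 * np.pi * f_resp * t) + 0.5 * rng.standard_normal(t.size)
width = np.sin(2 * np.pi * f_resp * t + 0.3) + 0.5 * rng.standard_normal(t.size)

f, Cxy = coherence(amp, width, fs=fs, nperseg=256)
band = (f > 0.1) & (f < 0.5)                # plausible spontaneous-breathing band

if Cxy[band].max() > 0.7:                   # estimate only where the features are coupled
    f_hat = f[band][np.argmax(Cxy[band])]
    print(f"Estimated respiratory rate: {f_hat:.3f} Hz ({60 * f_hat:.1f} breaths/min)")
else:
    print("Features not sufficiently coupled; no estimate returned")
```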

  12. General multiyear aggregation technology: Methodology and software documentation. [estimating seasonal crop acreage proportions

    NASA Technical Reports Server (NTRS)

    Baker, T. C. (Principal Investigator)

    1982-01-01

    A general methodology is presented for estimating a stratum's at-harvest crop acreage proportion for a given crop year (target year) from the crop's estimated acreage proportion for sample segments from within the stratum. Sample segments from crop years other than the target year are (usually) required for use in conjunction with those from the target year. In addition, the stratum's (identifiable) crop acreage proportion may be estimated for times other than at-harvest in some situations. A by-product of the procedure is a methodology for estimating the change in the stratum's at-harvest crop acreage proportion from crop year to crop year. An implementation of the proposed procedure as a statistical analysis system routine using the system's matrix language module, PROC MATRIX, is described and documented. Three examples illustrating use of the methodology and algorithm are provided.

  13. Speech Enhancement of Mobile Devices Based on the Integration of a Dual Microphone Array and a Background Noise Elimination Algorithm.

    PubMed

    Chen, Yung-Yue

    2018-05-08

    Mobile devices are often used in our daily lives for the purposes of speech and communication. The speech quality of mobile devices is always degraded by the environmental noise surrounding mobile device users. Regrettably, an effective background noise reduction solution cannot easily be developed for this speech enhancement problem. For these reasons, a methodology is systematically proposed to eliminate the effects of background noises for the speech communication of mobile devices. This methodology integrates a dual microphone array with a background noise elimination algorithm. The proposed background noise elimination algorithm includes a whitening process, a speech modelling method and an H₂ estimator. Due to the adoption of the dual microphone array, a low-cost design can be obtained for the speech enhancement of mobile devices. Practical tests have proven that this proposed method is immune to random background noises, and noiseless speech can be obtained after executing this denoising process.

  14. Implementation and adaptation of a macro-scale methodology to calculate direct economic losses

    NASA Astrophysics Data System (ADS)

    Natho, Stephanie; Thieken, Annegret

    2017-04-01

    As one of the 195 member countries of the United Nations, Germany signed the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR). With this, though voluntary and non-binding, Germany agreed to report on achievements to reduce disaster impacts. Among other targets, the SFDRR aims at reducing direct economic losses in relation to the global gross domestic product by 2030 - but how to measure this without a standardized approach? The United Nations Office for Disaster Risk Reduction (UNISDR) has hence proposed a methodology to estimate direct economic losses per event and country on the basis of the number of damaged or destroyed items in different sectors. The method is based on experience from developing countries. However, its applicability in industrial countries has not been investigated so far. Therefore, this study presents the first implementation of this approach in Germany to test its applicability for the costliest natural hazards and suggests adaptations. The approach proposed by UNISDR considers assets in the sectors agriculture, industry, commerce, housing, and infrastructure, the latter by considering roads and medical and educational facilities. The asset values are estimated on the basis of the sector- and event-specific number of affected items, sector-specific mean sizes per item, their standardized construction costs per square meter and a loss ratio of 25%. The methodology was tested for the three costliest natural hazard types in Germany, i.e. floods, storms and hail storms, considering 13 case studies on the federal or state scale between 1984 and 2016. No complete calculation of all sectors necessary to describe the total direct economic loss was possible due to incomplete documentation. Therefore, the method was tested sector-wise. Three new modules were developed to better adapt the methodology to German conditions, covering private transport (cars), forestry and paved roads. Unpaved roads, in contrast, were integrated into the agricultural and forestry sectors. Furthermore, overheads are proposed to include the costs of housing contents as well as the overall costs of public infrastructure, one of the most important damage sectors. All constants concerning sector-specific mean sizes or construction costs were adapted. Loss ratios were adapted for each event. Whereas the original UNISDR method over- and underestimates the losses of the tested events, the adapted method is able to calculate losses in good agreement with documented values for river floods, hail storms and storms. For example, for the 2013 flood, economic losses of EUR 6.3 billion were calculated (UNISDR EUR 0.85 billion, documentation EUR 11 billion). For the 2013 hail storms, the calculated EUR 3.6 billion overestimates the documented losses of EUR 2.7 billion by less than the original UNISDR approach does (EUR 5.2 billion). Only for flash floods, where public infrastructure can account for more than 90% of total losses, is the method not applicable. The adapted methodology serves as a good starting point for macro-scale loss estimations by accounting for the most important damage sectors. By implementing this approach in damage and event documentation and reporting standards, consistent monitoring according to the SFDRR could be achieved.
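
    The core of the UNISDR calculation described above reduces to multiplying, per sector, the number of affected items by a mean size, a standardized construction cost per square meter and a loss ratio. The sketch below spells this out; the item counts, sizes and unit costs are hypothetical, and only the 25% loss ratio comes from the original approach.

```python
# Hypothetical inputs per sector: damaged items, mean size (m2) and construction cost (EUR/m2).
sectors = {
    "housing":   {"items": 1200, "mean_size_m2": 110.0,  "cost_per_m2": 1400.0},
    "commerce":  {"items": 150,  "mean_size_m2": 450.0,  "cost_per_m2": 1100.0},
    "education": {"items": 8,    "mean_size_m2": 2500.0, "cost_per_m2": 1600.0},
}

loss_ratio = 0.25   # default UNISDR assumption; the adapted method varies this per event

def sector_loss(items, mean_size_m2, cost_per_m2, loss_ratio):
    """Direct economic loss: exposed asset value times the assumed damage ratio."""
    return items * mean_size_m2 * cost_per_m2 * loss_ratio

total = sum(sector_loss(**v, loss_ratio=loss_ratio) for v in sectors.values())
print(f"Estimated direct loss: EUR {total / 1e6:,.1f} million")
```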

  15. Estimating the potential for industrial waste heat reutilization in urban district energy systems: method development and implementation in two Chinese provinces

    NASA Astrophysics Data System (ADS)

    Tong, Kangkang; Fang, Andrew; Yu, Huajun; Li, Yang; Shi, Lei; Wang, Yangjun; Wang, Shuxiao; Ramaswami, Anu

    2017-12-01

    Utilizing low-grade waste heat from industries to heat and cool homes and businesses through fourth generation district energy systems (DES) is a novel strategy to reduce energy use. This paper develops a generalizable methodology to estimate the energy saving potential for heating/cooling in 20 cities in two Chinese provinces, representing cold winter and hot summer regions respectively. We also conduct a life-cycle analysis of the new infrastructure required for energy exchange in DES. Results show that heating and cooling energy use reduction from this waste heat exchange strategy varies widely based on the mix of industrial, residential and commercial activities, and climate conditions in cities. Low-grade heat is found to be the dominant component of waste heat released by industries, which can be reused for both district heating and cooling in fourth generation DES, yielding energy use reductions of 12%-91% (average of 58%) for heating and 24%-100% (average of 73%) for cooling in the different cities based on annual exchange potential. Incorporating seasonality and multiple energy exchange pathways reduced the estimated savings to 0%-87%. The life-cycle impact of the added infrastructure was small: <3% (heating) and 1.9%-6.5% (cooling) of the carbon emissions from fuel use in current heating or cooling systems, indicating net carbon savings. This generalizable approach to delineate waste heat potential can help determine suitable cities for the widespread application of industrial waste heat re-utilization.

  16. Peaks Over Threshold (POT): A methodology for automatic threshold estimation using goodness of fit p-value

    NASA Astrophysics Data System (ADS)

    Solari, Sebastián.; Egüen, Marta; Polo, María. José; Losada, Miguel A.

    2017-04-01

    Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
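
    A minimal version of the automatic threshold selection can be sketched as follows: scan candidate thresholds, fit a generalized Pareto distribution (GPD) to the excesses, and keep the lowest threshold whose goodness-of-fit p-value exceeds a significance level. For simplicity the sketch uses a Kolmogorov-Smirnov p-value rather than the Anderson-Darling statistic used in the paper, and the rainfall-like series is synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
series = rng.gamma(shape=2.0, scale=10.0, size=5000)   # hypothetical daily rainfall series

candidate_thresholds = np.percentile(series, np.arange(80, 99, 1))
alpha = 0.05

def gof_pvalue(excesses):
    """Goodness-of-fit p-value of a GPD fitted to the excesses.

    The paper uses the Anderson-Darling statistic; a Kolmogorov-Smirnov test is used
    here only as a simpler stand-in for illustration."""
    c, loc, scale = stats.genpareto.fit(excesses, floc=0.0)
    return stats.kstest(excesses, "genpareto", args=(c, 0.0, scale)).pvalue

selected = None
for u in candidate_thresholds:
    excesses = series[series > u] - u
    if excesses.size > 50 and gof_pvalue(excesses) > alpha:
        selected = u            # lowest threshold with an acceptable GPD fit
        break

print(f"Selected threshold: {selected:.1f}" if selected is not None else "No threshold accepted")
```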

  17. New Methodology for Natural Gas Production Estimates

    EIA Publications

    2010-01-01

    A new methodology is implemented with the monthly natural gas production estimates from the EIA-914 survey this month. The estimates, to be released April 29, 2010, include revisions for all of 2009. The fundamental changes in the new process include the timeliness of the historical data used for estimation and the frequency of sample updates, both of which are improved.

  18. Revised estimates for direct-effect recreational jobs in the interior Columbia River basin.

    Treesearch

    Lisa K. Crone; Richard W. Haynes

    1999-01-01

    This paper reviews the methodology used to derive the original estimates for direct employment associated with recreation on Federal lands in the interior Columbia River basin (the basin), and details the changes in methodology and data used to derive new estimates. The new analysis resulted in an estimate of 77,655 direct-effect jobs associated with recreational...

  19. Sample size considerations for studies of intervention efficacy in the occupational setting.

    PubMed

    Lazovich, Deann; Murray, David M; Brosseau, Lisa M; Parker, David L; Milton, F Thomas; Dugan, Siobhan K

    2002-03-01

    Due to a shared environment and similarities among workers within a worksite, the strongest analytical design to evaluate the efficacy of an intervention to reduce occupational health or safety hazards is to randomly assign worksites, not workers, to the intervention and comparison conditions. Statistical methods are well described for estimating the sample size when the unit of assignment is a group but these methods have not been applied in the evaluation of occupational health and safety interventions. We review and apply the statistical methods for group-randomized trials in planning a study to evaluate the effectiveness of technical/behavioral interventions to reduce wood dust levels among small woodworking businesses. We conducted a pilot study in five small woodworking businesses to estimate variance components between and within worksites and between and within workers. In each worksite, 8 h time-weighted dust concentrations were obtained for each production employee on between two and five occasions. With these data, we estimated the parameters necessary to calculate the percent change in dust concentrations that we could detect (alpha = 0.05, power = 80%) for a range of worksites per condition, workers per worksite and repeat measurements per worker. The mean wood dust concentration across woodworking businesses was 4.53 mg/m3. The measure of similarity among workers within a woodworking business was large (intraclass correlation = 0.5086). Repeated measurements within a worker were weakly correlated (r = 0.1927) while repeated measurements within a worksite were strongly correlated (r = 0.8925). The dominant factor in the sample size calculation was the number of worksites per condition, with the number of workers per worksite playing a lesser role. We also observed that increasing the number of repeat measurements per person had little benefit given the low within-worker correlation in our data. We found that 30 worksites per condition and 10 workers per worksite would give us 80% power to detect a reduction of approximately 30% in wood dust levels (alpha = 0.05). Our results demonstrate the application of the group-randomized trials methodology to evaluate interventions to reduce occupational hazards. The methodology is widely applicable and not limited to the context of wood dust reduction.
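
    The sample size reasoning above can be reproduced with a standard normal-approximation formula for a balanced nested (worksite / worker / repeat) design, where the variance of a condition mean is built from three variance components. The sketch below uses hypothetical variance components rather than the pilot-study estimates, so the printed detectable reductions are illustrative only; the 4.53 mg/m3 baseline is the pilot-study mean quoted above.

```python
import numpy as np
from scipy import stats

# Hypothetical variance components of dust concentration, (mg/m3)^2:
sigma2_worksite = 2.0     # between-worksite
sigma2_worker = 1.0       # between-worker within worksite
sigma2_resid = 1.5        # within-worker (repeat measurements)

def detectable_difference(g, m, r, alpha=0.05, power=0.80):
    """Smallest mean difference detectable in a two-condition group-randomized design
    with g worksites per condition, m workers per worksite and r repeats per worker."""
    var_condition_mean = (sigma2_worksite / g
                          + sigma2_worker / (g * m)
                          + sigma2_resid / (g * m * r))
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return z * np.sqrt(2 * var_condition_mean)

baseline_mean = 4.53      # mg/m3, mean dust concentration from the pilot study
for g in (10, 20, 30):
    delta = detectable_difference(g, m=10, r=2)
    print(f"{g} worksites/condition: detectable reduction ~ {100 * delta / baseline_mean:.0f}%")
```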

  20. Speed cameras for the prevention of road traffic injuries and deaths.

    PubMed

    Wilson, Cecilia; Willis, Charlene; Hendrikz, Joan K; Le Brocque, Robyne; Bellamy, Nicholas

    2010-11-10

    It is estimated that by 2020, road traffic crashes will have moved from ninth to third in the world ranking of burden of disease, as measured in disability adjusted life years. The prevention of road traffic injuries is of global public health importance. Measures aimed at reducing traffic speed are considered essential to preventing road injuries; the use of speed cameras is one such measure. To assess whether the use of speed cameras reduces the incidence of speeding, road traffic crashes, injuries and deaths. We searched the following electronic databases covering all available years up to March 2010: the Cochrane Library, MEDLINE (WebSPIRS), EMBASE (WebSPIRS), TRANSPORT, IRRD (International Road Research Documentation), TRANSDOC (European Conference of Ministers of Transport databases), Web of Science (Science and Social Science Citation Index), PsycINFO, CINAHL, EconLit, WHO database, Sociological Abstracts, Dissertation Abstracts, Index to Theses. Randomised controlled trials, interrupted time series and controlled before-after studies that assessed the impact of speed cameras on speeding, road crashes, crashes causing injury and fatalities were eligible for inclusion. We independently screened studies for inclusion, extracted data, assessed methodological quality, reported study authors' outcomes and, where possible, calculated standardised results based on the information available in each study. Due to considerable heterogeneity between and within included studies, a meta-analysis was not appropriate. Thirty-five studies met the inclusion criteria. Compared with controls, the relative reduction in average speed ranged from 1% to 15% and the reduction in the proportion of vehicles speeding ranged from 14% to 65%. In the vicinity of camera sites, the pre/post reductions ranged from 8% to 49% for all crashes and 11% to 44% for fatal and serious injury crashes. Compared with controls, the relative improvement in pre/post injury crash proportions ranged from 8% to 50%. Despite the methodological limitations and the variability in the degree of signal to noise effect, the consistency of reported reductions in speed and crash outcomes across all studies shows that speed cameras are a worthwhile intervention for reducing the number of road traffic injuries and deaths. However, whilst the evidence base clearly demonstrates a positive direction of the effect, the overall magnitude of this effect is currently not deducible due to heterogeneity and lack of methodological rigour. More studies of a scientifically rigorous and homogeneous nature are necessary to provide the answer to the magnitude of effect.

  1. Speed cameras for the prevention of road traffic injuries and deaths.

    PubMed

    Wilson, Cecilia; Willis, Charlene; Hendrikz, Joan K; Le Brocque, Robyne; Bellamy, Nicholas

    2010-10-06

    It is estimated that by 2020, road traffic crashes will have moved from ninth to third in the world ranking of burden of disease, as measured in disability adjusted life years. The prevention of road traffic injuries is of global public health importance. Measures aimed at reducing traffic speed are considered essential to preventing road injuries; the use of speed cameras is one such measure. To assess whether the use of speed cameras reduces the incidence of speeding, road traffic crashes, injuries and deaths. We searched the following electronic databases covering all available years up to March 2010: the Cochrane Library, MEDLINE (WebSPIRS), EMBASE (WebSPIRS), TRANSPORT, IRRD (International Road Research Documentation), TRANSDOC (European Conference of Ministers of Transport databases), Web of Science (Science and Social Science Citation Index), PsycINFO, CINAHL, EconLit, WHO database, Sociological Abstracts, Dissertation Abstracts, Index to Theses. Randomised controlled trials, interrupted time series and controlled before-after studies that assessed the impact of speed cameras on speeding, road crashes, crashes causing injury and fatalities were eligible for inclusion. We independently screened studies for inclusion, extracted data, assessed methodological quality, reported study authors' outcomes and, where possible, calculated standardised results based on the information available in each study. Due to considerable heterogeneity between and within included studies, a meta-analysis was not appropriate. Thirty-five studies met the inclusion criteria. Compared with controls, the relative reduction in average speed ranged from 1% to 15% and the reduction in the proportion of vehicles speeding ranged from 14% to 65%. In the vicinity of camera sites, the pre/post reductions ranged from 8% to 49% for all crashes and 11% to 44% for fatal and serious injury crashes. Compared with controls, the relative improvement in pre/post injury crash proportions ranged from 8% to 50%. Despite the methodological limitations and the variability in the degree of signal to noise effect, the consistency of reported reductions in speed and crash outcomes across all studies shows that speed cameras are a worthwhile intervention for reducing the number of road traffic injuries and deaths. However, whilst the evidence base clearly demonstrates a positive direction of the effect, the overall magnitude of this effect is currently not deducible due to heterogeneity and lack of methodological rigour. More studies of a scientifically rigorous and homogeneous nature are necessary to provide the answer to the magnitude of effect.

  2. Optimizing value utilizing Toyota Kata methodology in a multidisciplinary clinic.

    PubMed

    Merguerian, Paul A; Grady, Richard; Waldhausen, John; Libby, Arlene; Murphy, Whitney; Melzer, Lilah; Avansino, Jeffrey

    2015-08-01

    Value in healthcare is measured in terms of patient outcomes achieved per dollar expended. Outcomes and cost must be measured at the patient level to optimize value. Multidisciplinary clinics have been shown to be effective in providing coordinated and comprehensive care with improved outcomes, yet tend to have higher costs than typical clinics. We sought to lower individual patient cost and optimize value in a pediatric multidisciplinary reconstructive pelvic medicine (RPM) clinic. The RPM clinic is a multidisciplinary clinic that takes care of patients with anomalies of the pelvic organs. The specialties involved include Urology, General Surgery, Gynecology, and Gastroenterology/Motility. From May 2012 to November 2014 we performed time-driven activity-based costing (TDABC) analysis by measuring provider time for each step in the patient flow. Using the observed times and the estimated hourly cost of each of the providers, we calculated the final cost at the individual patient level, targeting clinic preparation. We utilized Toyota Kata methodology to enhance operational efficiency in an effort to optimize value. Variables measured included cost, time to perform a task, number of patients seen in clinic, percent value-added time (VAT) to patients (face-to-face time) and family experience scores (FES). At the beginning of the study period, clinic costs were $619 per patient. We reduced conference time from 6 min per patient to 1 min per patient and physician preparation time from 8 min to 6 min, and increased Medical Assistant (MA) preparation time from 9.5 min to 20 min, achieving a cost reduction of 41% to $366 per patient. Continued improvements further reduced the MA preparation time to 14 min and the MD preparation time to 5 min, with a further cost reduction to $194 (69%). During this study period, we increased the number of appointments per clinic. We demonstrated sustained improvement in FES with regard to the families' overall experience with their providers. Value-added time was increased from 60% to 78%, but this was not significant. Time-based cost analysis effectively measures individualized patient cost. We achieved a 69% reduction in clinic preparation costs. Despite this reduction in costs, we were able to maintain VAT and sustain improvements in family experience. In caring for complex patients, lean management methodology enables optimization of value in a multidisciplinary clinic. Copyright © 2015. Published by Elsevier Ltd.
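
    Time-driven activity-based costing boils down to multiplying the time each provider spends per patient by that provider's capacity cost rate and summing. The sketch below applies this to clinic-preparation times similar to those quoted above; the hourly rates are hypothetical, so the totals will not match the $619/$194 figures, which also include other activities.

```python
# Hypothetical hourly cost rates (USD) for each provider type; the activity times
# (minutes per patient) loosely follow the clinic-preparation steps in the abstract.
hourly_rate = {"physician": 300.0, "medical_assistant": 40.0, "conference_team": 600.0}

def patient_cost(activity_minutes):
    """Time-driven activity-based cost: sum of (time spent x capacity cost rate)."""
    return sum(hourly_rate[p] * minutes / 60.0 for p, minutes in activity_minutes.items())

before = {"conference_team": 6, "physician": 8, "medical_assistant": 9.5}
after = {"conference_team": 1, "physician": 5, "medical_assistant": 14}

c_before, c_after = patient_cost(before), patient_cost(after)
print(f"before: ${c_before:.0f}  after: ${c_after:.0f}  reduction: {100 * (1 - c_after / c_before):.0f}%")
```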

  3. A Robust Wrap Reduction Algorithm for Fringe Projection Profilometry and Applications in Magnetic Resonance Imaging.

    PubMed

    Arevalillo-Herraez, Miguel; Cobos, Maximo; Garcia-Pineda, Miguel

    2017-03-01

    In this paper, we present an effective algorithm to reduce the number of wraps in a 2D phase signal provided as input. The technique is based on an accurate estimate of the fundamental frequency of a 2D complex signal with the phase given by the input, and the removal of a dependent additive term from the phase map. Unlike existing methods based on the discrete Fourier transform (DFT), the frequency is computed by using noise-robust estimates that are not restricted to integer values. Then, to deal with the problem of a non-integer shift in the frequency domain, an equivalent operation is carried out on the original phase signal. This consists of the subtraction of a tilted plane whose slope is computed from the frequency, followed by a re-wrapping operation. The technique has been exhaustively tested on fringe projection profilometry (FPP) and magnetic resonance imaging (MRI) signals. In addition, the performance of several frequency estimation methods has been compared. The proposed methodology is particularly effective on FPP signals, showing a higher performance than the state-of-the-art wrap reduction approaches. In this context, it contributes to canceling the carrier effect at the same time as it eliminates any potential slope that affects the entire signal. Its effectiveness on other carrier-free phase signals, e.g., MRI, is limited to the case that inherent slopes are present in the phase data.
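
    A stripped-down version of the wrap-reduction idea can be written in a few lines: estimate the dominant (carrier) frequency of the complex signal exp(j·phase) from its FFT peak, subtract the corresponding tilted plane, and re-wrap. The sketch below uses only the integer-bin FFT peak, whereas the paper refines the estimate to non-integer frequencies with noise-robust estimators; the test phase map is synthetic.

```python
import numpy as np

# Hypothetical wrapped phase map: a strong linear carrier (tilt) plus a smooth object term.
ny, nx = 256, 256
y, x = np.mgrid[0:ny, 0:nx]
true_phase = 2 * np.pi * (0.12 * x + 0.03 * y) \
    + 3.0 * np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / 4000.0)
wrapped = np.angle(np.exp(1j * true_phase))

# 1) Estimate the dominant (carrier) frequency of the complex signal exp(j*phase).
spectrum = np.fft.fft2(np.exp(1j * wrapped))
ky, kx = np.unravel_index(np.argmax(np.abs(spectrum)), spectrum.shape)
fx_hat = kx / nx if kx <= nx // 2 else (kx - nx) / nx
fy_hat = ky / ny if ky <= ny // 2 else (ky - ny) / ny

# 2) Subtract the corresponding tilted plane and re-wrap the result.
plane = 2 * np.pi * (fx_hat * x + fy_hat * y)
reduced = np.angle(np.exp(1j * (wrapped - plane)))

def wrap_jumps(phase):
    """Count 2*pi discontinuities along rows as a crude measure of remaining wraps."""
    return int((np.abs(np.diff(phase, axis=1)) > np.pi).sum())

print(f"estimated carrier: fx={fx_hat:.3f}, fy={fy_hat:.3f}")
print("wrap jumps before:", wrap_jumps(wrapped), " after:", wrap_jumps(reduced))
```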

  4. Life-Extending Control for Aircraft Engines Studied

    NASA Technical Reports Server (NTRS)

    Guo, Te-Huei

    2002-01-01

    Current aircraft engine controllers are designed and operated to provide both performance and stability margins. However, the standard method of operation results in significant wear and tear on the engine and negatively affects the on-wing life--the time between cycles when the engine must be physically removed from the aircraft for maintenance. The NASA Glenn Research Center and its industrial and academic partners have been working together toward a new control concept that will include engine life usage as part of the control function. The resulting controller will be able to significantly extend the engine's on-wing life with little or no impact on engine performance and operability. The new controller design will utilize damage models to estimate and mitigate the rate and overall accumulation of damage to critical engine parts. The control methods will also provide a means to assess tradeoffs between performance and structural durability on the basis of mission requirements and remaining engine life. Two life-extending control methodologies were studied to reduce the overall life-cycle cost of aircraft engines. The first methodology is to modify the baseline control logic to reduce the thermomechanical fatigue (TMF) damage of cooled stators during acceleration. To accomplish this, an innovative algorithm limits the low-speed rotor acceleration command when the engine has reached a threshold close to the requested thrust. This algorithm allows a significant reduction in TMF damage with only a very small increase in the rise time to reach the commanded rotor speed. The second methodology is to reduce stress rupture/creep damage to turbine blades and uncooled stators by incorporating an engine damage model into the flight mission. Overall operation cost is reduced by an optimization among the flight time, fuel consumption, and component damages. Recent efforts have focused on applying life-extending control technology to an existing commercial turbine engine, and doing so without modifying the hardware or adding sensors. This approach makes it possible to retrofit existing engines with life-extending control technology by changing only the control software in the full-authority digital engine controller (FADEC). The significant results include demonstrating a 20- to 30-percent reduction in TMF damage to the hot section by developing and implementing smart acceleration logic during takeoff. The tradeoff is an increase, from 5.0 to 5.2 sec, in the time required to reach maximum power from ground idle. On a typical flight profile of a cruise at Mach 0.8 at an altitude of 41,000 ft, and cruise time of 104 min, the optimized system showed that a reduction in cruise speed from Mach 0.8 to 0.79 can achieve an estimated 25-to 35-percent creep/rupture damage reduction in the engine's hot section and a fuel savings of 2.1 percent. The tradeoff is an increase in flight time of 1.3 percent (1.4 min).

  5. Protecting-group-free synthesis of amines: synthesis of primary amines from aldehydes via reductive amination.

    PubMed

    Dangerfield, Emma M; Plunkett, Catherine H; Win-Mason, Anna L; Stocker, Bridget L; Timmer, Mattie S M

    2010-08-20

    New methodology for the protecting-group-free synthesis of primary amines is presented. By optimizing the metal hydride/ammonia mediated reductive amination of aldehydes and hemiacetals, primary amines were selectively prepared with no or minimal formation of the usual secondary and tertiary amine byproduct. The methodology was performed on a range of functionalized aldehyde substrates, including in situ formed aldehydes from a Vasella reaction. These reductive amination conditions provide a valuable synthetic tool for the selective production of primary amines in fewer steps, in good yields, and without the use of protecting groups.

  6. Regression to fuzziness method for estimation of remaining useful life in power plant components

    NASA Astrophysics Data System (ADS)

    Alamaniotis, Miltiadis; Grelle, Austin; Tsoukalas, Lefteri H.

    2014-10-01

    Mitigation of severe accidents in power plants requires the reliable operation of all systems and the on-time replacement of mechanical components. Therefore, the continuous surveillance of power systems is a crucial concern for the overall safety, cost control, and on-time maintenance of a power plant. In this paper a methodology called regression to fuzziness is presented that estimates the remaining useful life (RUL) of power plant components. The RUL is defined as the difference between the time that a measurement was taken and the estimated failure time of that component. The methodology aims to compensate for a potential lack of historical data by modeling an expert's operational experience and expertise applied to the system. It initially identifies critical degradation parameters and their associated value range. Once completed, the operator's experience is modeled through fuzzy sets which span the entire parameter range. This model is then synergistically used with linear regression and a component's failure point to estimate the RUL. The proposed methodology is tested on estimating the RUL of a turbine (the basic electrical generating component of a power plant) in three different cases. Results demonstrate the benefits of the methodology for components for which operational data are not readily available and emphasize the significance of the selection of fuzzy sets and the effect of knowledge representation on the predicted output. To verify the effectiveness of the methodology, it was benchmarked against a data-based simple linear regression model used for prediction, which was shown to perform equally well or worse than the presented methodology. Furthermore, the methodology comparison highlighted the improvement in estimation offered by the adoption of appropriate fuzzy sets for parameter representation.
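
    The combination of expert knowledge (fuzzy sets over a degradation parameter) with a linear trend extrapolated to the failure point can be sketched very simply, as below. The degradation measurements, the triangular fuzzy set and the failure level are all hypothetical, and the fuzzy grade is only reported alongside the regression-based RUL rather than entering the estimate as in the full regression-to-fuzziness method.

```python
import numpy as np

# Hypothetical degradation measurements (e.g. a turbine wear index) over operating time.
t = np.array([0, 100, 200, 300, 400, 500], dtype=float)      # operating hours
x = np.array([0.05, 0.09, 0.16, 0.22, 0.27, 0.34])            # degradation parameter

failure_level = 0.80          # expert-defined failure point of the parameter

def triangular_membership(v, a, b, c):
    """Expert knowledge of 'critical degradation' encoded as a triangular fuzzy set."""
    return float(np.clip(min((v - a) / (b - a), (c - v) / (c - b)), 0.0, 1.0))

# Linear trend of the degradation parameter (the regression part of the method).
slope, intercept = np.polyfit(t, x, 1)
t_fail = (failure_level - intercept) / slope
rul = t_fail - t[-1]

criticality = triangular_membership(x[-1], 0.2, 0.6, 1.0)     # hypothetical fuzzy set
print(f"Estimated RUL: {rul:.0f} h (current criticality grade {criticality:.2f})")
```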

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teixeira, M.D.R. Jr.

    This paper describes the development of new wet-design MV power cables, up to 35 kV, using an EPDM compound as insulation and providing longitudinal water tightness. The combination of the cable design and the type of insulation compound allows the insulation thickness to be reduced such that the electrical stress at the conductor is 4 kV/mm, which is significantly greater than that used in MV distribution cables. Following a methodology established at the author's company, the reliability of this design - cable and EPDM formulation - in wet locations, without metallic water barriers, was well demonstrated. Mini-installations of model cables in service-like conditions, used to estimate the ageing rate, are presented and discussed.

  8. Clinically Effective Treatment of Fibromyalgia Pain With High-Definition Transcranial Direct Current Stimulation: Phase II Open-Label Dose Optimization.

    PubMed

    Castillo-Saavedra, Laura; Gebodh, Nigel; Bikson, Marom; Diaz-Cruz, Camilo; Brandao, Rivail; Coutinho, Livia; Truong, Dennis; Datta, Abhishek; Shani-Hershkovich, Revital; Weiss, Michal; Laufer, Ilan; Reches, Amit; Peremen, Ziv; Geva, Amir; Parra, Lucas C; Fregni, Felipe

    2016-01-01

    Despite promising preliminary results in treating fibromyalgia (FM) pain, no neuromodulation technique has been adopted in clinical practice because of limited efficacy, low response rate, or poor tolerability. This phase II open-label trial aims to define a methodology for a clinically effective treatment of pain in FM by establishing treatment protocols and screening procedures to maximize efficacy and response rate. High-definition transcranial direct current stimulation (HD-tDCS) provides targeted subthreshold brain stimulation, combining tolerability with specificity. We aimed to establish the number of HD-tDCS sessions required to achieve a 50% FM pain reduction, and to characterize the biometrics of the response, including brain network activation pain scores of contact heat-evoked potentials. We report a clinically significant benefit of a 50% pain reduction in half (n = 7) of the patients (N = 14), with responders and nonresponders alike benefiting from a cumulative effect of treatment, reflected in significant pain reduction (P = .035) as well as improved quality of life (P = .001) over time. We also report an aggregate 6-week response rate of 50% of patients and estimate 15 as the median number of HD-tDCS sessions to reach clinically meaningful outcomes. The methodology for a pivotal FM neuromodulation clinical trial with individualized treatment is thus supported. Registered in Clinicaltrials.gov under registry number NCT01842009. In this article, an optimized protocol for the treatment of fibromyalgia pain with targeted subthreshold brain stimulation using high-definition transcranial direct current stimulation is outlined. Copyright © 2016 American Pain Society. Published by Elsevier Inc. All rights reserved.

  9. Software Size Estimation Using Expert Estimation: A Fuzzy Logic Approach

    ERIC Educational Resources Information Center

    Stevenson, Glenn A.

    2012-01-01

    For decades software managers have been using formal methodologies such as the Constructive Cost Model and Function Points to estimate the effort of software projects during the early stages of project development. While some research shows these methodologies to be effective, many software managers feel that they are overly complicated to use and…

  10. A novel integrated approach for the hazardous radioactive dust source terms estimation in future nuclear fusion power plants.

    PubMed

    Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P

    2016-10-01

    An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactors' safety assessments, and the estimates available at this time are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms starting from a broad information gathering. The wide number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for dust source term production prediction for future devices is presented.

  11. The effects of survey question wording on rape estimates: evidence from a quasi-experimental design.

    PubMed

    Fisher, Bonnie S

    2009-02-01

    The measurement of rape is among the leading methodological issues in the violence against women field. Methodological discussion continues to focus on decreasing measurement errors and improving the accuracy of rape estimates. The current study used a quasi-experimental design to examine the effect of survey question wording on estimates of completed and attempted rape and verbal threats of rape. Specifically, the study statistically compares self-reported rape estimates from two nationally representative studies of college women's sexual victimization experiences, the National College Women Sexual Victimization study and the National Violence Against College Women study. Results show significant differences between the two sets of rape estimates, with National Violence Against College Women study rape estimates ranging from 4.4% to 10.4% lower than the National College Women Sexual Victimization study rape estimates. Implications for future methodological research are discussed.

  12. Methodology for estimating helicopter performance and weights using limited data

    NASA Technical Reports Server (NTRS)

    Baserga, Claudio; Ingalls, Charles; Lee, Henry; Peyran, Richard

    1990-01-01

    Methodology is developed and described for estimating the flight performance and weights of a helicopter for which limited data are available. The methodology is based on assumptions which couple knowledge of the technology of the helicopter under study with detailed data from well documented helicopters thought to be of similar technology. The approach, analysis assumptions, technology modeling, and the use of reference helicopter data are discussed. Application of the methodology is illustrated with an investigation of the Agusta A129 Mangusta.

  13. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    NASA Astrophysics Data System (ADS)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: in the first stage the infiltration parameters are obtained, and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the optimal infiltration parameters already obtained are not destroyed during subsequent generations of the genetic algorithm, which are required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, simple in concept, and also has the potential for field application.
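
    The single-stage idea can be illustrated with a toy penalized objective: a candidate phi-index and a short unit hydrograph are evaluated by convolving the effective rainfall with the UH ordinates and adding a penalty for violating the UH volume constraint. The sketch uses a static penalty and SciPy's differential evolution instead of the paper's genetic algorithm with a reduction-factor schedule; all rainfall and runoff numbers are invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

rain = np.array([10.0, 25.0, 15.0, 5.0])                # hypothetical rainfall pulses (mm)
runoff = np.array([2.0, 14.0, 22.0, 16.0, 7.0, 2.0])     # observed direct runoff ordinates

def simulate(params):
    phi, *uh = params                                    # phi-index + 3 UH ordinates
    eff_rain = np.clip(rain - phi, 0.0, None)            # simple phi-index infiltration
    return np.convolve(eff_rain, uh)                     # length len(rain)+len(uh)-1 = 6

def penalized_objective(params, penalty=1e3):
    phi, *uh = params
    sse = np.sum((simulate(params) - runoff) ** 2)
    violation = abs(np.sum(uh) - 1.0)                    # UH volume-balance constraint
    return sse + penalty * violation                     # penalty converts the constrained
                                                         # problem to an unconstrained one

bounds = [(0.0, 10.0)] + [(0.0, 1.0)] * 3                # phi, then UH ordinates
result = differential_evolution(penalized_objective, bounds, seed=0, tol=1e-8)
print("phi-index and UH ordinates:", np.round(result.x, 3))
```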

  14. Optimal designs for population pharmacokinetic studies of the partner drugs co-administered with artemisinin derivatives in patients with uncomplicated falciparum malaria.

    PubMed

    Jamsen, Kris M; Duffull, Stephen B; Tarning, Joel; Lindegardh, Niklas; White, Nicholas J; Simpson, Julie A

    2012-07-11

    Artemisinin-based combination therapy (ACT) is currently recommended as first-line treatment for uncomplicated malaria, but of concern, it has been observed that the effectiveness of the main artemisinin derivative, artesunate, has been diminished due to parasite resistance. This reduction in effect highlights the importance of the partner drugs in ACT and provides motivation to gain more knowledge of their pharmacokinetic (PK) properties via population PK studies. Optimal design methodology has been developed for population PK studies, which analytically determines a sampling schedule that is clinically feasible and yields precise estimation of model parameters. In this work, optimal design methodology was used to determine sampling designs for typical future population PK studies of the partner drugs (mefloquine, lumefantrine, piperaquine and amodiaquine) co-administered with artemisinin derivatives. The optimal designs were determined using freely available software and were based on structural PK models from the literature and the key specifications of 100 patients with five samples per patient, with one sample taken on the seventh day of treatment. The derived optimal designs were then evaluated via a simulation-estimation procedure. For all partner drugs, designs consisting of two sampling schedules (50 patients per schedule) with five samples per patient resulted in acceptable precision of the model parameter estimates. The sampling schedules proposed in this paper should be considered in future population pharmacokinetic studies where intensive sampling over many days or weeks of follow-up is not possible due to either ethical, logistic or economical reasons.

  15. Development of South Dakota accident reduction factors

    DOT National Transportation Integrated Search

    1998-08-01

    This report offers the methodology and findings of the first project to develop Accident Reduction Factors (ARFs) and Severity Reduction Ratios (SRRs) for the state of South Dakota. The ARFs and SRRs of this project focused on Hazard Elimination and ...

  16. 75 FR 46942 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-04

    ... employers. Should any needed methodological changes be identified, NIOSH will submit a request for modification to OMB. If no substantive methodological changes are required, the phase II study will proceed and... complete the questionnaire on the web or by telephone at that time.) Assuming no methodological changes...

  17. Potential Occupant Injury Reduction in Pre-Crash System Equipped Vehicles in the Striking Vehicle of Rear-end Crashes

    PubMed Central

    Kusano, Kristofer D.; Gabler, Hampton C.

    2010-01-01

    To mitigate the severity of rear-end and other collisions, Pre-Crash Systems (PCS) are being developed. These active safety systems utilize radar and/or video cameras to determine when a frontal crash, such as a front-to-back rear-end collision, is imminent and can brake autonomously, even with no driver input. Of these PCS features, the effects of autonomous pre-crash braking are estimated. To estimate the maximum potential for injury reduction due to autonomous pre-crash braking in the striking vehicle of rear-end crashes, a methodology is presented for determining 1) the reduction in vehicle crash change in velocity (ΔV) due to PCS braking and 2) the number of injuries that could be prevented due to the reduction in collision severity. Injury reduction was only estimated for belted drivers, as unbelted drivers have an unknown risk of being thrown out of position. The study was based on 1,406 rear-end striking vehicles from NASS / CDS years 1993 to 2008. PCS parameters were selected from realistic values and varied to examine the effect on system performance. PCS braking authority was varied from 0.5 G’s to 0.8 G’s while time to collision (TTC) was held at 0.45 seconds. TTC was then varied from 0.3 seconds to 0.6 seconds while braking authority was held constant at 0.6 G’s. A constant braking pulse (step function) and a ramp-up braking pulse were used. The study found that automated PCS braking could reduce the crash ΔV in rear-end striking vehicles by an average of 12% – 50% and avoid 0% – 14% of collisions, depending on PCS parameters. Autonomous PCS braking could potentially reduce the number of injured drivers who are belted by 19% to 57%. PMID:21050603
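
    For a constant (step-function) braking pulse, the speed shed before impact is simply the braking deceleration times the time the system brakes, capped by the closing speed. The sketch below applies this to a hypothetical 40 km/h rear-end closing speed for the braking authorities and TTC quoted above; it ignores ramp-up pulses, driver braking and brake-system delays.

```python
# Hypothetical rear-end scenario: striking vehicle closes at 40 km/h on a stopped lead
# vehicle; the PCS brakes autonomously for the last TTC seconds before impact.
G = 9.81  # m/s^2

def delta_v_reduction(closing_speed_kmh, braking_authority_g, ttc_s):
    """Speed shed by constant (step) autonomous braking before impact, in km/h."""
    shed = braking_authority_g * G * ttc_s * 3.6         # m/s^2 * s -> km/h
    return min(shed, closing_speed_kmh)                  # crash avoided if all speed is shed

for authority in (0.5, 0.6, 0.8):
    shed = delta_v_reduction(40.0, authority, ttc_s=0.45)
    print(f"{authority:.1f} g, TTC 0.45 s: impact speed reduced by {shed:.1f} km/h "
          f"({100 * shed / 40.0:.0f}% of closing speed)")
```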

  18. Meta-analysis of medical intervention for normal tension glaucoma.

    PubMed

    Cheng, Jin-Wei; Cai, Ji-Ping; Wei, Rui-Li

    2009-07-01

    To evaluate the intraocular pressure (IOP) reduction achieved by the most frequently prescribed antiglaucoma drugs in patients with normal tension glaucoma (NTG). Systematic review and meta-analysis. Fifteen randomized clinical trials reported 25 arms for peak IOP reduction, 16 arms for trough IOP reduction, and 13 arms for diurnal curve IOP reduction. Pertinent publications were identified through systematic searches of PubMed, EMBASE, and the Cochrane Controlled Trials Register. The patients had to be diagnosed as having NTG. Methodological quality was assessed by the Delphi list on a scale from 0 to 18. The pooled 1-month IOP-lowering effects were calculated using the 2-step DerSimonian and Laird estimate method of the random effects model. Absolute and relative reductions in IOP from baseline for peak and trough moments. Quality scores of included studies were generally high, with a mean quality score of 12.7 (range, 9-16). Relative IOP reductions were peak, 15% (12%-18%), and trough, 18% (8%-27%) for timolol; peak, 14% (8%-19%), and trough, 12% (-7% to 31%) for dorzolamide; peak, 24% (17%-31%), and trough, 11% (7%-14%) for brimonidine; peak, 20% (17%-24%), and trough, 20% (18%-23%) for latanoprost; peak, 21% (16%-25%), and trough, 18% (14%-22%) for bimatoprost. The differences in absolute IOP reductions between prostaglandin analogues and timolol varied from 0.9 to 1.0 mmHg at peak and -0.1 to 0.2 mmHg at trough. Latanoprost, bimatoprost, and timolol are the most effective IOP-lowering agents in patients with NTG.
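
    The pooling method named above (the two-step DerSimonian and Laird estimate) has a closed form that is easy to write down: compute fixed-effect weights, Cochran's Q, the between-study variance tau-squared, and then re-weight. The sketch below implements that formula on hypothetical per-arm IOP reductions; the numbers are not the review's data.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Two-step DerSimonian-Laird random-effects pooling of study-level effects."""
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances                                   # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fe) ** 2)             # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_re = 1.0 / (variances + tau2)                       # random-effects weights
    theta_re = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return theta_re, theta_re - 1.96 * se, theta_re + 1.96 * se

# Hypothetical per-arm peak IOP reductions (mmHg) and their variances for one drug.
pooled, lo, hi = dersimonian_laird([3.1, 2.4, 2.9, 3.6], [0.20, 0.35, 0.25, 0.40])
print(f"Pooled reduction: {pooled:.2f} mmHg (95% CI {lo:.2f} to {hi:.2f})")
```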

  19. Predictions of first passage times in sparse discrete fracture networks using graph-based reductions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hyman, Jeffrey De'Haven; Hagberg, Aric Arild; Mohd-Yusof, Jamaludin

    Here, we present a graph-based methodology to reduce the computational cost of obtaining first passage times through sparse fracture networks. We also derive graph representations of generic three-dimensional discrete fracture networks (DFNs) using the DFN topology and flow boundary conditions. Subgraphs corresponding to the union of the k shortest paths between the inflow and outflow boundaries are identified and transport on their equivalent subnetworks is compared to transport through the full network. The number of paths included in the subgraphs is based on the scaling behavior of the number of edges in the graph with the number of shortest paths. First passage times through the subnetworks are in good agreement with those obtained in the full network, both for individual realizations and in distribution. We obtain accurate estimates of first passage times with an order of magnitude reduction of CPU time and mesh size using the proposed method.
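
    A compact way to picture the reduction is with networkx: take the union of the k shortest inflow-to-outflow paths and keep only that subgraph. The toy graph, weights and k below are invented; in the actual workflow the graph is derived from the DFN topology and the resulting subnetwork is then meshed and simulated.

```python
import itertools
import networkx as nx

# Hypothetical weighted graph representation of a DFN: nodes are fractures plus the
# inflow/outflow boundaries, edge weights approximate travel times between fractures.
G = nx.Graph()
G.add_weighted_edges_from([
    ("in", "f1", 1.0), ("f1", "f2", 2.0), ("f2", "out", 1.5),
    ("in", "f3", 2.5), ("f3", "f4", 1.0), ("f4", "out", 2.0),
    ("f1", "f4", 3.0), ("f2", "f3", 0.5),
])

def k_shortest_path_subgraph(graph, source, target, k):
    """Union of the k shortest source-target paths, as in the graph-based reduction."""
    paths = itertools.islice(
        nx.shortest_simple_paths(graph, source, target, weight="weight"), k)
    sub_nodes = set()
    for p in paths:
        sub_nodes.update(p)
    return graph.subgraph(sub_nodes).copy()

H = k_shortest_path_subgraph(G, "in", "out", k=3)
print(f"full network: {G.number_of_edges()} edges, subnetwork: {H.number_of_edges()} edges")
```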

  20. A new approach for solving seismic tomography problems and assessing the uncertainty through the use of graph theory and direct methods

    NASA Astrophysics Data System (ADS)

    Bogiatzis, P.; Ishii, M.; Davis, T. A.

    2016-12-01

    Seismic tomography inverse problems are among the largest high-dimensional parameter estimation tasks in Earth science. We show how combinatorics and graph theory can be used to analyze the structure of such problems, and to effectively decompose them into smaller ones that can be solved efficiently by means of the least squares method. In combination with recent high performance direct sparse algorithms, this reduction in dimensionality allows for an efficient computation of the model resolution and covariance matrices using limited resources. Furthermore, we show that a new sparse singular value decomposition method can be used to obtain the complete spectrum of the singular values. This procedure provides the means for more objective regularization and further dimensionality reduction of the problem. We apply this methodology to a moderate size, non-linear seismic tomography problem to image the structure of the crust and the upper mantle beneath Japan using local deep earthquakes recorded by the High Sensitivity Seismograph Network stations.

  1. Design of Degaussing System and Demonstration of Signature Reduction on Ship Model through Laboratory Experiments

    NASA Astrophysics Data System (ADS)

    Varma, R. A. Raveendra

    Magnetic fields are widely used all over the world for the detection and localization of naval vessels. Magnetic Anomaly Detectors (MADs) installed on airborne vehicles are used to detect submarines operating in shallow waters. Underwater mines fitted with magnetic sensors are used for the detection and destruction of naval vessels in times of conflict. Reduction of the magnetic signature of naval vessels is carried out by deperming and by installation of a degaussing system onboard the vessel. The present paper elaborates on studies carried out at the Magnetics Division of the Naval Science and Technological Laboratory (NSTL) for minimizing the magnetic signature of naval vessels by designing a degaussing system. The magnetic fields of a small ship model are predicted and a degaussing system is designed for reducing magnetic detection. The details of the model, the methodology used for estimating the magnetic signature of the vessel, and the design of the degaussing system are presented in this paper, along with details of the experimental setup and results.

  2. Predictions of first passage times in sparse discrete fracture networks using graph-based reductions

    DOE PAGES

    Hyman, Jeffrey De'Haven; Hagberg, Aric Arild; Mohd-Yusof, Jamaludin; ...

    2017-07-10

    Here, we present a graph-based methodology to reduce the computational cost of obtaining first passage times through sparse fracture networks. We also derive graph representations of generic three-dimensional discrete fracture networks (DFNs) using the DFN topology and flow boundary conditions. Subgraphs corresponding to the union of the k shortest paths between the inflow and outflow boundaries are identified and transport on their equivalent subnetworks is compared to transport through the full network. The number of paths included in the subgraphs is based on the scaling behavior of the number of edges in the graph with the number of shortest paths. First passage times through the subnetworks are in good agreement with those obtained in the full network, both for individual realizations and in distribution. We obtain accurate estimates of first passage times with an order of magnitude reduction of CPU time and mesh size using the proposed method.

  3. Optimisation of low temperature extraction of banana juice using commercial pectinase.

    PubMed

    Sagu, Sorel Tchewonpi; Nso, Emmanuel Jong; Karmakar, Sankha; De, Sirshendu

    2014-05-15

    The objective of this work was to develop a process with optimum conditions for banana juice. The procedure involves hydrolyzing the banana pulp by commercial pectinase followed by cloth filtration. Response surface methodology with Doehlert design was utilised to optimize the process parameters. The temperature of incubation (30-60 °C), time of reaction (20-120 min) and concentration of pectinase (0.01-0.05% v/w) were the independent variables and viscosity, clarity, alcohol insoluble solids (AIS), total polyphenol and protein concentration were the responses. Total soluble sugar, pH, conductivity, calcium, sodium and potassium concentration in the juice were also evaluated. The results showed reduction of AIS and viscosity with reaction time and pectinase concentration and reduction of polyphenol and protein concentration with temperature. Using numerical optimization, the optimum conditions for the enzymatic extraction of banana juice were estimated. Depectinization kinetics was also studied at optimum temperature and variation of kinetic constants with enzyme dose was evaluated. Copyright © 2013 Elsevier Ltd. All rights reserved.
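
    Response surface methodology ultimately fits a second-order polynomial in the design factors to each response. The sketch below does this by ordinary least squares for a hypothetical set of design points (temperature, time, enzyme dose) and a single response; the data are invented and the design is not an actual Doehlert matrix.

```python
import numpy as np

# Hypothetical design points: temperature (degC), time (min), pectinase dose (% v/w),
# and a measured response such as juice viscosity (mPa*s).
X = np.array([
    [45, 70, 0.03], [60, 70, 0.03], [30, 70, 0.03], [45, 120, 0.03],
    [45, 20, 0.03], [45, 70, 0.05], [45, 70, 0.01], [52, 95, 0.04],
    [38, 45, 0.02], [52, 45, 0.02], [38, 95, 0.04], [45, 70, 0.03],
], dtype=float)
y = np.array([3.1, 2.6, 3.9, 2.8, 3.6, 2.5, 3.8, 2.7, 3.4, 3.0, 2.9, 3.1])

def quadratic_terms(X):
    """Second-order response-surface model: intercept, linear, square and cross terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 ** 2, x2 ** 2, x3 ** 2, x1 * x2, x1 * x3, x2 * x3])

beta, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)
print("fitted coefficients:", np.round(beta, 4))
```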

  4. Preliminary noise tradeoff study of a Mach 2.7 cruise aircraft

    NASA Technical Reports Server (NTRS)

    Mascitti, V. R.; Maglieri, D. J. (Editor); Raney, J. P. (Editor)

    1979-01-01

    NASA computer codes in the areas of preliminary sizing and enroute performance, takeoff and landing performance, aircraft noise prediction, and economics were used in a preliminary noise tradeoff study for a Mach 2.7 design supersonic cruise concept. Aerodynamic configuration data were based on wind-tunnel model tests and related analyses. Aircraft structural characteristics and weight were based on advanced structural design methodologies, assuming conventional titanium technology. The most advanced noise prediction techniques available were used, and aircraft operating costs were estimated using accepted industry methods. The four engine cycles included in the study were based on assumed 1985 technology levels. Propulsion data were provided by aircraft manufacturers. Additional empirical data are needed to define both noise reduction features and other operating characteristics of all engine cycles under study. Data on VCE design parameters, coannular nozzle inverted flow noise reduction and advanced mechanical suppressors are urgently needed to reduce the present uncertainties in studies of this type.

  5. Evaluation of Simulation Models that Estimate the Effect of Dietary Strategies on Nutritional Intake: A Systematic Review.

    PubMed

    Grieger, Jessica A; Johnson, Brittany J; Wycherley, Thomas P; Golley, Rebecca K

    2017-05-01

    Background: Dietary simulation modeling can predict dietary strategies that may improve nutritional or health outcomes. Objectives: The study aims were to undertake a systematic review of simulation studies that model dietary strategies aiming to improve nutritional intake, body weight, and related chronic disease, and to assess the methodologic and reporting quality of these models. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guided the search strategy with studies located through electronic searches [Cochrane Library, Ovid (MEDLINE and Embase), EBSCOhost (CINAHL), and Scopus]. Study findings were described and dietary modeling methodology and reporting quality were critiqued by using a set of quality criteria adapted for dietary modeling from general modeling guidelines. Results: Forty-five studies were included and categorized as modeling moderation, substitution, reformulation, or promotion dietary strategies. Moderation and reformulation strategies targeted individual nutrients or foods to theoretically improve one particular nutrient or health outcome, estimating small to modest improvements. Substituting unhealthy foods with healthier choices was estimated to be effective across a range of nutrients, including an estimated reduction in intake of saturated fatty acids, sodium, and added sugar. Promotion of fruits and vegetables predicted marginal changes in intake. Overall, the quality of the studies was moderate to high, with certain features of the quality criteria consistently reported. Conclusions: Based on the results of reviewed simulation dietary modeling studies, targeting a variety of foods rather than individual foods or nutrients theoretically appears most effective in estimating improvements in nutritional intake, particularly reducing intake of nutrients commonly consumed in excess. A combination of strategies could theoretically be used to deliver the best improvement in outcomes. Study quality was moderate to high. However, given the lack of dietary simulation reporting guidelines, future work could refine the quality tool to harmonize consistency in the reporting of subsequent dietary modeling studies. © 2017 American Society for Nutrition.

  6. Benefit-Cost Analysis of Integrated Paratransit Systems : Volume 6. Technical Appendices.

    DOT National Transportation Integrated Search

    1979-09-01

    This last volume includes five technical appendices which document the methodologies used in the benefit-cost analysis. They are the following: Scenario analysis methodology; Impact estimation; Example of impact estimation; Sensitivity analysis; Agg...

  7. A framework for quantifying net benefits of alternative prognostic models.

    PubMed

    Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G

    2012-01-30

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd.

  8. A framework for quantifying net benefits of alternative prognostic models‡

    PubMed Central

    Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G

    2012-01-01

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd. PMID:21905066

  9. Development of regional stump-to-mill logging cost estimators

    Treesearch

    Chris B. LeDoux; John E. Baumgras

    1989-01-01

    Planning logging operations requires estimating the logging costs for the sale or tract being harvested. Decisions need to be made on equipment selection and its application to terrain. In this paper a methodology is described that has been developed and implemented to solve the problem of accurately estimating logging costs by region. The methodology blends field time...

  10. A Radial Basis Function Approach to Financial Time Series Analysis

    DTIC Science & Technology

    1993-12-01

    including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data... collection of practical techniques to address these issues for a modeling methodology, Radial Basis Function networks. These techniques include efficient... methodology often then amounts to a careful consideration of the interplay between model complexity and reliability. These will be recurrent themes

  11. Regional on-road vehicle running emissions modeling and evaluation for conventional and alternative vehicle technologies.

    PubMed

    Frey, H Christopher; Zhai, Haibo; Rouphail, Nagui M

    2009-11-01

    This study presents a methodology for estimating high-resolution, regional on-road vehicle emissions and the associated reductions in air pollutant emissions from vehicles that utilize alternative fuels or propulsion technologies. The fuels considered are gasoline, diesel, ethanol, biodiesel, compressed natural gas, hydrogen, and electricity. The technologies considered are internal combustion or compression engines, hybrids, fuel cell, and electric. Road link-based emission models are developed using modal fuel use and emission rates applied to facility- and speed-specific driving cycles. For an urban case study, passenger cars were found to be the largest sources of HC, CO, and CO(2) emissions, whereas trucks contributed the largest share of NO(x) emissions. When alternative fuel and propulsion technologies were introduced in the fleet at a modest market penetration level of 27%, their emission reductions were found to be 3-14%. Emissions for all pollutants generally decreased with an increase in the market share of alternative vehicle technologies. Turnover of the light duty fleet to newer Tier 2 vehicles reduced emissions of HC, CO, and NO(x) substantially. However, modest improvements in fuel economy may be offset by VMT growth and reductions in overall average speed.

  12. Cement replacement by sugar cane bagasse ash: CO2 emissions reduction and potential for carbon credits.

    PubMed

    Fairbairn, Eduardo M R; Americano, Branca B; Cordeiro, Guilherme C; Paula, Thiago P; Toledo Filho, Romildo D; Silvoso, Marcos M

    2010-09-01

    This paper presents a study of cement replacement by sugar cane bagasse ash (SCBA) at industrial scale, aiming to reduce the CO(2) emissions into the atmosphere. SCBA is a by-product of the sugar/ethanol agro-industry abundantly available in some regions of the world and has cementitious properties indicating that it can be used together with cement. Recent comprehensive research developed at the Federal University of Rio de Janeiro/Brazil has demonstrated that SCBA maintains, or even improves, the mechanical and durability properties of cement-based materials such as mortars and concretes. Brazil is the world's largest sugar cane producer and, being a developing country, can claim carbon credits. A simulation was carried out to estimate the potential of CO(2) emission reductions and the viability to issue certified emission reduction (CER) credits. The simulation was developed within the framework of the methodology established by the United Nations Framework Convention on Climate Change (UNFCCC) for the Clean Development Mechanism (CDM). The State of São Paulo (Brazil) was chosen for this case study because it concentrates about 60% of the national sugar cane and ash production together with an important concentration of cement factories. Since one of the key variables to estimate the CO(2) emissions is the average distance between sugar cane/ethanol factories and the cement plants, a genetic algorithm was developed to solve this optimization problem. The results indicated that SCBA blended cement reduces CO(2) emissions, which qualifies this product for CDM projects. 2010 Elsevier Ltd. All rights reserved.

  13. The Effect of the California Tobacco Control Program on Smoking Prevalence, Cigarette Consumption, and Healthcare Costs: 1989–2008

    PubMed Central

    Lightwood, James; Glantz, Stanton A.

    2013-01-01

    Background: Previous research has shown that tobacco control funding in California has reduced per capita cigarette consumption and per capita healthcare expenditures. This paper refines our earlier model by estimating the effect of California tobacco control funding on current smoking prevalence and cigarette consumption per smoker and the effect of prevalence and consumption on per capita healthcare expenditures. The results are used to calculate new estimates of the effect of the California Tobacco Program. Methodology/Principal Findings: Using state-specific aggregate data, current smoking prevalence and cigarette consumption per smoker are modeled as functions of cumulative California and control states' per capita tobacco control funding, cigarette price, and per capita income. Per capita healthcare expenditures are modeled as a function of prevalence of current smoking, cigarette consumption per smoker, and per capita income. One additional dollar of cumulative per capita tobacco control funding is associated with a reduction in current smoking prevalence of 0.0497 (SE 0.00347) percentage points and current smoker cigarette consumption of 1.39 (SE 0.132) packs per smoker per year. Reductions of one percentage point in current smoking prevalence and one pack smoked per smoker are associated with $35.4 (SE $9.85) and $3.14 (SE 0.786) reductions in per capita healthcare expenditure, respectively (2010 dollars), using the National Income and Product Accounts (NIPA) measure of healthcare spending. Conclusions/Significance: Between FY 1989 and 2008 the California Tobacco Program cost $2.4 billion and led to cumulative NIPA healthcare expenditure savings of $134 (SE $30.5) billion. PMID:23418411
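
    As a hedged arithmetic illustration, the snippet below chains the per-capita associations reported in the abstract, assuming the two effects combine additively; the funding increment and the population figure are assumptions, not values from the study.

        # Rough illustration: chain the reported per-capita associations to
        # translate one extra dollar of program funding into healthcare savings.
        # Additivity of the two effects, the funding increment, and the
        # population figure are assumptions, not results from the study.
        delta_funding = 1.0                          # extra cumulative $ per capita
        prevalence_drop = 0.0497 * delta_funding     # percentage points (abstract)
        packs_drop = 1.39 * delta_funding            # packs per smoker per year (abstract)

        saving_per_capita = 35.4 * prevalence_drop + 3.14 * packs_drop   # 2010 $
        population = 38_000_000                      # assumed California population
        print(f"illustrative annual saving: ${saving_per_capita * population:,.0f}")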

  14. The Prone Protected Posture

    DTIC Science & Technology

    1980-08-01

    The first step required in this study was to characterize the prone protected posture. Basically, a man in the prone posture differs... reduction in the presented area of target personnel. Reference 6 contains a concise discussion of the methodology used to generate the shielding functions.

  15. Tunnel and Station Cost Methodology : Mined Tunnels

    DOT National Transportation Integrated Search

    1983-01-01

    The main objective of this study was to develop a model for estimating the cost of subway station and tunnel construction. This report describes a cost estimating methodology for subway tunnels that can be used by planners, designers, owners, and gov...

  16. Tunnel and Station Cost Methodology Volume II: Stations

    DOT National Transportation Integrated Search

    1981-01-01

    The main objective of this study was to develop a model for estimating the cost of subway station and tunnel construction. This report describes a cost estimating methodology for subway tunnels that can be used by planners, designers, owners, and gov...

  17. Economic Effects of Increased Control Zone Sizes in Conflict Resolution

    NASA Technical Reports Server (NTRS)

    Datta, Koushik

    1998-01-01

    A methodology for estimating the economic effects of different control zone sizes used in conflict resolutions between aircraft is presented in this paper. The methodology is based on estimating the difference in flight times of aircraft with and without the control zone, and converting the difference into a direct operating cost. Using this methodology the effects of increased lateral and vertical control zone sizes are evaluated.
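
    A minimal sketch of the conversion at the core of this methodology, a flight-time difference turned into a direct operating cost, is shown below; the delay and cost-per-minute figures are placeholder assumptions.

        # Sketch: convert an assumed flight-time difference (with vs. without the
        # enlarged control zone) into a direct operating cost. Both numbers are
        # placeholder assumptions, not values from the study.
        def control_zone_cost(extra_flight_time_min, doc_per_min):
            """Direct operating cost attributable to the larger control zone."""
            return extra_flight_time_min * doc_per_min

        delay_minutes = 0.8   # assumed extra flight time per conflict resolution
        doc_rate = 35.0       # assumed direct operating cost, $ per minute
        print(f"added cost per flight: ${control_zone_cost(delay_minutes, doc_rate):.2f}")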

  18. Predicting Vessel Trajectories from Ais Data Using R

    DTIC Science & Technology

    2017-06-01

    future position at the expectation level set by the user, therefore producing a valid methodology for both estimating the future vessel location and for assessing anomalous vessel behavior... methodology that brings them one step closer to attaining these goals. A key idea in the current literature is that the series of vessel locations

  19. REDD+ emissions estimation and reporting: dealing with uncertainty

    NASA Astrophysics Data System (ADS)

    Pelletier, Johanne; Martin, Davy; Potvin, Catherine

    2013-09-01

    The United Nations Framework Convention on Climate Change (UNFCCC) defined the technical and financial modalities of policy approaches and incentives to reduce emissions from deforestation and forest degradation in developing countries (REDD+). Substantial technical challenges hinder precise and accurate estimation of forest-related emissions and removals, as well as the setting and assessment of reference levels. These challenges could limit country participation in REDD+, especially if REDD+ emission reductions were to meet quality standards required to serve as compliance grade offsets for developed countries’ emissions. Using Panama as a case study, we tested the matrix approach proposed by Bucki et al (2012 Environ. Res. Lett. 7 024005) to perform sensitivity and uncertainty analysis distinguishing between ‘modelling sources’ of uncertainty, which refers to model-specific parameters and assumptions, and ‘recurring sources’ of uncertainty, which refers to random and systematic errors in emission factors and activity data. The sensitivity analysis estimated differences in the resulting fluxes ranging from 4.2% to 262.2% of the reference emission level. The classification of fallows and the carbon stock increment or carbon accumulation of intact forest lands were the two key parameters showing the largest sensitivity. The highest error propagated using Monte Carlo simulations was caused by modelling sources of uncertainty, which calls for special attention to ensure consistency in REDD+ reporting which is essential for securing environmental integrity. Due to the role of these modelling sources of uncertainty, the adoption of strict rules for estimation and reporting would favour comparability of emission reductions between countries. We believe that a reduction of the bias in emission factors will arise, among other things, from a globally concerted effort to improve allometric equations for tropical forests. Public access to datasets and methodology used to evaluate reference level and emission reductions would strengthen the credibility of the system by promoting accountability and transparency. To secure conservativeness and deal with uncertainty, we consider the need for further research using real data available to developing countries to test the applicability of conservative discounts including the trend uncertainty and other possible options that would allow real incentives and stimulate improvements over time. Finally, we argue that REDD+ result-based actions assessed on the basis of a dashboard of performance indicators, not only in ‘tonnes CO2 equ. per year’ might provide a more holistic approach, at least until better accuracy and certainty of forest carbon stocks emission and removal estimates to support a REDD+ policy can be reached.
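
    The sketch below illustrates, under assumed input distributions, the kind of Monte Carlo propagation of activity-data and emission-factor uncertainty discussed above; it is not the authors' model of Panama.

        # Sketch: Monte Carlo propagation of assumed uncertainty in activity data
        # (deforested area) and emission factors into an annual emission estimate.
        # All distributions and values are illustrative, not the Panama case data.
        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000
        area_deforested = rng.normal(10_000, 800, n)      # ha/yr (assumed)
        emission_factor = rng.normal(150, 30, n)          # tC/ha (assumed)
        emissions = area_deforested * emission_factor * 44.0 / 12.0   # tCO2/yr

        low, high = np.percentile(emissions, [2.5, 97.5])
        print(f"median {np.median(emissions):,.0f} tCO2/yr, 95% interval [{low:,.0f}, {high:,.0f}]")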

  20. Performance-based methodology for assessing seismic vulnerability and capacity of buildings

    NASA Astrophysics Data System (ADS)

    Shibin, Lin; Lili, Xie; Maosheng, Gong; Ming, Li

    2010-06-01

    This paper presents a performance-based methodology for the assessment of seismic vulnerability and capacity of buildings. The vulnerability assessment methodology is based on the HAZUS methodology and the improved capacity-demand-diagram method. The spectral displacement (Sd) of performance points on a capacity curve is used to estimate the damage level of a building. The relationship between Sd and peak ground acceleration (PGA) is established, and then a new vulnerability function is expressed in terms of PGA. Furthermore, the expected value of the seismic capacity index (SCev) is provided to estimate the seismic capacity of buildings based on the probability distribution of damage levels and the corresponding seismic capacity index. The results indicate that the proposed vulnerability methodology is able to assess seismic damage of a large building stock directly and quickly following an earthquake. The SCev provides an effective index to measure the seismic capacity of buildings and illustrate the relationship between the seismic capacity of buildings and seismic action. The estimated result is compared with damage surveys of the cities of Dujiangyan and Jiangyou in the M8.0 Wenchuan earthquake, revealing that the methodology is acceptable for seismic risk assessment and decision making. The primary reasons for discrepancies between the estimated results and the damage surveys are discussed.

  1. Security Events and Vulnerability Data for Cybersecurity Risk Estimation.

    PubMed

    Allodi, Luca; Massacci, Fabio

    2017-08-01

    Current industry standards for estimating cybersecurity risk are based on qualitative risk matrices as opposed to quantitative risk estimates. In contrast, risk assessment in most other industry sectors aims at deriving quantitative risk estimations (e.g., Basel II in Finance). This article presents a model and methodology to leverage on the large amount of data available from the IT infrastructure of an organization's security operation center to quantitatively estimate the probability of attack. Our methodology specifically addresses untargeted attacks delivered by automatic tools that make up the vast majority of attacks in the wild against users and organizations. We consider two-stage attacks whereby the attacker first breaches an Internet-facing system, and then escalates the attack to internal systems by exploiting local vulnerabilities in the target. Our methodology factors in the power of the attacker as the number of "weaponized" vulnerabilities he/she can exploit, and can be adjusted to match the risk appetite of the organization. We illustrate our methodology by using data from a large financial institution, and discuss the significant mismatch between traditional qualitative risk assessments and our quantitative approach. © 2017 Society for Risk Analysis.

  2. A high-resolution optical rangefinder using tunable focus optics and spatial photonic signal processing

    NASA Astrophysics Data System (ADS)

    Khwaja, Tariq S.; Mazhar, Mohsin Ali; Niazi, Haris Khan; Reza, Syed Azer

    2017-06-01

    In this paper, we present the design of a proposed optical rangefinder to determine the distance of a semi-reflective target from the sensor module. The sensor module deploys a simple Tunable Focus Lens (TFL), a Laser Source (LS) with a Gaussian Beam profile and a digital beam profiler/imager to achieve its desired operation. We show that, owing to the nature of existing measurement methodologies, previous attempts to use a simple TFL in prior art to estimate target distance mostly deliver "one-shot" distance measurement estimates instead of obtaining and using a larger dataset, which can significantly reduce the effect of a few largely incorrect individual data points on the final distance estimate. Using a measurement dataset and calculating averages also helps smooth out measurement errors by effectively low-pass filtering unexpectedly odd offsets in individual data points. In this paper, we show that a simple setup deploying an LS, a TFL and a beam profiler or imager is capable of delivering an entire measurement dataset, thus effectively mitigating the effects on measurement accuracy associated with "one-shot" measurement techniques. The technique we propose allows a Gaussian Beam from an LS to pass through the TFL. Tuning the focal length of the TFL results in altering the spot size of the beam at the beam imager plane. Recording these different spot radii at the plane of the beam profiler for each unique setting of the TFL provides us with a means to use this measurement dataset to obtain a significantly improved estimate of the target distance as opposed to relying on a single measurement. We show that an iterative least-squares curve-fit on the recorded data allows us to estimate distances of remote objects very precisely. We also show that, using basic ray-optics approximations, we obtain an initial seed value for the distance estimate and subsequently refine it through iterative residual reduction in the least-squares sense. In our experiments, we use a MEMS-based Digital Micro-mirror Device (DMD) as a beam imager/profiler as it delivers an accurate estimate of a Gaussian Beam profile. The proposed method, its working and the distance estimation methodology are discussed in detail. For a proof-of-concept, we back our claims with initial experimental results.
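
    A simplified sketch of the least-squares fitting idea is given below; it assumes a basic geometric-optics spot-size model w(P) = w0*|1 - d*P| for a collimated input beam and synthetic measurements, not the paper's exact Gaussian-beam model or data.

        # Sketch: estimate target distance by least-squares fitting of spot radii
        # recorded at several tunable-lens powers. The geometric-optics model
        # w(P) = w0*|1 - d*P| (collimated input) and the synthetic data are
        # assumptions for illustration, not the paper's exact Gaussian-beam model.
        import numpy as np
        from scipy.optimize import curve_fit

        def spot_radius(P, d, w0):
            """Geometric spot radius at distance d (m) for lens power P (1/m)."""
            return w0 * np.abs(1.0 - d * P)

        rng = np.random.default_rng(3)
        P = np.linspace(0.05, 0.6, 12)                     # lens power settings, 1/m
        w_meas = spot_radius(P, 1.5, 0.002) + rng.normal(0, 2e-5, P.size)

        (d_hat, w0_hat), _ = curve_fit(spot_radius, P, w_meas, p0=[1.0, 0.001])
        print(f"estimated distance: {d_hat:.3f} m, beam radius: {w0_hat * 1e3:.2f} mm")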

  3. Using the benchmark dose (BMD) methodology to determine an appropriate reduction of certain ingredients in food products.

    PubMed

    Bi, Jian

    2010-01-01

    As the desire to promote health increases, reductions of certain ingredients, for example, sodium, sugar, and fat in food products, are widely requested. However, the reduction is not risk-free in sensory and marketing aspects. Over-reduction may change the taste and influence the flavor of a product and lead to a decrease in consumers' overall liking or purchase intent for the product. This article uses the benchmark dose (BMD) methodology to determine an appropriate reduction. Calculations of BMD and the one-sided lower confidence limit of BMD (BMDL) are illustrated. The article also discusses how to calculate BMD and BMDL for overdispersed binary data in replicated testing based on a corrected beta-binomial model. USEPA Benchmark Dose Software (BMDS) was used and S-Plus programs were developed. The method discussed in the article was originally used to determine an appropriate reduction of certain ingredients, for example, sodium, sugar, and fat in food products, considering both health reasons and sensory or marketing risk.
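
    As a hedged sketch of the general BMD idea (not the article's BMDS/beta-binomial analysis), the snippet below fits a logistic dose-response to the proportion of consumers whose liking drops and solves for the reduction at which the extra risk reaches a 10% benchmark response; the data are invented.

        # Sketch of the general BMD idea (not the article's BMDS/beta-binomial
        # analysis): fit a logistic dose-response for the proportion of consumers
        # whose liking drops, then solve for the ingredient reduction at which the
        # extra risk over background reaches a 10% benchmark response (BMR).
        # The reduction levels and response counts are invented.
        import numpy as np
        from scipy.optimize import brentq, curve_fit

        reduction = np.array([0.0, 0.1, 0.2, 0.3, 0.4])    # fraction of sodium removed
        n_tested = np.array([50, 50, 50, 50, 50])
        n_dislike = np.array([4, 6, 10, 19, 30])
        p_obs = n_dislike / n_tested

        def logistic(d, a, b):
            return 1.0 / (1.0 + np.exp(-(a + b * d)))

        (a, b), _ = curve_fit(logistic, reduction, p_obs, p0=[-2.0, 5.0])

        p_background = logistic(0.0, a, b)
        bmr = 0.10
        target = p_background + bmr * (1.0 - p_background)
        bmd = brentq(lambda d: logistic(d, a, b) - target, 0.0, 1.0)
        print(f"estimated BMD (reduction fraction): {bmd:.3f}")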

  4. Seismic Hazard Estimates Using Ill-defined Macroseismic Data at Site

    NASA Astrophysics Data System (ADS)

    Albarello, D.; Mucciarelli, M.

    A new approach to seismic hazard estimation is proposed, based on documentary data concerning the local history of seismic effects. The adopted methodology allows for the use of "poor" data, such as the macroseismic ones, within a formally coherent approach that permits overcoming a number of problems connected to the forcing of available information in the frame of "standard" methodologies calibrated on the use of instrumental data. The use of the proposed methodology allows full exploitation of all the available information (that for many towns in Italy covers several centuries) making possible a correct use of macroseismic data characterized by different levels of completeness and reliability. As an application of the proposed methodology, seismic hazard estimates are presented for two towns located in Northern Italy: Bologna and Carpi.

  5. Purchasing power of civil servant health workers in Mozambique.

    PubMed

    Ferrinho, Fátima; Amaral, Marta; Russo, Giuliano; Ferrinho, Paulo

    2012-01-01

    Health workers' purchasing power is an important consideration in the development of strategies for health workforce development. This work explores the purchasing power variation of Mozambican public sector health workers between 1999 and 2007. This was done through a simple and easy-to-apply methodology to estimate salaries' capitalization rate, by means of the accumulated inflation rate, after taking wage revisions into account. All the career categories in the Ministry of Health and affiliated public sector institutions were considered. In general, the calculated purchasing power increased for most careers under study, and the highest percentage increase was observed for the lowest remuneration careers, contributing in this way to a relative reduction in the difference between the higher and the lower salaries. These results seem to contradict a commonly held assumption that health sector pay has deteriorated over the years, and with substantial damage for the poorest. Further studies appear to be needed to design a more accurate methodology to better understand the evolution and impact of public sector health workers' remunerations across the years.
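
    A minimal sketch of a salary capitalisation calculation by accumulated inflation, in the spirit of the methodology described, is given below; the salary levels and the flat 8% inflation rate are invented placeholders.

        # Sketch: a real-terms salary series obtained by deflating nominal salaries
        # with the accumulated inflation rate, applying wage revisions in the years
        # they occur. Salary levels and the flat 8% inflation are invented.
        nominal_salary = {1999: 100.0, 2003: 140.0, 2007: 210.0}    # wage revisions
        inflation = {year: 0.08 for year in range(1999, 2008)}      # assumed rate

        salary, deflator, real = 0.0, 1.0, {}
        for year in range(1999, 2008):
            salary = nominal_salary.get(year, salary)   # apply revision if any
            real[year] = salary / deflator              # purchasing power, 1999 terms
            deflator *= 1.0 + inflation[year]

        print({year: round(value, 1) for year, value in real.items()})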

  6. Ambient Vibration Testing for Story Stiffness Estimation of a Heritage Timber Building

    PubMed Central

    Min, Kyung-Won; Kim, Junhee; Park, Sung-Ah; Park, Chan-Soo

    2013-01-01

    This paper investigates dynamic characteristics of a historic wooden structure by ambient vibration testing, presenting a novel estimation methodology of story stiffness for the purpose of vibration-based structural health monitoring. As for the ambient vibration testing, measured structural responses are analyzed by two output-only system identification methods (i.e., frequency domain decomposition and stochastic subspace identification) to estimate modal parameters. The proposed story stiffness estimation methodology is based on an eigenvalue problem derived from a vibratory rigid body model. Using the identified natural frequencies, the eigenvalue problem is efficiently solved and uniquely yields story stiffness. It is noteworthy that application of the proposed methodology is not necessarily confined to the wooden structure illustrated in the paper. PMID:24227999
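
    The sketch below illustrates, for an assumed two-storey shear idealisation, how storey stiffnesses can be fitted so that the model's eigenvalue-derived natural frequencies match identified ones; the masses and "identified" frequencies are placeholders, and the paper's rigid-body formulation may differ in detail.

        # Sketch: fit storey stiffnesses of an assumed two-storey shear model so
        # that its eigenvalue-derived natural frequencies match the identified
        # ones. Masses and "identified" frequencies are placeholders.
        import numpy as np
        from scipy.optimize import least_squares

        m = np.array([2.0e4, 1.5e4])            # storey masses, kg (assumed)
        f_identified = np.array([2.1, 5.6])     # Hz, from output-only identification

        def natural_freqs(k):
            k1, k2 = k
            K = np.array([[k1 + k2, -k2], [-k2, k2]])
            M = np.diag(m)
            eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
            return np.sort(np.sqrt(np.abs(eigvals))) / (2.0 * np.pi)

        fit = least_squares(lambda k: natural_freqs(k) - f_identified,
                            x0=[5.0e6, 5.0e6], bounds=(1.0e4, 1.0e9))
        print("estimated storey stiffnesses (N/m):", np.round(fit.x, -3))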

  7. Fracture mechanics approach to estimate rail wear limits

    DOT National Transportation Integrated Search

    2009-10-01

    This paper describes a systematic methodology to estimate allowable limits for rail head wear in terms of vertical head-height loss, gage-face side wear, and/or the combination of the two. This methodology is based on the principles of engineering fr...

  8. U.S. DOE methodology for the development of geologic storage potential for carbon dioxide at the national and regional scale

    USGS Publications Warehouse

    Goodman, Angela; Hakala, J. Alexandra; Bromhal, Grant; Deel, Dawn; Rodosta, Traci; Frailey, Scott; Small, Michael; Allen, Doug; Romanov, Vyacheslav; Fazio, Jim; Huerta, Nicolas; McIntyre, Dustin; Kutchko, Barbara; Guthrie, George

    2011-01-01

    A detailed description of the United States Department of Energy (US-DOE) methodology for estimating CO2 storage potential for oil and gas reservoirs, saline formations, and unmineable coal seams is provided. The oil and gas reservoirs are assessed at the field level, while saline formations and unmineable coal seams are assessed at the basin level. The US-DOE methodology is intended for external users such as the Regional Carbon Sequestration Partnerships (RCSPs), future project developers, and governmental entities to produce high-level CO2 resource assessments of potential CO2 storage reservoirs in the United States and Canada at the regional and national scale; however, this methodology is general enough that it could be applied globally. The purpose of the US-DOE CO2 storage methodology, definitions of storage terms, and a CO2 storage classification are provided. Methodology for CO2 storage resource estimate calculation is outlined. The Log Odds Method, when applied with Monte Carlo Sampling, is presented in detail for estimation of the CO2 storage efficiency needed for CO2 storage resource estimates at the regional and national scale. CO2 storage potential reported in the US-DOE's assessment is intended to be distributed online by a geographic information system in NatCarb and made available as hard copy in the Carbon Sequestration Atlas of the United States and Canada. US-DOE's methodology will be continuously refined, incorporating results of the Development Phase projects conducted by the RCSPs from 2008 to 2018. Estimates will be formally updated every two years in subsequent versions of the Carbon Sequestration Atlas of the United States and Canada.
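
    As a hedged illustration, broadly in the spirit of the resource-estimation step above, the sketch below computes a volumetric saline-formation estimate with a Monte Carlo-sampled efficiency factor; the formation properties and the lognormal efficiency distribution are assumptions, not US-DOE parameter values.

        # Sketch: a volumetric saline-formation storage estimate of the form
        # G = A * h * phi * rho * E with the efficiency factor E sampled by Monte
        # Carlo. All inputs and the lognormal choice for E are assumptions for
        # illustration, not US-DOE parameter values.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 50_000
        area      = 1.0e10   # formation area, m^2 (assumed)
        thickness = 50.0     # gross thickness, m (assumed)
        porosity  = 0.12     # total porosity (assumed)
        rho_co2   = 700.0    # CO2 density at reservoir conditions, kg/m^3 (assumed)
        eff = rng.lognormal(mean=np.log(0.02), sigma=0.4, size=n)   # efficiency

        g_tonnes = area * thickness * porosity * rho_co2 * eff / 1000.0
        p10, p50, p90 = np.percentile(g_tonnes, [10, 50, 90])
        print(f"storage resource (Mt CO2): P10={p10/1e6:.0f}, P50={p50/1e6:.0f}, P90={p90/1e6:.0f}")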

  9. 75 FR 8649 - Request for Comments on Methodology for Conducting an Independent Study of the Burden of Patent...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-25

    ...] Request for Comments on Methodology for Conducting an Independent Study of the Burden of Patent-Related... methodologies for performing such a study (Methodology Report). ICF has now provided the USPTO with its Methodology Report, in which ICF recommends methodologies for addressing various topics about estimating the...

  10. Risk-cost-benefit analysis of atrazine in drinking water from agricultural activities and policy implications

    NASA Astrophysics Data System (ADS)

    Tesfamichael, Aklilu A.; Caplan, Arthur J.; Kaluarachchi, Jagath J.

    2005-05-01

    This study provides an improved methodology for investigating the trade-offs between the health risks and economic benefits of using atrazine in the agricultural sector by incorporating public attitude to pesticide management in the analysis. Regression models are developed to predict finished water atrazine concentration in high-risk community water supplies in the United States. The predicted finished water atrazine concentrations are then used in a health risk assessment. The computed health risks are compared with the total economic surplus in the U.S. corn market for different atrazine application rates using estimated demand and supply functions developed in this work. Analysis of different scenarios with consumer price premiums for chemical-free and reduced-chemical corn indicate that if the society is willing to pay a price premium, risks can be reduced without a large reduction in the total economic surplus and net benefits may be higher. The results also show that this methodology provides an improved scientific framework for future decision making and policy evaluation in pesticide management.

  11. Generalized causal mediation and path analysis: Extensions and practical considerations.

    PubMed

    Albert, Jeffrey M; Cho, Jang Ik; Liu, Yiying; Nelson, Suchitra

    2018-01-01

    Causal mediation analysis seeks to decompose the effect of a treatment or exposure among multiple possible paths and provide causally interpretable path-specific effect estimates. Recent advances have extended causal mediation analysis to situations with a sequence of mediators or multiple contemporaneous mediators. However, available methods still have limitations, and computational and other challenges remain. The present paper provides an extended causal mediation and path analysis methodology. The new method, implemented in the new R package, gmediation (described in a companion paper), accommodates both a sequence (two stages) of mediators and multiple mediators at each stage, and allows for multiple types of outcomes following generalized linear models. The methodology can also handle unsaturated models and clustered data. Addressing other practical issues, we provide new guidelines for the choice of a decomposition, and for the choice of a reference group multiplier for the reduction of Monte Carlo error in mediation formula computations. The new method is applied to data from a cohort study to illuminate the contribution of alternative biological and behavioral paths in the effect of socioeconomic status on dental caries in adolescence.

  12. Contribution of cooperative sector recycling to greenhouse gas emissions reduction: A case study of Ribeirão Pires, Brazil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Megan F., E-mail: mfking@uvic.ca; Gutberlet, Jutta, E-mail: gutber@uvic.ca

    Highlights: • Cooperative recycling achieves environmental, economic and social objectives. • We calculate GHG emissions reduction for a recycling cooperative in São Paulo, Brazil. • The cooperative merits consideration as a Clean Development Mechanism (CDM) project. • A CDM project would enhance the achievements of the recycling cooperative. • National and local waste management policies support the recycling cooperative. - Abstract: Solid waste, including municipal waste and its management, is a major challenge for most cities and among the key contributors to climate change. Greenhouse gas emissions can be reduced through recovery and recycling of resources from the municipal solid waste stream. In São Paulo, Brazil, recycling cooperatives play a crucial role in providing recycling services including collection, separation, cleaning, stocking, and sale of recyclable resources. The present research attempts to measure the greenhouse gas emission reductions achieved by the recycling cooperative Cooperpires, as well as highlight its socioeconomic benefits. Methods include participant observation, structured interviews, questionnaire application, and greenhouse gas accounting of recycling using a Clean Development Mechanism methodology. The results show that recycling cooperatives can achieve important energy savings and reductions in greenhouse gas emissions, and suggest there is an opportunity for Cooperpires and other similar recycling groups to participate in the carbon credit market. Based on these findings, the authors created a simple greenhouse gas accounting calculator for recyclers to estimate their emissions reductions.
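
    A minimal sketch of the sort of simple GHG accounting calculator mentioned above is given below; the tonnages and avoided-emission factors are placeholder assumptions rather than CDM methodology values.

        # Sketch: a simple GHG accounting calculator of the kind mentioned above,
        # crediting avoided emissions per tonne of material recovered. Tonnages and
        # avoided-emission factors are placeholder assumptions, not CDM values.
        avoided_factor = {        # tCO2e avoided per tonne recycled (assumed)
            "paper": 0.9,
            "PET": 1.5,
            "aluminium": 9.0,
            "glass": 0.3,
        }
        collected_tonnes = {"paper": 120, "PET": 35, "aluminium": 4, "glass": 60}

        reduction = sum(collected_tonnes[m] * avoided_factor[m] for m in collected_tonnes)
        print(f"estimated annual GHG reduction: {reduction:.1f} tCO2e")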

  13. Development of a Quantitative Methodology to Assess the Impacts of Urban Transport Interventions and Related Noise on Well-Being

    PubMed Central

    Braubach, Matthias; Tobollik, Myriam; Mudu, Pierpaolo; Hiscock, Rosemary; Chapizanis, Dimitris; Sarigiannis, Denis A.; Keuken, Menno; Perez, Laura; Martuzzi, Marco

    2015-01-01

    Well-being impact assessments of urban interventions are a difficult challenge, as there is no agreed methodology and scarce evidence on the relationship between environmental conditions and well-being. The European Union (EU) project “Urban Reduction of Greenhouse Gas Emissions in China and Europe” (URGENCHE) explored a methodological approach to assess traffic noise-related well-being impacts of transport interventions in three European cities (Basel, Rotterdam and Thessaloniki) linking modeled traffic noise reduction effects with survey data indicating noise-well-being associations. Local noise models showed a reduction of high traffic noise levels in all cities as a result of different urban interventions. Survey data indicated that perception of high noise levels was associated with lower probability of well-being. Connecting the local noise exposure profiles with the noise-well-being associations suggests that the urban transport interventions may have a marginal but positive effect on population well-being. This paper also provides insight into the methodological challenges of well-being assessments and highlights the range of limitations arising from the current lack of reliable evidence on environmental conditions and well-being. Due to these limitations, the results should be interpreted with caution. PMID:26016437

  14. Development of a quantitative methodology to assess the impacts of urban transport interventions and related noise on well-being.

    PubMed

    Braubach, Matthias; Tobollik, Myriam; Mudu, Pierpaolo; Hiscock, Rosemary; Chapizanis, Dimitris; Sarigiannis, Denis A; Keuken, Menno; Perez, Laura; Martuzzi, Marco

    2015-05-26

    Well-being impact assessments of urban interventions are a difficult challenge, as there is no agreed methodology and scarce evidence on the relationship between environmental conditions and well-being. The European Union (EU) project "Urban Reduction of Greenhouse Gas Emissions in China and Europe" (URGENCHE) explored a methodological approach to assess traffic noise-related well-being impacts of transport interventions in three European cities (Basel, Rotterdam and Thessaloniki) linking modeled traffic noise reduction effects with survey data indicating noise-well-being associations. Local noise models showed a reduction of high traffic noise levels in all cities as a result of different urban interventions. Survey data indicated that perception of high noise levels was associated with lower probability of well-being. Connecting the local noise exposure profiles with the noise-well-being associations suggests that the urban transport interventions may have a marginal but positive effect on population well-being. This paper also provides insight into the methodological challenges of well-being assessments and highlights the range of limitations arising from the current lack of reliable evidence on environmental conditions and well-being. Due to these limitations, the results should be interpreted with caution.

  15. Multilevel sparse functional principal component analysis.

    PubMed

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  16. Thermospheric Studies with Mars Global Surveyor

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Bruinsma, S.; Chin, D. S.; Forbes, J. M.

    2006-01-01

    The Mars Global Surveyor spacecraft has been located in a near-circular, polar, and low-altitude mapping orbit about Mars for six years, since February 1999. The spacecraft is tracked routinely by the antennae of the Deep Space Network (DSN), using the X Band radio system of the spacecraft. These tracking data have been used for routine spacecraft navigation, and for radio science studies, such as the estimation of the static and time-varying gravity field of Mars. In this paper we describe the methodology for reduction of these data in order to estimate the Mars atmospheric density (normalized to an altitude of 380 km) over half a solar cycle, where we discern the correlation of the density with the incident solar flux, and the 27-day solar rotation. The results show that the density at the MGS altitude varies from a mean of 0.7 x 10^-17 grams/cu cm near aphelion to a mean of 3.0 x 10^-17 grams/cu cm near perihelion.

  17. The Demand for Scientific and Technical Manpower in Selected Energy-Related Industries, 1970-85: A Methodology Applied to a Selected Scenario of Energy Output. A Summary.

    ERIC Educational Resources Information Center

    Gutmanis, Ivars; And Others

    The primary purpose of the study was to develop and apply a methodology for estimating the need for scientists and engineers by specialty in energy and energy-related industries. The projections methodology was based on the Case 1 estimates by the National Petroleum Council of the results of "maximum efforts" to develop domestic fuel sources by…

  18. Decision analysis of visual range improvements attributable to sulfur dioxide emission reductions in the eastern United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balson, W.E.; Rice, J.S.

    1988-01-01

    The Environmental Protection Agency (EPA) recently published an advance notice of proposed rulemaking (ANPR) (Federal Register, July 1, 1987) inquiring into the need for a secondary ambient standard for fine particles to protect visibility in the east and urban west. The EPA has solicited comments on the application of cost and benefit analyses in making decisions about such standards. In response to this request for comments, the Utility Air Regulatory Group (UARG) requested that Decision Focus Incorporated (DFI) estimate the benefits of visibility improvements reasonably associated with changes in SO2 emissions, to compare those benefits with the cost of achieving those emission reductions, and to assess the value of acquiring more information before making a decision, taking into account the uncertainties associated with these estimates. This request followed a presentation by DFI on such a method at the Grand Teton Specialty Conference on Visibility. In coordination with this cost and benefit comparison, UARG has also requested that other contractors estimate the levels of uncertainty in visibility improvements, the household value for visibility improvements, and the costs of implementation. The information provided by those contractors served as key inputs for the methodology and the results that are described in this paper. The information on visibility improvements was provided by AeroVironment Incorporated (AV), Zannetti. The information on household value was provided by Dr. Paul Ruud. Finally, the information on costs was provided by Temple, Baker, and Sloane, Incorporated (TBS). The three reports described above are discussed in this paper.

  19. Estimates of low-level waste volumes and classifications at 2-Unit 1100 MWe reference plants for decommissioning scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hauf, M.J.; Vance, J.N.; James, D.

    1991-01-01

    A number of nuclear utilities and industry organizations in the United States have evaluated the requirements for reactor decommissioning. These broad-scope studies have addressed the major issues of technology, methodology, safety and costs of decommissioning and have produced substantial volumes of data to describe, in detail, the issues and impacts which result. The objective of this paper is to provide CECo a reasonable basis for discussing low-level waste burial volumes for the most likely decommissioning options and to show how various decontamination and VR technologies can be applied to provide additional reduction of the volumes required to be buried at low-level waste burial grounds.

  20. A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr

    We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.

  1. A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit

    DOE PAGES

    Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr; ...

    2017-06-07

    We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.
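
    To illustrate the contrast between ergodic time averaging and ensemble averaging over de-correlated realizations, a toy sketch follows; the synthetic AR(1) signals stand in for turbulence statistics and are not Nek5000 output.

        # Sketch: contrast time averaging of one long run with ensemble averaging
        # over several short, independently seeded runs. The synthetic AR(1)
        # signals stand in for turbulence statistics; this is not Nek5000 output.
        import numpy as np

        def realization(n_steps, seed):
            """A correlated synthetic signal standing in for one flow realization."""
            r = np.random.default_rng(seed)
            x = np.zeros(n_steps)
            for i in range(1, n_steps):
                x[i] = 0.95 * x[i - 1] + r.normal(0.0, 0.1)
            return x

        long_run = realization(80_000, seed=0)                      # time averaging
        ensemble = [realization(8_000, seed=s) for s in range(10)]  # ensemble averaging

        print("time-average estimate:    ", long_run.mean())
        print("ensemble-average estimate:", np.mean([r.mean() for r in ensemble]))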

  2. PEM Fuel Cells Redesign Using Biomimetic and TRIZ Design Methodologies

    NASA Astrophysics Data System (ADS)

    Fung, Keith Kin Kei

    Two formal design methodologies, biomimetic design and the Theory of Inventive Problem Solving, TRIZ, were applied to the redesign of a Proton Exchange Membrane (PEM) fuel cell. Proof-of-concept prototyping was performed on two of the concepts for water management. The liquid water collection with strategically placed wicks concept demonstrated the potential benefits for a fuel cell. Conversely, the periodic flow direction reversal concepts might cause a reduction in water removal from a fuel cell. The causes of this water removal reduction remain unclear. In addition, three of the concepts generated with biomimetic design were further studied and demonstrated to stimulate more creative ideas in the thermal and water management of fuel cells. The biomimetic design and the TRIZ methodologies were successfully applied to fuel cells and provided different perspectives on the redesign of fuel cells. The methodologies should continue to be used to improve fuel cells.

  3. SCAP: a new methodology for safety management based on feedback from credible accident-probabilistic fault tree analysis system.

    PubMed

    Khan, F I; Iqbal, A; Ramesh, N; Abbasi, S A

    2001-10-12

    As it is conventionally done, strategies for incorporating accident-prevention measures in any hazardous chemical process industry are developed on the basis of input from risk assessment. However, the two steps, risk assessment and hazard reduction (or safety) measures, are not linked interactively in the existing methodologies. This prevents a quantitative assessment of the impacts of safety measures on risk control. We have made an attempt to develop a methodology in which risk assessment steps are interactively linked with implementation of safety measures. The resultant system tells us the extent of reduction of risk by each successive safety measure. It also indicates, based on sophisticated maximum credible accident analysis (MCAA) and probabilistic fault tree analysis (PFTA), whether a given unit can ever be made 'safe'. The application of the methodology has been illustrated with a case study.

  4. Local deformation for soft tissue simulation

    PubMed Central

    Omar, Nadzeri; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2016-01-01

    This paper presents a new methodology to localize the deformation range to improve the computational efficiency for soft tissue simulation. This methodology identifies the local deformation range from the stress distribution in soft tissues due to an external force. A stress estimation method based on elastic theory is used to estimate the stress in soft tissues according to depth from the contact surface. The proposed methodology can be used with both mass-spring and finite element modeling approaches for soft tissue deformation. Experimental results show that the proposed methodology can improve the computational efficiency while maintaining the modeling realism. PMID:27286482

  5. Geothermal resources and reserves in Indonesia: an updated revision

    NASA Astrophysics Data System (ADS)

    Fauzi, A.

    2015-02-01

    More than 300 high- to low-enthalpy geothermal sources have been identified throughout Indonesia. From the early 1980s until the late 1990s, the geothermal potential for power production in Indonesia was estimated to be about 20 000 MWe. The most recent estimate exceeds 29 000 MWe derived from the 300 sites (Geological Agency, December 2013). This resource estimate has been obtained by adding all of the estimated geothermal potential resources and reserves classified as "speculative", "hypothetical", "possible", "probable", and "proven" from all sites where such information is available. However, this approach to estimating the geothermal potential is flawed because it includes double counting of some reserve estimates as resource estimates, thus giving an inflated figure for the total national geothermal potential. This paper describes an updated revision of the geothermal resource estimate in Indonesia using a more realistic methodology. The methodology proposes that the preliminary "Speculative Resource" category should cover the full potential of a geothermal area and form the base reference figure for the resource of the area. Further investigation of this resource may improve the level of confidence of the category of reserves but will not necessarily increase the figure of the "preliminary resource estimate" as a whole, unless the result of the investigation is higher. A previous paper (Fauzi, 2013a, b) redefined and revised the geothermal resource estimate for Indonesia. The methodology, adopted from Fauzi (2013a, b), will be fully described in this paper. As a result of using the revised methodology, the potential geothermal resources and reserves for Indonesia are estimated to be about 24 000 MWe, some 5000 MWe less than the 2013 national estimate.

  6. Methodology for conceptual remote sensing spacecraft technology: insertion analysis balancing performance, cost, and risk

    NASA Astrophysics Data System (ADS)

    Bearden, David A.; Duclos, Donald P.; Barrera, Mark J.; Mosher, Todd J.; Lao, Norman Y.

    1997-12-01

    Emerging technologies and micro-instrumentation are changing the way remote sensing spacecraft missions are developed and implemented. Government agencies responsible for procuring space systems are increasingly requesting analyses to estimate cost, performance and design impacts of advanced technology insertion for both state-of-the-art systems and systems to be built 5 to 10 years in the future. Numerous spacecraft technology development programs are being sponsored by Department of Defense (DoD) and National Aeronautics and Space Administration (NASA) agencies with the goal of enhancing spacecraft performance, reducing mass, and reducing cost. However, it is often the case that technology studies, in the interest of maximizing subsystem-level performance and/or mass reduction, do not anticipate synergistic system-level effects. Furthermore, even though technical risks are often identified as one of the largest cost drivers for space systems, many cost/design processes and models ignore effects of cost risk in the interest of quick estimates. To address these issues, the Aerospace Corporation developed a concept analysis methodology and associated software tools. These tools, collectively referred to as the concept analysis and design evaluation toolkit (CADET), facilitate system architecture studies and space system conceptual designs focusing on design heritage, technology selection, and associated effects on cost, risk and performance at the system and subsystem level. CADET allows: (1) quick response to technical design and cost questions; (2) assessment of the cost and performance impacts of existing and new designs/technologies; and (3) estimation of cost uncertainties and risks. These capabilities aid mission designers in determining the configuration of remote sensing missions that meet essential requirements in a cost-effective manner. This paper discusses the development of CADET modules and their application to several remote sensing satellite mission concepts.

  7. Reinforcement-learning-based dual-control methodology for complex nonlinear discrete-time systems with application to spark engine EGR operation.

    PubMed

    Shih, Peter; Kaul, Brian C; Jagannathan, S; Drallmeier, James A

    2008-08-01

    A novel reinforcement-learning-based dual-control adaptive neural network (NN) controller methodology is developed to deliver a desired tracking performance for a class of complex feedback nonlinear discrete-time systems, which consists of a second-order nonlinear discrete-time system in nonstrict feedback form and an affine nonlinear discrete-time system, in the presence of bounded and unknown disturbances. For example, the exhaust gas recirculation (EGR) operation of a spark ignition (SI) engine is modeled by using such a complex nonlinear discrete-time system. A dual-controller approach is undertaken in which the primary adaptive critic NN controller is designed for the nonstrict feedback nonlinear discrete-time system, whereas the secondary one is designed for the affine nonlinear discrete-time system; together, the controllers offer the desired performance. The primary adaptive critic NN controller includes an NN observer for estimating the states and output, an NN critic, and two action NNs for generating virtual control and actual control inputs for the nonstrict feedback nonlinear discrete-time system, whereas an additional critic NN and an action NN are included for the affine nonlinear discrete-time system by assuming state availability. All NN weights adapt online towards minimization of a certain performance index, utilizing a gradient-descent-based rule. Using Lyapunov theory, the uniform ultimate boundedness (UUB) of the closed-loop tracking error, weight estimates, and observer estimates is shown. The adaptive critic NN controller performance is evaluated on an SI engine operating with high EGR levels, where the controller objective is to reduce cyclic dispersion in heat release while minimizing fuel intake. Simulation and experimental results indicate that engine-out emissions drop significantly at 20% EGR due to the reduction in dispersion in heat release, thus verifying the dual-control approach.

  8. Ecological-economic assessment of the effects of freshwater flow in the Florida Everglades on recreational fisheries.

    PubMed

    Brown, Christina Estela; Bhat, Mahadev G; Rehage, Jennifer S; Mirchi, Ali; Boucek, Ross; Engel, Victor; Ault, Jerald S; Mozumder, Pallab; Watkins, David; Sukop, Michael

    2018-06-15

    This research develops an integrated methodology to determine the economic value to anglers of recreational fishery ecosystem services in Everglades National Park that could result from different water management scenarios. The study first used bio-hydrological models to link managed freshwater inflows to indicators of fishery productivity and ecosystem health, then linked those models to anglers' willingness-to-pay for various attributes of the recreational fishing experience and monthly fishing effort. This approach allowed us to estimate the foregone economic benefits of failing to meet monthly freshwater delivery targets. The study found that the managed freshwater delivery to the Park had declined substantially over the years and had fallen short of management targets. This shortage in the flow resulted in the decline of biological productivity of recreational fisheries in downstream coastal areas. This decline had in turn contributed to reductions in the overall economic value of recreational ecosystem services enjoyed by anglers. The study estimated the annual value of lost recreational services at $68.81 million. The losses were greater in dry-season months, when the water shortage was larger and more anglers were fishing than in the wet season. The study also developed conservative estimates of the implicit price of water for recreation, which ranged from $11.88 per acre-foot (AF) in November to $112.11 per AF in April. The annual average price was $41.54 per AF. Linking anglers' recreational preferences directly to a decision variable such as water delivery is a powerful and effective way to inform management decisions. This methodology has relevant applications to water resource management, serving as useful decision-support metrics, as well as for policy and restoration scenario analysis. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Combining heuristic and statistical techniques in landslide hazard assessments

    NASA Astrophysics Data System (ADS)

    Cepeda, Jose; Schwendtner, Barbara; Quan, Byron; Nadim, Farrokh; Diaz, Manuel; Molina, Giovanni

    2014-05-01

    As a contribution to the Global Assessment Report 2013 - GAR2013, coordinated by the United Nations International Strategy for Disaster Reduction - UNISDR, a drill-down exercise for landslide hazard assessment was carried out by entering the results of both heuristic and statistical techniques into a new but simple combination rule. The data available for this evaluation included landslide inventories, both historical and event-based. In addition to the application of a heuristic method used in the previous editions of GAR, the availability of inventories motivated the use of statistical methods. The heuristic technique is largely based on the Mora & Vahrson method, which estimates hazard as the product of susceptibility and triggering factors, where classes are weighted based on expert judgment and experience. Two statistical methods were also applied: the landslide index method, which estimates weights of the classes for the susceptibility and triggering factors based on the evidence provided by the density of landslides in each class of the factors; and the weights of evidence method, which extends the previous technique to include both positive and negative evidence of landslide occurrence in the estimation of weights for the classes. One key aspect during the hazard evaluation was the decision on the methodology to be chosen for the final assessment. Instead of opting for a single methodology, it was decided to combine the results of the three implemented techniques using a combination rule based on a normalization of the results of each method. The hazard evaluation was performed for both earthquake- and rainfall-induced landslides. The country chosen for the drill-down exercise was El Salvador. The results indicate that highest hazard levels are concentrated along the central volcanic chain and at the centre of the northern mountains.
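
    A minimal sketch of the normalization-based combination rule described above, assuming three hazard rasters have already been produced by the heuristic, landslide-index and weights-of-evidence methods; the 3x3 arrays and the equal weighting are illustrative, not the GAR2013 inputs.

        import numpy as np

        def normalize(h):
            # Rescale a hazard raster to [0, 1] (min-max normalization).
            return (h - h.min()) / (h.max() - h.min())

        # Illustrative 3x3 hazard rasters from the three methods (arbitrary units).
        mora_vahrson    = np.array([[2.0, 5.0, 9.0], [1.0, 4.0, 7.0], [0.5, 3.0, 6.0]])
        landslide_index = np.array([[0.1, 0.4, 0.9], [0.0, 0.3, 0.8], [0.1, 0.2, 0.7]])
        weights_of_evid = np.array([[-1.0, 0.5, 2.5], [-1.5, 0.2, 2.0], [-2.0, 0.0, 1.5]])

        # Combine the normalized maps; equal weights are an assumption here.
        combined = np.mean([normalize(h) for h in
                            (mora_vahrson, landslide_index, weights_of_evid)], axis=0)
        print(np.round(combined, 2))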

  10. Bigger is Better, but at What Cost? Estimating the Economic Value of Incremental Data Assets.

    PubMed

    Dalessandro, Brian; Perlich, Claudia; Raeder, Troy

    2014-06-01

    Many firms depend on third-party vendors to supply data for commercial predictive modeling applications. An issue that has received very little attention in the prior research literature is the estimation of a fair price for purchased data. In this work we present a methodology for estimating the economic value of adding incremental data to predictive modeling applications and present two case studies. The methodology starts with estimating the effect that incremental data has on model performance in terms of common classification evaluation metrics. This effect is then translated into economic units, which gives an expected economic value that the firm might realize with the acquisition of a particular data asset. With this estimate a firm can then set a data acquisition price that targets a particular return on investment. This article presents the methodology in full detail and illustrates it in the context of two marketing case studies.
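
    The translation from a model-performance lift to a data price can be sketched as follows; the conversion rates, campaign size, and target return on investment are hypothetical placeholders rather than figures from the case studies.

        # Hypothetical illustration: convert a lift in conversion rate obtained from
        # incremental data into an expected profit, then back out a price that meets
        # a target return on investment.
        n_targeted     = 1_000_000      # impressions scored by the model per year
        value_per_conv = 30.0           # profit per conversion ($)
        base_conv_rate = 0.0040         # conversion rate without the extra data
        lift_conv_rate = 0.0046         # conversion rate with the extra data

        incremental_value = n_targeted * (lift_conv_rate - base_conv_rate) * value_per_conv
        target_roi = 2.0                # firm wants $2 back per $1 spent on data
        max_price = incremental_value / (1.0 + target_roi)

        print(f"expected incremental value: ${incremental_value:,.0f}/year")
        print(f"maximum data price at {target_roi:.0f}x ROI: ${max_price:,.0f}/year")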

  11. Reliability study on high power 638-nm triple emitter broad area laser diode

    NASA Astrophysics Data System (ADS)

    Yagi, T.; Kuramoto, K.; Kadoiwa, K.; Wakamatsu, R.; Miyashita, M.

    2016-03-01

    Reliabilities of the 638-nm triple emitter broad area laser diode (BA-LD) with the window-mirror structure were studied. A methodology to estimate the mean time to failure (MTTF) due to catastrophic optical mirror degradation (COMD) within a reasonable aging duration was newly proposed. The power at which the LD failed due to COMD (PCOMD) was measured for the aged LDs under several aging conditions. It was revealed that the PCOMD was proportional to the logarithm of the aging duration, and the MTTF due to COMD (MTTF(COMD)) could be estimated by using this relation. The MTTF(COMD) estimated by the methodology with an aging duration of approximately 2,000 hours was consistent with that estimated by long-term aging. By using this methodology, the MTTF of the BA-LD was estimated to exceed 100,000 hours at an output of 2.5 W and a duty cycle of 30%.
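
    The reported proportionality between PCOMD and the logarithm of aging duration suggests an extrapolation of the following form; the aging data points are invented for illustration, and the 2.5 W operating power is simply the value quoted above.

        import numpy as np

        # Invented aged-LD data: aging time (h) and measured COMD failure power (W).
        t_hours = np.array([50.0, 200.0, 500.0, 1000.0, 2000.0])
        p_comd  = np.array([4.9, 4.5, 4.2, 4.0, 3.8])

        # Assume P_COMD falls linearly with log10(aging time): P = a*log10(t) + b.
        a, b = np.polyfit(np.log10(t_hours), p_comd, 1)

        # MTTF(COMD) at a given operating power = time at which the fit reaches it.
        p_operate = 2.5
        mttf = 10 ** ((p_operate - b) / a)
        print(f"slope {a:.3f} W per decade; extrapolated MTTF(COMD) ~ {mttf:,.0f} h")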

  12. Statistical Methodology for Assigning Emissions to Industries in the United States, Revised Estimates: 1970 to 1997 (2001)

    EPA Pesticide Factsheets

    This report presents the results of a study that develops a methodology to assign emissions to the manufacturing and nonmanufacturing industries that comprise the industrial sector of the EPA’s national emission estimates for 1970 to 1997.

  13. Valuing Drinking Water Risk Reductions Using the Contingent Valuation Method: A Methodological Study of Risks from THM and Giardia (1986)

    EPA Pesticide Factsheets

    This study develops contingent valuation methods for measuring the benefits of mortality and morbidity drinking water risk reductions. The major effort was devoted to developing and testing a survey instrument to value low-level risk reductions.

  14. Mathematical Modeling to Reduce Waste of Compounded Sterile Products in Hospital Pharmacies

    PubMed Central

    Dobson, Gregory; Haas, Curtis E.; Tilson, David

    2014-01-01

    In recent years, many US hospitals embarked on “lean” projects to reduce waste. One advantage of the lean operational improvement methodology is that it relies on process observation by those engaged in the work and requires relatively little data. However, the thoughtful analysis of the data captured by operational systems allows the modeling of many potential process options. Such models permit the evaluation of likely waste reductions and financial savings before actual process changes are made. Thus the most promising options can be identified prospectively, change efforts targeted accordingly, and realistic targets set. This article provides one example of such a data-driven process redesign project focusing on waste reduction in an in-hospital pharmacy. A mathematical model of the medication prepared and delivered by the pharmacy is used to estimate the savings from several potential redesign options (rescheduling the start of production, scheduling multiple batches, or reordering production within a batch) as well as the impact of information system enhancements. The key finding is that mathematical modeling can indeed be a useful tool. In one hospital setting, it estimated that waste could be realistically reduced by around 50% by using several process changes and that the greatest benefit would be gained by rescheduling the start of production (for a single batch) away from the period when most order cancellations are made. PMID:25477580

  15. Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C

    2007-01-01

    More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as a forward-weighted CADIS (FW-CADIS).
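
    A toy one-dimensional illustration of the forward-weighting idea: the adjoint source in each mesh cell is divided by a forward estimate of the response there, so cells that would otherwise be poorly sampled receive more importance. The fluxes below are fabricated, and the sketch is not the MAVRIC/ADVANTG implementation.

        import numpy as np

        # Fabricated forward-flux estimate over a 1-D mesh (deep-penetration problem:
        # the flux drops by orders of magnitude with depth).
        forward_flux = np.array([1.0e0, 1.0e-1, 1.0e-2, 1.0e-3, 1.0e-4])
        response_fn  = np.ones_like(forward_flux)   # uniform response (global mesh tally)

        # Standard CADIS for a global response: adjoint source = response function.
        cadis_adjoint_src = response_fn / response_fn.sum()

        # FW-CADIS: weight the adjoint source by the inverse of the forward estimate,
        # pushing importance toward cells that would otherwise be poorly sampled.
        fw = response_fn / forward_flux
        fw_cadis_adjoint_src = fw / fw.sum()

        for i, (a, b) in enumerate(zip(cadis_adjoint_src, fw_cadis_adjoint_src)):
            print(f"cell {i}: CADIS {a:.3f}   FW-CADIS {b:.5f}")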

  16. Perceived risk and modal choice: risk compensation in transportation systems.

    PubMed

    Noland, R B

    1995-08-01

    A transportation mode choice analysis is performed that examines behavioral responses to perceived risk in the choice of mode for daily commute trips. This methodology provides a technique for examining, by means of disaggregate individual level data, risk-compensating effects in transportation systems. Various measures of perceived risk are examined for explaining modal choice. Other studies have described how safety regulations have resulted in increases in "driving intensity." This study defines one component of driving intensity to be the increased probability of commuting by automobile. The results show that modal shifts occur when risk perceptions for a given mode are reduced. To demonstrate potential risk-compensating effects within the transportation system, an estimate of changes in accident fatalities due to commuting is derived using rough estimates of fatalities per person-mile travelled. It is shown that a given change in the perceived risk of commuting by automobile results in a less than proportionate change in net commuting fatalities. The relative magnitude is dependent on how objective reductions in risk translate into perceived reductions in risk. This study also shows that perceived safety improvements in bicycle transportation have an aggregate elasticity value that is greater than one. This means that bicycle safety improvements attract proportionately more people to bicycle commuting (i.e. a 10% increase in safety results in a greater than 10% increase in the share of people bicycle commuting).

  17. Extracting galactic structure parameters from multivariated density estimation

    NASA Technical Reports Server (NTRS)

    Chen, B.; Creze, M.; Robin, A.; Bienayme, O.

    1992-01-01

    Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification) and principal component analysis (a dimensionality reduction method), together with nonparametric density estimation, have been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure that would otherwise be unrecognizable, and to place important constraints on the space distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparametric density estimation to substitute for the true densities in both the simulated sample and the real sample in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a system of n equations (where n is the total number of observed points) in m unknown parameters (where m is the number of predefined groups). A least-squares estimation then allows us to determine the density law of the different groups and components in the Galaxy. The output from our software, which can be used in many research fields, will also report the systematic error between the model and the observation via a Bayes rule.
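
    The least-squares step described above, in which a linear combination of m nonparametrically estimated group densities is fitted to the observed density at the n data points, can be sketched as follows; the one-dimensional Gaussian mock data and the two groups are purely illustrative.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(3)

        # Simulated samples for m = 2 hypothetical stellar groups (1-D for brevity;
        # the paper works in a 5-dimensional space of observables).
        sim_thin  = rng.normal(0.0, 1.0, 2000)
        sim_thick = rng.normal(3.0, 1.5, 2000)

        # "Observed" sample: a 70/30 mixture of the two groups.
        obs = np.concatenate([rng.normal(0.0, 1.0, 700), rng.normal(3.0, 1.5, 300)])

        # Nonparametric density estimates substitute for the true densities.
        kde_obs   = gaussian_kde(obs)
        kde_thin  = gaussian_kde(sim_thin)
        kde_thick = gaussian_kde(sim_thick)

        # n equations (one per observed point) in m unknown group weights,
        # solved in the least-squares sense.
        A = np.column_stack([kde_thin(obs), kde_thick(obs)])   # n x m design matrix
        y = kde_obs(obs)                                       # n observed densities
        weights, *_ = np.linalg.lstsq(A, y, rcond=None)
        print("recovered group weights:", np.round(weights, 2))   # roughly [0.7, 0.3]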

  18. Application of the Navigation Guide systematic review methodology to the evidence for developmental and reproductive toxicity of triclosan.

    PubMed

    Johnson, Paula I; Koustas, Erica; Vesterinen, Hanna M; Sutton, Patrice; Atchley, Dylan S; Kim, Allegra N; Campbell, Marlissa; Donald, James M; Sen, Saunak; Bero, Lisa; Zeise, Lauren; Woodruff, Tracey J

    2016-01-01

    There are reports of developmental and reproductive health effects associated with the widely used biocide triclosan. Apply the Navigation Guide systematic review methodology to answer the question: Does exposure to triclosan have adverse effects on human development or reproduction? We applied the first 3 steps of the Navigation Guide methodology: 1) Specify a study question, 2) Select the evidence, and 3) Rate quality and strength of the evidence. We developed a protocol, conducted a comprehensive search of the literature, and identified relevant studies using pre-specified criteria. We assessed the number and type of all relevant studies. We evaluated each included study for risk of bias and rated the quality and strength of the evidence for the selected outcomes. We conducted a meta-analysis on a subset of suitable data. We found 4282 potentially relevant records, and 81 records met our inclusion criteria. Of the more than 100 endpoints identified by our search, we focused our evaluation on hormone concentration outcomes, which had the largest human and non-human mammalian data set. Three human studies and 8 studies conducted in rats reported thyroxine levels as outcomes. The rat data were amenable to meta-analysis. Because only one of the human thyroxine studies quantified exposure, we did not conduct a meta-analysis of the human data. Through meta-analysis of the data for rats, we estimated for prenatal exposure a 0.09% (95% CI: -0.20, 0.02) reduction in thyroxine concentration per mg triclosan/kg-bw in fetal and young rats compared to control. For postnatal exposure we estimated a 0.31% (95% CI: -0.38, -0.23) reduction in thyroxine per mg triclosan/kg-bw, also compared to control. Overall, we found low to moderate risk of bias across the human studies and moderate to high risk of bias across the non-human studies, and assigned a "moderate/low" quality rating to the body of evidence for human thyroid hormone alterations and a "moderate" quality rating to the body of evidence for non-human thyroid hormone alterations. Based on this application of the Navigation Guide systematic review methodology, we concluded that there was "sufficient" non-human evidence and "inadequate" human evidence of an association between triclosan exposure and thyroxine concentrations, and consequently, triclosan is "possibly toxic" to reproductive and developmental health. Thyroid hormone disruption is an upstream indicator of developmental toxicity. Additional endpoints may be identified as being of equal or greater concern as other data are developed or evaluated. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  19. Application of the Navigation Guide Systematic Review Methodology to the Evidence for Developmental and Reproductive Toxicity of Triclosan

    PubMed Central

    Johnson, Paula I.; Koustas, Erica; Vesterinen, Hanna M.; Sutton, Patrice; Atchley, Dylan S.; Kim, Allegra N.; Campbell, Marlissa; Donald, James M.; Sen, Saunak; Bero, Lisa; Zeise, Lauren; Woodruff, Tracey J.

    2016-01-01

    Background There are reports of developmental and reproductive health effects associated with the widely used biocide triclosan. Objective Apply the Navigation Guide systematic review methodology to answer the question: Does exposure to triclosan have adverse effects on human development or reproduction? Methods We applied the first 3 steps of the Navigation Guide methodology: 1) Specify a study question, 2) Select the evidence, and 3) Rate quality and strength of the evidence. We developed a protocol, conducted a comprehensive search of the literature, and identified relevant studies using pre-specified criteria. We assessed the number and type of all relevant studies. We evaluated each included study for risk of bias and rated the quality and strength of the evidence for the selected outcomes. We conducted a meta-analysis on a subset of suitable data. Results We found 4,282 potentially relevant records, and 81 records met our inclusion criteria. Of the more than 100 endpoints identified by our search, we focused our evaluation on hormone concentration outcomes, which had the largest human and non-human mammalian data set. Three human studies and 8 studies conducted in rats reported thyroxine levels as outcomes. The rat data were amenable to meta-analysis. Because only one of the human thyroxine studies quantified exposure, we did not conduct a meta-analysis of the human data. Through meta-analysis of the data for rats, we estimated for prenatal exposure a 0.09% (95% CI: −0.20, 0.02) reduction in thyroxine concentration per mg triclosan/kg-bw in fetal and young rats compared to control. For postnatal exposure we estimated a 0.31% (95% CI: −0.38, −0.23) reduction in thyroxine per mg triclosan/kg-bw, also compared to control. Overall we found low to moderate risk of bias across the human studies and moderate to high risk of bias across the non-human studies, and assigned a “moderate/low” quality rating to the body of evidence for human thyroid hormone alterations and a “moderate” quality rating to the body of evidence for non-human thyroid hormone alterations. Conclusion Based on this application of the Navigation Guide systematic review methodology, we concluded that there was “sufficient” non-human evidence and “inadequate” human evidence of an association between triclosan exposure and thyroxine concentrations, and consequently, triclosan is “possibly toxic” to reproductive and developmental health. Thyroid hormone disruption is an upstream indicator of developmental toxicity. Additional endpoints may be identified as being of equal or greater concern as other data are developed or evaluated. PMID:27156197

  20. Pseudo-spectral methodology for a quantitative assessment of the cover of in-stream vegetation in small streams

    NASA Astrophysics Data System (ADS)

    Hershkovitz, Yaron; Anker, Yaakov; Ben-Dor, Eyal; Schwartz, Guy; Gasith, Avital

    2010-05-01

    In-stream vegetation is a key ecosystem component in many fluvial ecosystems, having cascading effects on stream conditions and biotic structure. Traditionally, ground-level surveys (e.g. grid and transect analyses) are commonly used for estimating cover of aquatic macrophytes. Nonetheless, this methodological approach is highly time-consuming and usually yields information which is practically limited to habitat and sub-reach scales. In contrast, remote-sensing techniques (e.g. satellite imagery and airborne photography) enable collection of large datasets over section, stream and basin scales, in a relatively short time and at reasonable cost. However, the commonly used high spatial resolution (1 m) is often inadequate for examining aquatic vegetation at habitat or sub-reach scales. We examined the utility of a pseudo-spectral methodology, using RGB digital photography, for estimating the cover of in-stream vegetation in a small Mediterranean-climate stream. We compared this methodology with results obtained by a traditional ground-level grid methodology and with an airborne hyper-spectral remote sensing survey (AISA-ES). The study was conducted along a 2 km section of an intermittent stream (Taninim stream, Israel). When studied, the stream was dominated by patches of watercress (Nasturtium officinale) and mats of filamentous algae (Cladophora glomerata). The extent of vegetation cover at the habitat and section scales (10⁰ and 10⁴ m, respectively) was estimated by the pseudo-spectral methodology, using an airborne Roli camera with a Phase-One P 45 (39 MP) CCD image acquisition unit. The swaths were taken at an elevation of about 460 m, giving a spatial resolution of about 4 cm (NADIR). For measuring vegetation cover at the section scale (10⁴ m) we also used a 'push-broom' AISA-ES hyper-spectral swath having a sensor configuration of 182 bands (350-2500 nm) at an elevation of ca. 1,200 m (i.e. spatial resolution of ca. 1 m). Simultaneously with every swath, we used an Analytical Spectral Device (ASD) to measure hyper-spectral signatures (2150 bands configuration; 350-2500 nm) of selected ground-level targets (located by GPS) of soil, water, vegetation (common reed, watercress, filamentous algae) and standard EVA foam colored sheets (red, green, blue, black and white). Processing and analysis of the data were performed on an ITT ENVI platform. The hyper-spectral image underwent radiometric calibration according to the flight and sensor calibration parameters on the CALIGEO platform, and the raw DN scale was converted into a radiance scale. A ground-level visual survey of vegetation cover and height was applied at the habitat scale (10⁰ m) by placing 1 m² netted grids (10×10 cm cells) along 'bank-to-bank' transects (in triplicate). Estimates of plant cover obtained by the pseudo-spectral methodology at the habitat scale were 35-61% for the watercress, 0.4-25% for the filamentous algae and 27-51% for plant-free patches. The respective estimates by ground-level visual survey were 26-50%, 14-43% and 36-50%. The pseudo-spectral methodology also yielded estimates for the section scale (10⁴ m) of ca. 39% for the watercress, ca. 32% for the filamentous algae and 6% for plant-free patches. The respective estimates obtained by the hyper-spectral swath were 38%, 26% and 8%. Validation against ground-level measurements showed that the pseudo-spectral methodology gives reasonably good estimates of in-stream plant cover. Therefore, this methodology can serve as a substitute for ground-level estimates at small stream scales and for the lower-resolution hyper-spectral methodology at larger scales.

  1. Allometric scaling theory applied to FIA biomass estimation

    Treesearch

    David C. Chojnacky

    2002-01-01

    Tree biomass estimates in the Forest Inventory and Analysis (FIA) database are derived from numerous methodologies whose abundance and complexity raise questions about consistent results throughout the U.S. A new model based on allometric scaling theory ("WBE") offers simplified methodology and a theoretically sound basis for improving the reliability and...

  2. Support vector machines for nuclear reactor state estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavaljevski, N.; Gross, K. C.

    2000-02-14

    Validation of nuclear power reactor signals is often performed by comparing signal prototypes with the actual reactor signals. The signal prototypes are often computed based on empirical data. The implementation of an estimation algorithm which can make predictions on limited data is an important issue. A new machine learning algorithm called support vector machines (SVMs), recently developed by Vladimir Vapnik and his coworkers, enables a high level of generalization with finite high-dimensional data. The improved generalization in comparison with standard methods like neural networks is due mainly to the following characteristics of the method. The input data space is transformed into a high-dimensional feature space using a kernel function, and the learning problem is formulated as a convex quadratic programming problem with a unique solution. In this paper the authors have applied the SVM method for data-based state estimation in nuclear power reactors. In particular, they implemented and tested kernels developed at Argonne National Laboratory for the Multivariate State Estimation Technique (MSET), a nonlinear, nonparametric estimation technique with a wide range of applications in nuclear reactors. The methodology has been applied to three data sets from experimental and commercial nuclear power reactor applications. The results are promising. The combination of MSET kernels with the SVM method has better noise reduction and generalization properties than the standard MSET algorithm.
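
    A minimal sketch of kernel support vector regression used as a data-based signal estimator; scikit-learn's RBF-kernel SVR stands in here for the MSET kernels developed at Argonne, and the sensor data are synthetic.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(4)

        # Synthetic "sensor" signals: predict one signal from two correlated ones.
        n = 500
        x1 = rng.uniform(0.0, 1.0, n)
        x2 = x1 + 0.05 * rng.standard_normal(n)
        y = np.sin(2 * np.pi * x1) + 0.5 * x2 + 0.02 * rng.standard_normal(n)

        X = np.column_stack([x1, x2])
        X_train, X_test, y_train, y_test = X[:400], X[400:], y[:400], y[400:]

        # RBF-kernel SVR: the convex quadratic program with a unique solution
        # mentioned above, used here to build a signal prototype (estimate).
        model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_train, y_train)
        residual = y_test - model.predict(X_test)
        print(f"prototype RMS error: {np.sqrt(np.mean(residual ** 2)):.4f}")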

  3. Association between component costs, study methodologies, and foodborne illness-related factors with the cost of nontyphoidal Salmonella illness.

    PubMed

    McLinden, Taylor; Sargeant, Jan M; Thomas, M Kate; Papadopoulos, Andrew; Fazil, Aamir

    2014-09-01

    Nontyphoidal Salmonella spp. are one of the most common causes of bacterial foodborne illness. Variability in cost inventories and study methodologies limits the possibility of meaningfully interpreting and comparing cost-of-illness (COI) estimates, reducing their usefulness. However, little is known about the relative effect these factors have on a cost-of-illness estimate. This is important for comparing existing estimates and when designing new cost-of-illness studies. Cost-of-illness estimates, identified through a scoping review, were used to investigate the association between descriptive, component cost, methodological, and foodborne illness-related factors such as chronic sequelae and under-reporting with the cost of nontyphoidal Salmonella spp. illness. The standardized cost of nontyphoidal Salmonella spp. illness from 30 estimates reported in 29 studies ranged from $0.01568 to $41.22 United States dollars (USD)/person/year (2012). The mean cost of nontyphoidal Salmonella spp. illness was $10.37 USD/person/year (2012). The following factors were found to be significant in multiple linear regression (p≤0.05): the number of direct component cost categories included in an estimate (0-4, particularly long-term care costs) and chronic sequelae costs (inclusion/exclusion), which had positive associations with the cost of nontyphoidal Salmonella spp. illness. Factors related to study methodology were not significant. Our findings indicated that study methodology may not be as influential as other factors, such as the number of direct component cost categories included in an estimate and costs incurred due to chronic sequelae. Therefore, these may be the most important factors to consider when designing, interpreting, and comparing cost of foodborne illness studies.

  4. Health Insurance Dynamics: Methodological Considerations and a Comparison of Estimates from Two Surveys.

    PubMed

    Graves, John A; Mishra, Pranita

    2016-10-01

    To highlight key methodological issues in studying insurance dynamics and to compare estimates across two commonly used surveys. Nonelderly uninsured adults and children sampled between 2001 and 2011 in the Medical Expenditure Panel Survey and the Survey of Income and Program Participation. We utilized nonparametric Kaplan-Meier methods to estimate quantiles (25th, 50th, and 75th percentiles) in the distribution of uninsured spells. We compared estimates obtained across surveys and across different methodological approaches to address issues like attrition, seam bias, censoring and truncation, and survey weighting method. All data were drawn from publicly available household surveys. Estimated uninsured spell durations in the MEPS were longer than those observed in the SIPP. There were few changes in spell durations between 2001 and 2011, with median durations of 14 months among adults and 5-7 months among children in the MEPS, and 8 months (adults) and 4 months (children) in the SIPP. The use of panel survey data to study insurance dynamics presents a unique set of methodological challenges. Researchers should consider key analytic and survey design trade-offs when choosing which survey can best suit their research goals. © Health Research and Educational Trust.
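
    A small self-contained Kaplan-Meier estimator for right-censored spell durations, of the kind used above; the spell data are invented, and the handling of ties and censoring follows the textbook product-limit form rather than the authors' survey-specific adjustments for attrition, seam bias, or weighting.

        import numpy as np

        def km_quantiles(durations, observed, probs=(0.25, 0.50, 0.75)):
            # Kaplan-Meier survival curve and spell-duration quantiles.
            # durations: spell length in months; observed: 1 if the spell ended
            # (coverage gained), 0 if right-censored (still uninsured at last interview).
            durations = np.asarray(durations, float)
            observed = np.asarray(observed, int)
            times = np.unique(durations[observed == 1])
            surv, s = [], 1.0
            for t in times:
                at_risk = np.sum(durations >= t)
                events = np.sum((durations == t) & (observed == 1))
                s *= 1.0 - events / at_risk
                surv.append(s)
            surv = np.array(surv)
            out = {}
            for p in probs:
                idx = np.where(surv <= 1.0 - p)[0]
                out[p] = times[idx[0]] if idx.size else np.inf   # may be undefined
            return out

        # Invented uninsured-spell data (months), with some censored observations.
        dur = [2, 3, 3, 5, 6, 8, 8, 10, 12, 14, 14, 18, 24, 24, 30]
        obs = [1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0]
        print(km_quantiles(dur, obs))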

  5. Methodological considerations in cost of illness studies on Alzheimer disease

    PubMed Central

    2012-01-01

    Cost-of-illness studies (COI) can identify and measure all the costs of a particular disease, including the direct, indirect and intangible dimensions. They are intended to provide estimates about the economic impact of costly diseases. Alzheimer disease (AD) is a relevant example to review cost of illness studies because of its costliness. The aim of this study was to review relevant published cost studies of AD to analyze the methods used and to identify which dimensions had to be improved from a methodological perspective. First, we described the key points of cost study methodology. Secondly, cost studies relating to AD were systematically reviewed, focussing on an analysis of the different methods used. The methodological choices of the studies were analysed using an analytical grid which contains the main methodological items of COI studies. Seventeen articles were retained. Depending on the studies, annual total costs per patient vary from $2,935 to $52,954. The methods, data sources, and estimated cost categories in each study varied widely. The review showed that cost studies adopted different approaches to estimate costs of AD, reflecting a lack of consensus on the methodology of cost studies. To increase their credibility, closer agreement among researchers on the methodological principles of cost studies would be desirable. PMID:22963680

  6. AMD NOX REDUCTION IMPACTS

    EPA Science Inventory

    This is the first phase of a potentially multi-phase project aimed at identifying scientific methodologies that will lead to the development of innovative analytical tools supporting the analysis of control strategy effectiveness, namely, accountability. Significant reductions i...

  7. Mercury methylation and reduction potentials in marine water: An improved methodology using 197Hg radiotracer.

    PubMed

    Koron, Neža; Bratkič, Arne; Ribeiro Guevara, Sergio; Vahčič, Mitja; Horvat, Milena

    2012-01-01

    A highly sensitive laboratory methodology for simultaneous determination of methylation and reduction of spiked inorganic mercury (Hg²⁺) in marine water labelled with high specific activity radiotracer (¹⁹⁷Hg prepared from enriched ¹⁹⁶Hg stable isotope) was developed. A conventional extraction protocol for methylmercury (CH₃Hg⁺) was modified in order to significantly reduce the partitioning of interfering labelled Hg²⁺ into the final extract, thus allowing the detection of as little as 0.1% of the Hg²⁺ spike transformed to labelled CH₃Hg⁺. The efficiency of the modified CH₃Hg⁺ extraction procedure was assessed by radiolabelled CH₃Hg⁺ spikes corresponding to concentrations of methylmercury between 0.05 and 4 ng L⁻¹. The recoveries were 73.0±6.0% and 77.5±3.9% for marine and MilliQ water, respectively. The reduction potential was assessed by purging and trapping the radiolabelled elemental Hg in a permanganate solution. The method allows detection of the reduction of as little as 0.001% of labelled Hg²⁺ spiked to natural waters. To our knowledge, the optimised methodology is among the most sensitive available to study the Hg methylation and reduction potential, therefore allowing experiments to be done at spikes close to natural levels (1-10 ng L⁻¹). Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Assessing Progress in Reducing the At-Risk Population after 13 Years of the Global Programme to Eliminate Lymphatic Filariasis

    PubMed Central

    Hooper, Pamela J.; Chu, Brian K.; Mikhailov, Alexei; Ottesen, Eric A.; Bradley, Mark

    2014-01-01

    Background In 1997, the World Health Assembly adopted Resolution 50.29, committing to the elimination of lymphatic filariasis (LF) as a public health problem, subsequently targeted for 2020. The initial estimates were that 1.2 billion people were at-risk for LF infection globally. Now, 13 years after the Global Programme to Eliminate Lymphatic Filariasis (GPELF) began implementing mass drug administration (MDA) against LF in 2000—during which over 4.4 billion treatments have been distributed in 56 endemic countries—it is most appropriate to estimate the impact that the MDA has had on reducing the population at risk of LF. Methodology/Principal Findings To assess GPELF progress in reducing the population at-risk for LF, we developed a model based on defining reductions in risk of infection among cohorts of treated populations following each round of MDA. The model estimates that the number of people currently at risk of infection decreased by 46% to 789 million through 2012. Conclusions/Significance Important progress has been made in the global efforts to eliminate LF, but significant scale-up is required over the next 8 years to reach the 2020 elimination goal. PMID:25411843

  9. Estimates of Dietary Sodium Consumption in Patients With Chronic Heart Failure.

    PubMed

    Colin-Ramirez, Eloisa; Arcand, JoAnne; Ezekowitz, Justin A

    2015-12-01

    Estimating dietary sodium intake is a key component of dietary assessment in the clinical setting of HF, needed to effectively implement appropriate dietary interventions for sodium reduction and monitor adherence to the dietary treatment. In a research setting, assessment of sodium intake is an essential part of the methodology used to evaluate outcomes after a dietary or behavioral intervention. Current available sodium intake assessment methods include 24-hour urine collection, spot urine collections, multiple day food records, food recalls, and food frequency questionnaires. However, these methods have inherent limitations that make assessment of sodium intake challenging, and the utility of traditional methods may be questionable for estimating sodium intake in patients with HF. Thus, there are remaining questions about how to best assess dietary sodium intake in this patient population, and there is a need to identify a reliable method to assess and monitor sodium intake in the research and clinical setting of HF. This paper provides a comprehensive review of the current methods for sodium intake assessment, addresses the challenges for its accurate evaluation, and highlights the relevance of applying the highest-quality measurement methods in the research setting to minimize the risk of biased data. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Combining information from multiple flood projections in a hierarchical Bayesian framework

    NASA Astrophysics Data System (ADS)

    Le Vine, Nataliya

    2016-04-01

    This study demonstrates, in the context of flood frequency analysis, the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach explicitly accommodates shared multimodel discrepancy as well as the probabilistic nature of the flood estimates, and treats the available models as a sample from a hypothetical complete (but unobserved) set of models. The methodology is applied to flood estimates from multiple hydrological projections (the Future Flows Hydrology data set) for 135 catchments in the UK. The advantages of the approach are shown to be: (1) to ensure adequate "baseline" with which to compare future changes; (2) to reduce flood estimate uncertainty; (3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; (4) to diminish the importance of model consistency when model biases are large; and (5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.
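
    A grid-based sketch of pooling several probabilistic flood estimates under a shared multimodel discrepancy term; this is a simplified stand-in for the paper's hierarchical Bayesian formulation (which treats the available models as a sample from a hypothetical complete set), and the projection values are invented.

        import numpy as np
        from scipy.stats import norm

        # Illustrative 100-year flood estimates (m3/s) from 5 hydrological projections,
        # each with its own standard error; values are invented, not Future Flows data.
        y = np.array([310.0, 355.0, 290.0, 330.0, 405.0])
        s = np.array([40.0, 35.0, 50.0, 30.0, 60.0])

        # Model: y_i ~ N(mu, s_i^2 + tau^2), where tau is the shared multimodel
        # discrepancy; flat priors on mu and tau over a coarse grid.
        mu_grid = np.linspace(200.0, 500.0, 301)
        tau_grid = np.linspace(0.0, 150.0, 151)
        MU, TAU = np.meshgrid(mu_grid, tau_grid, indexing="ij")

        log_post = np.zeros_like(MU)
        for yi, si in zip(y, s):
            log_post += norm.logpdf(yi, loc=MU, scale=np.sqrt(si ** 2 + TAU ** 2))

        post = np.exp(log_post - log_post.max())
        post /= post.sum()

        mu_marg = post.sum(axis=1)                       # marginal over tau
        mean_mu = np.sum(mu_grid * mu_marg)
        sd_mu = np.sqrt(np.sum((mu_grid - mean_mu) ** 2 * mu_marg))
        print(f"pooled flood estimate: {mean_mu:.0f} +/- {sd_mu:.0f} m3/s")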

  11. Review of the WECC EDT phase 2 EIM benefits analysis and results report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veselka, T.D.; Poch, L.A.; Botterud, A.

    A region-wide Energy Imbalance Market (EIM) was recently proposed by the Western Electricity Coordinating Council (WECC). In order for the Western Area Power Administration (Western) to make more informed decisions regarding its involvement in the EIM, Western asked Argonne National Laboratory (Argonne) to review the EIM benefits study (the October 2011 revision) performed by Energy and Environmental Economics, Inc. (E3). Key components of the E3 analysis made use of results from a study conducted by the National Renewable Energy Laboratory (NREL); therefore, we also reviewed the NREL work. This report examines E3 and NREL methods and models used in the EIM study. Estimating EIM benefits is very challenging because of the complex nature of the Western Interconnection (WI), the variability and uncertainty of renewable energy resources, and the complex decisions and potentially strategic bidding of market participants. Furthermore, methodologies used for some of the more challenging aspects of the EIM have not yet matured. This review is complimentary toward several components of the EIM study. Analysts and modelers clearly took great care when conducting detailed simulations of the WI using well-established industry tools under stringent time and budget constraints. However, it is our opinion that the following aspects of the study and the interpretation of model results could be improved upon in future analyses. The hurdle rate methodology used to estimate current market inefficiencies does not directly model the underlying causes of sub-optimal dispatch and power flows. It assumes that differences between historical flows and modeled flows can be attributed solely to market inefficiencies. However, flow differences between model results and historical data can be attributed to numerous simplifying assumptions used in the model and in the input data. We suggest that alternative approaches be explored in order to better estimate the benefits of introducing market structures like the EIM. In addition to more efficient energy transactions in the WI, the EIM would reduce the amount of flexibility reserves needed to accommodate forecast errors associated with variable production from wind and solar energy resources. The modeling approach takes full advantage of variable resource diversity over the entire market footprint, but the projected reduction in flexibility reserves may be overly optimistic. While some reduction would undoubtedly occur, the EIM is only an energy market and would therefore not realize the same reduction in reserves as an ancillary services market. In our opinion the methodology does not adequately capture the impact of transmission constraints on the deployment of flexibility reserves. Estimates of flexibility reserves assume that forecast errors follow a normal distribution. Improved estimates could be obtained by using other probability distributions to estimate up and down reserves to capture the underlying uncertainty of these resources under specific operating conditions. Also, the use of a persistence forecast method for solar is questionable, because solar insolation follows a deterministic pattern dictated by the sun's path through the sky. We suggest a more rigorous method for forecasting solar insolation using the sun's relatively predictable daily pattern at specific locations. The EIM study considered only one scenario for hydropower resources. While this scenario is within the normal range over the WI footprint, it represents a severe drought condition in the Colorado River Basin from which Western schedules power. Given hydropower's prominent role in the WI, we recommend simulating a range of hydropower conditions since the relationship between water availability and WI dispatch costs is nonlinear. Also, the representation of specific operational constraints faced by hydropower operators in the WI needs improvement. The model used in the study cannot fully capture all of the EIM impacts and complexities of power system operations. In particular, a primary benefit of the EIM is a shorter dispatch interval; namely, 5 minutes. However, the model simulates the dispatch hourly. Therefore, it cannot adequately measure the benefits of a more frequent dispatch. A tool with a finer time resolution would significantly improve simulation accuracy. When the study was conducted, the rules for the EIM were not clearly defined and it was appropriate to estimate societal benefits of the EIM assuming a perfect market without a detailed specification of the market design. However, incorporating a more complete description of market rules will allow for better estimates of EIM benefits. Furthermore, performing analyses using specific market rules can identify potential design flaws that may be difficult and expensive to correct after the market is established. Estimated cost savings from a more efficient dispatch are less than one percent of the total cost of electricity production.

  12. Uterotonic use immediately following birth: using a novel methodology to estimate population coverage in four countries.

    PubMed

    Ricca, Jim; Dwivedi, Vikas; Varallo, John; Singh, Gajendra; Pallipamula, Suranjeen Prasad; Amade, Nazir; de Luz Vaz, Maria; Bishanga, Dustan; Plotkin, Marya; Al-Makaleh, Bushra; Suhowatsky, Stephanie; Smith, Jeffrey Michael

    2015-01-22

    Postpartum hemorrhage (PPH) is the leading cause of maternal mortality in developing countries. While incidence of PPH can be dramatically reduced by uterotonic use immediately following birth (UUIFB) in both community and facility settings, national coverage estimates are rare. Most national health systems have no indicator to track this, and community-based measurements are even more scarce. To fill this information gap, a methodology for estimating national coverage for UUIFB was developed and piloted in four settings. The rapid estimation methodology consisted of convening a group of national technical experts and using the Delphi method to come to consensus on key data elements that were applied to a simple algorithm, generating a non-precise national estimate of coverage of UUIFB. Data elements needed for the calculation were the distribution of births by location and estimates of UUIFB in each of those settings, adjusted to take account of stockout rates and potency of uterotonics. This exercise was conducted in 2013 in Mozambique, Tanzania, the state of Jharkhand in India, and Yemen. Available data showed that deliveries in public health facilities account for approximately half of births in Mozambique and Tanzania, 16% in Jharkhand and 24% of births in Yemen. Significant proportions of births occur in private facilities in Jharkhand and faith-based facilities in Tanzania. Estimated uterotonic use for facility births ranged from 70 to 100%. Uterotonics are not used routinely for PPH prevention at home births in any of the settings. National UUIFB coverage estimates of all births were 43% in Mozambique, 40% in Tanzania, 44% in Jharkhand, and 14% in Yemen. This methodology for estimating coverage of UUIFB was found to be feasible and acceptable. While the exercise produces imprecise estimates whose validity cannot be assessed objectively in the absence of a gold standard estimate, stakeholders felt they were accurate enough to be actionable. The exercise highlighted information and practice gaps and promoted discussion on ways to improve UUIFB measurement and coverage, particularly of home births. Further follow up is needed to verify actions taken. The methodology produces useful data to help accelerate efforts to reduce maternal mortality.
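
    The coverage algorithm described above reduces to a weighted sum over birth settings, as sketched below; all shares and rates are illustrative placeholders, not the inputs elicited in Mozambique, Tanzania, Jharkhand, or Yemen.

        # Minimal sketch of the coverage calculation: for each birth location,
        # share of births x uterotonic use x adjustments for stockouts and for
        # the share of uterotonic stock that is still potent.
        settings = [
            # (setting, share of births, UUIFB among births here, stockout rate, potency factor)
            ("public facility",  0.50, 0.90, 0.10, 0.90),
            ("private facility", 0.10, 0.80, 0.05, 0.90),
            ("home",             0.40, 0.00, 0.00, 1.00),
        ]

        coverage = sum(share * use * (1.0 - stockout) * potency
                       for _, share, use, stockout, potency in settings)
        print(f"estimated national UUIFB coverage: {coverage:.0%}")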

  13. Validation of a novel air toxic risk model with air monitoring.

    PubMed

    Pratt, Gregory C; Dymond, Mary; Ellickson, Kristie; Thé, Jesse

    2012-01-01

    Three modeling systems were used to estimate human health risks from air pollution: two versions of MNRiskS (for Minnesota Risk Screening), and the USEPA National Air Toxics Assessment (NATA). MNRiskS is a unique cumulative risk modeling system used to assess risks from multiple air toxics, sources, and pathways on a local to a state-wide scale. In addition, ambient outdoor air monitoring data were available for estimation of risks and comparison with the modeled estimates of air concentrations. Highest air concentrations and estimated risks were generally found in the Minneapolis-St. Paul metropolitan area and lowest risks in undeveloped rural areas. Emissions from mobile and area (nonpoint) sources created greater estimated risks than emissions from point sources. Highest cancer risks were via ingestion pathway exposures to dioxins and related compounds. Diesel particles, acrolein, and formaldehyde created the highest estimated inhalation health impacts. Model-estimated air concentrations were generally highest for NATA and lowest for the AERMOD version of MNRiskS. This validation study showed reasonable agreement between available measurements and model predictions, although results varied among pollutants, and predictions were often lower than measurements. The results increased confidence in identifying pollutants, pathways, geographic areas, sources, and receptors of potential concern, and thus provide a basis for informing pollution reduction strategies and focusing efforts on specific pollutants (diesel particles, acrolein, and formaldehyde), geographic areas (urban centers), and source categories (nonpoint sources). The results heighten concerns about risks from food chain exposures to dioxins and PAHs. Risk estimates were sensitive to variations in methodologies for treating emissions, dispersion, deposition, exposure, and toxicity. © 2011 Society for Risk Analysis.

  14. Environmental Risk Assessment of dredging processes - application to Marin harbour (NW Spain)

    NASA Astrophysics Data System (ADS)

    Gómez, A. G.; García Alba, J.; Puente, A.; Juanes, J. A.

    2014-04-01

    A methodological procedure to estimate the environmental risk of dredging operations in aquatic systems has been developed. Environmental risk estimations are based on numerical model results, which provide an appropriate spatio-temporal framework for analysis to guarantee an effective decision-making process. The methodological procedure has been applied to a real dredging operation in the port of Marin (NW Spain). Results from Marin harbour confirmed the suitability of the developed methodology and the conceptual approaches as a comprehensive and practical management tool.

  15. Development of weight and cost estimates for lifting surfaces with active controls

    NASA Technical Reports Server (NTRS)

    Anderson, R. D.; Flora, C. C.; Nelson, R. M.; Raymond, E. T.; Vincent, J. H.

    1976-01-01

    Equations and methodology were developed for estimating the weight and cost incrementals due to active controls added to the wing and horizontal tail of a subsonic transport airplane. The methods are sufficiently generalized to be suitable for preliminary design. Supporting methodology and input specifications for the weight and cost equations are provided. The weight and cost equations are structured to be flexible in terms of the active control technology (ACT) flight control system specification. In order to present a self-contained package, methodology is also presented for generating ACT flight control system characteristics for the weight and cost equations. Use of the methodology is illustrated.

  16. Regional Shelter Analysis Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dillon, Michael B.; Dennison, Deborah; Kane, Jave

    2015-08-01

    The fallout from a nuclear explosion has the potential to injure or kill 100,000 or more people through exposure to external gamma (fallout) radiation. Existing buildings can reduce radiation exposure by placing material between fallout particles and exposed people. Lawrence Livermore National Laboratory was tasked with developing an operationally feasible methodology that could improve fallout casualty estimates. The methodology, called a Regional Shelter Analysis, combines the fallout protection that existing buildings provide civilian populations with the distribution of people in various locations. The Regional Shelter Analysis method allows the consideration of (a) multiple building types and locations within buildings, (b) country specific estimates, (c) population posture (e.g., unwarned vs. minimally warned), and (d) the time of day (e.g., night vs. day). The protection estimates can be combined with fallout predictions (or measurements) to (a) provide a more accurate assessment of exposure and injury and (b) evaluate the effectiveness of various casualty mitigation strategies. This report describes the Regional Shelter Analysis methodology, highlights key operational aspects (including demonstrating that the methodology is compatible with current tools), illustrates how to implement the methodology, and provides suggestions for future work.
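
    The core combination of building protection with population distribution can be sketched as follows; the protection factors, occupancy fractions, and outdoor dose are invented for illustration and are not the values used in the LLNL analysis.

        # Schematic of combining building protection with population distribution;
        # the dose reduction factor for a building type is 1/PF (protection factor).
        outdoor_dose = 5.0   # illustrative unsheltered fallout dose (Gy)

        # (location type, fraction of regional population, protection factor).
        day_posture = [
            ("outdoors",            0.05,  1),
            ("single-story wood",   0.35,  3),
            ("multi-story masonry", 0.45, 10),
            ("basement",            0.15, 40),
        ]

        expected_dose = sum(frac * outdoor_dose / pf for _, frac, pf in day_posture)
        print(f"population-averaged dose: {expected_dose:.2f} Gy "
              f"(vs {outdoor_dose:.2f} Gy unsheltered)")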

  17. Vertical microphysical profiles of convective clouds as a tool for obtaining aerosol cloud-mediated climate forcings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenfeld, Daniel

    Quantifying the aerosol/cloud-mediated radiative effect at a global scale requires simultaneous satellite retrievals of cloud condensation nuclei (CCN) concentrations and cloud base updraft velocities (Wb). Hitherto, the inability to do so has been a major cause of high uncertainty regarding anthropogenic aerosol/cloud-mediated radiative forcing. This can be addressed by the emerging capability of estimating CCN and Wb of boundary layer convective clouds from an operational polar orbiting weather satellite. Our methodology uses such clouds as an effective analog for CCN chambers. The cloud base supersaturation (S) is determined by Wb and the satellite-retrieved cloud base drop concentrations (Ndb), which is the same as CCN(S). Developing and validating this methodology was possible thanks to the ASR/ARM measurements of CCN and vertical updraft profiles. Validation against ground-based CCN instruments at the ARM sites in Oklahoma, Manaus, and onboard a ship in the northeast Pacific showed a retrieval accuracy of ±25% to ±30% for individual satellite overpasses. The methodology is presently limited to non-raining boundary layer convective clouds of at least 1 km depth that are not obscured by upper layer clouds, including semitransparent cirrus. The limitation at small solar backscattering angles (<25°) restricts the satellite coverage to ~25% of the world area in a single day. This methodology will likely allow overcoming the challenge of quantifying the aerosol indirect effect and facilitate a substantial reduction of the uncertainty in anthropogenic climate forcing.

  18. Learning Motion Features for Example-Based Finger Motion Estimation for Virtual Characters

    NASA Astrophysics Data System (ADS)

    Mousas, Christos; Anagnostopoulos, Christos-Nikolaos

    2017-09-01

    This paper presents a methodology for estimating the motion of a character's fingers based on the use of motion features provided by a virtual character's hand. In the presented methodology, firstly, the motion data is segmented into discrete phases. Then, a number of motion features are computed for each motion segment of a character's hand. The motion features are pre-processed using restricted Boltzmann machines, and by using the different variations of semantically similar finger gestures in a support vector machine learning mechanism, the optimal weights for each feature assigned to a metric are computed. The advantages of the presented methodology in comparison to previous solutions are the following: First, we automate the computation of optimal weights that are assigned to each motion feature counted in our metric. Second, the presented methodology achieves an increase (about 17%) in correctly estimated finger gestures in comparison to a previous method.

  19. Classification of small lesions on dynamic breast MRI: Integrating dimension reduction and out-of-sample extension into CADx methodology

    PubMed Central

    Nagarajan, Mahesh B.; Huber, Markus B.; Schlossbauer, Thomas; Leinsinger, Gerda; Krol, Andrzej; Wismüller, Axel

    2014-01-01

    Objective While dimension reduction has been previously explored in computer aided diagnosis (CADx) as an alternative to feature selection, previous implementations of its integration into CADx do not ensure strict separation between training and test data required for the machine learning task. This compromises the integrity of the independent test set, which serves as the basis for evaluating classifier performance. Methods and Materials We propose, implement and evaluate an improved CADx methodology where strict separation is maintained. This is achieved by subjecting the training data alone to dimension reduction; the test data is subsequently processed with out-of-sample extension methods. Our approach is demonstrated in the research context of classifying small diagnostically challenging lesions annotated on dynamic breast magnetic resonance imaging (MRI) studies. The lesions were dynamically characterized through topological feature vectors derived from Minkowski functionals. These feature vectors were then subject to dimension reduction with different linear and non-linear algorithms applied in conjunction with out-of-sample extension techniques. This was followed by classification through supervised learning with support vector regression. Area under the receiver-operating characteristic curve (AUC) was evaluated as the metric of classifier performance. Results Of the feature vectors investigated, the best performance was observed with Minkowski functional ’perimeter’ while comparable performance was observed with ’area’. Of the dimension reduction algorithms tested with ’perimeter’, the best performance was observed with Sammon’s mapping (0.84 ± 0.10) while comparable performance was achieved with exploratory observation machine (0.82 ± 0.09) and principal component analysis (0.80 ± 0.10). Conclusions The results reported in this study with the proposed CADx methodology present a significant improvement over previous results reported with such small lesions on dynamic breast MRI. In particular, non-linear algorithms for dimension reduction exhibited better classification performance than linear approaches, when integrated into our CADx methodology. We also note that while dimension reduction techniques may not necessarily provide an improvement in classification performance over feature selection, they do allow for a higher degree of feature compaction. PMID:24355697
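
    The key procedural point, that dimension reduction is fit on the training data alone and the test data are then mapped through the learned transform, can be sketched with scikit-learn; PCA and synthetic features stand in here for the non-linear methods and Minkowski-functional vectors used in the study.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import SVR
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(7)

        # Synthetic stand-in for topological (Minkowski-functional) feature vectors:
        # 60 lesions x 32 features with binary benign/malignant labels.
        X = rng.standard_normal((60, 32))
        y = (X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(60) > 0).astype(int)
        X_train, X_test, y_train, y_test = X[:40], X[40:], y[:40], y[40:]

        # Dimension reduction is fit on the training data only; the held-out test
        # data are then mapped with the learned transform (out-of-sample extension).
        # Linear PCA stands in for Sammon's mapping and the other non-linear methods.
        dr = PCA(n_components=5).fit(X_train)
        clf = SVR(kernel="rbf").fit(dr.transform(X_train), y_train)

        scores = clf.predict(dr.transform(X_test))
        print("AUC on the untouched test set:", round(roc_auc_score(y_test, scores), 2))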

  20. Using State Estimation Residuals to Detect Abnormal SCADA Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Jian; Chen, Yousu; Huang, Zhenyu

    2010-06-14

    Detection of manipulated supervisory control and data acquisition (SCADA) data is critically important for the safe and secure operation of modern power systems. In this paper, a methodology of detecting manipulated SCADA data based on state estimation residuals is presented. A framework of the proposed methodology is described. Instead of using original SCADA measurements as the bad data sources, the residuals calculated based on the results of the state estimator are used as the input for the outlier detection process. The BACON algorithm is applied to detect outliers in the state estimation residuals. The IEEE 118-bus system is used as a test case to evaluate the effectiveness of the proposed methodology. The accuracy of the BACON method is compared with that of the 3-σ method for the simulated SCADA measurements and residuals.
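
    As an illustration of the detection step, the sketch below applies the 3-σ rule and a much-simplified, one-dimensional BACON-style iteration (grow a clean basic subset, flag points far from it) to synthetic state-estimation residuals with a few injected bad values; the full BACON algorithm and the IEEE 118-bus setup are not reproduced here.

    ```python
    # Sketch: flagging abnormal SCADA measurements from state-estimation residuals.
    # The 3-sigma rule is shown alongside a simplified BACON-style iteration;
    # the residuals are synthetic, with three injected bad-data points.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    residuals = rng.normal(0.0, 1.0, size=200)
    residuals[[10, 50, 120]] += np.array([8.0, -7.0, 9.0])   # injected bad data

    # 3-sigma rule
    mu, sigma = residuals.mean(), residuals.std()
    flag_3sigma = np.abs(residuals - mu) > 3 * sigma

    # Simplified 1-D BACON-style detector
    def bacon_1d(x, m=20, alpha=0.05):
        idx = np.argsort(np.abs(x - np.median(x)))[:m]         # initial basic subset
        while True:
            mu_b, sd_b = x[idx].mean(), x[idx].std(ddof=1)
            cutoff = stats.norm.ppf(1 - alpha / (2 * len(x)))  # Bonferroni-style threshold
            new_idx = np.where(np.abs(x - mu_b) <= cutoff * sd_b)[0]
            if len(new_idx) == len(idx):
                return np.setdiff1d(np.arange(len(x)), new_idx)  # outlier indices
            idx = new_idx

    print("3-sigma flags:", np.flatnonzero(flag_3sigma))
    print("BACON-style flags:", bacon_1d(residuals))
    ```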

  1. Sharing global CO2 emission reductions among one billion high emitters

    PubMed Central

    Chakravarty, Shoibal; Chikkatur, Ananth; de Coninck, Heleen; Pacala, Stephen; Socolow, Robert; Tavoni, Massimo

    2009-01-01

    We present a framework for allocating a global carbon reduction target among nations, in which the concept of “common but differentiated responsibilities” refers to the emissions of individuals instead of nations. We use the income distribution of a country to estimate how its fossil fuel CO2 emissions are distributed among its citizens, from which we build up a global CO2 distribution. We then propose a simple rule to derive a universal cap on global individual emissions and find corresponding limits on national aggregate emissions from this cap. All of the world's high CO2-emitting individuals are treated the same, regardless of where they live. Any future global emission goal (target and time frame) can be converted into national reduction targets, which are determined by “Business as Usual” projections of national carbon emissions and in-country income distributions. For example, reducing projected global emissions in 2030 by 13 GtCO2 would require the engagement of 1.13 billion high emitters, roughly equally distributed in 4 regions: the U.S., the OECD minus the U.S., China, and the non-OECD minus China. We also modify our methodology to place a floor on emissions of the world's lowest CO2 emitters and demonstrate that climate mitigation and alleviation of extreme poverty are largely decoupled. PMID:19581586
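
    A minimal sketch of the capping rule, assuming synthetic lognormal per-capita emission distributions and invented population weights: bisection finds the universal individual cap whose excess emissions sum to a chosen global reduction target (13 GtCO2 is used only to echo the example above).

    ```python
    # Sketch: find a universal per-person emissions cap so that the total emissions
    # trimmed above the cap equal a chosen global reduction target (all data synthetic).
    import numpy as np

    rng = np.random.default_rng(2)
    # Placeholder per-capita fossil-fuel CO2 emissions (tCO2/yr) for three regions,
    # each represented by a sample of "individuals" and a population weight.
    regions = {
        "A": (rng.lognormal(mean=2.3, sigma=0.6, size=5000), 0.31e9),
        "B": (rng.lognormal(mean=1.5, sigma=0.8, size=5000), 1.35e9),
        "C": (rng.lognormal(mean=0.3, sigma=1.0, size=5000), 4.0e9),
    }

    def reduction_for_cap(cap):
        """Total emissions (GtCO2) removed if every individual is capped at `cap`."""
        total = 0.0
        for sample, pop in regions.values():
            total += np.maximum(sample - cap, 0.0).mean() * pop
        return total / 1e9

    target = 13.0                     # GtCO2 of projected emissions to remove
    lo, hi = 0.0, 100.0               # bracketing caps, tCO2 per person per year
    for _ in range(60):               # bisection: the reduction is decreasing in the cap
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if reduction_for_cap(mid) > target else (lo, mid)
    print(f"universal cap ~ {0.5 * (lo + hi):.2f} tCO2/person/yr")
    ```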

  2. The Effectiveness of Harm Reduction Programs in Seven Prisons of Iran

    PubMed Central

    ROSHANFEKR, Payam; FARNIA, Marziyeh; DEJMAN, Masoumeh

    2013-01-01

    Abstract Background Starting in 1990, many programs were initiated to prevent and control the spread of HIV/AIDS in prisons in accordance with the policies of the Ministry of Health. This study attempts to evaluate the effectiveness of harm reduction programs vis-à-vis drug abuse and dependency in 7 prisons in Iran. Methods The methodology used is before-after testing; the sample population is incarcerated prisoners in 7 large prisons in 7 provinces with diverse geographical, criminal, and numerical factors, and the population sample is estimated at 2,200 inmates. Results Drug tests conducted on prisoners right after their admittance indicated that 57% had used at least one of the three drugs morphine, amphetamines, and hashish (52% morphine, 4.5% amphetamines, and 3.9% hashish). Two months later, in the 2nd phase of the study, test results indicated that only 10% of subjects continued using drugs (P=0.05). Heroin and opium were the two most prevalent drugs. Smoking, oral intake, and sniffing were the three most popular methods of use. Of those who continued to use drugs in prison, 95% admitted to prior records of drug use. Conclusion Intervention policies in prisons resulted in a reduction of drug consumption, from 57% of the newly admitted inmates to 10% after two months of incarceration. PMID:26060645

  3. Modeling methylene chloride exposure-reduction options for home paint-stripper users.

    PubMed

    Riley, D M; Small, M J; Fischhoff, B

    2000-01-01

    Home improvement is a popular activity, but one that can also involve exposure to hazardous substances. Paint stripping is of particular concern because of the high potential exposures to methylene chloride, a solvent that is a potential human carcinogen and neurotoxicant. This article presents a general methodology for evaluating the effectiveness of behavioral interventions for reducing these risks. It doubles as a model that assesses exposure patterns, incorporating user time-activity patterns and risk-mitigation strategies. The model draws upon recent innovations in indoor air-quality modeling to estimate paint-stripper users' exposure through inhalation and dermal pathways. It is designed to use data gathered from home paint-stripper users about room characteristics, amount of stripper used, time-activity patterns and exposure-reduction strategies (e.g., increased ventilation and modification in the timing of stripper application, scraping, and breaks). Results indicate that the effectiveness of behavioral interventions depends strongly on characteristics of the room (e.g., size, number and size of doors and windows, base air-exchange rates). The greatest single reduction in exposure is achieved by using an exhaust fan in addition to opening windows and doors. These results can help identify the most important information for product labels and other risk-communication materials.
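
    A minimal one-box (well-mixed room) sketch of the kind of indoor air-quality calculation the model builds on: a finite emission episode, an air-exchange rate, and a time-integrated inhaled dose. Room volume, emission rate, air-exchange rates, and breathing rate are illustrative assumptions, not the article's fitted values.

    ```python
    # Sketch: well-mixed single-room model for a solvent emission episode.
    # dC/dt = E(t)/V - a*C, integrated on a time grid; parameters are illustrative only.
    import numpy as np

    V = 30.0            # room volume, m^3
    a_closed = 0.5      # air changes per hour, windows closed
    a_open = 3.0        # air changes per hour, windows/doors open + exhaust fan
    E = 2000.0          # emission rate while stripping, mg/h (placeholder)
    breathing = 1.0     # inhalation rate, m^3/h

    t = np.linspace(0.0, 4.0, 2401)          # 4 hours, 6-second steps
    dt = t[1] - t[0]

    def simulate(ach, strip_hours=1.0):
        C = np.zeros_like(t)                  # concentration, mg/m^3
        for i in range(1, len(t)):
            emitting = E if t[i] <= strip_hours else 0.0
            dC = emitting / V - ach * C[i - 1]
            C[i] = C[i - 1] + dC * dt
        return C

    for label, ach in [("closed room", a_closed), ("ventilated", a_open)]:
        C = simulate(ach)
        dose = np.sum(C) * dt * breathing     # inhaled mass over the episode, mg
        print(f"{label}: peak {C.max():.0f} mg/m3, inhaled dose {dose:.0f} mg")
    ```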

  4. Weather Observation Systems and Efficiency of Fighting Forest Fires

    NASA Astrophysics Data System (ADS)

    Khabarov, N.; Moltchanova, E.; Obersteiner, M.

    2007-12-01

    Weather observation is an essential component of modern forest fire management systems. Satellite and in-situ based weather observation systems might help to reduce forest loss, human casualties and destruction of economic capital. In this paper, we develop and apply a methodology to assess the benefits of various weather observation systems on reductions of burned area due to early fire detection. In particular, we consider a model where the air patrolling schedule is determined by a fire hazard index. The index is computed from gridded daily weather data for an area covering parts of Spain and Portugal. We conduct a number of simulation experiments. First, the resolution of the original data set is artificially reduced. The reduction of the total forest burned area associated with air patrolling based on a finer weather grid indicates the benefit of using higher spatially resolved weather observations. Second, we consider a stochastic model to simulate forest fires and explore the sensitivity of the model with respect to the quality of input data. The analysis of the combination of satellite and ground monitoring reveals potential cost savings due to a "system of systems effect" and a substantial reduction in burned area. Finally, we estimate the marginal improvement schedule for loss of life and economic capital as a function of the improved fire observing system.

  5. Case matching and the reduction of selection bias in quasi-experiments: The relative importance of pretest measures of outcome, of unreliable measurement, and of mode of data analysis.

    PubMed

    Cook, Thomas D; Steiner, Peter M

    2010-03-01

    In this article, we note the many ontological, epistemological, and methodological similarities between how Campbell and Rubin conceptualize causation. We then explore 3 differences in their written emphases about individual case matching in observational studies. We contend that (a) Campbell places greater emphasis than Rubin on the special role of pretest measures of outcome among matching variables; (b) Campbell is more explicitly concerned with unreliability in the covariates; and (c) for analyzing the outcome, only Rubin emphasizes the advantages of using propensity score over regression methods. To explore how well these 3 factors reduce bias, we reanalyze and review within-study comparisons that contrast experimental and statistically adjusted nonexperimental causal estimates from studies with the same target population and treatment content. In this context, the choice of covariates counts most for reducing selection bias, and the pretest usually plays a special role relative to all the other covariates considered singly. Unreliability in the covariates also influences bias reduction but by less. Furthermore, propensity score and regression methods produce comparable degrees of bias reduction, though these within-study comparisons may not have met the theoretically specified conditions most likely to produce differences due to analytic method.
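
    The comparison can be illustrated with a small simulation: selection into treatment depends on a pretest and a second covariate, and the treatment effect is recovered by regression adjustment and by inverse-propensity weighting (standing in here for propensity-score methods), with and without the pretest. The data-generating process is invented for illustration and is not the article's within-study data.

    ```python
    # Sketch: simulated selection bias, adjusted by (i) regression on covariates and
    # (ii) inverse-propensity weighting, with the pretest included or excluded.
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(3)
    n, true_effect = 5000, 2.0
    pretest = rng.normal(size=n)                          # pretest measure of the outcome
    x = 0.5 * pretest + rng.normal(size=n)                # another covariate
    p_treat = 1 / (1 + np.exp(-(1.2 * pretest + 0.5 * x)))
    treat = rng.binomial(1, p_treat)                      # self-selection into treatment
    post = true_effect * treat + 2.0 * pretest + 1.0 * x + rng.normal(size=n)

    def regression_estimate(covs):
        X = np.column_stack([treat] + covs)
        return LinearRegression().fit(X, post).coef_[0]

    def ipw_estimate(covs):
        C = np.column_stack(covs)
        ps = LogisticRegression().fit(C, treat).predict_proba(C)[:, 1]
        w = treat / ps + (1 - treat) / (1 - ps)
        treated = np.average(post[treat == 1], weights=w[treat == 1])
        control = np.average(post[treat == 0], weights=w[treat == 0])
        return treated - control

    print("naive difference:            ", post[treat == 1].mean() - post[treat == 0].mean())
    print("regression, with pretest:    ", regression_estimate([pretest, x]))
    print("IPW,        with pretest:    ", ipw_estimate([pretest, x]))
    print("regression, without pretest: ", regression_estimate([x]))
    print("IPW,        without pretest: ", ipw_estimate([x]))
    ```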

  6. Impacts of Vehicle Weight Reduction via Material Substitution on Life-Cycle Greenhouse Gas Emissions.

    PubMed

    Kelly, Jarod C; Sullivan, John L; Burnham, Andrew; Elgowainy, Amgad

    2015-10-20

    This study examines the vehicle-cycle and vehicle total life-cycle impacts of substituting lightweight materials into vehicles. We determine part-based greenhouse gas (GHG) emission ratios by collecting material substitution data and evaluating that alongside known mass-based GHG ratios (using and updating Argonne National Laboratory's GREET model) associated with material pair substitutions. Several vehicle parts are lightweighted via material substitution, using substitution ratios from a U.S. Department of Energy report, to determine GHG emissions. We then examine fuel-cycle GHG reductions from lightweighting. The fuel reduction value methodology is applied using FRV estimates of 0.15-0.25 and 0.25-0.5 L/(100km·100 kg), without and with powertrain adjustments, respectively. GHG breakeven values are derived for both driving distance and material substitution ratio. While material substitution can reduce vehicle weight, it often increases vehicle-cycle GHGs. It is likely that replacing steel (the dominant vehicle material) with wrought aluminum, carbon fiber reinforced plastic (CFRP), or magnesium will increase vehicle-cycle GHGs. However, lifetime fuel economy benefits often outweigh the vehicle-cycle increase, resulting in a net total life-cycle GHG benefit. This is the case for steel replaced by wrought aluminum in all assumed cases, and for CFRP and magnesium except at high substitution ratios and low FRVs.
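
    A minimal sketch of the driving-distance breakeven logic: the added vehicle-cycle GHGs from a substitution are divided by the per-kilometre fuel-cycle savings implied by a fuel reduction value. The mass saving, added production GHGs, FRV, and fuel carbon intensity below are placeholders, not values from the study.

    ```python
    # Sketch: lifetime-distance breakeven for a lightweighting swap. Extra vehicle-cycle
    # GHGs from material substitution are paid back by per-kilometre fuel-cycle savings
    # computed from a fuel reduction value (FRV). All numbers are illustrative.
    mass_saved_kg = 100.0                 # mass removed by the substitution
    extra_vehicle_cycle_kgCO2e = 600.0    # added production GHGs for the lighter part (assumed)
    frv_l_per_100km_100kg = 0.35          # litres saved per 100 km per 100 kg removed
    gasoline_kgCO2e_per_l = 2.9           # approximate well-to-wheel intensity of gasoline (assumed)

    fuel_saved_l_per_km = frv_l_per_100km_100kg * (mass_saved_kg / 100.0) / 100.0
    ghg_saved_per_km = fuel_saved_l_per_km * gasoline_kgCO2e_per_l
    breakeven_km = extra_vehicle_cycle_kgCO2e / ghg_saved_per_km
    print(f"breakeven distance ~ {breakeven_km:,.0f} km")
    ```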

  7. Does Wechsler Intelligence Scale administration and scoring proficiency improve during assessment training?

    PubMed

    Platt, Tyson L; Zachar, Peter; Ray, Glen E; Lobello, Steven G; Underhill, Andrea T

    2007-04-01

    Studies have found that Wechsler scale administration and scoring proficiency is not easily attained during graduate training. These findings may be related to methodological issues. Using a single-group repeated measures design, this study documents statistically significant, though modest, error reduction on the WAIS-III and WISC-III during a graduate course in assessment. The study design does not permit the isolation of training factors related to error reduction, or assessment of whether error reduction is a function of mere practice. However, the results do indicate that previous study findings of no or inconsistent improvement in scoring proficiency may have been the result of methodological factors. Implications for teaching individual intelligence testing and further research are discussed.

  8. Adjustment for survey non‐representativeness using record‐linkage: refined estimates of alcohol consumption by deprivation in Scotland

    PubMed Central

    Gorman, Emma; Leyland, Alastair H.; McCartney, Gerry; Katikireddi, Srinivasa Vittal; Rutherford, Lisa; Graham, Lesley; Robinson, Mark

    2017-01-01

    Abstract Background and aims Analytical approaches to addressing survey non‐participation bias typically use only demographic information to improve estimates. We applied a novel methodology which uses health information from data linkage to adjust for non‐representativeness. We illustrate the method by presenting adjusted alcohol consumption estimates for Scotland. Design Data on consenting respondents to the Scottish Health Surveys (SHeSs) 1995–2010 were linked confidentially to routinely collected hospital admission and mortality records. Synthetic observations representing non‐respondents were created using general population data. Multiple imputation was performed to compute adjusted alcohol estimates given a range of assumptions about the missing data. Adjusted estimates of mean weekly consumption were additionally calibrated to per‐capita alcohol sales data. Setting Scotland. Participants 13 936 male and 18 021 female respondents to the SHeSs 1995–2010, aged 20–64 years. Measurements Weekly alcohol consumption, non‐, binge‐ and problem‐drinking. Findings Initial adjustment for non‐response resulted in estimates of mean weekly consumption that were elevated by up to 17.8% [26.5 units (18.6–34.4)] compared with corrections based solely on socio‐demographic data [22.5 (17.7–27.3)]; other drinking behaviour estimates were little changed. Under more extreme assumptions the overall difference was up to 53%, and calibrating to sales estimates resulted in up to 88% difference. Increases were especially pronounced among males in deprived areas. Conclusions The use of routinely collected health data to reduce bias arising from survey non‐response resulted in higher alcohol consumption estimates among working‐age males in Scotland, with less impact for females. This new method of bias reduction can be generalized to other surveys to improve estimates of alternative harmful behaviours. PMID:28276110

  9. Forecasting human exposure to atmospheric pollutants in Portugal - A modelling approach

    NASA Astrophysics Data System (ADS)

    Borrego, C.; Sá, E.; Monteiro, A.; Ferreira, J.; Miranda, A. I.

    2009-12-01

    Air pollution has become one of the main environmental concerns because of its known impact on human health. Aiming to inform the population about the air they are breathing, several air quality modelling systems have been developed and tested, allowing the assessment and forecast of ambient air pollution levels in many countries. However, every day, an individual is exposed to different concentrations of atmospheric pollutants as he/she moves between different outdoor and indoor places (the so-called microenvironments). Therefore, a more efficient way to protect the population from the health risks caused by air pollution should be based on exposure rather than on ambient concentration estimates. The objective of the present study is to develop a methodology to forecast the human exposure of the Portuguese population based on the air quality forecasting system available and validated for Portugal since 2005. In addition, a long-term evaluation of human exposure estimates is obtained using one year of this forecasting system's application. Additionally, a hypothetical 50% emission reduction scenario has been designed and studied to assess the impact of emission reduction strategies on human exposure. To estimate the population exposure, the forecasting results of the air quality modelling system MM5-CHIMERE have been combined with the population spatial distribution over Portugal and their time-activity patterns, i.e. the fraction of the day spent in specific indoor and outdoor places. The population characterization concerning age, work, type of occupation and related time spent was obtained from the national census and available surveys performed by the National Institute of Statistics. A daily exposure estimation module has been developed gathering all these data and considering empirical indoor/outdoor relations from the literature to calculate the indoor concentrations in each of the microenvironments considered, namely home, office/school, and other indoors (leisure activities like shopping areas, gym, theatre/cinema and restaurants). The results show how the developed modelling system can be useful to anticipate air pollution episodes and to estimate their effects on human health on a long-term basis. The two metropolitan areas of Porto and Lisbon are identified as the most critical ones in terms of air pollution effects on human health in Portugal, from both a long-term and a short-term perspective. The coexistence of high concentration values and high population density is the key factor for these stressed areas. Regarding the 50% emission reduction scenario, the model results are significantly different for the two pollutants: there is a small overall reduction in the individual exposure values of PM10 (<10 μg m-3 h), but for O3, in contrast, there is an extended area where exposure values increase with emission reduction. This detailed knowledge is a prerequisite for the development of effective policies to reduce the foreseen adverse impact of air pollution on human health and to act in time.
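
    The core exposure calculation can be sketched as a time-weighted sum over microenvironments, with indoor concentrations obtained from empirical indoor/outdoor ratios; the time budget and ratios below are illustrative assumptions, not the study's values.

    ```python
    # Sketch: daily time-weighted exposure from a forecast outdoor concentration,
    # microenvironment time fractions and empirical indoor/outdoor (I/O) ratios.
    outdoor_conc = 35.0                      # forecast PM10, ug/m3, for one grid cell (placeholder)

    time_fraction = {"home": 0.60, "office_school": 0.25, "other_indoor": 0.05, "outdoor": 0.10}
    io_ratio = {"home": 0.7, "office_school": 0.6, "other_indoor": 0.8, "outdoor": 1.0}

    exposure = sum(outdoor_conc * io_ratio[me] * frac for me, frac in time_fraction.items())
    print(f"daily time-weighted exposure ~ {exposure:.1f} ug/m3")
    ```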

  10. Methodology to Estimate the Quantity, Composition, and Management of Construction and Demolition Debris in the United States

    EPA Science Inventory

    This report, Methodology to Estimate the Quantity, Composition and Management of Construction and Demolition Debris in the US, was developed to expand access to data on CDD in the US and to support research on CDD and sustainable materials management. Since past US EPA CDD estima...

  11. VALIDATION OF A METHOD FOR ESTIMATING POLLUTION EMISSION RATES FROM AREA SOURCES USING OPEN-PATH FTIR SEPCTROSCOPY AND DISPERSION MODELING TECHNIQUES

    EPA Science Inventory

    The paper describes a methodology developed to estimate emissions factors for a variety of different area sources in a rapid, accurate, and cost effective manner. he methodology involves using an open-path Fourier transform infrared (FTIR) spectrometer to measure concentrations o...

  12. 76 FR 38654 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-01

    ... Project Fetal-Infant Mortality Review: Human Immunodeficiency Virus Prevention Methodology (FHPM)--New... Mortality Review: Human Immunodeficiency Virus Prevention Methodology (FHPM) is designed to identify and... investigation and improvement strategy. In order to address perinatal HIV transmission at the community level...

  13. Tsunami evacuation modelling as a tool for risk reduction: application to the coastal area of El Salvador

    NASA Astrophysics Data System (ADS)

    González-Riancho, P.; Aguirre-Ayerbe, I.; Aniel-Quiroga, I.; Abad, S.; González, M.; Larreynaga, J.; Gavidia, F.; Gutiérrez, O. Q.; Álvarez-Gómez, J. A.; Medina, R.

    2013-12-01

    Advances in the understanding and prediction of tsunami impacts allow the development of risk reduction strategies for tsunami-prone areas. This paper presents an integral framework for the formulation of tsunami evacuation plans based on tsunami vulnerability assessment and evacuation modelling. This framework considers (i) the hazard aspects (tsunami flooding characteristics and arrival time), (ii) the characteristics of the exposed area (people, shelters and road network), (iii) the current tsunami warning procedures and timing, (iv) the time needed to evacuate the population, and (v) the identification of measures to improve the evacuation process. The proposed methodological framework aims to bridge between risk assessment and risk management in terms of tsunami evacuation, as it allows for an estimation of the degree of evacuation success of specific management options, as well as for the classification and prioritization of the gathered information, in order to formulate an optimal evacuation plan. The framework has been applied to the El Salvador case study, demonstrating its applicability to site-specific response times and population characteristics.

  14. A database and probabilistic assessment methodology for carbon dioxide enhanced oil recovery and associated carbon dioxide retention in the United States

    USGS Publications Warehouse

    Warwick, Peter D.; Verma, Mahendra K.; Attanasi, Emil; Olea, Ricardo A.; Blondes, Madalyn S.; Freeman, Philip; Brennan, Sean T.; Merrill, Matthew; Jahediesfanjani, Hossein; Roueche, Jacqueline; Lohr, Celeste D.

    2017-01-01

    The U.S. Geological Survey (USGS) has developed an assessment methodology for estimating the potential incremental technically recoverable oil resources resulting from carbon dioxide-enhanced oil recovery (CO2-EOR) in reservoirs with appropriate depth, pressure, and oil composition. The methodology also includes a procedure for estimating the CO2 that remains in the reservoir after the CO2-EOR process is complete. The methodology relies on a reservoir-level database that incorporates commercially available geologic and engineering data. The mathematical calculations of this assessment methodology were tested and produced realistic results for the Permian Basin Horseshoe Atoll, Upper Pennsylvanian-Wolfcampian Play (Texas, USA). The USGS plans to use the new methodology to conduct an assessment of technically recoverable hydrocarbons and associated CO2 sequestration resulting from CO2-EOR in the United States.

  15. An approach to software cost estimation

    NASA Technical Reports Server (NTRS)

    Mcgarry, F.; Page, J.; Card, D.; Rohleder, M.; Church, V.

    1984-01-01

    A general procedure for software cost estimation in any environment is outlined. The basic concepts of work and effort estimation are explained, some popular resource estimation models are reviewed, and the accuracy of resource estimates is discussed. A software cost prediction procedure based on the experiences of the Software Engineering Laboratory in the flight dynamics area and incorporating management expertise, cost models, and historical data is described. The sources of information and relevant parameters available during each phase of the software life cycle are identified. The methodology suggested incorporates these elements into a customized management tool for software cost prediction. Detailed guidelines for estimation in the flight dynamics environment developed using this methodology are presented.
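
    For illustration only, a generic parametric effort model of the kind reviewed above (effort grows as a power of size, scaled by a productivity multiplier); the constants are placeholders and are not the Software Engineering Laboratory's calibrated model.

    ```python
    # Sketch: a generic parametric effort model, effort = a * size^b * m, where the
    # constants would normally be calibrated against historical projects.
    def estimate_effort(kloc, a=3.0, b=1.12, multiplier=1.0):
        """Return estimated effort in staff-months for `kloc` thousand lines of code."""
        return a * kloc ** b * multiplier

    for size in (10, 30, 100):
        print(f"{size:>4} KLOC -> {estimate_effort(size, multiplier=1.1):6.1f} staff-months")
    ```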

  16. Estimation of carbon dioxide emissions per urban center link unit using data collected by the Advanced Traffic Information System in Daejeon, Korea

    NASA Astrophysics Data System (ADS)

    Ryu, B. Y.; Jung, H. J.; Bae, S. H.; Choi, C. U.

    2013-12-01

    CO2 emissions on roads in urban centers substantially affect global warming. It is important to quantify CO2 emissions in terms of the link unit in order to reduce these emissions on the roads. Therefore, in this study, we utilized real-time traffic data and attempted to develop a methodology for estimating CO2 emissions per link unit. Because of the recent development of the vehicle-to-infrastructure (V2I) communication technology, data from probe vehicles (PVs) can be collected and speed per link unit can be calculated. Among the existing emission calculation methodologies, mesoscale modeling, which is a representative modeling measurement technique, requires speed and traffic data per link unit. As it is not feasible to install fixed detectors at every link for traffic data collection, in this study, we developed a model for traffic volume estimation by utilizing the number of PVs that can be additionally collected when the PV data are collected. Multiple linear regression and an artificial neural network (ANN) were used for estimating the traffic volume. The independent variables and input data for each model are the number of PVs, travel time index (TTI), the number of lanes, and time slots. The result from the traffic volume estimate model shows that the mean absolute percentage error (MAPE) of the ANN is 18.67%, thus proving that it is more effective. The ANN-based traffic volume estimation served as the basis for the calculation of emissions per link unit. The daily average emissions for Daejeon, where this study was based, were 2210.19 ton/day. By vehicle type, passenger cars accounted for 71.28% of the total emissions. By road, Gyeryongro emitted 125.48 ton/day, accounting for 5.68% of the total emission, the highest percentage of all roads. In terms of emissions per kilometer, Hanbatdaero had the highest emission volume, with 7.26 ton/day/km on average. This study proves that real-time traffic data allow an emissions estimate in terms of the link unit. Furthermore, an analysis of CO2 emissions can support traffic management to make decisions related to the reduction of carbon emissions.
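
    A minimal sketch of the traffic-volume estimation step, assuming scikit-learn: link volume is predicted from probe-vehicle counts, travel time index, lane count, and time slot with a small neural network, and scored with MAPE. The records and the underlying relationship are synthetic, not the Daejeon ATIS data.

    ```python
    # Sketch: estimating link traffic volume from probe-vehicle (PV) data and scoring
    # with mean absolute percentage error (MAPE). All records below are synthetic.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    n = 3000
    pv_count = rng.poisson(12, n)                    # probe vehicles observed on the link
    tti = rng.uniform(1.0, 3.0, n)                   # travel time index
    lanes = rng.integers(1, 5, n)
    hour = rng.integers(0, 24, n)
    volume = (300 + 40 * pv_count + 150 * lanes - 80 * (tti - 1)
              + 30 * np.sin(hour / 24 * 2 * np.pi) + rng.normal(0, 40, n))  # invented "truth", veh/h

    X = np.column_stack([pv_count, tti, lanes, hour])
    X_tr, X_te, y_tr, y_te = train_test_split(X, volume, test_size=0.25, random_state=0)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0))
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    mape = np.mean(np.abs((y_te - pred) / y_te)) * 100
    print(f"MAPE: {mape:.1f}%")
    ```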

  17. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.

  18. Efficacy of hydrotherapy in fibromyalgia syndrome--a meta-analysis of randomized controlled clinical trials.

    PubMed

    Langhorst, Jost; Musial, Frauke; Klose, Petra; Häuser, Winfried

    2009-09-01

    To systematically review the efficacy of hydrotherapy in FM syndrome (FMS). We screened MEDLINE, PsychInfo, EMBASE, CAMBASE and CENTRAL (through December 2008) and the reference sections of original studies and systematic reviews on hydrotherapy in FMS. Randomized controlled trials (RCTs) on the treatment of FMS with hydrotherapy (spa-, balneo- and thalassotherapy, hydrotherapy and packing and compresses) were analysed. Methodological quality was assessed by the van Tulder score. Effects were summarized using standardized mean differences (SMDs). Ten out of 13 RCTs with 446 subjects, with a median sample size of 41 (range 24-80) and a median treatment time of 240 (range 200-300) min, were included in the meta-analysis. Only three studies had a moderate quality score. There was moderate evidence for reduction of pain (SMD -0.78; 95% CI -1.42, -0.13; P < 0.0001) and improved health-related quality of life (HRQOL) (SMD -1.67; 95% CI -2.91, -0.43; P = 0.008) at the end of therapy. There was moderate evidence that the reduction of pain (SMD -1.27; 95% CI -2.15, -0.38; P = 0.005) and improvement of HRQOL (SMD -1.16; 95% CI -1.96, -0.36; P = 0.005) could be maintained at follow-up (median 14 weeks). There is moderate evidence that hydrotherapy has short-term beneficial effects on pain and HRQOL in FMS patients. There is a risk of over-estimating the effects of hydrotherapy owing to methodological weaknesses of the studies and to the small trials included in the meta-analysis.
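
    The pooling step can be illustrated with a fixed-effect inverse-variance combination of standardized mean differences (the review's full random-effects handling is not reproduced); the SMDs and standard errors below are placeholders, not the trial data.

    ```python
    # Sketch: inverse-variance (fixed-effect) pooling of standardized mean differences.
    import numpy as np

    smd = np.array([-0.9, -0.5, -1.2, -0.4, -0.8])   # per-trial SMDs (placeholders)
    se = np.array([0.35, 0.30, 0.45, 0.25, 0.40])    # their standard errors (placeholders)

    w = 1 / se**2                                     # inverse-variance weights
    pooled = np.sum(w * smd) / np.sum(w)
    pooled_se = np.sqrt(1 / np.sum(w))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    print(f"pooled SMD {pooled:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
    ```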

  19. Purchasing power of civil servant health workers in Mozambique

    PubMed Central

    Ferrinho, Fátima; Amaral, Marta; Russo, Giuliano; Ferrinho, Paulo

    2012-01-01

    Background Health workers’ purchasing power is an important consideration in the development of strategies for health workforce development. This work explores the purchasing power variation of Mozambican public sector health workers between 1999 and 2007. Methods This was done through a simple and easy-to-apply methodology to estimate salaries’ capitalization rate, by means of the accumulated inflation rate, after taking wage revisions into account. All the career categories in the Ministry of Health and affiliated public sector institutions were considered. Results In general, the calculated purchasing power increased for most careers under study, and the highest percentage increase was observed for the lowest remuneration careers, contributing to a relative reduction in the difference between the higher and the lower salaries. Conclusion These results seem to contradict a commonly held assumption that health sector pay has deteriorated over the years, with substantial damage for the poorest. Further studies appear to be needed to design a more accurate methodology to better understand the evolution and impact of public sector health workers’ remunerations across the years. PMID:22368757

  20. A Modeling Methodology to Support Evaluation Public Health Impacts on Air Pollution Reduction Programs

    EPA Science Inventory

    Environmental public health protection requires a good understanding of types and locations of pollutant emissions of health concern and their relationship to environmental public health indicators. Therefore, it is necessary to develop the methodologies, data sources, and tools...

  1. Causal Interpretations of Psychological Attributes

    ERIC Educational Resources Information Center

    Kane, Mike

    2017-01-01

    In the article "Rethinking Traditional Methods of Survey Validation" Andrew Maul describes a minimalist validation methodology for survey instruments, which he suggests is widely used in some areas of psychology, and then critiques this methodology empirically and conceptually. He provides a reductio ad absurdum argument by showing that…

  2. Hot emission model for mobile sources: application to the metropolitan region of the city of Santiago, Chile.

    PubMed

    Corvalán, Roberto M; Osses, Mauricio; Urrutia, Cristian M

    2002-02-01

    Depending on the final application, several methodologies for traffic emission estimation have been developed. Emission estimation based on total miles traveled or other average factors is a sufficient approach only for extended areas such as national or worldwide areas. For road emission control and strategy design, microscale analysis based on real-world emission estimates is often required. This involves actual driving behavior and emission factors of the local vehicle fleet under study. This paper reports on a microscale model for hot road emissions and its application to the metropolitan region of the city of Santiago, Chile. The methodology considers street-by-street hot emission estimation with its temporal and spatial distribution. The input data come from experimental emission factors based on local driving patterns and surveys of traffic flows for different vehicle categories. The methodology developed is able to estimate hourly hot road CO, total unburned hydrocarbons (THCs), particulate matter (PM), and NO(x) emissions for predefined day types and vehicle categories.
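
    A minimal sketch of the street-by-street aggregation, E = flow × length × EF(speed), for one hour and one pollutant; the link table and the speed-dependent emission factor are invented placeholders, not the Santiago fleet factors.

    ```python
    # Sketch: hourly hot emissions per link as flow * length * speed-dependent emission factor.
    links = [
        # (link id, length km, vehicles/h, mean speed km/h) -- placeholder survey data
        ("A-1", 1.2, 1800, 25.0),
        ("A-2", 0.8, 1200, 40.0),
        ("B-1", 2.5,  600, 60.0),
    ]

    def co_emission_factor(speed_kmh):
        """Illustrative speed-dependent CO factor, g/km (placeholder curve)."""
        return 30.0 / max(speed_kmh, 5.0) + 0.05 * speed_kmh

    hourly_co_g = {lid: flow * length * co_emission_factor(v) for lid, length, flow, v in links}
    for lid, grams in hourly_co_g.items():
        print(f"link {lid}: {grams / 1000:.1f} kg CO per hour")
    ```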

  3. Universal Approach to Estimate Perfluorocarbons Emissions During Individual High-Voltage Anode Effect for Prebaked Cell Technologies

    NASA Astrophysics Data System (ADS)

    Dion, Lukas; Gaboury, Simon; Picard, Frédéric; Kiss, Laszlo I.; Poncsak, Sandor; Morais, Nadia

    2018-04-01

    Recent investigations on aluminum electrolysis cells demonstrated limitations of the commonly used tier-3 slope methodology to estimate perfluorocarbon (PFC) emissions from high-voltage anode effects (HVAEs). These limitations are greater for smelters with a reduced HVAE frequency. A novel approach is proposed to estimate, with a tier-2 model, the specific emissions resulting from individual HVAEs instead of estimating monthly emissions for pot lines with the slope methodology. This approach considers the nonlinear behavior of PFC emissions as a function of the polarized anode effect duration but also integrates the change in behavior attributed to cell productivity. Validation was performed by comparing the new approach and the slope methodology with measurement campaigns from different smelters. The results demonstrate good agreement between measured and estimated emissions and more accurately reflect the individual HVAE dynamics occurring over time. Finally, the possible impact of this approach for the aluminum industry is discussed.

  4. A revised load estimation procedure for the Susquehanna, Potomac, Patuxent, and Choptank rivers

    USGS Publications Warehouse

    Yochum, Steven E.

    2000-01-01

    The U.S. Geological Survey's Chesapeake Bay River Input Program has updated the nutrient and suspended-sediment load data base for the Susquehanna, Potomac, Patuxent, and Choptank Rivers using a multiple-window, center-estimate regression methodology. The revised method optimizes the seven-parameter regression approach that has been used historically by the program. The revised method estimates load using the fifth or center year of a sliding 9-year window. Each year a new model is run for each site and constituent, the most recent year is added, and the previous 4 years of estimates are updated. The fifth year in the 9-year window is considered the best estimate and is kept in the data base. The last year of estimation shows the most change from the previous year's estimate and this change approaches a minimum at the fifth year. Differences between loads computed using this revised methodology and the loads populating the historical data base have been noted but the load estimates do not typically change drastically. The data base resulting from the application of this revised methodology is populated by annual and monthly load estimates that are known with greater certainty than in the previous load data base.
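
    A minimal sketch of the multiple-window, centre-estimate idea: fit a model to each sliding 9-year window and retain the prediction for the window's fifth (centre) year. A plain linear trend on synthetic annual loads stands in for the program's seven-parameter regression.

    ```python
    # Sketch: sliding 9-year windows, keeping the centre-year estimate from each window.
    import numpy as np

    rng = np.random.default_rng(5)
    years = np.arange(1985, 2000)
    load = 100 + 2.5 * (years - years[0]) + rng.normal(0, 8, len(years))   # synthetic annual loads

    window = 9
    centre_estimates = {}
    for start in range(len(years) - window + 1):
        yr = years[start:start + window]
        ld = load[start:start + window]
        coeffs = np.polyfit(yr, ld, deg=1)        # stand-in for the 7-parameter load model
        centre_year = yr[window // 2]             # 5th year of the 9-year window
        centre_estimates[int(centre_year)] = np.polyval(coeffs, centre_year)

    for yr, est in centre_estimates.items():
        print(yr, round(est, 1))
    ```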

  5. A Hierarchical Clustering Methodology for the Estimation of Toxicity

    EPA Science Inventory

    A Quantitative Structure Activity Relationship (QSAR) methodology based on hierarchical clustering was developed to predict toxicological endpoints. This methodology utilizes Ward's method to divide a training set into a series of structurally similar clusters. The structural sim...

  6. 41 CFR 102-80.50 - Are Federal agencies responsible for identifying/estimating risks and for appropriate risk...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... identify and estimate safety and environmental management risks and appropriate risk reduction strategies... responsible for identifying/estimating risks and for appropriate risk reduction strategies? 102-80.50 Section... Environmental Management Risks and Risk Reduction Strategies § 102-80.50 Are Federal agencies responsible for...

  7. Case Example of Dose Optimization Using Data From Bortezomib Dose-Finding Clinical Trials

    PubMed Central

    Backenroth, Daniel; Cheung, Ying Kuen Ken; Hershman, Dawn L.; Vulih, Diana; Anderson, Barry; Ivy, Percy; Minasian, Lori

    2016-01-01

    Purpose The current dose-finding methodology for estimating the maximum tolerated dose of investigational anticancer agents is based on the cytotoxic chemotherapy paradigm. Molecularly targeted agents (MTAs) have different toxicity profiles, which may lead to more long-lasting mild or moderate toxicities as well as to late-onset and cumulative toxicities. Several approved MTAs have been poorly tolerated during long-term administration, leading to postmarketing dose optimization studies to re-evaluate the optimal treatment dose. Using data from completed bortezomib dose-finding trials, we explore its toxicity profile, optimize its dose, and examine the appropriateness of current designs for identifying an optimal dose. Patients and Methods We classified the toxicities captured from 481 patients in 14 bortezomib dose-finding studies conducted through the National Cancer Institute Cancer Therapy Evaluation Program, computed the incidence of late-onset toxicities, and compared the incidence of dose-limiting toxicities (DLTs) among groups of patients receiving different doses of bortezomib. Results A total of 13,008 toxicities were captured: 46% of patients’ first DLTs and 88% of dose reductions or discontinuations of treatment because of toxicity were observed after the first cycle. Moreover, for the approved dose of 1.3 mg/m2, the estimated cumulative incidence of DLT was > 50%, and the estimated cumulative incidence of dose reduction or treatment discontinuation because of toxicity was nearly 40%. Conclusions When considering the entire course of treatment, the approved bortezomib dose exceeds the conventional ceiling DLT rate of 20% to 33%. Retrospective analysis of trial data provides an opportunity for dose optimization of MTAs. Future dose-finding studies of MTAs should take into account late-onset toxicities to ensure that a tolerable dose is identified for future efficacy and comparative trials. PMID:26926682

  8. Health benefits from large-scale ozone reduction in the United States.

    PubMed

    Berman, Jesse D; Fann, Neal; Hollingsworth, John W; Pinkerton, Kent E; Rom, William N; Szema, Anthony M; Breysse, Patrick N; White, Ronald H; Curriero, Frank C

    2012-10-01

    Exposure to ozone has been associated with adverse health effects, including premature mortality and cardiopulmonary and respiratory morbidity. In 2008, the U.S. Environmental Protection Agency (EPA) lowered the primary (health-based) National Ambient Air Quality Standard (NAAQS) for ozone to 75 ppb, expressed as the fourth-highest daily maximum 8-hr average over a 24-hr period. Based on recent monitoring data, U.S. ozone levels still exceed this standard in numerous locations, resulting in avoidable adverse health consequences. We sought to quantify the potential human health benefits from achieving the current primary NAAQS standard of 75 ppb and two alternative standard levels, 70 and 60 ppb, which represent the range recommended by the U.S. EPA Clean Air Scientific Advisory Committee (CASAC). We applied health impact assessment methodology to estimate numbers of deaths and other adverse health outcomes that would have been avoided during 2005, 2006, and 2007 if the current (or lower) NAAQS ozone standards had been met. Estimated reductions in ozone concentrations were interpolated according to geographic area and year, and concentration-response functions were obtained or derived from the epidemiological literature. We estimated that annual numbers of avoided ozone-related premature deaths would have ranged from 1,410 to 2,480 at 75 ppb to 2,450 to 4,130 at 70 ppb, and 5,210 to 7,990 at 60 ppb. Acute respiratory symptoms would have been reduced by 3 million cases and school-loss days by 1 million cases annually if the current 75-ppb standard had been attained. Substantially greater health benefits would have resulted if the CASAC-recommended range of standards (70-60 ppb) had been met. Attaining a more stringent primary ozone standard would significantly reduce ozone-related premature mortality and morbidity.
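
    The core health impact assessment calculation can be sketched as follows, assuming a log-linear concentration-response function; the mortality coefficient, baseline rate, population, and ozone decrement are illustrative placeholders, not the values behind the estimates above.

    ```python
    # Sketch: avoided deaths from an ozone reduction using a log-linear
    # concentration-response function. All inputs are illustrative placeholders.
    import numpy as np

    population = 5.0e6            # exposed population (assumed)
    baseline_rate = 0.008         # annual deaths per person in the relevant cause group (assumed)
    beta = 0.00052                # per-ppb log-linear mortality coefficient (assumed)
    delta_ozone_ppb = 6.0         # modelled reduction needed to meet the standard (assumed)

    attributable_fraction = 1.0 - np.exp(-beta * delta_ozone_ppb)
    avoided_deaths = baseline_rate * population * attributable_fraction
    print(f"avoided premature deaths per year ~ {avoided_deaths:.0f}")
    ```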

  9. Health Benefits from Large-Scale Ozone Reduction in the United States

    PubMed Central

    Berman, Jesse D.; Fann, Neal; Hollingsworth, John W.; Pinkerton, Kent E.; Rom, William N.; Szema, Anthony M.; Breysse, Patrick N.; White, Ronald H.

    2012-01-01

    Background: Exposure to ozone has been associated with adverse health effects, including premature mortality and cardiopulmonary and respiratory morbidity. In 2008, the U.S. Environmental Protection Agency (EPA) lowered the primary (health-based) National Ambient Air Quality Standard (NAAQS) for ozone to 75 ppb, expressed as the fourth-highest daily maximum 8-hr average over a 24-hr period. Based on recent monitoring data, U.S. ozone levels still exceed this standard in numerous locations, resulting in avoidable adverse health consequences. Objectives: We sought to quantify the potential human health benefits from achieving the current primary NAAQS standard of 75 ppb and two alternative standard levels, 70 and 60 ppb, which represent the range recommended by the U.S. EPA Clean Air Scientific Advisory Committee (CASAC). Methods: We applied health impact assessment methodology to estimate numbers of deaths and other adverse health outcomes that would have been avoided during 2005, 2006, and 2007 if the current (or lower) NAAQS ozone standards had been met. Estimated reductions in ozone concentrations were interpolated according to geographic area and year, and concentration–response functions were obtained or derived from the epidemiological literature. Results: We estimated that annual numbers of avoided ozone-related premature deaths would have ranged from 1,410 to 2,480 at 75 ppb to 2,450 to 4,130 at 70 ppb, and 5,210 to 7,990 at 60 ppb. Acute respiratory symptoms would have been reduced by 3 million cases and school-loss days by 1 million cases annually if the current 75-ppb standard had been attained. Substantially greater health benefits would have resulted if the CASAC-recommended range of standards (70–60 ppb) had been met. Conclusions: Attaining a more stringent primary ozone standard would significantly reduce ozone-related premature mortality and morbidity. PMID:22809899

  10. Case Example of Dose Optimization Using Data From Bortezomib Dose-Finding Clinical Trials.

    PubMed

    Lee, Shing M; Backenroth, Daniel; Cheung, Ying Kuen Ken; Hershman, Dawn L; Vulih, Diana; Anderson, Barry; Ivy, Percy; Minasian, Lori

    2016-04-20

    The current dose-finding methodology for estimating the maximum tolerated dose of investigational anticancer agents is based on the cytotoxic chemotherapy paradigm. Molecularly targeted agents (MTAs) have different toxicity profiles, which may lead to more long-lasting mild or moderate toxicities as well as to late-onset and cumulative toxicities. Several approved MTAs have been poorly tolerated during long-term administration, leading to postmarketing dose optimization studies to re-evaluate the optimal treatment dose. Using data from completed bortezomib dose-finding trials, we explore its toxicity profile, optimize its dose, and examine the appropriateness of current designs for identifying an optimal dose. We classified the toxicities captured from 481 patients in 14 bortezomib dose-finding studies conducted through the National Cancer Institute Cancer Therapy Evaluation Program, computed the incidence of late-onset toxicities, and compared the incidence of dose-limiting toxicities (DLTs) among groups of patients receiving different doses of bortezomib. A total of 13,008 toxicities were captured: 46% of patients' first DLTs and 88% of dose reductions or discontinuations of treatment because of toxicity were observed after the first cycle. Moreover, for the approved dose of 1.3 mg/m2, the estimated cumulative incidence of DLT was > 50%, and the estimated cumulative incidence of dose reduction or treatment discontinuation because of toxicity was nearly 40%. When considering the entire course of treatment, the approved bortezomib dose exceeds the conventional ceiling DLT rate of 20% to 33%. Retrospective analysis of trial data provides an opportunity for dose optimization of MTAs. Future dose-finding studies of MTAs should take into account late-onset toxicities to ensure that a tolerable dose is identified for future efficacy and comparative trials. © 2016 by American Society of Clinical Oncology.

  11. Biological Control of the Chagas Disease Vector Triatoma infestans with the Entomopathogenic Fungus Beauveria bassiana Combined with an Aggregation Cue: Field, Laboratory and Mathematical Modeling Assessment

    PubMed Central

    Forlani, Lucas; Pedrini, Nicolás; Girotti, Juan R.; Mijailovsky, Sergio J.; Cardozo, Rubén M.; Gentile, Alberto G.; Hernández-Suárez, Carlos M.; Rabinovich, Jorge E.; Juárez, M. Patricia

    2015-01-01

    Background Current Chagas disease vector control strategies, based on chemical insecticide spraying, are growingly threatened by the emergence of pyrethroid-resistant Triatoma infestans populations in the Gran Chaco region of South America. Methodology and findings We have already shown that the entomopathogenic fungus Beauveria bassiana has the ability to breach the insect cuticle and is effective both against pyrethroid-susceptible and pyrethroid-resistant T. infestans, in laboratory as well as field assays. It is also known that T. infestans cuticle lipids play a major role as contact aggregation pheromones. We estimated the effectiveness of pheromone-based infection boxes containing B. bassiana spores to kill indoor bugs, and its effect on the vector population dynamics. Laboratory assays were performed to estimate the effect of fungal infection on female reproductive parameters. The effect of insect exuviae as an aggregation signal in the performance of the infection boxes was estimated both in the laboratory and in the field. We developed a stage-specific matrix model of T. infestans to describe the fungal infection effects on insect population dynamics, and to analyze the performance of the biopesticide device in vector biological control. Conclusions The pheromone-containing infective box is a promising new tool against indoor populations of this Chagas disease vector, with the number of boxes per house being the main driver of the reduction of the total domestic bug population. This ecologically safe approach is the first proven alternative to chemical insecticides in the control of T. infestans. The advantageous reduction in vector population by delayed-action fungal biopesticides in a contained environment is here shown supported by mathematical modeling. PMID:25969989

  12. Estimating historical anthropogenic global sulfur emission patterns for the period 1850-1990

    NASA Astrophysics Data System (ADS)

    Lefohn, Allen S.; Husar, Janja D.; Husar, Rudolf B.

    It is important to establish a reliable regional emission inventory of sulfur as a function of time when assessing the possible effects of global change and acid rain. This study developed a database of annual estimates of national sulfur emissions from 1850 to 1990. A common methodology was applied across all years and countries allowing for global totals to be produced by adding estimates from all countries. The consistent approach facilitates the modification of the database and the observation of changes at national, regional, or global levels. The emission estimates were based on net production (i.e., production plus imports minus exports), sulfur content, and sulfur retention for each country's production activities. Because the emission estimates were based on the above considerations, our database offers an opportunity to independently compare our results with those estimates based on individual country estimates. Fine temporal resolution clearly shows emission changes associated with specific historical events (e.g., wars, depressions, etc.) on a regional, national, or global basis. The spatial pattern of emissions shows that the US, the USSR, and China were the main sulfur emitters (i.e., approximately 50% of the total) in the world in 1990. The USSR and the US appear to have stabilized their sulfur emissions over the past 20 yr, and the recent increases in global sulfur emissions are linked to the rapid increases in emissions from China. Sulfur emissions have been reduced in some cases by switching from high- to low-sulfur coals. Flue gas desulfurization (FGD) has apparently made important contributions to emission reductions in only a few countries, such as Germany.
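
    A minimal sketch of the per-country bookkeeping described above: emissions are net production times sulfur content times one minus the sulfur retention fraction, summed over activities. The activity table is invented for illustration.

    ```python
    # Sketch: national sulfur emissions from net production, sulfur content and retention.
    activities = [
        # (activity, net production kt/yr, sulfur content fraction, retention fraction) -- placeholders
        ("hard coal combustion", 50_000, 0.015, 0.05),
        ("fuel oil combustion",  12_000, 0.020, 0.00),
        ("copper smelting",         400, 0.300, 0.40),
    ]

    total_s = sum(prod * s_content * (1.0 - retention) for _, prod, s_content, retention in activities)
    print(f"national sulfur emissions ~ {total_s:.0f} kt S/yr")
    ```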

  13. The National Visitor Use Monitoring methodology and final results for round 1

    Treesearch

    S.J. Zarnoch; E.M. White; D.B.K. English; Susan M. Kocis; Ross Arnold

    2011-01-01

    A nationwide, systematic monitoring process has been developed to provide improved estimates of recreation visitation on National Forest System lands. Methodology is presented to provide estimates of site visits and national forest visits based on an onsite sampling design of site-days and last-exiting recreationists. Stratification of the site days, based on site type...

  14. Spacecraft alignment estimation. [for onboard sensors

    NASA Technical Reports Server (NTRS)

    Shuster, Malcolm D.; Bierman, Gerald J.

    1988-01-01

    A numerically well-behaved factorized methodology is developed for estimating spacecraft sensor alignments from prelaunch and inflight data without the need to compute the spacecraft attitude or angular velocity. Such a methodology permits the estimation of sensor alignments (or other biases) in a framework free of unknown dynamical variables. In actual mission implementation such an algorithm is usually better behaved than one that must compute sensor alignments simultaneously with the spacecraft attitude, for example by means of a Kalman filter. In particular, such a methodology is less sensitive to data dropouts of long duration, and the derived measurement used in the attitude-independent algorithm usually makes data checking and editing of outliers much simpler than would be the case in the filter.

  15. Using State Estimation Residuals to Detect Abnormal SCADA Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Jian; Chen, Yousu; Huang, Zhenyu

    2010-04-30

    Detection of abnormal supervisory control and data acquisition (SCADA) data is critically important for safe and secure operation of modern power systems. In this paper, a methodology of abnormal SCADA data detection based on state estimation residuals is presented. Preceded with a brief overview of outlier detection methods and bad SCADA data detection for state estimation, the framework of the proposed methodology is described. Instead of using original SCADA measurements as the bad data sources, the residuals calculated based on the results of the state estimator are used as the input for the outlier detection algorithm. The BACON algorithm is applied to the outlier detection task. The IEEE 118-bus system is used as a test base to evaluate the effectiveness of the proposed methodology. The accuracy of the BACON method is compared with that of the 3-σ method for the simulated SCADA measurements and residuals.

  16. Probabilistic Methodology for Estimation of Number and Economic Loss (Cost) of Future Landslides in the San Francisco Bay Region, California

    USGS Publications Warehouse

    Crovelli, Robert A.; Coe, Jeffrey A.

    2008-01-01

    The Probabilistic Landslide Assessment Cost Estimation System (PLACES) presented in this report estimates the number and economic loss (cost) of landslides during a specified future time in individual areas, and then calculates the sum of those estimates. The analytic probabilistic methodology is based upon conditional probability theory and laws of expectation and variance. The probabilistic methodology is expressed in the form of a Microsoft Excel computer spreadsheet program. Using historical records, the PLACES spreadsheet is used to estimate the number of future damaging landslides and total damage, as economic loss, from future landslides caused by rainstorms in 10 counties of the San Francisco Bay region in California. Estimates are made for any future 5-year period of time. The estimated total number of future damaging landslides for the entire 10-county region during any future 5-year period of time is about 330. Santa Cruz County has the highest estimated number of damaging landslides (about 90), whereas Napa, San Francisco, and Solano Counties have the lowest estimated number of damaging landslides (5–6 each). Estimated direct costs from future damaging landslides for the entire 10-county region for any future 5-year period are about US $76 million (year 2000 dollars). San Mateo County has the highest estimated costs ($16.62 million), and Solano County has the lowest estimated costs (about $0.90 million). Estimated direct costs are also subdivided into public and private costs.

  17. Method to monitor HC-SCR catalyst NOx reduction performance for lean exhaust applications

    DOEpatents

    Viola, Michael B [Macomb Township, MI; Schmieg, Steven J [Troy, MI; Sloane, Thompson M [Oxford, MI; Hilden, David L [Shelby Township, MI; Mulawa, Patricia A [Clinton Township, MI; Lee, Jong H [Rochester Hills, MI; Cheng, Shi-Wai S [Troy, MI

    2012-05-29

    A method for initiating a regeneration mode in selective catalytic reduction device utilizing hydrocarbons as a reductant includes monitoring a temperature within the aftertreatment system, monitoring a fuel dosing rate to the selective catalytic reduction device, monitoring an initial conversion efficiency, selecting a determined equation to estimate changes in a conversion efficiency of the selective catalytic reduction device based upon the monitored temperature and the monitored fuel dosing rate, estimating changes in the conversion efficiency based upon the determined equation and the initial conversion efficiency, and initiating a regeneration mode for the selective catalytic reduction device based upon the estimated changes in conversion efficiency.

  18. A POLLUTION REDUCTION METHODOLOGY FOR CHEMICAL PROCESS SIMULATORS

    EPA Science Inventory

    A pollution minimization methodology was developed for chemical process design using computer simulation. It is based on a pollution balance that at steady state is used to define a pollution index with units of mass of pollution per mass of products. The pollution balance has be...

  19. Assessment of Anthelmintic Efficacy of Mebendazole in School Children in Six Countries Where Soil-Transmitted Helminths Are Endemic

    PubMed Central

    Levecke, Bruno; Montresor, Antonio; Albonico, Marco; Ame, Shaali M.; Behnke, Jerzy M.; Bethony, Jeffrey M.; Noumedem, Calvine D.; Engels, Dirk; Guillard, Bertrand; Kotze, Andrew C.; Krolewiecki, Alejandro J.; McCarthy, James S.; Mekonnen, Zeleke; Periago, Maria V.; Sopheak, Hem; Tchuem-Tchuenté, Louis-Albert; Duong, Tran Thanh; Huong, Nguyen Thu; Zeynudin, Ahmed; Vercruysse, Jozef

    2014-01-01

    Background Robust reference values for fecal egg count reduction (FECR) rates of the most widely used anthelmintic drugs in preventive chemotherapy (PC) programs for controlling soil-transmitted helminths (STHs; Ascaris lumbricoides, Trichuris trichiura, and hookworm) are still lacking. However, they are urgently needed to ensure detection of reduced efficacies that are predicted to occur due to growing drug pressure. Here, using a standardized methodology, we assessed the FECR rate of a single oral dose of mebendazole (MEB; 500 mg) against STHs in six trials in school children in different locations around the world. Our results are compared with those previously obtained for similarly conducted trials of a single oral dose of albendazole (ALB; 400 mg). Methodology The efficacy of MEB, as assessed by FECR, was determined in six trials involving 5,830 school children in Brazil, Cambodia, Cameroon, Ethiopia, United Republic of Tanzania, and Vietnam. The efficacy of MEB was compared to that of ALB as previously assessed in 8,841 school children in India and all the above-mentioned study sites, using identical methodologies. Principal Findings The estimated FECR rate [95% confidence interval] of MEB was highest for A. lumbricoides (97.6% [95.8; 99.5]), followed by hookworm (79.6% [71.0; 88.3]). For T. trichiura, the estimated FECR rate was 63.1% [51.6; 74.6]. Compared to MEB, ALB was significantly more efficacious against hookworm (96.2% [91.1; 100], p<0.001) and only marginally, although significantly, better against A. lumbricoides infections (99.9% [99.0; 100], p = 0.012), but equally efficacious for T. trichiura infections (64.5% [44.4; 84.7], p = 0.906). Conclusions/Significance A minimum FECR rate of 95% for A. lumbricoides, 70% for hookworm, and 50% for T. trichiura is expected in MEB-dependent PC programs. Lower FECR results may indicate the development of potential drug resistance. PMID:25299391
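
    The efficacy metric itself is simple to illustrate: the fecal egg count reduction (FECR) rate compares group arithmetic mean egg counts at baseline and follow-up. The egg counts below are synthetic, not the trial data.

    ```python
    # Sketch: FECR rate from group arithmetic mean egg counts before and after treatment.
    import numpy as np

    pre = np.array([480, 1200, 240, 960, 720, 96, 2400, 360])   # eggs per gram, baseline (synthetic)
    post = np.array([0, 24, 0, 48, 24, 0, 120, 0])              # eggs per gram, follow-up (synthetic)

    fecr = 100.0 * (1.0 - post.mean() / pre.mean())
    print(f"FECR = {fecr:.1f}%")
    ```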

  20. Development and Validation of a New Air Carrier Block Time Prediction Model and Methodology

    NASA Astrophysics Data System (ADS)

    Litvay, Robyn Olson

    Commercial airline operations rely on predicted block times as the foundation for critical, successive decisions that include fuel purchasing, crew scheduling, and airport facility usage planning. Small inaccuracies in the predicted block times have the potential to result in huge financial losses, and, with profit margins for airline operations currently almost nonexistent, potentially negate any possible profit. Although optimization techniques have resulted in many models targeting airline operations, the challenge of accurately predicting and quantifying variables months in advance remains elusive. The objective of this work is the development of an airline block time prediction model and methodology that is practical, easily implemented, and easily updated. Research was accomplished, and actual U.S., domestic, flight data from a major airline was utilized, to develop a model to predict airline block times with increased accuracy and smaller variance in the actual times from the predicted times. This reduction in variance represents tens of millions of dollars (U.S.) per year in operational cost savings for an individual airline. A new methodology for block time prediction is constructed using a regression model as the base, as it has both deterministic and probabilistic components, and historic block time distributions. The estimation of the block times for commercial, domestic, airline operations requires a probabilistic, general model that can be easily customized for a specific airline’s network. As individual block times vary by season, by day, and by time of day, the challenge is to make general, long-term estimations representing the average, actual block times while minimizing the variation. Predictions of block times for the third quarter months of July and August of 2011 were calculated using this new model. The resulting, actual block times were obtained from the Research and Innovative Technology Administration, Bureau of Transportation Statistics (Airline On-time Performance Data, 2008-2011) for comparison and analysis. Future block times are shown to be predicted with greater accuracy, without exception and network-wide, for a major, U.S., domestic airline.

  1. Carbonate pore system evaluation using the velocity-porosity-pressure relationship, digital image analysis, and differential effective medium theory

    NASA Astrophysics Data System (ADS)

    Lima Neto, Irineu A.; Misságia, Roseane M.; Ceia, Marco A.; Archilha, Nathaly L.; Oliveira, Lucas C.

    2014-11-01

    Carbonate reservoirs exhibit heterogeneous pore systems and a wide variety of grain types, which affect the rock's elastic properties and the reservoir parameter relationships. To study the Albian carbonates in the Campos Basin, a methodology is proposed to predict the amount of microporosity and the representative aspect ratio of these inclusions. The method assumes three pore-space scales in two representative inclusion scenarios: 1) a macro-mesopore median aspect ratio from the thin-section digital image analysis (DIA) and 2) a microporosity aspect ratio predicted based on the measured P-wave velocities. Through a laboratory analysis of 10 grainstone core samples of the Albian age, the P- and S-wave velocities (Vp and Vs) are evaluated at effective pressures of 0-10 MPa. The analytical theories in the proposed methodology are functions of the aspect ratios from the differential effective medium (DEM) theory, the macro-mesopore system recognized from the DIA, the amount of microporosity determined by the difference between the porosities estimated from laboratory helium-gas measurements and the thin-section petrographic images, and the P-wave velocities under dry effective pressure conditions. The DIA procedure is applied to estimate the local and global parameters, and the textural implications concerning ultrasonic velocities and image resolution. The macro-mesopore inclusions contribute to stiffer rocks and higher velocities, whereas the microporosity inclusions contribute to softer rocks and lower velocities. We observe a high potential for this methodology, which uses the microporosity aspect ratio inverted from Vp to predict Vs with good agreement. The results acceptably characterize the Albian grainstones. The representative macro-mesopore aspect ratio is 0.5, and the inverted microporosity aspect ratio ranges from 0.01 to 0.07. The effective pressure induced a slight porosity reduction during the triaxial tests, mainly in the microporosity inclusions, slightly changing the amount and the aspect ratio of the microporosity.
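
A minimal sketch of two bookkeeping steps named above: (1) the amount of microporosity taken as the difference between the helium porosity and the macro-mesoporosity resolved by digital image analysis, and (2) the P-wave velocity implied by a set of effective elastic moduli. All numeric values are hypothetical; the DEM inversion for the microporosity aspect ratio itself is not reproduced here.

```python
import math

phi_helium = 0.28   # total porosity from helium expansion (fraction, assumed)
phi_image = 0.19    # macro-mesoporosity resolved by digital image analysis (assumed)
phi_micro = phi_helium - phi_image
print(f"microporosity = {phi_micro:.2f} ({phi_micro / phi_helium:.0%} of total)")

# Velocities of a dry rock from effective bulk (K) and shear (G) moduli and bulk density.
K, G = 24e9, 18e9   # Pa, hypothetical effective moduli after adding inclusions
rho = 2280.0        # kg/m^3, hypothetical bulk density at this porosity
vp = math.sqrt((K + 4.0 * G / 3.0) / rho)
vs = math.sqrt(G / rho)
print(f"Vp = {vp:.0f} m/s, Vs = {vs:.0f} m/s")
```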

  2. Vaccine Effects on Heterogeneity in Susceptibility and Implications for Population Health Management.

    PubMed

    Langwig, Kate E; Wargo, Andrew R; Jones, Darbi R; Viss, Jessie R; Rutan, Barbara J; Egan, Nicholas A; Sá-Guimarães, Pedro; Kim, Min Sun; Kurath, Gael; Gomes, M Gabriela M; Lipsitch, Marc

    2017-11-21

    Heterogeneity in host susceptibility is a key determinant of infectious disease dynamics but is rarely accounted for in assessment of disease control measures. Understanding how susceptibility is distributed in populations, and how control measures change this distribution, is integral to predicting the course of epidemics with and without interventions. Using multiple experimental and modeling approaches, we show that rainbow trout have relatively homogeneous susceptibility to infection with infectious hematopoietic necrosis virus and that vaccination increases heterogeneity in susceptibility in a nearly all-or-nothing fashion. In a simple transmission model with an R0 of 2, the highly heterogeneous vaccine protection would cause a 35 percentage-point reduction in outbreak size over an intervention inducing homogeneous protection at the same mean level. More broadly, these findings provide validation of methodology that can help to reduce biases in predictions of vaccine impact in natural settings and provide insight into how vaccination shapes population susceptibility. IMPORTANCE Differences among individuals influence transmission and spread of infectious diseases as well as the effectiveness of control measures. Control measures, such as vaccines, may provide leaky protection, protecting all hosts to an identical degree, or all-or-nothing protection, protecting some hosts completely while leaving others completely unprotected. This distinction can have a dramatic influence on disease dynamics, yet this distribution of protection is frequently unaccounted for in epidemiological models and estimates of vaccine efficacy. Here, we apply new methodology to experimentally examine host heterogeneity in susceptibility and mode of vaccine action as distinct components influencing disease outcome. Through multiple experiments and new modeling approaches, we show that the distribution of vaccine effects can be robustly estimated. These results offer new experimental and inferential methodology that can improve predictions of vaccine effectiveness and have broad applicability to human, wildlife, and ecosystem health. Copyright © 2017 Langwig et al.
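
A minimal sketch (not the authors' model) of why the mode of vaccine action matters: final epidemic sizes from a standard SIR final-size relation for a leaky vaccine versus an all-or-nothing vaccine with the same mean efficacy. The R0, coverage, and efficacy values are illustrative assumptions and will not reproduce the 35 percentage-point figure quoted above; they only show the direction of the effect (all-or-nothing protection yields the smaller outbreak).

```python
import numpy as np

def final_size(R0, groups, tol=1e-12):
    """groups: list of (population fraction, relative susceptibility).
    Solves z_i = 1 - exp(-sigma_i * R0 * Z) by fixed-point iteration,
    where Z is the overall fraction of the population infected."""
    Z = 0.5
    for _ in range(10_000):
        z = np.array([1.0 - np.exp(-s * R0 * Z) for _, s in groups])
        Z_new = sum(f * zi for (f, _), zi in zip(groups, z))
        if abs(Z_new - Z) < tol:
            break
        Z = Z_new
    return Z

R0, coverage, efficacy = 2.0, 0.5, 0.7   # illustrative values

# Leaky: every vaccinee keeps a reduced susceptibility (1 - efficacy).
leaky = final_size(R0, [(1 - coverage, 1.0), (coverage, 1.0 - efficacy)])
# All-or-nothing: a fraction coverage*efficacy is fully immune, the rest fully susceptible.
all_or_nothing = final_size(R0, [(1 - coverage * efficacy, 1.0),
                                 (coverage * efficacy, 0.0)])
print(f"leaky: {leaky:.1%}, all-or-nothing: {all_or_nothing:.1%}")
```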

  3. Methodologies for Verification and Validation of Space Launch System (SLS) Structural Dynamic Models: Appendices

    NASA Technical Reports Server (NTRS)

    Coppolino, Robert N.

    2018-01-01

    Verification and validation (V&V) is a highly challenging undertaking for SLS structural dynamics models due to the magnitude and complexity of SLS subsystems and subassemblies. Responses to challenges associated with V&V of Space Launch System (SLS) structural dynamics models are presented in Volume I of this paper. Four methodologies addressing specific requirements for V&V are discussed: (1) Residual Mode Augmentation (RMA); (2) Modified Guyan Reduction (MGR) and Harmonic Reduction (HR, introduced in 1976); (3) Mode Consolidation (MC); and (4) Experimental Mode Verification (EMV). This document contains the appendices to Volume I.
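
A minimal sketch of classical Guyan (static) reduction, the family of techniques named in item (2): slave degrees of freedom are condensed out through the static stiffness relation. The 4-DOF spring-mass chain is an illustrative stand-in, not an SLS model, and the modified variants discussed in the paper are not reproduced.

```python
import numpy as np

def guyan_reduce(K, M, masters):
    """Return (K_r, M_r) condensed to the listed master DOFs."""
    n = K.shape[0]
    slaves = [i for i in range(n) if i not in masters]
    Kss = K[np.ix_(slaves, slaves)]
    Ksm = K[np.ix_(slaves, masters)]
    T = np.zeros((n, len(masters)))          # transformation u = T @ u_m
    for j, dof in enumerate(masters):
        T[dof, j] = 1.0
    T[slaves, :] = -np.linalg.solve(Kss, Ksm)  # slaves follow the static solution
    return T.T @ K @ T, T.T @ M @ T

k, m = 1.0e5, 2.0
K = k * np.array([[ 2., -1.,  0.,  0.],
                  [-1.,  2., -1.,  0.],
                  [ 0., -1.,  2., -1.],
                  [ 0.,  0., -1.,  1.]])
M = m * np.eye(4)
K_r, M_r = guyan_reduce(K, M, masters=[0, 3])

full_w = np.sqrt(np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real))
red_w = np.sqrt(np.sort(np.linalg.eigvals(np.linalg.solve(M_r, K_r)).real))
print("lowest full-model natural frequency (rad/s):", round(full_w[0], 2))
print("lowest reduced-model natural frequency (rad/s):", round(red_w[0], 2))
```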

  4. Trends over 5 Decades in U.S. Occupation-Related Physical Activity and Their Associations with Obesity

    PubMed Central

    Church, Timothy S.; Thomas, Diana M.; Tudor-Locke, Catrine; Katzmarzyk, Peter T.; Earnest, Conrad P.; Rodarte, Ruben Q.; Martin, Corby K.; Blair, Steven N.; Bouchard, Claude

    2011-01-01

    Background The true causes of the obesity epidemic are not well understood and there are few longitudinal population-based data published examining this issue. The objective of this analysis was to examine trends in occupational physical activity during the past 5 decades and explore how these trends relate to concurrent changes in body weight in the U.S. Methodology/Principal Findings Analysis of energy expenditure for occupations in U.S. private industry since 1960 using data from the U.S. Bureau of Labor Statistics. Mean body weight was derived from the U.S. National Health and Nutrition Examination Surveys (NHANES). In the early 1960s, almost half the jobs in private industry in the U.S. required at least moderate intensity physical activity, whereas now less than 20% demand this level of energy expenditure. Since 1960 the estimated mean daily energy expenditure due to work-related physical activity has dropped by more than 100 calories in both women and men. Energy balance model predicted weights based on the change in occupation-related daily energy expenditure since 1960 for each NHANES examination period closely matched the actual change in weight for 40–50 year old men and women. For example, from 1960–62 to 2003–06 we estimated that the occupation-related daily energy expenditure decreased by 142 calories in men. Given a baseline weight of 76.9 kg in 1960–62, we estimated that a 142 calorie reduction would result in an increase in mean weight to 89.7 kg, which closely matched the mean NHANES weight of 91.8 kg in 2003–06. The results were similar for women. Conclusion Over the last 50 years in the U.S. we estimate that daily occupation-related energy expenditure has decreased by more than 100 calories, and this reduction in energy expenditure accounts for a significant portion of the increase in mean U.S. body weights for women and men. PMID:21647427

  5. Estimating the Global Prevalence of Inadequate Zinc Intake from National Food Balance Sheets: Effects of Methodological Assumptions

    PubMed Central

    Wessells, K. Ryan; Singh, Gitanjali M.; Brown, Kenneth H.

    2012-01-01

    Background The prevalence of inadequate zinc intake in a population can be estimated by comparing the zinc content of the food supply with the population’s theoretical requirement for zinc. However, assumptions regarding the nutrient composition of foods, zinc requirements, and zinc absorption may affect prevalence estimates. These analyses were conducted to: (1) evaluate the effect of varying methodological assumptions on country-specific estimates of the prevalence of dietary zinc inadequacy and (2) generate a model considered to provide the best estimates. Methodology and Principal Findings National food balance data were obtained from the Food and Agriculture Organization of the United Nations. Zinc and phytate contents of these foods were estimated from three nutrient composition databases. Zinc absorption was predicted using a mathematical model (Miller equation). Theoretical mean daily per capita physiological and dietary requirements for zinc were calculated using recommendations from the Food and Nutrition Board of the Institute of Medicine and the International Zinc Nutrition Consultative Group. The estimated global prevalence of inadequate zinc intake varied from 12% to 66%, depending on which methodological assumptions were applied. However, the country-specific rank order of the estimated prevalence of inadequate intake was conserved across all models (r = 0.57–0.99, P<0.01). A “best-estimate” model, comprising zinc and phytate data from a composite nutrient database and IZiNCG physiological requirements for absorbed zinc, estimated the global prevalence of inadequate zinc intake to be 17.3%. Conclusions and Significance Given the multiple sources of uncertainty in this method, caution must be taken in the interpretation of the estimated prevalence figures. However, the results of all models indicate that inadequate zinc intake may be fairly common globally. Inferences regarding the relative likelihood of zinc deficiency as a public health problem in different countries can be drawn based on the country-specific rank order of estimated prevalence of inadequate zinc intake. PMID:23209781
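
A minimal sketch of the prevalence logic described above: compare a population distribution of absorbed zinc against a physiological requirement and report the fraction falling below it (an EAR cut-point style calculation). The mean absorbed intake, its variability, and the requirement are illustrative assumptions, not values from the study, and the Miller absorption equation itself is not reproduced.

```python
from scipy.stats import norm

mean_absorbed_mg = 2.8    # mean daily absorbed zinc per capita (assumed)
cv = 0.25                 # assumed inter-individual coefficient of variation
requirement_mg = 2.04     # assumed mean physiological requirement for absorbed zinc

sd = cv * mean_absorbed_mg
# Fraction of the (assumed normal) intake distribution below the requirement.
prevalence_inadequate = norm.cdf(requirement_mg, loc=mean_absorbed_mg, scale=sd)
print(f"estimated prevalence of inadequate intake: {prevalence_inadequate:.1%}")
```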

  6. Learning a Novel Detection Metric for the Detection of O’Connell Effect Eclipsing Binaries

    NASA Astrophysics Data System (ADS)

    Johnston, Kyle; Haber, Rana; Knote, Matthew; Caballero-Nieves, Saida Maria; Peter, Adrian; Petit, Véronique

    2018-01-01

    With the advent of digital astronomy, new benefits and new challenges have been presented to the modern day astronomer. No longer can the astronomer rely on manual processing, instead the profession as a whole has begun to adopt more advanced computational means. Here we focus on the construction and application of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern detection algorithm for the targeted identification of eclipsing binaries which demonstrate a feature known as the O’Connell Effect. A methodology for the reduction of stellar variable observations (time-domain data) into Distribution Fields (DF) is presented. Push-Pull metric learning, a variant of LMNN learning, is used to generate a learned distance metric for the specific detection problem proposed. The metric will be trained on a set of a labelled Kepler eclipsing binary data, in particular systems showing the O’Connell effect. Performance estimates will be presented, as well the results of the detector applied to an unlabeled Kepler EB data set; this work is a crucial step in the upcoming era of big data from the next generation of big telescopes, such as LSST.
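
A minimal sketch of reducing a folded light curve to a Distribution Field as described above: a 2-D histogram over (phase bin, normalized magnitude bin), with each phase column normalized to a distribution. The toy light curve and bin counts are assumptions; the Push-Pull metric-learning stage is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
phase = rng.uniform(0.0, 1.0, 2000)
# Toy light curve: two eclipses of unequal depth plus a mild out-of-eclipse
# asymmetry (an O'Connell-like signature) and noise.
mag = (0.6 * np.exp(-(phase - 0.25) ** 2 / 0.002)
       + 0.4 * np.exp(-(phase - 0.75) ** 2 / 0.002)
       + 0.05 * np.sin(2 * np.pi * phase)
       + rng.normal(0.0, 0.02, phase.size))

def distribution_field(phase, mag, n_phase=32, n_mag=24):
    m = (mag - mag.min()) / (mag.max() - mag.min())   # normalize magnitude to [0, 1]
    H, _, _ = np.histogram2d(phase, m, bins=[n_phase, n_mag], range=[[0, 1], [0, 1]])
    col_sums = H.sum(axis=1, keepdims=True)
    return H / np.where(col_sums == 0, 1.0, col_sums)  # each phase column sums to 1

df = distribution_field(phase, mag)
print(df.shape, np.round(df.sum(axis=1)[:5], 3))
```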

  7. Development and application of a methodology for a clean development mechanism to avoid methane emissions in closed landfills.

    PubMed

    Janke, Leandro; Lima, André O S; Millet, Maurice; Radetski, Claudemir M

    2013-01-01

    In Brazil, Solid Waste Disposal Sites have operated without consideration of environmental criteria, these areas being characterized by methane (CH4) emissions during the anaerobic degradation of organic matter. The United Nations organization has made efforts to control this situation, through the United Nations Framework Convention on Climate Change (UNFCCC) and the Kyoto Protocol, where projects that seek to reduce the emissions of greenhouse gases (GHG) can be financially rewarded through Certified Emission Reductions (CERs) if they respect the requirements established by the Clean Development Mechanism (CDM), such as the use of methodologies approved by the CDM Executive Board (CDM-EB). Thus, a methodology was developed according to the CDM standards related to the aeration, excavation and composting of closed Municipal Solid Waste (MSW) landfills, which was submitted to CDM-EB for assessment and, after its approval, applied to a real case study in Maringá City (Brazil) with a view to avoiding negative environmental impacts due the production of methane and leachates even after its closure. This paper describes the establishment of this CDM-EB-approved methodology to determine baseline emissions, project emissions and the resultant emission reductions with the application of appropriate aeration, excavation and composting practices at closed MSW landfills. A further result obtained through the application of the methodology in the landfill case study was that it would be possible to achieve an ex-ante emission reduction of 74,013 tCO2 equivalent if the proposed CDM project activity were implemented.
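
A minimal sketch of the accounting step described above: the claimed emission reduction equals baseline emissions (methane the closed landfill would have released) minus project emissions from the aeration, excavation, and composting activity. The first-order decay parameters and activity data are illustrative assumptions, not values from the approved CDM methodology or the Maringá case study.

```python
import math

def baseline_ch4_tco2e(doc_tonnes, doc_f=0.5, ch4_carbon_fraction=0.5,
                       k=0.17, years=10, gwp_ch4=21):
    """Very simplified first-order decay: CH4 generated over the crediting
    period from the degradable organic carbon (DOC) left in the landfill,
    expressed as tonnes of CO2 equivalent."""
    carbon_degraded = doc_tonnes * doc_f * (1 - math.exp(-k * years))
    ch4_tonnes = carbon_degraded * ch4_carbon_fraction * 16.0 / 12.0
    return ch4_tonnes * gwp_ch4

baseline = baseline_ch4_tco2e(doc_tonnes=20_000)   # assumed DOC in place
project = 6_000    # assumed tCO2e emitted by the project activity itself
print(f"ex-ante emission reduction: {baseline - project:,.0f} tCO2e")
```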

  8. Report: Implementation Plan With Cost Sharing Methodology Needed for Region 8 Senior Environmental Employee Work on Lead Risk Reduction

    EPA Pesticide Factsheets

    Report #13-P-0430, September 24, 2013. The two Region 8 program offices that jointly implement the Lead Renovation, Repair and Painting Program do not have methodology or agreement for sharing SEE funding, which has led to confusion.

  9. 76 FR 38189 - New Proposed Collection; Comment Request; Environmental Science Formative Research Methodology...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-29

    ...; Comment Request; Environmental Science Formative Research Methodology Studies for the National Children's Study SUMMARY: In compliance with the requirement of Section 3506(c)(2)(A) of the Paperwork Reduction... comment was received. The comment questioned the cost and utility of the study specifically and of...

  10. Rapid Prototyping Methodology in Action: A Developmental Study.

    ERIC Educational Resources Information Center

    Jones, Toni Stokes; Richey, Rita C.

    2000-01-01

    Investigated the use of rapid prototyping methodologies in two projects conducted in a natural work setting to determine the nature of its use by designers and customers and the extent to which its use enhances traditional instructional design. Discusses design and development cycle-time reduction, product quality, and customer and designer…

  11. 76 FR 39876 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-07

    ... Survey--Pretest of Proposed Questions and Methodology.'' In accordance with the Paperwork Reduction Act... Health Plan Survey-- Pretest of Proposed Questions and Methodology The Consumer Assessment of Healthcare... year to year. The CAHPS[supreg] program was designed to: Make it possible to compare survey results...

  12. 76 FR 57046 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-15

    ... Survey--Pretest of Proposed Questions and Methodology.'' In accordance with the Paperwork Reduction Act... Health Plan Survey-- Pretest of Proposed Questions and Methodology The Consumer Assessment of Healthcare... often changed from year to year. The CAHPS[reg] program was designed to: Make it possible to compare...

  13. Determining the Effects of High Intensity Ultrasound on the Reduction of Microbes in Milk and Orange Juice Using Response Surface Methodology.

    PubMed

    Ganesan, Balasubramanian; Martini, Silvana; Solorio, Jonathan; Walsh, Marie K

    2015-01-01

    This study investigated the effects of high intensity ultrasound (temperature, amplitude, and time) on the inactivation of indigenous bacteria in pasteurized milk, Bacillus atrophaeus spores inoculated into sterile milk, and Saccharomyces cerevisiae inoculated into sterile orange juice using response surface methodology. The variables investigated were sonication temperature (range from 0 to 84°C), amplitude (range from 0 to 216 μm), and time (range from 0.17 to 5 min) on the response, log microbe reduction. Data were analyzed by statistical analysis system software and three models were developed, each for bacteria, spore, and yeast reduction. Regression analysis identified sonication temperature and amplitude to be significant variables on microbe reduction. Optimization of the inactivation of microbes was found to be at 84.8°C, 216 μm amplitude, and 5.8 min. In addition, the predicted log reductions of microbes at common processing conditions (72°C for 20 sec) using 216 μm amplitude were computed. The experimental responses for bacteria, spore, and yeast reductions fell within the predicted levels, confirming the accuracy of the models.
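
A minimal sketch of a second-order response surface fit of the kind used above: log reduction modeled as a full quadratic in temperature, amplitude, and time, fitted by least squares. The data below are synthetic placeholders, not the study's measurements, so the fitted surface and prediction are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.uniform(0, 84, 60)      # sonication temperature, deg C
A = rng.uniform(0, 216, 60)     # amplitude, micrometers
t = rng.uniform(0.17, 5, 60)    # time, minutes
# Synthetic response standing in for measured log microbe reduction.
logred = 0.03 * T + 0.008 * A + 0.2 * t + 0.0002 * T * A + rng.normal(0, 0.2, 60)

def design(T, A, t):
    """Full quadratic (second-order) design matrix."""
    return np.column_stack([np.ones_like(T), T, A, t,
                            T * A, T * t, A * t,
                            T**2, A**2, t**2])

beta, *_ = np.linalg.lstsq(design(T, A, t), logred, rcond=None)

# Predict the log reduction at a common pasteurization-like condition (72 C, 20 s).
pred = design(np.array([72.0]), np.array([216.0]), np.array([20 / 60])) @ beta
print(f"predicted log reduction at 72 C, 216 um, 20 s: {pred[0]:.2f}")
```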

  14. Determining the Effects of High Intensity Ultrasound on the Reduction of Microbes in Milk and Orange Juice Using Response Surface Methodology

    PubMed Central

    Martini, Silvana; Solorio, Jonathan; Walsh, Marie K.

    2015-01-01

    This study investigated the effects of high intensity ultrasound (temperature, amplitude, and time) on the inactivation of indigenous bacteria in pasteurized milk, Bacillus atrophaeus spores inoculated into sterile milk, and Saccharomyces cerevisiae inoculated into sterile orange juice using response surface methodology. The variables investigated were sonication temperature (range from 0 to 84°C), amplitude (range from 0 to 216 μm), and time (range from 0.17 to 5 min) on the response, log microbe reduction. Data were analyzed by statistical analysis system software and three models were developed, each for bacteria, spore, and yeast reduction. Regression analysis identified sonication temperature and amplitude to be significant variables on microbe reduction. Optimization of the inactivation of microbes was found to be at 84.8°C, 216 μm amplitude, and 5.8 min. In addition, the predicted log reductions of microbes at common processing conditions (72°C for 20 sec) using 216 μm amplitude were computed. The experimental responses for bacteria, spore, and yeast reductions fell within the predicted levels, confirming the accuracy of the models. PMID:26904659

  15. Methodological quality of diagnostic accuracy studies on non-invasive coronary CT angiography: influence of QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) items on sensitivity and specificity.

    PubMed

    Schueler, Sabine; Walther, Stefan; Schuetz, Georg M; Schlattmann, Peter; Dewey, Marc

    2013-06-01

    To evaluate the methodological quality of diagnostic accuracy studies on coronary computed tomography (CT) angiography using the QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) tool. Each QUADAS item was individually defined to adapt it to the special requirements of studies on coronary CT angiography. Two independent investigators analysed 118 studies using 12 QUADAS items. Meta-regression and pooled analyses were performed to identify possible effects of methodological quality items on estimates of diagnostic accuracy. The overall methodological quality of coronary CT studies was merely moderate. They fulfilled a median of 7.5 out of 12 items. Only 9 of the 118 studies fulfilled more than 75 % of possible QUADAS items. One QUADAS item ("Uninterpretable Results") showed a significant influence (P = 0.02) on estimates of diagnostic accuracy with "no fulfilment" increasing specificity from 86 to 90 %. Furthermore, pooled analysis revealed that each QUADAS item that is not fulfilled has the potential to change estimates of diagnostic accuracy. The methodological quality of studies investigating the diagnostic accuracy of non-invasive coronary CT is only moderate and was found to affect the sensitivity and specificity. An improvement is highly desirable because good methodology is crucial for adequately assessing imaging technologies. • Good methodological quality is a basic requirement in diagnostic accuracy studies. • Most coronary CT angiography studies have only been of moderate design quality. • Weak methodological quality will affect the sensitivity and specificity. • No improvement in methodological quality was observed over time. • Authors should consider the QUADAS checklist when undertaking accuracy studies.

  16. Lean Methodology Reduces Inappropriate Use of Antipsychotics for Agitation at a Psychiatric Hospital.

    PubMed

    Goga, Joshana K; Depaolo, Antonio; Khushalani, Sunil; Walters, J Ken; Roca, Robert; Zisselman, Marc; Borleis, Christopher

    2017-01-01

    To evaluate the effects of applying Lean methodology (improving quality and increasing efficiency by eliminating waste and reducing costs) as an approach to decrease the prescribing frequency of antipsychotics for the indication of agitation. Historically controlled study. Sheppard Pratt Health System is the largest private provider of psychiatric care in Maryland, with a total bed capacity of 300. There were 4,337 patient days from November 1, 2012 to October 31, 2013 on the dementia unit. All patients admitted to the dementia unit were 65 years of age and older with a primary diagnosis of dementia. Our multidisciplinary team used Lean methodology to identify the root causes and interventions necessary to reduce inappropriate antipsychotic use. The primary outcome was the rate of inappropriately indicating agitation as the rationale when prescribing antipsychotic medications. There was a 90% (P < 0.001) reduction in the rate of antipsychotic prescribing with an indication of agitation. The Lean methodology interventions led to a 90% (P < 0.001) reduction in the rate of antipsychotic prescribing with an indication of agitation and a 10% reduction in overall antipsychotic prescribing. Key words: agitation, Alzheimer's, antipsychotics, behavioral and psychological symptoms of dementia, Centers for Medicare & Medicaid Services, dementia, root-cause analysis. BPSD = behavioral and psychological symptoms of dementia; CATIE-AD = Clinical Antipsychotic Trials of Intervention Effectiveness in Alzheimer's Disease; EMR = electronic medical records; GAO = Government Accountability Office; GNCIS = Geriatric Neuropsychiatric Clinical Indicator Scale.

  17. Reductive ring closure methodology toward heteroacenes bearing a dihydropyrrolo[3,2-b]pyrrole core: scope and limitation.

    PubMed

    Qiu, Li; Wang, Xiao; Zhao, Na; Xu, Shiliang; An, Zengjian; Zhuang, Xuhui; Lan, Zhenggang; Wen, Lirong; Wan, Xiaobo

    2014-12-05

    A newly developed reductive ring closure methodology to heteroacenes bearing a dihydropyrrolo[3,2-b]pyrrole core was systematically studied for its scope and limitations. The methodology involves (i) the cyclization of an o-aminobenzoic acid ester derivative to give an eight-membered cyclic dilactam, (ii) the conversion of the dilactam into the corresponding diimidoyl chloride, and (iii) reductive ring closure of the diimidoyl chloride to install the dihydropyrrolo[3,2-b]pyrrole core. The first step of the methodology plays the key role because of its substrate limitation, as it suffers from the competition of oligomerization and hydrolysis. All the dilactams could be successfully converted to the corresponding diimidoyl chlorides, most of which went on to give the dihydropyrrolo[3,2-b]pyrrole core. The influence of the substituents and of the elongation of the conjugation length on the photophysical properties of the obtained heteroacenes was then investigated systematically using UV-vis spectroscopy and cyclic voltammetry. It was found that chlorination and fluorination had quite different effects on the photophysical properties of the heteroacene, and the ring-fusing pattern also had a drastic influence on the band gap of the heteroacene. The successful preparation of a series of heteroacenes bearing a dihydropyrrolo[3,2-b]pyrrole core provides a wide variety of candidates for further fabrication of organic field-effect transistor devices.

  18. A-posteriori error estimation for second order mechanical systems

    NASA Astrophysics Data System (ADS)

    Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter

    2012-06-01

    One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first order systems is extended for error estimation of mechanical second order systems. Due to the special second order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that the estimator is independent of the used reduction technique. Therefore, it can be used for moment-matching based, Gramian matrices based or modal based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.
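
A minimal sketch of second-order model reduction with an inexpensive, reduction-technique-independent error indicator: a mass-spring chain is truncated to a few modes, and the residual of the reduced solution inserted into the full equations serves as the indicator. This is illustrative only; it is not the rigorous a-posteriori estimator developed in the paper.

```python
import numpy as np
from scipy.linalg import eigh

n, k, m = 50, 1.0e4, 1.0
K = k * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))   # stiffness of a chain
M = m * np.eye(n)                                            # lumped masses
f = np.zeros(n); f[-1] = 100.0                               # load on the last mass

# Reduction basis V: the lowest r mass-normalized modes (modal reduction).
w2, Phi = eigh(K, M)
r = 5
V = Phi[:, :r]

omega = 30.0                              # harmonic excitation frequency (rad/s)
Z_full = K - omega**2 * M                 # full dynamic stiffness
Z_red = V.T @ Z_full @ V                  # reduced dynamic stiffness
q = np.linalg.solve(Z_red, V.T @ f)       # reduced coordinates
x_red = V @ q                             # lifted approximation
x_full = np.linalg.solve(Z_full, f)       # reference solution

residual = np.linalg.norm(f - Z_full @ x_red)   # cheap error indicator
true_err = np.linalg.norm(x_full - x_red)
print(f"residual indicator: {residual:.3e}, true error norm: {true_err:.3e}")
```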

  19. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation are retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and associated probabilities that each method is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
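
A minimal sketch of the multimodel step described above: several candidate distributions are fitted to a small sample, and Akaike weights give the relative probability that each candidate is the best model in the Kullback-Leibler sense. The sample and candidate set are illustrative assumptions; the Bayesian parameter densities and importance-sampling propagation are not shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.lognormal(mean=1.0, sigma=0.4, size=15)    # scarce dataset

candidates = {"lognormal": stats.lognorm,
              "gamma": stats.gamma,
              "weibull": stats.weibull_min}
aic = {}
for name, dist in candidates.items():
    params = dist.fit(data)                            # maximum likelihood fit
    loglik = np.sum(dist.logpdf(data, *params))
    aic[name] = 2 * len(params) - 2 * loglik

# Akaike weights: relative plausibility of each candidate model.
best = min(aic.values())
weights = {n: np.exp(-0.5 * (a - best)) for n, a in aic.items()}
total = sum(weights.values())
for name in candidates:
    print(f"{name:10s}  AIC={aic[name]:7.2f}  weight={weights[name] / total:.2f}")
```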

  20. Weekly glacier flow estimation from dense satellite time series using adapted optical flow technology

    NASA Astrophysics Data System (ADS)

    Altena, Bas; Kääb, Andreas

    2017-06-01

    Contemporary optical remote sensing satellites or constellations of satellites can acquire imagery at sub-weekly or even daily timescales. Thus, these systems facilitate the potential for within-season velocity estimation of glacier surfaces. State-of-the-art techniques for displacement estimation are based on matching image pairs and are thus constrained by the need for significant displacement and/or preservation of the surface over time. Consequently, such approaches cannot benefit entirely from the increasing satellite revisit times. Here, we explore an approach that is fundamentally different from image correlation or similar techniques and exploits the concept of optical flow. Our goal is to assess whether this concept could overcome the current limitations of image matching and thus give new insights into glacier flow dynamics. We implement two different methods of optical flow and test these on the SPOT5 Take5 dataset over Kronebreen, Svalbard and over Kaskawulsh Glacier, Yukon. For Kaskawulsh Glacier we are able to extract seasonal velocity variations that temporally coincide with events of increased air temperatures. Furthermore, even for the cloudy dataset of Kronebreen, we were able to extract spatio-temporal trajectories which correlate well with measured GPS flow paths. Because the underlying concept is simple and computationally efficient due to data reduction, our methodology can easily be used for exploratory regional studies of several glaciers or for estimation of small and slow-flowing glaciers.
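
A minimal sketch of dense optical flow between two co-registered acquisitions using OpenCV's Farneback method, with displacements divided by the time separation to give a velocity field. The synthetic shifted texture stands in for real satellite imagery, and the pixel size and repeat interval are placeholder assumptions; this is not the specific optical flow formulation used by the authors.

```python
import cv2
import numpy as np

rng = np.random.default_rng(7)
# Synthetic stand-ins for two acquisitions: a textured surface shifted 3 px in x.
base = cv2.GaussianBlur((rng.random((256, 256)) * 255).astype(np.uint8), (5, 5), 0)
img0 = base
img1 = np.roll(base, shift=3, axis=1)

# Farneback dense flow; positional args are pyr_scale, levels, winsize,
# iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(img0, img1, None, 0.5, 3, 31, 5, 7, 1.5, 0)

pixel_size_m = 10.0   # ground sampling distance (assumed)
dt_days = 5.0         # time separation between acquisitions (assumed)
speed = np.hypot(flow[..., 0], flow[..., 1]) * pixel_size_m / dt_days
print("median x-displacement (px):", round(float(np.median(flow[..., 0])), 2))
print("median surface speed (m/day):", round(float(np.median(speed)), 2))
```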

  1. How large must a treatment effect be before it matters to practitioners? An estimation method and demonstration.

    PubMed

    Miller, William R; Manuel, Jennifer Knapp

    2008-09-01

    Treatment research is sometimes criticised as lacking in clinical relevance, and one potential source of this friction is a disconnection between statistical significance and what clinicians regard as a meaningful difference in outcomes. This report demonstrates a novel methodology for estimating what substance abuse practitioners regard as clinically important differences. To illustrate the estimation method, we surveyed 50 substance abuse treatment providers participating in the National Institute on Drug Abuse (NIDA) Clinical Trials Network. Practitioners identified thresholds for clinically meaningful differences on nine common outcome variables, indicated the size of effect that would justify their learning a new treatment method, and estimated current outcomes from their services. Clinicians judged a difference between two treatments to be meaningful if outcomes improved by about 10-12 percentage points in the percentage of patients totally abstaining, arrested for driving while intoxicated, employed, or having abnormal liver enzymes. A 5 percentage-point reduction in patient mortality was regarded as clinically significant. On continuous outcome measures (such as percentage of days abstinent or drinks per drinking day), practitioners judged an outcome to be significant when it doubled or halved the base rate. When a new treatment met such criteria, practitioners were interested in learning it. Effects that are statistically significant in clinical trials may be unimpressive to practitioners. Clinicians' judgements of meaningful differences can inform the powering of clinical trials.

  2. Predicting muscle forces during the propulsion phase of single leg triple hop test.

    PubMed

    Alvim, Felipe Costa; Lucareli, Paulo Roberto Garcia; Menegaldo, Luciano Luporini

    2018-01-01

    Functional biomechanical tests allow the assessment of musculoskeletal system impairments in a simple way. Muscle force synergies associated with movement can provide additional information for diagnosis. However, such forces cannot be directly measured noninvasively. This study aims to estimate muscle activations and forces exerted during the preparation phase of the single leg triple hop test. Two different approaches were tested: static optimization (SO) and computed muscle control (CMC). As an indirect validation, model-estimated muscle activations were compared with surface electromyography (EMG) of selected hip and thigh muscles. Ten physically active, healthy women performed a series of jumps, and ground reaction forces, kinematics and EMG data were recorded. An existing OpenSim model with 92 musculotendon actuators was used to estimate muscle forces. Reflective marker data were processed using the OpenSim Inverse Kinematics tool. The Residual Reduction Algorithm (RRA) was applied recursively before running the SO and CMC. For both, the same adjusted kinematics were used as inputs. Both approaches produced similar residual amplitudes. SO showed a closer agreement between the estimated activations and the EMGs of some muscles. Due to inherent EMG methodological limitations, the superiority of SO in relation to CMC can only be hypothesized; it should be confirmed by further studies comparing joint contact forces. The workflow presented in this study can be used to estimate muscle forces during the preparation phase of the single leg triple hop test and allows investigating muscle activation and coordination. Copyright © 2017 Elsevier B.V. All rights reserved.
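
A minimal sketch of the static optimization idea referenced above: at one time instant, find muscle activations that reproduce a required joint moment while minimizing the sum of squared activations. The two-muscle geometry and the required moment are illustrative assumptions, not the 92-actuator OpenSim model used in the study.

```python
import numpy as np
from scipy.optimize import minimize

max_iso_force = np.array([1500.0, 900.0])   # N, assumed maximal isometric forces
moment_arm = np.array([0.05, 0.03])         # m, assumed moment arms
required_moment = 60.0                      # N*m from inverse dynamics (assumed)

def objective(a):
    # Common static optimization criterion: minimize summed squared activations.
    return np.sum(a ** 2)

constraints = ({"type": "eq",
                "fun": lambda a: np.dot(a * max_iso_force, moment_arm) - required_moment},)
res = minimize(objective, x0=np.array([0.5, 0.5]), bounds=[(0, 1), (0, 1)],
               constraints=constraints)
print("estimated activations:", np.round(res.x, 3))
```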

  3. Exploiting active subspaces to quantify uncertainty in the numerical simulation of the HyShot II scramjet

    NASA Astrophysics Data System (ADS)

    Constantine, P. G.; Emory, M.; Larsson, J.; Iaccarino, G.

    2015-12-01

    We present a computational analysis of the reactive flow in a hypersonic scramjet engine with focus on effects of uncertainties in the operating conditions. We employ a novel methodology based on active subspaces to characterize the effects of the input uncertainty on the scramjet performance. The active subspace identifies one-dimensional structure in the map from simulation inputs to quantity of interest that allows us to reparameterize the operating conditions; instead of seven physical parameters, we can use a single derived active variable. This dimension reduction enables otherwise infeasible uncertainty quantification, considering the simulation cost of roughly 9500 CPU-hours per run. For two values of the fuel injection rate, we use a total of 68 simulations to (i) identify the parameters that contribute the most to the variation in the output quantity of interest, (ii) estimate upper and lower bounds on the quantity of interest, (iii) classify sets of operating conditions as safe or unsafe corresponding to a threshold on the output quantity of interest, and (iv) estimate a cumulative distribution function for the quantity of interest.
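
A minimal sketch of active subspace discovery: the matrix C = E[∇f ∇fᵀ] is estimated from sampled gradients, and its dominant eigenvector defines the single derived active variable. The analytic ridge function below stands in for the scramjet simulation, whose gradients are obviously not available here.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 7                                   # number of physical input parameters
w_true = rng.normal(size=m)

def grad_f(x):
    # Gradient of a ridge-like function f(x) = sin(w_true . x); one direction dominates.
    return np.cos(w_true @ x) * w_true

X = rng.uniform(-1, 1, size=(200, m))   # normalized operating-condition samples
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(X)                    # Monte Carlo estimate of E[grad grad^T]

eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]
print("eigenvalue decay:", np.round(eigval[order][:4], 4))

active_direction = eigvec[:, order[0]]
active_variable = X @ active_direction  # one derived variable per simulation
print("active variable range:",
      round(active_variable.min(), 3), "to", round(active_variable.max(), 3))
```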

  4. Descriptive epidemiology of cervical dystonia.

    PubMed

    Defazio, Giovanni; Jankovic, Joseph; Giel, Jennifer L; Papapetropoulos, Spyridon

    2013-01-01

    Cervical dystonia (CD), the most common form of adult-onset focal dystonia, has a heterogeneous clinical presentation with variable clinical features, leading to difficulties and delays in diagnosis. Owing to the lack of reviews specifically focusing on the frequency of primary CD in the general population, we performed a systematic literature search to examine its prevalence/incidence and analyze methodological differences among studies. We performed a systematic literature search to examine the prevalence data of primary focal CD. Sixteen articles met our methodological criteria. Because the reported prevalence estimates were found to vary widely across studies, we analyzed methodological differences and other factors to determine whether true differences exist in prevalence rates among geographic areas (and by gender and age distributions), as well as to facilitate recommendations for future studies. Prevalence estimates ranged from 20-4,100 cases/million. Generally, studies that relied on service-based and record-linkage system data likely underestimated the prevalence of CD, whereas population-based studies suffered from over-ascertainment. The more methodologically robust studies yielded a range of estimates of 28-183 cases/million. Despite the varying prevalence estimates, an approximate 2:1 female:male ratio was consistent among many studies. Three studies estimated incidence, ranging from 8-12 cases/million person-years. Although several studies have attempted to estimate the prevalence and incidence of CD, there is a need for additional well-designed epidemiological studies on primary CD that include large populations; use defined CD diagnostic criteria; and stratify for factors such as age, gender, and ethnicity.

  5. New Methodology for Estimating Fuel Economy by Vehicle Class

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, Shih-Miao; Dabbs, Kathryn; Hwang, Ho-Ling

    2011-01-01

    This effort was undertaken for the Office of Highway Policy Information to develop a new methodology to generate annual estimates of average fuel efficiency and number of motor vehicles registered by vehicle class for Table VM-1 of the Highway Statistics annual publication. This paper describes the new methodology developed under this effort and compares the results of the existing manual method and the new systematic approach. The methodology developed under this study takes a two-step approach. First, the preliminary fuel efficiency rates are estimated based on vehicle stock models for different classes of vehicles. Then, a reconciliation model is used to adjust the initial fuel consumption rates from the vehicle stock models and match the VMT information for each vehicle class and the reported total fuel consumption. This reconciliation model utilizes a systematic approach that produces documentable and reproducible results. The basic framework utilizes a mathematical programming formulation to minimize the deviations between the fuel economy estimates published in the previous year's Highway Statistics and the results from the vehicle stock models, subject to the constraint that fuel consumptions for different vehicle classes must sum to the total fuel consumption estimate published in Table MF-21 of the current year Highway Statistics. The results generated from this new approach provide a smoother time series for the fuel economies by vehicle class. It also utilizes the most up-to-date and best available data with sound econometric models to generate MPG estimates by vehicle class.
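
A minimal sketch of the reconciliation step described above: adjust the preliminary MPG estimates by vehicle class as little as possible (in a squared-deviation sense) while forcing class fuel use, computed as VMT/MPG, to sum to the reported total. All numbers are illustrative placeholders, not Highway Statistics values.

```python
import numpy as np
from scipy.optimize import minimize

classes = ["light duty", "single-unit truck", "combination truck"]
mpg_prior = np.array([23.0, 7.5, 6.0])        # preliminary MPG from vehicle stock models
vmt = np.array([2700.0, 110.0, 180.0])        # billion vehicle-miles traveled per class
total_fuel_reported = 170.0                   # billion gallons (Table MF-21 analogue, assumed)

def objective(mpg):
    # Penalize relative deviation from the preliminary estimates.
    return np.sum(((mpg - mpg_prior) / mpg_prior) ** 2)

cons = ({"type": "eq", "fun": lambda mpg: np.sum(vmt / mpg) - total_fuel_reported},)
res = minimize(objective, x0=mpg_prior, bounds=[(1.0, None)] * 3, constraints=cons)
for name, mpg in zip(classes, res.x):
    print(f"{name:20s} reconciled MPG = {mpg:.2f}")
```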

  6. Levels of reduction in van Manen's phenomenological hermeneutic method: an empirical example.

    PubMed

    Heinonen, Kristiina

    2015-05-01

    To describe reduction as a method using van Manen's phenomenological hermeneutic research approach. Reduction involves several levels that can be distinguished for their methodological usefulness. Researchers can use reduction in different ways and dimensions for their methodological needs. A study of Finnish multiple-birth families in which open interviews (n=38) were conducted with public health nurses, family care workers and parents of twins. A systematic literature and knowledge review showed there were no articles on multiple-birth families that used van Manen's method. Discussion The phenomena of the 'lifeworlds' of multiple-birth families consist of three core essential themes as told by parents: 'a state of constant vigilance', 'ensuring that they can continue to cope' and 'opportunities to share with other people'. Reduction provides the opportunity to carry out in-depth phenomenological hermeneutic research and understand people's lives. It helps to keep research stages separate but also enables a consolidated view. Social care and healthcare professionals have to hear parents' voices better to comprehensively understand their situation; they need further tools and training to be able to empower parents of twins. This paper adds an empirical example to the discussion of phenomenology, hermeneutic study and reduction as a method. It opens up reduction for researchers to exploit.

  7. A hierarchical clustering methodology for the estimation of toxicity.

    PubMed

    Martin, Todd M; Harten, Paul; Venkatapathy, Raghuraman; Das, Shashikala; Young, Douglas M

    2008-01-01

    A quantitative structure-activity relationship (QSAR) methodology based on hierarchical clustering was developed to predict toxicological endpoints. This methodology utilizes Ward's method to divide a training set into a series of structurally similar clusters. The structural similarity is defined in terms of 2-D physicochemical descriptors (such as connectivity and E-state indices). A genetic algorithm-based technique is used to generate statistically valid QSAR models for each cluster (using the pool of descriptors described above). The toxicity for a given query compound is estimated using the weighted average of the predictions from the closest cluster from each step in the hierarchical clustering assuming that the compound is within the domain of applicability of the cluster. The hierarchical clustering methodology was tested using a Tetrahymena pyriformis acute toxicity data set containing 644 chemicals in the training set and with two prediction sets containing 339 and 110 chemicals. The results from the hierarchical clustering methodology were compared to the results from several different QSAR methodologies.
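
A minimal sketch of the clustering-then-regression idea: Ward's method groups training compounds by their descriptors, a small linear model is fitted within each cluster, and a query compound is predicted by its nearest cluster's model. The random descriptors and "toxicity" values are placeholders for real QSAR data, and the genetic-algorithm descriptor selection and applicability-domain checks are simplified away.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 6))                          # 2-D physicochemical descriptors (synthetic)
y = X @ rng.normal(size=6) + rng.normal(0, 0.3, 120)   # toxicity endpoint (synthetic)

Z = linkage(X, method="ward")                          # Ward's hierarchical clustering
labels = fcluster(Z, t=4, criterion="maxclust")        # cut the tree into 4 clusters

models, centroids = {}, {}
for c in np.unique(labels):
    members = labels == c
    A = np.column_stack([np.ones(members.sum()), X[members]])
    coef, *_ = np.linalg.lstsq(A, y[members], rcond=None)
    models[c], centroids[c] = coef, X[members].mean(axis=0)

def predict(x):
    # Use the model of the closest cluster (a simplified applicability rule).
    c = min(centroids, key=lambda key: np.linalg.norm(x - centroids[key]))
    return models[c] @ np.concatenate([[1.0], x])

print(round(predict(rng.normal(size=6)), 3))
```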

  8. Life-Cycle Cost/Benefit Assessment of Expedite Departure Path (EDP)

    NASA Technical Reports Server (NTRS)

    Wang, Jianzhong Jay; Chang, Paul; Datta, Koushik

    2005-01-01

    This report presents a life-cycle cost/benefit assessment (LCCBA) of Expedite Departure Path (EDP), an air traffic control Decision Support Tool (DST) currently under development at NASA. This assessment is an update of a previous study performed by bd Systems, Inc. (bd) during FY01, with the following revisions: The life-cycle cost assessment methodology developed by bd for the previous study was refined and calibrated using Free Flight Phase 1 (FFP1) cost information for Traffic Management Advisor (TMA, or TMA-SC in the FAA's terminology). Adjustments were also made to the site selection and deployment scheduling methodology to include airspace complexity as a factor. This technique was also applied to the benefit extrapolation methodology to better estimate potential benefits for other years, and at other sites. This study employed a new benefit estimating methodology because bd's previous single-year potential benefit assessment of EDP used unrealistic assumptions that resulted in optimistic estimates. This methodology uses an air traffic simulation approach to reasonably predict the impacts from the implementation of EDP. The results of the costs and benefits analyses were then integrated into a life-cycle cost/benefit assessment.

  9. Comparison of Lives Saved Tool model child mortality estimates against measured data from vector control studies in sub-Saharan Africa

    PubMed Central

    2011-01-01

    Background Insecticide-treated mosquito nets (ITNs) and indoor residual spraying have been scaled up across sub-Saharan Africa as part of international efforts to control malaria. These interventions have the potential to significantly impact child survival. The Lives Saved Tool (LiST) was developed to provide national and regional estimates of cause-specific mortality based on the extent of intervention coverage scale-up. We compared the percent reduction in all-cause child mortality estimated by LiST against measured reductions in all-cause child mortality from studies assessing the impact of vector control interventions in Africa. Methods We performed a literature search for appropriate studies and compared the reductions in all-cause child mortality estimated by LiST to those reported by 4 studies that estimated changes in all-cause child mortality following the scale-up of vector control interventions. The following key parameters measured by each study were applied to available country projections: baseline all-cause child mortality rate, proportion of mortality due to malaria, and population coverage of vector control interventions at baseline and follow-up years. Results The percent reduction in all-cause child mortality estimated by the LiST model fell within the confidence intervals around the measured mortality reductions for all 4 studies. Two of the LiST estimates overestimated the mortality reductions by 6.1 and 4.2 percentage points (33% and 35% relative to the measured estimates), while two underestimated the mortality reductions by 4.7 and 6.2 percentage points (22% and 25% relative to the measured estimates). Conclusions The LiST model did not systematically under- or overestimate the impact of ITNs on all-cause child mortality. These results show the LiST model to perform reasonably well at estimating the effect of vector control scale-up on child mortality when compared against measured data from studies across a range of malaria transmission settings. The LiST model appears to be a useful tool for estimating the potential mortality reduction achieved by scaling up malaria control interventions. PMID:21501453
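
A deliberately simplified sketch (not the LiST model itself) of how the key inputs listed above combine: the averted fraction of all-cause mortality scales with the malaria-attributable share, the intervention's protective efficacy, and the change in coverage. All parameter values are illustrative assumptions.

```python
baseline_u5mr = 150.0        # under-5 deaths per 1,000 live births (assumed)
malaria_fraction = 0.20      # share of under-5 deaths attributable to malaria (assumed)
itn_efficacy = 0.55          # protective efficacy against malaria mortality (assumed)
coverage_baseline, coverage_followup = 0.05, 0.60   # ITN coverage (assumed)

averted_fraction = malaria_fraction * itn_efficacy * (coverage_followup - coverage_baseline)
u5mr_followup = baseline_u5mr * (1.0 - averted_fraction)
print(f"estimated all-cause mortality reduction: {averted_fraction:.1%}")
print(f"U5MR: {baseline_u5mr:.0f} -> {u5mr_followup:.0f} per 1,000")
```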

  10. Is law enforcement of drug-impaired driving cost-efficient? An explorative study of a methodology for cost-benefit analysis.

    PubMed

    Veisten, Knut; Houwing, Sjoerd; Mathijssen, M P M René; Akhtar, Juned

    2013-03-01

    Road users driving under the influence of psychoactive substances may be at much higher relative risk (RR) in road traffic than the average driver. Legislation banning blood alcohol concentrations above certain threshold levels, combined with roadside breath-testing for alcohol, has been in place for decades in many countries, but new legislation and testing of drivers for drug use have recently been implemented in some countries. In this article we present a methodology for cost-benefit analysis (CBA) of increased law enforcement of roadside drug screening. This is an analysis of the profitability for society, where costs of control are weighed against the reduction in injuries expected from fewer drugged drivers on the roads. We specify assumptions regarding costs and the effect of the specificity of the drug screening device, and quantify a deterrence effect related to the sensitivity of the device, yielding the benefit estimates. Three European countries with different current enforcement levels were studied, yielding benefit-cost ratios in the approximate range of 0.5-5 for a tripling of current levels of enforcement, with costs of about 4000 EUR per convicted driver and in the range of 1.5 to 13 million EUR per prevented fatality. The applied methodology for CBA involved a simplistic behavioural response to enforcement increase and control efficiency. Although this methodology should be developed further, it is clearly indicated that the cost-efficiency of increased law enforcement of drug driving offences depends on the baseline situation of drug use in traffic and on the current level of enforcement, as well as the RR and prevalence of drugs in road traffic. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Effect of Body Composition Methodology on Heritability Estimation of Body Fatness

    PubMed Central

    Elder, Sonya J.; Roberts, Susan B.; McCrory, Megan A.; Das, Sai Krupa; Fuss, Paul J.; Pittas, Anastassios G.; Greenberg, Andrew S.; Heymsfield, Steven B.; Dawson-Hughes, Bess; Bouchard, Thomas J.; Saltzman, Edward; Neale, Michael C.

    2014-01-01

    Heritability estimates of human body fatness vary widely and the contribution of body composition methodology to this variability is unknown. The effect of body composition methodology on estimations of genetic and environmental contributions to body fatness variation was examined in 78 adult male and female monozygotic twin pairs reared apart or together. Body composition was assessed by six methods – body mass index (BMI), dual energy x-ray absorptiometry (DXA), underwater weighing (UWW), total body water (TBW), bioelectric impedance (BIA), and skinfold thickness. Body fatness was expressed as percent body fat, fat mass, and fat mass/height2 to assess the effect of body fatness expression on heritability estimates. Model-fitting multivariate analyses were used to assess the genetic and environmental components of variance. Mean BMI was 24.5 kg/m2 (range of 17.8–43.4 kg/m2). There was a significant effect of body composition methodology (p<0.001) on heritability estimates, with UWW giving the highest estimate (69%) and BIA giving the lowest estimate (47%) for fat mass/height2. Expression of body fatness as percent body fat resulted in significantly higher heritability estimates (on average 10.3% higher) compared to expression as fat mass/height2 (p=0.015). DXA and TBW methods expressing body fatness as fat mass/height2 gave the least biased heritability assessments, based on the small contribution of specific genetic factors to their genetic variance. A model combining DXA and TBW methods resulted in a relatively low FM/ht2 heritability estimate of 60%, and significant contributions of common and unique environmental factors (22% and 18%, respectively). The body fatness heritability estimate of 60% indicates a smaller contribution of genetic variance to total variance than many previous studies using less powerful research designs have indicated. The results also highlight the importance of environmental factors and possibly genotype by environmental interactions in the etiology of weight gain and the obesity epidemic. PMID:25067962

  12. Radiological Characterization Methodology of INEEL Stored RH-TRU Waste from ANL-E

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajiv N. Bhatt

    2003-02-01

    An Acceptable Knowledge (AK)-based radiological characterization methodology is being developed for RH TRU waste generated from ANL-E hot cell operations performed on fuel elements irradiated in the EBR-II reactor. The methodology relies on AK for the composition of the fresh fuel elements, their irradiation history, and the waste generation and collection processes. Radiological characterization of the waste involves estimates of the quantities of significant fission products and transuranic isotopes in the waste. Methods based on reactor and physics principles are used to achieve these estimates. Because of the availability of AK and the robustness of the calculation methods, the AK-based characterization methodology offers a superior alternative to traditional waste assay techniques. Using this methodology, it is shown that the radiological parameters of a test batch of ANL-E waste are well within the proposed WIPP Waste Acceptance Criteria limits.

  13. Radiological Characterization Methodology for INEEL-Stored Remote-Handled Transuranic (RH TRU) Waste from Argonne National Laboratory-East

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuan, P.; Bhatt, R.N.

    2003-01-14

    An Acceptable Knowledge (AK)-based radiological characterization methodology is being developed for RH TRU waste generated from ANL-E hot cell operations performed on fuel elements irradiated in the EBR-II reactor. The methodology relies on AK for the composition of the fresh fuel elements, their irradiation history, and the waste generation and collection processes. Radiological characterization of the waste involves estimates of the quantities of significant fission products and transuranic isotopes in the waste. Methods based on reactor and physics principles are used to achieve these estimates. Because of the availability of AK and the robustness of the calculation methods, the AK-based characterization methodology offers a superior alternative to traditional waste assay techniques. Using the methodology, it is shown that the radiological parameters of a test batch of ANL-E waste are well within the proposed WIPP Waste Acceptance Criteria limits.

  14. Environmentally Responsible Redox Chemistry: An Example of Convenient Oxidation Methodology without Chromium Waste

    ERIC Educational Resources Information Center

    Crumbie, Robyn L.

    2006-01-01

    The reactions use recyclable Magtrieve as the oxidant in a simple reaction sequence illustrating the reciprocity of oxidation and reduction processes. The reciprocity of oxidation and reduction reactions is explored while undertaking the reactions in an environmentally friendly manner.

  15. TRACI - THE TOOL FOR THE REDUCTION AND ASSESSMENT OF CHEMICAL AND OTHER ENVIRONMENTAL IMPACTS

    EPA Science Inventory

    TRACI, The Tool for the Reduction and Assessment of Chemical and other environmental Impacts, is described along with its history, the underlying research, methodologies, and insights within individual impact categories. TRACI facilitates the characterization of stressors that ma...

  16. Performance-based, cost- and time-effective pcb analytical methodology.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alvarado, J. S.

    1998-06-11

    Laboratory applications for the analysis of PCBs (polychlorinated biphenyls) in environmental matrices such as soil/sediment/sludge and oil/waste oil were evaluated for potential reduction in waste, source reduction, and alternative techniques for final determination. As a consequence, new procedures were studied for solvent substitution, miniaturization of extraction and cleanups, minimization of reagent consumption, reduction of cost per analysis, and reduction of time. These new procedures provide adequate data that meet all the performance requirements for the determination of PCBs. Use of the new procedures reduced costs for all sample preparation techniques. Time and cost were also reduced by combining the new sample preparation procedures with the power of fast gas chromatography. Separation of Aroclor 1254 was achieved in less than 6 min by using DB-1 and SPB-608 columns. With the greatly shortened run times, reproducibility can be tested quickly and consequently with low cost. With performance-based methodology, the applications presented here can be applied now, without waiting for regulatory approval.

  17. HIV RISK REDUCTION INTERVENTIONS AMONG SUBSTANCE-ABUSING REPRODUCTIVE-AGE WOMEN: A SYSTEMATIC REVIEW

    PubMed Central

    Weissman, Jessica; Kanamori, Mariano; Dévieux, Jessy G.; Trepka, Mary Jo; De La Rosa, Mario

    2017-01-01

    HIV/AIDS is one of the leading causes of death among reproductive-age women throughout the world, and substance abuse plays a major role in HIV infection. We conducted a systematic review, in accordance with the 2015 Preferred Items for Reporting Systematic Reviews and Meta-analysis tool, to assess HIV risk-reduction intervention studies among reproductive-age women who abuse substances. We initially identified 6,506 articles during our search and, after screening titles and abstracts, examining articles in greater detail, and finally excluding those rated methodologically weak, a total of 10 studies were included in this review. Studies that incorporated behavioral skills training into the intervention and were based on theoretical model(s) were the most effective in general at decreasing sex and drug risk behaviors. Additional HIV risk-reduction intervention research with improved methodological designs is warranted to determine the most efficacious HIV risk-reduction intervention for reproductive-age women who abuse substances. PMID:28467160

  18. Optimization and uncertainty assessment of strongly nonlinear groundwater models with high parameter dimensionality

    NASA Astrophysics Data System (ADS)

    Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun

    2010-10-01

    Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
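
A minimal sketch of the null-space Monte Carlo idea referenced above: directions in parameter space that the calibration data cannot constrain are identified from the SVD of the Jacobian, and random parameter sets are projected so that they differ from the calibrated set only along those (near-)null directions. The linear toy problem replaces the CPU-intensive groundwater model; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(6)
n_obs, n_par = 20, 60                      # many more parameters than observations
J = rng.normal(size=(n_obs, n_par))        # Jacobian of observations w.r.t. parameters
p_calibrated = rng.normal(size=n_par)      # calibrated parameter set

# Singular vectors with negligible singular values span the calibration null space.
U, s, Vt = np.linalg.svd(J, full_matrices=True)
rank = int(np.sum(s > 1e-8 * s.max()))
V_null = Vt[rank:].T                       # shape (n_par, n_par - rank)

samples = []
for _ in range(100):
    p_random = p_calibrated + rng.normal(scale=0.5, size=n_par)
    delta = p_random - p_calibrated
    p_null = p_calibrated + V_null @ (V_null.T @ delta)   # keep only the null-space part
    samples.append(p_null)

# Verify: simulated observations barely change across the ensemble.
spread = np.std([J @ p for p in samples], axis=0)
print("max change in simulated observations:", spread.max())
```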

  19. Decentralized and self-centered estimation architecture for formation flying of spacecraft

    NASA Technical Reports Server (NTRS)

    Kang, B. H.; Hadaegh, F. Y.; Scharf, D. P.; Ke, N. -P.

    2001-01-01

    Formation estimation methodologies for distributed spacecraft systems are formulated and analyzed. A generic form of the formation estimation problem is described by defining a common hardware configuration, observation graph, and feasible estimation topologies.

  20. Historical review of and current progress in coal-resource estimation in the United States.

    USGS Publications Warehouse

    Wood, G.H.

    1981-01-01

    Nine estimates of US coal resources have been published in the past 71 yr. Although many details of these estimates differ markedly, those for 1913, 1922, and 1974 are surprisingly similar. Some differences are due to increased data; others reflect changes in terminology, definitions, criteria, guidelines, and methodologies. Thus, many early estimates are not particularly useful in modern resource assessments. Preliminary definitions that are being prepared in 1980 by the US Geological Survey are compared with those published in 1976 and currently in use. Anticipated results of the new definitions are: 1) to lessen existing confusion about estimation procedures; 2) to make such procedures easier and more precise; 3) to promote use of a commonly accepted terminology accompanied by standardized definitions, criteria, guidelines, and methodologies for estimating resources. -Author

  1. Integrated cost-effectiveness analysis of agri-environmental measures for water quality.

    PubMed

    Balana, Bedru B; Jackson-Blake, Leah; Martin-Ortega, Julia; Dunn, Sarah

    2015-09-15

    This paper presents an application of an integrated methodological approach for identifying cost-effective combinations of agri-environmental measures to achieve water quality targets. The methodological approach involves linking hydro-chemical modelling with the economic costs of mitigation measures. The utility of the approach was explored for the River Dee catchment in North East Scotland, examining the cost-effectiveness of mitigation measures for nitrogen (N) and phosphorus (P) pollutants. In-stream nitrate concentration was modelled using the STREAM-N model and phosphorus using the INCA-P model. Both models were first run for baseline conditions, and then the effectiveness of changes in land management was simulated. Costs were based on farm income foregone, capital and operational expenditures. The costs and effects data were integrated using 'Risk Solver Platform' optimization in Excel to produce the most cost-effective combination of measures by which target nutrient reductions could be attained at a minimum economic cost. The analysis identified different combinations of measures as most cost-effective for the two pollutants. An important aspect of this paper is the integration of model-based effectiveness estimates with the economic costs of measures for cost-effectiveness analysis of land and water management options. The methodological approach developed is not limited to the two pollutants and the selected agri-environmental measures considered in the paper; the approach can be adapted to the cost-effectiveness analysis of any catchment-scale environmental management options. Copyright © 2015 Elsevier Ltd. All rights reserved.
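
    The core selection step can be framed as a small linear program: minimize total cost subject to meeting the nutrient-reduction targets. The sketch below illustrates this with entirely hypothetical measure costs and effectiveness values (the study itself uses STREAM-N/INCA-P effectiveness estimates, farm-level cost data, and a 'Risk Solver Platform' optimization rather than this code):

      import numpy as np
      from scipy.optimize import linprog

      # Hypothetical measures: annual cost and nutrient reductions at full adoption
      measures = ["buffer_strips", "reduced_fertiliser", "wetland", "cover_crops"]
      cost     = np.array([12000.0, 8000.0, 25000.0, 6000.0])   # currency units per year
      n_red    = np.array([1500.0, 2200.0, 900.0, 1200.0])      # kg N per year
      p_red    = np.array([40.0, 15.0, 120.0, 30.0])            # kg P per year
      n_target, p_target = 3000.0, 120.0                        # hypothetical catchment targets

      # Decision variable x_i in [0, 1]: adopted fraction of measure i.
      # Minimize cost @ x subject to n_red @ x >= n_target and p_red @ x >= p_target.
      res = linprog(c=cost,
                    A_ub=-np.vstack([n_red, p_red]),    # flip signs to express >= as <=
                    b_ub=-np.array([n_target, p_target]),
                    bounds=[(0.0, 1.0)] * len(measures),
                    method="highs")

      for name, frac in zip(measures, res.x):
          print(f"{name}: adopt {frac:.2f} of maximum extent")
      print(f"minimum total cost: {res.fun:,.0f}")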

  2. Methodology for cost analysis of film-based and filmless portable chest systems

    NASA Astrophysics Data System (ADS)

    Melson, David L.; Gauvain, Karen M.; Beardslee, Brian M.; Kraitsik, Michael J.; Burton, Larry; Blaine, G. James; Brink, Gary S.

    1996-05-01

    Many studies analyzing the costs of film-based and filmless radiology have focused on multi-modality, hospital-wide solutions. Yet due to the enormous cost of converting an entire large radiology department or hospital to a filmless environment all at once, institutions often choose to eliminate film one area at a time. Narrowing the focus of cost-analysis may be useful in making such decisions. This presentation will outline a methodology for analyzing the cost per exam of film-based and filmless solutions for providing portable chest exams to Intensive Care Units (ICUs). The methodology, unlike most in the literature, is based on parallel data collection from existing filmless and film-based ICUs, and is currently being utilized at our institution. Direct costs, taken from the perspective of the hospital, for portable computed radiography chest exams in one filmless and two film-based ICUs are identified. The major cost components are labor, equipment, materials, and storage. Methods for gathering and analyzing each of the cost components are discussed, including FTE-based and time-based labor analysis, incorporation of equipment depreciation, lease, and maintenance costs, and estimation of materials costs. Extrapolation of data from three ICUs to model hypothetical, hospital-wide film-based and filmless ICU imaging systems is described. Performance of sensitivity analysis on the filmless model to assess the impact of anticipated reductions in specific labor, equipment, and archiving costs is detailed. A number of indirect costs, which are not explicitly included in the analysis, are identified and discussed.
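
    The per-exam roll-up of those four cost components reduces to simple arithmetic once the annual figures are gathered. A minimal sketch with entirely hypothetical numbers (the study's actual values come from its parallel data collection in film-based and filmless ICUs):

      # Hypothetical annual figures for one ICU's portable chest exams
      ANNUAL_EXAMS = 5000

      annual_costs = {
          # labor: FTEs x loaded salary x share of time spent on portable chest work
          "labor":     1.5 * 60000 * 0.40,
          # equipment: straight-line depreciation over useful life plus maintenance
          "equipment": 180000 / 7 + 9000,
          "materials": 3.25 * ANNUAL_EXAMS,   # per-exam consumables (film or plates)
          "storage":   2.10 * ANNUAL_EXAMS,   # film library or digital archive share
      }

      for item, cost in annual_costs.items():
          print(f"{item:>9}: {cost / ANNUAL_EXAMS:6.2f} per exam")
      print(f"    total: {sum(annual_costs.values()) / ANNUAL_EXAMS:6.2f} per exam")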

  3. Estimating the National Carbon Abatement Potential of City Policies: A Data-Driven Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Shaughnessy, Eric; Heeter, Jenny; Keyser, David

    Cities are increasingly taking actions such as building code enforcement, urban planning, and public transit expansion to reduce emissions of carbon dioxide in their communities and municipal operations. However, many cities lack the quantitative information needed to estimate policy impacts and prioritize city actions in terms of carbon abatement potential and cost effectiveness. This report fills this research gap by providing methodologies to assess the carbon abatement potential of a variety of city actions. The methodologies are applied to an energy use data set of 23,458 cities compiled for the U.S. Department of Energy City Energy Profile tool. The analysis develops a national estimate of the carbon abatement potential of realizable city actions in six specific policy areas encompassing the most commonly implemented city actions. The results of this analysis suggest that, in aggregate, cities could reduce nationwide carbon emissions by about 210 million metric tons of carbon dioxide (MMT CO2) per year in a 'moderate abatement scenario' by 2035 and 480 MMT CO2/year in a 'high abatement scenario' by 2035 through these common actions typically within a city's control in the six policy areas. The aggregate carbon abatement potential of these specific areas equates to a reduction of 3%-7% relative to 2013 U.S. emissions. At the city level, the results suggest the average city could reduce carbon emissions by 7% (moderate) to 19% (high) relative to current city-level emissions. In the context of U.S. climate commitments under the 21st session of the Conference of the Parties (COP21), the estimated national abatement potential of the city actions analyzed in this report equates to about 15%-35% of the remaining carbon abatement necessary to achieve the U.S. COP21 target. Additional city actions outside the scope of this report, such as community choice aggregation (city-level purchasing of renewable energy), zero energy districts, and multi-level governance strategies, could significantly augment the carbon abatement contributions of city actions toward national climate targets. The results suggest that cities may play a pivotal role in progress toward national climate targets. In addition to providing carbon and emissions estimates, this report estimates the national net economic impacts of policies for which cost and benefit data are available. Impact metrics include employment, worker earnings, and gross domestic product (GDP). For the policy areas studied, the economic analysis demonstrates that city carbon abatement may be achieved with only minimal and generally slightly positive economic impacts. Employment impacts range from 0.04% to 0.13% of U.S. employment during implementation and zero to 0.1% thereafter. GDP estimates show net impacts of 0.02% to 0.07% of GDP during implementation and impacts from -0.02% to zero thereafter. This report quantitatively demonstrates the material impact of a limited set of local policy areas on national carbon abatement potential. The magnitude of estimated carbon reductions from city policies, 3%-7% of national emissions by 2035, suggests an important role for city-led actions in reaching U.S. climate goals. Multi-level governance at the city, state, and national levels could augment the carbon abatement potential of city actions and make cities a key component of long-term U.S. climate strategies.

  4. Costing for the Future: Exploring Cost Estimation With Unmanned Autonomous Systems

    DTIC Science & Technology

    2016-04-30

    account for how cost estimating for autonomy is different than current methodologies and to suggest ways it can be addressed through the integration and...The Development stage involves refining the system requirements, creating a solution description, and building a system. 3. The Operational Test...parameter describes the extent to which efficient fabrication methodologies and processes are used, and the automation of labor-intensive operations

  5. Evaluation of Methodology for Estimating the Cost of Air Force On-The-Job Training. Final Report.

    ERIC Educational Resources Information Center

    Samers, Bernard N.; And Others

    Described is the final phase of a study directed at the development of an on-the-job training (OJT) costing methodology. Utilizing a modification of survey techniques tested and evaluated during the previous phase, estimates were obtained for the cost of OJT for airman training from the 1-level (unskilled) to the 3-level (semiskilled) in five…

  6. Fuzzy logic modeling of high performance rechargeable batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, P.; Fennie, C. Jr.; Reisner, D.E.

    1998-07-01

    Accurate battery state-of-charge (SOC) measurements are critical in many portable electronic device applications. Yet conventional techniques for battery SOC estimation are limited in their accuracy, reliability, and flexibility. In this paper the authors present a powerful new approach to estimate battery SOC using a fuzzy logic-based methodology. This approach provides a universally applicable, accurate method for battery SOC estimation either integrated within, or as an external monitor to, an electronic device. The methodology is demonstrated in modeling impedance measurements on Ni-MH cells and discharge voltage curves of Li-ion cells.
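
    As a rough illustration of the fuzzy-logic idea, the sketch below maps a single hypothetical impedance feature to SOC through triangular membership functions and a weighted-average defuzzification; the paper's actual rule base is built from Ni-MH impedance measurements and Li-ion discharge curves rather than these invented numbers:

      import numpy as np

      def tri(x, a, b, c):
          # Triangular membership with support [a, c] and peak at b
          return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      def estimate_soc(impedance_mohm):
          # Input memberships (hypothetical): low impedance ~ full, high ~ empty
          mu = np.array([tri(impedance_mohm, 10, 15, 25),    # "low impedance"
                         tri(impedance_mohm, 20, 30, 40),    # "medium impedance"
                         tri(impedance_mohm, 35, 50, 70)])   # "high impedance"
          soc_levels = np.array([90.0, 50.0, 15.0])          # rule consequents (% SOC)
          return float(mu @ soc_levels / mu.sum())           # centroid-style defuzzification

      for z in (16, 28, 45):
          print(f"impedance {z} mOhm -> estimated SOC {estimate_soc(z):.0f}%")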

  7. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 1: Methodology and applications

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
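
    The core of the approach, propagating parameter uncertainty through an analytical failure model to obtain a failure probability, can be illustrated with a Monte Carlo sketch. The crack-growth model, parameter distributions, and numbers below are purely hypothetical stand-ins (a simple Paris-law integration), not the documented PFA engineering models:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000                                      # Monte Carlo samples

      # Uncertain inputs (all distributions hypothetical)
      C      = rng.lognormal(np.log(1e-11), 0.4, n)    # Paris coefficient (m/cycle per (MPa sqrt(m))^m)
      m      = rng.normal(3.0, 0.1, n)                 # Paris exponent
      a      = rng.lognormal(np.log(1e-3), 0.2, n)     # initial flaw size (m)
      dsigma = rng.normal(200.0, 20.0, n)              # stress range (MPa)
      a_crit, cycles, blocks = 0.01, 100_000, 200      # critical size, mission cycles, integration blocks

      # Integrate da/dN = C * (dsigma * sqrt(pi * a))^m in coarse cycle blocks
      for _ in range(blocks):
          dK = dsigma * np.sqrt(np.pi * a)             # stress intensity range, MPa sqrt(m)
          a = np.minimum(a + C * dK**m * (cycles / blocks), a_crit)

      print(f"estimated failure probability: {np.mean(a >= a_crit):.3f}")

    Test or flight experience would then be folded in to update the resulting distribution, per the statistical procedures the report documents.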

  8. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  9. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  10. Valuing the Ozone-Related Health Benefits of Methane Emission Controls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarofim, Marcus C.; Waldhoff, Stephanie T.; Anenberg, Susan C.

    Methane is a greenhouse gas that oxidizes to form ground-level ozone, itself a greenhouse gas and a health-harmful air pollutant. Reducing methane emissions will both slow anthropogenic climate change and reduce ozone-related mortality. We estimate the benefits of reducing methane emissions anywhere in the world for ozone-related premature mortality globally and for eight geographic regions. Our methods are consistent with those used by the U.S. Government to estimate the Social Cost of Carbon (SCC). We find that the global short- and long-term premature mortality benefits due to reduced ozone production from methane mitigation are $790 and $1775 (2011 US$) per tonne of methane, respectively. These correspond to approximately 70% and 150% of the valuation of methane’s global climate impacts using the SCC after extrapolating from carbon dioxide to methane using Global Warming Potential (GWP) estimates. Results are most sensitive to the choice of VSL and increase for emission years further in the future. Regionally, most of the global mortality benefits accrue in Asia, but 10% accrue in the United States. This methodology can be used to assess the benefits of methane emission reductions anywhere in the world, including those achieved by national and multinational policies.

  11. A systematic comparison of the closed shoulder reduction techniques.

    PubMed

    Alkaduhimi, H; van der Linde, J A; Willigenburg, N W; van Deurzen, D F P; van den Bekerom, M P J

    2017-05-01

    To identify the optimal technique for closed reduction for shoulder instability, based on success rates, reduction time, complication risks, and pain level. A PubMed and EMBASE query was performed, screening all relevant literature of closed reduction techniques mentioning the success rate written in English, Dutch, German, and Arabic. Studies with a fracture dislocation or lacking information on success rates for closed reduction techniques were excluded. We used the modified Coleman Methodology Score (CMS) to assess the quality of included studies and excluded studies with a poor methodological quality (CMS < 50). Finally, a meta-analysis was performed on the data from all studies combined. 2099 studies were screened for their title and abstract, of which 217 studies were screened full-text and finally 13 studies were included. These studies included 9 randomized controlled trials, 2 retrospective comparative studies, and 2 prospective non-randomized comparative studies. A combined analysis revealed that scapular manipulation is the most successful (97%), fastest (1.75 min), and least painful reduction technique (VAS 1.47); the "Fast, Reliable, and Safe" (FARES) method also scores high in terms of successful reduction (92%), reduction time (2.24 min), and intra-reduction pain (VAS 1.59); the traction-countertraction technique is highly successful (95%), but slower (6.05 min) and more painful (VAS 4.75). For closed reduction of anterior shoulder dislocations, the combined data from the selected studies indicate that scapular manipulation is the most successful and fastest technique, with the shortest mean hospital stay and least pain during reduction. The FARES method seems the best alternative.

  12. Biosecurity Risk Assessment Methodology (BioRAM) v. 2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CASKEY, SUSAN; GAUDIOSO, JENNIFER; SALERNO, REYNOLDS

    Sandia National Laboratories International Biological Threat Reduction Dept (SNL/IBTR) has an ongoing mission to enhance biosecurity assessment methodologies, tools, and guidance. These will aid labs seeking to implement biosecurity as advocated in the WHO's recently released Biorisk Management: Laboratory Biosecurity Guidance. BioRAM 2.0 is the software tool developed initially using the SNL LDRD process and designed to complement the "Laboratory Biosecurity Risk Handbook" written by Ren Salerno and Jennifer Gaudioso defining biosecurity risk assessment methodologies.

  13. Quantifying the impact of community quarantine on SARS transmission in Ontario: estimation of secondary case count difference and number needed to quarantine.

    PubMed

    Bondy, Susan J; Russell, Margaret L; Laflèche, Julie Ml; Rea, Elizabeth

    2009-12-24

    Community quarantine is controversial, and the decision to use and prepare for it should be informed by specific quantitative evidence of benefit. Case-study reports on 2002-2004 SARS outbreaks have discussed the role of quarantine in the community in transmission. However, this literature has not yielded quantitative estimates of the reduction in secondary cases attributable to quarantine as would be seen in other areas of health policy and cost-effectiveness analysis. Using data from the 2003 Ontario, Canada, SARS outbreak, two novel expressions for the impact of quarantine are presented. Secondary Case Count Difference (SCCD) reflects reduction in the average number of transmissions arising from a SARS case in quarantine, relative to not in quarantine, at onset of symptoms. SCCD was estimated using Poisson and negative binomial regression models (with identity link function) comparing the number of secondary cases to each index case for quarantine relative to non-quarantined index cases. The inverse of this statistic is proposed as the number needed to quarantine (NNQ) to prevent one additional secondary transmission. Our estimated SCCD was 0.133 fewer secondary cases per quarantined versus non-quarantined index case; and an NNQ of 7.5 exposed individuals to be placed in community quarantine to prevent one additional case of transmission in the community. This analysis suggests quarantine can be an effective preventive measure, although these estimates lack statistical precision. Relative to other health policy areas, the literature on quarantine tends to lack quantitative expressions of effectiveness, or agreement on how best to report differences in outcomes attributable to control measures. We hope to further this discussion through presentation of means to calculate and express the impact of population control measures. The study of quarantine effectiveness presents several methodological and statistical challenges. Further research and discussion are needed to understand the costs and benefits of enacting quarantine, and this includes a discussion of how quantitative benefit should be communicated to decision-makers and the public, and evaluated.
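
    The arithmetic behind the two statistics is straightforward: with a single binary covariate, an identity-link Poisson regression gives the same point estimate as the simple difference in mean secondary case counts, and the NNQ is its inverse. A minimal sketch on made-up line-list counts (not the Ontario data):

      import numpy as np

      # Hypothetical secondary-case counts per index case
      quarantined     = np.array([0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 1, 0])
      not_quarantined = np.array([0, 1, 0, 3, 0, 2, 0, 1, 0, 4])

      sccd = not_quarantined.mean() - quarantined.mean()   # fewer cases per quarantined index case
      nnq  = 1.0 / sccd                                    # persons quarantined per transmission averted

      print(f"SCCD: {sccd:.3f} fewer secondary cases per quarantined index case")
      print(f"NNQ : {nnq:.1f} individuals quarantined to prevent one additional transmission")

    The paper's estimates (SCCD of 0.133, NNQ of 7.5) come from regression models that also accommodate overdispersion via a negative binomial formulation.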

  14. Quantifying the impact of community quarantine on SARS transmission in Ontario: estimation of secondary case count difference and number needed to quarantine

    PubMed Central

    2009-01-01

    Background Community quarantine is controversial, and the decision to use and prepare for it should be informed by specific quantitative evidence of benefit. Case-study reports on 2002-2004 SARS outbreaks have discussed the role of quarantine in the community in transmission. However, this literature has not yielded quantitative estimates of the reduction in secondary cases attributable to quarantine as would be seen in other areas of health policy and cost-effectiveness analysis. Methods Using data from the 2003 Ontario, Canada, SARS outbreak, two novel expressions for the impact of quarantine are presented. Secondary Case Count Difference (SCCD) reflects reduction in the average number of transmissions arising from a SARS case in quarantine, relative to not in quarantine, at onset of symptoms. SCCD was estimated using Poisson and negative binomial regression models (with identity link function) comparing the number of secondary cases to each index case for quarantine relative to non-quarantined index cases. The inverse of this statistic is proposed as the number needed to quarantine (NNQ) to prevent one additional secondary transmission. Results Our estimated SCCD was 0.133 fewer secondary cases per quarantined versus non-quarantined index case; and an NNQ of 7.5 exposed individuals to be placed in community quarantine to prevent one additional case of transmission in the community. This analysis suggests quarantine can be an effective preventive measure, although these estimates lack statistical precision. Conclusions Relative to other health policy areas, the literature on quarantine tends to lack quantitative expressions of effectiveness, or agreement on how best to report differences in outcomes attributable to control measures. We hope to further this discussion through presentation of means to calculate and express the impact of population control measures. The study of quarantine effectiveness presents several methodological and statistical challenges. Further research and discussion are needed to understand the costs and benefits of enacting quarantine, and this includes a discussion of how quantitative benefit should be communicated to decision-makers and the public, and evaluated. PMID:20034405

  15. Normative standards for land use in Vermont: Implications for biodiversity

    USGS Publications Warehouse

    Bettigole, Charles A.; Donovan, Therese M.; Manning, Robert; Austin, John

    2014-01-01

    The conversion of natural lands to developed uses poses a great threat to global terrestrial biodiversity. Natural resource managers, tasked with managing wildlife as a public trust, require techniques for predicting how much and where wildlife habitat is likely to be converted in the future. Here, we develop a methodology to estimate the “social carrying capacity for development” – SKd – for 251 towns across the state of Vermont, USA. SKd represents town residents’ minimum acceptable human population size and level of development within town boundaries. To estimate SKd across towns within the state of Vermont (USA), as well as the average state-wide SKd, we administered a visual preference survey (n = 1505 responses) to Vermont residents, and asked respondents to rate alternative land-use scenarios in a fictional Vermont town on a scale of +4 (highly acceptable) to −4 (highly unacceptable). We additionally collected demographic data such as age and income, as well as ancillary information such as participation in town-planning meetings and location of residence. We used model selection and AIC to fit a cubic function to the response data, allowing us to estimate SKd at a town scale based on town demographic characteristics. On average, Vermonters had an SKd of 9.1% development on the landscape; this estimate is 68% higher than year 2000 levels for development (5.4%). Respondents indicated that management action to curb development was appropriate at 9.4% development (roughly the statewide SKd average). Management at the local, regional, and state levels was considered acceptable for curbing development, while federal-level management of development was considered unacceptable. Given a scenario where development levels were at SKd, we predicted a 16,753 km2 reduction in forested land (−11.16%) and a 1038 km2 reduction in farmland (−60.45%). Such changes would dramatically alter biodiversity patterns state-wide. In a companion paper, we estimate how these changes would affect the distribution of wildlife species.
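
    The SKd for a town is read off the fitted acceptability curve at the point where ratings cross from acceptable to unacceptable. A minimal sketch of that step with invented mean ratings (the study fits cubic models selected by AIC to the full survey responses, conditioned on town demographics):

      import numpy as np

      dev_pct = np.array([2, 4, 6, 8, 10, 12, 15, 20])                   # % of town developed
      rating  = np.array([3.2, 2.6, 1.8, 0.9, -0.2, -1.1, -2.3, -3.4])   # mean acceptability (+4 to -4)

      coeffs = np.polyfit(dev_pct, rating, deg=3)                        # cubic acceptability curve
      roots  = np.roots(coeffs)
      real   = roots[np.isreal(roots)].real
      skd    = real[(real > dev_pct.min()) & (real < dev_pct.max())]     # zero crossing within the observed range

      print(f"estimated SKd: {skd[0]:.1f}% development")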

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashford, Mike

    The report describes the prospects for energy efficiency and greenhouse gas emissions reductions in Mexico, along with renewable energy potential. A methodology for developing emissions baselines is shown, in order to prepare project emissions reductions calculations. An application to the USIJI program was also prepared through this project, for a portfolio of energy efficiency projects.

  17. GENERIC VERIFICATION PROTOCOL FOR THE VERIFICATION OF PESTICIDE SPRAY DRIFT REDUCTION TECHNOLOGIES FOR ROW AND FIELD CROPS

    EPA Science Inventory

    This ETV program generic verification protocol was prepared and reviewed for the Verification of Pesticide Drift Reduction Technologies project. The protocol provides a detailed methodology for conducting and reporting results from a verification test of pesticide drift reductio...

  18. Reductions in Average Lengths of Stays for Surgical Procedures Between the 2008 and 2014 United States National Inpatient Samples Were Not Associated With Greater Incidences of Use of Postacute Care Facilities.

    PubMed

    Dexter, Franklin; Epstein, Richard H

    2018-03-01

    Diagnosis-related group (DRG) based reimbursement creates incentives for reduction in hospital length of stay (LOS). Such reductions might be accomplished by lesser incidences of discharges to home. However, we previously reported that, while controlling for DRG, each 1-day decrease in hospital median LOS was associated with lesser odds of transfer to a postacute care facility (P = .0008). The result, though, was limited to elective admissions, 15 common surgical DRGs, and the 2013 US National Readmission Database. We studied the same potential relationship between decreased LOS and postacute care using different methodology and over 2 different years. The observational study was performed using summary measures from the 2008 and 2014 US National Inpatient Sample, with 3 types of categories (strata): (1) Clinical Classifications Software's classes of procedures (CCS), (2) DRGs including a major operating room procedure during hospitalization, or (3) CCS limiting patients to those with US Medicare as the primary payer. Greater reductions in the mean LOS were associated with smaller percentages of patients with disposition to postacute care. Analyzed using 72 different CCSs, 174 DRGs, or 70 CCSs limited to Medicare patients, each pairwise reduction in the mean LOS by 1 day was associated with an estimated 2.6% ± 0.4%, 2.3% ± 0.3%, or 2.4% ± 0.3% (absolute) pairwise reduction in the mean incidence of use of postacute care, respectively. These 3 results obtained using bivariate weighted least squares linear regression were all P < .0001, as were the corresponding results obtained using unweighted linear regression or the Spearman rank correlation. In the United States, reductions in hospital LOS, averaged over many surgical procedures, are not accomplished through a greater incidence of use of postacute care.

  19. Linking trading ratio with TMDL (total maximum daily load) allocation matrix and uncertainty analysis.

    PubMed

    Zhang, H X

    2008-01-01

    An innovative approach for total maximum daily load (TMDL) allocation and implementation is watershed-based pollutant trading. Given the inherent scientific uncertainty in the tradeoffs between point and nonpoint sources, the setting of trading ratios can be a contentious issue and has already been listed as an obstacle by several pollutant trading programs. One of the fundamental reasons that a trading ratio is often set higher (e.g. greater than 2) is to allow for uncertainty in the level of control needed to attain water quality standards, and to provide a buffer in case traded reductions are less effective than expected. However, most of the available studies did not provide an approach to explicitly address the determination of the trading ratio, and uncertainty analysis has rarely been linked to it. This paper presents a practical methodology for estimating an "equivalent trading ratio" (ETR) and links uncertainty analysis with trading ratio determination in the TMDL allocation process. Determination of the ETR can provide a preliminary evaluation of the tradeoffs between various combinations of point and nonpoint source control strategies on ambient water quality improvement. A greater portion of NPS load reduction in the overall TMDL load reduction generally correlates with greater uncertainty and thus requires a greater trading ratio. Rigorous quantification of the trading ratio will strengthen the scientific basis, and thus public perception, for more informed decisions in watershed-based pollutant trading programs. (c) IWA Publishing 2008.

  20. Estimation of the limit of detection in semiconductor gas sensors through linearized calibration models.

    PubMed

    Burgués, Javier; Jiménez-Soto, Juan Manuel; Marco, Santiago

    2018-07-12

    The limit of detection (LOD) is a key figure of merit in chemical sensing. However, the estimation of this figure of merit is hindered by the non-linear calibration curve characteristic of semiconductor gas sensor technologies such as metal oxide (MOX) sensors, gasFETs, or thermoelectric sensors. Additionally, chemical sensors suffer from cross-sensitivities and temporal stability problems. The application of the International Union of Pure and Applied Chemistry (IUPAC) recommendations for univariate LOD estimation in non-linear semiconductor gas sensors is not straightforward due to the strong statistical requirements of the IUPAC methodology (linearity, homoscedasticity, normality). Here, we propose a methodological approach to LOD estimation through linearized calibration models. As an example, the methodology is applied to the detection of low concentrations of carbon monoxide using MOX gas sensors in a scenario where the main source of error is the presence of uncontrolled levels of humidity. Copyright © 2018 Elsevier B.V. All rights reserved.
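
    One simple way to work through a linearized calibration is sketched below, assuming a hypothetical power-law MOX response (log R linear in log C) and hypothetical blank replicates: a blank-based signal threshold is inverted through the fitted log-log calibration. The paper itself treats the statistical assumptions (linearity, homoscedasticity, normality) behind such IUPAC-style estimates far more carefully than this toy example:

      import numpy as np

      conc     = np.array([0.5, 1, 2, 5, 10, 20])             # ppm CO, hypothetical
      response = np.array([1.8, 2.9, 4.6, 8.9, 14.2, 22.5])   # sensor response, hypothetical
      blanks   = np.array([1.02, 0.95, 1.10, 0.99, 1.05])     # blank-air responses, hypothetical

      # Linearized calibration: log(response) = slope * log(conc) + intercept
      slope, intercept = np.polyfit(np.log(conc), np.log(response), 1)

      # Signal threshold from blank statistics, inverted through the calibration curve
      signal_lod = blanks.mean() + 3.3 * blanks.std(ddof=1)
      c_lod      = np.exp((np.log(signal_lod) - intercept) / slope)

      print(f"calibration: log R = {slope:.2f} log C + {intercept:.2f}")
      print(f"estimated LOD: {c_lod:.2f} ppm CO")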

  1. Applications of physiological bases of ageing to forensic sciences. Estimation of age-at-death.

    PubMed

    C Zapico, Sara; Ubelaker, Douglas H

    2013-03-01

    Age-at-death estimation is one of the main challenges in forensic sciences since it contributes to the identification of individuals. There are many anthropological techniques to estimate the age at death in children and adults. However, in adults this methodology is less accurate and requires population specific references. For that reason, new methodologies have been developed. Biochemical methods are based on the natural process of ageing, which induces different biochemical changes that lead to alterations in cells and tissues. In this review, we describe different attempts to estimate the age in adults based on these changes. Chemical approaches imply modifications in molecules or accumulation of some products. Molecular biology approaches analyze the modifications in DNA and chromosomes. Although the most accurate technique appears to be aspartic acid racemization, it is important to take into account the other techniques because the forensic context and the human remains available will determine the possibility to apply one or another methodology. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Experimental Validation of Strategy for the Inverse Estimation of Mechanical Properties and Coefficient of Friction in Flat Rolling

    NASA Astrophysics Data System (ADS)

    Yadav, Vinod; Singh, Arbind Kumar; Dixit, Uday Shanker

    2017-08-01

    Flat rolling is one of the most widely used metal forming processes. Proper control and optimization of the process require modelling, which in turn requires input data on material properties and friction. In batch-production rolling with newer materials, it may be difficult to determine these input parameters offline. In view of this, the present work experimentally verifies a methodology for determining the parameters online from measurements of exit temperature and slip. It is observed that the inverse prediction of the input parameters can be done with reasonable accuracy. It was also found experimentally that micro-hardness correlates with the flow stress of the material; however, the correlation between surface roughness and reduction is less clear.

  3. Artificial Neural Identification and LMI Transformation for Model Reduction-Based Control of the Buck Switch-Mode Regulator

    NASA Astrophysics Data System (ADS)

    Al-Rabadi, Anas N.

    2009-10-01

    This research introduces a new intelligent control method for the Buck converter using a newly developed small-signal model of the pulse width modulation (PWM) switch. The new method uses a supervised neural network to estimate certain parameters of the transformed system matrix [Ã]. Then, a numerical algorithm used in robust control, the linear matrix inequality (LMI) optimization technique, is used to determine the permutation matrix [P] so that a complete system transformation {[B˜], [C˜], [Ẽ]} is possible. The transformed model is then reduced using the method of singular perturbation, and state feedback control is applied to enhance system performance. The experimental results show that the new control methodology simplifies the Buck converter model and thus allows a simpler controller that produces the desired system response and enhances performance.

  4. Optimization of educational paths for higher education

    NASA Astrophysics Data System (ADS)

    Tarasyev, Alexandr A.; Agarkov, Gavriil; Medvedev, Aleksandr

    2017-11-01

    In our research, we combine the theory of economic behavior with a methodology for increasing the efficiency of human capital to estimate optimal educational paths. We provide an optimization model of the higher education process to analyze possible educational paths for each rational individual. The preferences of each rational individual are compared to the best economically feasible educational path. The main factor in individual choice, which shapes the optimal educational path, is the expected salary level in the chosen economic sector after graduation. Another factor that influences economic profit is the reduction of educational costs or the availability of budget support for the student. The main outcome of this research is a correction of governmental policy on investment in human capital, based on the results of optimal control of educational paths.

  5. Approach for Assessing Direct Flood Damages

    NASA Astrophysics Data System (ADS)

    Gaňová, Lenka; Zeleňáková, Martina; Słyś, Daniel; Purcz, Pavol

    2014-11-01

    This article presents a methodological approach to the assessment of direct tangible flood damage (damage to assets) and direct intangible damage (environmental damage and loss of life). The assessment of flood risk is an essential part of the risk management approach, which is the conceptual basis for EU Directive 2007/60/EC on the assessment and management of flood risk. The purpose of this directive is to establish a framework for the assessment and management of flood risk, aiming at the reduction of the adverse consequences for human health, the environment, cultural heritage and economic activity associated with floods in the Community. Overall, accurate estimation of the negative effects on assets, the environment and people is important in order to determine the economic, environmental and social flood risk level in a system and the effects of risk mitigation measures.

  6. Analysis of longitudinal marginal structural models.

    PubMed

    Bryan, Jenny; Yu, Zhuo; Van Der Laan, Mark J

    2004-07-01

    In this article we construct and study estimators of the causal effect of a time-dependent treatment on survival in longitudinal studies. We employ a particular marginal structural model (MSM), proposed by Robins (2000), and follow a general methodology for constructing estimating functions in censored data models. The inverse probability of treatment weighted (IPTW) estimator of Robins et al. (2000) is used as an initial estimator and forms the basis for an improved, one-step estimator that is consistent and asymptotically linear when the treatment mechanism is consistently estimated. We extend these methods to handle informative censoring. The proposed methodology is employed to estimate the causal effect of exercise on mortality in a longitudinal study of seniors in Sonoma County. A simulation study demonstrates the bias of naive estimators in the presence of time-dependent confounders and also shows the efficiency gain of the IPTW estimator, even in the absence of such confounding. The efficiency gain of the improved, one-step estimator is demonstrated through simulation.
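
    A minimal sketch of the IPTW idea for a single time point, on simulated data with one confounder and a binary treatment; the article's estimators additionally handle time-dependent treatments, informative censoring, and a one-step efficiency improvement that this toy example omits:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 5000
      confounder = rng.normal(size=n)
      treat = rng.binomial(1, 1 / (1 + np.exp(-0.8 * confounder)))      # treatment depends on the confounder
      outcome = -1.0 * treat + 1.5 * confounder + rng.normal(size=n)    # true treatment effect = -1.0

      naive = outcome[treat == 1].mean() - outcome[treat == 0].mean()   # confounded comparison

      # Estimate treatment probabilities and form inverse-probability weights
      ps = LogisticRegression().fit(confounder[:, None], treat).predict_proba(confounder[:, None])[:, 1]
      w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))

      # Weighted group means recover the marginal treatment effect
      iptw = (np.sum(w * treat * outcome) / np.sum(w * treat)
              - np.sum(w * (1 - treat) * outcome) / np.sum(w * (1 - treat)))
      print(f"naive: {naive:+.2f}   IPTW: {iptw:+.2f}   truth: -1.00")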

  7. Improved population estimates through the use of auxiliary information

    USGS Publications Warehouse

    Johnson, D.H.; Ralph, C.J.; Scott, J.M.

    1981-01-01

    When estimating the size of a population of birds, the investigator may have, in addition to an estimator based on a statistical sample, information on one of several auxiliary variables, such as: (1) estimates of the population made on previous occasions, (2) measures of habitat variables associated with the size of the population, and (3) estimates of the population sizes of other species that correlate with the species of interest. Although many studies have described the relationships between each of these kinds of data and the population size to be estimated, very little work has been done to improve the estimator by incorporating such auxiliary information. A statistical methodology termed 'empirical Bayes' seems to be appropriate to these situations. The potential that empirical Bayes methodology has for improved estimation of the population size of the Mallard (Anas platyrhynchos) is explored. In the example considered, three empirical Bayes estimators were found to reduce the error by one-fourth to one-half of that of the usual estimator.
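
    One simple way to use such auxiliary information is a composite (shrinkage) estimator that pulls each direct survey estimate toward a habitat-based prediction, with weights set empirically from the data. The numbers below are invented for illustration; the paper evaluates several empirical Bayes estimators of this flavor for Mallard counts:

      import numpy as np

      # Direct survey estimates and their sampling variances (hypothetical)
      direct   = np.array([120.0, 80.0, 215.0, 55.0, 142.0])
      samp_var = np.array([100.0, 80.0, 250.0, 60.0, 150.0])
      # Auxiliary prediction from habitat covariates, e.g. wetland area (hypothetical)
      habitat  = np.array([100.0, 95.0, 190.0, 70.0, 160.0])

      # Empirically estimate the variance of the "prior" (habitat model) from the residuals
      resid = direct - habitat
      prior_var = max(np.mean(resid**2) - np.mean(samp_var), 0.0)

      # Shrinkage: weight on the direct estimate grows as its sampling variance shrinks
      weight = prior_var / (prior_var + samp_var)
      eb = weight * direct + (1 - weight) * habitat

      for d, h, e in zip(direct, habitat, eb):
          print(f"direct {d:6.1f}  habitat {h:6.1f}  empirical Bayes {e:6.1f}")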

  8. JEDI Methodology | Jobs and Economic Development Impact Models | NREL

    Science.gov Websites

    The intent of the Jobs and Economic Development Impact (JEDI) models is to demonstrate, based on project costs, the employment and economic impacts that will likely result from specific scenarios, providing an estimate of overall economic impacts. Please see Limitations of JEDI Models for further details.

  9. Evaluation of probable maximum snow accumulation: Development of a methodology for climate change studies

    NASA Astrophysics Data System (ADS)

    Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick

    2016-06-01

    Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so that the ensuing spring PMF is a reasonable estimate. This is of particular importance in times of climate change (CC), since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology, with precipitable water as the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio used to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (the so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool for estimating the relative change of the PMSA. Absolute results are in the same order of magnitude as those obtained with the traditional method and observed data, but they are also found to depend strongly on the climate projection used and to show spatial variability.

  10. Development of Methodologies for the Estimation of Thermal Properties Associated with Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.

    1996-01-01

    A thermal stress analysis is an important aspect in the design of aerospace structures and vehicles such as the High Speed Civil Transport (HSCT) at the National Aeronautics and Space Administration Langley Research Center (NASA-LaRC). These structures are complex and are often composed of numerous components fabricated from a variety of different materials. The thermal loads on these structures induce temperature variations within the structure, which in turn result in the development of thermal stresses. Therefore, a thermal stress analysis requires knowledge of the temperature distributions within the structures, which consequently necessitates accurate knowledge of the thermal properties, boundary conditions and thermal interface conditions associated with the structural materials. The goal of this proposed multi-year research effort was to develop estimation methodologies for the determination of the thermal properties and interface conditions associated with aerospace vehicles. Specific objectives focused on the development and implementation of optimal experimental design strategies and methodologies for the estimation of thermal properties associated with simple composite and honeycomb structures. The strategy used in this multi-year research effort was to first develop methodologies for relatively simple systems and then systematically modify these methodologies to analyze complex structures. This can be thought of as a building block approach. This strategy was intended to promote maximum usability of the resulting estimation procedure by NASA-LaRC researchers through the design of in-house experimentation procedures and through the use of existing general-purpose finite element software.

  11. A methodology for least-squares local quasi-geoid modelling using a noisy satellite-only gravity field model

    NASA Astrophysics Data System (ADS)

    Klees, R.; Slobbe, D. C.; Farahani, H. H.

    2018-04-01

    The paper is about a methodology to combine a noisy satellite-only global gravity field model (GGM) with other noisy datasets to estimate a local quasi-geoid model using weighted least-squares techniques. In this way, we attempt to improve the quality of the estimated quasi-geoid model and to complement it with a full noise covariance matrix for quality control and further data processing. The methodology goes beyond the classical remove-compute-restore approach, which does not account for the noise in the satellite-only GGM. We suggest and analyse three different approaches of data combination. Two of them are based on a local single-scale spherical radial basis function (SRBF) model of the disturbing potential, and one is based on a two-scale SRBF model. Using numerical experiments, we show that a single-scale SRBF model does not fully exploit the information in the satellite-only GGM. We explain this by a lack of flexibility of a single-scale SRBF model to deal with datasets of significantly different bandwidths. The two-scale SRBF model performs well in this respect, provided that the model coefficients representing the two scales are estimated separately. The corresponding methodology is developed in this paper. Using the statistics of the least-squares residuals and the statistics of the errors in the estimated two-scale quasi-geoid model, we demonstrate that the developed methodology provides a two-scale quasi-geoid model, which exploits the information in all datasets.

  12. The IDF Diabetes Atlas methodology for estimating global prevalence of hyperglycaemia in pregnancy.

    PubMed

    Linnenkamp, U; Guariguata, L; Beagley, J; Whiting, D R; Cho, N H

    2014-02-01

    Hyperglycaemia is one of the most prevalent metabolic disorders occurring during pregnancy. Limited data are available on the global prevalence of hyperglycaemia in pregnancy. The International Diabetes Federation (IDF) has developed a methodology for generating estimates of the prevalence of hyperglycaemia in pregnancy, including hyperglycaemia first detected in pregnancy and live births to women with known diabetes, among women of childbearing age (20-49 years). A systematic review of the literature for studies reporting the prevalence of gestational diabetes was conducted. Studies were evaluated and scored to favour those that were representative of a large population, conducted recently, reported age-specific estimates, and based case identification on a blood test. Age-specific prevalence data from studies were entered to produce estimates for five-year age groups using logistic regression to smooth curves, with age as the independent variable. The derived age-specific prevalence was adjusted for differences in diagnostic criteria in the underlying data. Cases of hyperglycaemia in pregnancy were derived from age-specific estimates of fertility and age-specific population estimates. Country-specific estimates were generated for countries with available data. Regional and global estimates were generated based on aggregation and extrapolation for 219 countries and territories. Available fertility rates and diabetes prevalence estimates were used to estimate the proportion of hyperglycaemia in pregnancy that may be due to total diabetes in pregnancy - pregnancy in women with known diabetes and diabetes first detected in pregnancy. The literature review identified 199 studies that were eligible for characterisation and selection. After scoring and exclusion requirements, 46 studies were selected representing 34 countries. More than 50% of the selected studies came from Europe, North America, and the Caribbean. The smallest number of identified studies came from sub-Saharan Africa. The majority of studies were for high-income countries, although low- and middle-income countries were also represented. Prevalence estimates of hyperglycaemia in pregnancy are sensitive to the data from which they are derived. The IDF methodology is a transparent, reproducible, and modifiable method for estimating the burden of hyperglycaemia in pregnancy. More data are needed, in particular from developing countries, to strengthen the methodology. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. Estimating age-based antiretroviral therapy costs for HIV-infected children in resource-limited settings based on World Health Organization weight-based dosing recommendations.

    PubMed

    Doherty, Kathleen; Essajee, Shaffiq; Penazzato, Martina; Holmes, Charles; Resch, Stephen; Ciaranello, Andrea

    2014-05-02

    Pediatric antiretroviral therapy (ART) has been shown to substantially reduce morbidity and mortality in HIV-infected infants and children. To accurately project program costs, analysts need accurate estimates of antiretroviral drug (ARV) costs for children. However, the costing of pediatric antiretroviral therapy is complicated by weight-based dosing recommendations, which change as children grow. We developed a step-by-step methodology for estimating the cost of pediatric ARV regimens for children aged 0-13 years. The costing approach incorporates weight-based dosing recommendations to provide estimated ARV doses throughout childhood development. Published unit drug costs are then used to calculate average monthly drug costs. We compared our derived monthly ARV costs to published estimates to assess the accuracy of our methodology. Estimates of monthly ARV costs are provided for six commonly used first-line pediatric ARV regimens, considering three possible care scenarios. The costs derived in our analysis for children were fairly comparable to or slightly higher than available published ARV drug or regimen estimates. The methodology described here can be used to provide an accurate estimation of pediatric ARV regimen costs for cost-effectiveness analysts to project the optimum packages of care for HIV-infected children, as well as for program administrators and budget analysts who wish to assess the feasibility of increasing pediatric ART availability in constrained budget environments.
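
    The weight-band costing step reduces to a lookup plus arithmetic once the dosing bands and unit prices are in hand. A minimal sketch with invented bands, unit cost, and dose levels (the paper derives its bands from WHO weight-based dosing recommendations and published unit drug costs):

      # Hypothetical weight bands (kg) -> tablets per day for a fixed-dose regimen
      weight_bands = [
          ((3.0, 5.9),   1.0),
          ((6.0, 9.9),   1.5),
          ((10.0, 13.9), 2.0),
          ((14.0, 19.9), 2.5),
          ((20.0, 24.9), 3.0),
      ]
      UNIT_COST = 0.08        # USD per paediatric tablet, hypothetical
      DAYS_PER_MONTH = 30.4

      def monthly_cost(weight_kg):
          # Monthly ARV cost for a child of the given weight, per the hypothetical bands
          for (lo, hi), tablets in weight_bands:
              if lo <= weight_kg <= hi:
                  return tablets * UNIT_COST * DAYS_PER_MONTH
          raise ValueError(f"weight {weight_kg} kg outside the modelled bands")

      # Example: cost trajectory as a child grows from 5 kg to 22 kg
      for w in (5, 8, 12, 17, 22):
          print(f"{w:>2} kg: ${monthly_cost(w):.2f} per month")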

  14. Infant Mortality

    MedlinePlus

    ... estimating the gestational age of a newborn. These methodological changes prevent the direct comparison of trends prior ... high of 12.8 percent in 2006. A methodological change caused a sharp decline from 2006 to ...

  15. Estimated trichloroethene transformation rates due to naturally occurring biodegradation in a fractured-rock aquifer

    USGS Publications Warehouse

    Chapelle, Francis H.; Lacombe, Pierre J.; Bradley, Paul M.

    2012-01-01

    Rates of trichloroethene (TCE) mass transformed by naturally occurring biodegradation processes in a fractured rock aquifer underlying a former Naval Air Warfare Center (NAWC) site in West Trenton, New Jersey, were estimated. The methodology included (1) dividing the site into eight elements of equal size and vertically integrating observed concentrations of two daughter products of TCE biodegradation, cis-dichloroethene (cis-DCE) and chloride, using water chemistry data from a network of 88 observation wells; (2) summing the molar mass of cis-DCE, the first biodegradation product of TCE, to provide a probable underestimate of reductive biodegradation of TCE; and (3) summing the molar mass of chloride, the final product of chlorinated ethene degradation, to provide a probable overestimate of overall biodegradation. Finally, lower and higher estimates of aquifer porosities and groundwater residence times were used to estimate a range of overall transformation rates. The highest TCE transformation rate estimated using this procedure for the combined overburden and bedrock aquifers was 945 kg/yr, and the lowest was 37 kg/yr. However, hydrologic considerations suggest that approximately 100 to 500 kg/yr is the probable range for overall TCE transformation rates in this system. Estimated rates of TCE transformation were much higher in shallow overburden sediments (approximately 100 to 500 kg/yr) than in the deeper bedrock aquifer (approximately 20 to 0.15 kg/yr), which reflects the higher porosity and higher contaminant mass present in the overburden. By way of comparison, pump-and-treat operations at the NAWC site are estimated to have removed between 1,073 and 1,565 kg/yr of TCE between 1996 and 2009.
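
    The mass-summation step amounts to converting vertically integrated daughter-product concentrations into stored moles per element and dividing by a residence time. A minimal sketch with entirely hypothetical element volumes, porosities, concentrations, and residence time (the study does this for eight site elements with cis-DCE and repeats it with chloride to bracket the rate):

      import numpy as np

      MW_TCE = 131.39   # g/mol

      element_volume_m3 = np.array([4.0e5, 3.5e5, 5.0e5, 4.2e5])   # hypothetical element volumes
      porosity          = np.array([0.25, 0.22, 0.08, 0.05])       # overburden vs bedrock (hypothetical)
      cis_dce_umol_L    = np.array([8.0, 5.5, 1.2, 0.4])           # vertically integrated (hypothetical)

      # Moles of cis-DCE in storage per element; 1 m3 of water = 1000 L
      moles_cdce = cis_dce_umol_L * 1e-6 * element_volume_m3 * porosity * 1000

      # Each mole of cis-DCE implies at least one mole of TCE transformed (lower bound)
      tce_mass_kg = np.sum(moles_cdce) * MW_TCE / 1000

      residence_time_yr = 5.0   # hypothetical groundwater residence time
      print(f"lower-bound TCE transformation rate: {tce_mass_kg / residence_time_yr:.1f} kg/yr")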

  16. Valuation Diagramming and Accounting of Transactive Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makhmalbaf, Atefe; Hammerstrom, Donald J.; Huang, Qiuhua

    Transactive energy (TE) systems support both economic and technical objectives of a power system, including efficiency and reliability. TE systems utilize value-driven mechanisms to coordinate and balance responsive supply and demand in the power system. Economic performance of TE systems cannot be assessed without estimating their value. Estimating the potential value of transactive energy systems requires a systematic valuation methodology that can capture value exchanges among different stakeholders (i.e., actors) and ultimately estimate the impact of one TE design and compare it against another. Such a methodology can help decision makers choose the alternative that results in preferred outcomes. This paper presents a valuation methodology developed to assess the value of TE systems. A TE use-case example is discussed, and metrics identified in the valuation process are quantified using a TE simulation program.

  17. Estimation of snow in extratropical cyclones from multiple frequency airborne radar observations. An Expectation-Maximization approach

    NASA Astrophysics Data System (ADS)

    Grecu, M.; Tian, L.; Heymsfield, G. M.

    2017-12-01

    A major challenge in deriving accurate estimates of physical properties of falling snow particles from single frequency space- or airborne radar observations is that snow particles exhibit a large variety of shapes and their electromagnetic scattering characteristics are highly dependent on these shapes. Triple frequency (Ku-Ka-W) radar observations are expected to facilitate the derivation of more accurate snow estimates because specific snow particle shapes tend to have specific signatures in the associated two-dimensional dual-reflectivity-ratio (DFR) space. However, the derivation of accurate snow estimates from triple frequency radar observations is by no means a trivial task. This is because the radar observations can be subject to non-negligible attenuation (especially at W-band when super-cooled water is present), which may significantly impact the interpretation of the information in the DFR space. Moreover, the electromagnetic scattering properties of snow particles are computationally expensive to derive, which makes the derivation of reliable parameterizations usable in estimation methodologies challenging. In this study, we formulate a two-step Expectation Maximization (EM) methodology to derive accurate snow estimates in Extratropical Cyclones (ETCs) from triple frequency airborne radar observations. The Expectation (E) step consists of a least-squares triple frequency estimation procedure applied with given assumptions regarding the relationships between the density of snow particles and their sizes, while the Maximization (M) step consists of the optimization of the assumptions used in step E. The electromagnetic scattering properties of snow particles are derived using the Rayleigh-Gans approximation. The methodology is applied to triple frequency radar observations collected during the Olympic Mountains Experiment (OLYMPEX). Results show that snowfall estimates above the freezing level in ETCs that are consistent with the triple frequency radar observations, as well as with independent rainfall estimates below the freezing level, may be derived using the EM methodology formulated in the study.

  18. Direct one-pot reductive amination of aldehydes with nitroarenes in a domino fashion: catalysis by gum-acacia-stabilized palladium nanoparticles.

    PubMed

    Sreedhar, B; Reddy, P Surendra; Devi, D Keerthi

    2009-11-20

    This note describes the direct reductive amination of carbonyl compounds with nitroarenes using gum acacia-palladium nanoparticles, employing molecular hydrogen as the reductant. This methodology is found to be applicable to both aliphatic and aromatic aldehydes and a wide range of nitroarenes. The operational simplicity and the mild reaction conditions add to the value of this method as a practical alternative to the reductive amination of carbonyl compounds.

  19. Pros and Cons of 3D Image Fusion in Endovascular Aortic Repair: A Systematic Review and Meta-analysis.

    PubMed

    Goudeketting, Seline R; Heinen, Stefan G H; Ünlü, Çağdaş; van den Heuvel, Daniel A F; de Vries, Jean-Paul P M; van Strijen, Marco J; Sailer, Anna M

    2017-08-01

    To systematically review and meta-analyze the added value of 3-dimensional (3D) image fusion technology in endovascular aortic repair for its potential to reduce contrast media volume, radiation dose, procedure time, and fluoroscopy time. Electronic databases were systematically searched for studies published between January 2010 and March 2016 that included a control group and described 3D fusion imaging in endovascular aortic procedures. Two independent reviewers assessed the methodological quality of the included studies and extracted data on iodinated contrast volume, radiation dose, procedure time, and fluoroscopy time. Contrast use for standard and complex endovascular aortic repairs (fenestrated, branched, and chimney procedures) was pooled using a random-effects model; outcomes are reported as mean differences with 95% confidence intervals (CIs). Seven studies, 5 retrospective and 2 prospective, involving 921 patients were selected for analysis. The methodological quality of the studies was moderate (median 17, range 15-18). The use of fusion imaging led to an estimated mean reduction in iodinated contrast of 40.1 mL (95% CI 16.4 to 63.7, p=0.002) for standard procedures and of 70.7 mL (95% CI 44.8 to 96.6, p<0.001) for complex repairs. Secondary outcome measures were not pooled because of potential bias in nonrandomized data, but radiation doses, procedure times, and fluoroscopy times were lower, although not always significantly, in the fusion group in 6 of the 7 studies. Compared with the control group, 3D fusion imaging is associated with a significant reduction in the volume of contrast used for standard and complex endovascular aortic procedures, which can be particularly important in patients with renal failure. Radiation doses, procedure times, and fluoroscopy times were reduced when 3D fusion was used.
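
    The pooled contrast reductions quoted above come from a random-effects meta-analysis of per-study mean differences. The following sketch shows one standard way to do that pooling (a DerSimonian-Laird estimate of between-study variance followed by inverse-variance weighting); the three study values are hypothetical and the code does not reproduce the review's actual dataset or software.

```python
import math

# DerSimonian-Laird random-effects pooling of mean differences.
# The per-study values below are hypothetical placeholders.
studies = [  # (mean difference in mL, standard error)
    (35.0, 12.0),
    (52.0, 18.0),
    (28.0, 10.0),
]

w_fixed = [1.0 / se**2 for _, se in studies]
md = [d for d, _ in studies]

# Fixed-effect pooled estimate and Cochran's Q
pooled_fe = sum(w * d for w, d in zip(w_fixed, md)) / sum(w_fixed)
q = sum(w * (d - pooled_fe) ** 2 for w, d in zip(w_fixed, md))
df = len(studies) - 1
c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - df) / c)          # between-study variance

# Random-effects weights and pooled estimate with 95% CI
w_re = [1.0 / (se**2 + tau2) for _, se in studies]
pooled_re = sum(w * d for w, d in zip(w_re, md)) / sum(w_re)
se_re = math.sqrt(1.0 / sum(w_re))
print(f"pooled MD = {pooled_re:.1f} mL "
      f"(95% CI {pooled_re - 1.96 * se_re:.1f} to {pooled_re + 1.96 * se_re:.1f})")
```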

  20. Estimate of the benefits of a population-based reduction in dietary sodium additives on hypertension and its related health care costs in Canada.

    PubMed

    Joffres, Michel R; Campbell, Norm R C; Manns, Braden; Tu, Karen

    2007-05-01

    Hypertension is the leading risk factor for mortality worldwide. One-quarter of the adult Canadian population has hypertension, and more than 90% of the population is estimated to develop hypertension if they live an average lifespan. Reductions in dietary sodium additives significantly lower systolic and diastolic blood pressure, and population reductions in dietary sodium are recommended by major scientific and public health organizations. To estimate the reduction in hypertension prevalence and specific hypertension management cost savings associated with a population-wide reduction in dietary sodium additives. Based on data from clinical trials, reducing dietary sodium additives by 1840 mg/day would result in decreases of 5.06 mmHg in systolic and 2.7 mmHg in diastolic blood pressure. Using Canadian Heart Health Survey data, the resulting reduction in hypertension was estimated. Costs of laboratory testing and physician visits were based on 2001 to 2003 Ontario Health Insurance Plan data, and the number of physician visits and costs of medications for patients with hypertension were taken from 2003 IMS Canada. To estimate the reduction in total physician visits and laboratory costs, current estimates of aware hypertensive patients in Canada were used from the Canadian Community Health Survey. Reducing dietary sodium additives may decrease hypertension prevalence by 30%, resulting in one million fewer hypertensive patients in Canada, and almost double the treatment and control rate. Direct cost savings related to fewer physician visits, laboratory tests and lower medication use are estimated to be approximately $430 million per year. Physician visits and laboratory costs would decrease by 6.5%, and 23% fewer treated hypertensive patients would require medications for control of blood pressure. Based on these estimates, lowering dietary sodium additives would lead to a large reduction in hypertension prevalence and result in health care cost savings in Canada.

  1. Estimate of the benefits of a population-based reduction in dietary sodium additives on hypertension and its related health care costs in Canada

    PubMed Central

    Joffres, Michel R; Campbell, Norm RC; Manns, Braden; Tu, Karen

    2007-01-01

    BACKGROUND: Hypertension is the leading risk factor for mortality worldwide. One-quarter of the adult Canadian population has hypertension, and more than 90% of the population is estimated to develop hypertension if they live an average lifespan. Reductions in dietary sodium additives significantly lower systolic and diastolic blood pressure, and population reductions in dietary sodium are recommended by major scientific and public health organizations. OBJECTIVES: To estimate the reduction in hypertension prevalence and specific hypertension management cost savings associated with a population-wide reduction in dietary sodium additives. METHODS: Based on data from clinical trials, reducing dietary sodium additives by 1840 mg/day would result in decreases of 5.06 mmHg in systolic and 2.7 mmHg in diastolic blood pressure. Using Canadian Heart Health Survey data, the resulting reduction in hypertension was estimated. Costs of laboratory testing and physician visits were based on 2001 to 2003 Ontario Health Insurance Plan data, and the number of physician visits and costs of medications for patients with hypertension were taken from 2003 IMS Canada. To estimate the reduction in total physician visits and laboratory costs, current estimates of aware hypertensive patients in Canada were used from the Canadian Community Health Survey. RESULTS: Reducing dietary sodium additives may decrease hypertension prevalence by 30%, resulting in one million fewer hypertensive patients in Canada, and almost double the treatment and control rate. Direct cost savings related to fewer physician visits, laboratory tests and lower medication use are estimated to be approximately $430 million per year. Physician visits and laboratory costs would decrease by 6.5%, and 23% fewer treated hypertensive patients would require medications for control of blood pressure. CONCLUSIONS: Based on these estimates, lowering dietary sodium additives would lead to a large reduction in hypertension prevalence and result in health care cost savings in Canada. PMID:17487286

  2. A methodology for estimating risks associated with landslides of contaminated soil into rivers.

    PubMed

    Göransson, Gunnel; Norrman, Jenny; Larson, Magnus; Alén, Claes; Rosén, Lars

    2014-02-15

    Urban areas adjacent to surface water are exposed to soil movements such as erosion and slope failures (landslides). A landslide is a potential mechanism for mobilisation and spreading of pollutants. This mechanism is in general not included in environmental risk assessments for contaminated sites, and the consequences associated with contamination in the soil are typically not considered in landslide risk assessments. This study suggests a methodology to estimate the environmental risks associated with landslides in contaminated sites adjacent to rivers. The methodology is probabilistic and allows for datasets with large uncertainties and the use of expert judgements, providing quantitative estimates of probabilities for defined failures. The approach is illustrated by a case study along the river Göta Älv, Sweden, where failures are defined and probabilities for those failures are estimated. Failures are defined from a pollution perspective and in terms of exceeding environmental quality standards (EQSs) and acceptable contaminant loads. Models are then suggested to estimate probabilities of these failures. A landslide analysis is carried out to assess landslide probabilities based on data from a recent landslide risk classification study along the river Göta Älv. The suggested methodology is meant to be a supplement to either landslide risk assessment (LRA) or environmental risk assessment (ERA), providing quantitative estimates of the risks associated with landslide in contaminated sites. The proposed methodology can also act as a basis for communication and discussion, thereby contributing to intersectoral management solutions. From the case study it was found that the defined failures are governed primarily by the probability of a landslide occurring. The overall probabilities for failure are low; however, if a landslide occurs the probabilities of exceeding EQS are high and the probability of having at least a 10% increase in the contamination load within one year is also high. Copyright © 2013 Elsevier B.V. All rights reserved.
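
    At its core, the probabilistic chain described above multiplies the probability that a landslide occurs by the conditional probability that the mobilised contamination causes a defined failure (for example, exceeding an EQS). The toy sketch below shows only that chain, plus a conversion from an annual probability to a multi-year horizon; the probabilities and the independence assumption are illustrative and are not taken from the Göta Älv case study.

```python
# Toy illustration of the two-step probability logic described above:
# a defined "failure" (e.g., exceeding an EQS in the river) requires a landslide
# to occur AND the mobilised load to exceed the standard.
# All probabilities are hypothetical placeholders.
p_landslide_per_year = 0.002        # annual probability of a slope failure at the site
p_exceed_eqs_given_slide = 0.8      # probability of exceeding the EQS if a landslide occurs

p_failure_per_year = p_landslide_per_year * p_exceed_eqs_given_slide
p_failure_50yr = 1.0 - (1.0 - p_failure_per_year) ** 50   # over a 50-year horizon

print(f"annual failure probability : {p_failure_per_year:.4f}")
print(f"50-year failure probability: {p_failure_50yr:.3f}")
```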

  3. Recent Approaches to Estimate Associations Between Source-Specific Air Pollution and Health.

    PubMed

    Krall, Jenna R; Strickland, Matthew J

    2017-03-01

    Estimating health effects associated with source-specific exposure is important for better understanding how pollution impacts health and for developing policies to better protect public health. Although epidemiologic studies of sources can be informative, these studies are challenging to conduct because source-specific exposures (e.g., particulate matter from vehicles) often are not directly observed and must be estimated. We reviewed recent studies that estimated associations between pollution sources and health to identify methodological developments designed to address important challenges. Notable advances in epidemiologic studies of sources include approaches for (1) propagating uncertainty in source estimation into health effect estimates, (2) assessing regional and seasonal variability in emissions sources and source-specific health effects, and (3) addressing potential confounding in estimated health effects. Novel methodological approaches to address challenges in studies of pollution sources, particularly evaluation of source-specific health effects, are important for determining how source-specific exposure impacts health.

  4. Methodology for estimating human perception to tremors in high-rise buildings

    NASA Astrophysics Data System (ADS)

    Du, Wenqi; Goh, Key Seng; Pan, Tso-Chien

    2017-07-01

    Human perception to tremors during earthquakes in high-rise buildings is usually associated with psychological discomfort such as fear and anxiety. This paper presents a methodology for estimating the level of perception to tremors for occupants living in high-rise buildings subjected to ground motion excitations. Unlike other approaches based on empirical or historical data, the proposed methodology performs a regression analysis using the analytical results of two generic models of 15 and 30 stories. The recorded ground motions in Singapore are collected and modified for structural response analyses. Simple predictive models are then developed to estimate the perception level to tremors based on a proposed ground motion intensity parameter—the average response spectrum intensity in the period range between 0.1 and 2.0 s. These models can be used to predict the percentage of occupants in high-rise buildings who may perceive the tremors at a given ground motion intensity. Furthermore, the models are validated with two recent tremor events reportedly felt in Singapore. It is found that the estimated results match reasonably well with the reports in the local newspapers and from the authorities. The proposed methodology is applicable to urban regions where people living in high-rise buildings might feel tremors during earthquakes.
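
    The proposed intensity parameter is simply the response spectrum averaged over periods from 0.1 to 2.0 s, which is then mapped to a percentage of occupants perceiving the tremor by a fitted predictive model. The sketch below computes such an average from a synthetic spectrum and applies a purely hypothetical logistic relation in place of the paper's regression models; the spectrum and both coefficients are placeholders.

```python
import numpy as np

# Sketch of the proposed intensity parameter: the response spectrum averaged over
# periods of 0.1-2.0 s, fed into a fitted perception model. The spectrum and the
# model coefficients below are hypothetical placeholders.
periods = np.linspace(0.05, 4.0, 80)                       # s
spectrum = 0.02 * np.exp(-((periods - 0.6) / 0.8) ** 2)    # toy response spectrum (g)

mask = (periods >= 0.1) & (periods <= 2.0)
avg_intensity = spectrum[mask].mean()                      # average response spectrum intensity

# Hypothetical logistic relation between intensity and the percentage of
# high-rise occupants perceiving the tremor (stand-in for the paper's regression).
b0, b1 = -4.0, 600.0
perceived_pct = 100.0 / (1.0 + np.exp(-(b0 + b1 * avg_intensity)))
print(f"average intensity = {avg_intensity:.4f} g, "
      f"estimated {perceived_pct:.0f}% of occupants perceive the tremor")
```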

  5. Rotorcraft noise

    NASA Technical Reports Server (NTRS)

    Huston, R. J. (Compiler)

    1982-01-01

    The establishment of a realistic plan for NASA and the U.S. helicopter industry to develop a design-for-noise methodology, including plans for the identification and development of promising noise reduction technology, was discussed. Topics included: noise reduction techniques, scaling laws, empirical noise prediction, psychoacoustics, and methods of developing and validating noise prediction methods.

  6. Evaluating a Health Risk Reduction Program.

    ERIC Educational Resources Information Center

    Nagelberg, Daniel B.

    1981-01-01

    A health risk reduction program at Bowling Green State University (Ohio) tested the efficacy of peer education against the efficacy of returning (by mail) health questionnaire results. A peer health education program did not appear to be effective in changing student attitudes or lifestyles; however, the research methodology may not have been…

  7. 8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A; Cole, Wesley J; Sun, Yinong

    Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve demand over the evolution of many years or decades. Various CEM formulations are used to evaluate systems ranging in scale from states or utility service territories to national or multi-national systems. CEMs can be computationally complex, and to achieve acceptable solve times, key parameters are often estimated using simplified methods. In this paper, we focus on two of these key parameters associated with the integration of variable generation (VG) resources: capacity value and curtailment. We first discuss common modeling simplifications used in CEMs to estimate capacity value and curtailment, many of which are based on a representative subset of hours that can miss important tail events or which require assumptions about the load and resource distributions that may not match actual distributions. We then present an alternate approach that captures key elements of chronological operation over all hours of the year without the computationally intensive economic dispatch optimization typically employed within more detailed operational models. The updated methodology characterizes (1) the contribution of VG to system capacity during high load and net load hours, (2) the curtailment level of VG, and (3) the potential reductions in curtailment enabled through deployment of storage and more flexible operation of select thermal generators. We apply this alternate methodology to an existing CEM, the Regional Energy Deployment System (ReEDS). Results demonstrate that this alternate approach provides more accurate estimates of capacity value and curtailment by explicitly capturing system interactions across all hours of the year. This approach could be applied more broadly to CEMs at many different scales where hourly resource and load data are available, greatly improving the representation of challenges associated with the integration of variable generation resources.
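
    A minimal 8760-hour illustration of the two parameters discussed above is given below: capacity value approximated as the average VG output during the highest net-load hours, and curtailment as the energy by which VG exceeds load. The hourly load and VG profiles are synthetic, the 100-hour window is an arbitrary choice, and the sketch ignores storage and thermal-fleet flexibility, so it is a conceptual stand-in rather than the ReEDS implementation.

```python
import numpy as np

# Minimal 8760-hour illustration of (1) VG capacity value taken as average VG
# output during the top net-load hours and (2) VG curtailment when generation
# exceeds load. Load and VG profiles are synthetic placeholders.
rng = np.random.default_rng(0)
hours = np.arange(8760)
load = 800 + 200 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 50, 8760)   # MW
vg = np.clip(300 * np.sin(2 * np.pi * hours / 24 - 1.0), 0, None)             # MW, diurnal VG shape
vg = np.clip(vg + rng.normal(0, 20, 8760), 0, None)                           # add noise, keep non-negative
vg_capacity = 350.0                                                            # MW installed

net_load = load - vg
top_hours = np.argsort(net_load)[-100:]            # 100 highest net-load hours
capacity_credit = vg[top_hours].mean() / vg_capacity

curtailment = np.clip(vg - load, 0, None).sum()    # MWh of VG above instantaneous load
print(f"capacity credit ~ {capacity_credit:.2f}, curtailment ~ {curtailment:.0f} MWh/yr")
```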

  8. Energy efficiency analysis and implementation of AES on an FPGA

    NASA Astrophysics Data System (ADS)

    Kenney, David

    The Advanced Encryption Standard (AES) was developed by Joan Daemen and Vincent Rijmen and endorsed by the National Institute of Standards and Technology in 2001. It was designed to replace the aging Data Encryption Standard (DES) and be useful for a wide range of applications with varying throughput, area, power dissipation and energy consumption requirements. Field Programmable Gate Arrays (FPGAs) are flexible and reconfigurable integrated circuits that are useful for many different applications including the implementation of AES. Though they are highly flexible, FPGAs are often less efficient than Application Specific Integrated Circuits (ASICs); they tend to operate slower, take up more space and dissipate more power. There have been many FPGA AES implementations that focus on obtaining high throughput or low area usage, but very little research has been done in the area of low power or energy efficient FPGA based AES; in fact, it is rare for estimates on power dissipation to be made at all. This thesis presents a methodology to evaluate the energy efficiency of FPGA based AES designs and proposes a novel FPGA AES implementation which is highly flexible and energy efficient. The proposed methodology is implemented as part of a novel scripting tool, the AES Energy Analyzer, which is able to fully characterize the power dissipation and energy efficiency of FPGA based AES designs. Additionally, this thesis introduces a new FPGA power reduction technique called Opportunistic Combinational Operand Gating (OCOG) which is used in the proposed energy efficient implementation. The AES Energy Analyzer was able to estimate the power dissipation and energy efficiency of the proposed AES design during its most commonly performed operations. It was found that the proposed implementation consumes less energy per operation than any previous FPGA based AES implementations that included power estimations. Finally, the use of Opportunistic Combinational Operand Gating on an AES cipher was found to reduce its dynamic power consumption by up to 17% when compared to an identical design that did not employ the technique.

  9. Budget Reduction in the Navy

    DTIC Science & Technology

    1990-12-01

    process; (4) the degree of budgetary responsiveness in DOD/DON cutback budgeting to criteria developed from two theoretical models of fiscal reduction methodology. ... accompanying reshaping of U.S. forces include a continuation of the positive developments in Eastern Europe and the Soviet Union, completion of

  10. The Medicare Hospital Readmissions Reduction Program: potential unintended consequences for hospitals serving vulnerable populations.

    PubMed

    Gu, Qian; Koenig, Lane; Faerberg, Jennifer; Steinberg, Caroline Rossi; Vaz, Christopher; Wheatley, Mary P

    2014-06-01

    To explore the impact of the Hospital Readmissions Reduction Program (HRRP) on hospitals serving vulnerable populations. Medicare inpatient claims to calculate condition-specific readmission rates. Medicare cost reports and other sources to determine a hospital's share of duals, profit margin, and characteristics. Regression analyses and projections were used to estimate risk-adjusted readmission rates and financial penalties under the HRRP. Findings were compared across groups of hospitals, determined based on their share of duals, to assess differential impacts of the HRRP. Both patient dual-eligible status and a hospital's dual-eligible share of Medicare discharges have a positive impact on risk-adjusted hospital readmission rates. Under current Centers for Medicare and Medicaid Service methodology, which does not adjust for socioeconomic status, high-dual hospitals are more likely to have excess readmissions than low-dual hospitals. As a result, HRRP penalties will disproportionately fall on high-dual hospitals, which are more likely to have negative all-payer margins, raising concerns of unintended consequences of the program for vulnerable populations. Policies to reduce hospital readmissions must balance the need to ensure continued access to quality care for vulnerable populations. © Health Research and Educational Trust.

  11. The Medicare Hospital Readmissions Reduction Program: Potential Unintended Consequences for Hospitals Serving Vulnerable Populations

    PubMed Central

    Gu, Qian; Koenig, Lane; Faerberg, Jennifer; Steinberg, Caroline Rossi; Vaz, Christopher; Wheatley, Mary P

    2014-01-01

    Objective To explore the impact of the Hospital Readmissions Reduction Program (HRRP) on hospitals serving vulnerable populations. Data Sources/Study Setting Medicare inpatient claims to calculate condition-specific readmission rates. Medicare cost reports and other sources to determine a hospital's share of duals, profit margin, and characteristics. Study Design Regression analyses and projections were used to estimate risk-adjusted readmission rates and financial penalties under the HRRP. Findings were compared across groups of hospitals, determined based on their share of duals, to assess differential impacts of the HRRP. Principal Findings Both patient dual-eligible status and a hospital's dual-eligible share of Medicare discharges have a positive impact on risk-adjusted hospital readmission rates. Under current Centers for Medicare and Medicaid Service methodology, which does not adjust for socioeconomic status, high-dual hospitals are more likely to have excess readmissions than low-dual hospitals. As a result, HRRP penalties will disproportionately fall on high-dual hospitals, which are more likely to have negative all-payer margins, raising concerns of unintended consequences of the program for vulnerable populations. Conclusions Policies to reduce hospital readmissions must balance the need to ensure continued access to quality care for vulnerable populations. PMID:24417309

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKemmish, Laura K., E-mail: laura.mckemmish@gmail.com; Research School of Chemistry, Australian National University, Canberra

    Algorithms for the efficient calculation of two-electron integrals in the newly developed mixed ramp-Gaussian basis sets are presented, alongside a Fortran90 implementation of these algorithms, RAMPITUP. These new basis sets have significant potential to (1) give some speed-up (estimated at up to 20% for large molecules in fully optimised code) to general-purpose Hartree-Fock (HF) and density functional theory quantum chemistry calculations, replacing all-Gaussian basis sets, and (2) give very large speed-ups for calculations of core-dependent properties, such as electron density at the nucleus, NMR parameters, relativistic corrections, and total energies, replacing the current use of Slater basis functions or very large specialised all-Gaussian basis sets for these purposes. This initial implementation already demonstrates roughly 10% speed-ups in HF/R-31G calculations compared to HF/6-31G calculations for large linear molecules, demonstrating the promise of this methodology, particularly for the second application. As well as the reduction in the total primitive number in R-31G compared to 6-31G, this timing advantage can be attributed to the significant reduction in the number of mathematically complex intermediate integrals after modelling each ramp-Gaussian basis-function-pair as a sum of ramps on a single atomic centre.

  13. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter

    PubMed Central

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan

    2018-01-01

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively suppresses the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509

  14. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.

    PubMed

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan

    2018-02-06

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively suppresses the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
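
    The top-level fusion step described above combines local estimates by linear minimum variance, which for independent local errors reduces to inverse-covariance weighting. The two-sensor toy example below shows that weighting with made-up estimates and covariances; it omits the unscented transformation, the adaptive fading factor, and the Mahalanobis-distance test of the full methodology.

```python
import numpy as np

# Toy two-sensor version of the top-level fusion step: combine local state
# estimates by inverse-covariance (minimum-variance) weighting.
# Local estimates and covariances are hypothetical placeholders.
x1 = np.array([10.2, 0.95])            # local estimate from filter 1
P1 = np.diag([0.40, 0.010])            # its error covariance
x2 = np.array([9.8, 1.05])             # local estimate from filter 2
P2 = np.diag([0.25, 0.030])

P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
P_fused = np.linalg.inv(P1_inv + P2_inv)            # fused covariance
x_fused = P_fused @ (P1_inv @ x1 + P2_inv @ x2)     # fused (minimum-variance) estimate

print("fused estimate       :", x_fused)
print("fused covariance diag:", np.diag(P_fused))
```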

  15. Utilization of accident databases and fuzzy sets to estimate frequency of HazMat transport accidents.

    PubMed

    Qiao, Yuanhua; Keren, Nir; Mannan, M Sam

    2009-08-15

    Risk assessment and management of transportation of hazardous materials (HazMat) require the estimation of accident frequency. This paper presents a methodology to estimate hazardous materials transportation accident frequency by utilizing publicly available databases and expert knowledge. The estimation process addresses route-dependent and route-independent variables. Negative binomial regression is applied to an analysis of the Department of Public Safety (DPS) accident database to derive basic accident frequency as a function of route-dependent variables, while the effects of route-independent variables are modeled by fuzzy logic. The integrated methodology provides the basis for an overall transportation risk analysis, which can be used later to develop a decision support system.
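
    The estimation logic separates a route-dependent base frequency (from the regression on the accident database) from a route-independent adjustment (from the fuzzy system). The sketch below mirrors that separation with a log-linear rate expression standing in for the fitted negative binomial model and a simple linear stand-in for the fuzzy inference; every coefficient and input is a hypothetical placeholder.

```python
import math

# Schematic of the two-part frequency estimate: a route-dependent base rate
# (log-linear form standing in for the fitted negative binomial regression)
# multiplied by a route-independent adjustment that a fuzzy inference system
# would supply. All coefficients below are hypothetical.
def base_accidents_per_year(length_km, aadt, beta0=-12.0, beta_len=1.0, beta_aadt=0.8):
    """Route-dependent expected accident frequency (accidents/year)."""
    return math.exp(beta0 + beta_len * math.log(length_km) + beta_aadt * math.log(aadt))

def fuzzy_modifier(driver_quality, weather_severity):
    """Placeholder for the fuzzy-logic adjustment (both inputs scaled 0-1)."""
    return 0.7 + 0.6 * (0.5 * driver_quality + 0.5 * weather_severity)

segment_rate = base_accidents_per_year(length_km=120.0, aadt=15000)
adjusted_rate = segment_rate * fuzzy_modifier(driver_quality=0.4, weather_severity=0.6)
print(f"base = {segment_rate:.3f}/yr, adjusted = {adjusted_rate:.3f}/yr")
```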

  16. The Statistical point of view of Quality: the Lean Six Sigma methodology

    PubMed Central

    Viti, Andrea; Terzi, Alberto

    2015-01-01

    Six Sigma and Lean are two quality improvement methodologies. The Lean Six Sigma methodology is applicable to repetitive procedures. Therefore, the use of this methodology in the health-care arena has focused mainly on areas of business operations, throughput, and case management and has focused on efficiency outcomes. After a review of the methodology, the paper presents a brief clinical example of the use of Lean Six Sigma as a quality improvement method in the reduction of complications during and after lobectomies. Using Lean Six Sigma methodology, the multidisciplinary teams could identify multiple modifiable points across the surgical process. These process improvements could be applied to different surgical specialties and could result in a measurement, from a statistical point of view, of surgical quality. PMID:25973253

  17. The Statistical point of view of Quality: the Lean Six Sigma methodology.

    PubMed

    Bertolaccini, Luca; Viti, Andrea; Terzi, Alberto

    2015-04-01

    Six Sigma and Lean are two quality improvement methodologies. The Lean Six Sigma methodology is applicable to repetitive procedures. Therefore, the use of this methodology in the health-care arena has focused mainly on areas of business operations, throughput, and case management and has focused on efficiency outcomes. After a review of the methodology, the paper presents a brief clinical example of the use of Lean Six Sigma as a quality improvement method in the reduction of complications during and after lobectomies. Using Lean Six Sigma methodology, the multidisciplinary teams could identify multiple modifiable points across the surgical process. These process improvements could be applied to different surgical specialties and could result in a measurement, from a statistical point of view, of surgical quality.

  18. Investigating the influence on safety of retrofitting Italian motorways with barriers meeting a new EU standard.

    PubMed

    Cafiso, Salvatore; D'Agostino, Carmelo; Persaud, Bhagwant

    2017-04-03

    A new European Union (EU) regulation for safety barriers, which is performance based, has encouraged road agencies to upgrade old barriers, with the expectation that there will be safety benefits at the retrofitted sites. The new class of barriers was designed and installed in compliance with the 1998 European Norm (EN) 1317 standards for road restraint systems, which lay down common requirements for the testing and certification of road restraint systems in all countries of the European Committee for Standardization (CEN). Both the older and new barriers are made of steel and are installed in such a way as to avoid vehicle intrusion, but the older ones are thought to be effective only at low speeds and large angles of impact. The new standard seeks to remedy this by providing better protection at higher speeds. This article seeks to quantify the effect of retrofitting motorways with barriers meeting the new standards on the frequency of fatal and injury crashes. The crash modification estimate was obtained by performing an empirical Bayes before-after analysis based on data from the A18 Messina-Catania motorway in Italy. The methodology has the great advantage of accounting for regression-to-the-mean effects. In addition, to account for time trend effects and the dispersion of crash data, a modified calibration methodology for the safety performance function was used. This study, based on data collected on 76 km of motorway over the period 2000-2012, derived Crash Modification Factor point estimates indicating reductions of 72% in run-off-road fatal and injury crashes and 38% in total fatal and injury crashes that could be expected from upgrading an old safety barrier to comply with the new EN 1317 standards. The estimated benefit-cost ratio of 5.57 for total crashes indicates that the treatment is cost effective. The magnitude of this benefit indicates that the retrofits are cost-effective even for total crashes and should continue in any European country inasmuch as the estimated Crash Modification Factors are based on treatment sites that are reasonably representative of all European motorways.
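
    The empirical Bayes before-after calculation behind such Crash Modification Factors blends a safety performance function (SPF) prediction with the observed before-period count, projects that blend to the after period, and compares it with the observed after-period count. The sketch below walks through that arithmetic with invented counts, SPF values, and overdispersion parameter; it is the generic EB procedure, not the modified calibration used in the study.

```python
# Sketch of the empirical Bayes before-after logic behind a CMF estimate.
# All counts and SPF values below are hypothetical placeholders.
observed_before = 120          # crashes at treated sites, before period
observed_after = 45            # crashes at treated sites, after period
spf_before = 100.0             # SPF-predicted crashes, before period
spf_after = 95.0               # SPF-predicted crashes, after period (same sites, untreated)
overdispersion_k = 0.2         # SPF overdispersion parameter

# EB-expected crashes in the before period (blend of prediction and observation)
w = 1.0 / (1.0 + overdispersion_k * spf_before)
eb_before = w * spf_before + (1.0 - w) * observed_before

# Project to the after period and compare with what was actually observed
expected_after_no_treatment = eb_before * (spf_after / spf_before)
cmf = observed_after / expected_after_no_treatment
print(f"CMF ~ {cmf:.2f}  (crash reduction ~ {100 * (1 - cmf):.0f}%)")
```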

  19. Changing cluster composition in cluster randomised controlled trials: design and analysis considerations

    PubMed Central

    2014-01-01

    Background There are many methodological challenges in the conduct and analysis of cluster randomised controlled trials, but one that has received little attention is that of post-randomisation changes to cluster composition. To illustrate this, we focus on the issue of cluster merging, considering the impact on the design, analysis and interpretation of trial outcomes. Methods We explored the effects of merging clusters on study power using standard methods of power calculation. We assessed the potential impacts on study findings of both homogeneous cluster merges (involving clusters randomised to the same arm of a trial) and heterogeneous merges (involving clusters randomised to different arms of a trial) by simulation. To determine the impact on bias and precision of treatment effect estimates, we applied standard methods of analysis to different populations under analysis. Results Cluster merging produced a systematic reduction in study power. This effect depended on the number of merges and was most pronounced when variability in cluster size was at its greatest. Simulations demonstrate that the impact on analysis was minimal when cluster merges were homogeneous, with impact on study power being balanced by a change in observed intracluster correlation coefficient (ICC). We found a decrease in study power when cluster merges were heterogeneous, and the estimate of treatment effect was attenuated. Conclusions Examples of cluster merges found in previously published reports of cluster randomised trials were typically homogeneous rather than heterogeneous. Simulations demonstrated that trial findings in such cases would be unbiased. However, simulations also showed that any heterogeneous cluster merges would introduce bias that would be hard to quantify, as well as having negative impacts on the precision of estimates obtained. Further methodological development is warranted to better determine how to analyse such trials appropriately. Interim recommendations include avoidance of cluster merges where possible, discontinuation of clusters following heterogeneous merges, allowance for potential loss of clusters and additional variability in cluster size in the original sample size calculation, and use of appropriate ICC estimates that reflect cluster size. PMID:24884591
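
    One way to see the power loss from cluster merging discussed above is through the design effect DE = 1 + (m - 1) x ICC: for a fixed total sample size, fewer but larger clusters inflate DE and shrink the effective sample size. The sketch below makes that comparison under the simplifying assumption that the ICC is unchanged by the merge; the cluster counts, sizes, and ICC are hypothetical.

```python
# Illustration of why merging clusters reduces power: for a fixed number of
# participants, fewer and larger clusters inflate the design effect
# DE = 1 + (m - 1) * ICC and shrink the effective sample size.
# Numbers are hypothetical; the ICC is assumed unchanged by the merge.
def effective_sample_size(n_clusters, cluster_size, icc):
    n_total = n_clusters * cluster_size
    design_effect = 1.0 + (cluster_size - 1.0) * icc
    return n_total / design_effect

icc = 0.05
before = effective_sample_size(n_clusters=20, cluster_size=30, icc=icc)   # original design
after = effective_sample_size(n_clusters=10, cluster_size=60, icc=icc)    # after pairwise merges

print(f"effective n before merges: {before:.0f}")
print(f"effective n after merges : {after:.0f}")
```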

  20. Spatial optimization of watershed management practices for nitrogen load reduction using a modeling-optimization framework.

    PubMed

    Yang, Guoxiang; Best, Elly P H

    2015-09-15

    Best management practices (BMPs) can be used effectively to reduce nutrient loads transported from non-point sources to receiving water bodies. However, methodologies of BMP selection and placement in a cost-effective way are needed to assist watershed management planners and stakeholders. We developed a novel modeling-optimization framework that can be used to find cost-effective solutions of BMP placement to attain nutrient load reduction targets. This was accomplished by integrating a GIS-based BMP siting method, a WQM-TMDL-N modeling approach to estimate total nitrogen (TN) loading, and a multi-objective optimization algorithm. Wetland restoration and buffer strip implementation were the two BMP categories used to explore the performance of this framework, both differing greatly in complexity of spatial analysis for site identification. Minimizing TN load and BMP cost were the two objective functions for the optimization process. The performance of this framework was demonstrated in the Tippecanoe River watershed, Indiana, USA. Optimized scenario-based load reduction indicated that the wetland subset selected by the minimum scenario had the greatest N removal efficiency. Buffer strips were more effective for load removal than wetlands. The optimized solutions provided a range of trade-offs between the two objective functions for both BMPs. This framework can be expanded conveniently to a regional scale because the NHDPlus catchment serves as its spatial computational unit. The present study demonstrated the potential of this framework to find cost-effective solutions to meet a water quality target, such as a 20% TN load reduction, under different conditions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Organizational Change Efforts: Methodologies for Assessing Organizational Effectiveness and Program Costs versus Benefits.

    ERIC Educational Resources Information Center

    Macy, Barry A.; Mirvis, Philip H.

    1982-01-01

    A standardized methodology for identifying, defining, and measuring work behavior and performance rather than production, and a methodology that estimates the costs and benefits of work innovation are presented for assessing organizational effectiveness and program costs versus benefits in organizational change programs. Factors in a cost-benefit…

  2. 78 FR 21162 - Notice of Intent to Establish an Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-09

    ... Programs. NCSES, under generic clearance (OMB 3145-0174), has conducted a methodological study to test a.... Estimate of Burden: In the methodological study, HAs required 1 hour on average to complete these tasks...,206 hours. Most ECs were able to complete this task in less than 30 minutes in the methodological...

  3. Evaluation Methodologies for Estimating the Likelihood of Program Implementation Failure

    ERIC Educational Resources Information Center

    Durand, Roger; Decker, Phillip J.; Kirkman, Dorothy M.

    2014-01-01

    Despite our best efforts as evaluators, program implementation failures abound. A wide variety of valuable methodologies have been adopted to explain and evaluate the "why" of these failures. Yet, typically these methodologies have been employed concurrently (e.g., project monitoring) or to the post-hoc assessment of program activities.…

  4. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
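
    A stripped-down illustration of the parameter-uncertainty idea in a PFA-style assessment is a Monte Carlo evaluation of a simple analytical failure criterion: sample the uncertain analysis parameters and a modeling-accuracy factor, and count the fraction of draws in which demand exceeds capacity. The sketch below does exactly that with a generic stress-versus-strength model; the distributions are invented, and it does not reproduce the documented PFA software or its statistical updating with test and flight experience.

```python
import numpy as np

# Minimal Monte Carlo illustration of the parameter-uncertainty step in a
# PFA-style failure assessment: draw uncertain analysis parameters, evaluate a
# simple analytical failure criterion, and estimate the failure probability.
# The stress/strength model and all distributions are hypothetical placeholders.
rng = np.random.default_rng(42)
n = 200_000

stress = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)     # MPa, applied load
strength = rng.lognormal(mean=np.log(420.0), sigma=0.08, size=n)   # MPa, capacity
model_error = rng.normal(loc=1.0, scale=0.05, size=n)              # analytical-model accuracy factor

failures = stress * model_error > strength
p_fail = failures.mean()
print(f"estimated failure probability per demand: {p_fail:.2e}")
```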

  5. Specific surface to evaluate the efficiencies of milling and pretreatment of wood for enzymatic saccharification

    Treesearch

    Junyong Zhu; G.S. Wang; X.J. Pan; Roland Gleisner

    2009-01-01

    Sieving methods have been almost exclusively used for feedstock size-reduction characterization in the biomass refining literature. This study demonstrates a methodology to properly characterize specific surface of biomass substrates through two dimensional measurement of each fiber of the substrate using a wet imaging technique. The methodology provides more...

  6. Area estimation using multiyear designs and partial crop identification

    NASA Technical Reports Server (NTRS)

    Sielken, R. L., Jr.

    1983-01-01

    Progress is reported for the following areas: (1) estimating the stratum's crop acreage proportion using the multiyear area estimation model; (2) assessment of multiyear sampling designs; and (3) development of statistical methodology for incorporating partially identified sample segments into crop area estimation.

  7. Methodology for computing the burden of disease of adverse events following immunization.

    PubMed

    McDonald, Scott A; Nijsten, Danielle; Bollaerts, Kaatje; Bauwens, Jorgen; Praet, Nicolas; van der Sande, Marianne; Bauchau, Vincent; de Smedt, Tom; Sturkenboom, Miriam; Hahné, Susan

    2018-03-24

    Composite disease burden measures such as disability-adjusted life-years (DALY) have been widely used to quantify the population-level health impact of disease or injury, but application has been limited for the estimation of the burden of adverse events following immunization. Our objective was to assess the feasibility of adapting the DALY approach for estimating adverse event burden. We developed a practical methodological framework, explicitly describing all steps involved: acquisition of relative or absolute risks and background event incidence rates, selection of disability weights and durations, and computation of the years lived with disability (YLD) measure, with appropriate estimation of uncertainty. We present a worked example, in which YLD is computed for 3 recognized adverse reactions following 3 childhood vaccination types, based on background incidence rates and relative/absolute risks retrieved from the literature. YLD provided extra insight into the health impact of an adverse event over presentation of incidence rates only, as severity and duration are additionally incorporated. As well as providing guidance for the deployment of DALY methodology in the context of adverse events associated with vaccination, we also identified where data limitations potentially occur. Burden of disease methodology can be applied to estimate the health burden of adverse events following vaccination in a systematic way. As with all burden of disease studies, interpretation of the estimates must consider the quality and accuracy of the data sources contributing to the DALY computation. © 2018 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
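
    The YLD piece of the framework comes down to three factors: the number of adverse events attributable to vaccination (from a background rate and a relative or absolute risk), a disability weight, and an average duration. The sketch below shows that arithmetic with hypothetical inputs; it omits the uncertainty propagation the framework calls for, and none of the numbers correspond to a real vaccine or event.

```python
# Core YLD arithmetic with hypothetical inputs: cases of an adverse event
# attributable to vaccination are derived from a background rate and a relative
# risk, then weighted by severity (disability weight) and duration.
vaccinated_person_years = 500_000
background_rate = 20e-5            # events per person-year in the unvaccinated
relative_risk = 1.5                # risk in the vaccinated relative to background
disability_weight = 0.10           # severity of the event (0 = full health, 1 = death)
duration_years = 14 / 365.25       # average duration of the event

attributable_cases = vaccinated_person_years * background_rate * (relative_risk - 1.0)
yld = attributable_cases * disability_weight * duration_years

print(f"attributable cases: {attributable_cases:.0f}")
print(f"YLD               : {yld:.2f}")
```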

  8. Development of methodologies for the estimation of thermal properties associated with aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.

    1993-01-01

    Thermal stress analyses are an important aspect in the development of aerospace vehicles such as the National Aero-Space Plane (NASP) and the High-Speed Civil Transport (HSCT) at NASA-LaRC. These analyses require knowledge of the temperature within the structures which consequently necessitates the need for thermal property data. The initial goal of this research effort was to develop a methodology for the estimation of thermal properties of aerospace structural materials at room temperature and to develop a procedure to optimize the estimation process. The estimation procedure was implemented utilizing a general purpose finite element code. In addition, an optimization procedure was developed and implemented to determine critical experimental parameters to optimize the estimation procedure. Finally, preliminary experiments were conducted at the Aircraft Structures Branch (ASB) laboratory.

  9. Development of methodologies for the estimation of thermal properties associated with aerospace vehicles

    NASA Astrophysics Data System (ADS)

    Scott, Elaine P.

    1993-12-01

    Thermal stress analyses are an important aspect in the development of aerospace vehicles such as the National Aero-Space Plane (NASP) and the High-Speed Civil Transport (HSCT) at NASA-LaRC. These analyses require knowledge of the temperature within the structures which consequently necessitates the need for thermal property data. The initial goal of this research effort was to develop a methodology for the estimation of thermal properties of aerospace structural materials at room temperature and to develop a procedure to optimize the estimation process. The estimation procedure was implemented utilizing a general purpose finite element code. In addition, an optimization procedure was developed and implemented to determine critical experimental parameters to optimize the estimation procedure. Finally, preliminary experiments were conducted at the Aircraft Structures Branch (ASB) laboratory.

  10. Total materials consumption; an estimation methodology and example using lead; a materials flow analysis

    USGS Publications Warehouse

    Biviano, Marilyn B.; Wagner, Lorie A.; Sullivan, Daniel E.

    1999-01-01

    Materials consumption estimates, such as apparent consumption of raw materials, can be important indicators of sustainability. Apparent consumption of raw materials does not account for material contained in manufactured products that are imported or exported and may thus under- or over-estimate total consumption of materials in the domestic economy. This report demonstrates a methodology to measure the amount of materials contained in net imports (imports minus exports), using lead as an example. The analysis presents illustrations of differences between apparent and total consumption of lead and distributes these differences into individual lead-consuming sectors.

  11. A methodology framework for weighting genetic traits that impact greenhouse gas emission intensities in selection indexes.

    PubMed

    Amer, P R; Hely, F S; Quinton, C D; Cromie, A R

    2018-01-01

    A methodological framework was presented for deriving weightings to be applied in selection indexes to account for the impact genetic change in traits will have on greenhouse gas emissions intensities (EIs). Although the emission component of the breeding goal was defined as the ratio of total emissions relative to a weighted combination of farm outputs, the resulting trait-weighting factors can be applied as linear weightings in a way that augments any existing breeding objective before consideration of EI. Calculus was used to define the parameters and assumptions required to link each trait change to the expected changes in EI for an animal production system. Four key components were identified. The potential impact of the trait on relative numbers of emitting animals per breeding female first has a direct effect on emission output but, second, also has a dilution effect from the extra output associated with the extra animals. Third, each genetic trait can potentially change the amount of emissions generated per animal and, finally, the potential impact of the trait on product output is accounted for. Emission intensity weightings derived from this equation require further modifications to integrate them into an existing breeding objective. These include accounting for different timing and frequency of trait expressions as well as a weighting factor to determine the degree of selection emphasis that is diverted away from improving farm profitability in order to achieve gains in EI. The methodology was demonstrated using a simple application to dairy cattle breeding in Ireland to quantify gains in EI reduction from existing genetic trends in milk production as well as in fertility and survival traits. Most gains were identified as coming through the dilution effect of genetic increases in milk protein per cow, although gains from genetic improvements in survival by reducing emissions from herd replacements were also significant. Emission intensities in the Irish dairy industry were estimated to be reduced by ~5% in the last 10 years because of genetic trends in production, fertility and survival traits, and a further 15% reduction was projected over the next 15 years because of an observed acceleration of genetic trends.

  12. Banana orchard inventory using IRS LISS sensors

    NASA Astrophysics Data System (ADS)

    Nishant, Nilay; Upadhayay, Gargi; Vyas, S. P.; Manjunath, K. R.

    2016-04-01

    Banana is one of the major crops of India with increasing export potential, so it is important to estimate the crop's production and acreage. The present study was therefore carried out to develop a suitable methodology for estimating banana acreage. The area estimation methodology was devised around the fact that, unlike other crops, the time of plantation of banana differs among farmers according to their local practices and conditions. Thus, in order to capture the peak signatures, a biowindow of six months was considered, its NDVI pattern was studied, and the optimum two months, during which banana could be distinguished from other competing crops, were selected. The final banana area for the particular growing cycle was computed by integrating the areas of these two months using LISS III data with a spatial resolution of 23 m. Estimated banana acreage in the three districts was 11,857 ha, 15,202 ha, and 11,373 ha for Bharuch, Anand, and Vadodara, respectively, with corresponding accuracies of 91.8%, 90%, and 88.16%. The study further compared the use of LISS IV data of 5.8 m spatial resolution for estimation of banana using object-based as well as per-pixel classification, and the results of both approaches were compared with statistical reports. The current paper describes the various methodologies used to accurately estimate banana acreage.

  13. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 3: Structure and listing of programs

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  14. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    PubMed

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.

  15. High School Students' Accuracy in Estimating the Cost of College: A Proposed Methodological Approach and Differences among Racial/Ethnic Groups and College Financial-Related Factors

    ERIC Educational Resources Information Center

    Nienhusser, H. Kenny; Oshio, Toko

    2017-01-01

    High school students' accuracy in estimating the cost of college (AECC) was examined by utilizing a new methodological approach, the absolute-deviation-continuous construct. This study used the High School Longitudinal Study of 2009 (HSLS:09) data and examined 10,530 11th grade students in order to measure their AECC for 4-year public and private…

  16. Evaluation of Physically and Empirically Based Models for the Estimation of Green Roof Evapotranspiration

    NASA Astrophysics Data System (ADS)

    Digiovanni, K. A.; Montalto, F. A.; Gaffin, S.; Rosenzweig, C.

    2010-12-01

    Green roofs and other urban green spaces can provide a variety of valuable benefits including reduction of the urban heat island effect, reduction of stormwater runoff, carbon sequestration, oxygen generation, and air pollution mitigation. As many of these benefits are directly linked to the processes of evaporation and transpiration, accurate and representative estimation of urban evapotranspiration (ET) is a necessary tool for predicting and quantifying such benefits. However, many common ET estimation procedures were developed for agricultural applications, and thus carry inherent assumptions that may only rarely be applicable to urban green spaces. Various researchers have identified the estimation of expected urban ET rates as a critical, yet poorly studied component of urban green space performance prediction and cite that further evaluation is needed to reconcile differences in predictions from varying ET modeling approaches. A small-scale green roof lysimeter setup situated on the green roof of the Ethical Culture Fieldston School in the Bronx, NY has been the focus of ongoing monitoring initiated in June 2009. The experimental setup includes a 0.6 m by 1.2 m lysimeter replicating the anatomy of the 500 m2 green roof of the building, with a roof membrane, drainage layer, and 10 cm media depth, planted with a variety of Sedum species. Soil moisture and qualitative runoff measurements are also recorded in the lysimeter, while a weather station situated on the rooftop records climatologic data. Direct quantification of actual evapotranspiration (AET) from the green roof weighing lysimeter was achieved through a mass balance approach during periods without precipitation and drainage. A comparison of AET to estimates of potential evapotranspiration (PET) calculated from empirically and physically based ET models was performed in order to evaluate the applicability of conventional ET equations for the estimation of ET from green roofs. Results have shown that the empirically based Thornthwaite approach for estimating monthly average PET underestimated AET by 54% over the course of a one-year period, and performed similarly on a monthly basis. Estimates of PET from the Northeast Regional Climate Center MORECS model, based on a variation of the Penman-Monteith model, overestimated AET by only 2% over a one-year period. However, monthly and daily estimates were not accurate, with the model overestimating during warm summer months by as much as 206% and underestimating during winter months by as much as 58%, which would have significant implications if such estimates were utilized for the evaluation of potential benefits from green roofs. Thus, further evaluation and improvement of these and other methodologies are needed and will be pursued for estimation of ET from green roofs and other urban green spaces, including NYC Greenstreets and urban parks.
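
    For reference, the empirically based method mentioned above, the Thornthwaite formula, estimates monthly PET from mean monthly air temperature, an annual heat index, and a day-length correction. The sketch below implements that textbook formula with synthetic monthly temperatures and an assumed July day length; it is included to show the calculation's inputs, not to reproduce the study's results.

```python
# Sketch of the Thornthwaite monthly PET formula, one of the empirical methods
# compared above. Monthly mean temperatures below are synthetic placeholders.
monthly_T = [0.0, 1.5, 6.0, 12.0, 17.5, 22.5, 25.0, 24.0, 20.0, 13.5, 8.0, 2.5]  # deg C

# Annual heat index and the Thornthwaite exponent
I = sum((t / 5.0) ** 1.514 for t in monthly_T if t > 0)
a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239

def thornthwaite_pet(t_mean, day_length_hours=12.0, days_in_month=30):
    """Monthly PET (mm), corrected for day length and month length."""
    if t_mean <= 0:
        return 0.0
    pet = 16.0 * (10.0 * t_mean / I) ** a
    return pet * (day_length_hours / 12.0) * (days_in_month / 30.0)

july_pet = thornthwaite_pet(monthly_T[6], day_length_hours=14.8, days_in_month=31)
print(f"July PET (Thornthwaite) ~ {july_pet:.0f} mm")
```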

  17. Theoretical basis to measure the impact of short-lasting control of an infectious disease on the epidemic peak

    PubMed Central

    2011-01-01

    Background While many pandemic preparedness plans have promoted disease control effort to lower and delay an epidemic peak, analytical methods for determining the required control effort and making statistical inferences have yet to be sought. As a first step to address this issue, we present a theoretical basis on which to assess the impact of an early intervention on the epidemic peak, employing a simple epidemic model. Methods We focus on estimating the impact of an early control effort (e.g. unsuccessful containment), assuming that the transmission rate abruptly increases when control is discontinued. We provide analytical expressions for magnitude and time of the epidemic peak, employing approximate logistic and logarithmic-form solutions for the latter. Empirical influenza data (H1N1-2009) in Japan are analyzed to estimate the effect of the summer holiday period in lowering and delaying the peak in 2009. Results Our model estimates that the epidemic peak of the 2009 pandemic was delayed for 21 days due to summer holiday. Decline in peak appears to be a nonlinear function of control-associated reduction in the reproduction number. Peak delay is shown to critically depend on the fraction of initially immune individuals. Conclusions The proposed modeling approaches offer methodological avenues to assess empirical data and to objectively estimate required control effort to lower and delay an epidemic peak. Analytical findings support a critical need to conduct population-wide serological survey as a prior requirement for estimating the time of peak. PMID:21269441
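
    As a complement to the approximations developed in the paper, the standard closed-form SIR result below illustrates the qualitative point that lowering the reproduction number lowers the epidemic peak: for a fully susceptible population, peak prevalence is 1 - (1 + ln R)/R. This is a generic textbook relation, not the logistic or logarithmic-form solutions derived in the study.

```python
import math

# Standard SIR illustration of how reducing the reproduction number R lowers
# the epidemic peak (fully susceptible population assumed).
def peak_prevalence(r):
    """Peak infectious fraction of the population for a simple SIR epidemic."""
    return 1.0 - (1.0 + math.log(r)) / r if r > 1.0 else 0.0

for r in (1.8, 1.5, 1.3, 1.1):
    print(f"R = {r:.1f}  ->  peak prevalence ~ {100 * peak_prevalence(r):.1f}% of population")
```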

  18. Estimating age-based antiretroviral therapy costs for HIV-infected children in resource-limited settings based on World Health Organization weight-based dosing recommendations

    PubMed Central

    2014-01-01

    Background Pediatric antiretroviral therapy (ART) has been shown to substantially reduce morbidity and mortality in HIV-infected infants and children. To accurately project program costs, analysts need accurate estimates of antiretroviral drug (ARV) costs for children. However, the costing of pediatric antiretroviral therapy is complicated by weight-based dosing recommendations which change as children grow. Methods We developed a step-by-step methodology for estimating the cost of pediatric ARV regimens for children aged 0–13 years. The costing approach incorporates weight-based dosing recommendations to provide estimated ARV doses throughout childhood development. Published unit drug costs are then used to calculate average monthly drug costs. We compared our derived monthly ARV costs to published estimates to assess the accuracy of our methodology. Results Estimates of monthly ARV costs are provided for six commonly used first-line pediatric ARV regimens, considering three possible care scenarios. The costs derived in our analysis for children were fairly comparable to or slightly higher than available published ARV drug or regimen estimates. Conclusions The methodology described here can be used to provide an accurate estimation of pediatric ARV regimen costs for cost-effectiveness analysts to project the optimum packages of care for HIV-infected children, as well as for program administrators and budget analysts who wish to assess the feasibility of increasing pediatric ART availability in constrained budget environments. PMID:24885453
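    A minimal sketch of the weight-band costing idea described above is given below. The weight bands, tablet doses and unit price are hypothetical placeholders, not WHO recommendations or published prices.

    ```python
    # Hypothetical dosing table and unit cost, used only to illustrate the calculation.
    WEIGHT_BANDS = [  # (min_kg, max_kg, tablets_per_day)
        (3, 5.9, 1), (6, 9.9, 1.5), (10, 13.9, 2), (14, 19.9, 2.5), (20, 24.9, 3), (25, 34.9, 4),
    ]
    UNIT_COST_PER_TABLET = 0.08  # USD, hypothetical

    def monthly_cost(weight_kg, days_per_month=30.4):
        """Average monthly drug cost for a child of the given weight."""
        for lo, hi, tabs in WEIGHT_BANDS:
            if lo <= weight_kg <= hi:
                return tabs * UNIT_COST_PER_TABLET * days_per_month
        raise ValueError("weight outside pediatric dosing table")

    # Average monthly cost across ages, given an assumed weight-for-age trajectory (kg)
    weights_by_age = {0: 4.5, 1: 9.5, 2: 12.0, 5: 18.0, 10: 32.0}
    print({age: round(monthly_cost(w), 2) for age, w in weights_by_age.items()})
    ```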

  19. Safety evaluation methodology for advanced coal extraction systems

    NASA Technical Reports Server (NTRS)

    Zimmerman, W. F.

    1981-01-01

    Qualitative and quantitative evaluation methods for coal extraction systems were developed. The analysis examines the soundness of the design, whether or not the major hazards have been eliminated or reduced, and how the reduction would be accomplished. The quantitative methodology establishes the approximate impact of hazards on injury levels. The results are weighted by peculiar geological elements, specialized safety training, peculiar mine environmental aspects, and reductions in labor force. The outcome is compared with injury level requirements based on similar, safer industries to get a measure of the new system's success in reducing injuries. This approach provides a more detailed and comprehensive analysis of hazards and their effects than existing safety analyses.

  20. Nitrous oxide abatement potential from the wastewater sector and the monetary value of the emissions credits

    NASA Astrophysics Data System (ADS)

    Wang, J. S.; Hamburg, S. P.; Pryor, D.

    2009-12-01

    As an illustration of the monetary opportunities afforded by greenhouse gas emissions markets, we estimated the potential value of greenhouse gas credits generated in the wastewater sector by switching from secondary to tertiary treatment. Our methodology for estimating emissions is a modification of that used by the Environmental Protection Agency for the U.S. greenhouse gas inventories. Focusing on N2O, we found that tertiary treatment in some situations will result in a net decrease in emissions, though the full range of reported emission factors for treatment plants and effluent in receiving waters could result in a net increase as well. Implementation of tertiary treatment across the U.S. could reduce emissions by up to 800,000 tonnes of N2O per year, generating greenhouse gas emissions credits worth up to $10 billion per year (assuming a market price of $10-40/tonne CO2 equivalent). In practice, it will be important to account for potential increases in CO2 emissions associated with the additional power consumption and chemical use required by tertiary treatment that would reduce the net climatic benefit. The net credits would reduce the cost of operating and maintaining tertiary treatment plants and provide an incentive for managers to optimize operating conditions for N2O reductions, a critical benefit of raising awareness of the link between tertiary treatment and N2O emissions. We outline a strategy for minimizing the uncertainty in quantifying N2O reductions in the hopes of accelerating implementation of an N2O crediting system for tertiary wastewater treatment plants.
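    The credit-value arithmetic quoted above can be reproduced approximately as follows; the 100-year global warming potential of 298 for N2O is the IPCC AR4 value and may differ slightly from the factor used in the study.

    ```python
    # Back-of-envelope reproduction of the credit-value estimate in the abstract.
    n2o_reduction_t = 800_000            # tonnes N2O per year
    gwp_n2o = 298                        # tonnes CO2e per tonne N2O (IPCC AR4, 100-year)
    co2e_t = n2o_reduction_t * gwp_n2o   # ~238 million tonnes CO2e per year
    for price in (10, 40):               # quoted market price range, $/t CO2e
        print(f"${price}/t CO2e -> credit value ~ ${co2e_t * price / 1e9:.1f} billion/yr")
    ```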

  1. Toxicity Estimation Software Tool (TEST)

    EPA Science Inventory

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...

  2. Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Cancer.gov

    These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.

  3. Class Size Reduction in a Large Urban School District: A Mixed Methodology Evaluation Research Study.

    ERIC Educational Resources Information Center

    Munoz, Marco A.

    This study evaluated the Class Size Reduction (CSR) program in 34 elementary schools in Kentucky's Jefferson County Public Schools. The CSR program is a federal initiative to help elementary schools improve student learning by hiring additional teachers. Qualitative data were collected using unstructured interviews, site observations, and document…

  4. Proceedings of Workshop on Methodology for Evaluating the Effectiveness of Transit Crime Reduction Measures in Automated Guideway Transit Systems

    DOT National Transportation Integrated Search

    1977-07-01

    The workshop focused on current methods of assessing the effectiveness of crime and vandalism reduction methods that are used in conventional urban mass transit systems, and on how they might be applied to new AGT systems. Conventional as well as nov...

  5. Methods for estimating the labour force insured by the Ontario Workplace Safety and Insurance Board: 1990-2000.

    PubMed

    Smith, Peter M; Mustard, Cameron A; Payne, Jennifer I

    2004-01-01

    This paper presents a methodology for estimating the size and composition of the Ontario labour force eligible for coverage under the Ontario Workplace Safety & Insurance Act (WSIA). Using customized tabulations from Statistics Canada's Labour Force Survey (LFS), we made adjustments for self-employment, unemployment, part-time employment and employment in specific industrial sectors excluded from insurance coverage under the WSIA. Each adjustment to the LFS reduced the estimates of the insured labour force relative to the total Ontario labour force. These estimates were then developed for major occupational and industrial groups stratified by gender. Additional estimates created to test assumptions used in the methodology produced similar results. The methods described in this paper advance those previously used to estimate the insured labour force, providing researchers with a useful tool to describe trends in the rate of injury across differing occupational, industrial and gender groups in Ontario.

  6. Greenhouse gas and criteria emission benefits through reduction of vessel speed at sea.

    PubMed

    Khan, M Yusuf; Agrawal, Harshit; Ranganathan, Sindhuja; Welch, William A; Miller, J Wayne; Cocker, David R

    2012-11-20

    Reducing emissions from ocean-going vessels (OGVs) as they sail near populated areas is a widely recognized goal, and Vessel Speed Reduction (VSR) is one of several strategies that are being adopted by regulators and port authorities. The goal of this research was to measure the emission benefits associated with greenhouse gas and criteria pollutants by operating OGVs at reduced speed. Emissions were measured from one Panamax and one post-Panamax class container vessel as vessel speed was reduced from cruise to 15 knots or below. VSR to 12 knots yielded carbon dioxide (CO(2)) and nitrogen oxides (NO(x)) emission reductions (in kg/nautical mile (kg/nmi)) of approximately 61% and 56%, respectively, as compared to vessel cruise speed. The mass emission rate (kg/nmi) of PM(2.5) was reduced by 69% with VSR to 12 knots alone and by ~97% when coupled with the use of marine gas oil (MGO) with 0.00065% sulfur content. Emissions data from vessels while operating at sea are scarce, and measurements from this research demonstrated that tidal current is a significant parameter affecting emission factors (EFs) at lower engine loads. Emission factors at ≤20% loads calculated by the methodology adopted by regulatory agencies were found to underestimate PM(2.5) and NO(x) by 72% and 51%, respectively, when compared to EFs measured in this study. Total pollutant emitted (TPE) in the emission control area (ECA) was calculated, and emission benefits were estimated as the VSR zone increased from 24 to 200 nmi. TPE(CO2) and TPE(PM2.5) estimated for large container vessels showed benefits for CO(2) (2-26%) and PM(2.5) (4-57%) when reducing speeds from 15 to 12 knots, whereas TPE(CO2) and TPE(PM2.5) for small and medium container vessels were similar at 15 and 12 knots.
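    A simplified sketch of the "total pollutant emitted" bookkeeping for a control zone is shown below; the emission factors are hypothetical placeholders, not the measured values reported above.

    ```python
    # Emissions inside a control zone ~ per-mile emission factor x distance sailed.
    EF_KG_PER_NMI = {            # hypothetical emission factors, kg/nmi
        ("CO2", "cruise"): 300.0, ("CO2", "12kn"): 120.0,
        ("PM2.5", "cruise"): 0.50, ("PM2.5", "12kn"): 0.15,
    }

    def tpe(pollutant, speed, zone_nmi):
        """Total pollutant emitted (kg) while transiting a control zone one way."""
        return EF_KG_PER_NMI[(pollutant, speed)] * zone_nmi

    for zone in (24, 200):
        saved = tpe("CO2", "cruise", zone) - tpe("CO2", "12kn", zone)
        print(f"zone {zone} nmi: CO2 saved per one-way transit ~ {saved:.0f} kg")
    ```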

  7. Estimation of the collective ionizing dose in the Portuguese population for the years 2011 and 2012, due to nuclear medicine exams.

    PubMed

    Costa, F; Teles, P; Nogueira, A; Barreto, A; Santos, A I; Carvalho, A; Martins, B; Oliveira, C; Gaspar, C; Barros, C; Neves, D; Costa, D; Rodrigues, E; Godinho, F; Alves, F; Cardoso, G; Cantinho, G; Conde, I; Vale, J; Santos, J; Isidoro, J; Pereira, J; Salgado, L; Rézio, M; Vieira, M; Simãozinho, P; Almeida, P; Castro, R; Parafita, R; Pintão, S; Lúcio, T; Reis, T; Vaz, P

    2015-01-01

    In 2009-2010 a Portuguese consortium was created to implement the methodologies proposed by the Dose Datamed II (DDM2) project, aiming to collect data from diagnostic X-ray and nuclear medicine (NM) procedures in order to determine the most frequently prescribed exams and the associated ionizing radiation doses for the Portuguese population. The current study is the continuation of this work, although it focuses only on NM exams for the years 2011 and 2012. The annual frequency of each of the 28 selected NM exams and the average administered activity per procedure were obtained by means of a nationwide survey sent to the 35 NM centres in Portugal. The results show a reduction in the number of cardiac exams performed in the last two years compared with 2010, leading to a reduction of the annual average effective dose of the Portuguese population due to NM exams from 0.080 ± 0.017 mSv/caput to 0.059 ± 0.011 mSv/caput in 2011 and 0.054 ± 0.011 mSv/caput in 2012. The total annual collective effective dose of the Portuguese population due to these procedures was estimated to be 625.6 ± 110.9 manSv in 2011 and 565.1 ± 117.3 manSv in 2012, a reduction in comparison with 2010 (840.3 ± 183.8 manSv). The most frequent exams, and those contributing most to the total population dose, were cardiac and bone exams, although a decrease was observed in both 2011 and 2012. The authors intend to perform this study periodically to identify trends in the annual Portuguese average effective dose and to help raise awareness about potential dose optimization. Copyright © 2014 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
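    The survey-based bookkeeping described above reduces to summing, over exam types, frequency times average administered activity times a dose coefficient. The sketch below illustrates this; the exam names, frequencies, activities and dose coefficients are hypothetical placeholders.

    ```python
    # Collective dose = sum over exams of (annual frequency x activity MBq x mSv/MBq).
    exams = {
        # name: (exams per year, mean administered activity MBq, effective dose coefficient mSv/MBq)
        "bone scintigraphy":    (30_000, 700, 0.0057),
        "myocardial perfusion": (20_000, 900, 0.0079),
    }
    collective_mansv = sum(n * act * coeff for n, act, coeff in exams.values()) / 1000.0
    population = 10.5e6  # approximate Portuguese population, for the per-caput figure
    print(f"collective dose ~ {collective_mansv:.1f} manSv")
    print(f"per-caput dose ~ {1000 * collective_mansv / population:.3f} mSv")
    ```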

  8. The Effect of Vaccination Coverage and Climate on Japanese Encephalitis in Sarawak, Malaysia

    PubMed Central

    Impoinvil, Daniel E.; Ooi, Mong How; Diggle, Peter J.; Caminade, Cyril; Cardosa, Mary Jane; Morse, Andrew P.

    2013-01-01

    Background Japanese encephalitis (JE) is the leading cause of viral encephalitis across Asia with approximately 70,000 cases a year and 10,000 to 15,000 deaths. Because JE incidence varies widely over time, partly due to inter-annual climate variability effects on mosquito vector abundance, it becomes more complex to assess the effects of a vaccination programme since more or less climatically favourable years could also contribute to a change in incidence post-vaccination. Therefore, the objective of this study was to quantify vaccination effect on confirmed Japanese encephalitis (JE) cases in Sarawak, Malaysia after controlling for climate variability to better understand temporal dynamics of JE virus transmission and control. Methodology/principal findings Monthly data on serologically confirmed JE cases were acquired from Sibu Hospital in Sarawak from 1997 to 2006. JE vaccine coverage (non-vaccine years vs. vaccine years) and meteorological predictor variables, including temperature, rainfall and the Southern Oscillation index (SOI) were tested for their association with JE cases using Poisson time series analysis and controlling for seasonality and long-term trend. Over the 10-years surveillance period, 133 confirmed JE cases were identified. There was an estimated 61% reduction in JE risk after the introduction of vaccination, when no account is taken of the effects of climate. This reduction is only approximately 45% when the effects of inter-annual variability in climate are controlled for in the model. The Poisson model indicated that rainfall (lag 1-month), minimum temperature (lag 6-months) and SOI (lag 6-months) were positively associated with JE cases. Conclusions/significance This study provides the first improved estimate of JE reduction through vaccination by taking account of climate inter-annual variability. Our analysis confirms that vaccination has substantially reduced JE risk in Sarawak but this benefit may be overestimated if climate effects are ignored. PMID:23951373
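    A hedged sketch of the kind of Poisson time-series regression described above (monthly case counts regressed on a vaccination-period indicator and lagged climate covariates, controlling for seasonality and trend) is shown below using statsmodels; the column names and lag choices are illustrative, not necessarily those of the study.

    ```python
    import pandas as pd
    import numpy as np
    import statsmodels.api as sm

    def fit_je_model(df):
        """df: monthly rows with columns cases, vaccine (0/1), rain, tmin, soi, month, t."""
        X = pd.DataFrame({
            "vaccine":   df["vaccine"],
            "rain_lag1": df["rain"].shift(1),
            "tmin_lag6": df["tmin"].shift(6),
            "soi_lag6":  df["soi"].shift(6),
            "trend":     df["t"],
        })
        # seasonal control via month indicator variables
        X = pd.concat([X, pd.get_dummies(df["month"], prefix="m", drop_first=True)], axis=1)
        X = sm.add_constant(X).dropna()
        y = df.loc[X.index, "cases"]
        model = sm.GLM(y, X.astype(float), family=sm.families.Poisson()).fit()
        # exp(coefficient) on 'vaccine' is the rate ratio; 1 - RR is the estimated reduction
        return np.exp(model.params["vaccine"]), model

    # usage: rr, fit = fit_je_model(monthly_df); print(f"estimated JE reduction: {1 - rr:.0%}")
    ```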

  9. Instruments evaluating the quality of the clinical learning environment in nursing education: A systematic review of psychometric properties.

    PubMed

    Mansutti, Irene; Saiani, Luisa; Grassetti, Luca; Palese, Alvisa

    2017-03-01

    The clinical learning environment is fundamental to nursing education paths, capable of affecting learning processes and outcomes. Several instruments have been developed in nursing education, aimed at evaluating the quality of the clinical learning environments; however, no systematic review of the psychometric properties and methodological quality of these studies has been performed to date. The aims of the study were: 1) to identify validated instruments evaluating the clinical learning environments in nursing education; 2) to evaluate critically the methodological quality of the psychometric property estimation used; and 3) to compare psychometric properties across the instruments available. A systematic review of the literature (using the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines) and an evaluation of the methodological quality of psychometric properties (using the COnsensus-based Standards for the selection of health Measurement INstruments guidelines). The Medline and CINAHL databases were searched. Eligible studies were those that satisfied the following criteria: a) validation studies of instruments evaluating the quality of clinical learning environments; b) in nursing education; c) published in English or Italian; d) before April 2016. The included studies were evaluated for the methodological quality of the psychometric properties measured and then compared in terms of both the psychometric properties and the methodological quality of the processes used. The search strategy yielded a total of 26 studies and eight clinical learning environment evaluation instruments. A variety of psychometric properties have been estimated for each instrument, with differing qualities in the methodology used. Concept and construct validity were poorly assessed in terms of their significance and rarely judged by the target population (nursing students). Some properties were rarely considered (e.g., reliability, measurement error, criterion validity), whereas others were frequently estimated, but using different coefficients and statistical analyses (e.g., internal consistency, structural validity), thus rendering comparison across instruments difficult. Moreover, the methodological quality adopted in the property assessments was poor or fair in most studies, compromising the goodness of the psychometric values estimated. Clinical learning placements represent the key strategies in educating the future nursing workforce: instruments evaluating the quality of the settings, as well as their capacity to promote significant learning, are strongly recommended. Studies estimating psychometric properties, using an increased quality of research methodologies are needed in order to support nursing educators in the process of clinical placements accreditation and quality improvement. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Utility Estimation for Pediatric Vesicoureteral Reflux: Methodological Considerations Using an Online Survey Platform.

    PubMed

    Tejwani, Rohit; Wang, Hsin-Hsiao S; Lloyd, Jessica C; Kokorowski, Paul J; Nelson, Caleb P; Routh, Jonathan C

    2017-03-01

    The advent of online task distribution has opened a new avenue for efficiently gathering community perspectives needed for utility estimation. Methodological consensus for estimating pediatric utilities is lacking, with disagreement over whom to sample, what perspective to use (patient vs parent) and whether instrument induced anchoring bias is significant. We evaluated what methodological factors potentially impact utility estimates for vesicoureteral reflux. Cross-sectional surveys using a time trade-off instrument were conducted via the Amazon Mechanical Turk® (https://www.mturk.com) online interface. Respondents were randomized to answer questions from child, parent or dyad perspectives on the utility of a vesicoureteral reflux health state and 1 of 3 "warm-up" scenarios (paralysis, common cold, none) before a vesicoureteral reflux scenario. Utility estimates and potential predictors were fitted to a generalized linear model to determine what factors most impacted utilities. A total of 1,627 responses were obtained. Mean respondent age was 34.9 years. Of the respondents 48% were female, 38% were married and 44% had children. Utility values were uninfluenced by child/personal vesicoureteral reflux/urinary tract infection history, income or race. Utilities were affected by perspective and were higher in the child group (34% lower in parent vs child, p <0.001, and 13% lower in dyad vs child, p <0.001). Vesicoureteral reflux utility was not significantly affected by the presence or type of time trade-off warm-up scenario (p = 0.17). Time trade-off perspective affects utilities when estimated via an online interface. However, utilities are unaffected by the presence, type or absence of warm-up scenarios. These findings could have significant methodological implications for future utility elicitations regarding other pediatric conditions. Copyright © 2017 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  11. 42 CFR 495.204 - Incentive payments to qualifying MA organizations for qualifying MA-EPs and qualifying MA...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... methodological proposal for estimating the portion of each qualifying MA EP's salary or revenue attributable to... enrollees of the MA organization in the payment year. The methodological proposal— (i) Must be approved by... account for the MA-enrollee related Part B practice costs of the qualifying MA EP. (iii) Methodological...

  12. 42 CFR 495.204 - Incentive payments to qualifying MA organizations for qualifying MA-EPs and qualifying MA...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... methodological proposal for estimating the portion of each qualifying MA EP's salary or revenue attributable to... enrollees of the MA organization in the payment year. The methodological proposal— (i) Must be approved by... account for the MA-enrollee related Part B practice costs of the qualifying MA EP. (iii) Methodological...

  13. Estimating Unbiased Land Cover Change Areas In The Colombian Amazon Using Landsat Time Series And Statistical Inference Methods

    NASA Astrophysics Data System (ADS)

    Arevalo, P. A.; Olofsson, P.; Woodcock, C. E.

    2017-12-01

    Unbiased estimation of the areas of conversion between land categories ("activity data") and their uncertainty is crucial for providing more robust calculations of carbon emissions to the atmosphere, as well as their removals. This is particularly important for the REDD+ mechanism of UNFCCC where an economic compensation is tied to the magnitude and direction of such fluxes. Dense time series of Landsat data and statistical protocols are becoming an integral part of forest monitoring efforts, but there are relatively few studies in the tropics focused on using these methods to advance operational MRV systems (Monitoring, Reporting and Verification). We present the results of a prototype methodology for continuous monitoring and unbiased estimation of activity data that is compliant with the IPCC Approach 3 for representation of land. We used a break detection algorithm (Continuous Change Detection and Classification, CCDC) to fit pixel-level temporal segments to time series of Landsat data in the Colombian Amazon. The segments were classified using a Random Forest classifier to obtain annual maps of land categories between 2001 and 2016. Using these maps, a biannual stratified sampling approach was implemented and unbiased stratified estimators constructed to calculate area estimates with confidence intervals for each of the stable and change classes. Our results provide evidence of a decrease in primary forest as a result of conversion to pastures, as well as increase in secondary forest as pastures are abandoned and the forest allowed to regenerate. Estimating areas of other land transitions proved challenging because of their very small mapped areas compared to stable classes like forest, which corresponds to almost 90% of the study area. Implications on remote sensing data processing, sample allocation and uncertainty reduction are also discussed.
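    For reference, the stratified estimator of class area and its confidence interval used in this kind of sample-based assessment can be sketched as follows; the strata weights and sample counts are placeholders, not the study's values.

    ```python
    import math

    def stratified_area(total_area_ha, strata):
        """strata: name -> (map weight W_h, n sampled in stratum, n of those that are the target class)."""
        p_hat, var = 0.0, 0.0
        for W, n, n_target in strata.values():
            p_h = n_target / n
            p_hat += W * p_h
            var += W**2 * p_h * (1 - p_h) / (n - 1)   # stratified variance of the proportion
        area = p_hat * total_area_ha
        se = math.sqrt(var) * total_area_ha
        return area, 1.96 * se                         # estimate and ~95% confidence half-width

    strata = {"stable forest": (0.88, 300, 6),
              "deforestation stratum": (0.04, 150, 120),
              "other": (0.08, 150, 9)}
    print(stratified_area(1_000_000, strata))          # e.g. area of deforestation, ha
    ```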

  14. Assimilation of remote sensing data into a process-based ecosystem model for monitoring changes of soil water content in croplands

    NASA Astrophysics Data System (ADS)

    Ju, Weimin; Gao, Ping; Wang, Jun; Li, Xianfeng; Chen, Shu

    2008-10-01

    Soil water content (SWC) is an important factor affecting photosynthesis, growth, and final yields of crops. Information on SWC is important for mitigating the reduction of crop yields caused by drought through proper agricultural water management. A variety of methodologies have been developed to estimate SWC at local and regional scales, including field sampling, remote sensing monitoring and model simulations. The reliability of regional SWC simulation depends largely on the accuracy of spatial input datasets, including vegetation parameters, soil and meteorological data. Remote sensing has proved to be an effective technique for controlling uncertainties in vegetation parameters. In this study, the vegetation parameters (leaf area index and land cover type) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) were assimilated into a process-based ecosystem model, BEPS, for simulating the variations of SWC in croplands of Jiangsu province, China. Validation shows that the BEPS model is able to capture 81% and 83% of across-site variations of SWC at 10 and 20 cm depths, respectively, during the period from September to December 2006, when a serious autumn drought occurred. The simulated SWC responded well to rainfall events at the regional scale, demonstrating the usefulness of our methodology for SWC monitoring and practical agricultural water management at large scales.

  15. Audit of Trichomonas vaginalis test requesting by community referrers after a change from culture to molecular testing, including a cost analysis.

    PubMed

    Bissessor, Liselle; Wilson, Janet; McAuliffe, Gary; Upton, Arlo

    2017-06-16

    Trichomonas vaginalis (TV) prevalence varies among different communities and peoples. The availability of robust molecular platforms for the detection of TV has advanced diagnosis; however, molecular tests are more costly than phenotypic methodologies, and testing all urogenital samples is expensive. We recently replaced culture methods with the Aptima Trichomonas vaginalis nucleic acid amplification test, performed on specific request and as reflex testing by the laboratory, and have audited this change. Data were collected for August 2015 (microbroth culture and microscopy) and August 2016 (Aptima TV assay), including referrer, testing volumes, results and test cost estimates. In August 2015, 10,299 vaginal swabs, and in August 2016, 2,189 specimens (urogenital swabs and urines), were tested. The positivity rate rose from 0.9% to 5.3%, and overall more TV infections were detected in 2016. The number needed to test and the cost per positive TV result were 111 and $902.55, respectively, in 2015, and 19 and $368.92 in 2016. Request volumes and positivity rates differed among referrers. The methodology change was associated with higher overall detection of TV and reductions in the number needed to test and the cost per TV diagnosis. Our audit suggests that there is room for improvement in TV test requesting in our community.

  16. A hybrid approach to survival model building using integration of clinical and molecular information in censored data.

    PubMed

    Choi, Ickwon; Kattan, Michael W; Wells, Brian J; Yu, Changhong

    2012-01-01

    In the medical community, prognostic models that use clinicopathologic features to predict prognosis after a certain treatment have been externally validated and used in practice. In recent years, most research has focused on high-dimensional genomic data and small sample sizes. Since clinically similar but molecularly heterogeneous tumors may produce different clinical outcomes, the combination of clinical and genomic information, which may be complementary, is crucial to improve the quality of prognostic predictions. However, there is a lack of integration schemes for clinico-genomic models, in particular parsimonious ones, due to the P ≥ N problem (more predictors than samples). We propose a methodology to build a reduced yet accurate integrative model using a hybrid approach based on the Cox regression model, which uses several dimension reduction techniques, L₂ penalized maximum likelihood estimation (PMLE), and resampling methods to tackle the problem. The predictive accuracy of the modeling approach is assessed by several metrics via an independent and thorough scheme that compares competing methods. In breast cancer data studies on metastasis and death events, we show that the proposed methodology can improve prediction accuracy and build a final model with a hybrid signature that is parsimonious when integrating both types of variables.
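    One possible off-the-shelf route to an L2-penalized Cox model that combines clinical and (dimension-reduced) genomic columns is sketched below using the lifelines library; this is an illustration of the general approach, not the authors' implementation, and the data frame columns are hypothetical.

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter

    def fit_integrative_cox(df, penalizer=0.5):
        """df: one row per patient; 'time', 'event', plus clinical and reduced genomic features."""
        cph = CoxPHFitter(penalizer=penalizer, l1_ratio=0.0)   # pure ridge (L2) penalty
        cph.fit(df, duration_col="time", event_col="event")
        return cph

    # In practice the genomic block would first be reduced (e.g. by PCA or supervised
    # screening) and the penalizer chosen by cross-validated concordance, mirroring the
    # resampling scheme the abstract describes.
    ```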

  17. [Methodologies for estimating the indirect costs of traffic accidents].

    PubMed

    Carozzi, Soledad; Elorza, María Eugenia; Moscoso, Nebel Silvana; Ripari, Nadia Vanina

    2017-01-01

    Traffic accidents generate multiple costs to society, including those associated with the loss of productivity. However, there is no consensus about the most appropriate methodology for estimating those costs. The aim of this study was to review methods for estimating indirect costs applied in crash cost studies. A thematic review of the literature published between 1995 and 2012 was carried out in PubMed with the terms cost of illness, indirect cost, road traffic injuries, and productivity loss. For the assessment of costs, the human capital method was used, based on the wage income lost during the time of treatment and recovery of patients and caregivers. In the case of premature death or total disability, a discount rate was applied to obtain the present value of lost future earnings. The number of years lost was computed by subtracting from life expectancy at birth the average age of those affected who had not yet entered economically active life. The interest in minimizing the problem is reflected in the evolution of the methodologies implemented. We expect this review to be useful for efficiently estimating the real indirect costs of traffic accidents.
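    A minimal sketch of the human-capital calculation described above, discounting lost future earnings back to present value, is shown below; the wage, discount rate and ages are illustrative only.

    ```python
    # Present value of lost future earnings under the human capital method.
    def pv_lost_earnings(annual_wage, age_at_death, working_age_end=65, discount_rate=0.03):
        years_lost = max(0, working_age_end - age_at_death)
        return sum(annual_wage / (1 + discount_rate) ** t for t in range(1, years_lost + 1))

    print(round(pv_lost_earnings(annual_wage=20_000, age_at_death=30)))
    ```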

  18. A Comparative Study of Three Spatial Interpolation Methodologies for the Analysis of Air Pollution Concentrations in Athens, Greece

    NASA Astrophysics Data System (ADS)

    Deligiorgi, Despina; Philippopoulos, Kostas; Thanou, Lelouda; Karvounis, Georgios

    2010-01-01

    Spatial interpolation in air pollution modeling is the procedure for estimating ambient air pollution concentrations at unmonitored locations based on available observations. The selection of the appropriate methodology is based on the nature and the quality of the interpolated data. In this paper, an assessment of three widely used interpolation methodologies is undertaken in order to estimate the errors involved. For this purpose, air quality data from January 2001 to December 2005, from a network of seventeen monitoring stations operating in the greater area of Athens in Greece, are used. The Nearest Neighbor and Linear schemes were applied to the mean hourly observations, while the Inverse Distance Weighted (IDW) method was applied to the mean monthly concentrations. The discrepancies between the estimated and measured values are assessed for every station and pollutant, using the correlation coefficient, scatter diagrams and the statistical residuals. The capability of the methods to estimate air quality data in an area with multiple land-use types and pollution sources, such as Athens, is discussed.
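    For illustration, inverse distance weighting, one of the three schemes compared above, can be sketched in a few lines; the station coordinates and concentrations below are placeholders.

    ```python
    import numpy as np

    def idw(xy_stations, values, xy_target, power=2.0):
        """Inverse-distance-weighted estimate at xy_target from station observations."""
        d = np.linalg.norm(xy_stations - xy_target, axis=1)
        if np.any(d == 0):                      # target coincides with a station
            return float(values[np.argmin(d)])
        w = 1.0 / d ** power
        return float(np.sum(w * values) / np.sum(w))

    stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    no2 = np.array([42.0, 55.0, 61.0])          # e.g. monthly mean concentrations
    print(idw(stations, no2, np.array([0.4, 0.4])))
    ```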

  19. Soil Moisture Content Estimation Based on Sentinel-1 and Auxiliary Earth Observation Products. A Hydrological Approach

    PubMed Central

    Alexakis, Dimitrios D.; Mexis, Filippos-Dimitrios K.; Vozinaki, Anthi-Eirini K.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.

    2017-01-01

    A methodology for elaborating multi-temporal Sentinel-1 and Landsat 8 satellite images for estimating topsoil Soil Moisture Content (SMC) to support hydrological simulation studies is proposed. After pre-processing the remote sensing data, backscattering coefficient, Normalized Difference Vegetation Index (NDVI), thermal infrared temperature and incidence angle parameters are assessed for their potential to infer ground measurements of SMC, collected at the top 5 cm. A non-linear approach using Artificial Neural Networks (ANNs) is tested. The methodology is applied in Western Crete, Greece, where a SMC gauge network was deployed during 2015. The performance of the proposed algorithm is evaluated using leave-one-out cross validation and sensitivity analysis. ANNs prove to be the most efficient in SMC estimation yielding R2 values between 0.7 and 0.9. The proposed methodology is used to support a hydrological simulation with the HEC-HMS model, applied at the Keramianos basin which is ungauged for SMC. Results and model sensitivity highlight the contribution of combining Sentinel-1 SAR and Landsat 8 images for improving SMC estimates and supporting hydrological studies. PMID:28635625
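    A hedged sketch of the ANN regression and leave-one-out validation step described above is given below using scikit-learn rather than the authors' own implementation; the feature columns stand in for backscattering coefficient, NDVI, thermal temperature and incidence angle.

    ```python
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import r2_score

    def loo_smc_r2(X, y):
        """X: (n_samples, 4) array of sigma0, NDVI, LST, incidence angle; y: measured topsoil SMC."""
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
        y_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
        return r2_score(y, y_pred)
    ```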

  20. Eigenspace perturbations for uncertainty estimation of single-point turbulence closures

    NASA Astrophysics Data System (ADS)

    Iaccarino, Gianluca; Mishra, Aashwin Ananda; Ghili, Saman

    2017-02-01

    Reynolds-averaged Navier-Stokes (RANS) models represent the workhorse for predicting turbulent flows in complex industrial applications. However, RANS closures introduce a significant degree of epistemic uncertainty in predictions due to the potential lack of validity of the assumptions utilized in model formulation. Estimating this uncertainty is a fundamental requirement for building confidence in such predictions. We outline a methodology to estimate this structural uncertainty, incorporating perturbations to the eigenvalues and the eigenvectors of the modeled Reynolds stress tensor. The mathematical foundations of this framework are derived and explicated. The framework is then applied to a set of separated turbulent flows and compared against numerical and experimental data, as well as against the predictions of the eigenvalue-only perturbation methodology. It is shown that for separated flows, this framework yields a significant enhancement over the established eigenvalue perturbation methodology in explaining the discrepancy against experimental observations and high-fidelity simulations. Furthermore, uncertainty bounds of potential engineering utility can be estimated by performing five specific RANS simulations, reducing the computational expenditure of such an exercise.
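    The eigenvalue part of the perturbation framework can be sketched for a single Reynolds stress tensor as follows: the anisotropy eigenvalues are shifted a fraction of the way toward a limiting state (here the one-component limit) and the stress is rebuilt. This is a generic illustration of the idea, not the authors' exact formulation.

    ```python
    import numpy as np

    def perturb_reynolds_stress(R, delta=0.5):
        """Shift the anisotropy eigenvalues of R a fraction delta toward the 1-component limit."""
        k = 0.5 * np.trace(R)                              # turbulent kinetic energy
        a = R / (2.0 * k) - np.eye(3) / 3.0                # anisotropy tensor
        lam, vecs = np.linalg.eigh(a)                      # eigenvalues (ascending) and eigenvectors
        lam_1c = np.array([-1.0/3.0, -1.0/3.0, 2.0/3.0])   # one-component limiting state
        lam_pert = lam + delta * (lam_1c - lam)            # move toward the limiting state
        a_pert = vecs @ np.diag(lam_pert) @ vecs.T
        return 2.0 * k * (a_pert + np.eye(3) / 3.0)

    R = np.diag([0.6, 0.3, 0.1])                           # an example modeled Reynolds stress
    print(perturb_reynolds_stress(R, delta=0.25))
    ```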

  1. Soil Moisture Content Estimation Based on Sentinel-1 and Auxiliary Earth Observation Products. A Hydrological Approach.

    PubMed

    Alexakis, Dimitrios D; Mexis, Filippos-Dimitrios K; Vozinaki, Anthi-Eirini K; Daliakopoulos, Ioannis N; Tsanis, Ioannis K

    2017-06-21

    A methodology for elaborating multi-temporal Sentinel-1 and Landsat 8 satellite images for estimating topsoil Soil Moisture Content (SMC) to support hydrological simulation studies is proposed. After pre-processing the remote sensing data, backscattering coefficient, Normalized Difference Vegetation Index (NDVI), thermal infrared temperature and incidence angle parameters are assessed for their potential to infer ground measurements of SMC, collected at the top 5 cm. A non-linear approach using Artificial Neural Networks (ANNs) is tested. The methodology is applied in Western Crete, Greece, where a SMC gauge network was deployed during 2015. The performance of the proposed algorithm is evaluated using leave-one-out cross validation and sensitivity analysis. ANNs prove to be the most efficient in SMC estimation yielding R² values between 0.7 and 0.9. The proposed methodology is used to support a hydrological simulation with the HEC-HMS model, applied at the Keramianos basin which is ungauged for SMC. Results and model sensitivity highlight the contribution of combining Sentinel-1 SAR and Landsat 8 images for improving SMC estimates and supporting hydrological studies.

  2. Effect of Using Different Vehicle Weight Groups on the Estimated Relationship Between Mass Reduction and U.S. Societal Fatality Risk per Vehicle Miles of Travel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenzel, Tom P.

    This report recalculates the estimated relationship between vehicle mass and societal fatality risk, using alternative groupings by vehicle weight, to test whether the trend of decreasing fatality risk from mass reduction as case vehicle mass increases holds over smaller increments of the range in case vehicle masses. The NHTSA baseline regression model estimates the relationship using two weight groups each for cars and light trucks; we re-estimated the mass reduction coefficients using four, six, and eight bins of vehicle mass. The estimated effect of mass reduction on societal fatality risk was not consistent over the range in vehicle masses in these weight bins. These results suggest that the relationship indicated by the NHTSA baseline model is a result of other, unmeasured attributes of the mix of vehicles in the lighter vs. heavier weight bins, and not necessarily the result of a correlation between mass reduction and societal fatality risk. An analysis of the average vehicle, driver, and crash characteristics across the various weight groupings did not reveal any strong trends that might explain the lack of a consistent trend of decreasing fatality risk from mass reduction in heavier vehicles.

  3. Methodological Considerations in Social Cost Studies of Addictive Substances: A Systematic Literature Review.

    PubMed

    Verhaeghe, Nick; Lievens, Delfine; Annemans, Lieven; Vander Laenen, Freya; Putman, Koen

    2016-01-01

    The use of alcohol, tobacco, illicit drugs, and psychoactive pharmaceuticals is associated with a higher likelihood of developing several diseases and injuries and, as a consequence, considerable health-care expenditures. There is as yet a lack of consistent methodologies to estimate the economic impact of addictive substances on society. The aim was to assess the methodological approaches applied in social cost studies estimating the economic impact of alcohol, tobacco, illicit drugs, and psychoactive pharmaceuticals. A systematic literature review through the electronic databases Medline (PubMed) and Web of Science was performed. Studies in English published from 1997 examining the social costs of the addictive substances alcohol, tobacco, illicit drugs, and psychoactive pharmaceuticals were eligible for inclusion. Twelve social cost studies met the inclusion criteria. In all studies, direct and indirect costs were measured, but intangible costs were seldom taken into account. A wide variety of cost items included across studies was observed. Sensitivity analyses to address the uncertainty around certain cost estimates were conducted in eight of the studies considered in the review. Differences in the cost items included in cost-of-illness studies limit comparison across studies. It is clear that it is difficult to deal with all consequences of substance use in cost-of-illness studies. Future social cost studies should be based on sound methodological principles in order to produce more reliable estimates of the economic burden of substance use.

  4. Manned Mars mission cost estimate

    NASA Technical Reports Server (NTRS)

    Hamaker, Joseph; Smith, Keith

    1986-01-01

    The potential costs of several options of a manned Mars mission are examined. A cost estimating methodology based primarily on existing Marshall Space Flight Center (MSFC) parametric cost models is summarized. These models include the MSFC Space Station Cost Model and the MSFC Launch Vehicle Cost Model, as well as other models and techniques. The ground rules and assumptions of the cost estimating methodology are discussed and cost estimates presented for six potential mission options which were studied. The estimated manned Mars mission costs are compared to the cost of the somewhat analogous Apollo Program after normalizing the Apollo cost to the environment and ground rules of the manned Mars missions. It is concluded that a manned Mars mission, as currently defined, could be accomplished for under $30 billion in 1985 dollars, excluding launch vehicle development and mission operations.

  5. Reliable estimation of antimicrobial use and its evolution between 2010 and 2013 in French swine farms.

    PubMed

    Hémonic, Anne; Chauvin, Claire; Delzescaux, Didier; Verliat, Fabien; Corrégé, Isabelle

    2018-01-01

    Both the French swine industry and the national authorities have been strongly committed to reducing the use of antimicrobials in swine production since 2010. The annual monitoring of antimicrobial sales by the French Veterinary Medicines Agency (Anses-ANMV) provides estimates, but not detailed figures on actual on-farm usage of antimicrobials in swine production. In order to provide detailed information on antimicrobial use in the French swine industry in 2010 and 2013, a cross-sectional retrospective study on a representative sample of at least 150 farms was chosen as the methodology. The analysis of the collected data shows a strong and significant decrease in antimicrobial exposure of pigs between 2010 and 2013. Over three years, the average number of days of treatment significantly decreased by 29% in suckling piglets and by 19% in weaned piglets. In fattening pigs, the drop (-29%) was not statistically significant. Only usage in sows increased over that period (+17%, non-significant), which might be associated with the transition to group-housing of pregnant sows that took place at the time. Also, over that period, the use of third- and fourth-generation cephalosporins decreased by 89% in suckling piglets and by 82% in sows, which confirms that the voluntary moratorium on these classes of antimicrobials decided at the end of 2010 has been effectively implemented. Random sampling of farms appears to be a precise and robust tool to monitor antimicrobial use within a production animal species, able to fulfil industry and national authorities' objectives and requirements to assess the outcome of concerted efforts on antimicrobial use reduction. It demonstrates that the use of antimicrobials decreased in the French swine industry between 2010 and 2013, including the classes considered critical for human medicine.

  6. Methodology for quantification of waste generated in Spanish railway construction works

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guzman Baez, Ana de; Villoria Saez, Paola; Rio Merino, Mercedes del

    Highlights: Two equations for C and D waste estimation in railway construction works are developed. Mixed C and D waste is the most generated category during railway construction works. Tunnel construction is essential to quantify the waste generated during the works. There is a relationship between C and D waste generated and railway functional units. The methodology proposed can be used to obtain new constants for other areas. - Abstract: In recent years, the European Union (EU) has focused on the reduction of construction and demolition (C and D) waste. Specifically, in 2006, Spain generated roughly 47 million tons of C and D waste, of which only 13.6% was recycled. This situation has led to the drawing up of many regulations on C and D waste during the past years, forcing EU countries to include new measures for waste prevention and recycling. Among these measures is the obligation to quantify, in advance, the C and D waste expected to be generated during a construction project. However, limited data are available on civil engineering projects. Therefore, the aim of this research study is to improve C and D waste management in railway projects by developing a model for C and D waste quantification. For this purpose, we develop two equations which estimate in advance the amount, both in weight and volume, of the C and D waste likely to be generated in railway construction projects, including the category of C and D waste generated, for the entire project.

  7. Alpha particle spectroscopy using FNTD and SIM super-resolution microscopy.

    PubMed

    Kouwenberg, J J M; Kremers, G J; Slotman, J A; Wolterbeek, H T; Houtsmuller, A B; Denkova, A G; Bos, A J J

    2018-06-01

    Structured illumination microscopy (SIM) for the imaging of alpha particle tracks in fluorescent nuclear track detectors (FNTD) was evaluated and compared to confocal laser scanning microscopy (CLSM). FNTDs were irradiated with an external alpha source and imaged using both methodologies. SIM imaging resulted in improved resolution, without an increase in scan time. Alpha particle energy estimation based on the track length, direction and intensity produced results in good agreement with the expected alpha particle energy distribution. A pronounced difference was seen in the spatial scattering of alpha particles in the detectors, where SIM showed an almost 50% reduction compared to CLSM. The improved resolution of SIM allows for more detailed studies of the tracks induced by ionising particles. The combination of SIM and FNTDs for alpha radiation paves the way for affordable and fast alpha spectroscopy and dosimetry. © 2018 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of the Royal Microscopical Society.

  8. Chapter 17: Adding Value to the Biorefinery with Lignin: An Engineer's Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biddy, Mary J

    There is a long-standing belief that 'you can make anything out of lignin...except money.' This chapter serves to highlight that opportunities for making money from biomass-derived lignin exist, ranging from current technology for the production of steam and power to new emerging areas of R&D focused on value-added chemical and material coproducts from lignin. To understand and quantify the economic potential for lignin valorization, the techno-economic analysis methodology is first described in detail. As demonstrated in the provided case study, these types of economic evaluations serve not only to estimate the economic impacts that lignin conversion could have for an integrated biorefinery and outline drivers for further cost reduction, but also to identify data gaps and R&D needs for improving the design basis and reducing the risk for process scale-up.

  9. Estimation of the lower and upper bounds on the probability of failure using subset simulation and random set theory

    NASA Astrophysics Data System (ADS)

    Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.

    2018-02-01

    Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
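    The lower/upper failure-probability idea can be illustrated with a plain Monte Carlo sketch in which each sample is an interval box (focal element) rather than a point, and the failure indicator is evaluated in its best and worst case over the box; the paper accelerates exactly this step with subset simulation. The limit state and input boxes below are illustrative assumptions.

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    def g(x1, x2):
        """Limit state function: failure is defined as g <= 0."""
        return 5.0 - x1 - x2

    def failure_probability_bounds(n=100_000, halfwidth=0.2):
        lower = upper = 0
        for _ in range(n):
            # sample the centre of a focal element (an interval box) for each input
            c1, c2 = rng.normal(1.5, 0.5), rng.normal(1.5, 0.5)
            corners = [(c1 + s1 * halfwidth, c2 + s2 * halfwidth)
                       for s1 in (-1, 1) for s2 in (-1, 1)]
            gs = [g(a, b) for a, b in corners]        # g is monotone, so corners suffice
            lower += max(gs) <= 0                     # the whole box lies in the failure domain
            upper += min(gs) <= 0                     # some part of the box lies in the failure domain
        return lower / n, upper / n

    print(failure_probability_bounds())
    ```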

  10. Development and Testing of a High Stability Engine Control (HISTEC) System

    NASA Technical Reports Server (NTRS)

    Orme, John S.; DeLaat, John C.; Southwick, Robert D.; Gallops, George W.; Doane, Paul M.

    1998-01-01

    Flight tests were recently completed to demonstrate an inlet-distortion-tolerant engine control system. These flight tests were part of NASA's High Stability Engine Control (HISTEC) program. The objective of the HISTEC program was to design, develop, and flight demonstrate an advanced integrated engine control system that uses measurement-based, real-time estimates of inlet airflow distortion to enhance engine stability. With improved stability and tolerance of inlet airflow distortion, future engine designs may benefit from a reduction in design stall-margin requirements and enhanced reliability, with a corresponding increase in performance and decrease in fuel consumption. This paper describes the HISTEC methodology, presents an aircraft test bed description (including HISTEC-specific modifications) and verification and validation ground tests. Additionally, flight test safety considerations, test plan and technique design and approach, and flight operations are addressed. Some illustrative results are presented to demonstrate the type of analysis and results produced from the flight test program.

  11. [Abortion and crime].

    PubMed

    Citoni, Guido

    2011-01-01

    In this article we address, with a tentative empirical application to Italian data, the relationship, much debated mainly in North America, between abortion legalization and the reduction of youth crime rates. The rationale of this relationship is that there is a causal factor at work: the more unwanted pregnancies are aborted, the fewer unwanted children develop criminal attitudes in hostile or deprived family environments. Many methodological and empirical criticisms have been raised against the proof of the existence of such a relationship: our attempt to test whether this link holds for Italy does not support its existence. The data we used required some assumptions, and the reliability of official crime rate estimates was debatable (probably biased downward). We conclude that, at least for Italy, the suggested relationship is unproven; other justifications for legal abortion have been and should be put forward.

  12. An electrochemical sensing platform based on local repression of electrolyte diffusion for single-step, reagentless, sensitive detection of a sequence-specific DNA-binding protein.

    PubMed

    Zhang, Yun; Liu, Fang; Nie, Jinfang; Jiang, Fuyang; Zhou, Caibin; Yang, Jiani; Fan, Jinlong; Li, Jianping

    2014-05-07

    In this paper, we report for the first time an electrochemical biosensor for single-step, reagentless, and picomolar detection of a sequence-specific DNA-binding protein using a double-stranded, electrode-bound DNA probe terminally modified with a redox active label close to the electrode surface. This new methodology is based upon local repression of electrolyte diffusion associated with protein-DNA binding that leads to reduction of the electrochemical response of the label. In the proof-of-concept study, the resulting electrochemical biosensor was quantitatively sensitive to the concentrations of the TATA binding protein (TBP, a model analyte) ranging from 40 pM to 25.4 nM with an estimated detection limit of ∼10.6 pM (∼80 to 400-fold improvement on the detection limit over previous electrochemical analytical systems).

  13. Reduction of Complications of Local Anaesthesia in Dental Healthcare Setups by Application of the Six Sigma Methodology: A Statistical Quality Improvement Technique.

    PubMed

    Akifuddin, Syed; Khatoon, Farheen

    2015-12-01

    Health care faces challenges due to complications, inefficiencies and other concerns that threaten the safety of patients. The purpose of this study was to identify causes of complications encountered after administration of local anaesthesia for dental and oral surgical procedures and to reduce the incidence of complications by introducing the Six Sigma methodology. The DMAIC (Define, Measure, Analyse, Improve and Control) process of Six Sigma was applied to reduce the incidence of complications encountered after administration of local anaesthesia injections for dental and oral surgical procedures, using failure mode and effect analysis. Pareto analysis was used to identify the most frequently recurring complications. A paired z-test in Minitab and Fisher's exact test were used to statistically analyse the data, with p<0.05 considered significant. A total of 54 systemic and 62 local complications occurred during the three-month analyse and measure phases. Syncope, failure of anaesthesia, trismus, self-inflicted bite injuries (auto-mordeduras) and pain at the injection site were found to be the most frequently recurring complications. The cumulative defective percentage was 7.99% for the pre-improvement data and decreased to 4.58% in the control phase. The estimated difference was 0.0341228, with a 95% lower bound of 0.0193966; the p-value was highly significant (p<0.001). The application of the Six Sigma improvement methodology in health care tends to deliver consistently better results for patients as well as hospitals, and results in better patient compliance and satisfaction.

  14. Rape and Sexual Assault Victimization Among College-Age Females, 1995-2013

    MedlinePlus

    ... similar manner to the NCVS context and questions. Methodological differences that lead to higher estimates of rape ... nonstudent victimizations should not be affected by the methodological differences impacting the overall level of rape and ...

  15. Refining a methodology for determining the economic impacts of transportation improvements.

    DOT National Transportation Integrated Search

    2012-07-01

    Estimating the economic impact of transportation improvements has previously proven to be a difficult task. After an exhaustive literature review, it was clear that the transportation profession lacked standards and methodologies for determining econ...

  16. TEST (Toxicity Estimation Software Tool) Ver 4.1

    EPA Science Inventory

    The Toxicity Estimation Software Tool (T.E.S.T.) has been developed to allow users to easily estimate toxicity and physical properties using a variety of QSAR methodologies. T.E.S.T allows a user to estimate toxicity without requiring any external programs. Users can input a chem...

  17. MEGASTAR: The meaning of growth. An assessment of systems, technologies, and requirements. [methodology for display and analysis of energy production and consumption

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A methodology for the display and analysis of postulated energy futures for the United States is presented. A systems approach methodology including the methodology of technology assessment is used to examine three energy scenarios--the Westinghouse Nuclear Electric Economy, the Ford Technical Fix Base Case and a MEGASTAR generated Alternate to the Ford Technical Fix Base Case. The three scenarios represent different paths of energy consumption from the present to the year 2000. Associated with these paths are various mixes of fuels, conversion, distribution, conservation and end-use technologies. MEGASTAR presents the estimated times and unit requirements to supply the fuels, conversion and distribution systems for the postulated end uses for the three scenarios and then estimates the aggregate manpower, materials, and capital requirements needed to develop the energy system described by the particular scenario.

  18. Urban Earthquake Shaking and Loss Assessment

    NASA Astrophysics Data System (ADS)

    Hancilar, U.; Tuzun, C.; Yenidogan, C.; Zulfikar, C.; Durukal, E.; Erdik, M.

    2009-04-01

    This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR and ETH-Zurich, is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic data base and basic source parameters and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimation of the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear wave velocity distributions (Shake Mapping). 4. Incorporation of strong ground motion and other empirical macroseismic data for the improvement of the Shake Map. 5. Estimation of the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the availability of the inventory of the human-built environment (Loss Mapping). The Level 2 analysis of the ELER software (similar to HAZUS and SELENA) is essentially intended for earthquake risk assessment (building damage, consequential human casualties and macro-economic loss quantifiers) in urban areas. The basic Shake Mapping is similar to the Level 0 and Level 1 analyses; however, options are available for more sophisticated treatment of site response through externally entered data and for improvement of the shake map through incorporation of accelerometric and other macroseismic data (similar to the USGS ShakeMap System). The building inventory data for the Level 2 analysis consist of grid (geo-cell) based urban building and demographic inventories. For building grouping, the European building typology developed within the EU-FP5 RISK-EU project is used. The building vulnerability/fragility relationships can be user-selected from a list of applicable relationships developed on the basis of a comprehensive study. Both empirical and analytical relationships (based on the Coefficient Method, the Equivalent Linearization Method and the Reduction Factor Method of analysis) can be employed. Casualties in the Level 2 analysis are estimated from the number of buildings in different damage states and the casualty rates for each building type and damage level; the casualty rates can be modified if necessary. The ELER Level 2 analysis also includes calculation of direct monetary losses resulting from building damage, which allows for repair-cost estimations and specific investigations associated with earthquake insurance applications (PML and AAL estimations). Level 2 loss results obtained for Istanbul for a scenario earthquake using different techniques will be presented, with comparisons against other earthquake damage assessment software. The urban earthquake shaking and loss information is intended for dissemination in a timely manner to the relevant agencies for the planning and coordination of post-earthquake emergency response. However, the same software can also be used for scenario earthquake loss estimation, related Monte Carlo type simulations and earthquake insurance applications.
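
    As a rough illustration of the chain a Level 2 grid-cell analysis repeats for every geo-cell and building type, the sketch below couples a generic ground-motion attenuation form with a lognormal fragility curve to turn a scenario magnitude and distance into an expected count of damaged buildings. The coefficients and capacities are placeholders, not ELER's actual relationships.

    ```python
    # Illustrative sketch only: generic attenuation + lognormal fragility, the basic
    # shake-map-to-damage chain repeated for each geo-cell and building typology.
    import numpy as np
    from scipy.stats import norm

    def pga_attenuation(magnitude, distance_km, a=-1.0, b=0.5, c=1.0):
        """Generic attenuation form ln(PGA[g]) = a + b*M - c*ln(R); coefficients are placeholders."""
        return np.exp(a + b * magnitude - c * np.log(distance_km))

    def damage_probability(pga, median_capacity_g, beta):
        """Lognormal fragility: P(damage state >= DS | PGA)."""
        return norm.cdf(np.log(pga / median_capacity_g) / beta)

    # One geo-cell: 500 buildings of a single typology, scenario M6.5 at 20 km.
    pga = pga_attenuation(6.5, 20.0)
    p_extensive = damage_probability(pga, median_capacity_g=0.35, beta=0.6)
    print(f"PGA ~ {pga:.2f} g, P(>= extensive damage) = {p_extensive:.2f}, "
          f"expected buildings >= extensive: {500 * p_extensive:.0f}")
    ```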

  19. Inference regarding multiple structural changes in linear models with endogenous regressors☆

    PubMed Central

    Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia

    2012-01-01

    This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021
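
    A minimal sketch of the break-fraction idea on simulated data: project the endogenous regressor on the instruments (first stage), then scan candidate break dates for the split that minimises the second-stage sum of squared residuals. This assumes a stable reduced form and a single break, and it omits the paper's inference procedure.

    ```python
    # Simulated example: estimate a single break date by minimising the 2SLS criterion.
    import numpy as np

    rng = np.random.default_rng(0)
    T, true_break = 400, 240
    z = rng.normal(size=(T, 2))                      # instruments
    u = rng.normal(size=T)
    x = z @ np.array([1.0, -0.5]) + 0.8 * u          # endogenous regressor
    beta = np.where(np.arange(T) < true_break, 1.0, 2.0)
    y = beta * x + u + rng.normal(scale=0.5, size=T)

    # First stage over the full sample (assumes a stable reduced form).
    x_hat = z @ np.linalg.lstsq(z, x, rcond=None)[0]

    def ssr_2sls(yseg, xseg):
        b = np.linalg.lstsq(xseg[:, None], yseg, rcond=None)[0]
        resid = yseg - xseg * b
        return resid @ resid

    trim = int(0.15 * T)
    candidates = range(trim, T - trim)
    ssr = [ssr_2sls(y[:k], x_hat[:k]) + ssr_2sls(y[k:], x_hat[k:]) for k in candidates]
    k_hat = list(candidates)[int(np.argmin(ssr))]
    print("estimated break date:", k_hat, "(true:", true_break, ")")
    ```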

  20. Potential evapotranspiration and continental drying

    USGS Publications Warehouse

    Milly, Paul C.D.; Dunne, Krista A.

    2016-01-01

    By various measures (drought area and intensity, climatic aridity index, and climatic water deficits), some observational analyses have suggested that much of the Earth’s land has been drying during recent decades, but such drying seems inconsistent with observations of dryland greening and decreasing pan evaporation. ‘Offline’ analyses of climate-model outputs from anthropogenic climate change (ACC) experiments portend continuation of putative drying through the twenty-first century, despite an expected increase in global land precipitation. A ubiquitous increase in estimates of potential evapotranspiration (PET), driven by atmospheric warming, underlies the drying trends, but may be a methodological artefact. Here we show that the PET estimator commonly used (the Penman–Monteith PET for either an open-water surface or a reference crop) severely overpredicts the changes in non-water-stressed evapotranspiration computed in the climate models themselves in ACC experiments. This overprediction is partially due to neglect of stomatal conductance reductions commonly induced by increasing atmospheric CO2 concentrations in climate models. Our findings imply that historical and future tendencies towards continental drying, as characterized by offline-computed runoff, as well as other PET-dependent metrics, may be considerably weaker and less extensive than previously thought.
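
    For reference, the estimator in question is the FAO-56 Penman-Monteith reference-crop formula; a minimal implementation shows how PET rises with temperature alone when surface resistance is held fixed, which is the behaviour the authors argue overstates actual drying. Input values are illustrative.

    ```python
    # FAO-56 Penman-Monteith reference-crop ET sketch; the 0.408 / 900 / 0.34 constants follow FAO-56.
    import numpy as np

    def fao56_reference_et(t_mean_c, rn_mj, g_mj, u2_ms, ea_kpa, pressure_kpa=101.3):
        es = 0.6108 * np.exp(17.27 * t_mean_c / (t_mean_c + 237.3))       # sat. vapour pressure (kPa)
        delta = 4098.0 * es / (t_mean_c + 237.3) ** 2                     # slope of sat. curve (kPa/degC)
        gamma = 0.000665 * pressure_kpa                                   # psychrometric constant (kPa/degC)
        num = 0.408 * delta * (rn_mj - g_mj) + gamma * (900.0 / (t_mean_c + 273.0)) * u2_ms * (es - ea_kpa)
        return num / (delta + gamma * (1.0 + 0.34 * u2_ms))               # mm/day

    # Same radiation, wind and humidity, +3 degC warming: PET rises even though nothing else changed.
    print(fao56_reference_et(20.0, 15.0, 0.0, 2.0, 1.4))
    print(fao56_reference_et(23.0, 15.0, 0.0, 2.0, 1.4))
    ```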

  1. Summary of Full-Scale Blade Displacement Measurements of the UH- 60A Airloads Rotor

    NASA Technical Reports Server (NTRS)

    Abrego, Anita I.; Meyn, Larry; Burner, Alpheus W.; Barrows, Danny A.

    2016-01-01

    Blade displacement measurements using multi-camera photogrammetry techniques were acquired for a full-scale UH-60A rotor, tested in the National Full-Scale Aerodynamic Complex 40-Foot by 80-Foot Wind Tunnel. The measurements, acquired over the full rotor azimuth, encompass a range of test conditions that include advance ratios from 0.15 to 1.0, thrust coefficient to rotor solidity ratios from 0.01 to 0.13, and rotor shaft angles from -10.0 to 8.0 degrees. The objective was to measure the blade displacements and deformations of the four rotor blades and provide a benchmark blade displacement database to be utilized in the development and validation of rotorcraft prediction techniques. An overview of the blade displacement measurement methodology, system development, and data analysis techniques is presented. Sample results based on the final set of camera calibrations, data reduction procedures and estimated corrections that account for registration errors due to blade elasticity are shown. Differences in blade root pitch, flap and lag between the previously reported results and the current results are small. However, even small changes in estimated root flap and pitch can lead to significant differences in the blade elasticity values.

  2. Benefit/cost comparison for utility SMES applications

    NASA Astrophysics Data System (ADS)

    Desteese, J. G.; Dagle, J. E.

    1991-08-01

    This paper summarizes eight case studies that account for the benefits and costs of superconducting magnetic energy storage (SMES) in system-specific utility applications. Four of these scenarios are hypothetical SMES applications in the Pacific Northwest, where relatively low energy costs impose a stringent test on the viability of the concept. The other four scenarios address SMES applications on high-voltage, direct-current (HVDC) transmission lines. While estimated SMES benefits are based on a previously reported methodology, this paper presents results of an improved cost-estimating approach that includes an assumed reduction in the cost of the power conditioning system (PCS) from approximately $160/kW to $80/kW. The revised approach results in all the SMES scenarios showing higher benefit/cost ratios than those reported earlier. However, in all but two cases, the value of any single benefit is still less than the unit's levelized cost. This suggests, as a general principle, that the total value of multiple benefits should always be considered if SMES is to appear cost effective in many utility applications. These results should offer utilities further encouragement to conduct more detailed analyses of SMES benefits in scenarios that apply to individual systems.

  3. Greenhouse Gas Emissions from Reservoir Water Surfaces: A ...

    EPA Pesticide Factsheets

    Collectively, reservoirs are an important anthropogenic source of greenhouse gases (GHGs) to the atmosphere. Attempts to model reservoir GHG fluxes, however, have been limited by inconsistencies in methodological approaches and data availability. An increase in the number of published reservoir GHG flux estimates during the last 15 years warrants a comprehensive analysis of the magnitude and potential controls on these fluxes. Here we synthesize worldwide reservoir CH4, CO2, and N2O emission data and estimate that GHG emissions from reservoirs account for 80.2 Tmol CO2 equivalents yr-1, thus constituting approximately 5% of anthropogenic radiative forcing. The majority (93%) of these emissions are from CH4, and mainly in the form of bubbles. While age and latitude have historically been linked to reservoir GHG emissions, we found that factors related to reservoir nutrient status and rainfall were better predictors. In particular, nutrient-rich eutrophic reservoirs were found to have an order of magnitude higher per-area CH4 fluxes, on average, than their nutrient-poor oligotrophic counterparts. Therefore, management measures to reduce reservoir eutrophication may result in an important co-benefit, the reduction of GHG emissions to the atmosphere.

  4. Global carbon monoxide cycle: Modeling and data analysis

    NASA Astrophysics Data System (ADS)

    Arellano, Avelino F., Jr.

    The overarching goal of this dissertation is to develop robust, spatially and temporally resolved CO sources, using global chemical transport modeling and CO measurements from the Climate Monitoring and Diagnostics Laboratory (CMDL) and the Measurements Of Pollution In The Troposphere (MOPITT) instrument, within the framework of Bayesian synthesis inversion. To rigorously quantify the CO sources, I conducted five sets of inverse analyses, each investigating specific methodological and scientific issues. The first two inverse analyses separately explored the two different CO observation sets to estimate CO sources by region and sector. Under a range of scenarios relating to inverse methodology and data quality issues, top-down estimates using CMDL surface and MOPITT remote-sensed CO measurements show consistent results, in particular a fossil fuel/biofuel (FFBF) emission in East Asia significantly larger than present bottom-up estimates. The robustness of this estimate is strongly supported by forward and inverse modeling studies in the region, particularly from the TRansport and Chemical Evolution over the Pacific (TRACE-P) campaign. The first use of high-resolution measurements in CO inversion also draws attention to a methodological issue: the range of estimates across the scenarios is larger than the posterior uncertainties, suggesting that the estimated uncertainties may be underestimated. My analyses highlight the utility of the top-down approach for providing additional constraints on present global estimates by also pointing to other discrepancies, including apparent underestimation of FFBF sources from Africa/Latin America and biomass burning (BIOM) sources in Africa, southeast Asia and northern Latin America, indicating inconsistencies in our current understanding of fuel use and land-use patterns in these regions. The inverse analysis using MOPITT is extended to determine the information content of MOPITT and to estimate monthly regional CO sources. A major finding, consistent with other atmospheric observations but differing from satellite area-burned observations, is a significant overestimation in southern Africa for June/July relative to satellite- and model-constrained BIOM emissions of CO. Sensitivity inverse analyses on observation error covariance and structure, and a sequential inversion using NOAA CMDL data to fully exploit the available information, confirm the robustness of the estimates and also expose the limitations of the approach, implying the need to further improve the methodology and to reconcile discrepancies.
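
    The core of a Bayesian synthesis inversion is a linear Gaussian update of prior (bottom-up) sources using observations and a transport-model Jacobian. The sketch below shows that update with made-up numbers for the Jacobian, prior and covariances.

    ```python
    # Bare-bones Bayesian synthesis inversion: posterior = prior updated by observations via H.
    import numpy as np

    x_prior = np.array([100.0, 50.0, 80.0])          # prior regional CO sources (Tg/yr)
    S_prior = np.diag([50.0**2, 25.0**2, 40.0**2])   # prior error covariance
    H = np.array([[0.6, 0.1, 0.1],                   # sensitivity of each observation to each source
                  [0.2, 0.5, 0.1],
                  [0.1, 0.2, 0.7],
                  [0.3, 0.3, 0.3]])
    y = np.array([95.0, 60.0, 90.0, 88.0])           # observed CO (model units)
    S_obs = np.diag([10.0**2] * 4)                   # observation + model error covariance

    # Posterior mean and covariance (standard linear Gaussian update).
    S_post = np.linalg.inv(np.linalg.inv(S_prior) + H.T @ np.linalg.inv(S_obs) @ H)
    x_post = x_prior + S_post @ H.T @ np.linalg.inv(S_obs) @ (y - H @ x_prior)
    print("posterior sources :", x_post.round(1))
    print("posterior std dev :", np.sqrt(np.diag(S_post)).round(1))
    ```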

  5. Assessing the recent estimates of the global burden of disease for ambient air pollution: Methodological changes and implications for low- and middle-income countries.

    PubMed

    Ostro, Bart; Spadaro, Joseph V; Gumy, Sophie; Mudu, Pierpaolo; Awe, Yewande; Forastiere, Francesco; Peters, Annette

    2018-06-04

    The Global Burden of Disease (GBD) is a comparative assessment of the health impact of the major and well-established risk factors, including ambient air pollution (AAP) assessed by concentrations of PM2.5 (particles less than 2.5 µm) and ozone. Over the last two decades, major improvements have emerged for two important inputs in the methodology for estimating the impacts of PM2.5: the assessment of global exposure to PM2.5 and the development of integrated exposure risk models (IERs) that relate the entire range of global exposures of PM2.5 to cause-specific mortality. As a result, the estimated annual mortality attributed to AAP increased from less than 1 million in 2000 to roughly 3 million for GBD in years 2010 and 2013, to 4.2 million for GBD 2015. However, the magnitude of the recent change and uncertainty regarding its rationale have resulted, in some cases, in skepticism and reduced confidence in the overall estimates. To understand the underlying reasons for the change in mortality, we examined the estimates for the years 2013 and 2015 to determine the quantitative implications of alternative model input assumptions. We calculated that the year 2013 estimates increased by 8% after applying the updated exposure data used in GBD 2015, and increased by 23% with the application of the updated IERs from GBD 2015. The application of both upgraded methodologies together increased the GBD 2013 estimates by 35%, or about one million deaths. We also quantified the impact of the changes in demographics and the assumed threshold level. Since the global estimates of air pollution-related deaths will continue to change over time, a clear documentation of the modifications in the methodology and their impacts is necessary. In addition, there is need for additional monitoring and epidemiological studies to reduce uncertainties in the estimates for low- and medium-income countries, which contribute to about one-half of the mortality. Copyright © 2018. Published by Elsevier Inc.

  6. Bayesian WLS/GLS regression for regional skewness analysis for regions with large crest stage gage networks

    USGS Publications Warehouse

    Veilleux, Andrea G.; Stedinger, Jery R.; Eash, David A.

    2012-01-01

    This paper summarizes methodological advances in regional log-space skewness analyses that support flood-frequency analysis with the log Pearson Type III (LP3) distribution. A Bayesian Weighted Least Squares/Generalized Least Squares (B-WLS/B-GLS) methodology that relates observed skewness coefficient estimators to basin characteristics in conjunction with diagnostic statistics represents an extension of the previously developed B-GLS methodology. B-WLS/B-GLS has been shown to be effective in two California studies. B-WLS/B-GLS uses B-WLS to generate stable estimators of model parameters and B-GLS to estimate the precision of those B-WLS regression parameters, as well as the precision of the model. The study described here employs this methodology to develop a regional skewness model for the State of Iowa. To provide cost-effective peak-flow data for smaller drainage basins in Iowa, the U.S. Geological Survey operates a large network of crest stage gages (CSGs) that only record flow values above an identified recording threshold (thus producing a censored data record). CSGs are different from continuous-record gages, which record almost all flow values and have been used in previous B-GLS and B-WLS/B-GLS regional skewness studies. The complexity of analyzing a large CSG network is addressed by using the B-WLS/B-GLS framework along with the Expected Moments Algorithm (EMA). Because EMA allows for the censoring of low outliers, as well as the use of estimated interval discharges for missing, censored, and historic data, it complicates the calculations of effective record length (and effective concurrent record length) used to describe the precision of sample estimators because the peak discharges are no longer solely represented by single values. Thus, new record length calculations were developed. The regional skewness analysis for the State of Iowa illustrates the value of the new B-WLS/B-GLS methodology with these new extensions.
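
    Only as a conceptual anchor, the sketch below fits station skew to a basin characteristic by ordinary weighted least squares with inverse-variance weights; the actual B-WLS/B-GLS procedure adds the Bayesian treatment of model error and the GLS error covariance, and here the data are synthetic.

    ```python
    # Plain weighted least squares of station skew on a basin characteristic (synthetic data).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 60
    drainage_area = rng.uniform(10, 2000, n)                 # square miles
    record_length = rng.integers(15, 80, n)                  # years
    X = np.column_stack([np.ones(n), np.log10(drainage_area)])
    true_beta = np.array([-0.3, 0.15])
    skew_var = 6.0 / record_length                           # rough sampling variance of the skew estimator
    station_skew = X @ true_beta + rng.normal(scale=np.sqrt(skew_var))

    w = 1.0 / skew_var                                       # weight = inverse sampling variance
    W = np.diag(w)
    beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ station_skew)
    print("WLS regional skew model coefficients:", beta_hat.round(3))
    ```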

  7. Precipitable water vapour content from ESR/SKYNET sun-sky radiometers: validation against GNSS/GPS and AERONET over three different sites in Europe

    NASA Astrophysics Data System (ADS)

    Campanelli, Monica; Mascitelli, Alessandra; Sanò, Paolo; Diémoz, Henri; Estellés, Victor; Federico, Stefano; Iannarelli, Anna Maria; Fratarcangeli, Francesca; Mazzoni, Augusto; Realini, Eugenio; Crespi, Mattia; Bock, Olivier; Martínez-Lozano, Jose A.; Dietrich, Stefano

    2018-01-01

    The estimation of the precipitable water vapour content (W) with high temporal and spatial resolution is of great interest to both meteorological and climatological studies. Several methodologies based on remote sensing techniques have been recently developed in order to obtain accurate and frequent measurements of this atmospheric parameter. Among them, the relatively low cost and easy deployment of sun-sky radiometers, or sun photometers, operating in several international networks, allowed the development of automatic estimations of W from these instruments with high temporal resolution. However, the great problem of this methodology is the estimation of the sun-photometric calibration parameters. The objective of this paper is to validate a new methodology based on the hypothesis that the calibration parameters characterizing the atmospheric transmittance at 940 nm are dependent on vertical profiles of temperature, air pressure and moisture typical of each measurement site. To obtain the calibration parameters, simultaneous seasonal measurements of W from independent sources, taken over a large range of solar zenith angle and covering a wide range of W, are needed. In this work, yearly GNSS/GPS datasets were used for obtaining a table of photometric calibration constants, and the methodology was applied and validated at three European ESR-SKYNET network sites characterized by different atmospheric and climatic conditions: Rome, Valencia and Aosta. Results were validated against the GNSS/GPS and AErosol RObotic NETwork (AERONET) W estimations. In both validations the agreement was very high, with a percentage RMSD of about 6%, 13% and 8% for the GPS intercomparison at Rome, Aosta and Valencia, respectively, and of 8% for the AERONET comparison in Valencia. Analysing the results by W classes, the present methodology was found to clearly improve W estimation at low W content when compared against AERONET in terms of percentage bias, bringing the agreement with the GPS (considered the reference) from a bias of 5.76% to 0.52%.
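
    A sketch of the retrieval step once the 940 nm calibration constants are in hand: the water-vapour transmittance is commonly modelled as exp(-a*(m*W)^b), so W follows by inverting the measured signal. The constants a, b, V0 and the aerosol/Rayleigh optical depth below are placeholders, not the site-dependent calibration derived in the paper.

    ```python
    # Invert the 940 nm band signal for precipitable water once a, b and V0 are known (placeholder values).
    import numpy as np

    def retrieve_w(v_measured, v0, m, tau_aer_ray, a, b):
        """Invert ln(V0/V) = m*tau_aer_ray + a*(m*W)^b for precipitable water W (cm)."""
        water_term = np.log(v0 / v_measured) - m * tau_aer_ray
        return (water_term / a) ** (1.0 / b) / m

    # Example: solar zenith angle 60 deg -> air mass ~2, with hypothetical calibration a=0.59, b=0.55.
    m = 1.0 / np.cos(np.deg2rad(60.0))
    print(retrieve_w(v_measured=4200.0, v0=12000.0, m=m, tau_aer_ray=0.12, a=0.59, b=0.55))
    ```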

  8. Estimating Agricultural Nitrous Oxide Emissions

    USDA-ARS?s Scientific Manuscript database

    Nitrous oxide emissions are highly variable in space and time and different methodologies have not agreed closely, especially at small scales. However, as scale increases, so does the agreement between estimates based on soil surface measurements (bottom up approach) and estimates derived from chang...

  9. Modelling of aflatoxin G1 reduction by kefir grain using response surface methodology.

    PubMed

    Ansari, Farzaneh; Khodaiyan, Faramarz; Rezaei, Karamatollah; Rahmani, Anosheh

    2015-01-01

    Aflatoxin G1 (AFG1) is one of the main toxic contaminants in pistachio nuts and causes potential health hazards. Hence, AFG1 reduction is one of the main concerns in food safety. Kefir-grains contain a symbiotic association of microorganisms well known for their aflatoxin decontamination effects. In this study, a central composite design (CCD) using response surface methodology (RSM) was applied to develop a model in order to predict AFG1 reduction in pistachio nuts by kefir-grain (already heated at 70 and 110°C). The independent variables were: toxin concentration (X1: 5, 10, 15, 20 and 25 ng/g), kefir-grain level (X2: 5, 10, 20, 10 and 25%), contact time (X3: 0, 2, 4, 6 and 8 h), and incubation temperature (X4: 20, 30, 40, 50 and 60°C). There was a significant reduction in AFG1 (p < 0.05) when pre-heat-treated kefir-grain was used. The variables X1 and X3, and the X2-X4 and X3-X4 interactions, had significant effects on AFG1 reduction. The model provided a good prediction of AFG1 reduction under the assay conditions. Optimization was used to enhance the efficiency of kefir-grain on AFG1 reduction. The optimum conditions for the highest AFG1 reduction (96.8%) were predicted by the model as follows: toxin concentration = 20 ng/g, kefir-grain level = 10%, contact time = 6 h, and incubation temperature = 30°C, which was validated experimentally in six replications.
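
    The modelling step behind a CCD/RSM study amounts to fitting a second-order polynomial (linear, squared and interaction terms) to the coded factors by least squares and interrogating it for an optimum; the sketch below does exactly that on invented data, not the kefir-grain measurements.

    ```python
    # Fit a quadratic response surface to four coded factors and predict at a candidate optimum.
    import numpy as np
    from itertools import combinations

    def quadratic_design(X):
        """Columns: intercept, linear terms, squared terms, two-factor interactions."""
        n, k = X.shape
        cols = [np.ones(n)] + [X[:, i] for i in range(k)] + [X[:, i] ** 2 for i in range(k)]
        cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
        return np.column_stack(cols)

    rng = np.random.default_rng(2)
    X = rng.uniform(-1, 1, size=(30, 4))                         # 4 coded factors (X1..X4)
    y = 80 - 5 * X[:, 0] + 8 * X[:, 2] - 6 * X[:, 2] ** 2 \
        + 4 * X[:, 1] * X[:, 3] + rng.normal(scale=2, size=30)   # synthetic % reduction

    D = quadratic_design(X)
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    x_candidate = np.array([[-1.0, 0.0, 0.7, 0.0]])
    print("predicted reduction at candidate optimum:",
          float(quadratic_design(x_candidate) @ coef))
    ```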

  10. A Community-Based Social Marketing Campaign at Pacific University Oregon: Recycling, Paper Reduction, and Environmentally Preferable Purchasing

    ERIC Educational Resources Information Center

    Cole, Elaine J.; Fieselman, Laura

    2013-01-01

    Purpose: The purpose of this paper is to design a community-based social marketing (CBSM) campaign to foster sustainable behavior change in paper reduction, commingled recycling, and purchasing environmentally preferred products (EPP) with faculty and staff at Pacific University Oregon. Design/methodology/approach: A CBSM campaign was developed…

  11. Treatment effects model for assessing disease management: measuring outcomes and strengthening program management.

    PubMed

    Wendel, Jeanne; Dumitras, Diana

    2005-06-01

    This paper describes an analytical methodology for obtaining statistically unbiased outcomes estimates for programs in which participation decisions may be correlated with variables that impact outcomes. This methodology is particularly useful for intraorganizational program evaluations conducted for business purposes. In this situation, data is likely to be available for a population of managed care members who are eligible to participate in a disease management (DM) program, with some electing to participate while others eschew the opportunity. The most pragmatic analytical strategy for in-house evaluation of such programs is likely to be the pre-intervention/post-intervention design in which the control group consists of people who were invited to participate in the DM program, but declined the invitation. Regression estimates of program impacts may be statistically biased if factors that impact participation decisions are correlated with outcomes measures. This paper describes an econometric procedure, the Treatment Effects model, developed to produce statistically unbiased estimates of program impacts in this type of situation. Two equations are estimated to (a) estimate the impacts of patient characteristics on decisions to participate in the program, and then (b) use this information to produce a statistically unbiased estimate of the impact of program participation on outcomes. This methodology is well-established in economics and econometrics, but has not been widely applied in the DM outcomes measurement literature; hence, this paper focuses on one illustrative application.
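
    A two-step sketch of this type of estimator on simulated data: a probit for the participation decision, then an outcome regression augmented with the selection hazard (inverse Mills ratio) so the program-effect coefficient is purged of selection bias. A production analysis would use a packaged maximum-likelihood treatment-effects estimator rather than this hand-rolled version.

    ```python
    # Two-step treatment-effects sketch on simulated data with correlated errors.
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    n = 2000
    z = rng.normal(size=n)                                  # drives participation (e.g. disease severity)
    x = rng.normal(size=n)                                  # outcome covariate
    u = rng.normal(size=n)
    eps = 0.6 * u + rng.normal(scale=0.8, size=n)           # errors correlated -> selection bias
    d = (0.5 + 1.0 * z + u > 0).astype(float)               # participation decision
    y = 2.0 + 1.5 * x - 3.0 * d + eps                       # true program effect: -3.0

    # Step 1: probit participation model by maximum likelihood.
    Z = np.column_stack([np.ones(n), z])
    def neg_loglik(g):
        p = np.clip(norm.cdf(Z @ g), 1e-9, 1 - 1e-9)
        return -np.sum(d * np.log(p) + (1 - d) * np.log(1 - p))
    g_hat = minimize(neg_loglik, x0=np.zeros(2)).x

    # Step 2: outcome regression augmented with the selection hazard (inverse Mills ratio).
    zi = Z @ g_hat
    hazard = np.where(d == 1, norm.pdf(zi) / norm.cdf(zi), -norm.pdf(zi) / (1 - norm.cdf(zi)))
    X = np.column_stack([np.ones(n), x, d, hazard])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    naive = np.linalg.lstsq(np.column_stack([np.ones(n), x, d]), y, rcond=None)[0][2]
    print("naive OLS effect          :", naive.round(2))
    print("selection-corrected effect:", beta[2].round(2))
    ```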

  12. Methodological Framework for World Health Organization Estimates of the Global Burden of Foodborne Disease

    PubMed Central

    Devleesschauwer, Brecht; Haagsma, Juanita A.; Angulo, Frederick J.; Bellinger, David C.; Cole, Dana; Döpfer, Dörte; Fazil, Aamir; Fèvre, Eric M.; Gibb, Herman J.; Hald, Tine; Kirk, Martyn D.; Lake, Robin J.; Maertens de Noordhout, Charline; Mathers, Colin D.; McDonald, Scott A.; Pires, Sara M.; Speybroeck, Niko; Thomas, M. Kate; Torgerson, Paul R.; Wu, Felicia; Havelaar, Arie H.; Praet, Nicolas

    2015-01-01

    Background The Foodborne Disease Burden Epidemiology Reference Group (FERG) was established in 2007 by the World Health Organization to estimate the global burden of foodborne diseases (FBDs). This paper describes the methodological framework developed by FERG's Computational Task Force to transform epidemiological information into FBD burden estimates. Methods and Findings The global and regional burden of 31 FBDs was quantified, along with limited estimates for 5 other FBDs, using Disability-Adjusted Life Years in a hazard- and incidence-based approach. To accomplish this task, the following workflow was defined: outline of disease models and collection of epidemiological data; design and completion of a database template; development of an imputation model; identification of disability weights; probabilistic burden assessment; and estimating the proportion of the disease burden by each hazard that is attributable to exposure by food (i.e., source attribution). All computations were performed in R and the different functions were compiled in the R package 'FERG'. Traceability and transparency were ensured by sharing results and methods in an interactive way with all FERG members throughout the process. Conclusions We developed a comprehensive framework for estimating the global burden of FBDs, in which methodological simplicity and transparency were key elements. All the tools developed have been made available and can be translated into a user-friendly national toolkit for studying and monitoring food safety at the local level. PMID:26633883
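
    The arithmetic at the centre of the framework is the hazard- and incidence-based DALY: years of life lost plus years lived with disability, scaled by the food-attributable fraction. The sketch below uses invented numbers purely to show the bookkeeping.

    ```python
    # DALY = YLL + YLD, with YLD summed over health states and the food-attributable share applied last.
    incidence = 10_000            # incident cases of a hazard per year
    deaths = 50
    residual_life_expectancy = 35 # years of life lost per death
    health_states = [             # (proportion of cases, duration in years, disability weight)
        (0.90, 0.02, 0.10),       # mild, short illness
        (0.10, 1.00, 0.30),       # severe sequelae
    ]
    food_attribution = 0.6        # proportion of the burden attributable to foodborne exposure

    yll = deaths * residual_life_expectancy
    yld = sum(incidence * p * dur * dw for p, dur, dw in health_states)
    dalys = (yll + yld) * food_attribution
    print(f"YLL={yll:.0f}, YLD={yld:.0f}, foodborne DALYs={dalys:.0f}")
    ```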

  13. Herd immunity effect of the HPV vaccination program in Australia under different assumptions regarding natural immunity against re-infection.

    PubMed

    Korostil, Igor A; Peters, Gareth W; Law, Matthew G; Regan, David G

    2013-04-08

    Deterministic dynamic compartmental transmission models (DDCTMs) of human papillomavirus (HPV) transmission have been used in a number of studies to estimate the potential impact of HPV vaccination programs. In most cases, the models were built under the assumption that an individual who cleared HPV infection develops (life-long) natural immunity against re-infection with the same HPV type (this is known as SIR scenario). This assumption was also made by two Australian modelling studies evaluating the impact of the National HPV Vaccination Program to assist in the health-economic assessment of male vaccination. An alternative view denying natural immunity after clearance (SIS scenario) was only presented in one study, although neither scenario has been supported by strong evidence. Some recent findings, however, provide arguments in favour of SIS. We developed HPV transmission models implementing life-time (SIR), limited, and non-existent (SIS) natural immunity. For each model we estimated the herd immunity effect of the ongoing Australian HPV vaccination program and its extension to cover males. Given the Australian setting, we aimed to clarify the extent to which the choice of model structure would influence estimation of this effect. A statistically robust and efficient calibration methodology was applied to ensure credibility of our results. We observed that for non-SIR models the herd immunity effect measured in relative reductions in HPV prevalence in the unvaccinated population was much more pronounced than for the SIR model. For example, with vaccine efficacy of 95% for females and 90% for males, the reductions for HPV-16 were 3% in females and 28% in males for the SIR model, and at least 30% (females) and 60% (males) for non-SIR models. The magnitude of these differences implies that evaluations of the impact of vaccination programs using DDCTMs should incorporate several model structures until our understanding of natural immunity is improved. Copyright © 2013 Elsevier Ltd. All rights reserved.
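
    A stripped-down, homogeneous-mixing skeleton of the two natural-history assumptions: after clearance an individual either gains lifelong immunity (SIR) or returns to the susceptible pool (SIS). It only illustrates the structural difference and the much higher endemic prevalence SIS sustains; the study's herd-immunity numbers come from a calibrated, two-sex, age-structured model.

    ```python
    # SIR vs SIS skeletons with births/deaths; illustrative parameters only.
    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y, beta, gamma, mu, natural_immunity):
        s, i, r = y
        back_to_s = 0.0 if natural_immunity else gamma * i      # clearance returns to S under SIS
        return [mu - beta * s * i - mu * s + back_to_s,
                beta * s * i - (gamma + mu) * i,
                (gamma * i if natural_immunity else 0.0) - mu * r]

    for label, immune in [("SIR (lifelong immunity)", True), ("SIS (no immunity)", False)]:
        sol = solve_ivp(rhs, (0, 300), [0.99, 0.01, 0.0],
                        args=(3.0, 1.0, 1 / 50, immune), rtol=1e-8)
        print(f"{label}: endemic prevalence ~ {sol.y[1, -1]:.3f}")
    ```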

  14. The Diabetes Management Education Program in South Texas: An Economic and Clinical Impact Analysis.

    PubMed

    Kash, Bita A; Lin, Szu-Hsuan; Baek, Juha; Ohsfeldt, Robert L

    2017-01-01

    Diabetes is a major chronic disease that can lead to serious health problems and high healthcare costs without appropriate disease management and treatment. In the United States, the number of people diagnosed with diabetes and the cost for diabetes treatment has dramatically increased over time. To improve patients' self-management skills and clinical outcomes, diabetes management education (DME) programs have been developed and operated in various regions. This community case study explores and calculates the economic and clinical impacts of expanding a model DME program into 26 counties located in South Texas. The study sample includes 355 patients with type 2 diabetes and a follow-up hemoglobin A1c level measurement among 1,275 individuals who participated in the DME program between September 2012 and August 2013. We used the Gilmer's cost differentials model and the United Kingdom Prospective Diabetes Study (UKPDS) Risk Engine methodology to predict 3-year healthcare cost savings and 10-year clinical benefits of implementing a DME program in the selected 26 Texas counties. Changes in estimated 3-year cost and the estimated treatment effect were based on baseline hemoglobin A1c level. An average 3-year reduction in medical treatment costs per program participant was $2,033 (in 2016 dollars). The total healthcare cost savings for the 26 targeted counties increases as the program participation rate increases. The total projected cost saving ranges from $12 million with 5% participation rate to $185 million with 75% participation rate. A 10-year outlook on additional clinical benefits associated with the implementation and expansion of the DME program at 60% participation is estimated to result in approximately 4,838 avoided coronary heart disease cases and another 392 cases of avoided strokes. The implementation of this model DME program in the selected 26 counties would contribute to substantial healthcare cost savings and clinical benefits. Organizations that provide DME services may benefit from reduction in medical treatment costs and improvement in clinical outcomes for populations with diabetes.
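
    The headline savings figures scale in the obvious way with uptake: eligible population times participation rate times the average per-participant saving. The eligible-population figure below is a hypothetical placeholder chosen only to show the arithmetic, not the study's county-level data.

    ```python
    # Back-of-the-envelope scaling of programme savings with participation rate.
    savings_per_participant = 2033           # 3-year reduction in treatment cost, 2016 USD
    eligible_with_diabetes = 120_000         # hypothetical eligible adults in the 26 counties

    for rate in (0.05, 0.30, 0.60, 0.75):
        total = eligible_with_diabetes * rate * savings_per_participant
        print(f"participation {rate:>4.0%}: ~${total / 1e6:,.1f} million over 3 years")
    ```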

  15. Assessing global radiative forcing due to regional emissions of tropospheric ozone precursors: a step towards climate credit for ozone reductions

    NASA Astrophysics Data System (ADS)

    Mauzerall, D. L.; Naik, V.; Horowitz, L. W.; Schwarzkopf, D.; Ramaswamy, V.; Oppenheimer, M.

    2005-05-01

    Carbon dioxide emissions from fossil-fuel consumption are presented for the five Asian countries that are among the global leaders in anthropogenic carbon emissions: China (13% of global total), Japan (5% of global total), India (5% of global total), South Korea (2% of global total), and Indonesia (1% of global total). Together, these five countries represent over a quarter of the world's fossil-fuel based carbon emissions. Moreover, these countries are rapidly developing and energy demand has grown dramatically in the last two decades. A method is developed to estimate the spatial and seasonal flux of fossil-fuel consumption, thereby greatly improving the temporal and spatial resolution of anthropogenic carbon dioxide emissions. Currently, only national annual data for anthropogenic carbon emissions are available, and as such, no understanding of seasonal or sub-national patterns of emissions is possible. This methodology employs fuel distribution data from representative sectors of the fossil-fuel market to determine the temporal and spatial patterns of fuel consumption. These patterns of fuel consumption are then converted to patterns of carbon emissions. The annual total emissions estimates produced by this method are consistent with those maintained by the United Nations. Improved temporal and spatial resolution of the estimates of human-based carbon emissions allows for better projections about future energy demands, carbon emissions, and ultimately the global carbon cycle.

  16. Valuing the ozone-related health benefits of methane emission controls

    DOE PAGES

    Sarofim, Marcus C.; Waldhoff, Stephanie T.; Anenberg, Susan C.

    2015-06-29

    Methane is a greenhouse gas that oxidizes to form ground-level ozone, itself a greenhouse gas and a health-harmful air pollutant. Reducing methane emissions will both slow anthropogenic climate change and reduce ozone-related mortality. We estimate the benefits of reducing methane emissions anywhere in the world for ozone-related premature mortality globally and for eight geographic regions. Our methods are consistent with those used by the US Government to estimate the social cost of carbon (SCC). We find that the global short- and long-term premature mortality benefits due to reduced ozone production from methane mitigation are (2011) $790 and $1775 per tonne methane, respectively. These correspond to approximately 70 and 150% of the valuation of methane’s global climate impacts using the SCC after extrapolating from carbon dioxide to methane using global warming potential estimates. Results for monetized benefits are sensitive to a number of factors, particularly the choice of elasticity to income growth used when calculating the value of a statistical life. The benefits increase for emission years further in the future. Regionally, most of the global mortality benefits accrue in Asia, but 10% accrue in the United States. As a result, this methodology can be used to assess the benefits of methane emission reductions anywhere in the world, including those achieved by national and multinational policies.

  17. Aeroshell Design Techniques for Aerocapture Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Dyke, R. Eric; Hrinda, Glenn A.

    2004-01-01

    A major goal of NASA's In-Space Propulsion Program is to shorten trip times for scientific planetary missions. To meet this challenge, arrival speeds will increase, requiring significant braking for orbit insertion, and thus increased deceleration propellant mass that may exceed launch lift capabilities. A technology called aerocapture has been developed to expand the mission potential of exploratory probes destined for planets with suitable atmospheres. Aerocapture inserts a probe into planetary orbit via a single pass through the atmosphere, using the probe's aeroshell drag to reduce velocity. The benefit of an aerocapture maneuver is a large reduction in propellant mass that may result in smaller, less costly missions and reduced mission cruise times. The methodology used to design rigid aerocapture aeroshells will be presented with an emphasis on a new systems tool under development. Current methods for fast, efficient evaluations of structural systems for exploratory vehicles to planets and moons within our solar system have been under development within NASA, with limited success. Many of the systems tools attempted so far applied structural mass estimation based on historical data and curve-fitting techniques that are difficult and cumbersome to apply to new vehicle concepts and missions. The resulting vehicle aeroshell mass may be incorrectly estimated or have high margins included to account for uncertainty. This new tool will reduce the guesswork previously found in conceptual aeroshell mass estimations.

  18. E-therapy in the treatment and prevention of eating disorders: A systematic review and meta-analysis.

    PubMed

    Loucas, Christina E; Fairburn, Christopher G; Whittington, Craig; Pennant, Mary E; Stockton, Sarah; Kendall, Tim

    2014-12-01

    The widespread availability of the Internet and mobile-device applications (apps) is changing the treatment of mental health problems. The aim of the present study was to review the research on the effectiveness of e-therapy for eating disorders, using the methodology employed by the UK's National Institute for Health and Care Excellence (NICE). Electronic databases were searched for published randomised controlled trials of e-therapies, designed to prevent or treat any eating disorder in all age groups. Studies were meta-analysed where possible, and effect sizes with confidence intervals were calculated. The GRADE approach was used to determine the confidence in the effect estimates. Twenty trials met the inclusion criteria. For prevention, a CBT-based e-intervention was associated with small reductions in eating disorder psychopathology, weight concern and drive for thinness, with moderate confidence in the effect estimates. For treatment and relapse prevention, various e-therapies showed some beneficial effects, but for most outcomes, evidence came from single studies and confidence in the effect estimates was low. Overall, although some positive findings were identified, the value of e-therapy for eating disorders must be viewed as uncertain. Further research, with improved methods, is needed to establish the effectiveness of e-therapy for people with eating disorders. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Valuing the ozone-related health benefits of methane emission controls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarofim, Marcus C.; Waldhoff, Stephanie T.; Anenberg, Susan C.

    Methane is a greenhouse gas that oxidizes to form ground-level ozone, itself a greenhouse gas and a health-harmful air pollutant. Reducing methane emissions will both slow anthropogenic climate change and reduce ozone-related mortality. We estimate the benefits of reducing methane emissions anywhere in the world for ozone-related premature mortality globally and for eight geographic regions. Our methods are consistent with those used by the US Government to estimate the social cost of carbon (SCC). We find that the global short- and long-term premature mortality benefits due to reduced ozone production from methane mitigation are (2011) $790 and $1775 per tonne methane, respectively. These correspond to approximately 70 and 150% of the valuation of methane’s global climate impacts using the SCC after extrapolating from carbon dioxide to methane using global warming potential estimates. Results for monetized benefits are sensitive to a number of factors, particularly the choice of elasticity to income growth used when calculating the value of a statistical life. The benefits increase for emission years further in the future. Regionally, most of the global mortality benefits accrue in Asia, but 10% accrue in the United States. As a result, this methodology can be used to assess the benefits of methane emission reductions anywhere in the world, including those achieved by national and multinational policies.

  20. New methodology to reconstruct in 2-D the cuspal enamel of modern human lower molars.

    PubMed

    Modesto-Mata, Mario; García-Campos, Cecilia; Martín-Francés, Laura; Martínez de Pinillos, Marina; García-González, Rebeca; Quintino, Yuliet; Canals, Antoni; Lozano, Marina; Dean, M Christopher; Martinón-Torres, María; Bermúdez de Castro, José María

    2017-08-01

    In recent years, different methodologies have been developed to reconstruct worn teeth. In this article, we propose a new 2-D methodology to reconstruct the worn enamel of lower molars. Our main goals are to reconstruct molars with a high level of accuracy when measuring relevant histological variables and to validate the methodology by calculating the errors associated with the measurements. This methodology is based on polynomial regression equations, and has been validated using two different dental variables: cuspal enamel thickness and crown height of the protoconid. In order to perform the validation process, simulated worn modern human molars were employed. The associated errors of the measurements were also estimated by applying methodologies previously proposed by other authors. The mean percentage error estimated in reconstructed molars for these two variables in comparison with their real values is -2.17% for the cuspal enamel thickness of the protoconid and -3.18% for the crown height of the protoconid. These errors represent a clear improvement on other methodologies, both in interobserver error and in the accuracy of the measurements. The new methodology based on polynomial regressions can be confidently applied to the reconstruction of cuspal enamel of lower molars, as it improves the accuracy of the measurements and reduces the interobserver error. The present study shows that it is important to validate all methodologies in order to know the associated errors. This new methodology is easily exportable to other modern human populations, the human fossil record and forensic sciences. © 2017 Wiley Periodicals, Inc.
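
    In spirit, the reconstruction fits a polynomial to the preserved (unworn) part of an enamel profile and extrapolates it over the worn region; the sketch below does this with numpy.polyfit on an invented profile, whereas the published method uses validated regression equations for specific dental variables.

    ```python
    # Fit a polynomial to the preserved part of a profile and extrapolate over the "worn" region.
    import numpy as np

    # Distances along the profile (mm) and enamel thickness (mm); the last points are treated as lost to wear.
    x = np.linspace(0.0, 4.0, 21)
    thickness = 0.2 + 0.45 * x - 0.06 * x**2 \
        + np.random.default_rng(4).normal(scale=0.01, size=x.size)
    preserved = x <= 3.0                      # only the unworn portion is "observed"

    coeffs = np.polyfit(x[preserved], thickness[preserved], deg=2)
    reconstructed = np.polyval(coeffs, x[~preserved])
    true_worn = thickness[~preserved]
    pct_error = 100 * (reconstructed - true_worn) / true_worn
    print("mean percentage error over the worn region:", pct_error.mean().round(2), "%")
    ```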

  1. Methodologies for the quantitative estimation of toxicant dose to cigarette smokers using physical, chemical and bioanalytical data.

    PubMed

    St Charles, Frank Kelley; McAughey, John; Shepperd, Christopher J

    2013-06-01

    Methodologies have been developed, described and demonstrated that convert mouth exposure estimates of cigarette smoke constituents to dose by accounting for smoke spilled from the mouth prior to inhalation (mouth-spill (MS)) and the respiratory retention (RR) during the inhalation cycle. The methodologies are applicable to just about any chemical compound in cigarette smoke that can be measured analytically and can be used with ambulatory population studies. Conversion of exposure to dose improves the relevancy for risk assessment paradigms. Except for urinary nicotine plus metabolites, biomarkers generally do not provide quantitative exposure or dose estimates. In addition, many smoke constituents have no reliable biomarkers. We describe methods to estimate the RR of chemical compounds in smoke based on their vapor pressure (VP) and to estimate the MS for a given subject. Data from two clinical studies were used to demonstrate dose estimation for 13 compounds, of which only 3 have urinary biomarkers. Compounds with VP > 10^-5 Pa generally have RRs of 88% or greater, which do not vary appreciably with inhalation volume (IV). Compounds with VP < 10^-7 Pa generally have RRs dependent on IV and lung exposure time. For MS, mean subject values from both studies were slightly greater than 30%. For constituents with urinary biomarkers, correlations with the calculated dose were significantly improved over correlations with mouth exposure. Of toxicological importance is that the dose correlations provide an estimate of the metabolic conversion of a constituent to its respective biomarker.
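
    The exposure-to-dose conversion can be summarised as mouth exposure reduced by the spilled fraction and by incomplete respiratory retention; the short sketch below shows that arithmetic with placeholder values (the published equations and the RR-versus-vapour-pressure relationship are more detailed).

    ```python
    # Dose (ug) = mouth exposure x (1 - MS) x RR; values below are placeholders.
    def constituent_dose(mouth_exposure_ug, mouth_spill_fraction, respiratory_retention):
        return mouth_exposure_ug * (1.0 - mouth_spill_fraction) * respiratory_retention

    # High-vapour-pressure constituent (RR ~ 0.9) vs a lower-retention constituent, same mouth exposure.
    print(constituent_dose(150.0, mouth_spill_fraction=0.30, respiratory_retention=0.90))  # ~94.5 ug
    print(constituent_dose(150.0, mouth_spill_fraction=0.30, respiratory_retention=0.60))  # ~63.0 ug
    ```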

  2. Development of Methodology for Programming Autonomous Agents

    NASA Technical Reports Server (NTRS)

    Erol, Kutluhan; Levy, Renato; Lang, Lun

    2004-01-01

    A brief report discusses the rationale for, and the development of, a methodology for generating computer code for autonomous-agent-based systems. The methodology is characterized as enabling an increase in the reusability of the generated code among and within such systems, thereby making it possible to reduce the time and cost of development of the systems. The methodology is also characterized as enabling reduction of the incidence of those software errors that are attributable to the human failure to anticipate distributed behaviors caused by the software. A major conceptual problem said to be addressed in the development of the methodology was that of how to efficiently describe the interfaces between several layers of agent composition by use of a language that is both familiar to engineers and descriptive enough to describe such interfaces unambiguously.

  3. Multi-criteria analysis for PM10 planning

    NASA Astrophysics Data System (ADS)

    Pisoni, Enrico; Carnevale, Claudio; Volta, Marialuisa

    To implement sound air quality policies, Regulatory Agencies require tools to evaluate outcomes and costs associated with different emission reduction strategies. These tools are even more useful when considering atmospheric PM10 concentrations due to the complex nonlinear processes that affect production and accumulation of the secondary fraction of this pollutant. The approaches presented in the literature (Integrated Assessment Modeling) are mainly cost-benefit and cost-effectiveness analyses. In this work, the formulation of a multi-objective problem to control particulate matter is proposed. The methodology defines: (a) the control objectives (the air quality indicator and the emission reduction cost functions); (b) the decision variables (precursor emission reductions); (c) the problem constraints (maximum feasible technology reductions). The cause-effect relations between air quality indicators and decision variables are identified by tuning nonlinear source-receptor models. The solution of the multi-objective problem provides the decision maker with a set of non-dominated scenarios representing the efficient trade-off between the air quality benefit and the internal costs (emission reduction technology costs). The methodology has been implemented for Northern Italy, often affected by high long-term exposure to PM10. The source-receptor models used in the multi-objective analysis are identified by processing long-term simulations of the GAMES multiphase modeling system, performed in the framework of the CAFE-Citydelta project.
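
    A toy version of the set-up described above: decision variables are precursor emission reductions bounded by the maximum feasible reductions, one objective is a (here, linearised) PM10 indicator, the other is a convex abatement cost, and sweeping a scalarisation weight traces an approximate Pareto front. Source-receptor coefficients and cost curves are invented.

    ```python
    # Weighted-sum scalarisation of a two-objective (PM10 indicator vs. abatement cost) problem.
    import numpy as np
    from scipy.optimize import minimize

    sr_coeff = np.array([0.8, 0.5])          # ug/m3 of PM10 avoided per unit precursor reduction
    cost_coeff = np.array([1.0, 2.5])        # cost per unit reduction squared (convex technology costs)
    max_reduction = np.array([10.0, 6.0])    # maximum feasible reductions (problem constraints)
    baseline_pm10 = 45.0

    def pm10(x):  return baseline_pm10 - sr_coeff @ x
    def cost(x):  return cost_coeff @ x**2

    pareto = []
    for w in np.linspace(0.05, 0.95, 7):     # weight on air quality vs. cost
        res = minimize(lambda x: w * pm10(x) + (1 - w) * cost(x),
                       x0=np.zeros(2), bounds=[(0, b) for b in max_reduction])
        pareto.append((pm10(res.x), cost(res.x)))

    for aq, c in pareto:
        print(f"PM10 = {aq:5.1f} ug/m3   cost = {c:6.1f}")
    ```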

  4. A Methodological Framework to Estimate the Site Fidelity of Tagged Animals Using Passive Acoustic Telemetry

    PubMed Central

    Capello, Manuela; Robert, Marianne; Soria, Marc; Potin, Gael; Itano, David; Holland, Kim; Deneubourg, Jean-Louis; Dagorn, Laurent

    2015-01-01

    The rapid expansion of the use of passive acoustic telemetry technologies has facilitated unprecedented opportunities for studying the behavior of marine organisms in their natural environment. This technological advance would greatly benefit from the parallel development of dedicated methodologies accounting for the variety of timescales involved in the remote detection of tagged animals related to instrumental, environmental and behavioral events. In this paper we propose a methodological framework for estimating the site fidelity (“residence times”) of acoustic tagged animals at different timescales, based on the survival analysis of continuous residence times recorded at multiple receivers. Our approach is validated through modeling and applied on two distinct datasets obtained from a small coastal pelagic species (bigeye scad, Selar crumenophthalmus) and a large, offshore pelagic species (yellowfin tuna, Thunnus albacares), which show very distinct spatial scales of behavior. The methodological framework proposed herein allows estimating the most appropriate temporal scale for processing passive acoustic telemetry data depending on the scientific question of interest. Our method provides residence times free of the bias inherent to environmental and instrumental noise that can be used to study the small scale behavior of acoustic tagged animals. At larger timescales, it can effectively identify residence times that encompass the diel behavioral excursions of fish out of the acoustic detection range. This study provides a systematic framework for the analysis of passive acoustic telemetry data that can be employed for the comparative study of different species and study sites. The same methodology can be used each time discrete records of animal detections of any nature are employed for estimating the site fidelity of an animal at different timescales. PMID:26261985
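
    One ingredient of the survival-analysis framework is an estimate of the survival function of continuous residence times, allowing for censoring; the sketch below computes a Kaplan-Meier product-limit estimate on simulated detection data and omits the paper's multi-timescale decomposition.

    ```python
    # Kaplan-Meier survival estimate for continuous residence times with right censoring (simulated data).
    import numpy as np

    rng = np.random.default_rng(5)
    times = rng.exponential(scale=45.0, size=200)            # continuous residence times (minutes)
    observed = rng.random(200) > 0.15                        # False -> censored (e.g. study or battery end)

    order = np.argsort(times)
    times, observed = times[order], observed[order]
    at_risk = np.arange(len(times), 0, -1)                   # animals still "at risk" at each time
    survival = np.cumprod(1.0 - observed / at_risk)          # product-limit estimator

    for q in (30, 60, 120):
        s = survival[times <= q][-1] if np.any(times <= q) else 1.0
        print(f"P(residence time > {q} min) ~ {s:.2f}")
    ```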

  5. A Methodological Framework to Estimate the Site Fidelity of Tagged Animals Using Passive Acoustic Telemetry.

    PubMed

    Capello, Manuela; Robert, Marianne; Soria, Marc; Potin, Gael; Itano, David; Holland, Kim; Deneubourg, Jean-Louis; Dagorn, Laurent

    2015-01-01

    The rapid expansion of the use of passive acoustic telemetry technologies has facilitated unprecedented opportunities for studying the behavior of marine organisms in their natural environment. This technological advance would greatly benefit from the parallel development of dedicated methodologies accounting for the variety of timescales involved in the remote detection of tagged animals related to instrumental, environmental and behavioral events. In this paper we propose a methodological framework for estimating the site fidelity ("residence times") of acoustic tagged animals at different timescales, based on the survival analysis of continuous residence times recorded at multiple receivers. Our approach is validated through modeling and applied on two distinct datasets obtained from a small coastal pelagic species (bigeye scad, Selar crumenophthalmus) and a large, offshore pelagic species (yellowfin tuna, Thunnus albacares), which show very distinct spatial scales of behavior. The methodological framework proposed herein allows estimating the most appropriate temporal scale for processing passive acoustic telemetry data depending on the scientific question of interest. Our method provides residence times free of the bias inherent to environmental and instrumental noise that can be used to study the small scale behavior of acoustic tagged animals. At larger timescales, it can effectively identify residence times that encompass the diel behavioral excursions of fish out of the acoustic detection range. This study provides a systematic framework for the analysis of passive acoustic telemetry data that can be employed for the comparative study of different species and study sites. The same methodology can be used each time discrete records of animal detections of any nature are employed for estimating the site fidelity of an animal at different timescales.

  6. Incorporating ITS into corridor planning : Seattle case study

    DOT National Transportation Integrated Search

    1999-08-01

    The goals of this study were to develop a methodology for incorporating Intelligent Transportation Systems (ITS) into the transportation planning process and apply the methodology to estimate ITS costs and benefits for one case study. A major result ...

  7. Highway User Benefit Analysis System Research Project #128

    DOT National Transportation Integrated Search

    2000-10-01

    In this research, a methodology for estimating road user costs of various competing alternatives was developed. Also, software was developed to calculate the road user cost, perform economic analysis and update cost tables. The methodology is based o...

  8. Incorporating ITS into corridor planning : Seattle case study

    DOT National Transportation Integrated Search

    1999-06-01

    The goals of this study were to develop a methodology for incorporating Intelligent Transportation Systems (ITS) into the transportation planning process and apply the methodology to estimate ITS costs and benefits for one case study. A major result ...

  9. Optimal multi-dimensional poverty lines: The state of poverty in Iraq

    NASA Astrophysics Data System (ADS)

    Ameen, Jamal R. M.

    2017-09-01

    Poverty estimation based on calorie intake is unrealistic. The established concept of multidimensional poverty has methodological weaknesses in the treatment of different dimensions, and there is disagreement over how to combine them into a single poverty line. This paper introduces a methodology to estimate optimal multidimensional poverty lines and uses the Iraqi household socio-economic survey data of 2012 to demonstrate the idea. The optimal poverty line for Iraq is found to be 170.5 thousand Iraqi Dinars (TID).

  10. Discrete and continuous dynamics modeling of a mass moving on a flexible structure

    NASA Technical Reports Server (NTRS)

    Herman, Deborah Ann

    1992-01-01

    A general discrete methodology for modeling the dynamics of a mass that moves on the surface of a flexible structure is developed. This problem was motivated by the Space Station/Mobile Transporter system. A model reduction approach is developed to make the methodology applicable to large structural systems. To validate the discrete methodology, continuous formulations are also developed. Three different systems are examined: (1) simply-supported beam, (2) free-free beam, and (3) free-free beam with two points of contact between the mass and the flexible beam. In addition to validating the methodology, parametric studies were performed to examine how the system's physical properties affect its dynamics.
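
    For the simplest of the validation cases, a continuous formulation can be sketched as a constant force moving along a simply supported Euler-Bernoulli beam solved by modal superposition. Note this is the classical moving-force approximation; the inertial coupling of the moving mass, which the discrete methodology handles, is neglected, and the beam properties are illustrative.

    ```python
    # Moving constant force on a simply supported beam, solved by modal superposition.
    import numpy as np
    from scipy.integrate import solve_ivp

    L, EI, rhoA = 10.0, 2.0e6, 50.0          # length (m), bending stiffness (N m^2), mass/length (kg/m)
    F, v, n_modes = 500.0, 5.0, 5            # moving force (N), speed (m/s), retained modes
    n = np.arange(1, n_modes + 1)
    omega = (n * np.pi / L) ** 2 * np.sqrt(EI / rhoA)      # natural frequencies (rad/s)
    modal_mass = rhoA * L / 2.0

    def rhs(t, y):
        q, qdot = y[:n_modes], y[n_modes:]
        phi_at_load = np.sin(n * np.pi * (v * t) / L)      # mode shapes at the load position
        qddot = (F * phi_at_load - modal_mass * omega**2 * q) / modal_mass
        return np.concatenate([qdot, qddot])

    t_end = L / v                                          # time for the force to traverse the beam
    sol = solve_ivp(rhs, (0.0, t_end), np.zeros(2 * n_modes), max_step=1e-3)
    w_mid = sol.y[:n_modes, :].T @ np.sin(n * np.pi * 0.5) # deflection at midspan
    print(f"peak midspan deflection ~ {w_mid.max() * 1000:.2f} mm")
    ```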

  11. Automatic Inference of Cryptographic Key Length Based on Analysis of Proof Tightness

    DTIC Science & Technology

    2016-06-01

    within an attack tree structure, then expand attack tree methodology to include cryptographic reductions. We then provide the algorithms for...maintaining and automatically reasoning about these expanded attack trees . We provide a software tool that utilizes machine-readable proof and attack metadata...and the attack tree methodology to provide rapid and precise answers regarding security parameters and effective security. This eliminates the need

  12. Social Costs of Gambling in the Czech Republic 2012.

    PubMed

    Winkler, Petr; Bejdová, Markéta; Csémy, Ladislav; Weissová, Aneta

    2017-12-01

    Evidence about the social costs of gambling is scarce and the methodology for their calculation has been subject to strong criticism. We aimed to estimate the social costs of gambling in the Czech Republic in 2012. This retrospective, prevalence-based cost-of-illness study builds on the revised methodology of the Australian Productivity Commission. Social costs of gambling were estimated by combining epidemiological and economic data. Prevalence data on negative consequences of gambling were taken from existing national epidemiological studies. Economic data were taken from various national and international sources. Only the consequences of problem and pathological gambling were taken into account. In 2012, the social costs of gambling in the Czech Republic were estimated to range between 541,619 and 619,608 thousand EUR. While personal and family costs accounted for 63% of all social costs, direct medical costs were estimated at only 0.25 to 0.28% of all social costs. This is the first study to estimate the social costs of gambling in any of the Central and East European countries. It builds upon the solid evidence about the prevalence of gambling-related problems in the Czech Republic and satisfactorily reliable economic data. However, there are a number of limitations stemming from the assumptions made, which suggest that the methodology for the calculation of the social costs of gambling needs further development.

  13. Validation Methodology to Allow Simulated Peak Reduction and Energy Performance Analysis of Residential Building Envelope with Phase Change Materials: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.

    2012-08-01

    Phase change materials (PCM) represent a potential technology to reduce peak loads and HVAC energy consumption in residential buildings. This paper summarizes NREL efforts to obtain accurate energy simulations when PCMs are modeled in residential buildings: the overall methodology to verify and validate Conduction Finite Difference (CondFD) and PCM algorithms in EnergyPlus is presented in this study. It also shows preliminary results of three residential building enclosure technologies containing PCM: PCM-enhanced insulation, PCM impregnated drywall and thin PCM layers. The results are compared based on predicted peak reduction and energy savings using two algorithms in EnergyPlus: the PCM and Conduction Finite Difference (CondFD) algorithms.

  14. Combat Stress: A Collateral Effect in the Operational Effectiveness Loss Multiplier (OELM) Methodology

    DTIC Science & Technology

    2015-02-01

    5202, Draft Final (Alexandria, VA: IDA, April 2015), 10-4. 14 North Atlantic Treaty Organization (NATO) Standardization Agency ( NSA ), NATO Glossary of...Belgium: NSA , 2012), 2-C-2. 15 Disraelly et al., “A New Methodology for CBRN Casualty Estimation,” 228. 16 Disraelly et al., A Methodology for...20 NATO NSA , AAP-06, 2-K-1. 21 Ibid., 2-D-6. 22 Disraelly et al., A Methodology for Examining Collateral Effects on Military Operations during

  15. Estimating onset time from longitudinal and cross-sectional data with an application to estimating gestational age from longitudinal maternal anthropometry during pregnancy and neonatal anthropometry at birth.

    PubMed

    Ortega-Villa, Ana Maria; Grantz, Katherine L; Albert, Paul S

    2018-06-01

    Determining the date of conception is important for estimating gestational age and monitoring whether the fetus and mother are on track in their development and pregnancy. Various methods based on ultrasound have been proposed for dating a pregnancy in high resource countries. However, such techniques may not be available in under-resourced countries. We develop a shared random parameter model for estimating the date of conception using longitudinal assessment of multiple maternal anthropometry and cross-sectional neonatal anthropometry. The methodology is evaluated with a training-test set paradigm as well as with simulations to examine the robustness of the method to model misspecification. We illustrate this new methodology with data from the NICHD Fetal Growth Studies.

  16. Methodology for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Cancer.gov

    The HINTS is designed to produce reliable estimates at the national and regional levels. GIS maps using HINTS data have been used to provide a visual representation of possible geographic relationships in HINTS cancer-related variables.

  17. A Methodology for Developing Army Acquisition Strategies for an Uncertain Future

    DTIC Science & Technology

    2007-01-01

    manuscript for publication. Acronyms ABP Assumption-Based Planning ACEIT Automated Cost Estimating Integrated Tool ACR Armored Cavalry Regiment ACTD...decisions. For example, they employ the Automated Cost Estimating Integrated Tools ( ACEIT ) to simplify life cycle cost estimates; other tools are

  18. Comparisons between vehicular emissions from real-world in-use testing and EPA moves estimation.

    DOT National Transportation Integrated Search

    2012-07-01

    "This research study developed a methodology to perform mandatory dynamometer vehicular emissions tests : on real roads, performed on-road emissions tests, and compared the test results to the estimates using the : current EPA emissions estimation mo...

  19. Assessment of Energy Efficiency Improvement in the United States Petroleum Refining Industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrow, William R.; Marano, John; Sathaye, Jayant

    2013-02-01

    Adoption of efficient process technologies is an important approach to reducing CO2 emissions, in particular those associated with combustion. In many cases, implementing energy efficiency measures is among the most cost-effective approaches that any refiner can take, improving productivity while reducing emissions. Therefore, careful analysis of the options and costs associated with efficiency measures is required to establish sound carbon policies addressing global climate change, and is the primary focus of LBNL's current petroleum refining sector analysis for the U.S. Environmental Protection Agency. The analysis is aimed at identifying energy efficiency-related measures and developing energy abatement supply curves and CO2 emissions reduction potential for the U.S. refining industry. A refinery model has been developed for this purpose that is a notional aggregation of the U.S. petroleum refining sector. It consists of twelve processing units and accounts for the additional energy requirements from steam generation, hydrogen production and water utilities required by each of the twelve processing units. The model is carbon and energy balanced such that crude oil inputs and major refinery sector outputs (fuels) are benchmarked to 2010 data. Estimates of the current penetration of the identified energy efficiency measures benchmark the energy requirements to those reported in U.S. DOE 2010 data. The remaining energy efficiency potential for each of the measures is estimated and compared to U.S. DOE fuel prices, resulting in estimates of cost-effective energy efficiency opportunities for each of the twelve major processes. A combined cost of conserved energy supply curve is also presented, along with the CO2 emissions abatement opportunities that exist in the U.S. petroleum refinery sector. Roughly 1,200 PJ per year of primary fuels savings and close to 500 GWh per year of electricity savings are potentially cost-effective given U.S. DOE fuel price forecasts. This represents roughly 70 million metric tonnes of CO2 emission reductions, assuming the 2010 emissions factor for grid electricity. Energy efficiency measures resulting in an additional 400 PJ per year of primary fuels savings and close to 1,700 GWh per year of electricity savings, and an associated 24 million metric tonnes of CO2 emission reductions, are not cost-effective under the same assumptions with respect to fuel prices and electricity emissions factors. Compared to the modeled energy requirements for the U.S. petroleum refining sector, the cost-effective potential represents a 40% reduction in fuel consumption and a 2% reduction in electricity consumption. The non-cost-effective potential represents an additional 13% reduction in fuel consumption and an additional 7% reduction in electricity consumption. The relative energy reduction potentials are much higher for fuel consumption than for electricity consumption, largely because fuel is the primary type of energy consumed in refineries. Moreover, many cost-effective fuel savings measures would increase electricity consumption. The model also has the potential to be used to examine the costs and benefits of other CO2 mitigation options, such as combined heat and power (CHP), carbon capture, and the potential introduction of biomass feedstocks. However, these options are not addressed in this report, which is focused on developing the modeling methodology and assessing fuel savings measures. These opportunities to further reduce refinery sector CO2 emissions are recommended for further research and analysis.
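
    The supply-curve construction referred to above can be sketched as: annualise each measure's capital cost, divide by its annual savings to get a cost of conserved energy (CCE), sort by CCE and accumulate the savings; measures whose CCE falls below the fuel price form the cost-effective potential. Measures, costs and the fuel price below are invented, not LBNL's refinery data.

    ```python
    # Cost-of-conserved-energy supply curve from a list of hypothetical efficiency measures.
    capital_recovery = 0.10 / (1 - (1 + 0.10) ** -15)   # capital recovery factor, 10%/15 yr
    fuel_price = 4.0                                    # $/GJ

    measures = [                                        # (name, capital cost $, annual savings GJ)
        ("heat-exchanger cleaning", 1.0e6, 9.0e5),
        ("CDU preheat train upgrade", 2.0e7, 6.0e6),
        ("advanced process control", 5.0e6, 1.2e6),
        ("fired-heater air preheat", 2.0e8, 4.0e6),
    ]

    ranked = sorted(((name, capital_recovery * cost / save, save) for name, cost, save in measures),
                    key=lambda m: m[1])
    cumulative = 0.0
    for name, cce, save in ranked:
        cumulative += save
        flag = "cost-effective" if cce <= fuel_price else "not cost-effective"
        print(f"{name:28s} CCE = {cce:5.2f} $/GJ  cumulative savings = {cumulative / 1e6:4.1f} PJ  ({flag})")
    ```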

  20. Developing methodologies for estimation of manure across livestock systems using agricultural census data

    NASA Astrophysics Data System (ADS)

    Khalil, Mohammad I.; Muldowney, John; Osborne, Bruce

    2017-04-01

    Livestock production and management-induced emissions of greenhouse gases (GHGs), comprising 18% of total global anthropogenic emissions together with air pollutants, have major atmospheric and ecosystem-related impacts. Identification of categorical/sub-categorical hotspots associated with these emissions and the estimation of emission factors (EFs), including the use of the Intergovernmental Panel on Climate Change defaults (Tier 1), are key objectives in the preparation of reasonable and transparent national reporting inventories (Tier 2). They also provide a basis for assessment of technological/management approaches for emissions reduction. For this, data on manure (solid/FYM and slurry/liquid) production across livestock categories, housing types and periods, storage types and application methodologies are required. However, relevant agricultural activity data are not sufficient to quantify the proportion and timing of the amounts of manure applied to major land use types and for different seasons. We used the 2010 Census of Agriculture survey data collected by the Central Statistics Office, Ireland. Based on the compiled datasheets, several steps have been taken to generate missing information (e.g., numbers in individual livestock categories/subcategories) and to develop methodologies for calculating the proportion of slurry and manure production and application across farm categories. Among livestock categories, the proportion of slurry to solids was higher for pigs (99:1) than for cattle (61:39). Solid manure production from other livestock systems derived mostly from loose-bedded houses. There were large differences between the proportions estimated using the number of farms and the livestock population. A major proportion of the slurry was applied to grassland (97 vs. 73) and the amounts applied in spring and summer were similar (40-42 vs. 36-39), but significantly higher than the autumn application (18 vs. 24). Similarly, most solid manure was applied to grassland (90 vs. 77), with more applied during autumn (49 vs. 26), and the spring application was larger (31 vs. 61) than the summer application (21 vs. 13). Among the application methods used for slurry and solid manure, farmers mostly used splash plate and side discharge (90 and 60%, respectively) methods. Nationally, the total estimated (no. of places vs. population) amount of slurry from cattle and pigs for 2010 was 30.9 vs. 32.1 Mm3, and the total for solid manure, including sheep, poultry, goats and horses, was 319.8 vs. 320.3 Mm3. The analysis indicates significant deficiencies in the available information, including discrepancies in the number of available places in relation to the total population during the housing period (key categories vs. poultry), and in the methods of slurry and solid manure application. Expert advice and the collection of information from other verifiable sources will be required before the information can be made acceptable to users.
