Sample records for risk estimation model

  1. Quantile uncertainty and value-at-risk model risk.

    PubMed

    Alexander, Carol; Sarabia, José María

    2012-08-01

This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value-at-Risk) have been the cornerstone of banking risk management since the mid-1990s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of "model risk" in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark which represents the state of knowledge of the authority that is responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value-at-Risk model risk and compute the required regulatory capital add-on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value-at-Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks. © 2012 Society for Risk Analysis.
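As a toy illustration of the quantile estimates the abstract discusses, the sketch below computes a one-day 99% Value-at-Risk as an empirical quantile of a profit-and-loss series. The series and function name are invented for the example; the paper's benchmark-relative model-risk adjustment is not implemented here.

```python
import statistics

def historical_var(pnl, alpha=0.99):
    """One-day Value-at-Risk as the empirical alpha-quantile of losses.

    pnl: list of daily profit-and-loss values (positive = gain).
    Returns VaR as a positive loss figure.
    """
    losses = sorted(-x for x in pnl)  # convert P&L to losses
    # empirical percentiles (inclusive method interpolates between order statistics)
    q = statistics.quantiles(losses, n=100, method='inclusive')
    return q[int(alpha * 100) - 1]

# toy P&L series: mostly small moves, a few large losses
pnl = [0.1, -0.2, 0.05, -1.5, 0.3, -0.4, 0.2, -2.0, 0.15, -0.1] * 10
var99 = historical_var(pnl, alpha=0.99)
```

Regulatory practice layers a capital add-on over such an estimate precisely because the empirical quantile itself is model-dependent.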

  2. LIFETIME LUNG CANCER RISKS ASSOCIATED WITH INDOOR RADON EXPOSURE BASED ON VARIOUS RADON RISK MODELS FOR CANADIAN POPULATION.

    PubMed

    Chen, Jing

    2017-04-01

    This study calculates and compares the lifetime lung cancer risks associated with indoor radon exposure based on well-known risk models in the literature; two risk models are from joint studies among miners and the other three models were developed from pooling studies on residential radon exposure from China, Europe and North America respectively. The aim of this article is to make clear that the various models are mathematical descriptions of epidemiologically observed real risks in different environmental settings. The risk from exposure to indoor radon is real and it is normal that variations could exist among different risk models even when they were applied to the same dataset. The results show that lifetime risk estimates vary significantly between the various risk models considered here: the model based on the European residential data provides the lowest risk estimates, while models based on the European miners and Chinese residential pooling with complete dosimetry give the highest values. The lifetime risk estimates based on the EPA/BEIR-VI model lie within this range and agree reasonably well with the averages of risk estimates from the five risk models considered in this study. © Crown copyright 2016.

  3. Sensitivity Analysis of Median Lifetime on Radiation Risks Estimates for Cancer and Circulatory Disease amongst Never-Smokers

    NASA Technical Reports Server (NTRS)

    Chappell, Lori J.; Cucinotta, Francis A.

    2011-01-01

Radiation risks are estimated in a competing-risk formalism in which age- or time-after-exposure estimates of increased risks for cancer and circulatory diseases are folded with a probability of surviving to a given age. The survival function, also called the life table, changes with calendar year, gender, smoking status and other demographic variables. An outstanding problem in risk estimation is the method of risk transfer between an exposed population and a second population where risks are to be estimated. Approaches used to transfer risks are based on: 1) Multiplicative risk transfer models, in which risks are proportional to background disease rates; and 2) Additive risk transfer models, in which risks are independent of background rates. In addition, a Mixture model is often considered, in which the multiplicative and additive transfer assumptions are given weighted contributions. We studied the influence of the survival probability on the risk of exposure-induced cancer and circulatory disease morbidity and mortality in the Multiplicative transfer model and the Mixture model. Risks for never-smokers (NS) compared to the average U.S. population are estimated to be reduced by between 30% and 60%, depending on model assumptions. Lung cancer is the major contributor to the reduction for NS, with additional contributions from circulatory diseases and cancers of the stomach, liver, bladder, oral cavity, esophagus, colon, a portion of the solid cancer remainder, and leukemia. Greater improvements in risk estimates for NS are possible, and would depend on improved understanding of risk transfer models and on elucidating the role of space radiation in the various stages of disease formation (e.g., initiation, promotion, and progression).
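The two transfer assumptions and their weighted mixture can be written down directly. The sketch below uses invented coefficients and rates, not NASA's fitted values; it only shows why a lower background rate shrinks the multiplicative component for never-smokers.

```python
def transfer_risk(err, ear, background_rate, v):
    """Blend multiplicative and additive risk transfer.

    err: excess relative risk from the exposed study population
    ear: excess absolute rate (per person-year) from that population
    background_rate: baseline disease rate in the target population
    v: weight on the multiplicative model (v=1 purely multiplicative,
       v=0 purely additive)
    """
    multiplicative = err * background_rate  # scales with background rates
    additive = ear                          # independent of background rates
    return v * multiplicative + (1 - v) * additive

# never-smokers have a much lower background lung-cancer rate, so the
# multiplicative component shrinks accordingly (all numbers illustrative)
avg_pop = transfer_risk(err=0.5, ear=2e-4, background_rate=6e-4, v=0.7)
never_smokers = transfer_risk(err=0.5, ear=2e-4, background_rate=2e-4, v=0.7)
```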

  4. Comparison of prospective risk estimates for postoperative complications: human vs computer model.

    PubMed

    Glasgow, Robert E; Hawn, Mary T; Hosokawa, Patrick W; Henderson, William G; Min, Sung-Joon; Richman, Joshua S; Tomeh, Majed G; Campbell, Darrell; Neumayer, Leigh A

    2014-02-01

    Surgical quality improvement tools such as NSQIP are limited in their ability to prospectively affect individual patient care by the retrospective audit and feedback nature of their design. We hypothesized that statistical models using patient preoperative characteristics could prospectively provide risk estimates of postoperative adverse events comparable to risk estimates provided by experienced surgeons, and could be useful for stratifying preoperative assessment of patient risk. This was a prospective observational cohort. Using previously developed models for 30-day postoperative mortality, overall morbidity, cardiac, thromboembolic, pulmonary, renal, and surgical site infection (SSI) complications, model and surgeon estimates of risk were compared with each other and with actual 30-day outcomes. The study cohort included 1,791 general surgery patients operated on between June 2010 and January 2012. Observed outcomes were mortality (0.2%), overall morbidity (8.2%), and pulmonary (1.3%), cardiac (0.3%), thromboembolism (0.2%), renal (0.4%), and SSI (3.8%) complications. Model and surgeon risk estimates showed significant correlation (p < 0.0001) for each outcome category. When surgeons perceived patient risk for overall morbidity to be low, the model-predicted risk and observed morbidity rates were 2.8% and 4.1%, respectively, compared with 10% and 18% in perceived high risk patients. Patients in the highest quartile of model-predicted risk accounted for 75% of observed mortality and 52% of morbidity. Across a broad range of general surgical operations, we confirmed that the model risk estimates are in fairly good agreement with risk estimates of experienced surgeons. Using these models prospectively can identify patients at high risk for morbidity and mortality, who could then be targeted for intervention to reduce postoperative complications. Published by Elsevier Inc.

  5. Comparison of risk estimates using life-table methods.

    PubMed

    Sullivan, R E; Weng, P S

    1987-08-01

Risk estimates promulgated by various radiation protection authorities in recent years have become increasingly more complex. Early "integral" estimates in the form of health effects per 0.01 person-Gy (per person-rad) or per 10⁴ person-Gy (per 10⁶ person-rad) have tended to be replaced by "differential" estimates which are age- and sex-dependent and specify both minimum induction (latency) and duration of risk expression (plateau) periods. These latter types of risk estimate must be used in conjunction with a life table in order to reduce them to integral form. In this paper, the life table has been used to effect a comparison of the organ and tissue risk estimates derived in several recent reports. In addition, a brief review of life-table methodology is presented and some features of the models used in deriving differential coefficients are discussed. While the great number of permutations possible with dose-response models, detailed risk estimates and proposed projection models precludes any unique result, the reduced integral coefficients are required to conform to the linear, absolute-risk model recommended for use with the integral risk estimates reviewed.
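A minimal life-table reduction of a differential (age-dependent, latency-limited) coefficient to integral form might look like the sketch below; the survival and risk vectors are illustrative, not values from the reviewed reports.

```python
def integral_risk(survival, risk_per_year, latency):
    """Reduce a differential risk coefficient to integral form via a life table.

    survival: probability of surviving to each year after exposure
    risk_per_year: differential excess risk expressed in each year
                   (its length implicitly encodes the plateau period)
    latency: minimum induction period (years) before risk is expressed
    """
    total = 0.0
    for year, (s, r) in enumerate(zip(survival, risk_per_year)):
        if year >= latency:
            total += s * r  # risk counts only if the person is still alive
    return total

# illustrative: declining survival, constant differential risk, 2-year latency
risk = integral_risk([1.0, 0.9, 0.8, 0.7], [0.01] * 4, latency=2)
```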

  6. Validation of a novel air toxic risk model with air monitoring.

    PubMed

    Pratt, Gregory C; Dymond, Mary; Ellickson, Kristie; Thé, Jesse

    2012-01-01

    Three modeling systems were used to estimate human health risks from air pollution: two versions of MNRiskS (for Minnesota Risk Screening), and the USEPA National Air Toxics Assessment (NATA). MNRiskS is a unique cumulative risk modeling system used to assess risks from multiple air toxics, sources, and pathways on a local to a state-wide scale. In addition, ambient outdoor air monitoring data were available for estimation of risks and comparison with the modeled estimates of air concentrations. Highest air concentrations and estimated risks were generally found in the Minneapolis-St. Paul metropolitan area and lowest risks in undeveloped rural areas. Emissions from mobile and area (nonpoint) sources created greater estimated risks than emissions from point sources. Highest cancer risks were via ingestion pathway exposures to dioxins and related compounds. Diesel particles, acrolein, and formaldehyde created the highest estimated inhalation health impacts. Model-estimated air concentrations were generally highest for NATA and lowest for the AERMOD version of MNRiskS. This validation study showed reasonable agreement between available measurements and model predictions, although results varied among pollutants, and predictions were often lower than measurements. The results increased confidence in identifying pollutants, pathways, geographic areas, sources, and receptors of potential concern, and thus provide a basis for informing pollution reduction strategies and focusing efforts on specific pollutants (diesel particles, acrolein, and formaldehyde), geographic areas (urban centers), and source categories (nonpoint sources). The results heighten concerns about risks from food chain exposures to dioxins and PAHs. Risk estimates were sensitive to variations in methodologies for treating emissions, dispersion, deposition, exposure, and toxicity. © 2011 Society for Risk Analysis.

  7. The estimation of time-varying risks in asset pricing modelling using B-Spline method

    NASA Astrophysics Data System (ADS)

    Nurjannah; Solimun; Rinaldo, Adji

    2017-12-01

Asset pricing modelling has been extensively studied in the past few decades to explore the risk-return relationship. The asset pricing literature typically assumes a static risk-return relationship. However, several studies have found anomalies in asset pricing models that point to risk instability, and dynamic models have been proposed as a better alternative. The main problem highlighted in the dynamic-model literature is that the set of conditioning information is unobservable, so the estimation requires additional assumptions about the dynamics of risk. To overcome this problem, nonparametric estimators can be used as an alternative for estimating risk: the flexibility of the nonparametric setting avoids the misspecification that can result from selecting a functional form. This paper investigates the estimation of time-varying asset pricing models using B-splines, a nonparametric approach. The advantages of the spline method are its computational speed and simplicity, as well as direct control of curvature. Three popular asset pricing models are investigated: the CAPM (Capital Asset Pricing Model), the Fama-French 3-factor model and the Carhart 4-factor model. The results suggest that the estimated risks are time-varying and not stable over time, which confirms the risk-instability anomaly. The effect is more pronounced in Carhart's 4-factor model.
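The Cox-de Boor recursion underlying the B-spline basis can be sketched in a few lines; a time-varying beta is then a weighted sum of basis functions. The knot vector and coefficients below are illustrative, not fitted values from the paper.

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: i-th B-spline basis function of degree k at t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    if knots[i + k + 1] != knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

# a time-varying beta is modeled as beta(t) = sum_i c_i * B_i(t)
knots = [0, 0, 0, 1, 2, 3, 3, 3]   # clamped degree-2 knot vector (illustrative)
coefs = [0.8, 1.1, 1.3, 1.0, 0.9]  # illustrative spline coefficients
beta = lambda t: sum(c * bspline_basis(i, 2, t, knots)
                     for i, c in enumerate(coefs))
```

Because the bases form a partition of unity, beta(t) stays within the range of the coefficients, which makes the fitted risk path easy to read.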

  8. Estimating the Value-at-Risk for some stocks at the capital market in Indonesia based on ARMA-FIGARCH models

    NASA Astrophysics Data System (ADS)

    Sukono; Lesmana, E.; Susanti, D.; Napitupulu, H.; Hidayat, Y.

    2017-11-01

Value-at-Risk has become a standard measurement that financial institutions must carry out for both internal and regulatory purposes. In this paper, the Value-at-Risk of several stocks is estimated with an econometric modelling approach, assuming that stock returns follow a time series model. The mean is estimated using ARMA models, while the variance is estimated using FIGARCH models; the mean and variance estimators are then used to estimate the Value-at-Risk. The analysis shows that for the five stocks PRUF, BBRI, MPPA, BMRI, and INDF, the Value-at-Risk values obtained are 0.01791, 0.06037, 0.02550, 0.06030, and 0.02585, respectively. Since Value-at-Risk represents the maximum risk of each stock at the 95% confidence level, it can be taken into consideration when determining investment policy on stocks.
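Once mean and variance estimates are in hand, the normal-quantile VaR step is mechanical. The sketch below substitutes a plain sample mean and standard deviation for the ARMA-FIGARCH conditional forecasts, so it is a simplification of the paper's pipeline; the return series is invented.

```python
import statistics

def parametric_var(returns, z=1.6449):
    """One-day 95% VaR from estimated mean and standard deviation of returns,
    assuming normal innovations.

    In the paper's pipeline the mean would come from an ARMA forecast and the
    standard deviation from a FIGARCH forecast; here both are simple sample
    estimates. z=1.6449 is the 95% standard normal quantile.
    """
    mu = statistics.mean(returns)
    sigma = statistics.stdev(returns)
    return -(mu - z * sigma)  # reported as a positive loss fraction

# invented return series alternating +/-1%
var95 = parametric_var([0.01, -0.01] * 50)
```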

  9. Multiple imputation for handling missing outcome data when estimating the relative risk.

    PubMed

    Sullivan, Thomas R; Lee, Katherine J; Ryan, Philip; Salter, Amy B

    2017-09-06

    Multiple imputation is a popular approach to handling missing data in medical research, yet little is known about its applicability for estimating the relative risk. Standard methods for imputing incomplete binary outcomes involve logistic regression or an assumption of multivariate normality, whereas relative risks are typically estimated using log binomial models. It is unclear whether misspecification of the imputation model in this setting could lead to biased parameter estimates. Using simulated data, we evaluated the performance of multiple imputation for handling missing data prior to estimating adjusted relative risks from a correctly specified multivariable log binomial model. We considered an arbitrary pattern of missing data in both outcome and exposure variables, with missing data induced under missing at random mechanisms. Focusing on standard model-based methods of multiple imputation, missing data were imputed using multivariate normal imputation or fully conditional specification with a logistic imputation model for the outcome. Multivariate normal imputation performed poorly in the simulation study, consistently producing estimates of the relative risk that were biased towards the null. Despite outperforming multivariate normal imputation, fully conditional specification also produced somewhat biased estimates, with greater bias observed for higher outcome prevalences and larger relative risks. Deleting imputed outcomes from analysis datasets did not improve the performance of fully conditional specification. Both multivariate normal imputation and fully conditional specification produced biased estimates of the relative risk, presumably since both use a misspecified imputation model. Based on simulation results, we recommend researchers use fully conditional specification rather than multivariate normal imputation and retain imputed outcomes in the analysis when estimating relative risks. 
However, fully conditional specification is not without its shortcomings, and further research is needed to identify optimal approaches for relative risk estimation within the multiple imputation framework.
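Whatever imputation model is used, the per-imputation log relative risks are combined with Rubin's rules. A minimal pooling step, with invented estimates and variances, looks like this:

```python
import math

def pool_log_rr(estimates, variances):
    """Rubin's rules: pool log relative-risk estimates across m imputations."""
    m = len(estimates)
    qbar = sum(estimates) / m                # pooled point estimate
    ubar = sum(variances) / m                # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    total_var = ubar + (1 + 1 / m) * b       # Rubin's total variance
    return qbar, total_var

# invented per-imputation log-RR estimates and variances
logs = [math.log(1.9), math.log(2.1), math.log(2.0)]
variances = [0.04, 0.05, 0.045]
pooled_log_rr, total_var = pool_log_rr(logs, variances)
pooled_rr = math.exp(pooled_log_rr)
```

Pooling on the log scale and exponentiating at the end keeps the estimate's sampling distribution closer to normal, which is why the relative risk itself is never averaged directly.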

  10. Population-based absolute risk estimation with survey data

    PubMed Central

    Kovalchik, Stephanie A.; Pfeiffer, Ruth M.

    2013-01-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
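With piecewise-constant (piecewise exponential) hazards, the cause-specific absolute risk accumulates in closed form interval by interval. The sketch below implements that accumulation for any number of competing causes; the rates in the example are illustrative, not NHANES-fitted values.

```python
import math

def absolute_risk(hazards_by_cause, widths):
    """Absolute risk of cause 0 under piecewise-constant competing hazards.

    hazards_by_cause: one hazard list per cause, one rate per interval
    widths: interval lengths (e.g. years)
    Returns P(event of cause 0 occurs before any competing event).
    """
    surv = 1.0   # probability of being event-free at interval start
    risk = 0.0
    for i, w in enumerate(widths):
        rates = [h[i] for h in hazards_by_cause]
        total = sum(rates)
        # probability any event occurs in this interval, times the share
        # attributable to cause 0
        p_event = surv * (1 - math.exp(-total * w))
        risk += p_event * rates[0] / total
        surv *= math.exp(-total * w)
    return risk

# one cause only: reduces to 1 - exp(-cumulative hazard)
single = absolute_risk([[0.1]], [10.0])
# two equal competing causes: each gets half of the total event probability
competing = absolute_risk([[0.1], [0.1]], [10.0])
```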

  11. R2 TRI facilities with 1999-2011 risk related estimates throughout the census blockgroup

    EPA Pesticide Factsheets

This dataset delineates the distribution of estimated risk from TRI facilities for 1999-2011 throughout the census block groups of the region, using the Office of Pollution Prevention and Toxics (OPPT) Risk-Screening Environmental Indicators model (RSEI). The model uses the reported quantities of TRI chemical releases to estimate the impacts associated with each type of air release or transfer by every TRI facility. RSEI was run to generate the estimated risk for each TRI facility in the region, and the result was joined to the TRI spatial data. Estimated risk values for each census block group were calculated based on the inverse distance of all facilities within a 50 km radius of the census block group centroid. The estimated risk value for each census block group is thus an aggregated value that accounts for the estimated potential risk of all facilities within the 50 km search radius.
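The stated aggregation rule, inverse-distance weighting of all facilities within 50 km of the block-group centroid, can be sketched directly; the full RSEI scoring is more involved (toxicity weighting, dispersion), so treat this as a schematic with invented coordinates and scores.

```python
def blockgroup_risk(facilities, centroid, radius_km=50.0):
    """Aggregate facility risk scores to a block group by inverse distance.

    facilities: list of (x_km, y_km, risk_score) tuples
    centroid: (x_km, y_km) of the census block group
    Only facilities within radius_km contribute.
    """
    cx, cy = centroid
    total = 0.0
    for x, y, risk in facilities:
        d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
        if 0 < d <= radius_km:
            total += risk / d  # nearer facilities contribute more
    return total

# one facility 10 km away (counts), one 60 km away (excluded)
score = blockgroup_risk([(10.0, 0.0, 100.0), (60.0, 0.0, 100.0)], (0.0, 0.0))
```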

  12. Using the Violence Risk Scale-Sexual Offense version in sexual violence risk assessments: Updated risk categories and recidivism estimates from a multisite sample of treated sexual offenders.

    PubMed

    Olver, Mark E; Mundt, James C; Thornton, David; Beggs Christofferson, Sarah M; Kingston, Drew A; Sowden, Justina N; Nicholaichuk, Terry P; Gordon, Audrey; Wong, Stephen C P

    2018-04-30

The present study sought to develop updated risk categories and recidivism estimates for the Violence Risk Scale-Sexual Offense version (VRS-SO; Wong, Olver, Nicholaichuk, & Gordon, 2003-2017), a sexual offender risk assessment and treatment planning tool. The overarching purpose was to increase the clarity and accuracy of communicating risk assessment information that includes a systematic incorporation of new information (i.e., change) to modify risk estimates. Four treated samples of sexual offenders with VRS-SO pretreatment, posttreatment, and Static-99R ratings were combined, with a minimum follow-up period of 10 years post-release (N = 913). Logistic regression was used to model 5- and 10-year sexual and violent (including sexual) recidivism estimates across 6 different regression models employing specific risk and change score information from the VRS-SO and/or Static-99R. A rationale is presented for clinical applications of select models and the necessity of controlling for baseline risk when utilizing change information across repeated assessments. Information concerning relative risk (percentiles) and absolute risk (recidivism estimates) is integrated with common risk assessment language guidelines to generate new risk categories for the VRS-SO. Guidelines for model selection and forensic clinical application of the risk estimates are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
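A logistic model that controls for baseline (pretreatment) risk while crediting measured treatment change might be sketched as follows. The coefficients are hypothetical placeholders, not published VRS-SO values; the sketch only shows the structure of such a model.

```python
import math

def recidivism_probability(pre_score, change,
                           b0=-3.0, b_pre=0.08, b_change=-0.15):
    """Logistic recidivism estimate from pretreatment risk and change.

    pre_score: pretreatment risk score (controls for baseline risk)
    change: measured treatment change (positive change should reduce risk)
    All coefficients are illustrative placeholders.
    """
    linear = b0 + b_pre * pre_score + b_change * change
    return 1 / (1 + math.exp(-linear))

# at a fixed baseline score, greater measured change lowers the estimate
p_more_change = recidivism_probability(40, 6)
p_less_change = recidivism_probability(40, 2)
```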

  13. Data Sources for the Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Cancer.gov

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  14. Comparison of robustness to outliers between robust Poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    PubMed

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson regression and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination was low. The robust Poisson models are more robust (or less sensitive) to outliers than the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
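For a single binary exposure with no covariates, both models target the same quantity, the ratio of observed risks; this degenerate case is a useful sanity check when comparing implementations of the two regressions.

```python
def relative_risk(y, x):
    """Unadjusted relative risk for a binary outcome y and binary exposure x.

    With a single binary covariate and no adjustment, both the log-binomial
    model and the robust Poisson model reduce to this ratio of observed risks.
    """
    exposed = [yi for yi, xi in zip(y, x) if xi == 1]
    unexposed = [yi for yi, xi in zip(y, x) if xi == 0]
    risk_exposed = sum(exposed) / len(exposed)
    risk_unexposed = sum(unexposed) / len(unexposed)
    return risk_exposed / risk_unexposed

# toy data: risk 0.5 among exposed, 0.25 among unexposed -> RR = 2
rr = relative_risk([1, 1, 0, 0, 1, 0, 0, 0], [1, 1, 1, 1, 0, 0, 0, 0])
```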

  15. Graphs to estimate an individualized risk of breast cancer.

    PubMed

    Benichou, J; Gail, M H; Mulvihill, J J

    1996-01-01

Clinicians who counsel women about their risk for developing breast cancer need a rapid method to estimate individualized risk (absolute risk), as well as the confidence limits around that point. The Breast Cancer Detection Demonstration Project (BCDDP) model (sometimes called the Gail model) assumes no genetic model and simultaneously incorporates five risk factors, but involves cumbersome calculations and interpolations. This report provides graphs to estimate the absolute risk of breast cancer from the BCDDP model. The BCDDP recruited 280,000 women from 1973 to 1980 who were monitored for 5 years. From this cohort, 2,852 white women developed breast cancer and 3,146 controls were selected, all with complete risk-factor information. The BCDDP model, previously developed from these data, was used to prepare graphs that relate a specific summary relative-risk estimate to the absolute risk of developing breast cancer over intervals of 10, 20, and 30 years. Once a summary relative risk is calculated, the appropriate graph is chosen that shows the 10-, 20-, or 30-year absolute risk of developing breast cancer. A separate graph gives the 95% confidence limits around the point estimate of absolute risk. Once a clinician rules out a single gene trait that predisposes to breast cancer and elicits information on age and four risk factors, the tables and figures permit an estimation of a woman's absolute risk of developing breast cancer in the next three decades. These results are intended to be applied to women who undergo regular screening. They should be used only in a formal counseling program to maximize a woman's understanding of the estimates and the proper use of them.
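The graph-lookup step converts a summary relative risk into an absolute risk over a projection interval. A simplified version of that conversion, ignoring the competing-mortality adjustment the BCDDP model includes, is:

```python
import math

def projected_absolute_risk(relative_risk, baseline_cum_hazard):
    """Simplified absolute risk from a summary relative risk.

    baseline_cum_hazard: cumulative baseline breast-cancer hazard over the
    projection interval (10, 20 or 30 years). Illustrative only -- the BCDDP
    model also adjusts for competing mortality, which this sketch omits.
    """
    return 1 - math.exp(-relative_risk * baseline_cum_hazard)

# illustrative: summary RR of 2 against a 1% baseline cumulative hazard
risk = projected_absolute_risk(2.0, 0.01)
```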

  16. A Bayesian model averaging approach for estimating the relative risk of mortality associated with heat waves in 105 U.S. cities.

    PubMed

    Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D

    2011-12-01

    Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
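BMA posterior model probabilities are often approximated from information criteria. The sketch below weights per-model relative-risk estimates by BIC, a standard approximation rather than the authors' exact computation; the estimates and BIC values are invented.

```python
import math

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values.

    Uses the standard exp(-BIC/2) approximation to the marginal likelihood,
    shifted by the best BIC for numerical stability.
    """
    best = min(bics)
    w = [math.exp(-(b - best) / 2) for b in bics]
    s = sum(w)
    return [wi / s for wi in w]

def bma_estimate(estimates, bics):
    """BMA posterior mean of a risk estimate across candidate models."""
    return sum(w * e for w, e in zip(bma_weights(bics), estimates))

# three candidate heat-wave models with invented risk estimates and BICs
avg = bma_estimate([0.10, 0.20, 0.30], [12.0, 12.0, 12.0])
```

When one model's BIC is much lower than the rest, its weight dominates and BMA collapses toward model selection; when BICs are close, the average hedges across models, which is the behavior the abstract reports.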

  17. [Theoretical model study of the application risk of high-risk medical equipment].

    PubMed

    Shang, Changhao; Yang, Fenghui

    2014-11-01

To establish a theoretical model for monitoring the risk of high-risk medical equipment at the application site, the site is regarded as a system containing several sub-systems, each consisting of several risk-estimating indicators. Each indicator is quantized, the quantized values are multiplied by their corresponding weights, and the products are summed to give the risk estimate for each sub-system. By the same method, the sub-system estimates are multiplied by their corresponding weights and summed; the cumulative sum is the status indicator of the high-risk medical equipment at the application site, and it reflects the equipment's application risk. The resulting model can dynamically and specifically monitor the application risk of high-risk medical equipment in use.
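The described two-level weighted aggregation is straightforward to express; the weights and indicator values below are invented for illustration.

```python
def site_risk_indicator(subsystems):
    """Two-level weighted aggregation of risk indicators.

    subsystems: list of (subsystem_weight, indicators) pairs, where
    indicators is a list of (indicator_weight, quantized_value) pairs.
    Each sub-system score is a weighted sum of its indicators; the
    site-level status indicator is the weighted sum of sub-system scores.
    """
    total = 0.0
    for weight, indicators in subsystems:
        sub_score = sum(w * v for w, v in indicators)
        total += weight * sub_score
    return total

# two sub-systems with invented weights and quantized indicator values
status = site_risk_indicator([
    (0.6, [(0.5, 2.0), (0.5, 4.0)]),  # sub-system score: 3.0
    (0.4, [(1.0, 5.0)]),              # sub-system score: 5.0
])
```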

  18. Bayesian Framework for Water Quality Model Uncertainty Estimation and Risk Management

    EPA Science Inventory

    A formal Bayesian methodology is presented for integrated model calibration and risk-based water quality management using Bayesian Monte Carlo simulation and maximum likelihood estimation (BMCML). The primary focus is on lucid integration of model calibration with risk-based wat...

  19. CubeSat mission design software tool for risk estimating relationships

    NASA Astrophysics Data System (ADS)

    Gamble, Katharine Brumbaugh; Lightsey, E. Glenn

    2014-09-01

    In an effort to make the CubeSat risk estimation and management process more scientific, a software tool has been created that enables mission designers to estimate mission risks. CubeSat mission designers are able to input mission characteristics, such as form factor, mass, development cycle, and launch information, in order to determine the mission risk root causes which historically present the highest risk for their mission. Historical data was collected from the CubeSat community and analyzed to provide a statistical background to characterize these Risk Estimating Relationships (RERs). This paper develops and validates the mathematical model based on the same cost estimating relationship methodology used by the Unmanned Spacecraft Cost Model (USCM) and the Small Satellite Cost Model (SSCM). The RER development uses general error regression models to determine the best fit relationship between root cause consequence and likelihood values and the input factors of interest. These root causes are combined into seven overall CubeSat mission risks which are then graphed on the industry-standard 5×5 Likelihood-Consequence (L-C) chart to help mission designers quickly identify areas of concern within their mission. This paper is the first to document not only the creation of a historical database of CubeSat mission risks, but, more importantly, the scientific representation of Risk Estimating Relationships.
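The final placement of each aggregated risk on the 5×5 L-C chart can be sketched as follows; the green/yellow/red banding rule used here is a common industry convention, not necessarily the tool's exact scheme.

```python
def lc_rating(likelihood, consequence):
    """Position a mission risk on the 5x5 Likelihood-Consequence chart.

    Both inputs are integer scores from 1 to 5. Returns the chart cell and a
    coarse band; the product-based banding thresholds are a common
    convention, assumed here for illustration.
    """
    assert 1 <= likelihood <= 5 and 1 <= consequence <= 5
    score = likelihood * consequence
    band = 'green' if score <= 5 else 'yellow' if score <= 14 else 'red'
    return (likelihood, consequence), band

# a high-likelihood, high-consequence risk lands in the red corner
cell, band = lc_rating(5, 5)
```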

  20. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

The normal mixture distribution model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010, using a two-component univariate normal mixture distribution model. First, we present the application of the model in empirical finance, fitting it to the real data. Second, we apply the model in risk analysis, evaluating VaR and CVaR with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distribution model fits the data well and performs better in estimating VaR and CVaR, capturing the stylized facts of non-normality and leptokurtosis in the returns distribution.
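The mixture quantile behind the VaR has no closed form, so it is usually found numerically. The sketch below inverts a two-component normal mixture CDF by bisection; the parameters are illustrative, not fitted FBMKLCI values.

```python
import math

def norm_cdf(x, mu, sigma):
    """Standard normal CDF shifted/scaled to N(mu, sigma^2)."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def mixture_var(p1, mu1, s1, mu2, s2, alpha=0.05):
    """VaR as the alpha-quantile of a two-component normal mixture of returns.

    Found by bisection on the mixture CDF, which is monotone; reported as a
    positive loss figure.
    """
    cdf = lambda x: p1 * norm_cdf(x, mu1, s1) + (1 - p1) * norm_cdf(x, mu2, s2)
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if cdf(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return -(lo + hi) / 2

# degenerate check: a single N(0,1) component gives the usual 1.645 quantile
var_single = mixture_var(1.0, 0.0, 1.0, 0.0, 1.0, alpha=0.05)
```

A fat-tailed second component (larger sigma) pushes the quantile out, which is exactly how the mixture captures leptokurtosis.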

  21. Methodology for the Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Cancer.gov

    This model-based approach uses data from both the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS) to produce estimates of the prevalence rates of cancer risk factors and screening behaviors at the state, health service area, and county levels.

  22. Estimation model of life insurance claims risk for cancer patients by using Bayesian method

    NASA Astrophysics Data System (ADS)

    Sukono; Suyudi, M.; Islamiyati, F.; Supian, S.

    2017-01-01

This paper discusses a model for estimating the risk of life insurance claims for cancer patients using a Bayesian method. To estimate the claim risk, the insurance participant data are grouped into two counts: the number of policies issued and the number of claims incurred. Model estimation is done using a Bayesian approach, and the estimator is then used to estimate the claim risk value for each age group and each sex. The estimation results indicate that the risk premium for insured males aged less than 30 years is 0.85; for ages 30 to 40 years, 3.58; for ages 41 to 50 years, 1.71; for ages 51 to 60 years, 2.96; and for those aged over 60 years, 7.82. Meanwhile, for insured women aged less than 30 years it is 0.56; for ages 30 to 40 years, 3.21; for ages 41 to 50 years, 0.65; for ages 51 to 60 years, 3.12; and for those aged over 60 years, 9.99. This study is useful in determining the risk premium in homogeneous groups based on gender and age.
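With a conjugate Beta prior on the claim probability, the Bayesian update for each age-sex group is one line; the prior and the counts below are illustrative, not the paper's data.

```python
def posterior_claim_rate(claims, policies, a=1.0, b=1.0):
    """Posterior mean claim rate under a Beta(a, b) prior.

    claims: number of claims incurred in the group
    policies: number of policies issued in the group
    The Beta prior is conjugate to the binomial claim count, so the
    posterior mean is (a + claims) / (a + b + policies).
    """
    return (a + claims) / (a + b + policies)

# illustrative group: 12 claims out of 400 policies, uniform Beta(1,1) prior
rate = posterior_claim_rate(12, 400)
```

The prior keeps small groups (few policies) from producing extreme rate estimates, which matters when premiums are set per age-sex cell.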

  23. Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Cancer.gov

    These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.

  24. Assessing the reliability of dose coefficients for exposure to radioiodine by members of the public, accounting for dosimetric and risk model uncertainties.

    PubMed

    Puncher, M; Zhang, W; Harrison, J D; Wakeford, R

    2017-06-26

    Assessments of risk to a specific population group resulting from internal exposure to a particular radionuclide can be used to assess the reliability of the appropriate International Commission on Radiological Protection (ICRP) dose coefficients used as a radiation protection device for the specified exposure pathway. An estimate of the uncertainty on the associated risk is important for informing judgments on reliability; a derived uncertainty factor, UF, is an estimate of the 95% probable geometric difference between the best risk estimate and the nominal risk and is a useful tool for making this assessment. This paper describes the application of parameter uncertainty analysis to quantify uncertainties resulting from internal exposures to radioiodine by members of the public, specifically 1-, 10- and 20-year-old females from the population of England and Wales. Best estimates of thyroid cancer incidence risk (lifetime attributable risk) are calculated for ingestion or inhalation of ¹²⁹I and ¹³¹I, accounting for uncertainties in biokinetic model and cancer risk model parameter values. These estimates are compared with the equivalent ICRP-derived nominal age-, sex- and population-averaged estimates of excess thyroid cancer incidence to obtain UFs. Derived UF values for ingestion or inhalation of ¹³¹I for 1-year, 10-year and 20-year-olds are around 28, 12 and 6, respectively, when compared with ICRP Publication 103 nominal values, and 9, 7 and 14, respectively, when compared with ICRP Publication 60 values. Broadly similar results were obtained for ¹²⁹I. The uncertainties on risk estimates are largely determined by uncertainties on risk model parameters rather than uncertainties on biokinetic model parameters. An examination of the sensitivity of the results to the risk models and populations used in the calculations shows variations in the central estimates of risk of a factor of around 2-3. It is assumed that the direct proportionality of excess thyroid cancer risk and dose observed at low to moderate acute doses and incorporated in the risk models also applies to very small doses received at very low dose rates; the uncertainty in this assumption is considerable, but largely unquantifiable. The UF values illustrate the need for an informed approach to the use of ICRP dose and risk coefficients.
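
    The derived UF can be sketched numerically. A minimal illustration, assuming the risk uncertainty is summarized by Monte Carlo samples and reading the UF as the 95th percentile of the geometric deviation from the nominal risk (an illustrative interpretation, not the paper's exact estimator):

```python
import math, random

def uncertainty_factor(risk_samples, nominal):
    """Illustrative UF: the 95th percentile of the geometric deviation
    |log(sample / nominal)|, returned as a ratio."""
    logs = sorted(abs(math.log(r / nominal)) for r in risk_samples)
    return math.exp(logs[int(0.95 * (len(logs) - 1))])

# Hypothetical Monte Carlo risk samples, lognormal about the nominal value
random.seed(1)
samples = [1e-4 * math.exp(random.gauss(0.0, 1.0)) for _ in range(10000)]
print(uncertainty_factor(samples, nominal=1e-4))  # near exp(1.96), about 7
```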

  5. Estimating the risk of dengue transmission from Dutch blood donors travelling to Suriname and the Dutch Caribbean.

    PubMed

    Oei, W; Lieshout-Krikke, R W; Kretzschmar, M E; Zaaijer, H L; Coutinho, R A; Eersel, M; Jubithana, B; Halabi, Y; Gerstenbluth, I; Maduro, E; Tromp, M; Janssen, M P

    2016-05-01

    The risk of dengue transmitted by travellers is known. Methods to estimate the transmission by transfusion (TT) risk from blood donors travelling to risk areas are available, for instance, the European Up-Front Risk Assessment Tool (EUFRAT). This study aimed to validate the estimated risk from travelling donors obtained from EUFRAT. Surveillance data on notified dengue cases in Suriname and the Dutch Caribbean islands (Aruba, Curaçao, St. Maarten, Bonaire, St. Eustatius and Saba) in 2001-2011 were used to calculate local incidence rates. Information on travel and donation behaviour of Dutch donors was collected. With the EUFRAT model, the TT risks from Dutch travelling donors were calculated. Model estimates were compared with the number of infections in Dutch travellers found by laboratory tests in the Netherlands. The expected cumulative number of donors becoming infected during travels to Suriname and the Dutch Caribbean from 2001 to 2011 was estimated at 5 (95% CI, 2-11) and 86 (45-179), respectively. The infection risk inferred from the laboratory-based study was 19 (9-61) and 28 (14-92). Given the independence of the data sources, these estimates are remarkably close. The model estimated that 0·02 (0·001-0·06) and 0·40 (0·01-1·4) recipients would have been infected by these travelling donors. The EUFRAT model provided an estimate close to the actual observed number of dengue infections. The dengue TT risk among Dutch travelling donors can be estimated using basic transmission, travel and donation information. The TT risk from Dutch donors travelling to Suriname and the Dutch Caribbean is small. © 2016 International Society of Blood Transfusion.
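
    The core of such a travel-related TT risk estimate is a product of exposure terms. A minimal sketch with invented inputs (not the study's data, and not the full EUFRAT tool, which also models donation timing and recipient transmission):

```python
# Hedged sketch of the core exposure product in a EUFRAT-style estimate:
# expected infections among travelling donors = donors x days abroad x
# local per-person daily incidence. All inputs below are illustrative.

def expected_infected_donors(n_travelling_donors, days_abroad,
                             annual_incidence_per_100k):
    daily_rate = annual_incidence_per_100k / 100_000 / 365
    return n_travelling_donors * days_abroad * daily_rate

# e.g. 5000 donors travelling for 14 days to an area with 300 cases/100k/yr
print(round(expected_infected_donors(5000, 14, 300), 3))
```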

  6. Risk Estimation Modeling and Feasibility Testing for a Mobile eHealth Intervention for Binge Drinking Among Young People: The D-ARIANNA (Digital-Alcohol RIsk Alertness Notifying Network for Adolescents and young adults) Project.

    PubMed

    Carrà, Giuseppe; Crocamo, Cristina; Schivalocchi, Alessandro; Bartoli, Francesco; Carretta, Daniele; Brambilla, Giulia; Clerici, Massimo

    2015-01-01

    Binge drinking is common among young people, but relevant risk factors are often not recognized. eHealth apps, attractive to young people, may be useful for enhancing awareness of this problem. We aimed to develop a current risk estimation model for binge drinking, incorporated into an eHealth app--D-ARIANNA (Digital-Alcohol RIsk Alertness Notifying Network for Adolescents and young adults)--for young people. A longitudinal approach with phase 1 (risk estimation), phase 2 (design), and phase 3 (feasibility) was followed. Risk/protective factors identified from the literature were used to develop a current risk estimation model for binge drinking. Relevant odds ratios were subsequently pooled through meta-analytic techniques with a random-effects model, deriving weighted estimates to be introduced in a final model. A set of questions, matching identified risk factors, were nested in a questionnaire and assessed for wording, content, and acceptability in focus groups involving 110 adolescents and young adults. Ten risk factors (5 modifiable) and 2 protective factors showed significant associations with binge drinking and were included in the model. Their weighted coefficients ranged between -0.71 (school proficiency) and 1.90 (cannabis use). The model, nested in an eHealth app questionnaire, provides an overall current risk score, expressed as a percentage and accompanied by appropriate images. The factors that contribute most are shown in summary messages. Minor changes were made after focus group review. Most of the subjects (74%) regarded the eHealth app as helpful for assessing binge drinking risk. We could produce an evidence-based eHealth app for young people, evaluating current risk for binge drinking. Its effectiveness will be tested in a large trial.
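
    A score of this kind sums weighted coefficients for the answered factors and maps the total to a percentage. A sketch under stated assumptions: only the two coefficients quoted in the abstract (-0.71 for school proficiency, 1.90 for cannabis use) come from the text; the "peer_drinking" factor and the intercept are invented:

```python
import math

# Hedged sketch of a D-ARIANNA-style current-risk score: answered factors
# contribute weighted coefficients (log odds ratios) and the sum is mapped
# to a percentage with the logistic function.

COEFFS = {
    "cannabis_use": 1.90,
    "school_proficiency": -0.71,   # protective factor
    "peer_drinking": 0.80,         # hypothetical
}
INTERCEPT = -2.0                   # hypothetical baseline log-odds

def current_risk_percent(answers):
    score = INTERCEPT + sum(COEFFS[f] for f, yes in answers.items() if yes)
    return 100.0 / (1.0 + math.exp(-score))

print(round(current_risk_percent(
    {"cannabis_use": True, "school_proficiency": False,
     "peer_drinking": True}), 1))
```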

  7. Use and Customization of Risk Scores for Predicting Cardiovascular Events Using Electronic Health Record Data.

    PubMed

    Wolfson, Julian; Vock, David M; Bandyopadhyay, Sunayan; Kottke, Thomas; Vazquez-Benitez, Gabriela; Johnson, Paul; Adomavicius, Gediminas; O'Connor, Patrick J

    2017-04-24

    Clinicians who are using the Framingham Risk Score (FRS) or the American College of Cardiology/American Heart Association Pooled Cohort Equations (PCE) to estimate risk for their patients based on electronic health data (EHD) face 4 questions. (1) Do published risk scores applied to EHD yield accurate estimates of cardiovascular risk? (2) Are FRS risk estimates, which are based on data that are up to 45 years old, valid for a contemporary patient population seeking routine care? (3) Do the PCE make the FRS obsolete? (4) Does refitting the risk score using EHD improve the accuracy of risk estimates? Data were extracted from the EHD of 84 116 adults aged 40 to 79 years who received care at a large healthcare delivery and insurance organization between 2001 and 2011. We assessed calibration and discrimination for 4 risk scores: published versions of FRS and PCE and versions obtained by refitting models using a subset of the available EHD. The published FRS was well calibrated (calibration statistic K=9.1, miscalibration ranging from 0% to 17% across risk groups), but the PCE displayed modest evidence of miscalibration (calibration statistic K=43.7, miscalibration from 9% to 31%). Discrimination was similar in both models (C-index=0.740 for FRS, 0.747 for PCE). Refitting the published models using EHD did not substantially improve calibration or discrimination. We conclude that published cardiovascular risk models can be successfully applied to EHD to estimate cardiovascular risk; the FRS remains valid and is not obsolete; and model refitting does not meaningfully improve the accuracy of risk estimates. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.

  8. Challenges in risk estimation using routinely collected clinical data: The example of estimating cervical cancer risks from electronic health-records.

    PubMed

    Landy, Rebecca; Cheung, Li C; Schiffman, Mark; Gage, Julia C; Hyun, Noorie; Wentzensen, Nicolas; Kinney, Walter K; Castle, Philip E; Fetterman, Barbara; Poitras, Nancy E; Lorey, Thomas; Sasieni, Peter D; Katki, Hormuzd A

    2018-06-01

    Electronic health-records (EHR) are increasingly used by epidemiologists studying disease following surveillance testing to provide evidence for screening intervals and referral guidelines. Although cost-effective, undiagnosed prevalent disease and interval censoring (in which asymptomatic disease is only observed at the time of testing) raise substantial analytic issues when estimating risk that cannot be addressed using Kaplan-Meier methods. Based on our experience analysing EHR from cervical cancer screening, we previously proposed the logistic-Weibull model to address these issues. Here we demonstrate how the choice of statistical method can impact risk estimates. We use observed data on 41,067 women in the cervical cancer screening program at Kaiser Permanente Northern California, 2003-2013, as well as simulations to evaluate the ability of different methods (Kaplan-Meier, Turnbull, Weibull and logistic-Weibull) to accurately estimate risk within a screening program. Cumulative risk estimates from the statistical methods varied considerably, with the largest differences occurring for prevalent disease risk when baseline disease ascertainment was random but incomplete. Kaplan-Meier underestimated risk at earlier times and overestimated risk at later times in the presence of interval censoring or undiagnosed prevalent disease. Turnbull performed well, though it was inefficient and not smooth. The logistic-Weibull model performed well, except when event times did not follow a Weibull distribution. We have demonstrated that methods for right-censored data, such as Kaplan-Meier, result in biased estimates of disease risks when applied to interval-censored data, such as screening programs using EHR data. The logistic-Weibull model is attractive, but the model fit must be checked against Turnbull non-parametric risk estimates. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
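
    The logistic-Weibull cumulative risk curve named above can be sketched as a logistic term for undiagnosed prevalent disease at baseline plus a Weibull term for incident disease; the parameter values here are illustrative assumptions, not fitted estimates:

```python
import math

# Hedged sketch of a logistic-Weibull cumulative risk curve: a logistic
# intercept gives the probability of (undiagnosed) prevalent disease at
# t = 0, and a Weibull hazard accrues incident disease thereafter.

def cumulative_risk(t_years, beta0=-4.0, lam=0.02, gamma=1.3):
    prevalent = 1.0 / (1.0 + math.exp(-beta0))            # risk at t = 0
    incident = 1.0 - math.exp(-((lam * t_years) ** gamma))
    return prevalent + (1.0 - prevalent) * incident

for t in (0, 3, 5, 10):
    print(t, round(cumulative_risk(t), 4))
```

    Unlike a Kaplan-Meier curve fit to right-censored data, this curve starts above zero, which is how the model captures prevalent disease found at the first screen.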

  9. The cardiovascular event reduction tool (CERT)--a simplified cardiac risk prediction model developed from the West of Scotland Coronary Prevention Study (WOSCOPS).

    PubMed

    L'Italien, G; Ford, I; Norrie, J; LaPuerta, P; Ehreth, J; Jackson, J; Shepherd, J

    2000-03-15

    The clinical decision to treat hypercholesterolemia is premised on an awareness of patient risk, and cardiac risk prediction models offer a practical means of determining such risk. However, these models are based on observational cohorts where estimates of the treatment benefit are largely inferred. The West of Scotland Coronary Prevention Study (WOSCOPS) provides an opportunity to develop a risk-benefit prediction model from the actual observed primary event reduction seen in the trial. Five-year Cox model risk estimates were derived from all WOSCOPS subjects (n = 6,595 men, aged 45 to 64 years old at baseline) using factors previously shown to be predictive of definite fatal coronary heart disease or nonfatal myocardial infarction. Model risk factors included age, diastolic blood pressure, total cholesterol/high-density lipoprotein ratio (TC/HDL), current smoking, diabetes, family history of fatal coronary heart disease, nitrate use or angina, and treatment (placebo/40-mg pravastatin). All risk factors were expressed as categorical variables to facilitate risk assessment. Risk estimates were incorporated into a simple, hand-held slide rule or risk tool. Risk estimates were identified for 5-year age bands (45 to 65 years), 4 categories of TC/HDL ratio (<5.5, 5.5 to <6.5, 6.5 to <7.5, ≥7.5), 2 levels of diastolic blood pressure (<90, ≥90 mm Hg), from 0 to 3 additional risk factors (current smoking, diabetes, family history of premature fatal coronary heart disease, nitrate use or angina), and pravastatin treatment. Five-year risk estimates ranged from 2% in very low-risk subjects to 61% in the very high-risk subjects. Risk reduction due to pravastatin treatment averaged 31%. Thus, the Cardiovascular Event Reduction Tool (CERT) is a risk prediction model derived from the WOSCOPS trial. Its use will help physicians identify patients who will benefit from cholesterol reduction.

  10. Improving default risk prediction using Bayesian model uncertainty techniques.

    PubMed

    Kazemi, Reza; Mosleh, Ali

    2012-11-01

    Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. The potential exposure is measured in terms of probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessments of the probability of default and transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses techniques developed in the physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis. © 2012 Society for Risk Analysis.
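
    One simple stand-in for combining agency estimates is precision-weighted pooling on the log-odds scale, with each agency's weight derived from its historical accuracy. A sketch with invented figures (the paper's full Bayesian treatment additionally handles transition matrices and upsetting events):

```python
import math

# Hedged sketch: pool default-probability estimates from several rating
# agencies by precision-weighting them on the log-odds scale. The
# probabilities and accuracy figures below are illustrative.

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def combine(estimates):
    """estimates: list of (default_prob, historical_sd_on_logit_scale)."""
    w = [1 / sd ** 2 for _, sd in estimates]
    pooled = sum(wi * logit(p) for wi, (p, _) in zip(w, estimates)) / sum(w)
    return inv_logit(pooled)

agencies = [(0.02, 0.30), (0.035, 0.60), (0.025, 0.45)]  # hypothetical
print(round(combine(agencies), 4))
```

    More historically accurate agencies (smaller sd) pull the pooled estimate toward their figure.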

  11. Data Sources for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Cancer.gov

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  12. Are Structural Estimates of Auction Models Reasonable? Evidence from Experimental Data

    ERIC Educational Resources Information Center

    Bajari, Patrick; Hortacsu, Ali

    2005-01-01

    Recently, economists have developed methods for structural estimation of auction models. Many researchers object to these methods because they find the strict rationality assumptions to be implausible. Using bid data from first-price auction experiments, we estimate four alternative structural models: (1) risk-neutral Bayes-Nash, (2) risk-averse…

  13. [Estimation of individual breast cancer risk: relevance and limits of risk estimation models].

    PubMed

    De Pauw, A; Stoppa-Lyonnet, D; Andrieu, N; Asselain, B

    2009-10-01

    Several risk estimation models for breast or ovarian cancer have been developed over recent decades. All of these models take the family history into account, with different levels of sophistication. The Gail model, developed in 1989, accounts for family history (0, 1 or ≥2 affected relatives) and several environmental factors. In 1990, the Claus model was the first to integrate explicit assumptions about genetic effects, assuming a single dominantly inherited gene occurring at low frequency in the population. The BRCAPRO model, developed after the identification of BRCA1 and BRCA2, assumes transmission restricted to these two dominantly inherited genes. The BOADICEA model adds a polygenic component to the effects of BRCA1 and BRCA2 to explain the residual familial clustering of breast cancer. Finally, the IBIS model assumes a third dominantly inherited gene to explain this residual clustering; moreover, this model incorporates environmental factors. We applied the Claus, BRCAPRO, BOADICEA and IBIS models to four clinical situations, corresponding to more or less extensive family histories, in order to study the consistency of the risk estimates. The three most recent models (BRCAPRO, BOADICEA and IBIS) gave the closest estimates. These estimates could be useful in clinical practice when analysing complex breast and/or ovarian cancer family histories.

  14. The Common Risk Model for Dams: A Portfolio Approach to Security Risk Assessments

    DTIC Science & Technology

    2013-06-01

    The Common Risk Model (CRM) evaluates and compares risks associated with the nation's critical infrastructure. The Common Risk Model for Dams (CRM-D) incorporates commonly used consequence, vulnerability, and threat estimates in a way that properly accounts for the relationships among these variables, and can effectively quantify the benefits of...

  15. Risk Classification with an Adaptive Naive Bayes Kernel Machine Model.

    PubMed

    Minnier, Jessica; Yuan, Ming; Liu, Jun S; Cai, Tianxi

    2015-04-22

    Genetic studies of complex traits have uncovered only a small number of risk markers explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single-marker methods may lack power in selecting informative markers or estimating effects. Most existing methods also typically do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure. If markers in a group tend to have similar effects, proper usage of the group structure could improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a naive Bayes kernel machine (KM) model, we estimate gene-set-specific risk models that relate each gene-set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene-sets via a regularization procedure. Estimation and computational efficiency are further improved with kernel principal component analysis. Asymptotic results for model estimation and gene-set selection are derived, and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models.

  16. Coronary risk assessment by point-based vs. equation-based Framingham models: significant implications for clinical care.

    PubMed

    Gordon, William J; Polansky, Jesse M; Boscardin, W John; Fung, Kathy Z; Steinman, Michael A

    2010-11-01

    US cholesterol guidelines use original and simplified versions of the Framingham model to estimate future coronary risk and thereby classify patients into risk groups with different treatment strategies. We sought to compare risk estimates and risk group classification generated by the original, complex Framingham model and the simplified, point-based version. We assessed 2,543 subjects age 20-79 from the 2001-2006 National Health and Nutrition Examination Surveys (NHANES) for whom Adult Treatment Panel III (ATP-III) guidelines recommend formal risk stratification. For each subject, we calculated the 10-year risk of major coronary events using the original and point-based Framingham models, and then compared differences in these risk estimates and whether these differences would place subjects into different ATP-III risk groups (<10% risk, 10-20% risk, or >20% risk). Using standard procedures, all analyses were adjusted for survey weights, clustering, and stratification to make our results nationally representative. Among 39 million eligible adults, the original Framingham model categorized 71% of subjects as having "moderate" risk (<10% risk of a major coronary event in the next 10 years), 22% as having "moderately high" (10-20%) risk, and 7% as having "high" (>20%) risk. Estimates of coronary risk by the original and point-based models often differed substantially. The point-based system classified 15% of adults (5.7 million) into different risk groups than the original model, with 10% (3.9 million) misclassified into higher risk groups and 5% (1.8 million) into lower risk groups, for a net impact of classifying 2.1 million adults into higher risk groups. These risk group misclassifications would impact guideline-recommended drug treatment strategies for 25-46% of affected subjects. Patterns of misclassifications varied significantly by gender, age, and underlying CHD risk. 
Compared to the original Framingham model, the point-based version misclassifies millions of Americans into risk groups for which guidelines recommend different treatment strategies.
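
    The mechanism behind the misclassification can be shown with toy numbers: a point chart discretizes continuous factors, so a person near a band edge can land on a different side of the 10% or 20% cutoff than the equation places them. The coefficients and point table below are invented for illustration and are NOT the Framingham values:

```python
import math

# Hedged sketch: the same (hypothetical) person classified by a continuous
# risk equation vs. a point-based approximation of it.

def equation_risk(age, sbp):
    lin = -7.8 + 0.065 * age + 0.012 * sbp           # hypothetical model
    return 1 / (1 + math.exp(-lin))

def point_risk(age, sbp):
    pts = (age // 10) * 3 + (sbp // 20)              # hypothetical points
    return {19: 0.06, 20: 0.09, 21: 0.12, 22: 0.18}.get(pts, 0.04)

def group(r):
    return "moderate" if r < 0.10 else "moderately high" if r <= 0.20 else "high"

age, sbp = 58, 138
print(group(equation_risk(age, sbp)), "vs", group(point_risk(age, sbp)))
```

    Here the equation puts the person just below 10% ("moderate") while the coarser point bands push them over the cutoff ("moderately high"), the kind of disagreement the study quantifies at national scale.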

  17. Assessment of uncertainties in radiation-induced cancer risk predictions at clinically relevant doses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, J.; Moteabbed, M.; Paganetti, H., E-mail: hpaganetti@mgh.harvard.edu

    2015-01-15

    Purpose: Theoretical dose–response models offer the possibility to assess second cancer induction risks after external beam therapy. The parameters used in these models are determined with limited data from epidemiological studies. Risk estimations are thus associated with considerable uncertainties. This study aims to illustrate these uncertainties when predicting the risk for organ-specific second cancers in the primary radiation field, using selected treatment plans for brain cancer patients as examples. Methods: A widely used risk model was considered in this study. The uncertainties of the model parameters were estimated with reported data of second cancer incidences for various organs. Standard error propagation was then applied to assess the uncertainty in the risk model. Next, second cancer risks of five pediatric patients treated for cancer in the head and neck regions were calculated. For each case, treatment plans for proton and photon therapy were designed to estimate the uncertainties (a) in the lifetime attributable risk (LAR) for a given treatment modality and (b) when comparing risks of two different treatment modalities. Results: Uncertainties in excess of 100% of the risk were found for almost all organs considered. When applied to treatment plans, the calculated LAR values have uncertainties of the same magnitude. A comparison between cancer risks of different treatment modalities, however, does allow statistically significant conclusions. In the studied cases, the patient-averaged LAR ratio of proton and photon treatments was 0.35, 0.56, and 0.59 for brain carcinoma, brain sarcoma, and bone sarcoma, respectively. Their corresponding uncertainties were estimated to be potentially below 5%, depending on uncertainties in dosimetry. Conclusions: The uncertainty in the dose–response curve in cancer risk models makes it currently impractical to predict the risk for an individual external beam treatment. On the other hand, the ratio of absolute risks between two modalities is less sensitive to the uncertainties in the risk model and can provide statistically significant estimates.
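
    The contrast between uncertain absolute risks and a stable risk ratio can be sketched with a toy simulation in which both modalities share the same uncertain dose-response slope (the doses and slope distribution are assumed, not the study's values):

```python
import math, random

# Hedged sketch: when proton and photon risk estimates share the same
# uncertain dose-response slope, the slope uncertainty dominates each
# absolute LAR but cancels exactly in their ratio.

random.seed(7)
dose_proton, dose_photon = 0.35, 1.0   # relative organ doses (assumed)
slopes = [math.exp(random.gauss(0.0, 0.8)) for _ in range(20000)]  # risk/dose

lar_p = [s * dose_proton for s in slopes]
lar_x = [s * dose_photon for s in slopes]
ratios = [p / x for p, x in zip(lar_p, lar_x)]

def cv(xs):
    """Coefficient of variation: sd / mean."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5 / m

print(f"CV of absolute LAR: {cv(lar_x):.2f}, CV of ratio: {cv(ratios):.2f}")
```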

  18. Estimating effectiveness in HIV prevention trials with a Bayesian hierarchical compound Poisson frailty model

    PubMed Central

    Coley, Rebecca Yates; Brown, Elizabeth R.

    2016-01-01

    Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows that some participants are not exposed to HIV and, therefore, have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. PMID:26869051

  19. How are flood risk estimates affected by the choice of return-periods?

    NASA Astrophysics Data System (ADS)

    Ward, P. J.; de Moel, H.; Aerts, J. C. J. H.

    2011-12-01

    Flood management is increasingly adopting a risk-based approach, whereby flood risk is the product of the probability and consequences of flooding. One of the most common approaches in flood risk assessment is to estimate the damage that would occur for floods of several exceedance probabilities (or return periods), to plot these on an exceedance probability-loss curve (risk curve) and to estimate risk as the area under the curve. However, there is little insight into how the selection of the return-periods (which ones and how many) used to calculate risk actually affects the final risk calculation. To gain such insights, we developed and validated an inundation model capable of rapidly simulating inundation extent and depth, and dynamically coupled this to an existing damage model. The method was applied to a section of the River Meuse in the southeast of the Netherlands. First, we estimated risk based on a risk curve using yearly return periods from 2 to 10 000 yr (€ 34 million p.a.). We found that the overall risk is greatly affected by the number of return periods used to construct the risk curve, with over-estimations of annual risk between 33% and 100% when only three return periods are used. In addition, binary assumptions on dike failure can have a large effect (a factor two difference) on risk estimates. Also, the minimum and maximum return period considered in the curve affects the risk estimate considerably. The results suggest that more research is needed to develop relatively simple inundation models that can be used to produce large numbers of inundation maps, complementary to more complex 2-D-3-D hydrodynamic models. It also suggests that research into flood risk could benefit by paying more attention to the damage caused by relatively high probability floods.
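
    The risk-curve integration described above can be sketched with a trapezoidal rule over exceedance probabilities. The damage function is invented, but the sketch reproduces the qualitative finding: for a damage curve that is convex in probability, a three-point risk curve over-estimates the integral relative to a finer one:

```python
# Hedged sketch: annual flood risk as the area under an exceedance
# probability-loss curve, integrated by the trapezoidal rule. The damage
# function below is illustrative, not the Meuse damage model.

def damage(return_period):           # hypothetical damage, million EUR
    return 0.0 if return_period < 2 else 50.0 * (return_period ** 0.4)

def annual_risk(return_periods):
    pts = sorted(return_periods)
    p = [1.0 / t for t in pts]       # exceedance probabilities
    d = [damage(t) for t in pts]
    area = 0.0
    for i in range(len(pts) - 1):    # integrate damage over probability
        area += 0.5 * (d[i] + d[i + 1]) * (p[i] - p[i + 1])
    return area

few = annual_risk([2, 100, 10000])
many = annual_risk([2, 5, 10, 25, 50, 100, 250, 500, 1000, 10000])
print(f"3 return periods: {few:.1f}, 10 return periods: {many:.1f}")
```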

  20. Prediction impact curve is a new measure integrating intervention effects in the evaluation of risk models.

    PubMed

    Campbell, William; Ganna, Andrea; Ingelsson, Erik; Janssens, A Cecile J W

    2016-01-01

    We propose a new measure of assessing the performance of risk models, the area under the prediction impact curve (auPIC), which quantifies the performance of risk models in terms of their average health impact in the population. Using simulated data, we explain how the prediction impact curve (PIC) estimates the percentage of events prevented when a risk model is used to assign high-risk individuals to an intervention. We apply the PIC to the Atherosclerosis Risk in Communities (ARIC) Study to illustrate its application toward prevention of coronary heart disease. We estimated that if the ARIC cohort received statins at baseline, 5% of events would be prevented when the risk model was evaluated at a cutoff threshold of 20% predicted risk compared to 1% when individuals were assigned to the intervention without the use of a model. By calculating the auPIC, we estimated that an average of 15% of events would be prevented when considering performance across the entire interval. We conclude that the PIC is a clinically meaningful measure for quantifying the expected health impact of risk models that supplements existing measures of model performance. Copyright © 2016 Elsevier Inc. All rights reserved.
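
    The fraction-of-events-prevented calculation behind the PIC can be sketched directly; the cohort risks, the 20% cutoff and the 25% relative risk reduction below are illustrative, not the ARIC figures:

```python
# Hedged sketch of one point on a prediction impact curve: everyone whose
# predicted risk meets the cutoff receives an intervention with a given
# relative risk reduction (rrr), and we report the share of all expected
# events thereby prevented.

def events_prevented_fraction(predicted_risks, cutoff, rrr):
    total = sum(predicted_risks)
    prevented = sum(r * rrr for r in predicted_risks if r >= cutoff)
    return prevented / total

risks = [0.02, 0.05, 0.08, 0.12, 0.22, 0.35]   # hypothetical cohort
print(round(events_prevented_fraction(risks, cutoff=0.20, rrr=0.25), 3))
```

    Sweeping the cutoff from 0 to 1 traces the PIC; averaging over the interval gives the auPIC described above.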

  1. Disease mapping based on stochastic SIR-SI model for Dengue and Chikungunya in Malaysia

    NASA Astrophysics Data System (ADS)

    Samat, N. A.; Ma'arof, S. H. Mohd Imam

    2014-12-01

    This paper describes and demonstrates a method for relative risk estimation based on the stochastic SIR-SI vector-borne infectious disease transmission model, applied to Dengue and Chikungunya in Malaysia. First, the common compartmental model for vector-borne infectious disease transmission, the SIR-SI model (susceptible-infective-recovered for the human population; susceptible-infective for the vector population), is presented. This is followed by an explanation of the stochastic SIR-SI model, which involves a Bayesian formulation. This stochastic model is then used in the relative risk formulation to obtain posterior relative risk estimates. The estimation model is demonstrated using Dengue and Chikungunya data for Malaysia. The viruses of these diseases are transmitted by the same female vector mosquitoes, Aedes aegypti and Aedes albopictus. Finally, the relative risk estimates for both diseases are presented, compared and displayed in graphs and maps. The risk maps show the high- and low-risk areas of Dengue and Chikungunya occurrence, and can be used as a tool for prevention and control strategies for both diseases.

  3. Space Radiation Cancer, Circulatory Disease and CNS Risks for Near Earth Asteroid and Mars Missions: Uncertainty Estimates for Never-Smokers

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Chappell, Lori J.; Wang, Minli; Kim, Myung-Hee

    2011-01-01

    The uncertainties in estimating the health risks from galactic cosmic rays (GCR) and solar particle events (SPE) are a major limitation to the length of space missions and the evaluation of potential risk mitigation approaches. NASA limits astronaut exposures to a 3% risk of exposure-induced cancer death (REID), and protects against uncertainties in risk projections using an assessment of 95% confidence intervals after propagating the error from all model factors (environment and organ exposure, risk coefficients, dose-rate modifiers, and quality factors). Because there are potentially significant late mortality risks from diseases of the circulatory system and central nervous system (CNS), which are less well defined than cancer risks, the cancer REID limit is not necessarily conservative. In this report, we discuss estimates of lifetime risks from space radiation and describe new estimates of model uncertainties. The key updates to the NASA risk projection model are: 1) Revised values for low-LET risk coefficients for tissue-specific cancer incidence, with incidence rates transported to an average U.S. population to estimate the probability of Risk of Exposure Induced Cancer (REIC) and REID. 2) An analysis of smoking-attributable cancer risks for never-smokers that shows significantly reduced lung cancer risk, as well as overall cancer risks from radiation, compared to risks estimated for the average U.S. population. 3) Derivation of track-structure-based quality functions that depend on particle fluence, charge number Z, and kinetic energy E. 4) The assignment of a smaller maximum in the quality function for leukemia than for solid cancers. 5) A demonstration that use of the ICRP tissue weights over-estimates cancer risks from SPEs by a factor of 2 or more; summing cancer risks for each tissue is recommended as a more accurate approach to estimate SPE cancer risks. 6) Additional considerations on circulatory and CNS disease risks. 
Our analysis shows that an individual's history of smoking exposure has a larger impact on GCR risk estimates than amounts of radiation shielding or age at exposure (amongst adults). Risks for never-smokers compared to the average U.S. population are estimated to be reduced between 30% and 60%, depending on model assumptions. Lung cancer is the major contributor to the reduction for never-smokers, with additional contributions from circulatory diseases and cancers of the stomach, liver, bladder, oral cavity and esophagus, and leukemia. The relative contribution of CNS risks to the overall space radiation detriment is potentially increased for never-smokers such as most astronauts. Problems in estimating risks for former smokers and the influence of second-hand smoke are discussed. Compared to the LET approximation, the new track structure derived radiation quality functions lead to a reduced risk for relativistic energy particles and increased risks for intermediate energy particles. Revised estimates for the number of safe days in space at solar minimum for heavy shielding conditions are described for never-smokers and the average U.S. population. Results show that missions to near Earth asteroids (NEA) or Mars violate NASA's radiation safety standards with the current levels of uncertainties. Greater improvements in risk estimates for never-smokers are possible, and would depend on improved understanding of risk transfer models, and elucidating the role of space radiation on the various stages of disease formation (e.g. initiation, promotion, and progression).
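The error-propagation idea behind the 95% confidence assessment can be sketched as a Monte Carlo over multiplicative uncertainty factors applied to a point risk projection. The factor distributions and the 3% baseline below are illustrative assumptions, not NASA's actual model inputs.

```python
# Hedged sketch: multiply a point REID projection by uncertain model factors
# (quality factor, dose-rate modifier, risk transfer) drawn from assumed
# lognormal distributions, then read off the upper 95th percentile.
import random
import statistics

def reid_distribution(point_risk=0.03, n=20000, seed=1):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        q = rng.lognormvariate(0.0, 0.30)   # quality-factor uncertainty (assumed)
        d = rng.lognormvariate(0.0, 0.20)   # dose-rate modifier uncertainty (assumed)
        t = rng.lognormvariate(0.0, 0.25)   # risk-transfer uncertainty (assumed)
        samples.append(point_risk * q * d * t)
    return sorted(samples)

samples = reid_distribution()
upper95 = samples[int(0.95 * len(samples))]  # value used to protect against uncertainty
```

Because the factors are median-1 lognormals, the distribution's median stays near the point projection while the 95th percentile sits well above it, which is why protecting at the 95% level shortens permissible mission length.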

  4. Risk and the physics of clinical prediction.

    PubMed

    McEvoy, John W; Diamond, George A; Detrano, Robert C; Kaul, Sanjay; Blaha, Michael J; Blumenthal, Roger S; Jones, Steven R

    2014-04-15

    The current paradigm of primary prevention in cardiology uses traditional risk factors to estimate future cardiovascular risk. These risk estimates are based on prediction models derived from prospective cohort studies and are incorporated into guideline-based initiation algorithms for commonly used preventive pharmacologic treatments, such as aspirin and statins. However, risk estimates are more accurate for populations of similar patients than they are for any individual patient. It may be hazardous to presume that the point estimate of risk derived from a population model represents the most accurate estimate for a given patient. In this review, we exploit principles derived from physics as a metaphor for the distinction between predictions regarding populations versus patients. We identify the following: (1) predictions of risk are accurate at the level of populations but do not translate directly to patients, (2) perfect accuracy of individual risk estimation is unobtainable even with the addition of multiple novel risk factors, and (3) direct measurement of subclinical disease (screening) affords far greater certainty regarding the personalized treatment of patients, whereas risk estimates often remain uncertain for patients. In conclusion, shifting our focus from prediction of events to detection of disease could improve personalized decision-making and outcomes. We also discuss innovative future strategies for risk estimation and treatment allocation in preventive cardiology. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Software risk estimation and management techniques at JPL

    NASA Technical Reports Server (NTRS)

    Hihn, J.; Lum, K.

    2002-01-01

    In this talk we discuss how uncertainty has been incorporated into the JPL software cost model to produce probability-based estimates, and how cost risk is currently being addressed via a variety of approaches, from traditional risk lists to detailed WBS-based risk estimates to the Defect Detection and Prevention (DDP) tool.

  6. Quantitative Microbial Risk Assessment for Spray Irrigation of Dairy Manure Based on an Empirical Fate and Transport Model

    PubMed Central

    Burch, Tucker R.; Spencer, Susan K.; Stokdyk, Joel P.; Kieke, Burney A.; Larson, Rebecca A.; Firnstahl, Aaron D.; Rule, Ana M.

    2017-01-01

    Background: Spray irrigation for land-applying livestock manure is increasing in the United States as farms become larger and economies of scale make manure irrigation affordable. Human health risks from exposure to zoonotic pathogens aerosolized during manure irrigation are not well understood. Objectives: We aimed to a) estimate human health risks due to aerosolized zoonotic pathogens downwind of spray-irrigated dairy manure; and b) determine which factors (e.g., distance, weather conditions) have the greatest influence on risk estimates. Methods: We sampled downwind air concentrations of manure-borne fecal indicators and zoonotic pathogens during 21 full-scale dairy manure irrigation events at three farms. We fit these data to hierarchical empirical models and used model outputs in a quantitative microbial risk assessment (QMRA) to estimate risk [probability of acute gastrointestinal illness (AGI)] for individuals exposed to spray-irrigated dairy manure containing Campylobacter jejuni, enterohemorrhagic Escherichia coli (EHEC), or Salmonella spp. Results: Median risk estimates from Monte Carlo simulations ranged from 10^−5 to 10^−2 and decreased with distance from the source. Risk estimates for Salmonella or EHEC-related AGI were most sensitive to the assumed level of pathogen prevalence in dairy manure, while risk estimates for C. jejuni were not sensitive to any single variable. Airborne microbe concentrations were negatively associated with distance and positively associated with wind speed, both of which were retained in models as a significant predictor more often than relative humidity, solar irradiation, or temperature. Conclusions: Our model-based estimates suggest that reducing pathogen prevalence and concentration in source manure would reduce the risk of AGI from exposure to manure irrigation, and that increasing the distance from irrigated manure (i.e., setbacks) and limiting irrigation to times of low wind speed may also reduce risk. 
https://doi.org/10.1289/EHP283 PMID:28885976
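The QMRA workflow in this record (sample a downwind concentration, convert to an inhaled dose, apply a dose-response model, repeat by Monte Carlo) can be sketched minimally. The dose-response parameter, concentration distribution, and breathing rate below are assumptions for illustration, not the study's fitted values.

```python
# Illustrative QMRA-style Monte Carlo with an exponential dose-response model
# P(illness) = 1 - exp(-r * dose). All parameter values are assumed.
import math
import random

def simulate_agi_risk(n=10000, r=0.01, seed=7):
    """Return the median single-event AGI risk across Monte Carlo draws."""
    rng = random.Random(seed)
    risks = []
    for _ in range(n):
        conc = rng.lognormvariate(-2.0, 1.0)   # pathogens per m^3 of air (assumed)
        breathing = 0.83                        # m^3 inhaled per hour (assumed)
        duration = rng.uniform(0.5, 2.0)        # hours of exposure (assumed)
        dose = conc * breathing * duration
        risks.append(1.0 - math.exp(-r * dose))
    risks.sort()
    return risks[len(risks) // 2]
```

In the actual study the concentration model is a hierarchical empirical fit with distance and wind speed as predictors, which is what drives the reported decline in risk with distance.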

  7. Quantitative microbial risk assessment for spray irrigation of dairy manure based on an empirical fate and transport model

    USGS Publications Warehouse

    Burch, Tucker R; Spencer, Susan K.; Stokdyk, Joel; Kieke, Burney A; Larson, Rebecca A; Firnstahl, Aaron; Rule, Ana M; Borchardt, Mark A.

    2017-01-01

    BACKGROUND: Spray irrigation for land-applying livestock manure is increasing in the United States as farms become larger and economies of scale make manure irrigation affordable. Human health risks from exposure to zoonotic pathogens aerosolized during manure irrigation are not well understood. OBJECTIVES: We aimed to a) estimate human health risks due to aerosolized zoonotic pathogens downwind of spray-irrigated dairy manure; and b) determine which factors (e.g., distance, weather conditions) have the greatest influence on risk estimates. METHODS: We sampled downwind air concentrations of manure-borne fecal indicators and zoonotic pathogens during 21 full-scale dairy manure irrigation events at three farms. We fit these data to hierarchical empirical models and used model outputs in a quantitative microbial risk assessment (QMRA) to estimate risk [probability of acute gastrointestinal illness (AGI)] for individuals exposed to spray-irrigated dairy manure containing Campylobacter jejuni, enterohemorrhagic Escherichia coli (EHEC), or Salmonella spp. RESULTS: Median risk estimates from Monte Carlo simulations ranged from 10^−5 to 10^−2 and decreased with distance from the source. Risk estimates for Salmonella or EHEC-related AGI were most sensitive to the assumed level of pathogen prevalence in dairy manure, while risk estimates for C. jejuni were not sensitive to any single variable. Airborne microbe concentrations were negatively associated with distance and positively associated with wind speed, both of which were retained in models as a significant predictor more often than relative humidity, solar irradiation, or temperature. CONCLUSIONS: Our model-based estimates suggest that reducing pathogen prevalence and concentration in source manure would reduce the risk of AGI from exposure to manure irrigation, and that increasing the distance from irrigated manure (i.e., setbacks) and limiting irrigation to times of low wind speed may also reduce risk.

  8. Multiple imputation for estimating the risk of developing dementia and its impact on survival.

    PubMed

    Yu, Binbing; Saczynski, Jane S; Launer, Lenore

    2010-10-01

    Dementia, Alzheimer's disease in particular, is one of the major causes of disability and decreased quality of life among the elderly and a leading obstacle to successful aging. Given the profound impact on public health, much research has focused on the age-specific risk of developing dementia and the impact on survival. Early work has discussed various methods of estimating age-specific incidence of dementia, among which the illness-death model is popular for modeling disease progression. In this article we use multiple imputation to fit multi-state models for survival data with interval censoring and left truncation. This approach allows semi-Markov models in which survival after dementia depends on onset age. Such models can be used to estimate the cumulative risk of developing dementia in the presence of the competing risk of dementia-free death. Simulations are carried out to examine the performance of the proposed method. Data from the Honolulu Asia Aging Study are analyzed to estimate the age-specific and cumulative risks of dementia and to examine the effect of major risk factors on dementia onset and death.
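The competing-risk idea in this record (cumulative risk of dementia in the presence of dementia-free death) can be illustrated with a discrete-time calculation. The constant hazards below are made-up numbers; the paper's models are semi-Markov with interval censoring and multiple imputation.

```python
# Minimal sketch of cumulative incidence with a competing risk: a person can
# develop dementia (hazard h_dem per year) or die dementia-free (h_death).
# Hazard values are illustrative assumptions.

def cumulative_dementia_risk(h_dem=0.02, h_death=0.05, years=30):
    """Discrete-time cumulative incidence of dementia over `years`."""
    alive_free = 1.0      # probability of being alive and dementia-free
    cum_dementia = 0.0
    for _ in range(years):
        cum_dementia += alive_free * h_dem     # develop dementia this year
        alive_free *= (1.0 - h_dem - h_death)  # survive free of both events
    return cum_dementia
```

Note the lifetime limit is h_dem / (h_dem + h_death), not 1: the competing risk of death caps the cumulative dementia risk, which is exactly why naive "1 minus survival" calculations overstate it.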

  9. Development and validation of risk models to select ever-smokers for CT lung-cancer screening

    PubMed Central

    Katki, Hormuzd A.; Kovalchik, Stephanie A.; Berg, Christine D.; Cheung, Li C.; Chaturvedi, Anil K.

    2016-01-01

    Importance The US Preventive Services Task Force (USPSTF) recommends computed-tomography (CT) lung-cancer screening for ever-smokers ages 55-80 years who smoked at least 30 pack-years with no more than 15 years since quitting. However, selecting ever-smokers for screening using individualized lung-cancer risk calculations may be more effective and efficient than current USPSTF recommendations. Objective Comparison of modeled outcomes from risk-based CT lung-screening strategies versus USPSTF recommendations. Design/Setting/Participants Empirical risk models for lung-cancer incidence and death in the absence of CT screening using data on ever-smokers from the Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial (PLCO; 1993-2009) control group. Covariates included age, education, sex, race, smoking intensity/duration/quit-years, Body Mass Index, family history of lung-cancer, and self-reported emphysema. Model validation in the chest radiography groups of the PLCO and the National Lung Screening Trial (NLST; 2002-2009), with additional validation of the death model in the National Health Interview Survey (NHIS; 1997-2001), a representative sample of the US. Models applied to US ever-smokers ages 50-80 (NHIS 2010-2012) to estimate outcomes of risk-based selection for CT lung-screening, assuming screening for all ever-smokers yields the percent changes in lung-cancer detection and death observed in the NLST. Exposure Annual CT lung-screening for 3 years. Main Outcomes and Measures Model validity: calibration (number of model-predicted cases divided by number of observed cases (Estimated/Observed)) and discrimination (Area-Under-Curve (AUC)). Modeled screening outcomes: estimated number of screen-avertable lung-cancer deaths, estimated screening effectiveness (number needed to screen (NNS) to prevent 1 lung-cancer death). Results Lung-cancer incidence and death risk models were well-calibrated in PLCO and NLST. 
The lung-cancer death model calibrated and discriminated well for US ever-smokers ages 50-80 (NHIS 1997-2001: Estimated/Observed=0.94, 95%CI=0.84-1.05; AUC=0.78, 95%CI=0.76-0.80). Under USPSTF recommendations, the models estimated 9.0 million US ever-smokers would qualify for lung-cancer screening and 46,488 (95%CI=43,924-49,053) lung-cancer deaths were estimated as screen-avertable over 5 years (estimated NNS=194, 95%CI=187-201). In contrast, risk-based selection screening of the same number of ever-smokers (9.0 million) at highest 5-year lung-cancer risk (≥1.9%) was estimated to avert 20% more deaths (55,717; 95%CI=53,033-58,400) and was estimated to reduce the estimated NNS by 17% (NNS=162, 95%CI=157-166). Conclusions and Relevance Among a cohort of US ever-smokers age 50-80 years, application of a risk-based model for CT screening for lung cancer compared with a model based on USPSTF recommendations was estimated to be associated with a greater number of lung-cancer deaths prevented over 5 years along with a lower NNS to prevent 1 lung-cancer death. PMID:27179989

  10. Development and Validation of Risk Models to Select Ever-Smokers for CT Lung Cancer Screening.

    PubMed

    Katki, Hormuzd A; Kovalchik, Stephanie A; Berg, Christine D; Cheung, Li C; Chaturvedi, Anil K

    2016-06-07

    The US Preventive Services Task Force (USPSTF) recommends computed tomography (CT) lung cancer screening for ever-smokers aged 55 to 80 years who have smoked at least 30 pack-years with no more than 15 years since quitting. However, selecting ever-smokers for screening using individualized lung cancer risk calculations may be more effective and efficient than current USPSTF recommendations. Comparison of modeled outcomes from risk-based CT lung-screening strategies vs USPSTF recommendations. Empirical risk models for lung cancer incidence and death in the absence of CT screening using data on ever-smokers from the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial (PLCO; 1993-2009) control group. Covariates included age; education; sex; race; smoking intensity, duration, and quit-years; body mass index; family history of lung cancer; and self-reported emphysema. Model validation in the chest radiography groups of the PLCO and the National Lung Screening Trial (NLST; 2002-2009), with additional validation of the death model in the National Health Interview Survey (NHIS; 1997-2001), a representative sample of the United States. Models were applied to US ever-smokers aged 50 to 80 years (NHIS 2010-2012) to estimate outcomes of risk-based selection for CT lung screening, assuming screening for all ever-smokers yields the percent changes in lung cancer detection and death observed in the NLST. Annual CT lung screening for 3 years beginning at age 50 years. For model validity: calibration (number of model-predicted cases divided by number of observed cases [estimated/observed]) and discrimination (area under curve [AUC]). For modeled screening outcomes: estimated number of screen-avertable lung cancer deaths and estimated screening effectiveness (number needed to screen [NNS] to prevent 1 lung cancer death). Lung cancer incidence and death risk models were well calibrated in PLCO and NLST. 
The lung cancer death model calibrated and discriminated well for US ever-smokers aged 50 to 80 years (NHIS 1997-2001: estimated/observed = 0.94 [95%CI, 0.84-1.05]; AUC, 0.78 [95%CI, 0.76-0.80]). Under USPSTF recommendations, the models estimated 9.0 million US ever-smokers would qualify for lung cancer screening and 46,488 (95% CI, 43,924-49,053) lung cancer deaths were estimated as screen-avertable over 5 years (estimated NNS, 194 [95% CI, 187-201]). In contrast, risk-based selection screening of the same number of ever-smokers (9.0 million) at highest 5-year lung cancer risk (≥1.9%) was estimated to avert 20% more deaths (55,717 [95% CI, 53,033-58,400]) and was estimated to reduce the estimated NNS by 17% (NNS, 162 [95% CI, 157-166]). Among a cohort of US ever-smokers aged 50 to 80 years, application of a risk-based model for CT screening for lung cancer compared with a model based on USPSTF recommendations was estimated to be associated with a greater number of lung cancer deaths prevented over 5 years, along with a lower NNS to prevent 1 lung cancer death.
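The screening-effectiveness numbers reported in this record follow directly from the definition of number needed to screen, which makes for a quick back-of-envelope check using the abstract's own figures.

```python
# NNS = screened population / estimated screen-avertable deaths,
# using the counts reported in the abstract.

def nns(screened, averted_deaths):
    return screened / averted_deaths

uspstf_nns = nns(9_000_000, 46_488)      # USPSTF criteria
risk_based_nns = nns(9_000_000, 55_717)  # risk-based selection, same population size
```

The ratio 55,717 / 46,488 ≈ 1.20 also recovers the abstract's "20% more deaths averted" claim.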

  11. SURE Estimates for a Heteroscedastic Hierarchical Model

    PubMed Central

    Xie, Xianchao; Kou, S. C.; Brown, Lawrence D.

    2014-01-01

    Hierarchical models are extensively studied and widely used in statistics and many other scientific areas. They provide an effective tool for combining information from similar resources and achieving partial pooling of inference. Since the seminal work by James and Stein (1961) and Stein (1962), shrinkage estimation has become one major focus for hierarchical models. For the homoscedastic normal model, it is well known that shrinkage estimators, especially the James-Stein estimator, have good risk properties. The heteroscedastic model, though more appropriate for practical applications, is less well studied, and it is unclear what types of shrinkage estimators are superior in terms of the risk. We propose in this paper a class of shrinkage estimators based on Stein’s unbiased estimate of risk (SURE). We study asymptotic properties of various common estimators as the number of means to be estimated grows (p → ∞). We establish the asymptotic optimality property for the SURE estimators. We then extend our construction to create a class of semi-parametric shrinkage estimators and establish corresponding asymptotic optimality results. We emphasize that though the form of our SURE estimators is partially obtained through a normal model at the sampling level, their optimality properties do not heavily depend on such distributional assumptions. We apply the methods to two real data sets and obtain encouraging results. PMID:25301976
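A rough sketch of a SURE-tuned shrinkage estimator in the spirit of this record: each observation X_i ~ N(theta_i, A_i) with known variance A_i is shrunk toward a common location, with the shrinkage parameter chosen by minimizing Stein's unbiased risk estimate on a grid. This is an illustrative simplification, not the paper's full class of estimators.

```python
# For delta(X) = X + g(X) with g(X) = -(A/(lam+A))(X - mu), SURE is
# A - 2A^2/(lam+A) + (A/(lam+A))^2 (X - mu)^2, summed over coordinates.

def sure(lam, mu, xs, vs):
    """Stein's unbiased estimate of total squared-error risk."""
    total = 0.0
    for x, a in zip(xs, vs):
        total += a - 2 * a * a / (lam + a) + (a / (lam + a)) ** 2 * (x - mu) ** 2
    return total

def sure_shrinkage(xs, vs):
    """Shrink each x_i toward the grand mean; pick lambda by grid-minimized SURE."""
    mu = sum(xs) / len(xs)                       # simple common location (a choice)
    grid = [0.01 * k for k in range(1, 2001)]
    lam = min(grid, key=lambda l: sure(l, mu, xs, vs))
    return [(lam * x + a * mu) / (lam + a) for x, a in zip(xs, vs)]
```

Because the shrinkage weight lam/(lam+A_i) adapts to each known variance A_i, noisier coordinates are pulled harder toward the common location, which is the heteroscedastic refinement of James-Stein that the paper studies.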

  12. Role of mathematical models in assessment of risk and in attempts to define management strategy.

    PubMed

    Flamm, W G; Winbush, J S

    1984-06-01

    Risk assessment of food-borne carcinogens is becoming a common practice at FDA. Actual risk is not being estimated, only the upper limit of risk. The risk assessment process involves a large number of steps and assumptions, many of which affect the numerical value estimated. The mathematical model which is to be applied is only one of the factors which affect these numerical values. To fulfill the policy objective of using the "worst plausible case" in estimating the upper limit of risk, recognition needs to be given to a proper balancing of assumptions and decisions. Interaction between risk assessors and risk managers should avoid making or giving the appearance of making specific technical decisions such as the choice of the mathematical model. The importance of this emerging field is too great to jeopardize it by inappropriately mixing scientific judgments with policy judgments. The risk manager should understand fully the points and range of uncertainty involved in arriving at the estimates of risk which must necessarily affect the choice of the policy or regulatory options available.

  13. Advanced risk assessment of the effects of graphite fibers on electronic and electric equipment, phase 1. [simulating vulnerability to airports and communities from fibers released during aircraft fires

    NASA Technical Reports Server (NTRS)

    Pocinki, L. S.; Kaplan, L. D.; Cornell, M. E.; Greenstone, R.

    1979-01-01

    A model was developed to generate quantitative estimates of the risk associated with the release of graphite fibers during fires involving commercial aircraft constructed with graphite fiber composite materials. The model was used to estimate the risk associated with accidents at several U.S. airports. These results were then combined to provide an estimate of the total risk to the nation.

  14. Variance computations for functionals of absolute risk estimates.

    PubMed

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
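This record compares influence-function variance estimates against bootstrap variances. The bootstrap side of that comparison is easy to sketch; here the "absolute risk" is stood in for by a simple event proportion on simulated data, not the paper's full breast-cancer risk model.

```python
# Nonparametric bootstrap variance for a statistic of the data; illustrated
# with an event proportion as a stand-in for an absolute-risk estimate.
import random

def bootstrap_variance(data, stat, n_boot=2000, seed=3):
    rng = random.Random(seed)
    n = len(data)
    reps = []
    for _ in range(n_boot):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        reps.append(stat(resample))
    mean = sum(reps) / n_boot
    return sum((r - mean) ** 2 for r in reps) / (n_boot - 1)

events = [1] * 30 + [0] * 70            # 30% observed risk in n=100
var_boot = bootstrap_variance(events, lambda d: sum(d) / len(d))
# For comparison, the analytic binomial variance is p(1-p)/n = 0.3*0.7/100 = 0.0021.
```

The influence-function approach in the paper gets to essentially the same variance analytically, which is why it is attractive when bootstrapping a complex risk model is expensive.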

  15. Variance computations for functionals of absolute risk estimates

    PubMed Central

    Pfeiffer, R.M.; Petracci, E.

    2011-01-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates. PMID:21643476

  16. Probability based models for estimation of wildfire risk

    Treesearch

    Haiganoush Preisler; D. R. Brillinger; R. E. Burgan; John Benoit

    2004-01-01

    We present a probability-based model for estimating fire risk. Risk is defined using three probabilities: the probability of fire occurrence; the conditional probability of a large fire given ignition; and the unconditional probability of a large fire. The model is based on grouped data at the 1 km²-day cell level. We fit a spatially and temporally explicit non-...
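The three probabilities in this record are linked by the chain rule: the unconditional probability of a large fire is the probability of fire occurrence times the conditional probability of a large fire given ignition. The per-cell values below are assumed for illustration, not the fitted model's outputs.

```python
# Chain-rule composition of the three wildfire probabilities, per 1 km^2-day
# cell, plus the implied chance of at least one large fire over a season.
# All numbers are illustrative assumptions.

p_occurrence = 0.001                     # P(fire occurs in the cell-day)
p_large_given_fire = 0.05                # P(large fire | ignition)
p_large = p_occurrence * p_large_given_fire   # unconditional P(large fire)

season_days = 180
p_large_in_season = 1.0 - (1.0 - p_large) ** season_days
```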

  17. A framework for global river flood risk assessments

    NASA Astrophysics Data System (ADS)

    Winsemius, H. C.; Van Beek, L. P. H.; Jongman, B.; Ward, P. J.; Bouwman, A.

    2012-08-01

    There is an increasing need for strategic global assessments of flood risks in current and future conditions. In this paper, we propose a framework for global flood risk assessment for river floods, which can be applied in current conditions, as well as in future conditions due to climate and socio-economic changes. The framework's goal is to establish flood hazard and impact estimates at a high enough resolution to allow for their combination into a risk estimate. The framework estimates hazard at high resolution (~1 km2) using global forcing datasets of the current (or in scenario mode, future) climate, a global hydrological model, a global flood routing model, and importantly, a flood extent downscaling routine. The second component of the framework combines hazard with flood impact models at the same resolution (e.g. damage, affected GDP, and affected population) to establish indicators for flood risk (e.g. annual expected damage, affected GDP, and affected population). The framework has been applied using the global hydrological model PCR-GLOBWB, which includes an optional global flood routing model DynRout, combined with scenarios from the Integrated Model to Assess the Global Environment (IMAGE). We performed downscaling of the hazard probability distributions to 1 km2 resolution with a new downscaling algorithm, applied on Bangladesh as a first case-study application area. We demonstrate the risk assessment approach in Bangladesh based on GDP per capita data, population, and land use maps for 2010 and 2050. Validation of the hazard and damage estimates has been performed using the Dartmouth Flood Observatory database and damage estimates from the EM-DAT database and World Bank sources. We discuss and show sensitivities of the estimated risks with regard to the use of different climate input sets, decisions made in the downscaling algorithm, and different approaches to establish impact models.

  18. Using probabilistic terrorism risk modeling for regulatory benefit-cost analysis: application to the Western hemisphere travel initiative in the land environment.

    PubMed

    Willis, Henry H; LaTourrette, Tom

    2008-04-01

    This article presents a framework for using probabilistic terrorism risk modeling in regulatory analysis. We demonstrate the framework with an example application involving a regulation under consideration, the Western Hemisphere Travel Initiative for the Land Environment (WHTI-L). First, we estimate annualized loss from terrorist attacks with the Risk Management Solutions (RMS) Probabilistic Terrorism Model. We then estimate the critical risk reduction, which is the risk-reducing effectiveness of WHTI-L needed for its benefit, in terms of reduced terrorism loss in the United States, to exceed its cost. Our analysis indicates that the critical risk reduction depends strongly not only on uncertainties in the terrorism risk level, but also on uncertainty in the cost of regulation and how casualties are monetized. For a terrorism risk level based on the RMS standard risk estimate, the baseline regulatory cost estimate for WHTI-L, and a range of casualty cost estimates based on the willingness-to-pay approach, our estimate for the expected annualized loss from terrorism ranges from $2.7 billion to $5.2 billion. For this range in annualized loss, the critical risk reduction for WHTI-L ranges from 7% to 13%. Basing results on a lower risk level that results in halving the annualized terrorism loss would double the critical risk reduction (14-26%), and basing the results on a higher risk level that results in a doubling of the annualized terrorism loss would cut the critical risk reduction in half (3.5-6.6%). Ideally, decisions about terrorism security regulations and policies would be informed by true benefit-cost analyses in which the estimated benefits are compared to costs. Such analyses for terrorism security efforts face substantial impediments stemming from the great uncertainty in the terrorist threat and the very low recurrence interval for large attacks. 
Several approaches can be used to estimate how a terrorism security program or regulation reduces the distribution of risks it is intended to manage. But, continued research to develop additional tools and data is necessary to support application of these approaches. These include refinement of models and simulations, engagement of subject matter experts, implementation of program evaluation, and estimating the costs of casualties from terrorism events.
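The "critical risk reduction" logic above reduces to one division: the regulation pays for itself when (risk reduction) × (annualized loss) exceeds its annualized cost, so the critical value is cost divided by loss. The cost figure below is an assumed placeholder chosen to reproduce the abstract's 7-13% range, not the study's actual input.

```python
# Critical risk reduction = annualized regulatory cost / annualized terrorism
# loss. The $0.36B cost is a hypothetical stand-in for illustration.

def critical_risk_reduction(annual_cost, annualized_loss):
    return annual_cost / annualized_loss

crr_at_high_loss = critical_risk_reduction(0.36e9, 5.2e9)  # high-loss end of range
crr_at_low_loss = critical_risk_reduction(0.36e9, 2.7e9)   # low-loss end of range
```

The inverse relationship also explains the abstract's sensitivity statement: halving the annualized loss doubles the critical risk reduction, and doubling the loss halves it.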

  19. A framework for global river flood risk assessments

    NASA Astrophysics Data System (ADS)

    Winsemius, H. C.; Van Beek, L. P. H.; Jongman, B.; Ward, P. J.; Bouwman, A.

    2013-05-01

    There is an increasing need for strategic global assessments of flood risks in current and future conditions. In this paper, we propose a framework for global flood risk assessment for river floods, which can be applied in current conditions, as well as in future conditions due to climate and socio-economic changes. The framework's goal is to establish flood hazard and impact estimates at a high enough resolution to allow for their combination into a risk estimate, which can be used for strategic global flood risk assessments. The framework estimates hazard at a resolution of ~ 1 km2 using global forcing datasets of the current (or in scenario mode, future) climate, a global hydrological model, a global flood-routing model, and more importantly, an inundation downscaling routine. The second component of the framework combines hazard with flood impact models at the same resolution (e.g. damage, affected GDP, and affected population) to establish indicators for flood risk (e.g. annual expected damage, affected GDP, and affected population). The framework has been applied using the global hydrological model PCR-GLOBWB, which includes an optional global flood routing model DynRout, combined with scenarios from the Integrated Model to Assess the Global Environment (IMAGE). We performed downscaling of the hazard probability distributions to 1 km2 resolution with a new downscaling algorithm, applied on Bangladesh as a first case study application area. We demonstrate the risk assessment approach in Bangladesh based on GDP per capita data, population, and land use maps for 2010 and 2050. Validation of the hazard estimates has been performed using the Dartmouth Flood Observatory database. This was done by comparing a high return period flood with the maximum observed extent, as well as by comparing a time series of a single event with Dartmouth imagery of the event. 
Validation of modelled damage estimates was performed using observed damage estimates from the EM-DAT database and World Bank sources. We discuss and show sensitivities of the estimated risks with regard to the use of different climate input sets, decisions made in the downscaling algorithm, and different approaches to establish impact models.
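The "annual expected damage" indicator in this framework combines hazard and impact by integrating damage over the exceedance-probability curve. A common sketch uses the trapezoidal rule over a handful of return periods; the damage figures below are invented for illustration.

```python
# Expected annual damage (EAD) as the trapezoidal integral of damage against
# annual exceedance probability. Return periods and damages are illustrative.

def expected_annual_damage(return_periods, damages):
    """Trapezoidal integral of damage vs. annual exceedance probability."""
    probs = [1.0 / t for t in return_periods]   # exceedance probability = 1/T
    pairs = sorted(zip(probs, damages))          # ascending probability
    ead = 0.0
    for (p0, d0), (p1, d1) in zip(pairs, pairs[1:]):
        ead += 0.5 * (d0 + d1) * (p1 - p0)
    return ead

# Hypothetical damages (e.g. billions USD) at 2-, 10-, 50- and 250-year floods:
ead = expected_annual_damage([2, 10, 50, 250], [0.0, 5.0, 20.0, 60.0])
```

In the framework, the damages at each return period come from the downscaled inundation maps combined with exposure layers (GDP, population, land use) at ~1 km^2 resolution.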

  20. Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data

    EPA Science Inventory

    The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approa...

  1. A diversity index for model space selection in the estimation of benchmark and infectious doses via model averaging.

    PubMed

    Kim, Steven B; Kodell, Ralph L; Moon, Hojin

    2014-03-01

    In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA. © 2013 Society for Risk Analysis.
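
    Model averaging of this kind can be sketched with Akaike weights, one common weighting scheme (the paper's diversity index governs which models enter the space; the AIC values and ED estimates below are purely hypothetical):

```python
import numpy as np

# Hypothetical fits: per-model AIC and the estimated effective dose (ED)
# at a small extra risk. ED estimates can differ widely across models that
# all fit the observed dose range adequately.
aic = np.array([102.3, 103.1, 105.8, 110.2])
ed = np.array([0.48, 0.55, 0.31, 0.90])

# Akaike weights: exp(-delta_i / 2), normalized over the model space.
delta = aic - aic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()

ed_ma = float(np.dot(weights, ed))  # model-averaged ED point estimate
```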

  2. Estimating Risk of Natural Gas Portfolios by Using GARCH-EVT-Copula Model.

    PubMed

    Tang, Jiechen; Zhou, Chao; Yuan, Xinyu; Sriboonchitta, Songsak

    2015-01-01

    This paper concentrates on estimating the risk of Title Transfer Facility (TTF) Hub natural gas portfolios by using the GARCH-EVT-copula model. We first use the univariate ARMA-GARCH model to model each natural gas return series. Second, the extreme value distribution (EVT) is fitted to the tails of the residuals to model marginal residual distributions. Third, multivariate Gaussian copula and Student t-copula are employed to describe the natural gas portfolio risk dependence structure. Finally, we simulate N portfolios and estimate value at risk (VaR) and conditional value at risk (CVaR). Our empirical results show that, for an equally weighted portfolio of five natural gases, the VaR and CVaR values obtained from the Student t-copula are larger than those obtained from the Gaussian copula. Moreover, when minimizing the portfolio risk, the optimal natural gas portfolio weights are found to be similar across the multivariate Gaussian copula and Student t-copula and different confidence levels.
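
    The final simulation step can be sketched with NumPy/SciPy: draws from a Student t-copula are mapped through illustrative normal margins (the paper instead fits ARMA-GARCH margins with EVT tails), and VaR and CVaR are read off the simulated equally weighted portfolio. All numbers are assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_assets, n_sim, nu = 5, 100_000, 6            # assumed sizes and t degrees of freedom
corr = np.full((n_assets, n_assets), 0.5) + 0.5 * np.eye(n_assets)

# Student t-copula draws: correlated t variates mapped to uniforms via the t CDF.
chol = np.linalg.cholesky(corr)
z = rng.standard_normal((n_sim, n_assets)) @ chol.T
w = rng.chisquare(nu, n_sim) / nu
u = stats.t.cdf(z / np.sqrt(w)[:, None], df=nu)

# Illustrative normal margins in place of the fitted ARMA-GARCH-EVT margins.
returns = stats.norm.ppf(u, loc=0.0, scale=0.01)
port = returns.mean(axis=1)                    # equally weighted portfolio return

alpha = 0.95
var = -np.quantile(port, 1 - alpha)            # value at risk at the 95% level
cvar = -port[port <= -var].mean()              # conditional value at risk
```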

  3. A proportional hazards regression model for the subdistribution with right-censored and left-truncated competing risks data

    PubMed Central

    Zhang, Xu; Zhang, Mei-Jie; Fine, Jason

    2012-01-01

    With competing risks failure time data, one often needs to assess covariate effects on the cumulative incidence probabilities. Fine and Gray proposed a proportional hazards regression model to directly model the subdistribution of a competing risk. They developed an estimating procedure for right-censored competing risks data based on inverse probability of censoring weighting. Right-censored and left-truncated competing risks data sometimes occur in biomedical research. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with right-censored and left-truncated data. We adopt a new weighting technique to estimate the parameters in this model and derive the large-sample properties of the proposed estimators. To illustrate the application of the new method, we analyze failure time data for children with acute leukemia; in this example, the failure times for children who had bone marrow transplants were left truncated. PMID:21557288
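
    The quantity being modelled, the cumulative incidence of one cause in the presence of competing risks, can be illustrated nonparametrically with the Aalen-Johansen estimator for right-censored data (the paper's regression model and left-truncation weighting are more involved). A minimal sketch assuming distinct event times:

```python
import numpy as np

def cumulative_incidence(times, events, cause=1):
    """Aalen-Johansen cumulative incidence for one cause.
    events: 0 = censored, 1, 2, ... = cause of failure.
    Returns a list of (time, CIF) steps for the requested cause."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    at_risk = len(times)
    surv = 1.0            # overall survival just before the current time
    cif = []
    for t, e in zip(times, events):
        if e == cause:
            prev = cif[-1][1] if cif else 0.0
            cif.append((t, prev + surv / at_risk))  # increment: S(t-) * dN/at_risk
        if e != 0:
            surv *= 1 - 1 / at_risk                 # any-cause survival update
        at_risk -= 1
    return cif
```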

  5. Screening Mammography: Patient Perceptions and Preferences Regarding Communication of Estimated Breast Cancer Risk.

    PubMed

    Amornsiripanitch, Nita; Mangano, Mark; Niell, Bethany L

    2017-05-01

    Many models exist to estimate a woman's risk of developing breast cancer. At screening mammography, many imaging centers collect the data required for these models to identify women who may benefit from supplemental screening and referral for cancer risk assessment. The purpose of this study was to discern the perceptions and preferences of screening mammography patients regarding communication of estimated breast cancer risk. An anonymous survey was distributed to screening and surveillance mammography patients between April and June 2015. Survey questions were designed to assess patient preferences regarding the receipt and complexity of risk estimate communication, including hypothetical scenarios with and without > 20% estimated risk of breast cancer. The McNemar test and the Wilcoxon signed rank test were used, with p ≤ 0.05 considered statistically significant. The survey was distributed to 1061 screening and surveillance mammography patients, and 503 patients responded (response rate, 47%). Although 86% (431/503) of patients expressed interest in learning their estimated risk, only 8% (38/503) had undergone formal risk assessment. The preferred method (241 respondents [26%]) of communicating risk < 20% was a mailed letter accompanying annual mammogram results. For risk > 20%, patients preferred oral communication and were 10 times as likely to choose only oral communication (p < 0.000001). For both risk < 20% and > 20%, patients preferred to learn their estimated risk in great detail (69% and 85%, respectively), although women were significantly more likely to choose greater detail for risk > 20% (p < 0.00001). Screening mammography patients expressed interest in learning their estimated risk of breast cancer regardless of their level of hypothetical risk.

  6. Revisiting the Table 2 fallacy: A motivating example examining preeclampsia and preterm birth.

    PubMed

    Bandoli, Gretchen; Palmsten, Kristin; Chambers, Christina D; Jelliffe-Pawlowski, Laura L; Baer, Rebecca J; Thompson, Caroline A

    2018-05-21

    A "Table 2 Fallacy," as coined by Westreich and Greenland, is the reporting of multiple adjusted effect estimates from a single model. This practice, which remains common in the published literature, can be problematic when different types of effect estimates are presented together in a single table. The purpose of this paper is to quantitatively illustrate this potential for misinterpretation with an example estimating the effects of preeclampsia on preterm birth (PTB). We analysed a retrospective population-based cohort of 2 963 888 singleton births in California between 2007 and 2012. We performed a modified Poisson regression to calculate the total effect of preeclampsia on the risk of PTB, adjusting for previous preterm birth, pregnancy alcohol abuse, maternal education, and maternal socio-demographic factors (Model 1). In subsequent models, we report the total effects of previous preterm birth, alcohol abuse, and education on the risk of PTB, comparing and contrasting the controlled direct effects, total effects, and confounded effect estimates resulting from Model 1. The effect estimate for previous preterm birth (a controlled direct effect in Model 1) increased 10% when estimated as a total effect. The risk ratio for alcohol abuse, biased due to an uncontrolled confounder in Model 1, was reduced by 23% when adjusted for drug abuse. The risk ratio for maternal education, solely a predictor of the outcome, was essentially unchanged. Reporting multiple effect estimates from a single model may lead to misinterpretation and lack of reproducibility. This example highlights the need for careful consideration of the types of effects estimated in statistical models. © 2018 John Wiley & Sons Ltd.

  7. Space Radiation Cancer Risk Projections and Uncertainties - 2010

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Kim, Myung-Hee Y.; Chappell, Lori J.

    2011-01-01

    Uncertainties in estimating health risks from galactic cosmic rays greatly limit space mission lengths and potential risk mitigation evaluations. NASA limits astronaut exposures to a 3% risk of exposure-induced death and protects against uncertainties using an assessment of 95% confidence intervals in the projection model. Revisions to this model for lifetime cancer risks from space radiation and new estimates of model uncertainties are described here. We review models of space environments and transport code predictions of organ exposures, and characterize uncertainties in these descriptions. We summarize recent analysis of low linear energy transfer radio-epidemiology data, including revision to Japanese A-bomb survivor dosimetry, longer follow-up of exposed cohorts, and reassessments of dose and dose-rate reduction effectiveness factors. We compare these projections and uncertainties with earlier estimates. Current understanding of radiation quality effects and recent data on factors of relative biological effectiveness and particle track structure are reviewed. Recent radiobiology experiment results provide new information on solid cancer and leukemia risks from heavy ions. We also consider deviations from the paradigm of linearity at low doses of heavy ions motivated by non-targeted effects models. New findings and knowledge are used to revise the NASA risk projection model for space radiation cancer risks.

  8. On the estimation of risk associated with an attenuation prediction

    NASA Technical Reports Server (NTRS)

    Crane, R. K.

    1992-01-01

    Viewgraphs from a presentation on the estimation of risk associated with an attenuation prediction are presented. Topics covered include: link failure - attenuation exceeding a specified threshold for a specified time interval or intervals; risk - the probability of one or more failures during the lifetime of the link or during a specified accounting interval; the problem - modeling the probability of attenuation by rainfall to predict the attenuation threshold for a specified risk; and accounting for the inadequacy of a model or models.
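
    The risk definition above has a simple closed form: if each accounting interval fails independently with probability p, the risk of one or more failures over n intervals is 1 - (1 - p)^n. An illustrative calculation with assumed numbers:

```python
# Risk of one or more link failures over the link lifetime, assuming each
# accounting interval fails independently. Numbers are illustrative only.
p = 0.001          # per-interval probability that attenuation exceeds threshold
n = 365            # accounting intervals in the link lifetime
risk = 1 - (1 - p) ** n   # probability of at least one failure
```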

  9. Development and external validation of a risk-prediction model to predict 5-year overall survival in advanced larynx cancer.

    PubMed

    Petersen, Japke F; Stuiver, Martijn M; Timmermans, Adriana J; Chen, Amy; Zhang, Hongzhen; O'Neill, James P; Deady, Sandra; Vander Poorten, Vincent; Meulemans, Jeroen; Wennerberg, Johan; Skroder, Carl; Day, Andrew T; Koch, Wayne; van den Brekel, Michiel W M

    2018-05-01

    TNM classification inadequately estimates patient-specific overall survival (OS). We aimed to improve on this by developing a risk-prediction model for patients with advanced larynx cancer. Cohort study. We developed a risk-prediction model to estimate the 5-year OS rate based on a cohort of 3,442 patients with T3T4N0N+M0 larynx cancer. The model was internally validated using bootstrapping samples and externally validated on patient data from five external centers (n = 770). The main outcome was performance of the model as tested by discrimination, calibration, and the ability to distinguish risk groups based on tertiles from the derivation dataset. The model performance was compared to a model based on T and N classification only. We included age, gender, T and N classification, and subsite as prognostic variables in the standard model. After external validation, the standard model had a significantly better fit than a model based on T and N classification alone (C statistic, 0.59 vs. 0.55, P < .001). The model distinguished well among three risk groups based on tertiles of the risk score. Adding treatment modality to the model did not decrease the predictive power. As a post hoc analysis, we tested the added value of comorbidity as scored by the American Society of Anesthesiologists score in a subsample, which increased the C statistic to 0.68. A risk-prediction model for patients with advanced larynx cancer, consisting of readily available clinical variables, gives more accurate estimates of the 5-year OS rate than a model based on T and N classification alone. 2c. Laryngoscope, 128:1140-1145, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
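
    The C statistic used to compare the models is the probability that a randomly chosen case receives a higher risk score than a randomly chosen non-case. A minimal sketch for a binary outcome (the survival version restricts attention to pairs that are comparable under censoring):

```python
import numpy as np

def c_statistic(risk_score, event):
    """Concordance: P(score_case > score_control), ties counted 1/2."""
    risk_score = np.asarray(risk_score, float)
    event = np.asarray(event, bool)
    cases, controls = risk_score[event], risk_score[~event]
    diff = cases[:, None] - controls[None, :]       # all case-control pairs
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size
```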

  10. Screening level risk assessment model for chemical fate and effects in the environment.

    PubMed

    Arnot, Jon A; Mackay, Don; Webster, Eva; Southwood, Jeanette M

    2006-04-01

    A screening level risk assessment model is developed and described to assess and prioritize chemicals by estimating environmental fate and transport, bioaccumulation, and exposure to humans and wildlife for a unit emission rate. The most sensitive risk endpoint is identified and a critical emission rate is then calculated as a result of that endpoint being reached. Finally, this estimated critical emission rate is compared with the estimated actual emission rate as a risk assessment factor. This "back-tracking" process avoids the use of highly uncertain emission rate data as model input. The application of the model is demonstrated in detail for three diverse chemicals and in less detail for a group of 70 chemicals drawn from the Canadian Domestic Substances List. The simple Level II and the more complex Level III fate calculations are used to "bin" substances into categories of similar probable risk. The essential role of the model is to synthesize information on chemical and environmental properties within a consistent mass balance framework to yield an overall estimate of screening level risk with respect to the defined endpoint. The approach may be useful to identify and prioritize those chemicals of commerce that are of greatest potential concern and require more comprehensive modeling and monitoring evaluations in actual regional environments and food webs.

  11. Discrimination measures for survival outcomes: connection between the AUC and the predictiveness curve.

    PubMed

    Viallon, Vivian; Latouche, Aurélien

    2011-03-01

    Finding out biomarkers and building risk scores to predict the occurrence of survival outcomes is a major concern of clinical epidemiology, and so is the evaluation of prognostic models. In this paper, we are concerned with the estimation of the time-dependent AUC--area under the receiver-operating curve--which naturally extends standard AUC to the setting of survival outcomes and enables to evaluate the discriminative power of prognostic models. We establish a simple and useful relation between the predictiveness curve and the time-dependent AUC--AUC(t). This relation confirms that the predictiveness curve is the key concept for evaluating calibration and discrimination of prognostic models. It also highlights that accurate estimates of the conditional absolute risk function should yield accurate estimates for AUC(t). From this observation, we derive several estimators for AUC(t) relying on distinct estimators of the conditional absolute risk function. An empirical study was conducted to compare our estimators with the existing ones and assess the effect of model misspecification--when estimating the conditional absolute risk function--on the AUC(t) estimation. We further illustrate the methodology on the Mayo PBC and the VA lung cancer data sets. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Calibration plots for risk prediction models in the presence of competing risks.

    PubMed

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-08-15

    A predicted risk of 17% can be called reliable if the event can be expected to occur in about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. Three practical problems are often encountered when the aim is to display or test whether a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that, when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values that are combined with a nearest-neighbor smoother and a cross-validation approach that together address all three problems. Copyright © 2014 John Wiley & Sons, Ltd.
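
    The jackknife pseudo-values underlying the proposed calibration curves can be sketched for the simpler single-event case: each subject's pseudo-observation for the event probability at time t is n times the full-sample Kaplan-Meier estimate minus (n - 1) times the leave-one-out estimate. A minimal illustration assuming distinct event times:

```python
import numpy as np

def km_survival(times, events, t):
    """Kaplan-Meier survival probability at time t (distinct times assumed)."""
    order = np.argsort(times)
    s, at_risk = 1.0, len(times)
    for ti, ei in zip(np.asarray(times)[order], np.asarray(events)[order]):
        if ti > t:
            break
        if ei:
            s *= 1 - 1 / at_risk
        at_risk -= 1
    return s

def pseudo_values(times, events, t):
    """Jackknife pseudo-observations for the event probability 1 - S(t)."""
    times, events = np.asarray(times), np.asarray(events)
    n = len(times)
    full = 1 - km_survival(times, events, t)
    return np.array([
        n * full - (n - 1) * (1 - km_survival(np.delete(times, i),
                                              np.delete(events, i), t))
        for i in range(n)
    ])
```

    With no censoring, the pseudo-values reduce to the event indicators I(T ≤ t), which is what makes them usable as regression outcomes.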

  13. What does my patient's coronary artery calcium score mean? Combining information from the coronary artery calcium score with information from conventional risk factors to estimate coronary heart disease risk

    PubMed Central

    Pletcher, Mark J; Tice, Jeffrey A; Pignone, Michael; McCulloch, Charles; Callister, Tracy Q; Browner, Warren S

    2004-01-01

    Background The coronary artery calcium (CAC) score is an independent predictor of coronary heart disease. We sought to combine information from the CAC score with information from conventional cardiac risk factors to produce post-test risk estimates, and to determine whether the score may add clinically useful information. Methods We measured the independent cross-sectional associations between conventional cardiac risk factors and the CAC score among asymptomatic persons referred for non-contrast electron beam computed tomography. Using the resulting multivariable models and published CAC score-specific relative risk estimates, we estimated post-test coronary heart disease risk in a number of different scenarios. Results Among 9341 asymptomatic study participants (age 35–88 years, 40% female), we found that conventional coronary heart disease risk factors including age, male sex, self-reported hypertension, diabetes and high cholesterol were independent predictors of the CAC score, and we used the resulting multivariable models for predicting post-test risk in a variety of scenarios. Our models predicted, for example, that a 60-year-old non-smoking non-diabetic woman with hypertension and high cholesterol would have a 47% chance of having a CAC score of zero, reducing her 10-year risk estimate from 15% (per Framingham) to 6–9%; if her score were over 100, however (a 17% chance), her risk estimate would be markedly higher (25–51% in 10 years). In low risk scenarios, the CAC score is very likely to be zero or low, and unlikely to change management. Conclusion Combining information from the CAC score with information from conventional risk factors can change assessment of coronary heart disease risk to an extent that may be clinically important, especially when the pre-test 10-year risk estimate is intermediate. The attached spreadsheet makes these calculations easy. PMID:15327691
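
    One simple way to combine a pre-test risk with score-specific relative risks, in the spirit of the approach described, is to rescale the published relative risks so that the category-averaged post-test risk reproduces the pre-test estimate. All numbers below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

# Illustrative only: CAC score categories, the patient's predicted chance of
# falling in each category (from a risk-factor model), and assumed published
# category-specific relative risks.
p_cat = np.array([0.47, 0.23, 0.13, 0.17])   # P(score category | risk factors)
rr = np.array([0.4, 1.0, 1.7, 3.0])          # relative risk per score category
pre_risk = 0.15                              # pre-test 10-year risk estimate

# Rescale so the post-test risks average back to the pre-test risk.
mean_rr = p_cat @ rr
post_risk = pre_risk * rr / mean_rr          # post-test risk for each category
```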

  14. Estimating the Tradeoff Between Risk Protection and Moral Hazard with a Nonlinear Budget Set Model of Health Insurance*

    PubMed Central

    Kowalski, Amanda E.

    2015-01-01

    Insurance induces a tradeoff between the welfare gains from risk protection and the welfare losses from moral hazard. Empirical work traditionally estimates each side of the tradeoff separately, potentially yielding mutually inconsistent results. I develop a nonlinear budget set model of health insurance that allows for both simultaneously. Nonlinearities in the budget set arise from deductibles, coinsurance rates, and stoplosses that alter moral hazard as well as risk protection. I illustrate the properties of my model by estimating it using data on employer sponsored health insurance from a large firm. PMID:26664035

  15. Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.

    PubMed

    Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah

    2012-01-01

    Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
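
    The COM-Poisson distribution at the heart of this GLM has pmf proportional to λ^y / (y!)^ν, with a normalizing constant usually computed by truncating its infinite series; ν < 1 gives overdispersion, ν > 1 underdispersion, and ν = 1 recovers the Poisson. A minimal sketch:

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def com_poisson_pmf(y, lam, nu, kmax=200):
    """COM-Poisson pmf P(Y = y) ∝ lam**y / (y!)**nu, with the normalizing
    constant approximated by truncating the series at kmax terms."""
    k = np.arange(kmax)
    log_z = logsumexp(k * np.log(lam) - nu * gammaln(k + 1))
    return float(np.exp(y * np.log(lam) - nu * gammaln(y + 1) - log_z))
```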

  16. Mixture models for undiagnosed prevalent disease and interval-censored incident disease: applications to a cohort assembled from electronic health records.

    PubMed

    Cheung, Li C; Pan, Qing; Hyun, Noorie; Schiffman, Mark; Fetterman, Barbara; Castle, Philip E; Lorey, Thomas; Katki, Hormuzd A

    2017-09-30

    For cost-effectiveness and efficiency, many large-scale general-purpose cohort studies are being assembled within large health-care providers who use electronic health records. Two key features of such data are that incident disease is interval-censored between irregular visits and there can be pre-existing (prevalent) disease. Because prevalent disease is not always immediately diagnosed, some disease diagnosed at later visits is actually undiagnosed prevalent disease. We treat prevalent disease as a point mass at time zero for clinical applications in which there is no interest in the time of prevalent disease onset. We demonstrate that the naive Kaplan-Meier cumulative risk estimator underestimates risks at early time points and overestimates later risks. We propose a general family of mixture models for undiagnosed prevalent disease and interval-censored incident disease that we call prevalence-incidence models. Parameters for parametric prevalence-incidence models, such as the logistic regression and Weibull survival (logistic-Weibull) model, are estimated by direct likelihood maximization or by EM algorithm. Non-parametric methods are proposed to calculate cumulative risks for cases without covariates. We compare naive Kaplan-Meier, logistic-Weibull, and non-parametric estimates of cumulative risk in the cervical cancer screening program at Kaiser Permanente Northern California. Kaplan-Meier provided poor estimates while the logistic-Weibull model was a close fit to the non-parametric estimates. Our findings support our use of logistic-Weibull models to develop the risk estimates that underlie current US risk-based cervical cancer screening guidelines. Published 2017. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
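
    A parametric prevalence-incidence model of the kind described can be sketched as a point mass of undiagnosed prevalent disease at time zero plus a survival distribution for incident disease thereafter. The Weibull incident component below uses illustrative parameters, not the paper's fitted logistic-Weibull model:

```python
import numpy as np

def mixture_cumulative_risk(t, prev, shape, scale):
    """Cumulative disease risk under a prevalence-incidence mixture:
    a point mass `prev` of prevalent disease at t = 0, plus a Weibull
    distribution for incident disease among the disease-free fraction."""
    t = np.asarray(t, float)
    return prev + (1 - prev) * (1 - np.exp(-(t / scale) ** shape))
```

    The naive Kaplan-Meier estimator misses the point mass at zero, which is exactly why it underestimates early risk in such cohorts.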

  17. Health effects models for nuclear power plant accident consequence analysis: Low LET radiation: Part 2, Scientific bases for health effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abrahamson, S.; Bender, M.; Book, S.

    1989-05-01

    This report provides dose-response models intended to be used in estimating the radiological health effects of nuclear power plant accidents. Models of early and continuing effects, cancers and thyroid nodules, and genetic effects are provided. Two-parameter Weibull hazard functions are recommended for estimating the risks of early and continuing health effects. Three potentially lethal early effects -- the hematopoietic, pulmonary and gastrointestinal syndromes -- are considered. Linear and linear-quadratic models are recommended for estimating cancer risks. Parameters are given for analyzing the risks of seven types of cancer in adults -- leukemia, bone, lung, breast, gastrointestinal, thyroid and "other". The "other" cancers category is intended to reflect the combined risks of multiple myeloma, lymphoma, and cancers of the bladder, kidney, brain, ovary, uterus and cervix. Models of childhood cancers due to in utero exposure are also provided. For most cancers, both incidence and mortality are addressed. Linear and linear-quadratic models are also recommended for assessing genetic risks. Five classes of genetic disease -- dominant, x-linked, aneuploidy, unbalanced translocation and multifactorial diseases -- are considered. In addition, the impact of radiation-induced genetic damage on the incidence of peri-implantation embryo losses is discussed. The uncertainty in modeling radiological health risks is addressed by providing central, upper, and lower estimates of all model parameters. Data are provided which should enable analysts to consider the timing and severity of each type of health risk. 22 refs., 14 figs., 51 tabs.
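
    The two recommended model forms can be sketched directly. For the early-effect Weibull hazard model, one common parameterization (an assumption here, not necessarily the report's exact form) scales the hazard so that the risk is 50% at the median effective dose D50; a linear-quadratic excess-risk form is shown alongside. Parameter values are illustrative only:

```python
import numpy as np

def early_effect_risk(dose, d50, shape):
    """Two-parameter Weibull hazard-function risk model for early effects:
    hazard = ln(2) * (dose / d50)**shape, so risk = 0.5 at dose = d50."""
    hazard = np.log(2) * (dose / d50) ** shape
    return 1 - np.exp(-hazard)

def linear_quadratic_excess_risk(dose, alpha, beta):
    """Linear-quadratic excess cancer risk per unit population."""
    return alpha * dose + beta * dose ** 2
```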

  18. The Effects of Revealed Information on Catastrophe Loss Projection Models' Characterization of Risk: Damage Vulnerability Evidence from Florida.

    PubMed

    Karl, J Bradley; Medders, Lorilee A; Maroney, Patrick F

    2016-06-01

    We examine whether the risk characterization estimated by catastrophic loss projection models is sensitive to the revelation of new information regarding risk type. We use commercial loss projection models from two widely employed modeling firms to estimate the expected hurricane losses of Florida Atlantic University's building stock, both including and excluding secondary information regarding hurricane mitigation features that influence damage vulnerability. We then compare the results of the models without and with this revealed information and find that the revelation of additional, secondary information influences modeled losses for the windstorm-exposed university building stock, primarily evidenced by meaningful percent differences in the loss exceedance output indicated after secondary modifiers are incorporated in the analysis. Secondary risk characteristics for the data set studied appear to have substantially greater impact on probable maximum loss estimates than on average annual loss estimates. While it may be intuitively expected for catastrophe models to indicate that secondary risk characteristics hold value for reducing modeled losses, the finding that the primary value of secondary risk characteristics is in reduction of losses in the "tail" (low probability, high severity) events is less intuitive, and therefore especially interesting. Further, we address the benefit-cost tradeoffs that commercial entities must consider when deciding whether to undergo the data collection necessary to include secondary information in modeling. Although we assert the long-term benefit-cost tradeoff is positive for virtually every entity, we acknowledge short-term disincentives to such an effort. © 2015 Society for Risk Analysis.

  19. Impact of using different blood donor subpopulations and models on the estimation of transfusion transmission residual risk of human immunodeficiency virus, hepatitis B virus, and hepatitis C virus in Zimbabwe.

    PubMed

    Mapako, Tonderai; Janssen, Mart P; Mvere, David A; Emmanuel, Jean C; Rusakaniko, Simbarashe; Postma, Maarten J; van Hulst, Marinus

    2016-06-01

    Various models for estimating the residual risk (RR) of transmission of infections by blood transfusion have been published, based mainly on data from high-income countries. However, obtaining the data required for such an assessment remains challenging in most developing settings. The National Blood Service Zimbabwe (NBSZ) adapted a published incidence-window period (IWP) model, which has less demanding data requirements. In this study we assess the impact of various definitions of blood donor subpopulations and models on RR estimates. We compared the outcomes of two published models and an adapted NBSZ model. The Schreiber IWP model (Model 1), an amended version (Model 2), and an adapted NBSZ model (Model 3) were applied. The three models variably include prevalence, incidence, preseroconversion intervals, mean lifetime risk, and person-years at risk. Annual mean RR estimates and 95% confidence intervals for each of the three models for human immunodeficiency virus (HIV), hepatitis B virus (HBV), and hepatitis C virus (HCV) were determined using NBSZ blood donor data from 2002 through 2011. The annual mean RR estimates for Models 1 through 3 were 1 in 6542, 5805, and 6418, respectively, for HIV; 1 in 1978, 2027, and 1628 for HBV; and 1 in 9588, 15,126, and 7750 for HCV. The adapted NBSZ model provided results comparable to the published methods, and these highlight the high occurrence of HBV in Zimbabwe. The adapted NBSZ model could be used as an alternative for estimating RRs in settings where two repeat donations are not available. © 2016 AABB.
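
    The basic incidence-window period logic shared by these models can be sketched in one line: the residual risk per donation is roughly the incidence rate among repeat donors times the length of the pre-seroconversion window. Illustrative numbers only, not Zimbabwean estimates:

```python
# Incidence-window period residual risk (Schreiber-type model).
# Numbers below are illustrative assumptions.
incidence = 2.1e-3          # incident infections per person-year, repeat donors
window_days = 22            # pre-seroconversion window period in days
residual_risk = incidence * window_days / 365   # risk per donation
per_n = 1 / residual_risk   # the "1 in N" form commonly reported
```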

  20. Impact of covariate models on the assessment of the air pollution-mortality association in a single- and multipollutant context.

    PubMed

    Sacks, Jason D; Ito, Kazuhiko; Wilson, William E; Neas, Lucas M

    2012-10-01

    With the advent of multicity studies, uniform statistical approaches have been developed to examine air pollution-mortality associations across cities. To assess the sensitivity of the air pollution-mortality association to different model specifications in a single and multipollutant context, the authors applied various regression models developed in previous multicity time-series studies of air pollution and mortality to data from Philadelphia, Pennsylvania (May 1992-September 1995). Single-pollutant analyses used daily cardiovascular mortality, fine particulate matter (particles with an aerodynamic diameter ≤2.5 µm; PM(2.5)), speciated PM(2.5), and gaseous pollutant data, while multipollutant analyses used source factors identified through principal component analysis. In single-pollutant analyses, risk estimates were relatively consistent across models for most PM(2.5) components and gaseous pollutants. However, risk estimates were inconsistent for ozone in all-year and warm-season analyses. Principal component analysis yielded factors with species associated with traffic, crustal material, residual oil, and coal. Risk estimates for these factors exhibited less sensitivity to alternative regression models compared with single-pollutant models. Factors associated with traffic and crustal material showed consistently positive associations in the warm season, while the coal combustion factor showed consistently positive associations in the cold season. Overall, mortality risk estimates examined using a source-oriented approach yielded more stable and precise risk estimates, compared with single-pollutant analyses.

  1. Malaria Disease Mapping in Malaysia based on Besag-York-Mollie (BYM) Model

    NASA Astrophysics Data System (ADS)

    Azah Samat, Nor; Mey, Liew Wan

    2017-09-01

    Disease mapping is the visual representation of the geographical distribution of disease incidence within a population, based on spatial epidemiological data. The resulting maps help in monitoring and planning resource needs at all levels of health care and in designing appropriate interventions, targeted towards areas that deserve closer scrutiny or communities that warrant further investigation to identify important risk factors. The choice of statistical model used for relative risk estimation is therefore important, because the production of a disease risk map relies on the model used. This paper proposes the Besag-York-Mollie (BYM) model to estimate the relative risk of Malaria in Malaysia. The analysis used the numbers of Malaria cases obtained from the Ministry of Health Malaysia. The outcomes of the analysis are displayed through graphs and maps, including a Malaria disease risk map constructed from the estimated relative risks. The distribution of high- and low-risk areas of Malaria occurrence for all states in Malaysia can be identified from the risk map.
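Before any BYM smoothing, the raw input to such a map is each area's standardized morbidity ratio (SMR): observed cases divided by the cases expected if the overall rate applied everywhere. A minimal sketch with invented area names and counts (the BYM model itself then smooths these SMRs with spatially structured and unstructured random effects, which is beyond a few lines):

```python
def expected_cases(pop, total_cases, total_pop):
    # Expected count for an area under the overall (e.g. national) rate
    return pop * total_cases / total_pop

def smr(observed, expected):
    # Standardized morbidity ratio: the raw relative-risk estimate
    # that a BYM model subsequently smooths toward its neighbours
    return observed / expected

# Invented example: three areas with (observed cases, population)
areas = {"A": (12, 50_000), "B": (4, 40_000), "C": (24, 110_000)}
total_cases = sum(obs for obs, _ in areas.values())
total_pop = sum(pop for _, pop in areas.values())
for name, (obs, pop) in areas.items():
    e = expected_cases(pop, total_cases, total_pop)
    print(name, round(smr(obs, e), 2))
```

An SMR above 1 flags an area with more cases than its population size alone would predict; the BYM smoothing stabilises these ratios in areas with small counts.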

  2. [Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].

    PubMed

    Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L

    2017-03-10

    To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence relative to caregivers' recognition of risk signs of diarrhea in their infants, using a Bayesian log-binomial regression model in OpenBUGS. The results showed that caregivers' recognition of infants' risk signs of diarrhea was significantly associated with a 13% increase in medical care-seeking. We then compared the point and interval estimates of the PR, and the convergence, of three models (model 1: not adjusting for covariates; model 2: adjusting for duration of caregivers' education; model 3: additionally adjusting for distance between village and township and child age in months) between the Bayesian log-binomial regression model and the conventional log-binomial regression model. All three Bayesian log-binomial regression models converged, with estimated PRs of 1.130 (95%CI: 1.005-1.265), 1.128 (95%CI: 1.001-1.264) and 1.132 (95%CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged, with PRs of 1.130 (95%CI: 1.055-1.206) and 1.126 (95%CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate its PR, which was 1.125 (95%CI: 1.051-1.200). The point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed only slightly from those of the conventional log-binomial regression models, showing good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with fewer convergence problems, and offers advantages in application over the conventional log-binomial regression model.
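For intuition, the quantity being modelled is simply a ratio of two prevalences. A minimal sketch of the crude (unadjusted) PR from a 2x2 table, with invented counts; the study itself fits a log-binomial regression, in which log(PR) is a sum of coefficients, precisely so that covariates can be adjusted for:

```python
def prevalence_ratio(exposed_pos, exposed_n, unexposed_pos, unexposed_n):
    # Crude PR: prevalence of the outcome among the exposed divided by
    # prevalence among the unexposed. A log-binomial model estimates
    # log(PR) directly, which keeps fitted probabilities in [0, 1].
    return (exposed_pos / exposed_n) / (unexposed_pos / unexposed_n)

# Invented counts: care-seeking among caregivers who do vs. do not
# recognize risk signs of diarrhea (chosen so the crude PR is ~1.13)
pr = prevalence_ratio(113, 200, 100, 200)
print(round(pr, 2))
```

Unlike an odds ratio from logistic regression, the PR is directly interpretable as "13% more likely" when it equals 1.13, which is why log-binomial models are preferred for common outcomes.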

  3. The linearized multistage model and the future of quantitative risk assessment.

    PubMed

    Crump, K S

    1996-10-01

    The linearized multistage (LMS) model has for over 15 years been the default dose-response model used by the U.S. Environmental Protection Agency (USEPA) and other federal and state regulatory agencies in the United States for calculating quantitative estimates of low-dose carcinogenic risks from animal data. The LMS model is in essence a flexible statistical model that can describe both linear and non-linear dose-response patterns, and that produces an upper confidence bound on the linear low-dose slope of the dose-response curve. Unlike its namesake, the Armitage-Doll multistage model, the parameters of the LMS do not correspond to actual physiological phenomena. Thus the LMS is 'biological' only to the extent that the true biological dose response is linear at low dose and that low-dose slope is reflected in the experimental data. If the true dose response is non-linear the LMS upper bound may overestimate the true risk by many orders of magnitude. However, competing low-dose extrapolation models, including those derived from 'biologically-based models' that are capable of incorporating additional biological information, have not shown evidence to date of being able to produce quantitative estimates of low-dose risks that are any more accurate than those obtained from the LMS model. Further, even if these attempts were successful, the extent to which more accurate estimates of low-dose risks in a test animal species would translate into improved estimates of human risk is questionable. Thus, it does not appear possible at present to develop a quantitative approach that would be generally applicable and that would offer significant improvements upon the crude bounding estimates of the type provided by the LMS model. Draft USEPA guidelines for cancer risk assessment incorporate an approach similar to the LMS for carcinogens having a linear mode of action. However, under these guidelines quantitative estimates of low-dose risks would not be developed for carcinogens having a non-linear mode of action; instead dose-response modelling would be used in the experimental range to calculate an LED10* (a statistical lower bound on the dose corresponding to a 10% increase in risk), and safety factors would be applied to the LED10* to determine acceptable exposure levels for humans. This approach is very similar to the one presently used by USEPA for non-carcinogens. Rather than using one approach for carcinogens believed to have a linear mode of action and a different approach for all other health effects, it is suggested herein that it would be more appropriate to use an approach conceptually similar to the 'LED10*-safety factor' approach for all health effects, and not to routinely develop quantitative risk estimates from animal data.

  4. Properties of model-averaged BMDLs: a study of model averaging in dichotomous response risk estimation.

    PubMed

    Wheeler, Matthew W; Bailer, A John

    2007-06-01

    Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO(2) dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show the utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.

  5. Risk estimation using probability machines

    PubMed Central

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306

  6. Risk estimation using probability machines.

    PubMed

    Dasgupta, Abhijit; Szymczak, Silke; Moore, Jason H; Bailey-Wilson, Joan E; Malley, James D

    2014-03-01

    Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a "risk machine", will share properties from the statistical machine that it is derived from.
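The counterfactual effect size described here can be sketched independently of any particular learner. In the sketch below, `prob` is a stand-in for a fitted probability machine (in practice it would be, say, a random forest's predicted class-1 probability); a closed-form logistic-style surface is used instead to keep the example dependency-free, and all data are invented:

```python
import math

def counterfactual_risk_difference(prob, X, feature_idx):
    # Effect size from a probability machine: the average of
    # P(Y=1 | x with feature set to 1) - P(Y=1 | x with feature set to 0),
    # holding all other features fixed (a counterfactual contrast).
    diffs = []
    for x in X:
        x1 = list(x); x1[feature_idx] = 1
        x0 = list(x); x0[feature_idx] = 0
        diffs.append(prob(x1) - prob(x0))
    return sum(diffs) / len(diffs)

# Stand-in for a fitted machine: a logistic-style probability surface
prob = lambda x: 1 / (1 + math.exp(-(-1.0 + 0.8 * x[0] + 0.5 * x[1])))
X = [[0, 0.2], [1, -0.5], [0, 1.3], [1, 0.7]]
print(round(counterfactual_risk_difference(prob, X, 0), 3))
```

Because the contrast is computed on the probability scale rather than the odds scale, it reads directly as a risk difference, and it works for any consistent probability estimator, not just logistic regression.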

  7. A probabilistic model of gastroenteritis risks associated with consumption of street food salads in Kumasi, Ghana: evaluation of methods to estimate pathogen dose from water, produce or food quality.

    PubMed

    Barker, S Fiona; Amoah, Philip; Drechsel, Pay

    2014-07-15

    With a rapidly growing urban population in Kumasi, Ghana, the consumption of street food is increasing. Raw salads, which often accompany street food dishes, are typically composed of perishable vegetables that are grown in close proximity to the city using poor quality water for irrigation. This study assessed the risk of gastroenteritis illness (caused by rotavirus, norovirus and Ascaris lumbricoides) associated with the consumption of street food salads using Quantitative Microbial Risk Assessment (QMRA). Three different risk assessment models were constructed, based on availability of microbial concentrations: 1) Water - starting from irrigation water quality, 2) Produce - starting from the quality of produce at market, and 3) Street - using microbial quality of street food salad. In the absence of viral concentrations, published ratios between faecal coliforms and viruses were used to estimate the quality of water, produce and salad, and annual disease burdens were determined. Rotavirus dominated the estimates of annual disease burden (~10(-3) Disability Adjusted Life Years per person per year (DALYs pppy)), although norovirus also exceeded the 10(-4) DALY threshold for both Produce and Street models. The Water model ignored other on-farm and post-harvest sources of contamination and consistently produced lower estimates of risk; it likely underestimates disease burden and therefore is not recommended. Required log reductions of up to 5.3 (95th percentile) for rotavirus were estimated for the Street model, demonstrating that significant interventions are required to protect the health and safety of street food consumers in Kumasi. Estimates of virus concentrations were a significant source of model uncertainty and more data on pathogen concentrations are needed to refine QMRA estimates of disease burden. Copyright © 2014 Elsevier B.V. All rights reserved.
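A QMRA chain of this kind typically combines a dose-response model with Monte Carlo sampling of the exposure dose. The sketch below uses the exponential dose-response model common in QMRA; the dose distribution and the `r` parameter are illustrative placeholders, not the paper's fitted values:

```python
import math
import random

def p_infection(dose, r):
    # Exponential dose-response model widely used in QMRA:
    # P(infection) = 1 - exp(-r * dose)
    return 1.0 - math.exp(-r * dose)

def annual_risk(per_serving_risk, servings_per_year):
    # Combine independent per-serving exposures into an annual risk
    return 1.0 - (1.0 - per_serving_risk) ** servings_per_year

random.seed(1)
# Illustrative parameters only: lognormal variation in viral dose per
# salad serving, and a generic dose-response parameter r
r = 0.00255
doses = [random.lognormvariate(2.0, 1.0) for _ in range(10_000)]
mean_per_serving = sum(p_infection(d, r) for d in doses) / len(doses)
print(round(annual_risk(mean_per_serving, 150), 4))
```

The annual infection risk would then be multiplied by the probability of illness given infection and a per-case DALY weight to reach the disease-burden scale the abstract reports.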

  8. Psychosocial work environment and myocardial infarction: improving risk estimation by combining two complementary job stress models in the SHEEP Study

    PubMed Central

    Peter, R; Siegrist, J; Hallqvist, J; Reuterwall, C; Theorell, T

    2002-01-01

    Objectives: Associations between two alternative formulations of job stress derived from the effort-reward imbalance and the job strain model and first non-fatal acute myocardial infarction were studied. Whereas the job strain model concentrates on situational (extrinsic) characteristics the effort-reward imbalance model analyses distinct person (intrinsic) characteristics in addition to situational ones. In view of these conceptual differences the hypothesis was tested that combining information from the two models improves the risk estimation of acute myocardial infarction. Methods: 951 male and female myocardial infarction cases and 1147 referents aged 45–64 years of The Stockholm Heart Epidemiology (SHEEP) case-control study underwent a clinical examination. Information on job stress and health adverse behaviours was derived from standardised questionnaires. Results: Multivariate analysis showed moderately increased odds ratios for either model. Yet, with respect to the effort-reward imbalance model gender specific effects were found: in men the extrinsic component contributed to risk estimation, whereas this was the case with the intrinsic component in women. Controlling each job stress model for the other in order to test the independent effect of either approach did not show systematically increased odds ratios. An improved estimation of acute myocardial infarction risk resulted from combining information from the two models by defining groups characterised by simultaneous exposure to effort-reward imbalance and job strain (men: odds ratio 2.02 (95% confidence intervals (CI) 1.34 to 3.07); women odds ratio 2.19 (95% CI 1.11 to 4.28)). Conclusions: Findings show an improved risk estimation of acute myocardial infarction by combining information from the two job stress models under study. Moreover, gender specific effects of the two components of the effort-reward imbalance model were observed. PMID:11896138

  9. Environmental risk assessment of selected organic chemicals based on TOC test and QSAR estimation models.

    PubMed

    Chi, Yulang; Zhang, Huanteng; Huang, Qiansheng; Lin, Yi; Ye, Guozhu; Zhu, Huimin; Dong, Sijun

    2018-02-01

    Environmental risks of organic chemicals have been greatly determined by their persistence, bioaccumulation, and toxicity (PBT) and physicochemical properties. Major regulations in different countries and regions identify chemicals according to their bioconcentration factor (BCF) and octanol-water partition coefficient (Kow), which frequently displays a substantial correlation with the sediment sorption coefficient (Koc). Half-life or degradability is crucial for the persistence evaluation of chemicals. Quantitative structure activity relationship (QSAR) estimation models are indispensable for predicting environmental fate and health effects in the absence of field- or laboratory-based data. In this study, 39 chemicals of high concern were chosen for half-life testing based on total organic carbon (TOC) degradation, and two widely accepted and highly used QSAR estimation models (i.e., EPI Suite and PBT Profiler) were adopted for environmental risk evaluation. The experimental results and estimated data, as well as the two model-based results were compared, based on the water solubility, Kow, Koc, BCF and half-life. Environmental risk assessment of the selected compounds was achieved by combining experimental data and estimation models. It was concluded that both EPI Suite and PBT Profiler were fairly accurate in measuring the physicochemical properties and degradation half-lives for water, soil, and sediment. However, the half-lives between the experimental and the estimated results were still not absolutely consistent. This suggests deficiencies of the prediction models in some ways, and the necessity to combine the experimental data and predicted results for the evaluation of environmental fate and risks of pollutants. Copyright © 2016. Published by Elsevier B.V.

  10. 42 CFR 81.10 - Use of cancer risk assessment models in NIOSH IREP.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Use of cancer risk assessment models in NIOSH IREP... Risk Models Used To Estimate Probability of Causation § 81.10 Use of cancer risk assessment models in... tables were developed from analyses of cancer mortality risk among the Japanese atomic bomb survivor...

  11. 42 CFR 81.10 - Use of cancer risk assessment models in NIOSH IREP.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Use of cancer risk assessment models in NIOSH IREP... Risk Models Used To Estimate Probability of Causation § 81.10 Use of cancer risk assessment models in... tables were developed from analyses of cancer mortality risk among the Japanese atomic bomb survivor...

  12. 42 CFR 81.10 - Use of cancer risk assessment models in NIOSH IREP.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Use of cancer risk assessment models in NIOSH IREP... Risk Models Used To Estimate Probability of Causation § 81.10 Use of cancer risk assessment models in... tables were developed from analyses of cancer mortality risk among the Japanese atomic bomb survivor...

  13. 42 CFR 81.10 - Use of cancer risk assessment models in NIOSH IREP.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Use of cancer risk assessment models in NIOSH IREP... Risk Models Used To Estimate Probability of Causation § 81.10 Use of cancer risk assessment models in... tables were developed from analyses of cancer mortality risk among the Japanese atomic bomb survivor...

  14. Quantitative Assessment of Cancer Risk from Exposure to Diesel Engine Emissions

    EPA Science Inventory

    Quantitative estimates of lung cancer risk from exposure to diesel engine emissions were developed using data from three chronic bioassays with Fischer 344 rats. Human target organ dose was estimated with the aid of a comprehensive dosimetry model. This model accounted for rat-hum...

  15. Quantile-based bias correction and uncertainty quantification of extreme event attribution statements

    DOE PAGES

    Jeon, Soyoung; Paciorek, Christopher J.; Wehner, Michael F.

    2016-02-16

    Extreme event attribution characterizes how anthropogenic climate change may have influenced the probability and magnitude of selected individual extreme weather and climate events. Attribution statements often involve quantification of the fraction of attributable risk (FAR) or the risk ratio (RR) and associated confidence intervals. Many such analyses use climate model output to characterize extreme event behavior with and without anthropogenic influence. However, such climate models may have biases in their representation of extreme events. To account for discrepancies in the probabilities of extreme events between observational datasets and model datasets, we demonstrate an appropriate rescaling of the model output based on the quantiles of the datasets to estimate an adjusted risk ratio. Our methodology accounts for various components of uncertainty in estimation of the risk ratio. In particular, we present an approach to construct a one-sided confidence interval on the lower bound of the risk ratio when the estimated risk ratio is infinity. We demonstrate the methodology using the summer 2011 central US heatwave and output from the Community Earth System Model. In this example, we find that the lower bound of the risk ratio is relatively insensitive to the magnitude and probability of the actual event.
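The risk-ratio computation at the heart of an attribution statement can be sketched from two sets of model output: exceedance probabilities of the event threshold with and without anthropogenic forcing. The data below are toy Gaussian draws, not CESM output, and the paper's quantile-based bias correction (rescaling model output to observed quantiles before thresholding) is omitted:

```python
import random

def exceedance_prob(samples, threshold):
    # Empirical probability that the variable exceeds the event threshold
    return sum(1 for s in samples if s > threshold) / len(samples)

def risk_ratio(p_actual, p_counterfactual):
    # RR = p1 / p0; the fraction of attributable risk is FAR = 1 - 1/RR
    return p_actual / p_counterfactual

random.seed(0)
# Toy "model output": shifted climate with forcing vs. natural-only
actual = [random.gauss(1.0, 1.0) for _ in range(100_000)]
natural = [random.gauss(0.0, 1.0) for _ in range(100_000)]
threshold = 2.0
p1 = exceedance_prob(actual, threshold)
p0 = exceedance_prob(natural, threshold)
rr = risk_ratio(p1, p0)
print(round(rr, 1), round(1 - 1 / rr, 2))
```

When `p0` is estimated as zero the plug-in RR is infinite, which is exactly the situation motivating the paper's one-sided confidence bound on the lower limit of the risk ratio.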

  16. Probabilistic quantitative microbial risk assessment model of norovirus from wastewater irrigated vegetables in Ghana using genome copies and fecal indicator ratio conversion for estimating exposure dose.

    PubMed

    Owusu-Ansah, Emmanuel de-Graft Johnson; Sampson, Angelina; Amponsah, Samuel K; Abaidoo, Robert C; Dalsgaard, Anders; Hald, Tine

    2017-12-01

    The need to replace the commonly applied fecal indicator conversion ratio (an assumption of 1:10(-5) virus to fecal indicator organism) in Quantitative Microbial Risk Assessment (QMRA) with models based on quantitative data on the virus of interest has gained prominence due to the different physical and environmental factors that might influence the reliability of using indicator organisms in microbial risk assessment. The challenges facing analytical studies on virus enumeration (genome copies or particles) have contributed to the already existing lack of data in QMRA modelling. This study attempts to fit a QMRA model to genome copies of norovirus data. The model estimates the risk of norovirus infection from the intake of vegetables irrigated with wastewater from different sources. The results were compared to the results of a corresponding model using the fecal indicator conversion ratio to estimate the norovirus count. In all scenarios of using different water sources, the application of the fecal indicator conversion ratio underestimated the norovirus disease burden, measured by the Disability Adjusted Life Years (DALYs), when compared to results using the genome copies norovirus data. In some cases the difference was >2 orders of magnitude. All scenarios using genome copies met the 10(-4) DALY per person per year threshold for consumption of vegetables irrigated with wastewater, although these results are considered to be highly conservative risk estimates. The fecal indicator conversion ratio model of stream-water and drain-water sources of wastewater achieved the 10(-6) DALY per person per year threshold, which tends to indicate an underestimation of health risk when compared to using genome copies for estimating the dose. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Extensions of criteria for evaluating risk prediction models for public health applications.

    PubMed

    Pfeiffer, Ruth M

    2013-04-01

    We recently proposed two novel criteria to assess the usefulness of risk prediction models for public health applications. The proportion of cases followed, PCF(p), is the proportion of individuals who will develop disease who are included in the proportion p of individuals in the population at highest risk. The proportion needed to follow-up, PNF(q), is the proportion of the general population at highest risk that one needs to follow in order that a proportion q of those destined to become cases will be followed (Pfeiffer, R.M. and Gail, M.H., 2011. Two criteria for evaluating risk prediction models. Biometrics 67, 1057-1065). Here, we extend these criteria in two ways. First, we introduce two new criteria by integrating PCF and PNF over a range of values of q or p to obtain iPCF, the integrated PCF, and iPNF, the integrated PNF. A key assumption in the previous work was that the risk model is well calibrated. This assumption also underlies novel estimates of iPCF and iPNF based on observed risks in a population alone. The second extension is to propose and study estimates of PCF, PNF, iPCF, and iPNF that are consistent even if the risk models are not well calibrated. These new estimates are obtained from case-control data when the outcome prevalence in the population is known, and from cohort data, with baseline covariates and observed health outcomes. We study the efficiency of the various estimates and propose and compare tests for comparing two risk models, both of which were evaluated in the same validation data.
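The PCF criterion has a direct empirical counterpart: rank the population by predicted risk, take the top fraction p, and count the share of all cases captured. A minimal sketch with invented risk scores and case labels (the integrated versions iPCF and iPNF would average this quantity over a range of p or q):

```python
def pcf(risks, is_case, p):
    # Proportion of cases followed, PCF(p): the fraction of all cases
    # found among the top-p fraction of the population when ranked by
    # predicted risk (higher scores first)
    n = len(risks)
    order = sorted(range(n), key=lambda i: risks[i], reverse=True)
    top = order[: max(1, round(p * n))]
    return sum(is_case[i] for i in top) / sum(is_case)

# Invented data: 10 individuals, 4 of whom become cases
risks = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5, 0.05]
is_case = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]
print(pcf(risks, is_case, 0.3))
```

PNF(q) is the inverse question, the smallest top fraction needed to capture a share q of the cases, and both criteria presume the model is well calibrated, which is exactly the assumption the extended estimators in this record relax.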

  18. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    EPA Science Inventory

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  19. DEVELOPMENT AND APPLICATION OF POPULATION MODELS TO SUPPORT EPA'S ECOLOGICAL RISK ASSESSMENT PROCESSES FOR PESTICIDES

    EPA Science Inventory

    As part of a broader exploratory effort to develop ecological risk assessment approaches to estimate potential chemical effects on non-target populations, we describe an approach for developing simple population models to estimate the extent to which acute effects on individual...

  20. Biological and statistical approaches to predicting human lung cancer risk from silica.

    PubMed

    Kuempel, E D; Tran, C L; Bailer, A J; Porter, D W; Hubbs, A F; Castranova, V

    2001-01-01

    Chronic inflammation is a key step in the pathogenesis of particle-elicited fibrosis and lung cancer in rats, and possibly in humans. In this study, we compute the excess risk estimates for lung cancer in humans with occupational exposure to crystalline silica, using both rat and human data, and using both a threshold approach and linear models. From a toxicokinetic/dynamic model fit to lung burden and pulmonary response data from a subchronic inhalation study in rats, we estimated the minimum critical quartz lung burden (Mcrit) associated with reduced pulmonary clearance and increased neutrophilic inflammation. A chronic study in rats was also used to predict the human excess risk of lung cancer at various quartz burdens, including mean Mcrit (0.39 mg/g lung). We used a human kinetic lung model to link the equivalent lung burdens to external exposures in humans. We then computed the excess risk of lung cancer at these external exposures, using data of workers exposed to respirable crystalline silica and using Poisson regression and lifetable analyses. Finally, we compared the lung cancer excess risks estimated from male rat and human data. We found that the rat-based linear model estimates were approximately three times higher than those based on human data (e.g., 2.8% in rats vs. 0.9-1% in humans, at mean Mcrit lung burden or associated mean working lifetime exposure of 0.036 mg/m3). Accounting for variability and uncertainty resulted in 100-1000 times lower estimates of human critical lung burden and airborne exposure. This study illustrates that assumptions about the relevant biological mechanism, animal model, and statistical approach can all influence the magnitude of lung cancer risk estimates in humans exposed to crystalline silica.

  1. Individual risk of cutaneous melanoma in New Zealand: developing a clinical prediction aid.

    PubMed

    Sneyd, Mary Jane; Cameron, Claire; Cox, Brian

    2014-05-22

    New Zealand and Australia have the highest melanoma incidence rates worldwide. In New Zealand, both the incidence and thickness have been increasing. Clinical decisions require accurate risk prediction, but a simple list of genetic, phenotypic and behavioural risk factors is inadequate to estimate individual risk as the risk factors for melanoma have complex interactions. In order to offer tailored clinical management strategies, we developed a New Zealand prediction model to estimate individual 5-year absolute risk of melanoma. A population-based case-control study (368 cases and 270 controls) of melanoma risk factors provided estimates of relative risks for fair-skinned New Zealanders aged 20-79 years. Model selection techniques and multivariate logistic regression were used to determine the important predictors. The relative risks for predictors were combined with baseline melanoma incidence rates and non-melanoma mortality rates to calculate individual probabilities of developing melanoma within 5 years. For women, the best model included skin colour, number of moles ≥5 mm on the right arm, having a 1st degree relative with large moles, and a personal history of non-melanoma skin cancer (NMSC). The model correctly classified 68% of participants; the C-statistic was 0.74. For men, the best model included age, place of occupation up to age 18 years, number of moles ≥5 mm on the right arm, birthplace, and a history of NMSC. The model correctly classified 67% of cases; the C-statistic was 0.71. We have developed the first New Zealand risk prediction model that calculates individual absolute 5-year risk of melanoma. This model will aid physicians in identifying individuals at high risk, allowing them to individually target surveillance and other management strategies, and thereby reduce the high melanoma burden in New Zealand.
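The step of converting a relative risk into a 5-year absolute risk, while accounting for competing (non-melanoma) mortality, can be sketched with constant hazards. This is a simplified competing-risks formula with illustrative rates, not the New Zealand model's age-specific calculation:

```python
import math

def five_year_absolute_risk(rr, baseline_incidence, mortality_rate, years=5):
    # Constant-hazard sketch of absolute risk with competing mortality:
    # cause-specific hazard h1 = rr * baseline incidence, competing h2.
    # P(melanoma within t) = h1 / (h1 + h2) * (1 - exp(-(h1 + h2) * t))
    h1 = rr * baseline_incidence
    h2 = mortality_rate
    return h1 / (h1 + h2) * (1.0 - math.exp(-(h1 + h2) * years))

# Illustrative rates per person-year (not the NZ model's values):
# an individual at 3x the baseline melanoma incidence of 0.0005,
# with a competing all-cause mortality hazard of 0.01
print(round(five_year_absolute_risk(3.0, 0.0005, 0.01), 4))
```

Competing mortality matters because an individual who dies of another cause within the 5 years can no longer develop melanoma; ignoring it would overstate the absolute risk.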

  2. The role of models in estimating consequences as part of the risk assessment process.

    PubMed

    Forde-Folle, K; Mitchell, D; Zepeda, C

    2011-08-01

    The degree of disease risk represented by the introduction, spread, or establishment of one or several diseases through the importation of animals and animal products is assessed by importing countries through an analysis of risk. The components of a risk analysis include hazard identification, risk assessment, risk management, and risk communication. A risk assessment starts with identification of the hazard(s) and then continues with four interrelated steps: release assessment, exposure assessment, consequence assessment, and risk estimation. Risk assessments may be either qualitative or quantitative. This paper describes how, through the integration of epidemiological and economic models, the potential adverse biological and economic consequences of exposure can be quantified.

  3. Investigating Gender Differences under Time Pressure in Financial Risk Taking.

    PubMed

    Xie, Zhixin; Page, Lionel; Hardy, Ben

    2017-01-01

    There is a significant gender imbalance on financial trading floors. This motivated us to investigate gender differences in financial risk taking under pressure. We used a well-established approach from behavioral economics to analyze a series of risky monetary choices by male and female participants with and without time pressure. We also used the second-to-fourth digit ratio (2D:4D) and the face width-to-height ratio (fWHR) as correlates of prenatal exposure to testosterone. We constructed a structural model and estimated the participants' risk attitudes and probability perceptions via maximum likelihood estimation under both expected utility (EU) and rank-dependent utility (RDU) models. In line with existing research, we found that male participants are less risk averse and that the gender gap in risk attitudes increases under moderate time pressure. We found that female participants with lower 2D:4D ratios and higher fWHR are less risk averse in RDU estimates. Males with lower 2D:4D ratios were less risk averse in EU estimates, but more risk averse in RDU estimates. We also observe that men whose ratios indicate a greater prenatal exposure to testosterone exhibit greater optimism and overestimation of small probabilities of success.

  4. Estimating the decline in excess risk of cerebrovascular disease following quitting smoking--a systematic review based on the negative exponential model.

    PubMed

    Lee, Peter N; Fry, John S; Thornton, Alison J

    2014-02-01

    We attempted to quantify the decline in stroke risk after quitting smoking, using the negative exponential model with methodology previously employed for IHD. We identified 22 blocks of RRs (from 13 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls/at risk formed the data for model-fitting. We tried to estimate the half-life (H, time since quit when the excess risk becomes half that for a continuing smoker) for each block. The method failed to converge or produced very variable estimates of H in nine blocks with a current smoker RR <1.40. Rejecting these, and combining blocks by amount smoked in one study where problems arose in model-fitting, the final analyses used 11 blocks. Goodness-of-fit was adequate for each block, the combined estimate of H being 4.78 (95% CI 2.17-10.50) years. However, considerable heterogeneity existed, unexplained by any factor studied, with the random-effects estimate 3.08 (1.32-7.16). Sensitivity analyses allowing for reverse causation or differing assumed times for the final quitting period gave similar results. The estimates of H are similar for stroke and IHD, and the individual estimates similarly heterogeneous. Fitting the model is harder for stroke, due to its weaker association with smoking. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
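The negative exponential model described above has a simple closed form: the excess relative risk (RR - 1) of a former smoker decays toward zero with half-life H after quitting. A minimal sketch, taking H = 4.78 years from the combined estimate quoted in the abstract and an illustrative current-smoker RR of 1.8:

```python
import math

# Negative exponential decline of excess stroke risk after quitting.
# H = 4.78 years is the abstract's fixed-effect combined estimate;
# the current-smoker RR of 1.8 is an illustrative value only.

def former_smoker_rr(rr_current, years_quit, half_life=4.78):
    """RR for a former smoker, years_quit after cessation."""
    excess = (rr_current - 1.0) * math.exp(-math.log(2) * years_quit / half_life)
    return 1.0 + excess

if __name__ == "__main__":
    rr0 = 1.8
    for t in (0, 4.78, 10, 20):
        # At t = H the excess risk is exactly halved (RR 1.8 -> 1.4).
        print(t, round(former_smoker_rr(rr0, t), 3))
```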

  5. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    USGS Publications Warehouse

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.

  6. Assessment of NHTSA’s Report “Relationships Between Fatality Risk, Mass, and Footprint in Model Year 2003-2010 Passenger Cars and LTVs”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenzel, Tom

    NHTSA recently completed a logistic regression analysis updating its 2003, 2010, and 2012 studies of the relationship between vehicle mass and US fatality risk per vehicle mile traveled (VMT; Kahane 2010, Kahane 2012, Puckett 2016). The new study updates the 2012 analysis using FARS data from 2005 to 2011 for model year 2003 to 2010. Using the updated databases, NHTSA estimates that reducing vehicle mass by 100 pounds while holding footprint fixed would increase fatality risk per VMT by 1.49% for lighter-than-average cars and by 0.50% for heavier-than-average cars, but reduce risk by 0.10% for lighter-than-average light-duty trucks, by 0.71% for heavier-than-average light-duty trucks, and by 0.99% for CUVs/minivans. Using a jack-knife method to estimate the statistical uncertainty of these point estimates, NHTSA finds that none of these estimates are statistically significant at the 95% confidence level; however, the 1.49% increase in risk associated with mass reduction in lighter-than-average cars, and the 0.71% and 0.99% decreases in risk associated with mass reduction in heavier-than-average light trucks and CUVs/minivans, are statistically significant at the 90% confidence level. The effect of mass reduction on risk that NHTSA estimated in 2016 is more beneficial than in its 2012 study, particularly for light trucks and CUVs/minivans. The 2016 NHTSA analysis estimates that reducing vehicle footprint by one square foot while holding mass constant would increase fatality risk per VMT by 0.28% in cars, by 0.38% in light trucks, and by 1.18% in CUVs and minivans. This report replicates the 2016 NHTSA analysis and reproduces its main results. This report uses the confidence intervals output by the logistic regression models, which are smaller than the intervals NHTSA estimated using a jack-knife technique that accounts for the sampling error in the FARS fatality and state crash data.
In addition to reproducing the NHTSA results, this report also examines the NHTSA data in slightly different ways to gain a deeper understanding of the relationship between vehicle weight, footprint, and safety. The NHTSA baseline results and these alternative analyses are summarized in Table ES.1; statistically significant estimates, based on the confidence intervals output by the logistic regression models, are shown in red in the tables. We found that NHTSA’s reasonable assumption that all vehicles will have ESC installed by 2017 in its baseline regression model slightly increases the estimated increase in risk from mass reduction in cars, but substantially decreases the estimated increase in risk from footprint reduction in all three vehicle types (Alternative 1 in Table ES.1; explained in more detail in Section 2.1 of this report). This is because NHTSA projects ESC to substantially reduce the number of fatalities in rollovers and crashes with stationary objects, and mass reduction appears to reduce risk, while footprint reduction appears to increase risk, in these types of crashes, particularly in cars and CUVs/minivans. A single regression model including all crash types results in slightly different estimates of the relationship between decreasing mass and risk, as shown in Alternative 2 in Table ES.1.

  7. Usefulness of cancer-free survival in estimating the lifetime attributable risk of cancer incidence from radiation exposure.

    PubMed

    Seo, Songwon; Lee, Dal Nim; Jin, Young Woo; Lee, Won Jin; Park, Sunhoo

    2018-05-11

    Risk projection models estimating the lifetime cancer risk from radiation exposure are generally based on exposure dose, age at exposure, attained age, gender and study-population-specific factors such as baseline cancer risks and survival rates. Because such models have mostly been based on the Life Span Study cohort of Japanese atomic bomb survivors, the baseline risks and survival rates in the target population should be considered when applying the cancer risk. The survival function used in the risk projection models that are commonly used in the radiological protection field to estimate the cancer risk from medical or occupational exposure is based on all-cause mortality. Thus, it may not be accurate for estimating the lifetime risk of high-incidence but not life-threatening cancer with a long-term survival rate. Herein, we present the lifetime attributable risk (LAR) estimates of all solid cancers except thyroid cancer, thyroid cancer, and leukemia except chronic lymphocytic leukemia in South Korea for lifetime exposure to 1 mGy per year using the cancer-free survival function, as recently applied in the Fukushima health risk assessment by the World Health Organization. Compared with the estimates of LARs using an overall survival function solely based on all-cause mortality, the LARs of all solid cancers except thyroid cancer, and thyroid cancer evaluated using the cancer-free survival function, decreased by approximately 13% and 1% for men and 9% and 5% for women, respectively. The LAR of leukemia except chronic lymphocytic leukemia barely changed for either gender owing to the small absolute difference between its incidence and mortality. Given that many cancers have a high curative rate and low mortality rate, using a survival function solely based on all-cause mortality may cause an overestimation of the lifetime risk of cancer incidence. The lifetime fractional risk was robust against the choice of survival function.
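The paper's central point can be sketched numerically: the lifetime attributable risk (LAR) sums the excess incidence rate over attained ages, weighted by the probability of surviving to each age, and cancer-free survival is never larger than all-cause survival, so using it shrinks the LAR. All numbers below are invented for illustration; real LAR calculations use population life tables and dose-dependent risk models:

```python
# Toy LAR comparison: identical excess incidence rates weighted by
# (a) survival based on all-cause mortality alone, and
# (b) cancer-free survival, which also removes people at cancer diagnosis.
# The age bands, rates, and survival curves are made-up illustrations.

def lar(excess_rate_by_age, survival_by_age):
    """Lifetime attributable risk as a survival-weighted sum of excess rates."""
    return sum(r * s for r, s in zip(excess_rate_by_age, survival_by_age))

if __name__ == "__main__":
    excess = [0.001] * 5                          # excess incidence per age band
    all_cause = [1.0, 0.95, 0.9, 0.8, 0.7]        # survival, all-cause mortality
    cancer_free = [1.0, 0.90, 0.8, 0.65, 0.5]     # also exits at cancer diagnosis
    # The cancer-free curve lies below, so the LAR estimate is smaller,
    # mirroring the overestimation the abstract warns about.
    print(lar(excess, all_cause), lar(excess, cancer_free))
```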

  8. Estimating micro area behavioural risk factor prevalence from large population-based surveys: a full Bayesian approach.

    PubMed

    Seliske, L; Norwood, T A; McLaughlin, J R; Wang, S; Palleschi, C; Holowaty, E

    2016-06-07

    An important public health goal is to decrease the prevalence of key behavioural risk factors, such as tobacco use and obesity. Survey information is often available at the regional level, but heterogeneity within large geographic regions cannot be assessed. Advanced spatial analysis techniques are demonstrated to produce sensible micro area estimates of behavioural risk factors that enable identification of areas with high prevalence. A spatial Bayesian hierarchical model was used to estimate the micro area prevalence of current smoking and excess bodyweight for the Erie-St. Clair region in southwestern Ontario. Estimates were mapped for male and female respondents of five cycles of the Canadian Community Health Survey (CCHS). The micro areas were 2006 Census Dissemination Areas, with an average population of 400-700 people. Two individual-level models were specified: one controlled for survey cycle and age group (model 1), and one controlled for survey cycle, age group and micro area median household income (model 2). Post-stratification was used to derive micro area behavioural risk factor estimates weighted to the population structure. SaTScan analyses were conducted on the granular, postal-code level CCHS data to corroborate findings of elevated prevalence. Current smoking was elevated in two urban areas for both sexes (Sarnia and Windsor), and an additional small community (Chatham) for males only. Areas of excess bodyweight were prevalent in an urban core (Windsor) among males, but not females. Precision of the posterior post-stratified current smoking estimates was improved in model 2, as indicated by narrower credible intervals and a lower coefficient of variation. For excess bodyweight, both models had similar precision. Aggregation of the micro area estimates to CCHS design-based estimates validated the findings. 
This is among the first studies to apply a full Bayesian model to complex sample survey data to identify micro areas with variation in risk factor prevalence, accounting for spatial correlation and other covariates. Application of micro area analysis techniques helps define areas for public health planning, and may be informative to surveillance and research modeling of relevant chronic disease outcomes.
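The post-stratification step used above has simple mechanics: model-based prevalence estimates for each stratum are weighted by the micro area's census counts so the area estimate matches the population structure. The strata, prevalences, and counts below are illustrative placeholders, not CCHS values:

```python
# Post-stratification sketch: combine per-stratum prevalence estimates
# with census counts for one micro area (Dissemination Area).
# All inputs are invented for illustration.

def poststratify(prevalence_by_group, count_by_group):
    """Population-weighted prevalence for one micro area."""
    total = sum(count_by_group.values())
    return sum(prevalence_by_group[g] * n
               for g, n in count_by_group.items()) / total

if __name__ == "__main__":
    prev = {"20-39": 0.25, "40-59": 0.20, "60+": 0.12}   # e.g. smoking prevalence
    counts = {"20-39": 180, "40-59": 150, "60+": 120}    # DA population by age
    print(round(poststratify(prev, counts), 4))
```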

  9. MeProRisk - a Joint Venture for Minimizing Risk in Geothermal Reservoir Development

    NASA Astrophysics Data System (ADS)

    Clauser, C.; Marquart, G.

    2009-12-01

    Exploration and development of geothermal reservoirs for the generation of electric energy involves high engineering and economic risks due to the need for 3-D geophysical surface surveys and deep boreholes. The MeProRisk project provides a strategy guideline for reducing these risks by combining cross-disciplinary information from different specialists: Scientists from three German universities and two private companies contribute with new methods in seismic modeling and interpretation, numerical reservoir simulation, estimation of petrophysical parameters, and 3-D visualization. The approach chosen in MeProRisk consists in considering prospecting and developing of geothermal reservoirs as an iterative process. A first conceptual model for fluid flow and heat transport simulation can be developed based on limited available initial information on geology and rock properties. In the next step, additional data is incorporated which is based on (a) new seismic interpretation methods designed for delineating fracture systems, (b) statistical studies on large numbers of rock samples for estimating reliable rock parameters, (c) in situ estimates of the hydraulic conductivity tensor. This results in a continuous refinement of the reservoir model where inverse modelling of fluid flow and heat transport allows infering the uncertainty and resolution of the model at each iteration step. This finally yields a calibrated reservoir model which may be used to direct further exploration by optimizing additional borehole locations, estimate the uncertainty of key operational and economic parameters, and optimize the long-term operation of a geothermal resrvoir.

  10. Accounting for Selection Bias in Studies of Acute Cardiac Events.

    PubMed

    Banack, Hailey R; Harper, Sam; Kaufman, Jay S

    2018-06-01

    In cardiovascular research, pre-hospital mortality represents an important potential source of selection bias. Inverse probability of censoring weighting is a method to account for this source of bias. The objective of this article is to examine and correct for the influence of selection bias due to pre-hospital mortality on the relationship between cardiovascular risk factors and all-cause mortality after an acute cardiac event. The relationship between the number of cardiovascular disease (CVD) risk factors (0-5; smoking status, diabetes, hypertension, dyslipidemia, and obesity) and all-cause mortality was examined using data from the Atherosclerosis Risk in Communities (ARIC) study. To illustrate the magnitude of selection bias, estimates from an unweighted generalized linear model with a log link and binomial distribution were compared with estimates from an inverse probability of censoring weighted model. In unweighted multivariable analyses the estimated risk ratio for mortality ranged from 1.09 (95% confidence interval [CI], 0.98-1.21) for 1 CVD risk factor to 1.95 (95% CI, 1.41-2.68) for 5 CVD risk factors. In the inverse probability of censoring weighted analyses, the risk ratios ranged from 1.14 (95% CI, 0.94-1.39) to 4.23 (95% CI, 2.69-6.66). Estimates from the inverse probability of censoring weighted model were substantially greater than unweighted, adjusted estimates across all risk factor categories. This shows the magnitude of selection bias due to pre-hospital mortality and its effect on estimates of the effect of CVD risk factors on mortality. Moreover, the results highlight the utility of using this method to address a common form of bias in cardiovascular research. Copyright © 2018 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
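The weighting scheme in this abstract works as follows: subjects who die before hospital arrival are treated as censored, and each observed subject is up-weighted by the inverse of their estimated probability of being observed. A schematic with invented data (the observation probabilities would in practice come from a fitted censoring model):

```python
# Inverse probability of censoring weighting (IPCW), schematically.
# Each record is (observed, prob_observed, died); probabilities and
# outcomes are invented to show the mechanics only.

def ipcw_risk(records):
    """Weighted mortality risk among observed subjects."""
    num = sum(died / p for obs, p, died in records if obs)
    den = sum(1.0 / p for obs, p, died in records if obs)
    return num / den

if __name__ == "__main__":
    # High-risk patients are less likely to survive to hospital (p = 0.5),
    # so their deaths are up-weighted relative to low-risk survivors (p = 0.9).
    data = [(True, 0.5, 1), (True, 0.5, 1), (True, 0.9, 0), (True, 0.9, 0),
            (False, 0.5, 1)]  # the censored subject contributes nothing directly
    naive = sum(d for o, p, d in data if o) / sum(1 for o, p, d in data if o)
    # The weighted risk exceeds the naive risk, as in the abstract's results.
    print(round(naive, 3), round(ipcw_risk(data), 3))
```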

  11. Process-based Cost Estimation for Ramjet/Scramjet Engines

    NASA Technical Reports Server (NTRS)

    Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John

    2003-01-01

    Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically, the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.

  12. Influence of safety measures on the risks of transporting dangerous goods through road tunnels.

    PubMed

    Saccomanno, Frank; Haastrup, Palle

    2002-12-01

    Quantitative risk assessment (QRA) models are used to estimate the risks of transporting dangerous goods and to assess the merits of introducing alternative risk reduction measures for different transportation scenarios and assumptions. A comprehensive QRA model recently was developed in Europe for application to road tunnels. This model can assess the merits of a limited number of "native safety measures." In this article, we introduce a procedure for extending its scope to include the treatment of a number of important "nonnative safety measures" of interest to tunnel operators and decisionmakers. Nonnative safety measures were not included in the original model specification. The suggested procedure makes use of expert judgment and Monte Carlo simulation methods to model uncertainty in the revised risk estimates. The results of a case study application are presented that involve the risks of transporting a given volume of flammable liquid through a 10-km road tunnel.

  13. Comparing motor-vehicle crash risk of EU and US vehicles.

    PubMed

    Flannagan, Carol A C; Bálint, András; Klinich, Kathleen D; Sander, Ulrich; Manary, Miriam A; Cuny, Sophie; McCarthy, Michael; Phan, Vuthy; Wallbank, Caroline; Green, Paul E; Sui, Bo; Forsman, Åsa; Fagerlind, Helen

    2018-08-01

    This study examined the hypotheses that passenger vehicles meeting European Union (EU) safety standards have similar crashworthiness to United States (US) -regulated vehicles in the US driving environment, and vice versa. The first step involved identifying appropriate databases of US and EU crashes that include in-depth crash information, such as estimation of crash severity using Delta-V and injury outcome based on medical records. The next step was to harmonize variable definitions and sampling criteria so that the EU data could be combined and compared to the US data using the same or equivalent parameters. Logistic regression models of the risk of a Maximum injury according to the Abbreviated Injury Scale of 3 or greater, or fatality (MAIS3+F) in EU-regulated and US-regulated vehicles were constructed. The injury risk predictions of the EU model and the US model were each applied to both the US and EU standard crash populations. Frontal, near-side, and far-side crashes were analyzed together (termed "front/side crashes") and a separate model was developed for rollover crashes. For the front/side model applied to the US standard population, the mean estimated risk for the US-vehicle model is 0.035 (sd = 0.012), and the mean estimated risk for the EU-vehicle model is 0.023 (sd = 0.016). When applied to the EU front/side population, the US model predicted a 0.065 risk (sd = 0.027), and the EU model predicted a 0.052 risk (sd = 0.025). For the rollover model applied to the US standard population, the US model predicted a risk of 0.071 (sd = 0.024), and the EU model predicted 0.128 risk (sd = 0.057). When applied to the EU rollover standard population, the US model predicted a 0.067 risk (sd = 0.024), and the EU model predicted 0.103 risk (sd = 0.040). The results based on these methods indicate that EU vehicles most likely have a lower risk of MAIS3+F injury in front/side impacts, while US vehicles most likely have a lower risk of MAIS3+F injury in rollovers.
These results should be interpreted with an understanding of the uncertainty of the estimates, the study limitations, and our recommendations for further study detailed in the report. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. The predictive performance of a path-dependent exotic-option credit risk model in the emerging market

    NASA Astrophysics Data System (ADS)

    Chen, Dar-Hsin; Chou, Heng-Chih; Wang, David; Zaabar, Rim

    2011-06-01

    Most empirical research of the path-dependent, exotic-option credit risk model focuses on developed markets. Taking Taiwan as an example, this study investigates the bankruptcy prediction performance of the path-dependent, barrier option model in the emerging market. We adopt Duan's (1994) [11], (2000) [12] transformed-data maximum likelihood estimation (MLE) method to directly estimate the unobserved model parameters, and compare the predictive ability of the barrier option model to the commonly adopted credit risk model, Merton's model. Our empirical findings show that the barrier option model is more powerful than Merton's model in predicting bankruptcy in the emerging market. Moreover, we find that the barrier option model predicts bankruptcy much better for highly-leveraged firms. Finally, our findings indicate that the prediction accuracy of the credit risk model can be improved by higher asset liquidity and greater financial transparency.
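The Merton benchmark that the barrier option model is compared against can be sketched compactly: equity is a call option on firm assets, and the "distance to default" maps to a default probability. The asset value, debt level, drift, and volatility below are illustrative inputs, not estimates from the Taiwanese data, and a true barrier (first-passage) variant would produce higher probabilities than this terminal-value version:

```python
import math
from statistics import NormalDist

# Merton-style default probability: P(assets < debt at the horizon),
# assuming log-normal asset dynamics. Inputs are illustrative only.

def merton_default_prob(assets, debt, mu, sigma, horizon=1.0):
    """Distance to default mapped through the standard normal CDF."""
    dd = (math.log(assets / debt) + (mu - 0.5 * sigma ** 2) * horizon) / (
        sigma * math.sqrt(horizon))
    return NormalDist().cdf(-dd)

if __name__ == "__main__":
    low_lev = merton_default_prob(assets=150.0, debt=100.0, mu=0.05, sigma=0.25)
    high_lev = merton_default_prob(assets=110.0, debt=100.0, mu=0.05, sigma=0.25)
    # A barrier option model triggers default the first time assets touch
    # the barrier, which matters most for the highly leveraged firm --
    # consistent with the abstract's finding on highly-leveraged firms.
    print(round(low_lev, 4), round(high_lev, 4))
```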

  15. Breast Density and Benign Breast Disease: Risk Assessment to Identify Women at High Risk of Breast Cancer.

    PubMed

    Tice, Jeffrey A; Miglioretti, Diana L; Li, Chin-Shang; Vachon, Celine M; Gard, Charlotte C; Kerlikowske, Karla

    2015-10-01

    Women with proliferative breast lesions are candidates for primary prevention, but few risk models incorporate benign findings to assess breast cancer risk. We incorporated benign breast disease (BBD) diagnoses into the Breast Cancer Surveillance Consortium (BCSC) risk model, the only breast cancer risk assessment tool that uses breast density. We developed and validated a competing-risk model using 2000 to 2010 SEER data for breast cancer incidence and 2010 vital statistics to adjust for the competing risk of death. We used Cox proportional hazards regression to estimate the relative hazards for age, race/ethnicity, family history of breast cancer, history of breast biopsy, BBD diagnoses, and breast density in the BCSC. We included 1,135,977 women age 35 to 74 years undergoing mammography with no history of breast cancer; 17% of the women had a prior breast biopsy. During a mean follow-up of 6.9 years, 17,908 women were diagnosed with invasive breast cancer. The BCSC BBD model slightly overpredicted risk (expected-to-observed ratio, 1.04; 95% CI, 1.03 to 1.06) and had modest discriminatory accuracy (area under the receiver operator characteristic curve, 0.665). Among women with proliferative findings, adding BBD to the model increased the proportion of women with an estimated 5-year risk of 3% or higher from 9.3% to 27.8% (P<.001). The BCSC BBD model accurately estimates women's risk for breast cancer using breast density and BBD diagnoses. Greater numbers of high-risk women eligible for primary prevention after BBD diagnosis are identified using the BCSC BBD model. © 2015 by American Society of Clinical Oncology.

  16. Breast Density and Benign Breast Disease: Risk Assessment to Identify Women at High Risk of Breast Cancer

    PubMed Central

    Tice, Jeffrey A.; Miglioretti, Diana L.; Li, Chin-Shang; Vachon, Celine M.; Gard, Charlotte C.; Kerlikowske, Karla

    2015-01-01

    Purpose Women with proliferative breast lesions are candidates for primary prevention, but few risk models incorporate benign findings to assess breast cancer risk. We incorporated benign breast disease (BBD) diagnoses into the Breast Cancer Surveillance Consortium (BCSC) risk model, the only breast cancer risk assessment tool that uses breast density. Methods We developed and validated a competing-risk model using 2000 to 2010 SEER data for breast cancer incidence and 2010 vital statistics to adjust for the competing risk of death. We used Cox proportional hazards regression to estimate the relative hazards for age, race/ethnicity, family history of breast cancer, history of breast biopsy, BBD diagnoses, and breast density in the BCSC. Results We included 1,135,977 women age 35 to 74 years undergoing mammography with no history of breast cancer; 17% of the women had a prior breast biopsy. During a mean follow-up of 6.9 years, 17,908 women were diagnosed with invasive breast cancer. The BCSC BBD model slightly overpredicted risk (expected-to-observed ratio, 1.04; 95% CI, 1.03 to 1.06) and had modest discriminatory accuracy (area under the receiver operator characteristic curve, 0.665). Among women with proliferative findings, adding BBD to the model increased the proportion of women with an estimated 5-year risk of 3% or higher from 9.3% to 27.8% (P < .001). Conclusion The BCSC BBD model accurately estimates women's risk for breast cancer using breast density and BBD diagnoses. Greater numbers of high-risk women eligible for primary prevention after BBD diagnosis are identified using the BCSC BBD model. PMID:26282663

  17. Drug development costs when financial risk is measured using the Fama-French three-factor model.

    PubMed

    Vernon, John A; Golec, Joseph H; Dimasi, Joseph A

    2010-08-01

    In a widely cited article, DiMasi, Hansen, and Grabowski (2003) estimate the average pre-tax cost of bringing a new molecular entity to market. Their base case estimate, excluding post-marketing studies, was $802 million (in $US 2000). Strikingly, almost half of this cost (or $399 million) is the cost of capital (COC) used to fund clinical development expenses to the point of FDA marketing approval. The authors used an 11% real COC computed using the capital asset pricing model (CAPM). But the CAPM is a single-factor risk model, and multi-factor risk models are the current state of the art in finance. Using the Fama-French three-factor model, we find the cost of drug development to be higher than the earlier estimate. Copyright (c) 2009 John Wiley & Sons, Ltd.

  18. Forecasting extinction risk with nonstationary matrix models.

    PubMed

    Gotelli, Nicholas J; Ellison, Aaron M

    2006-02-01

    Matrix population growth models are standard tools for forecasting population change and for managing rare species, but they are less useful for predicting extinction risk in the face of changing environmental conditions. Deterministic models provide point estimates of lambda, the finite rate of increase, as well as measures of matrix sensitivity and elasticity. Stationary matrix models can be used to estimate extinction risk in a variable environment, but they assume that the matrix elements are randomly sampled from a stationary (i.e., non-changing) distribution. Here we outline a method for using nonstationary matrix models to construct realistic forecasts of population fluctuation in changing environments. Our method requires three pieces of data: (1) field estimates of transition matrix elements, (2) experimental data on the demographic responses of populations to altered environmental conditions, and (3) forecasting data on environmental drivers. These three pieces of data are combined to generate a series of sequential transition matrices that emulate a pattern of long-term change in environmental drivers. Realistic estimates of population persistence and extinction risk can be derived from stochastic permutations of such a model. We illustrate the steps of this analysis with data from two populations of Sarracenia purpurea growing in northern New England. Sarracenia purpurea is a perennial carnivorous plant that is potentially at risk of local extinction because of increased nitrogen deposition. Long-term monitoring records or models of environmental change can be used to generate time series of driver variables under different scenarios of changing environments. Both manipulative and natural experiments can be used to construct a linking function that describes how matrix parameters change as a function of the environmental driver. 
This synthetic modeling approach provides quantitative estimates of extinction probability that have an explicit mechanistic basis.
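A bare-bones version of the nonstationary projection described above: a sequence of stage-structured (juvenile/adult) transition matrices whose fecundity declines along an environmental driver, with stochastic variation, replicated to estimate quasi-extinction risk. The matrix entries, driver schedule, and thresholds are invented for the sketch, not Sarracenia purpurea parameters:

```python
import random

# Nonstationary matrix projection: the transition matrix changes through
# time as an environmental driver (e.g. nitrogen deposition) worsens,
# with random noise on fecundity. All parameter values are illustrative.

def project(n, matrix):
    """One time step of a 2-stage matrix projection."""
    return [sum(matrix[i][j] * n[j] for j in range(2)) for i in range(2)]

def extinction_risk(declining=True, years=120, threshold=1.0, reps=500, seed=1):
    """Fraction of replicates whose total population falls below threshold."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(reps):
        n = [5.0, 5.0]                       # initial juveniles, adults
        for t in range(years):
            driver = min(1.0, t / 50.0) if declining else 0.0
            # Linking function: fecundity falls as the driver increases.
            fecundity = max(0.0, 1.2 - 2.0 * driver + rng.gauss(0.0, 0.2))
            m = [[0.2, fecundity], [0.3, 0.85]]
            n = project(n, m)
            if sum(n) < threshold:
                extinct += 1
                break
    return extinct / reps

if __name__ == "__main__":
    # Quasi-extinction risk under a worsening versus a stable driver.
    print(extinction_risk(declining=True), extinction_risk(declining=False))
```

The key design point, matching the abstract, is that the matrix is regenerated each year from the driver trajectory rather than resampled from a fixed (stationary) pool.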

  19. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    EPA Science Inventory

    BackgroundExposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a...

  20. Alternative models of DSM-5 PTSD: Examining diagnostic implications.

    PubMed

    Murphy, Siobhan; Hansen, Maj; Elklit, Ask; Yong Chen, Yoke; Raudzah Ghazali, Siti; Shevlin, Mark

    2018-04-01

    The factor structure of DSM-5 posttraumatic stress disorder (PTSD) has been extensively debated with evidence supporting the recently proposed seven-factor Hybrid model. However, despite myriad studies examining PTSD symptom structure few have assessed the diagnostic implications of these proposed models. This study aimed to generate PTSD prevalence estimates derived from the 7 alternative factor models and assess whether pre-established risk factors associated with PTSD (e.g., transportation accidents and sexual victimisation) produce consistent risk estimates. Seven alternative models were estimated within a confirmatory factor analytic framework using the PTSD Checklist for DSM-5 (PCL-5). Data were analysed from a Malaysian adolescent community sample (n = 481) of which 61.7% were female, with a mean age of 17.03 years. The results indicated that all models provided satisfactory model fit with statistical superiority for the Externalising Behaviours and seven-factor Hybrid models. The PTSD prevalence estimates varied substantially ranging from 21.8% for the DSM-5 model to 10.0% for the Hybrid model. Estimates of risk associated with PTSD were inconsistent across the alternative models, with substantial variation emerging for sexual victimisation. These findings have important implications for research and practice and highlight that more research attention is needed to examine the diagnostic implications emerging from the alternative models of PTSD. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Modeling returns volatility: Realized GARCH incorporating realized risk measure

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Ruan, Qingsong; Li, Jianfeng; Li, Ye

    2018-06-01

    This study applies realized GARCH models by introducing several risk measures of intraday returns into the measurement equation, to model the daily volatility of E-mini S&P 500 index futures returns. Besides using the conventional realized measures, realized volatility and realized kernel, as our benchmarks, we also use generalized realized risk measures: realized absolute deviation, and two realized tail risk measures, realized value-at-risk and realized expected shortfall. The empirical results show that realized GARCH models using the generalized realized risk measures provide better in-sample volatility estimation and substantially improved out-of-sample volatility forecasting. In particular, the realized expected shortfall performs best among the alternative realized measures. Our empirical results reveal that future volatility may be more attributable to present losses (risk measures). The results are robust to different sample estimation windows.
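The core of a realized GARCH model is a variance recursion driven by the lagged realized measure, paired with a measurement equation tying that measure back to the latent variance (x_t = xi + phi*h_t + u_t in the linear form). A sketch of the linear variance recursion with illustrative parameters, not values fitted to E-mini S&P 500 data; the realized measure x could be any of the candidates in the abstract (realized volatility, realized kernel, realized ES, ...):

```python
# Linear realized GARCH variance recursion (sketch):
#   h_t = omega + beta * h_{t-1} + gamma * x_{t-1}
# where x is the realized measure. Parameters are illustrative only.

def filter_variance(x, omega=0.02, beta=0.6, gamma=0.35, h0=1.0):
    """Conditional-variance path implied by a series of realized measures."""
    h = [h0]
    for t in range(1, len(x) + 1):
        h.append(omega + beta * h[t - 1] + gamma * x[t - 1])
    return h

if __name__ == "__main__":
    realized = [1.1, 0.9, 2.5, 1.8, 1.0]   # e.g. daily squared realized volatility
    path = filter_variance(realized)
    # The variance path rises after the realized-measure spike at t = 3,
    # which is how "present losses" feed next-day volatility forecasts.
    print([round(v, 3) for v in path])
```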

  2. Breast cancer risk from different mammography screening practices.

    PubMed

    Bijwaard, Harmen; Brenner, Alina; Dekkers, Fieke; van Dillen, Teun; Land, Charles E; Boice, John D

    2010-09-01

    Mammography screening is an accepted procedure for early detection of breast tumors among asymptomatic women. Since this procedure involves the use of X rays, it is itself potentially carcinogenic. Although there is general consensus about the benefit of screening for older women, screening practices differ between countries. In this paper radiation risks for these different practices are estimated using a new approach. We model breast cancer induction by ionizing radiation in a cohort of patients exposed to frequent X-ray examinations. The biologically based, mechanistic model provides a better foundation for the extrapolation of risks to different mammography screening practices than empirical models do. The model predicts that the excess relative risk (ERR) doubles when screening starts at age 40 instead of 50 and that a continuation of screening at ages 75 and higher carries little extra risk. The number of induced fatal breast cancers is estimated to be considerably lower than derived from epidemiological studies and from internationally accepted radiation protection risks. The present findings, if used in a risk-benefit analysis for mammography screening, would be more favorable to screening than estimates currently recommended for radiation protection. This has implications for the screening ages that are currently being reconsidered in several countries.

  3. Modeling risk of occupational zoonotic influenza infection in swine workers.

    PubMed

    Paccha, Blanca; Jones, Rachael M; Gibbs, Shawn; Kane, Michael J; Torremorell, Montserrat; Neira-Ramirez, Victor; Rabinowitz, Peter M

    2016-08-01

    Zoonotic transmission of influenza A virus (IAV) between swine and workers in swine production facilities may play a role in the emergence of novel influenza strains with pandemic potential. Guidelines to prevent transmission of influenza to swine workers have been developed, but there is a need for evidence-based decision-making about protective measures such as respiratory protection. A mathematical model was applied to estimate the risk of occupational IAV exposure to swine workers by contact and airborne transmission, and to evaluate the use of respirators to reduce transmission. The Markov model was used to simulate the transport and exposure of workers to IAV in a swine facility. A dose-response function was used to estimate the risk of infection. This approach is similar to methods previously used to estimate the risk of infection in human health care settings. This study uses concentrations of virus in air from field measurements collected during outbreaks of influenza in commercial swine facilities and analyzed by polymerase chain reaction. It was found that spending 25 min working in a barn during an influenza outbreak in a swine herd could be sufficient to cause zoonotic infection in a worker. However, this risk estimate was sensitive to estimates of viral infectivity to humans. Wearing an excellent-fitting N95 respirator reduced this risk, but with high aerosol levels the predicted risk of infection remained high under certain assumptions. The results of this analysis indicate that under the conditions studied, swine workers are at risk of zoonotic influenza infection. The use of an N95 respirator could reduce such risk. These findings have implications for risk assessment and preventive programs targeting swine workers. The exact level of risk remains uncertain, since our model may have overestimated the viability or infectivity of IAV. Additionally, the potential for partial immunity in swine workers associated with repeated low-dose exposures or from previous infection with other influenza strains was not considered. Further studies should explore these uncertainties.
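    The dose-response step described above can be sketched as a simple exponential model, P(infection) = 1 - exp(-k * dose), with respirator protection entering as a dose divisor. All parameter values below (air concentration, breathing rate, infectivity k, protection factor) are hypothetical placeholders, not the study's field measurements.

```python
import math

def infection_risk(conc, breathing_rate, minutes, infectivity, apf=1.0):
    """Exponential dose-response: P(infection) = 1 - exp(-k * dose).
    apf is the respirator's assigned protection factor (1 = no respirator)."""
    dose = conc * breathing_rate * minutes / apf
    return 1.0 - math.exp(-infectivity * dose)

# Hypothetical inputs: virus copies/m^3, m^3 inhaled per minute, 25 min in barn
unprotected = infection_risk(1e4, 0.012, 25, infectivity=1e-4)
with_n95 = infection_risk(1e4, 0.012, 25, infectivity=1e-4, apf=10)
```

    The sensitivity the abstract notes shows up directly here: the infectivity parameter multiplies the dose, so an order-of-magnitude uncertainty in k carries through to the risk estimate.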

  4. Projecting Individualized Absolute Invasive Breast Cancer Risk in US Hispanic Women

    PubMed Central

    John, Esther M.; Slattery, Martha L.; Gomez, Scarlett Lin; Yu, Mandi; LaCroix, Andrea Z.; Pee, David; Chlebowski, Rowan T.; Hines, Lisa M.; Thompson, Cynthia A.; Gail, Mitchell H.

    2017-01-01

    Background: There is no model to estimate absolute invasive breast cancer risk for Hispanic women. Methods: The San Francisco Bay Area Breast Cancer Study (SFBCS) provided data on Hispanic breast cancer case patients (533 US-born, 553 foreign-born) and control participants (464 US-born, 947 foreign-born). These data yielded estimates of relative risk (RR) and attributable risk (AR) separately for US-born and foreign-born women. Nativity-specific absolute risks were estimated by combining RR and AR information with nativity-specific invasive breast cancer incidence and competing mortality rates from the California Cancer Registry and Surveillance, Epidemiology, and End Results program to develop the Hispanic risk model (HRM). In independent data, we assessed model calibration through observed/expected (O/E) ratios, and we estimated discriminatory accuracy with the area under the receiver operating characteristic curve (AUC) statistic. Results: The US-born HRM included age at first full-term pregnancy, biopsy for benign breast disease, and family history of breast cancer; the foreign-born HRM also included age at menarche. The HRM estimated lower risks than the National Cancer Institute’s Breast Cancer Risk Assessment Tool (BCRAT) for US-born Hispanic women, but higher risks for foreign-born women. In independent data from the Women’s Health Initiative, the HRM was well calibrated for US-born women (O/E ratio = 1.07, 95% confidence interval [CI] = 0.81 to 1.40), but seemed to overestimate risk in foreign-born women (O/E ratio = 0.66, 95% CI = 0.41 to 1.07). The AUC was 0.564 (95% CI = 0.485 to 0.644) for US-born and 0.625 (95% CI = 0.487 to 0.764) for foreign-born women. Conclusions: The HRM is the first absolute risk model based entirely on data specific to Hispanic women by nativity. Further studies in Hispanic women are warranted to evaluate its validity. PMID:28003316
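    A Gail-type absolute risk calculation of the kind the HRM performs combines an age-specific cancer hazard, a relative risk from the woman's risk-factor profile, and competing mortality. A minimal discrete-time sketch, with hypothetical annual rates rather than the HRM's fitted values:

```python
def absolute_risk(hazard_bc, hazard_mort, rr=1.0):
    """Discrete-time absolute risk of invasive breast cancer over a series
    of age intervals, accounting for competing mortality."""
    surv, risk = 1.0, 0.0
    for h1, h2 in zip(hazard_bc, hazard_mort):
        h1 = h1 * rr
        risk += surv * h1            # develop cancer in this interval
        surv *= 1.0 - h1 - h2        # remain alive and cancer-free
    return risk

# Hypothetical annual hazards over 10 one-year intervals
bc = [0.002] * 10
mort = [0.005] * 10
baseline = absolute_risk(bc, mort)
elevated = absolute_risk(bc, mort, rr=2.0)
```

    Because competing mortality removes women from the at-risk population, the absolute risk is always below the naive sum of the annual cancer hazards.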

  5. Chronic beryllium disease and cancer risk estimates with uncertainty for beryllium released to the air from the Rocky Flats Plant.

    PubMed Central

    McGavran, P D; Rood, A S; Till, J E

    1999-01-01

    Beryllium was released into the air from routine operations and three accidental fires at the Rocky Flats Plant (RFP) in Colorado from 1958 to 1989. We evaluated environmental monitoring data and developed estimates of airborne concentrations and their uncertainties and calculated lifetime cancer risks and risks of chronic beryllium disease to hypothetical receptors. This article discusses exposure-response relationships for lung cancer and chronic beryllium disease. We assigned a distribution to cancer slope factor values based on the relative risk estimates from an occupational epidemiologic study used by the U.S. Environmental Protection Agency (EPA) to determine the slope factors. We used the Regional Atmospheric Transport Code for Hanford Emission Tracking (RATCHET) atmospheric transport model for exposure calculations because it is particularly well suited for long-term annual-average dispersion estimates and it incorporates spatially varying meteorologic and environmental parameters. We accounted for model prediction uncertainty by using several multiplicative stochastic correction factors that accounted for uncertainty in the dispersion estimate, the meteorology, deposition, and plume depletion. We used Monte Carlo techniques to propagate model prediction uncertainty through to the final risk calculations. We developed nine exposure scenarios of hypothetical but typical residents of the RFP area to consider the lifestyle, time spent outdoors, location, age, and sex of people who may have been exposed. We determined geometric mean incremental lifetime cancer incidence risk estimates for beryllium inhalation for each scenario. The risk estimates were < 10(-6). Predicted air concentrations were well below the current reference concentration derived by the EPA for beryllium sensitization. PMID:10464074
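    The uncertainty propagation described above, multiplicative stochastic correction factors combined by Monte Carlo, can be illustrated as follows. The distributions, factor count, and risk scaling are hypothetical stand-ins for the study's actual inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Four hypothetical lognormal correction factors (dispersion, meteorology,
# deposition, plume depletion), each with geometric mean 1 and GSD 1.5
factors = np.prod(rng.lognormal(0.0, np.log(1.5), size=(n, 4)), axis=1)

conc = 1e-4 * factors                          # corrected air concentration
slope = rng.lognormal(np.log(2.4), 0.3, n)     # slope-factor distribution
risk = conc * slope * 1e-3                     # illustrative risk scaling

gm_risk = np.exp(np.mean(np.log(risk)))        # geometric mean risk
```

    Multiplying independent lognormal factors keeps the product lognormal, which is why the study can summarize each scenario with a geometric mean risk.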

  6. A flexible Bayesian hierarchical model of preterm birth risk among US Hispanic subgroups in relation to maternal nativity and education

    PubMed Central

    2011-01-01

    Background Previous research has documented heterogeneity in the effects of maternal education on adverse birth outcomes by nativity and Hispanic subgroup in the United States. In this article, we considered the risk of preterm birth (PTB) using 9 years of vital statistics birth data from New York City. We employed finer categorizations of exposure than used previously and estimated the risk dose-response across the range of education by nativity and ethnicity. Methods Using Bayesian random effects logistic regression models with restricted quadratic spline terms for years of completed maternal education, we calculated and plotted the estimated posterior probabilities of PTB (gestational age < 37 weeks) for each year of education by ethnic and nativity subgroups adjusted for only maternal age, as well as with more extensive covariate adjustments. We then estimated the posterior risk difference between native and foreign born mothers by ethnicity over the continuous range of education exposures. Results The risk of PTB varied substantially by education, nativity and ethnicity. Native born groups showed higher absolute risk of PTB and declining risk associated with higher levels of education beyond about 10 years, as did foreign-born Puerto Ricans. For most other foreign born groups, however, risk of PTB was flatter across the education range. For Mexicans, Central Americans, Dominicans, South Americans and "Others", the protective effect of foreign birth diminished progressively across the educational range. Only for Puerto Ricans was there no nativity advantage for the foreign born, although small numbers of foreign born Cubans limited precision of estimates for that group. Conclusions Using flexible Bayesian regression models with random effects allowed us to estimate absolute risks without strong modeling assumptions. Risk comparisons for any sub-groups at any exposure level were simple to calculate. Shrinkage of posterior estimates through the use of random effects allowed for finer categorization of exposures without restricting joint effects to follow a fixed parametric scale. Although foreign born Hispanic women with the least education appeared to generally have low risk, this seems likely to be a marker for unmeasured environmental and behavioral factors, rather than a causally protective effect of low education itself. PMID:21504612

  7. A flexible Bayesian hierarchical model of preterm birth risk among US Hispanic subgroups in relation to maternal nativity and education.

    PubMed

    Kaufman, Jay S; MacLehose, Richard F; Torrone, Elizabeth A; Savitz, David A

    2011-04-19

    Previous research has documented heterogeneity in the effects of maternal education on adverse birth outcomes by nativity and Hispanic subgroup in the United States. In this article, we considered the risk of preterm birth (PTB) using 9 years of vital statistics birth data from New York City. We employed finer categorizations of exposure than used previously and estimated the risk dose-response across the range of education by nativity and ethnicity. Using Bayesian random effects logistic regression models with restricted quadratic spline terms for years of completed maternal education, we calculated and plotted the estimated posterior probabilities of PTB (gestational age < 37 weeks) for each year of education by ethnic and nativity subgroups adjusted for only maternal age, as well as with more extensive covariate adjustments. We then estimated the posterior risk difference between native and foreign born mothers by ethnicity over the continuous range of education exposures. The risk of PTB varied substantially by education, nativity and ethnicity. Native born groups showed higher absolute risk of PTB and declining risk associated with higher levels of education beyond about 10 years, as did foreign-born Puerto Ricans. For most other foreign born groups, however, risk of PTB was flatter across the education range. For Mexicans, Central Americans, Dominicans, South Americans and "Others", the protective effect of foreign birth diminished progressively across the educational range. Only for Puerto Ricans was there no nativity advantage for the foreign born, although small numbers of foreign born Cubans limited precision of estimates for that group. Using flexible Bayesian regression models with random effects allowed us to estimate absolute risks without strong modeling assumptions. Risk comparisons for any sub-groups at any exposure level were simple to calculate. Shrinkage of posterior estimates through the use of random effects allowed for finer categorization of exposures without restricting joint effects to follow a fixed parametric scale. Although foreign born Hispanic women with the least education appeared to generally have low risk, this seems likely to be a marker for unmeasured environmental and behavioral factors, rather than a causally protective effect of low education itself.

  8. Evaluating changes to reservoir rule curves using historical water-level data

    USGS Publications Warehouse

    Mower, Ethan; Miranda, Leandro E.

    2013-01-01

    Flood control reservoirs are typically managed through rule curves (i.e. target water levels) which control the storage and release timing of flood waters. Changes to rule curves are often contemplated and requested by various user groups and management agencies with no information available about the actual flood risk of such requests. Methods of estimating flood risk in reservoirs are not easily available to those unfamiliar with hydrological models that track water movement through a river basin. We developed a quantile regression model that uses readily available daily water-level data to estimate risk of spilling. Our model provided a relatively simple process for estimating the maximum applicable water level under a specific flood risk for any day of the year. This water level represents an upper-limit umbrella under which water levels can be operated in a variety of ways. Our model allows the visualization of water-level management under a user-specified flood risk and provides a framework for incorporating the effect of a changing environment on water-level management in reservoirs, but is not designed to replace existing hydrological models. The model can improve communication and collaboration among agencies responsible for managing natural resources dependent on reservoir water levels.
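    A simplified version of this idea replaces the paper's quantile regression with a plain empirical quantile of historical water-level rises: the rule-curve upper limit for a day is the spillway elevation minus the rise that is exceeded only with the chosen flood risk. All numbers below are hypothetical:

```python
import numpy as np

def rule_curve_limit(historical_rises, spill_elevation, flood_risk=0.05):
    """Highest pool elevation for which the chance that a historical-size
    rise reaches the spillway is at most flood_risk."""
    critical_rise = np.quantile(historical_rises, 1.0 - flood_risk)
    return spill_elevation - critical_rise

# Hypothetical record of seasonal rises (metres) and a 100 m spillway
rng = np.random.default_rng(1)
rises = rng.gamma(shape=2.0, scale=0.5, size=300)
limit_5pct = rule_curve_limit(rises, spill_elevation=100.0, flood_risk=0.05)
limit_1pct = rule_curve_limit(rises, spill_elevation=100.0, flood_risk=0.01)
```

    Tightening the acceptable flood risk lowers the permissible pool level; the paper's quantile regression additionally lets this limit vary smoothly by day of year.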

  9. Estimation of the Dose and Dose Rate Effectiveness Factor

    NASA Technical Reports Server (NTRS)

    Chappell, L.; Cucinotta, F. A.

    2013-01-01

    Current models to estimate radiation risk use the Life Span Study (LSS) cohort, which received high doses and high dose rates of radiation. Transferring risks from these high dose rates to the low doses and dose rates received by astronauts in space is a source of uncertainty in our risk calculations. The solid cancer models recommended by BEIR VII [1], UNSCEAR [2], and Preston et al [3] are fitted adequately by a linear dose response model, which implies that low doses and dose rates would be estimated the same as high doses and dose rates. However, animal and cell experiments imply there should be curvature in the dose response curve for tumor induction. Furthermore, animal experiments that directly compare acute to chronic exposures show that chronic exposures produce lower increases in tumor induction than acute exposures. A dose and dose rate effectiveness factor (DDREF) has been estimated and applied to transfer risks from the high doses and dose rates of the LSS cohort to low doses and dose rates such as those from missions in space. The BEIR VII committee [1] combined DDREF estimates from the LSS cohort and animal experiments using Bayesian methods to arrive at its recommended DDREF value of 1.5, with uncertainty. We reexamined the animal data considered by BEIR VII and included additional animal data and human chromosome aberration data to improve the estimate of the DDREF. Several experiments chosen by BEIR VII were deemed inappropriate for application to human risk models of solid cancer risk. Animal tumor experiments performed by Ullrich et al [4], Alpen et al [5], and Grahn et al [6] were analyzed to estimate the DDREF. Human chromosome aberration experiments performed on a sample of astronauts within NASA were also available to estimate the DDREF. The LSS cohort results reported by BEIR VII were combined with the new radiobiology results using Bayesian methods.
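    Under a linear-quadratic dose response, alpha*D + beta*D^2, the DDREF at an acute dose D reduces to the closed form 1 + (beta/alpha)*D, which is one simple way to see how curvature drives the factor (BEIR VII's full Bayesian combination is more involved; the curvature value below is hypothetical):

```python
def ddref(alpha, beta, acute_dose):
    """DDREF under a linear-quadratic response alpha*D + beta*D**2:
    acute risk per unit dose divided by the low-dose-rate (linear) risk
    per unit dose, i.e. 1 + (beta/alpha)*D."""
    return 1.0 + (beta / alpha) * acute_dose

# Hypothetical curvature beta/alpha = 0.5 per Gy, evaluated at 1 Gy
d = ddref(alpha=1.0, beta=0.5, acute_dose=1.0)  # -> 1.5
```

    With zero curvature (beta = 0) the factor collapses to 1, matching the observation that a purely linear fit implies no dose-rate reduction.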

  10. Evaluating cardiovascular mortality in type 2 diabetes patients: an analysis based on competing risks Markov chains and additive regression models.

    PubMed

    Rosato, Rosalba; Ciccone, G; Bo, S; Pagano, G F; Merletti, F; Gregori, D

    2007-06-01

    Type 2 diabetes represents a condition significantly associated with increased cardiovascular mortality. The aims of the study are: (i) to estimate the cumulative incidence function for cause-specific mortality using the Cox and Aalen models; (ii) to describe how the prediction of cardiovascular or other-cause mortality changes for patients with different patterns of covariates; (iii) to show whether different statistical methods may give different results. Cox and Aalen additive regression models, through the Markov chain approach, are used to estimate the cause-specific hazard for cardiovascular or other-cause mortality in a cohort of 2865 type 2 diabetic patients without insulin treatment. The models are compared in the estimation of the risk of death for patients of different severity. For younger patients with a better covariate profile, the cumulative incidence function estimated by the Cox and Aalen models was almost the same; for patients with the worst covariate profile, the models gave different results: at the end of follow-up, the cardiovascular mortality rate estimated by the Cox and Aalen models was 0.26 [95% confidence interval (CI) = 0.21-0.31] and 0.14 (95% CI = 0.09-0.18), respectively. The standard Cox and Aalen models capture the risk process equally well for patients with average profiles of co-morbidities. The Aalen model, in addition, is shown to be better at identifying cause-specific risk of death for patients with more severe clinical profiles. This result is relevant in the development of analytic tools for research and resource management within diabetes care.

  11. Population viability analysis for endangered Roanoke logperch

    USGS Publications Warehouse

    Roberts, James H.; Angermeier, Paul; Anderson, Gregory B.

    2016-01-01

    A common strategy for recovering endangered species is ensuring that populations exceed the minimum viable population size (MVP), a demographic benchmark that theoretically ensures low long-term extinction risk. One method of establishing MVP is population viability analysis, a modeling technique that simulates population trajectories and forecasts extinction risk based on a series of biological, environmental, and management assumptions. Such models also help identify key uncertainties that have a large influence on extinction risk. We used stochastic count-based simulation models to explore extinction risk, MVP, and the possible benefits of alternative management strategies in populations of Roanoke logperch Percina rex, an endangered stream fish. Estimates of extinction risk were sensitive to the assumed population growth rate and model type, carrying capacity, and catastrophe regime (frequency and severity of anthropogenic fish kills), whereas demographic augmentation did little to reduce extinction risk. Under density-dependent growth, the estimated MVP for Roanoke logperch ranged from 200 to 4200 individuals, depending on the assumed severity of catastrophes. Thus, depending on the MVP threshold, anywhere from two to all five of the logperch populations we assessed were projected to be viable. Despite this uncertainty, these results help identify populations with the greatest relative extinction risk, as well as management strategies that might reduce this risk the most, such as increasing carrying capacity and reducing fish kills. Better estimates of population growth parameters and catastrophe regimes would facilitate the refinement of MVP and extinction-risk estimates, and they should be a high priority for future research on Roanoke logperch and other imperiled stream-fish species.
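    A stochastic count-based simulation of the kind described can be sketched in a few lines: lognormal environmental variation in growth, a ceiling carrying capacity, random catastrophes (fish kills), and a quasi-extinction threshold. Parameter values here are hypothetical, not the fitted Roanoke logperch estimates:

```python
import numpy as np

def extinction_risk(n0, r_mean, r_sd, carrying_cap, cat_prob, cat_severity,
                    years=100, n_sims=2000, quasi_ext=50, seed=0):
    """Fraction of simulated trajectories that fall below a quasi-extinction
    threshold, with lognormal growth, a ceiling capacity, and catastrophes."""
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(n_sims):
        n = float(n0)
        for _ in range(years):
            n *= np.exp(rng.normal(r_mean, r_sd))     # environmental noise
            if rng.random() < cat_prob:               # fish kill
                n *= 1.0 - cat_severity
            n = min(n, carrying_cap)
            if n < quasi_ext:
                extinct += 1
                break
    return extinct / n_sims

risk_mild = extinction_risk(1000, 0.01, 0.15, 2000, cat_prob=0.02, cat_severity=0.5)
risk_severe = extinction_risk(1000, 0.01, 0.15, 2000, cat_prob=0.10, cat_severity=0.9)
```

    Varying the catastrophe regime in this way reproduces the abstract's central sensitivity: extinction risk, and hence the implied MVP, depends strongly on how frequent and severe fish kills are assumed to be.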

  12. Estimating the decline in excess risk of chronic obstructive pulmonary disease following quitting smoking - a systematic review based on the negative exponential model.

    PubMed

    Lee, Peter N; Fry, John S; Forey, Barbara A

    2014-03-01

    We quantified the decline in COPD risk following quitting using the negative exponential model, as previously carried out for other smoking-related diseases. We identified 14 blocks of RRs (from 11 studies) comparing current smokers, former smokers (by time quit) and never smokers, some studies providing sex-specific blocks. Corresponding pseudo-numbers of cases and controls/at risk formed the data for model-fitting. We estimated the half-life (H, the time since quitting at which the excess risk becomes half that of a continuing smoker) for each block, except for one where no decline with quitting was evident and H was not estimable. For the remaining 13 blocks, goodness-of-fit to the model was generally adequate, the combined estimate of H being 13.32 (95% CI 11.86-14.96) years. There was no heterogeneity in H, overall or by the various sources studied. Sensitivity analyses allowing for reverse causation or different assumed times for the final quitting period had little effect on the results. The model summarizes the quitting data well. The estimate of 13.32 years is substantially larger than recent estimates of 4.40 years for ischaemic heart disease and 4.78 years for stroke, and also larger than the 9.93 years for lung cancer. Heterogeneity was unimportant for COPD, unlike for the other three diseases. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
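    The negative exponential model has a convenient closed form: the remaining fraction of a continuing smoker's excess relative risk t years after quitting is exp(-ln 2 * t / H), so the excess risk halves every H years. A minimal sketch using the combined estimate H = 13.32 years:

```python
import math

def excess_rr_fraction(time_quit, half_life=13.32):
    """Fraction of the continuing smoker's excess relative risk remaining
    time_quit years after quitting: exp(-ln 2 * t / H)."""
    return math.exp(-math.log(2.0) * time_quit / half_life)

frac_at_half_life = excess_rr_fraction(13.32)   # excess risk halved
frac_at_30_years = excess_rr_fraction(30.0)     # well under a quarter remains
```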

  13. Heterogeneity and Risk Sharing in Village Economies*

    PubMed Central

    Chiappori, Pierre-André; Samphantharak, Krislert; Schulhofer-Wohl, Sam; Townsend, Robert M.

    2013-01-01

    We show how to use panel data on household consumption to directly estimate households’ risk preferences. Specifically, we measure heterogeneity in risk aversion among households in Thai villages using a full risk-sharing model, which we then test allowing for this heterogeneity. There is substantial, statistically significant heterogeneity in estimated risk preferences. Full insurance cannot be rejected. As the risk sharing, as-if-complete-markets theory might predict, estimated risk preferences are unrelated to wealth or other characteristics. The heterogeneity matters for policy: Although the average household would benefit from eliminating village-level risk, less-risk-averse households who are paid to absorb that risk would be worse off by several percent of household consumption. PMID:24932226

  14. Investigating Gender Differences under Time Pressure in Financial Risk Taking

    PubMed Central

    Xie, Zhixin; Page, Lionel; Hardy, Ben

    2017-01-01

    There is a significant gender imbalance on financial trading floors. This motivated us to investigate gender differences in financial risk taking under pressure. We used a well-established approach from behavioral economics to analyze a series of risky monetary choices by male and female participants with and without time pressure. We also used the second-to-fourth digit ratio (2D:4D) and face width-to-height ratio (fWHR) as correlates of pre-natal exposure to testosterone. We constructed a structural model and estimated the participants' risk attitudes and probability perceptions via maximum likelihood estimation under both expected utility (EU) and rank-dependent utility (RDU) models. In line with existing research, we found that male participants are less risk averse and that the gender gap in risk attitudes increases under moderate time pressure. We found that female participants with lower 2D:4D ratios and higher fWHR are less risk averse in RDU estimates. Males with lower 2D:4D ratios were less risk averse in EU estimations, but more risk averse using RDU estimates. We also observe that men whose ratios indicate a greater prenatal exposure to testosterone exhibit greater optimism and overestimation of small probabilities of success. PMID:29326566

  15. A Web-Based System for Bayesian Benchmark Dose Estimation.

    PubMed

    Shao, Kan; Shapiro, Andrew J

    2018-01-11

    Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with the U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations. The BBMD system is a useful alternative tool for estimating BMD, with additional functionalities for BMD analysis based on the most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend toward probabilistic risk assessment. https://doi.org/10.1289/EHP1289.
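    For one simple member of the dichotomous model families such tools fit, the quantal-linear model P(d) = g + (1 - g)(1 - exp(-b*d)), the BMD has a closed form, because the extra risk works out to 1 - exp(-b*d) regardless of the background g. The parameter values below are hypothetical:

```python
import math

def bmd_quantal_linear(b, bmr=0.10):
    """BMD under P(d) = g + (1 - g) * (1 - exp(-b * d)): the extra risk
    (P(d) - P(0)) / (1 - P(0)) equals 1 - exp(-b * d), so solving
    1 - exp(-b * BMD) = bmr gives BMD = -ln(1 - bmr) / b."""
    return -math.log(1.0 - bmr) / b

bmd = bmd_quantal_linear(b=0.02, bmr=0.10)   # dose at 10% extra risk
```

    In a Bayesian system like BBMD, each MCMC draw of b yields one such BMD, so the posterior sample of b maps directly to a distributional BMD estimate.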

  16. Association of Coronary Artery Calcification with Estimated Coronary Heart Disease Risk from Prediction Models in a Community-Based Sample of Japanese Men: The Shiga Epidemiological Study of Subclinical Atherosclerosis (SESSA).

    PubMed

    Fujiyoshi, Akira; Arima, Hisatomi; Tanaka-Mizuno, Sachiko; Hisamatsu, Takahashi; Kadowaki, Sayaka; Kadota, Aya; Zaid, Maryam; Sekikawa, Akira; Yamamoto, Takashi; Horie, Minoru; Miura, Katsuyuki; Ueshima, Hirotsugu

    2017-12-05

    The clinical significance of coronary artery calcification (CAC) is not fully determined in general East Asian populations where background coronary heart disease (CHD) is less common than in USA/Western countries. We cross-sectionally assessed the association between CAC and estimated CHD risk as well as each major risk factor in general Japanese men. Participants were 996 randomly selected Japanese men aged 40-79 y, free of stroke, myocardial infarction, or revascularization. We examined an independent relationship between each risk factor used in prediction models and CAC score ≥100 by logistic regression. We then divided the participants into quintiles of estimated CHD risk per prediction model to calculate odds ratio of having CAC score ≥100. Receiver operating characteristic curve and c-index were used to examine discriminative ability of prevalent CAC for each prediction model. Age, smoking status, and systolic blood pressure were significantly associated with CAC score ≥100 in the multivariable analysis. The odds of having CAC score ≥100 were higher for those in higher quintiles in all prediction models (p-values for trend across quintiles <0.0001 for all models). All prediction models showed fair and similar discriminative abilities to detect CAC score ≥100, with similar c-statistics (around 0.70). In a community-based sample of Japanese men free of CHD and stroke, CAC score ≥100 was significantly associated with higher estimated CHD risk by prediction models. This finding supports the potential utility of CAC as a biomarker for CHD in a general Japanese male population.

  17. A screening-level modeling approach to estimate nitrogen loading and standard exceedance risk, with application to the Tippecanoe River watershed, Indiana

    EPA Science Inventory

    This paper presents a screening-level modeling approach that can be used to rapidly estimate nutrient loading and assess numerical nutrient standard exceedance risk of surface waters leading to potential classification as impaired for designated use. It can also be used to explor...

  18. Methods to assess performance of models estimating risk of death in intensive care patients: a review.

    PubMed

    Cook, D A

    2006-04-01

    Models that estimate the probability of death of intensive care unit patients can be used to stratify patients according to the severity of their condition and to control for casemix and severity of illness. These models have been used for risk adjustment in quality monitoring, administration, management and research, and as an aid to clinical decision making. Models such as the Mortality Prediction Model family, SAPS II, APACHE II, APACHE III and the organ system failure models provide estimates of the probability of in-hospital death of ICU patients. This review examines methods to assess the performance of these models. The key attributes of a model are discrimination (the accuracy of the ranking in order of probability of death) and calibration (the extent to which the model's prediction of probability of death reflects the true risk of death). These attributes should be assessed in existing models that predict the probability of patient mortality, and in any subsequent model that is developed for the purposes of estimating these probabilities. The literature contains a range of approaches for assessment, which are reviewed, and a survey of the methodologies used in studies of intensive care mortality models is presented. The systematic approach used by the Standards for Reporting Diagnostic Accuracy provides a framework to incorporate these theoretical considerations of model assessment, and recommendations are made for the evaluation and presentation of the performance of models that estimate the probability of death of intensive care patients.
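    The two attributes can be illustrated with a self-contained sketch: the c-statistic (discrimination) as the probability that a death outranks a survivor in predicted risk, and the observed/expected death ratio (calibration-in-the-large). The predictions below are hypothetical:

```python
def c_statistic(probs, outcomes):
    """Discrimination: probability a randomly chosen death got a higher
    predicted risk than a randomly chosen survivor (ties count 0.5)."""
    deaths = [p for p, y in zip(probs, outcomes) if y == 1]
    survivors = [p for p, y in zip(probs, outcomes) if y == 0]
    concordant = 0.0
    for pd in deaths:
        for ps in survivors:
            concordant += 1.0 if pd > ps else 0.5 if pd == ps else 0.0
    return concordant / (len(deaths) * len(survivors))

def observed_expected(probs, outcomes):
    """Calibration-in-the-large: observed deaths divided by the sum of
    predicted probabilities (1.0 indicates perfect overall calibration)."""
    return sum(outcomes) / sum(probs)

# Hypothetical predictions for six ICU patients (1 = died)
p = [0.9, 0.8, 0.6, 0.3, 0.2, 0.1]
y = [1, 1, 0, 1, 0, 0]
auc = c_statistic(p, y)
oe = observed_expected(p, y)
```

    A model can score well on one attribute and poorly on the other, which is why the review treats discrimination and calibration as separate requirements.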

  19. Gender differences in injury severity risks in crashes at signalized intersections.

    PubMed

    Obeng, K

    2011-07-01

    This paper analyzes gender differences in crash injury severity risks using data for signalized intersections. It estimates gender-specific models for injury severity risks and finds that driver condition, type of crash, type of vehicle driven and vehicle safety features have different effects on females' and males' injury severity risks. It also finds some variables that are significantly related to females' injury severity risks but not males', and others that affect males' injury severity risks but not females'. It concludes that better and more in-depth information about gender differences in injury severity risks is gained by estimating separate models for females and males. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Estimation and prediction under local volatility jump-diffusion model

    NASA Astrophysics Data System (ADS)

    Kim, Namhyoung; Lee, Younhee

    2018-02-01

    Volatility is an important factor in operating a company and managing risk. In portfolio optimization and in risk hedging with options, option values are evaluated using a volatility model. Various attempts have been made to predict option values. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model, and apply it to both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, stochastic volatility model, and local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.

  1. Modelling second malignancy risks from low dose rate and high dose rate brachytherapy as monotherapy for localised prostate cancer.

    PubMed

    Murray, Louise; Mason, Joshua; Henry, Ann M; Hoskin, Peter; Siebert, Frank-Andre; Venselaar, Jack; Bownes, Peter

    2016-08-01

    To estimate the risks of radiation-induced rectal and bladder cancers following low dose rate (LDR) and high dose rate (HDR) brachytherapy as monotherapy for localised prostate cancer and compare them to external beam radiotherapy techniques. LDR and HDR brachytherapy monotherapy plans were generated for three prostate CT datasets. Second cancer risks were assessed using Schneider's concept of organ equivalent dose. LDR risks were assessed according to a mechanistic model and a bell-shaped model. HDR risks were assessed according to a bell-shaped model. Relative risks and excess absolute risks were estimated and compared to external beam techniques. Excess absolute risks of second rectal or bladder cancer were low for both LDR (irrespective of the model used for calculation) and HDR techniques. Average excess absolute risks of second rectal and bladder cancer for LDR brachytherapy were 0.71 and 0.84 per 10,000 person-years (PY) respectively according to the mechanistic model, and 0.47 and 0.78 per 10,000 PY respectively according to the bell-shaped model. For HDR, the average excess absolute risks for second rectal and bladder cancers were 0.74 and 1.62 per 10,000 PY respectively. The absolute differences between techniques were very low and clinically irrelevant. Compared to external beam prostate radiotherapy techniques, LDR and HDR brachytherapy resulted in the lowest risks of second rectal and bladder cancer. This study shows both LDR and HDR brachytherapy monotherapy result in low estimated risks of radiation-induced rectal and bladder cancer. LDR resulted in lower bladder cancer risks than HDR, and lower or similar risks of rectal cancer. In absolute terms these differences between techniques were very small. Compared to external beam techniques, second rectal and bladder cancer risks were lowest for brachytherapy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. NASA space cancer risk model-2014: Uncertainties due to qualitative differences in biological effects of HZE particles

    NASA Astrophysics Data System (ADS)

    Cucinotta, Francis

    Uncertainties in estimating health risks from exposures to galactic cosmic rays (GCR), composed of protons and high charge and energy (HZE) nuclei, are an important limitation to long duration space travel. HZE nuclei produce both qualitative and quantitative differences in biological effects compared to terrestrial radiation, leading to large uncertainties in predicting risks to humans. Our NASA Space Cancer Risk Model-2012 (NSCR-2012) for estimating lifetime cancer risks from space radiation included several new features compared to earlier models from the National Council on Radiation Protection and Measurements (NCRP) used at NASA. New features of NSCR-2012 included the introduction of NASA-defined radiation quality factors based on track structure concepts, a Bayesian analysis of the dose and dose-rate reduction effectiveness factor (DDREF) and its uncertainty, and the use of a never-smoker population to represent astronauts. However, NSCR-2012 did not include estimates of the role of qualitative differences between HZE particles and low-LET radiation. In this report we discuss evidence for non-targeted effects increasing cancer risks at space-relevant HZE particle absorbed doses in tissue (<0.2 Gy), and for increased tumor lethality due to the propensity for higher rates of metastatic tumors from high-LET radiation suggested by animal experiments. The NSCR-2014 model considers how these qualitative differences modify the overall probability distribution functions (PDF) for cancer mortality risk estimates from space radiation. Predictions of NSCR-2014 for International Space Station missions and Mars exploration will be described and compared to those of our earlier NSCR-2012 model.

  3. Relative risk for HIV in India - An estimate using conditional auto-regressive models with Bayesian approach.

    PubMed

    Kandhasamy, Chandrasekaran; Ghosh, Kaushik

    2017-02-01

    Indian states are currently classified into HIV-risk categories based on the observed prevalence counts, the percentage of infected attendees in antenatal clinics, and the percentage of infected high-risk individuals. This method, however, does not account for the spatial dependence among the states, nor does it provide any measure of statistical uncertainty. We provide an alternative model-based approach to address these issues. Our method uses Poisson log-normal models having various conditional autoregressive structures with neighborhood-based and distance-based weight matrices, and incorporates all available covariate information. We use R and WinBUGS software to fit these models to the 2011 HIV data. Based on the Deviance Information Criterion, the convolution model using the distance-based weight matrix and covariate information on female sex workers, literacy rate and intravenous drug users is found to have the best fit. The relative risk of HIV for the various states is estimated using the best model and the states are then classified into risk categories based on these estimated values. An HIV risk map of India is constructed based on these results. The choice of the final model suggests that an HIV control strategy which focuses on female sex workers, intravenous drug users and literacy rate would be most effective. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Monte Carlo simulation of single accident airport risk profile

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A computer simulation model was developed for estimating the potential economic impacts of a carbon fiber release upon facilities within an 80 kilometer radius of a major airport. The model simulated the possible range of release conditions and the resulting dispersion of the carbon fibers. Each iteration of the model generated a specific release scenario, which would cause a specific amount of dollar loss to the surrounding community. Through repeated iterations, a risk profile was generated, showing the probability distribution of losses from one accident. Using accident probability estimates, the risk profile for annual losses was derived. The mechanics of the simulation model are described, along with the required input data and the risk profiles generated for the 26 large hub airports.

  5. Assessment of the agreement between the Framingham and DAD risk equations for estimating cardiovascular risk in adult Africans living with HIV infection: a cross-sectional study.

    PubMed

    Noumegni, Steve Raoul; Ama, Vicky Jocelyne Moor; Assah, Felix K; Bigna, Jean Joel; Nansseu, Jobert Richie; Kameni, Jenny Arielle M; Katte, Jean-Claude; Dehayem, Mesmin Y; Kengne, Andre Pascal; Sobngwi, Eugene

    2017-01-01

    Absolute cardiovascular disease (CVD) risk evaluation using multivariable CVD risk models is increasingly advocated in people with HIV, in whom existing models remain largely untested. We assessed the agreement between the general-population-derived Framingham CVD risk equation and the HIV-specific Data collection on Adverse effects of anti-HIV Drugs (DAD) CVD risk equation in HIV-infected adult Cameroonians. This cross-sectional study involved 452 HIV-infected adults recruited at the HIV day-care unit of the Yaoundé Central Hospital, Cameroon. The 5-year projected CVD risk was estimated for each participant using the DAD and Framingham CVD risk equations. Agreement between estimates from these equations was assessed using the Spearman correlation and Cohen's kappa coefficient. The mean age of participants (80% females) was 44.4 ± 9.8 years. Most participants (88.5%) were on antiretroviral treatment, with 93.3% of them receiving a first-line regimen. The most frequent cardiovascular risk factors were abdominal obesity (43.1%) and dyslipidemia (33.8%). The median estimated 5-year CVD risk was 0.6% (25th-75th percentiles: 0.3-1.3) using the DAD equation and 0.7% (0.2-2.0) with the Framingham equation. The Spearman correlation between the two estimates was 0.93 (p < 0.001). The kappa statistic was 0.61 (95% confidence interval: 0.54-0.67) for the agreement between the two equations in classifying participants across risk categories defined as low, moderate, high and very high. Most participants had a low-to-moderate estimated CVD risk, with an acceptable level of agreement between the general and HIV-specific equations in ranking CVD risk.
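    The agreement statistic used here has a simple closed form. The sketch below computes an unweighted Cohen's kappa for two sets of risk-category assignments; the category labels and data are invented for illustration, not the study's.

```python
# Illustrative sketch: unweighted Cohen's kappa between two risk-category
# assignments (e.g. from two CVD risk equations). Invented toy data.
from collections import Counter

def cohens_kappa(a, b):
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_exp = sum(ca[k] * cb.get(k, 0) for k in ca) / n ** 2   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

dad = ["low", "low", "moderate", "high", "low", "moderate"]
fram = ["low", "moderate", "moderate", "high", "low", "low"]
print(round(cohens_kappa(dad, fram), 3))   # 0.455 -> moderate agreement
```

    Kappa corrects the raw agreement rate for the agreement expected by chance given each rater's marginal category frequencies.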

  6. Model estimation of claim risk and premium for motor vehicle insurance by using Bayesian method

    NASA Astrophysics Data System (ADS)

    Sukono; Riaman; Lesmana, E.; Wulandari, R.; Napitupulu, H.; Supian, S.

    2018-01-01

    Risk models need to be estimated by the insurance company in order to predict the magnitude of claims and determine the premiums charged to the insured. This is intended to prevent losses in the future. In this paper, we discuss the estimation of claim risk models and motor vehicle insurance premiums using a Bayesian approach. It is assumed that the frequency of claims follows a Poisson distribution, while the claim amounts are assumed to follow a Gamma distribution. The parameters of the claim frequency and claim amount distributions are estimated using Bayesian methods. The estimated frequency and claim amount distributions are then used to estimate the aggregate risk model as well as its mean and variance. The estimated mean and variance of the aggregate risk are in turn used to predict the premium to be charged to the insured. Based on the analysis results, the frequency of claims follows a Poisson distribution with parameter λ = 5.827, while the claim amounts follow a Gamma distribution with parameters p = 7.922 and θ = 1.414. The resulting mean and variance of the aggregate claims are IDR 32,667,489.88 and IDR 38,453,900,000,000.00 respectively. The predicted pure premium to be charged to the insured is IDR 2,722,290.82. These predictions of aggregate claims and premiums can serve as a reference for the insurance company's decision-making in the management of reserves and premiums for motor vehicle insurance.
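    The aggregate moments of such a compound Poisson-Gamma model follow from standard formulas. The sketch below uses the paper's fitted parameter values but makes no claim about the monetary scaling of the Gamma severity, so it does not attempt to reproduce the IDR amounts above.

```python
# Sketch of compound Poisson-Gamma aggregate risk moments:
#   S = X_1 + ... + X_N,  N ~ Poisson(lambda),  X_i ~ Gamma(p, theta).
lam = 5.827        # Poisson claim-frequency parameter (paper's estimate)
shape = 7.922      # Gamma shape p (paper's estimate)
scale = 1.414      # Gamma scale theta (paper's estimate; units assumed)

mean_claim = shape * scale                     # E[X] = p * theta
ex2 = shape * scale ** 2 + mean_claim ** 2     # E[X^2] = Var[X] + E[X]^2

mean_aggregate = lam * mean_claim              # E[S] = lambda * E[X]
var_aggregate = lam * ex2                      # Var[S] = lambda * E[X^2]
print(mean_aggregate, var_aggregate)
```

    A loaded premium can then be set, for example, as the aggregate mean plus a multiple of the aggregate standard deviation.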

  7. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function

    NASA Astrophysics Data System (ADS)

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2017-02-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. They overestimated seasonal mean concentrations, however, and did not simulate all of the peak concentrations. This issue could be resolved by adding more variables that affect the prevalence and internal maturity of pollens.

  8. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function.

    PubMed

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2017-02-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. They overestimated seasonal mean concentrations, however, and did not simulate all of the peak concentrations. This issue could be resolved by adding more variables that affect the prevalence and internal maturity of pollens.
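    A minimal sketch of the two-stage scheme described above: a Weibull density supplies the seasonal envelope of pollen potential, and a regression on weather scales it into a daily concentration and risk grade. All coefficients and grade cut-offs below are invented; they are neither the Korean risk criteria nor the paper's fitted values.

```python
# Invented-coefficient sketch: Weibull seasonal envelope x weather
# regression -> daily concentration -> risk grade.
import math

def weibull_pdf(day, shape, scale):
    x = day / scale
    return (shape / scale) * x ** (shape - 1) * math.exp(-x ** shape)

def daily_concentration(day, temp, wind, shape=2.5, scale=30.0,
                        b0=500.0, b1=20.0, b2=-15.0):
    potential = weibull_pdf(day, shape, scale)   # seasonal pollen potential
    weather = b0 + b1 * temp + b2 * wind         # illustrative regression
    return max(0.0, potential * weather)

def risk_grade(conc, thresholds=(5.0, 20.0, 60.0)):
    # illustrative cut-offs, not the Korean risk criteria
    return sum(conc >= t for t in thresholds)    # 0 = low ... 3 = very high

c = daily_concentration(day=25, temp=14.0, wind=2.0)
print(round(c, 1), risk_grade(c))
```

    The Weibull envelope keeps estimated concentrations near zero outside the pollen season regardless of the weather term, which is what gives the scheme spatial transferability.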

  9. CONTROL FUNCTION ASSISTED IPW ESTIMATION WITH A SECONDARY OUTCOME IN CASE-CONTROL STUDIES.

    PubMed

    Sofer, Tamar; Cornelis, Marilyn C; Kraft, Peter; Tchetgen Tchetgen, Eric J

    2017-04-01

    Case-control studies are designed to study associations between risk factors and a single, primary outcome. Information about additional, secondary outcomes is also collected, but association studies targeting such secondary outcomes should account for the case-control sampling scheme, or otherwise results may be biased. Often, one uses inverse probability weighted (IPW) estimators to estimate population effects in such studies. IPW estimators are robust, as they only require correct specification of the mean regression model of the secondary outcome on covariates, and knowledge of the disease prevalence. However, IPW estimators are inefficient relative to estimators that make additional assumptions about the data-generating mechanism. We propose a class of estimators for the effect of risk factors on a secondary outcome in case-control studies that combine IPW with an additional modeling assumption: specification of the disease outcome probability model. We incorporate this model via a mean-zero control function. We derive the class of all regular and asymptotically linear estimators corresponding to our modeling assumption when the secondary outcome mean is modeled using either the identity or the log link. We find the efficient estimator in our class of estimators and show that it reduces to standard IPW when the model for the primary disease outcome is unrestricted, and is more efficient than standard IPW when the model is either parametric or semiparametric.
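    The plain IPW idea that the authors improve upon can be sketched in a few lines: reweight sampled cases and controls back to the population using the known disease prevalence. The data and prevalence below are invented; this is not the authors' control-function-assisted estimator.

```python
# Sketch of plain IPW for a secondary outcome in a case-control sample:
# weight each subject by P(D = d) / P(D = d | sampled). Invented toy data.
def ipw_weight(is_case, prevalence, p_case_in_sample):
    if is_case:
        return prevalence / p_case_in_sample
    return (1 - prevalence) / (1 - p_case_in_sample)

def ipw_mean(y, d, prevalence):
    p_case = sum(d) / len(d)
    w = [ipw_weight(di, prevalence, p_case) for di in d]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

y = [2.0, 2.2, 1.0, 1.1]   # secondary outcome
d = [1, 1, 0, 0]           # disease status: half the sample are cases
print(ipw_mean(y, d, prevalence=0.10))   # pulled toward the controls
```

    Because cases are oversampled by design, the unweighted mean of y would be biased toward the case values; the weights undo that oversampling.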

  10. Relative risk estimation of Chikungunya disease in Malaysia: An analysis based on Poisson-gamma model

    NASA Astrophysics Data System (ADS)

    Samat, N. A.; Ma'arof, S. H. Mohd Imam

    2015-05-01

    Disease mapping is a method to display the geographical distribution of disease occurrence, which generally involves the usage and interpretation of a map to show the incidence of certain diseases. Relative risk (RR) estimation is one of the most important issues in disease mapping. This paper begins by providing a brief overview of Chikungunya disease. This is followed by a review of the classical model used in disease mapping, based on the standardized morbidity ratio (SMR), which we then apply to our Chikungunya data. We then fit an extension of the classical model, which we refer to as a Poisson-Gamma model, in which prior distributions for the relative risks are assumed known. Both results are displayed and compared using maps, revealing a smoother map with fewer extreme values of estimated relative risk. Extensions of this work will consider other methods that are relevant to overcoming the drawbacks of the existing methods, in order to inform and direct government strategy for monitoring and controlling Chikungunya disease.
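    The two estimators compared here have simple closed forms: the SMR is the observed-to-expected ratio, and the Poisson-Gamma posterior mean shrinks it toward the prior, which is what smooths the map. The Gamma(a, b) prior values below are illustrative, not those used in the paper.

```python
# Sketch of the classical SMR and its Poisson-Gamma shrinkage extension.
def smr(observed, expected):
    return observed / expected

def poisson_gamma_rr(observed, expected, a=1.0, b=1.0):
    # With O_i ~ Poisson(E_i * theta_i) and theta_i ~ Gamma(a, b),
    # the posterior mean of the relative risk is (a + O_i) / (b + E_i).
    return (a + observed) / (b + expected)

# a small area: 1 observed case against 0.2 expected
print(smr(1, 0.2))                 # extreme: 5.0
print(poisson_gamma_rr(1, 0.2))    # shrunk toward the prior mean of 1
```

    In areas with tiny expected counts the SMR explodes or collapses; the conjugate posterior mean tempers these extremes, producing the smoother map the abstract describes.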

  11. Empirical evaluation of the market price of risk using the CIR model

    NASA Astrophysics Data System (ADS)

    Bernaschi, M.; Torosantucci, L.; Uboldi, A.

    2007-03-01

    We describe a simple but effective method for the estimation of the market price of risk. The basic idea is to compare the results obtained by following two different approaches in the application of the Cox-Ingersoll-Ross (CIR) model. In the first case, we apply the non-linear least squares method to cross-sectional data (i.e., all rates of a single day). In the second case, we consider the short rate obtained by means of the first procedure as a proxy of the real market short rate. Starting from this new proxy, we evaluate the parameters of the CIR model by means of martingale estimation techniques. The estimate of the market price of risk is obtained by comparing the results of these two techniques, since this approach makes it possible to isolate the market price of risk and to evaluate, under the Local Expectations Hypothesis, the risk premium given by the market for different maturities. As a test case, we apply the method to data from the European Fixed Income Market.
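    For context, the cross-sectional fitting step relies on the CIR model's closed-form zero-coupon bond price, sketched below. This is the standard CIR formula, not the authors' calibration code, and the parameter values are illustrative.

```python
# Closed-form CIR zero-coupon bond price P(tau) = A(tau) * exp(-B(tau) * r0),
# for short-rate dynamics dr = kappa*(theta - r)*dt + sigma*sqrt(r)*dW.
import math

def cir_bond_price(r0, tau, kappa, theta, sigma):
    gamma = math.sqrt(kappa ** 2 + 2 * sigma ** 2)
    e = math.exp(gamma * tau)
    denom = (kappa + gamma) * (e - 1) + 2 * gamma
    B = 2 * (e - 1) / denom
    A = (2 * gamma * math.exp((kappa + gamma) * tau / 2) / denom) ** (
        2 * kappa * theta / sigma ** 2)
    return A * math.exp(-B * r0)

# illustrative parameters: r0 = 3%, mean level 4%, 5-year maturity
p = cir_bond_price(r0=0.03, tau=5.0, kappa=0.5, theta=0.04, sigma=0.1)
print(p)   # a discount factor in (0, 1)
```

    Fitting this pricing formula to one day's full yield curve (cross-sectional non-linear least squares) recovers the risk-neutral parameters and an implied short rate, the proxy used in the second estimation stage.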

  12. Solvency II: How Geosciences become crucial for the Insurance Business

    NASA Astrophysics Data System (ADS)

    Grieser, Jürgen

    2013-04-01

    Solvency II is a Europe-wide framework that requires insurers and reinsurers to fulfill solvency capital requirements (SCR). For the insurance market in Europe this means that each insurer and reinsurer has to quantify the financial risk of its portfolio. An insurer has to stay solvent even if the annual loss for a year becomes a 1-in-200-year loss event. Classical approaches to risk appraisal are based on actuarial models. Statistics of observed loss history over a couple of decades are used to estimate the risk for the near future. This purely statistical approach, however, cannot be used to reliably estimate the 1-in-200-year risk: it would be a meaningless extrapolation due to the high range of uncertainty. While around 25 years ago only actuarial models and expert knowledge were used for the estimation of solvency requirements, today reinsurers and major insurers use physical loss models to manage their risk, either built in-house or obtained from vendors such as Risk Management Solutions. Under Solvency II, each insurer has to calculate its 1-in-200-year annual loss. This can be done either with the so-called Standard Formula provided by the European Union or with detailed risk models. The estimation of annual financial losses is only possible if all hazards which can affect a portfolio are considered. For the international property insurance market, these can be earthquakes, winter storms, tropical cyclones, convective storms, tsunamis and floods. All these hazards are modeled by geoscientists. The presentation will discuss the Standard Formula and the assumptions behind it, especially the presumed spatial correlation structure. The advantages of detailed loss models will be illustrated with examples. The presentation will end with a short discussion of the challenges that risk modeling faces due to climate change.

  13. Patient- and cohort-specific dose and risk estimation for abdominopelvic CT: a study based on 100 patients

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyu; Li, Xiang; Segars, W. Paul; Frush, Donald P.; Samei, Ehsan

    2012-03-01

    The purpose of this work was twofold: (a) to estimate patient- and cohort-specific radiation dose and cancer risk index for abdominopelvic computed tomography (CT) scans; (b) to evaluate the effects of patient anatomical characteristics (size, age, and gender) and CT scanner model on dose and risk conversion coefficients. The study included 100 patient models (42 pediatric models, 58 adult models) and multi-detector array CT scanners from two commercial manufacturers (LightSpeed VCT, GE Healthcare; SOMATOM Definition Flash, Siemens Healthcare). A previously validated Monte Carlo program was used to simulate organ dose for each patient model and each scanner, from which DLP-normalized effective dose (k factor) and DLP-normalized risk index (q factor) values were derived. The k factor showed an exponential decrease with increasing patient size. For a given gender, the q factor showed an exponential decrease with both increasing patient size and increasing patient age. The discrepancies in k and q factors across scanners were on average 8% and 15%, respectively. This study demonstrates the feasibility of estimating patient-specific organ dose and cohort-specific effective dose and risk index in abdominopelvic CT, requiring only knowledge of patient size, gender, and age.
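    The reported size dependence of the k factor can be illustrated with a log-linear least-squares fit of k = a·exp(-b·size). The data points below are invented for illustration and are not the study's values.

```python
# Sketch: fit an exponential decay k = a * exp(-b * size) by ordinary
# least squares on ln(k). Data points are invented, not the study's.
import math

def fit_exponential(sizes, ks):
    n = len(sizes)
    logs = [math.log(k) for k in ks]
    mx = sum(sizes) / n
    my = sum(logs) / n
    sxx = sum((x - mx) ** 2 for x in sizes)
    sxy = sum((x - mx) * (y - my) for x, y in zip(sizes, logs))
    b = -sxy / sxx                      # decay rate per cm
    a = math.exp(my + b * mx)           # coefficient at size 0
    return a, b

sizes = [15.0, 20.0, 25.0, 30.0]        # trunk diameter, cm (invented)
ks = [0.030, 0.022, 0.016, 0.012]       # mSv per mGy*cm (invented)
a, b = fit_exponential(sizes, ks)
print(a, b)
```

    With such a fit, a DLP reported by the scanner can be converted to an approximate effective dose for a patient of known size via k(size) * DLP.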

  14. Review of NASA approach to space radiation risk assessments for Mars exploration.

    PubMed

    Cucinotta, Francis A

    2015-02-01

    Long duration space missions present unique radiation protection challenges due to the complexity of the space radiation environment, which includes high charge and energy particles and other highly ionizing radiation such as neutrons. Based on a recommendation by the National Council on Radiation Protection and Measurements, a 3% lifetime risk of exposure-induced death for cancer has been used as a basis for risk limitation by the National Aeronautics and Space Administration (NASA) for low-Earth orbit missions. NASA has developed a risk-based approach to radiation exposure limits that accounts for individual factors (age, gender, and smoking history) and assesses the uncertainties in risk estimates. New radiation quality factors with associated probability distribution functions to represent the quality factor's uncertainty have been developed based on track structure models and recent radiobiology data for high charge and energy particles. The current radiation dose limits are reviewed for spaceflight and the various qualitative and quantitative uncertainties that impact the risk of exposure-induced death estimates using the NASA Space Cancer Risk (NSCR) model. NSCR estimates of the number of "safe days" in deep space to be within exposure limits and risk estimates for a Mars exploration mission are described.

  15. A new time-series methodology for estimating relationships between elderly frailty, remaining life expectancy, and ambient air quality.

    PubMed

    Murray, Christian J; Lipfert, Frederick W

    2012-01-01

    Many publications estimate short-term air pollution-mortality risks, but few estimate the associated changes in life expectancy. We present a new methodology for analyzing time series of health effects, in which prior frailty is assumed to precede short-term elderly nontraumatic mortality. The model is based on a subpopulation of frail individuals whose entries and exits (deaths) are functions of daily and lagged environmental conditions: ambient temperature/season, airborne particles, and ozone. This frail susceptible population is unknown; its fluctuations cannot be observed but are estimated using maximum-likelihood methods with the Kalman filter. We used an existing 14-y set of daily data to illustrate the model and then tested the assumption of prior frailty with a new generalized model that estimates the portion of the daily death count allocated to nonfrail individuals. In this demonstration dataset, new entries into the high-risk pool are associated with lower ambient temperatures and higher concentrations of particulate matter and ozone. Accounting for these effects on antecedent frailty reduces this at-risk population, yielding frail life expectancies of 5-7 days. Associations between environmental factors and entries to the at-risk pool are about twice as strong as for mortality. Nonfrail elderly deaths are seen to make only small contributions. This new model predicts a small, short-lived frail population-at-risk that is stable over a wide range of environmental conditions. The predicted effects of pollution on new entries and deaths are robust and consistent with conventional morbidity/mortality time-series studies. We recommend model verification using other suitable datasets.
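    The estimation machinery named above can be illustrated with a minimal scalar Kalman filter: an unobserved state (here, a frail-population size) evolves with noise and is observed only indirectly through daily death counts. The dynamics and numbers below are invented for illustration; this is not the authors' model.

```python
# Minimal scalar Kalman filter sketch for a latent frail-population state:
#   state:       x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)
#   observation: y_t = c * x_t + v_t,      v_t ~ N(0, r)
def kalman_filter(observations, a=0.9, c=0.1, q=1.0, r=0.5,
                  x0=50.0, p0=10.0):
    """Return the filtered state means, one per observation."""
    x, p = x0, p0
    means = []
    for y in observations:
        # predict
        x = a * x
        p = a * a * p + q
        # update
        k = p * c / (c * c * p + r)      # Kalman gain
        x = x + k * (y - c * x)
        p = (1 - k * c) * p
        means.append(x)
    return means

deaths = [5.2, 4.8, 5.5, 6.1, 5.9]       # invented daily death counts
print([round(m, 2) for m in kalman_filter(deaths)])
```

    In the paper's setting the state dynamics additionally depend on temperature, particles, and ozone, and the parameters are chosen by maximum likelihood over the filter's innovations.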

  16. Quantification of uncertainty in flood risk assessment for flood protection planning: a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Dittes, Beatrice; Špačková, Olga; Ebrahimian, Negin; Kaiser, Maria; Rieger, Wolfgang; Disse, Markus; Straub, Daniel

    2017-04-01

    Flood risk estimates are subject to significant uncertainties, e.g. due to limited records of historic flood events, uncertainty in flood modeling, the uncertain impact of climate change or uncertainty in the exposure and loss estimates. In traditional design of flood protection systems, these uncertainties are typically accounted for only implicitly, based on engineering judgment. In the AdaptRisk project, we develop a fully quantitative framework for planning of flood protection systems under current and future uncertainties using quantitative pre-posterior Bayesian decision analysis. In this contribution, we focus on the quantification of the uncertainties and study their relative influence on the flood risk estimate and on the planning of flood protection systems. The following uncertainty components are included using a Bayesian approach: 1) inherent and statistical (i.e. limited record length) uncertainty; 2) climate uncertainty that can be learned from an ensemble of GCM-RCM models; 3) estimates of climate uncertainty components not covered in 2), such as bias correction, incomplete ensembles, and local specifics not captured by the GCM-RCM models; 4) uncertainty in the inundation modelling; 5) uncertainty in damage estimation. We also investigate how these uncertainties may be reduced in the future when new evidence - such as new climate models, observed extreme events, and socio-economic data - becomes available. Finally, we look into how this new evidence influences the risk assessment and the effectiveness of flood protection systems. We demonstrate our methodology for a pre-alpine catchment in southern Germany: the Mangfall catchment in Bavaria that includes the city of Rosenheim, which suffered significant losses during the 2013 flood event.

  17. On the difficulty to delimit disease risk hot spots

    NASA Astrophysics Data System (ADS)

    Charras-Garrido, M.; Azizi, L.; Forbes, F.; Doyle, S.; Peyrard, N.; Abrial, D.

    2013-06-01

    Representing the health state of a region is a helpful tool to highlight spatial heterogeneity and localize high-risk areas. For ease of interpretation and to determine where to apply control procedures, we need to clearly identify and delineate homogeneous regions in terms of disease risk, and in particular disease risk hot spots. However, even if practical purposes require the delineation of different risk classes, such a classification does not correspond to an observable reality and is thus difficult to estimate. Working with grouped data, a first natural choice is to apply disease mapping models. We apply a standard disease mapping model, producing continuous estimates of the risks, which requires a post-processing classification step to obtain clearly delimited risk zones. We also apply a risk partition model that builds a classification of the risk levels in a one-step procedure. Working with point data, we focus on the scan statistic clustering method. We illustrate our article with a real example concerning bovine spongiform encephalopathy (BSE), an animal disease whose at-risk zones are well known to epidemiologists. We show that in this difficult case of a rare disease and a very heterogeneous population, the different methods provide risk zones that are globally coherent. However, reflecting the dichotomy between the need and the reality, the exact delimitations of the risk zones, as well as the corresponding estimated risks, differ appreciably between methods.

  18. Melanoma Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing melanoma over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Low dose radiation risks for women surviving the a-bombs in Japan: generalized additive model.

    PubMed

    Dropkin, Greg

    2016-11-24

    Analyses of cancer mortality and incidence in Japanese A-bomb survivors have been used to estimate radiation risks, which are generally higher for women. Relative Risk (RR) is usually modelled as a linear function of dose. Extrapolation from data including high doses predicts small risks at low doses. Generalized Additive Models (GAMs) are flexible methods for modelling non-linear behaviour. GAMs are applied to cancer incidence in female low dose subcohorts, using anonymous public data for the 1958 - 1998 Life Span Study, to test for linearity, explore interactions, adjust for the skewed dose distribution, examine significance below 100 mGy, and estimate risks at 10 mGy. For all solid cancer incidence, RR estimated from 0 - 100 mGy and 0 - 20 mGy subcohorts is significantly raised. The response tapers above 150 mGy. At low doses, RR increases with age-at-exposure and decreases with time-since-exposure, the preferred covariate. Using the empirical cumulative distribution of dose improves model fit, and capacity to detect non-linear responses. RR is elevated over wide ranges of covariate values. Results are stable under simulation, or when removing exceptional data cells, or adjusting neutron RBE. Estimates of Excess RR at 10 mGy using the cumulative dose distribution are 10 - 45 times higher than extrapolations from a linear model fitted to the full cohort. Below 100 mGy, quasipoisson models find significant effects for all solid, squamous, uterus, corpus, and thyroid cancers, and for respiratory cancers when age-at-exposure > 35 yrs. Results for the thyroid are compatible with studies of children treated for tinea capitis, and Chernobyl survivors. Results for the uterus are compatible with studies of UK nuclear workers and the Techa River cohort. Non-linear models find large, significant cancer risks for Japanese women exposed to low dose radiation from the atomic bombings. The risks should be reflected in protection standards.

  20. Long-Term Post-CABG Survival: Performance of Clinical Risk Models Versus Actuarial Predictions.

    PubMed

    Carr, Brendan M; Romeiser, Jamie; Ruan, Joyce; Gupta, Sandeep; Seifert, Frank C; Zhu, Wei; Shroyer, A Laurie

    2016-01-01

    Clinical risk models are commonly used to predict short-term coronary artery bypass grafting (CABG) mortality but are less commonly used to predict long-term mortality. The added value of long-term mortality clinical risk models over traditional actuarial models has not been evaluated. To address this, the predictive performance of a long-term clinical risk model was compared with that of an actuarial model to identify the clinical variable(s) most responsible for any differences observed. Long-term mortality for 1028 CABG patients was estimated using the Hannan New York State clinical risk model and an actuarial model (based on age, gender, and race/ethnicity). Vital status was assessed using the Social Security Death Index. Observed/expected (O/E) ratios were calculated, and the models' predictive performances were compared using a nested c-index approach. Linear regression analyses identified the subgroup of risk factors driving the differences observed. Mortality rates were 3%, 9%, and 17% at one-, three-, and five years, respectively (median follow-up: five years). The clinical risk model provided more accurate predictions. Greater divergence between model estimates occurred with increasing long-term mortality risk, with baseline renal dysfunction identified as a particularly important driver of these differences. Long-term mortality clinical risk models provide enhanced predictive power compared to actuarial models. Using the Hannan risk model, a patient's long-term mortality risk can be accurately assessed and subgroups of higher-risk patients can be identified for enhanced follow-up care. More research appears warranted to refine long-term CABG clinical risk models. © 2015 The Authors. Journal of Cardiac Surgery Published by Wiley Periodicals, Inc.

  1. Long‐Term Post‐CABG Survival: Performance of Clinical Risk Models Versus Actuarial Predictions

    PubMed Central

    Carr, Brendan M.; Romeiser, Jamie; Ruan, Joyce; Gupta, Sandeep; Seifert, Frank C.; Zhu, Wei

    2015-01-01

    Abstract Background/aim Clinical risk models are commonly used to predict short‐term coronary artery bypass grafting (CABG) mortality but are less commonly used to predict long‐term mortality. The added value of long‐term mortality clinical risk models over traditional actuarial models has not been evaluated. To address this, the predictive performance of a long‐term clinical risk model was compared with that of an actuarial model to identify the clinical variable(s) most responsible for any differences observed. Methods Long‐term mortality for 1028 CABG patients was estimated using the Hannan New York State clinical risk model and an actuarial model (based on age, gender, and race/ethnicity). Vital status was assessed using the Social Security Death Index. Observed/expected (O/E) ratios were calculated, and the models' predictive performances were compared using a nested c‐index approach. Linear regression analyses identified the subgroup of risk factors driving the differences observed. Results Mortality rates were 3%, 9%, and 17% at one‐, three‐, and five years, respectively (median follow‐up: five years). The clinical risk model provided more accurate predictions. Greater divergence between model estimates occurred with increasing long‐term mortality risk, with baseline renal dysfunction identified as a particularly important driver of these differences. Conclusions Long‐term mortality clinical risk models provide enhanced predictive power compared to actuarial models. Using the Hannan risk model, a patient's long‐term mortality risk can be accurately assessed and subgroups of higher‐risk patients can be identified for enhanced follow‐up care. More research appears warranted to refine long‐term CABG clinical risk models. doi: 10.1111/jocs.12665 (J Card Surg 2016;31:23–30) PMID:26543019

  2. [Survival analysis with competing risks: estimating failure probability].

    PubMed

    Llorca, Javier; Delgado-Rodríguez, Miguel

    2004-01-01

    To show the impact of competing risks of death on survival analysis. We provide an example of survival time without chronic rejection after heart transplantation, where death before rejection acts as a competing risk. Using a computer simulation, we compare the Kaplan-Meier estimator and the multiple decrement model. The Kaplan-Meier method overestimated the probability of rejection. Next, we illustrate the use of the multiple decrement model to analyze secondary end points (in our example: death after rejection). Finally, we discuss Kaplan-Meier assumptions and why they fail in the presence of competing risks. Survival analysis should be adjusted for competing risks of death to avoid overestimation of the risk of rejection produced with the Kaplan-Meier method.
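The Kaplan-Meier overestimation described in this record is easy to reproduce by simulation; the sketch below (arbitrary exponential hazards, not the authors' code) contrasts the naive Kaplan-Meier estimate of rejection probability, which treats death as censoring, with the multiple decrement (cumulative incidence) estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Latent competing event times: rejection (hazard 0.1) vs. death (hazard 0.2).
t_rej = rng.exponential(scale=10.0, size=n)
t_death = rng.exponential(scale=5.0, size=n)
t = np.minimum(t_rej, t_death)
cause = np.where(t_rej <= t_death, 1, 2)      # 1 = rejection, 2 = death

horizon = 5.0
order = np.argsort(t)
t_s, c_s = t[order], cause[order]
at_risk = n - np.arange(n)
idx = np.searchsorted(t_s, horizon) - 1       # last event before the horizon

# Naive Kaplan-Meier: death removes subjects from the risk set as "censoring".
km_surv = np.cumprod(np.where(c_s == 1, 1 - 1 / at_risk, 1.0))
km_rej = 1 - km_surv[idx]

# Multiple decrement estimator: sum of S(t-) * d_rejection / n_at_risk.
overall_surv = np.cumprod(1 - 1 / at_risk)
surv_before = np.concatenate(([1.0], overall_surv[:-1]))
ci_rej = np.cumsum(np.where(c_s == 1, surv_before / at_risk, 0.0))[idx]

# True cumulative incidence for exponential competing risks.
lam1, lam2 = 0.1, 0.2
true_ci = lam1 / (lam1 + lam2) * (1 - np.exp(-(lam1 + lam2) * horizon))
print(round(km_rej, 3), round(ci_rej, 3), round(true_ci, 3))
```

With these hazards the Kaplan-Meier estimate lands near 0.39 while the true five-year rejection probability is about 0.26, the overestimation the abstract warns about; the multiple decrement estimator recovers the correct value.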

  3. Using Structured Additive Regression Models to Estimate Risk Factors of Malaria: Analysis of 2010 Malawi Malaria Indicator Survey Data

    PubMed Central

    Chirombo, James; Lowe, Rachel; Kazembe, Lawrence

    2014-01-01

    Background After years of implementing Roll Back Malaria (RBM) interventions, the changing landscape of malaria in terms of risk factors and spatial pattern has not been fully investigated. This paper uses the 2010 malaria indicator survey data to investigate if known malaria risk factors remain relevant after many years of interventions. Methods We adopted a structured additive logistic regression model that allowed for spatial correlation, to more realistically estimate malaria risk factors. Our model included child and household level covariates, as well as climatic and environmental factors. Continuous variables were modelled by assuming second order random walk priors, while spatial correlation was specified as a Markov random field prior, with fixed effects assigned diffuse priors. Inference was fully Bayesian resulting in an under five malaria risk map for Malawi. Results Malaria risk increased with increasing age of the child. With respect to socio-economic factors, the greater the household wealth, the lower the malaria prevalence. A general decline in malaria risk was observed as altitude increased. Minimum temperatures and average total rainfall in the three months preceding the survey did not show a strong association with disease risk. Conclusions The structured additive regression model offered a flexible extension to standard regression models by enabling simultaneous modelling of possible nonlinear effects of continuous covariates, spatial correlation and heterogeneity, while estimating usual fixed effects of categorical and continuous observed variables. Our results confirmed that malaria epidemiology is a complex interaction of biotic and abiotic factors, both at the individual, household and community level and that risk factors are still relevant many years after extensive implementation of RBM activities. PMID:24991915

  4. Using structured additive regression models to estimate risk factors of malaria: analysis of 2010 Malawi malaria indicator survey data.

    PubMed

    Chirombo, James; Lowe, Rachel; Kazembe, Lawrence

    2014-01-01

    After years of implementing Roll Back Malaria (RBM) interventions, the changing landscape of malaria in terms of risk factors and spatial pattern has not been fully investigated. This paper uses the 2010 malaria indicator survey data to investigate if known malaria risk factors remain relevant after many years of interventions. We adopted a structured additive logistic regression model that allowed for spatial correlation, to more realistically estimate malaria risk factors. Our model included child and household level covariates, as well as climatic and environmental factors. Continuous variables were modelled by assuming second order random walk priors, while spatial correlation was specified as a Markov random field prior, with fixed effects assigned diffuse priors. Inference was fully Bayesian resulting in an under five malaria risk map for Malawi. Malaria risk increased with increasing age of the child. With respect to socio-economic factors, the greater the household wealth, the lower the malaria prevalence. A general decline in malaria risk was observed as altitude increased. Minimum temperatures and average total rainfall in the three months preceding the survey did not show a strong association with disease risk. The structured additive regression model offered a flexible extension to standard regression models by enabling simultaneous modelling of possible nonlinear effects of continuous covariates, spatial correlation and heterogeneity, while estimating usual fixed effects of categorical and continuous observed variables. Our results confirmed that malaria epidemiology is a complex interaction of biotic and abiotic factors, both at the individual, household and community level and that risk factors are still relevant many years after extensive implementation of RBM activities.

  5. Simulating lifetime outcomes associated with complications for people with type 1 diabetes.

    PubMed

    Lung, Tom W C; Clarke, Philip M; Hayes, Alison J; Stevens, Richard J; Farmer, Andrew

    2013-06-01

    The aim of this study was to develop a discrete-time simulation model for people with type 1 diabetes mellitus, to estimate and compare mean life expectancy and quality-adjusted life-years (QALYs) over a lifetime between intensive and conventional blood glucose treatment groups. We synthesized evidence on type 1 diabetes patients using several published sources. The simulation model was based on 13 equations to estimate risks of events and mortality. Cardiovascular disease (CVD) risk was obtained from results of the DCCT (diabetes control and complications trial). Mortality post-CVD event was based on a study using linked administrative data on people with diabetes from Western Australia. Information on incidence of renal disease and the progression to CVD was obtained from studies in Finland and Italy. Lower-extremity amputation (LEA) risk was based on the type 1 diabetes Swedish inpatient registry, and the risk of blindness was obtained from results of a German-based study. Where diabetes-specific data were unavailable, information from other populations was used. We examine the degree and source of parameter uncertainty and illustrate an application of the model in estimating lifetime outcomes of using intensive and conventional treatments for blood glucose control. From 15 years of age, male and female patients had an estimated life expectancy of 47.2 (95 % CI 35.2-59.2) and 52.7 (95 % CI 41.7-63.6) years in the intensive treatment group. The model produced estimates of the lifetime benefits of intensive treatment for blood glucose from the DCCT of 4.0 (95 % CI 1.2-6.8) QALYs for women and 4.6 (95 % CI 2.7-6.9) QALYs for men. Absolute risk per 1,000 person-years for fatal CVD events was simulated to be 1.37 and 2.51 in intensive and conventional treatment groups, respectively. 
The model incorporates diabetic complications risk data from a type 1 diabetes population and synthesizes other type 1-specific data to estimate long-term outcomes of CVD, end-stage renal disease, LEA and risk of blindness, along with life expectancy and QALYs. External validation was carried out using life expectancy and absolute risk for fatal CVD events. Because of the flexible and transparent nature of the model, it has many potential future applications.
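The discrete-time structure described here can be sketched compactly; the annual event probabilities and utilities below are illustrative placeholders, not the paper's 13 fitted risk equations:

```python
import numpy as np

rng = np.random.default_rng(7)
n_patients = 50_000

# Illustrative annual probabilities and utilities (assumed, not fitted):
p_cvd_event = 0.005          # annual CVD event probability
p_death_given_event = 0.3    # case fatality of an event in that year
p_death_other = 0.002        # annual non-CVD death probability
utility = {"alive": 0.8, "post_cvd": 0.7}

life_years = np.zeros(n_patients)
qalys = np.zeros(n_patients)
alive = np.ones(n_patients, dtype=bool)
post_cvd = np.zeros(n_patients, dtype=bool)

for _ in range(100):                          # simulate up to 100 annual cycles
    if not alive.any():
        break
    life_years[alive] += 1
    qalys[alive] += np.where(post_cvd[alive],
                             utility["post_cvd"], utility["alive"])
    event = alive & (rng.random(n_patients) < p_cvd_event)
    dies = (event & (rng.random(n_patients) < p_death_given_event)) | \
           (alive & (rng.random(n_patients) < p_death_other))
    post_cvd |= event & ~dies                 # survivors of an event carry state
    alive &= ~dies

print(f"mean life expectancy: {life_years.mean():.1f} years")
print(f"mean QALYs: {qalys.mean():.1f}")
```

A treatment comparison like the DCCT one in the abstract would run this loop twice with different event probabilities and difference the mean QALYs.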

  6. CREATING A DECISION CONTEXT FOR COMPARATIVE ANALYSIS AND CONSISTENT APPLICATION OF INHALATION DOSIMETRY MODELS IN CHILDREN'S RISK ASSESSMENT

    EPA Science Inventory

    Estimation of risks to children from exposure to airborne pollutants is often complicated by the lack of reliable epidemiological data specific to this age group. As a result, risks are generally estimated from extrapolations based on data obtained in other human age groups (e.g....

  7. Development of good modelling practice for physiologically based pharmacokinetic models for use in risk assessment: The first steps

    EPA Science Inventory

    The increasing use of tissue dosimetry estimated using pharmacokinetic models in chemical risk assessments in multiple countries necessitates the need to develop internationally recognized good modelling practices. These practices would facilitate sharing of models and model eva...

  8. Private sector embedded water risk: Merging the corn supply chain network and regional watershed depletion

    NASA Astrophysics Data System (ADS)

    Kim, T.; Brauman, K. A.; Schmitt, J.; Goodkind, A. L.; Smith, T. M.

    2016-12-01

    Water scarcity in US corn farming regions is a significant risk consideration for the ethanol and meat production sectors, which comprise 80% of all US corn demand. Water supply risk can lead to effects across the supply chain, affecting annual corn yields. The purpose of our study is to assess the water risk to the US's most corn-intensive sectors and companies by linking watershed depletion estimates with corn production, linked to downstream companies through a corn transport model. We use a water depletion index as an improved metric for seasonal water scarcity and a corn sourcing supply chain model based on economic cost minimization. Water depletion was calculated as the fraction of renewable (ground and surface) water consumed, with estimates of more than 75% depletion on an annual average basis indicating a significant water risk. We estimated company water risk as the amount of embedded corn coming from three categories of water-stressed counties. The ethanol sector had 3.1% of sourced corn grown in counties that were more than 75% depleted, while the beef sector had 14.0%. From a firm perspective, Tyson, JBS, and Cargill, the top three US corn-demanding companies, had 4.5%, 9.6%, and 12.8% of their sourced corn, respectively, coming from watersheds that are more than 75% depleted. These numbers are significantly higher than the global average of 2.2% of watersheds being classified as more than 75% depleted. Our model enables corn-using industries to evaluate their supply chain risk of water scarcity by modeling corn sourcing and watershed depletion, providing the private sector a new method for risk estimation. Our results suggest corn-dependent industries are already linked to water scarcity risk in disproportionate amounts due to the spatial heterogeneity of corn sourcing and water scarcity.
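The depletion metric and the >75% threshold translate into a very small calculation; the county figures below are made up for illustration:

```python
# Hypothetical counties: renewable water supply vs. consumption (same units),
# plus the tonnes of corn a firm sources from each.
counties = {
    "A": {"supply": 100.0, "consumption": 80.0, "corn_t": 500.0},
    "B": {"supply": 100.0, "consumption": 40.0, "corn_t": 300.0},
    "C": {"supply": 100.0, "consumption": 90.0, "corn_t": 200.0},
}

THRESHOLD = 0.75  # >75% of renewable water consumed => significant water risk

total = sum(c["corn_t"] for c in counties.values())
at_risk = sum(
    c["corn_t"] for c in counties.values()
    if c["consumption"] / c["supply"] > THRESHOLD
)
print(f"share of sourced corn from depleted watersheds: {at_risk / total:.1%}")
```

Counties A and C exceed the threshold here, so 70% of this hypothetical firm's corn is exposed; the study's firm-level figures come from the same kind of sourcing-weighted sum over real watersheds.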

  9. Estimating cancer risk from 99mTc pyrophosphate imaging for transthyretin cardiac amyloidosis.

    PubMed

    Einstein, Andrew J; Shuryak, Igor; Castaño, Adam; Mintz, Akiva; Maurer, Mathew S; Bokhari, Sabahat

    2018-05-30

    Increasing recognition that transthyretin cardiac amyloidosis (ATTR-CA) is much more common than previously appreciated and the emergence of novel disease-modifying therapeutic agents have led to a paradigm shift in which ATTR-CA screening is considered in high-risk populations, such as patients with heart failure with preserved ejection fraction (HFpEF) or aortic stenosis. Radiation risk from 99mTc-pyrophosphate (99mTc-PYP) scintigraphy, a test with very high sensitivity and specificity for ATTR-CA, has not been previously determined. Radiation doses to individual organs from 99mTc-PYP were estimated using models developed by the Medical Internal Radiation Dose Committee and the International Commission on Radiological Protection. Excess future cancer risks were estimated from organ doses, using risk projection models developed by the National Academies and extended by the National Cancer Institute. Excess future risks were estimated for men and women aged 40-80 and compared to total (excess plus baseline) future risks. All-organ excess cancer risks (90% uncertainty intervals) ranged from 5.88 (2.45, 11.4) to 12.2 (4.11, 26.0) cases per 100,000 patients undergoing 99mTc-PYP testing, were similar for men and women, and decreased with increasing age at testing. Cancer risks were highest to the urinary bladder, and bladder risk varied nearly twofold depending on which model was used. Excess 99mTc-PYP-related cancers constituted < 1% of total future cancers to the critical organs. Very low cancer risks associated with 99mTc-PYP testing suggest a favorable benefit-risk profile for 99mTc-PYP as a screening test for ATTR-CA in high-risk populations, such as patients with HFpEF or aortic stenosis.

  10. Accounting for Time-Varying Confounding in the Relationship Between Obesity and Coronary Heart Disease: Analysis With G-Estimation: The ARIC Study.

    PubMed

    Shakiba, Maryam; Mansournia, Mohammad Ali; Salari, Arsalan; Soori, Hamid; Mansournia, Nasrin; Kaufman, Jay S

    2018-06-01

    In longitudinal studies, standard analysis may yield biased estimates of exposure effect in the presence of time-varying confounders that are also intermediate variables. We aimed to quantify the relationship between obesity and coronary heart disease (CHD) by appropriately adjusting for time-varying confounders. This study was performed in a subset of participants from the Atherosclerosis Risk in Communities (ARIC) Study (1987-2010), a US study designed to investigate risk factors for atherosclerosis. General obesity was defined as body mass index (weight (kg)/height (m)²) ≥30, and abdominal obesity (AOB) was defined according to either waist circumference (≥102 cm in men and ≥88 cm in women) or waist:hip ratio (≥0.9 in men and ≥0.85 in women). The association of obesity with CHD was estimated by G-estimation and compared with results from accelerated failure-time models using 3 specifications. The first model, which adjusted for baseline covariates, excluding metabolic mediators of obesity, showed increased risk of CHD for all obesity measures. Further adjustment for metabolic mediators in the second model and time-varying variables in the third model produced negligible changes in the hazard ratios. The hazard ratios estimated by G-estimation were 1.15 (95% confidence interval (CI): 0.83, 1.47) for general obesity, 1.65 (95% CI: 1.35, 1.92) for AOB based on waist circumference, and 1.38 (95% CI: 1.13, 1.99) for AOB based on waist:hip ratio, suggesting that AOB increased the risk of CHD. The G-estimated hazard ratios for both measures were further from the null than those derived from standard models.

  11. Quantitative assessment of the microbial risk of leafy greens from farm to consumption: preliminary framework, data, and risk estimates.

    PubMed

    Danyluk, Michelle D; Schaffner, Donald W

    2011-05-01

    This project was undertaken to relate what is known about the behavior of Escherichia coli O157:H7 under laboratory conditions and integrate this information with what is known regarding the 2006 E. coli O157:H7 spinach outbreak in the context of a quantitative microbial risk assessment. The risk model explicitly assumes that all contamination arises from exposure in the field. Extracted data, models, and user inputs were entered into an Excel spreadsheet, and the modeling software @RISK was used to perform Monte Carlo simulations. The model predicts that cut leafy greens that are temperature abused will support the growth of E. coli O157:H7, and populations of the organism may increase by as much as 1 log CFU/day under optimal temperature conditions. When the risk model used a starting level of -1 log CFU/g, with 0.1% of incoming servings contaminated, the predicted numbers of cells per serving were within the range of best available estimates of pathogen levels during the outbreak. The model predicts that levels in the field of -1 log CFU/g and 0.1% prevalence could have resulted in an outbreak approximately the size of the 2006 E. coli O157:H7 outbreak. This quantitative microbial risk assessment model represents a preliminary framework that identifies available data and provides initial risk estimates for pathogenic E. coli in leafy greens. Data gaps include retail storage times, correlations between storage time and temperature, determining the importance of E. coli O157:H7 in leafy greens lag time models, and validation of the importance of cross-contamination during the washing process.
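The spreadsheet-plus-@RISK structure translates directly into a scripted Monte Carlo. In the sketch below, the field level (-1 log CFU/g) and the 0.1% prevalence come from the abstract; the serving size, abuse-time, and growth-rate distributions are stand-in assumptions, with the growth rate capped at the model's 1 log CFU/day ceiling:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

# Inputs stated in the abstract:
log_conc_field = -1.0                  # log CFU/g at harvest
prevalence = 0.001                     # fraction of servings contaminated

# Assumed stand-ins for the paper's fitted inputs:
serving_g = 85.0                       # grams per serving
abuse_days = rng.uniform(0, 2, n_sims)       # days of temperature abuse
growth_rate = rng.uniform(0, 1, n_sims)      # log CFU/g per day, <= 1/day

log_conc_retail = log_conc_field + growth_rate * abuse_days
cells_per_serving = 10.0 ** log_conc_retail * serving_g

contaminated = rng.random(n_sims) < prevalence
print(f"median cells per contaminated serving: {np.median(cells_per_serving):.1f}")
print(f"simulated prevalence: {contaminated.mean():.4f}")
```

The full assessment would replace the uniform draws with the distributions extracted from the literature and carry the cell counts through a dose-response model to illness risk.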

  12. A risk adjustment approach to estimating the burden of skin disease in the United States.

    PubMed

    Lim, Henry W; Collins, Scott A B; Resneck, Jack S; Bolognia, Jean; Hodge, Julie A; Rohrer, Thomas A; Van Beek, Marta J; Margolis, David J; Sober, Arthur J; Weinstock, Martin A; Nerenz, David R; Begolka, Wendy Smith; Moyano, Jose V

    2018-01-01

    Direct insurance claims tabulation and risk adjustment statistical methods can be used to estimate health care costs associated with various diseases. In this third manuscript derived from the new national Burden of Skin Disease Report from the American Academy of Dermatology, a risk adjustment method that was based on modeling the average annual costs of individuals with or without specific diseases, and specifically tailored for 24 skin disease categories, was used to estimate the economic burden of skin disease. The results were compared with the claims tabulation method used in the first 2 parts of this project. The risk adjustment method estimated the direct health care costs of skin diseases to be $46 billion in 2013, approximately $15 billion less than estimates using claims tabulation. For individual skin diseases, the risk adjustment cost estimates ranged from 11% to 297% of those obtained using claims tabulation for the 10 most costly skin disease categories. Although either method may be used for purposes of estimating the costs of skin disease, the choice of method will affect the end result. These findings serve as an important reference for future discussions about the method chosen in health care payment models to estimate both the cost of skin disease and the potential cost impact of care changes. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.

  13. Divorce as Risky Behavior

    PubMed Central

    LIGHT, AUDREY; AHN, TAEHYUN

    2010-01-01

    Given that divorce often represents a high-stakes income gamble, we ask how individual levels of risk tolerance affect the decision to divorce. We extend the orthodox divorce model by assuming that individuals are risk averse, that marriage is risky, and that divorce is even riskier. The model predicts that conditional on the expected gains to marriage and divorce, the probability of divorce increases with relative risk tolerance because risk averse individuals require compensation for the additional risk that is inherent in divorce. To implement the model empirically, we use data for first-married women and men from the 1979 National Longitudinal Survey of Youth to estimate a probit model of divorce in which a measure of risk tolerance is among the covariates. The estimates reveal that a 1-point increase in risk tolerance raises the predicted probability of divorce by 4.3% for a representative man and by 11.4% for a representative woman. These findings are consistent with the notion that divorce entails a greater income gamble for women than for men. PMID:21308563

  14. Divorce as risky behavior.

    PubMed

    Light, Audrey; Ahn, Taehyun

    2010-11-01

    Given that divorce often represents a high-stakes income gamble, we ask how individual levels of risk tolerance affect the decision to divorce. We extend the orthodox divorce model by assuming that individuals are risk averse, that marriage is risky, and that divorce is even riskier. The model predicts that conditional on the expected gains to marriage and divorce, the probability of divorce increases with relative risk tolerance because risk averse individuals require compensation for the additional risk that is inherent in divorce. To implement the model empirically, we use data for first-married women and men from the 1979 National Longitudinal Survey of Youth to estimate a probit model of divorce in which a measure of risk tolerance is among the covariates. The estimates reveal that a 1-point increase in risk tolerance raises the predicted probability of divorce by 4.3% for a representative man and by 11.4% for a representative woman. These findings are consistent with the notion that divorce entails a greater income gamble for women than for men.

  15. Estimating relative risks for common outcome using PROC NLP.

    PubMed

    Yu, Binbing; Wang, Zhuoqiao

    2008-05-01

    In cross-sectional or cohort studies with binary outcomes, it is biologically interpretable and of interest to estimate the relative risk or prevalence ratio, especially when the response rates are not rare. Several methods have been used to estimate the relative risk, among which the log-binomial models yield the maximum likelihood estimate (MLE) of the parameters. Because of restrictions on the parameter space, the log-binomial models often run into convergence problems. Some remedies, e.g., the Poisson and Cox regressions, have been proposed. However, these methods may give out-of-bound predicted response probabilities. In this paper, a new computation method using the SAS Nonlinear Programming (NLP) procedure is proposed to find the MLEs. The proposed NLP method was compared to the COPY method, a modified method to fit the log-binomial model. Issues in the implementation are discussed. For illustration, both methods were applied to data on the prevalence of microalbuminuria (micro-protein leakage into urine) for kidney disease patients from the Diabetes Control and Complications Trial. The sample SAS macro for calculating relative risk is provided in the appendix.
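The same constrained maximum-likelihood idea can be sketched outside SAS; here SciPy's SLSQP optimizer plays the role of PROC NLP on simulated data (an illustration of the technique, not the paper's macro):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 2000

# Simulated cohort: binary exposure, common outcome, true relative risk 1.5.
x = rng.integers(0, 2, n)
p_true = 0.3 * 1.5 ** x                # P(Y=1): 0.30 unexposed, 0.45 exposed
y = rng.random(n) < p_true
X = np.column_stack([np.ones(n), x])

def neg_loglik(beta):
    # Log-binomial model: P(Y=1 | x) = exp(x'beta).
    pr = np.clip(np.exp(X @ beta), 1e-10, 1 - 1e-10)
    return -np.sum(np.where(y, np.log(pr), np.log(1 - pr)))

# The parameter-space restriction exp(x'beta) <= 1, imposed on each unique
# covariate pattern -- the role PROC NLP's boundary constraints play.
Xu = np.unique(X, axis=0)
cons = {"type": "ineq", "fun": lambda b: -(Xu @ b)}
res = minimize(neg_loglik, x0=np.array([-1.5, 0.0]),
               constraints=cons, method="SLSQP")

rr_hat = np.exp(res.x[1])              # relative risk = exp(slope)
print(f"estimated relative risk: {rr_hat:.2f}")
```

Because the constraint keeps every fitted probability at or below 1, the predicted risks stay in bounds, the property that Poisson and Cox workarounds for the relative risk can lose.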

  16. Prostate Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Bladder Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Ovarian Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Pancreatic Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Testicular Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Breast Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Esophageal Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Cervical Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. Liver Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Lung Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Colorectal Cancer Risk Prediction Models

    Cancer.gov

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. AQUATOX coupled foodweb model for ecosystem risk assessment of Polybrominated diphenyl ethers (PBDEs) in lake ecosystems.

    PubMed

    Zhang, Lulu; Liu, Jingling

    2014-08-01

    The AQUATOX model considers the direct toxic effects of chemicals and their indirect effects through foodwebs. For this study, the AQUATOX model was applied to evaluating the ecological risk of polybrominated diphenyl ethers (PBDEs) in a highly anthropogenically disturbed lake, Baiyangdian Lake. Calibration and validation results indicated that the model can adequately describe the dynamics of 18 biological populations. Sensitivity analysis results suggested that the model is highly sensitive to temperature limitation. PBDEs risk estimate results demonstrate that estimated risk for natural ecosystems cannot be fully explained by single-species toxicity data alone. The AQUATOX model could provide a good basis for ascertaining ecological protection levels of "chemicals of concern" for aquatic ecosystems. Therefore, AQUATOX can potentially be used to provide the information needed for early warning and rapid forecasting of pollutant transport and fate in the management of chemicals that put aquatic ecosystems at risk. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. SEMIPARAMETRIC ADDITIVE RISKS REGRESSION FOR TWO-STAGE DESIGN SURVIVAL STUDIES

    PubMed Central

    Li, Gang; Wu, Tong Tong

    2011-01-01

    In this article we study a semiparametric additive risks model (McKeague and Sasieni (1994)) for two-stage design survival data where accurate information is available only on second stage subjects, a subset of the first stage study. We derive two-stage estimators by combining data from both stages. Large sample inferences are developed. As a by-product, we also obtain asymptotic properties of the single stage estimators of McKeague and Sasieni (1994) when the semiparametric additive risks model is misspecified. The proposed two-stage estimators are shown to be asymptotically more efficient than the second stage estimators. They also demonstrate smaller bias and variance for finite samples. The developed methods are illustrated using small intestine cancer data from the SEER (Surveillance, Epidemiology, and End Results) Program. PMID:21931467

  9. Risk assessment of turbine rotor failure using probabilistic ultrasonic non-destructive evaluations

    NASA Astrophysics Data System (ADS)

    Guan, Xuefei; Zhang, Jingdan; Zhou, S. Kevin; Rasselkorde, El Mahjoub; Abbasi, Waheed A.

    2014-02-01

    The study presents a method and application of risk assessment methodology for turbine rotor fatigue failure using probabilistic ultrasonic nondestructive evaluations. A rigorous probabilistic modeling for ultrasonic flaw sizing is developed by incorporating the model-assisted probability of detection, and the probability density function (PDF) of the actual flaw size is derived. Two general scenarios, namely the ultrasonic inspection with an identified flaw indication and the ultrasonic inspection without flaw indication, are considered in the derivation. To perform estimations for fatigue reliability and remaining useful life, uncertainties from ultrasonic flaw sizing and fatigue model parameters are systematically included and quantified. The model parameter PDF is estimated using Bayesian parameter estimation and actual fatigue testing data. The overall method is demonstrated using a realistic application of steam turbine rotor, and the risk analysis under given safety criteria is provided to support maintenance planning.
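
    The flaw-sizing step described above can be sketched with Bayes' rule: the density of the true flaw size, conditional on a detection (or on no indication), is the prior density reweighted by the probability-of-detection (POD) curve. A minimal numerical sketch, with a hypothetical log-logistic POD and an exponential prior (neither taken from the paper):

```python
import math

# Hypothetical model parameters (not from the paper): a log-logistic
# probability-of-detection curve and an exponential prior on flaw size a (mm).
def pod(a, a50=1.0, beta=4.0):
    # POD(a): probability that a flaw of size a is detected.
    return a**beta / (a50**beta + a**beta)

def prior(a, lam=1.0):
    return lam * math.exp(-lam * a)

# Discretize flaw size and form both posteriors by Bayes' rule:
#   f(a | detection)    is proportional to POD(a)     * f(a)
#   f(a | no detection) is proportional to (1-POD(a)) * f(a)
da = 0.001
grid = [i * da for i in range(1, 10000)]
w_det = [pod(a) * prior(a) for a in grid]
w_miss = [(1 - pod(a)) * prior(a) for a in grid]
z_det, z_miss = sum(w_det) * da, sum(w_miss) * da
post_det = [w / z_det for w in w_det]
post_miss = [w / z_miss for w in w_miss]

# Posterior mean flaw size under each scenario.
mean_det = sum(a * p for a, p in zip(grid, post_det)) * da
mean_miss = sum(a * p for a, p in zip(grid, post_miss)) * da
# A detected flaw is, on average, larger than an undetected one.
assert mean_det > mean_miss
```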

  10. Predictions of Leukemia Risks to Astronauts from Solar Particle Events

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Atwell, W.; Kim, M. Y.; George, K. A.; Ponomarev, A.; Nikjoo, H.; Wilson, J. W.

    2006-01-01

    Leukemias consisting of acute and chronic myeloid leukemia and acute lymphatic lymphomas represent the earliest cancers that appear after radiation exposure, have a high lethality fraction, and make up a significant fraction of the overall fatal cancer risk from radiation for adults. Several considerations impact the recommendation of a preferred model for the estimation of leukemia risks from solar particle events (SPE's): The BEIR VII report recommends several changes to the method of calculation of leukemia risk compared to the methods recommended by the NCRP Report No. 132 including the preference of a mixture model with additive and multiplicative components in BEIR VII compared to the additive transfer model recommended by NCRP Report No. 132. Proton fluences and doses vary considerably across marrow regions because of the characteristic spectra of primary solar protons making the use of an average dose suspect. Previous estimates of bone marrow doses from SPE's have used an average body-shielding distribution for marrow based on the computerized anatomical man model (CAM). We have developed an 82-point body-shielding distribution that faithfully reproduces the mean and variance of SPE doses in the active marrow regions (head and neck, chest, abdomen, pelvis and thighs) allowing for more accurate estimation of linear- and quadratic-dose components of the marrow response. SPE's have differential dose-rates and a pseudo-quadratic dose response term is possible in the peak-flux period of an event. Also, the mechanistic basis for leukemia risk continues to improve allowing for improved strategies in choosing dose-rate modulation factors and radiation quality descriptors. We make comparisons of the various choices of the components in leukemia risk estimates in formulating our preferred model. A major finding is that leukemia could be the dominant risk to astronauts for a major solar particle event.

  11. REVIEW OF DRAFT REVISED BLUE BOOK ON ESTIMATING ...

    EPA Pesticide Factsheets

    In 1994, EPA published a report, referred to as the “Blue Book,” which lays out EPA’s current methodology for quantitatively estimating radiogenic cancer risks. A follow-on report made minor adjustments to the previous estimates and presented a partial analysis of the uncertainties in the numerical estimates. In 2006, the National Research Council of the National Academy of Sciences released a report on the health risks from exposure to low levels of ionizing radiation. Cosponsored by the EPA and several other Federal agencies, Health Risks from Exposure to Low Levels of Ionizing Radiation BEIR VII Phase 2 (BEIR VII) primarily addresses cancer and genetic risks from low doses of low-LET radiation. In the draft White Paper: Modifying EPA Radiation Risk Models Based on BEIR VII (White Paper), ORIA proposed changes in EPA’s methodology for estimating radiogenic cancers, based on the contents of BEIR VII and some ancillary information. For the most part, it proposed to adopt the models and methodology recommended in BEIR VII; however, certain modifications and expansions are considered to be desirable or necessary for EPA’s purposes. EPA sought advice from the Agency’s Science Advisory Board on the application of BEIR VII and on issues relating to these modifications and expansions in the Advisory on EPA’s Draft White Paper: Modifying EPA Radiation Risk Models Based on BEIR VII (record # 83044). The SAB issued its Advisory on Jan. 31, 2008 (EPA-SAB-08-

  12. Effects of a 20 year rain event: a quantitative microbial risk assessment of a case of contaminated bathing water in Copenhagen, Denmark.

    PubMed

    Andersen, S T; Erichsen, A C; Mark, O; Albrechtsen, H-J

    2013-12-01

    Quantitative microbial risk assessments (QMRAs) often lack data on water quality, and the many assumptions this forces introduce great uncertainty. Here, the quantity of waste water contamination was estimated and included in a QMRA of an extreme rain event that led to combined sewer overflow (CSO) into bathing water where an ironman competition later took place. Two dynamic models, (1) a drainage model and (2) a 3D hydrodynamic model, estimated the dilution of waste water from source to recipient. The drainage model estimated that 2.6% of waste water was left in the system before the CSO, and the hydrodynamic model estimated that 4.8% of the recipient bathing water came from the CSO, so on average 0.13% of the bathing water during the ironman competition was waste water. The total estimated incidence rate from a conservative estimate of the pathogenic load of five reference pathogens was 42%, comparable to the 55% observed in an epidemiological study of the case. The combination of dynamic models and exposure data led to an improved QMRA that included an estimate of the dilution factor. This approach has not been described previously.
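
    The reported dilution factor follows directly from multiplying the two model outputs; a one-line check using only the percentages quoted in the abstract:

```python
# The overall waste-water fraction in the bathing water is the product of the
# two model outputs quoted in the abstract.
frac_wastewater_in_cso = 0.026   # drainage model: waste water left before CSO
frac_cso_in_bathing = 0.048      # hydrodynamic model: CSO share of bathing water
frac_wastewater_in_bathing = frac_wastewater_in_cso * frac_cso_in_bathing
# 0.026 * 0.048 = 0.001248, i.e. roughly the 0.13% average cited above.
assert abs(frac_wastewater_in_bathing - 0.001248) < 1e-12
```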

  13. Spatio-temporal distribution of soil-transmitted helminth infections in Brazil.

    PubMed

    Chammartin, Frédérique; Guimarães, Luiz H; Scholte, Ronaldo Gc; Bavia, Mara E; Utzinger, Jürg; Vounatsou, Penelope

    2014-09-18

    In Brazil, preventive chemotherapy targeting soil-transmitted helminthiasis is being scaled-up. Hence, spatially explicit estimates of infection risks providing information about the current situation are needed to guide interventions. Available high-resolution national model-based estimates either rely on analyses of data restricted to a given period of time, or on historical data collected over a longer period. While efforts have been made to take into account the spatial structure of the data in the modelling approach, little emphasis has been placed on the temporal dimension. We extracted georeferenced survey data on the prevalence of infection with soil-transmitted helminths (i.e. Ascaris lumbricoides, hookworm and Trichuris trichiura) in Brazil from the Global Neglected Tropical Diseases (GNTD) database. Selection of the most important predictors of infection risk was carried out using a Bayesian geostatistical approach and temporal models that address non-linearity and correlation of the explanatory variables. The spatial process was estimated through a predictive process approximation. Spatio-temporal models were built on the selected predictors with integrated nested Laplace approximation using stochastic partial differential equations. Our models revealed that, over the past 20 years, the risk of soil-transmitted helminth infection has decreased in Brazil, mainly because of the reduction of A. lumbricoides and hookworm infections. From 2010 onwards, we estimate that the infection prevalences with A. lumbricoides, hookworm and T. trichiura are 3.6%, 1.7% and 1.4%, respectively. We also provide a map highlighting municipalities in need of preventive chemotherapy, based on a predicted soil-transmitted helminth infection risk in excess of 20%. The need for treatments in the school-aged population at the municipality level was estimated at 1.8 million doses of anthelminthic tablets per year. 
The analysis of the spatio-temporal aspect of the risk of infection with soil-transmitted helminths contributes to a better understanding of the evolution of risk over time. Risk estimates provide the soil-transmitted helminthiasis control programme in Brazil with useful benchmark information for prioritising and improving spatial and temporal targeting of interventions.

  14. Estimating risks of heat strain by age and sex: a population-level simulation model.

    PubMed

    Glass, Kathryn; Tait, Peter W; Hanna, Elizabeth G; Dear, Keith

    2015-05-18

    Individuals living in hot climates face health risks from hyperthermia due to excessive heat. Heat strain is influenced by weather exposure and by individual characteristics such as age, sex, body size, and occupation. To explore the population-level drivers of heat strain, we developed a simulation model that scales up individual risks of heat storage (estimated using Myrup and Morgan's man model "MANMO") to a large population. Using Australian weather data, we identify high-risk weather conditions together with individual characteristics that increase the risk of heat stress under these conditions. The model identifies elevated risks in children and the elderly, with females aged 75 and older those most likely to experience heat strain. Risk of heat strain in males does not increase as rapidly with age, but is greatest on hot days with high solar radiation. Although cloudy days are less dangerous for the wider population, older women still have an elevated risk of heat strain on hot cloudy days or when indoors during high temperatures. Simulation models provide a valuable method for exploring population level risks of heat strain, and a tool for evaluating public health and other government policy interventions.
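
    The scale-up step (applying an individual-level risk function across a large synthetic population) can be sketched as follows; the risk function below is a hypothetical logistic stand-in, not the MANMO heat-storage model used in the study:

```python
import math
import random

random.seed(1)

# Hypothetical individual-level risk of heat strain (NOT the MANMO heat-storage
# model): a logistic in temperature and age, with an extra term for women aged
# 75 and over, purely to illustrate the population scale-up step.
def p_heat_strain(temp_c, age, is_female):
    x = (0.35 * (temp_c - 38.0)
         + 0.03 * max(age - 65.0, 0.0)
         + (0.4 if is_female and age >= 75 else 0.0))
    return 1.0 / (1.0 + math.exp(-x))

# Scale the individual model up to a synthetic population of 100,000 and
# compare expected case counts on a hot day versus a mild day.
population = [(random.uniform(0, 95), random.random() < 0.5) for _ in range(100_000)]
hot_day = sum(p_heat_strain(42.0, age, f) for age, f in population)
mild_day = sum(p_heat_strain(30.0, age, f) for age, f in population)
assert hot_day > mild_day   # hotter weather yields more expected cases
```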

  15. On cancer risk estimation of urban air pollution.

    PubMed Central

    Törnqvist, M; Ehrenberg, L

    1994-01-01

    The usefulness of data from various sources for a cancer risk estimation of urban air pollution is discussed. Considering the irreversibility of initiations, a multiplicative model is preferred for solid tumors. As has been concluded for exposure to ionizing radiation, the multiplicative model, in comparison with the additive model, predicts a relatively larger number of cases at high ages, with enhanced underestimation of risks by short follow-up times in disease-epidemiological studies. For related reasons, the extrapolation of risk from animal tests on the basis of daily absorbed dose per kilogram body weight or per square meter surface area, without considering differences in life span, may lead to an underestimation, and agreements with epidemiologically determined values may be fortuitous. Considering these possibilities, the most likely lifetime risks of cancer death at the average exposure levels in Sweden were estimated for certain pollution fractions or indicator compounds in urban air. The risks amount to approximately 50 deaths per 100,000 for inhaled particulate organic material (POM), with a contribution from ingested POM about three times larger, while alkenes and butadiene each account for roughly 20 deaths per 100,000 individuals. Benzene and formaldehyde are also expected to be associated with considerable risk increments. Comparative potency methods were applied for POM and alkenes. Because the list of compounds considered is incomplete and the above estimates are uncertain, a total risk calculation for urban air has not been attempted here. PMID:7821292
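
    The contrast drawn above between additive and multiplicative transfer can be made concrete with illustrative numbers: a multiplicative excess scales with the steeply age-rising baseline, while an additive excess does not.

```python
# Illustrative contrast between additive and multiplicative risk transfer.
# Baseline rates rise steeply with age, so a multiplicative excess (a fixed
# fraction of baseline) concentrates predicted cases at high ages, while an
# additive excess is flat across ages. All numbers are hypothetical.
baseline = [50.0, 120.0, 300.0, 700.0, 1400.0]  # deaths/100,000/yr, ages 40-80
err = 0.10   # multiplicative model: excess = 10% of baseline at every age
ear = 30.0   # additive model: excess = 30 per 100,000 per year at every age

mult_excess = [err * b for b in baseline]
add_excess = [ear for _ in baseline]

# The multiplicative excess inherits the baseline's age gradient exactly:
assert abs(mult_excess[-1] / mult_excess[0] - baseline[-1] / baseline[0]) < 1e-9
# whereas the additive excess has no age gradient at all:
assert add_excess[-1] == add_excess[0]
# The multiplicative model's late-age excess (140 vs 30 here at the oldest
# age) is the part most easily missed under short follow-up.
assert mult_excess[-1] > add_excess[-1]
```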

  16. Projecting Individualized Absolute Invasive Breast Cancer Risk in US Hispanic Women.

    PubMed

    Banegas, Matthew P; John, Esther M; Slattery, Martha L; Gomez, Scarlett Lin; Yu, Mandi; LaCroix, Andrea Z; Pee, David; Chlebowski, Rowan T; Hines, Lisa M; Thompson, Cynthia A; Gail, Mitchell H

    2017-02-01

    There is no model to estimate absolute invasive breast cancer risk for Hispanic women. The San Francisco Bay Area Breast Cancer Study (SFBCS) provided data on Hispanic breast cancer case patients (533 US-born, 553 foreign-born) and control participants (464 US-born, 947 foreign-born). These data yielded estimates of relative risk (RR) and attributable risk (AR) separately for US-born and foreign-born women. Nativity-specific absolute risks were estimated by combining RR and AR information with nativity-specific invasive breast cancer incidence and competing mortality rates from the California Cancer Registry and Surveillance, Epidemiology, and End Results program to develop the Hispanic risk model (HRM). In independent data, we assessed model calibration through observed/expected (O/E) ratios, and we estimated discriminatory accuracy with the area under the receiver operating characteristic curve (AUC) statistic. The US-born HRM included age at first full-term pregnancy, biopsy for benign breast disease, and family history of breast cancer; the foreign-born HRM also included age at menarche. The HRM estimated lower risks than the National Cancer Institute's Breast Cancer Risk Assessment Tool (BCRAT) for US-born Hispanic women, but higher risks in foreign-born women. In independent data from the Women's Health Initiative, the HRM was well calibrated for US-born women (observed/expected [O/E] ratio = 1.07, 95% confidence interval [CI] = 0.81 to 1.40), but seemed to overestimate risk in foreign-born women (O/E ratio = 0.66, 95% CI = 0.41 to 1.07). The AUC was 0.564 (95% CI = 0.485 to 0.644) for US-born and 0.625 (95% CI = 0.487 to 0.764) for foreign-born women. The HRM is the first absolute risk model that is based entirely on data specific to Hispanic women by nativity. Further studies in Hispanic women are warranted to evaluate its validity. Published by Oxford University Press 2016. 
This work is written by US Government employees and is in the public domain in the US.
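
    The construction behind such absolute-risk projections (combining a baseline hazard, a relative risk, and competing mortality) can be sketched in a few lines; the annual rates below are illustrative, not the HRM's.

```python
# Illustrative projection of absolute risk from an age-specific baseline
# hazard, a relative risk, and competing mortality: the general construction
# behind Gail-type models such as the HRM. All rates below are hypothetical.
def absolute_risk(h1, h2, rr, years):
    """h1[t]: baseline annual cancer hazard; h2[t]: annual competing
    mortality hazard; rr: the woman's relative risk."""
    surv = 1.0   # probability of being alive and event-free so far
    risk = 0.0
    for t in range(years):
        hazard = h1[t] * rr
        risk += surv * hazard            # cancer occurs in year t
        surv *= (1.0 - hazard - h2[t])   # survives year t event-free
    return risk

h1 = [0.002] * 10   # 0.2% baseline annual incidence (illustrative)
h2 = [0.010] * 10   # 1% annual competing mortality (illustrative)
low = absolute_risk(h1, h2, 1.0, 10)
high = absolute_risk(h1, h2, 2.0, 10)
assert high > low          # a higher relative risk raises absolute risk,
assert high < 2.0 * low    # but by less than 2x, because of competing risks
```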

  17. Preliminary risk benefit assessment for nuclear waste disposal in space

    NASA Technical Reports Server (NTRS)

    Rice, E. E.; Denning, R. S.; Friedlander, A. L.; Priest, C. C.

    1982-01-01

    This paper describes the recent work of the authors on the evaluation of health risk benefits of space disposal of nuclear waste. The paper describes a risk model approach that has been developed to estimate the non-recoverable, cumulative, expected radionuclide release to the earth's biosphere for different options of nuclear waste disposal in space. Risk estimates for the disposal of nuclear waste in a mined geologic repository and the short- and long-term risk estimates for space disposal were developed. The results showed that the preliminary estimates of space disposal risks are low, even with the estimated uncertainty bounds. If calculated release risks for mined geologic repositories remain as low as given by the U.S. DOE, and U.S. EPA requirements continue to be met, then no additional space disposal study effort in the U.S. is warranted at this time. If risks perceived by the public are significant in the acceptance of mined geologic repositories, then consideration of space disposal as a complement to the mined geologic repository is warranted.

  18. Survival analysis with error-prone time-varying covariates: a risk set calibration approach

    PubMed Central

    Liao, Xiaomei; Zucker, David M.; Li, Yi; Spiegelman, Donna

    2010-01-01

    Summary: Occupational, environmental, and nutritional epidemiologists are often interested in estimating the prospective effect of time-varying exposure variables such as cumulative exposure or cumulative updated average exposure, in relation to chronic disease endpoints such as cancer incidence and mortality. From exposure validation studies, it is apparent that many of the variables of interest are measured with moderate to substantial error. Although the ordinary regression calibration approach is approximately valid and efficient for measurement error correction of relative risk estimates from the Cox model with time-independent point exposures when the disease is rare, it is not adaptable for use with time-varying exposures. By re-calibrating the measurement error model within each risk set, a risk set regression calibration (RRC) method is proposed for this setting. An algorithm for a bias-corrected point estimate of the relative risk using the RRC approach is presented, followed by the derivation of an estimate of its variance, resulting in a sandwich estimator. Emphasis is on methods applicable to the main study/external validation study design, which arises in important applications. Simulation studies under several assumptions about the error model were carried out, which demonstrated the validity and efficiency of the method in finite samples. The method was applied to a study of diet and cancer from Harvard’s Health Professionals Follow-up Study (HPFS). PMID:20486928

  19. Stress increases the risk of type 2 diabetes onset in women: A 12-year longitudinal study using causal modelling

    PubMed Central

    Oldmeadow, Christopher; Hure, Alexis; Luu, Judy; Loxton, Deborah

    2017-01-01

    Background: Type 2 diabetes is associated with significant morbidity and mortality. Modifiable risk factors have been found to contribute up to 60% of type 2 diabetes risk. However, type 2 diabetes continues to rise despite implementation of interventions based on traditional risk factors. There is a clear need to identify additional risk factors for chronic disease prevention. The aim of this study was to examine the relationship between perceived stress and type 2 diabetes onset, and partition the estimates into direct and indirect effects. Methods and findings: Women born in 1946–1951 (n = 12,844) completed surveys for the Australian Longitudinal Study on Women’s Health in 1998, 2001, 2004, 2007 and 2010. The total causal effect was estimated using logistic regression and marginal structural modelling. Controlled direct effects were estimated through conditioning in the regression model. A graded association was found between perceived stress and all mediators in the multivariate time lag analyses. A significant association was found between hypertension, as well as physical activity and body mass index, and diabetes, but not smoking or diet quality. Moderate/high stress levels were associated with a 2.3-fold increase in the odds of diabetes three years later, for the total estimated effect. Results were only slightly attenuated when the direct and indirect effects of perceived stress on diabetes were partitioned, with the mediators only explaining 10–20% of the excess variation in diabetes. Conclusions: Perceived stress is a strong risk factor for type 2 diabetes. The majority of the effect estimate of stress on diabetes risk is not mediated by the traditional risk factors of hypertension, physical activity, smoking, diet quality, and body mass index. This gives a new pathway for diabetes prevention trials and clinical practice. PMID:28222165

  20. Assessment of NHTSA’s Report “Relationships Between Fatality Risk, Mass, and Footprint in Model Year 2004-2011 Passenger Cars and LTVs” (LBNL Phase 1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenzel, Tom P.

    In its 2012 report NHTSA simulated the effect four fleetwide mass reduction scenarios would have on the change in annual fatalities. NHTSA estimated that the most aggressive of these scenarios (reducing mass by 5.2% in heavier light trucks and 2.6% in all other vehicle types except lighter cars) would result in a small reduction in societal fatalities. LBNL replicated the methodology NHTSA used to simulate six mass reduction scenarios, including the mass reductions recommended in the 2015 NRC committee report and those estimated by EPA for 2021 and 2025 in the TAR, using the updated data through 2012. The analysis indicates that the estimated change in fatalities under each scenario based on the updated analysis is comparable to that in the 2012 analysis, but less beneficial or more detrimental than that in the 2016 analysis. For example, an across-the-board 100-lb reduction in mass would result in an estimated 157 additional annual fatalities based on the 2012 analysis, but only an estimated 91 additional annual fatalities based on the 2016 analysis, and an additional 87 fatalities based on the current analysis. The mass reductions recommended by the 2015 NRC committee report would result in a 224 increase in annual fatalities in the 2012 analysis, a 344 decrease in annual fatalities in the 2016 analysis, and a 141 increase in fatalities in the current analysis. The mass reductions EPA estimated for 2025 in the TAR would result in a 203 decrease in fatalities based on the 2016 analysis, but an increase of 39 fatalities based on the current analysis. 
These results support NHTSA’s conclusion from its 2012 study that, when footprint is held fixed, “no judicious combination of mass reductions in the various classes of vehicles results in a statistically significant fatality increase and many potential combinations are safety-neutral as point estimates.” Like the previous NHTSA studies, this updated report concludes that the estimated effect of mass reduction while maintaining footprint on societal U.S. fatality risk is small, and not statistically significant at the 95% or 90% confidence level for all vehicle types based on the jack-knife method NHTSA used. This report also finds that the estimated effects of other control variables, such as vehicle type, specific safety technologies, and crash conditions such as whether the crash occurred at night, in a rural county, or on a high-speed road, on risk are much larger, in some cases two orders of magnitude larger, than the estimated effect of mass or footprint reduction on risk. Finally, this report shows that after accounting for the many vehicle, driver, and crash variables NHTSA used in its regression analyses, there remains a wide variation in risk by vehicle make and model, and this variation is unrelated to vehicle mass. Although the purpose of the NHTSA and LBNL reports is to estimate the effect of vehicle mass reduction on societal risk, this is not exactly what the regression models are estimating. Rather, they are estimating the recent historical relationship between mass and risk, after accounting for most measurable differences between vehicles, drivers, and crash times and locations. In essence, the regression models are comparing the risk of a 2600-lb Dodge Neon with that of a 2500-lb Honda Civic, after attempting to account for all other differences between the two vehicles. The models are not estimating the effect of literally removing 100 pounds from the Neon, leaving everything else unchanged. 
In addition, the analyses are based on the relationship of vehicle mass and footprint on risk for recent vehicle designs (model year 2004 to 2011). These relationships may or may not continue into the future as manufacturers utilize new vehicle designs and incorporate new technologies, such as more extensive use of strong lightweight materials and specific safety technologies. Therefore, throughout this report we use the phrase “the estimated effect of mass (or footprint) reduction on risk” as shorthand for “the estimated change in risk as a function of its relationship to mass (or footprint) for vehicle models of recent design.”

  1. External validation of risk prediction models for incident colorectal cancer using UK Biobank

    PubMed Central

    Usher-Smith, J A; Harshfield, A; Saunders, C L; Sharp, S J; Emery, J; Walter, F M; Muir, K; Griffin, S J

    2018-01-01

    Background: This study aimed to compare and externally validate risk scores developed to predict incident colorectal cancer (CRC) that include variables routinely available or easily obtainable via self-completed questionnaire. Methods: External validation of fourteen risk models from a previous systematic review in 373 112 men and women within the UK Biobank cohort with 5-year follow-up, no prior history of CRC and data for incidence of CRC through linkage to national cancer registries. Results: There were 1719 (0.46%) cases of incident CRC. The performance of the risk models varied substantially. In men, the QCancer10 model and models by Tao, Driver and Ma all had an area under the receiver operating characteristic curve (AUC) between 0.67 and 0.70. Discrimination was lower in women: the QCancer10, Wells, Tao, Guesmi and Ma models were the best performing with AUCs between 0.63 and 0.66. Assessment of calibration was possible for six models in men and women. All would require country-specific recalibration if estimates of absolute risks were to be given to individuals. Conclusions: Several risk models based on easily obtainable data have relatively good discrimination in a UK population. Modelling studies are now required to estimate the potential health benefits and cost-effectiveness of implementing stratified risk-based CRC screening. PMID:29381683
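
    The AUC figures quoted above have a simple probabilistic reading: the chance that a randomly chosen case was assigned a higher risk score than a randomly chosen non-case (the Mann-Whitney interpretation). A minimal pairwise implementation on toy scores:

```python
# AUC as the probability that a randomly chosen case outranks a randomly
# chosen non-case. Toy scores only; the UK Biobank data are not reproduced.
def auc(case_scores, control_scores):
    wins = ties = 0
    for c in case_scores:
        for n in control_scores:
            if c > n:
                wins += 1
            elif c == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(case_scores) * len(control_scores))

cases = [0.9, 0.7, 0.6]         # predicted risks for incident cases (toy)
controls = [0.8, 0.5, 0.4, 0.3]  # predicted risks for non-cases (toy)
a = auc(cases, controls)
assert 0.5 < a <= 1.0            # better than chance on this toy data
```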

  2. Estimating the association between metabolic risk factors and marijuana use in U.S. adults using data from the continuous National Health and Nutrition Examination Survey.

    PubMed

    Thompson, Christin Ann; Hay, Joel W

    2015-07-01

    More research is needed on the health effects of marijuana use. Results of previous studies indicate that marijuana could alleviate certain factors of metabolic syndrome, such as obesity. Data on 6281 persons from National Health and Nutrition Examination Survey from 2005 to 2012 were used to estimate the effect of marijuana use on cardiometabolic risk factors. The reliability of ordinary least squares (OLS) regression models was tested by replacing marijuana use as the risk factor of interest with alcohol and carbohydrate consumption. Instrumental variable methods were used to account for the potential endogeneity of marijuana use. OLS models show lower fasting insulin, insulin resistance, body mass index, and waist circumference in users compared with nonusers. However, when alcohol and carbohydrate intake substitute for marijuana use in OLS models, similar metabolic benefits are estimated. The Durbin-Wu-Hausman tests provide evidence of endogeneity of marijuana use in OLS models, but instrumental variables models do not yield significant estimates for marijuana use. These findings challenge the robustness of OLS estimates of a positive relationship between marijuana use and fasting insulin, insulin resistance, body mass index, and waist circumference. Copyright © 2015 Elsevier Inc. All rights reserved.
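
    The instrumental-variable correction referenced above can be sketched with a two-stage least squares (2SLS) toy example on simulated data: when the exposure is confounded, the naive OLS slope is biased, while regressing the outcome on the instrument-predicted exposure is not. Nothing below uses the NHANES data or the paper's actual instruments.

```python
import random

random.seed(0)

# Toy two-stage least squares (2SLS): the exposure x is confounded by u, so
# the naive OLS slope is biased; projecting x on the instrument z first
# removes the bias. All data are simulated.
def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

n = 20_000
z = [random.gauss(0, 1) for _ in range(n)]                   # instrument
u = [random.gauss(0, 1) for _ in range(n)]                   # hidden confounder
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]   # endogenous exposure
y = [ui + random.gauss(0, 1) for ui in u]                    # true effect of x is 0

naive = ols_slope(x, y)          # biased toward the confounded association
g = ols_slope(z, x)              # stage 1: project x on z
xhat = [g * zi for zi in z]
iv = ols_slope(xhat, y)          # stage 2: regress y on fitted exposure
assert abs(iv) < abs(naive)      # the IV estimate is much closer to zero
```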

  3. Effect of Using Different Vehicle Weight Groups on the Estimated Relationship Between Mass Reduction and U.S. Societal Fatality Risk per Vehicle Miles of Travel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenzel, Tom P.

    This report recalculates the estimated relationship between vehicle mass and societal fatality risk, using alternative groupings by vehicle weight, to test whether the trend of decreasing fatality risk from mass reduction as case vehicle mass increases holds over smaller increments of the range in case vehicle masses. The NHTSA baseline regression model estimates the relationship using two weight groups for cars and light trucks; we re-estimated the mass reduction coefficients using four, six, and eight bins of vehicle mass. The estimated effect of mass reduction on societal fatality risk was not consistent over the range in vehicle masses in these weight bins. These results suggest that the relationship indicated by the NHTSA baseline model is a result of other, unmeasured attributes of the mix of vehicles in the lighter vs. heavier weight bins, and not necessarily the result of a correlation between mass reduction and societal fatality risk. An analysis of the average vehicle, driver, and crash characteristics across the various weight groupings did not reveal any strong trends that might explain the lack of a consistent trend of decreasing fatality risk from mass reduction in heavier vehicles.

  4. A quantitative assessment of risks of heavy metal residues in laundered shop towels and their use by workers.

    PubMed

    Connor, Kevin; Magee, Brian

    2014-10-01

    This paper presents a risk assessment of exposure to metal residues in laundered shop towels by workers. The concentrations of 27 metals measured in a synthetic sweat leachate were used to estimate the releasable quantity of metals which could be transferred to workers' skin. Worker exposure was evaluated quantitatively with an exposure model that focused on towel-to-hand transfer and subsequent hand-to-food or -mouth transfers. The exposure model was based on conservative, but reasonable assumptions regarding towel use and default exposure factor values from the published literature or regulatory guidance. Transfer coefficients were derived from studies representative of the exposures to towel users. Contact frequencies were based on assumed high-end use of shop towels, but constrained by a theoretical maximum dermal loading. The risk estimates for workers developed for all metals were below applicable regulatory risk benchmarks. The risk assessment for lead utilized the Adult Lead Model and concluded that predicted lead intakes do not constitute a significant health hazard based on potential worker exposures. Uncertainties are discussed in relation to the overall confidence in the exposure estimates developed for each exposure pathway and the likelihood that the exposure model is under- or overestimating worker exposures and risk. Copyright © 2014 Elsevier Inc. All rights reserved.
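
    The transfer-pathway algebra described above reduces to multiplying a releasable quantity by a chain of transfer fractions and contact frequencies; a sketch with entirely hypothetical parameter values (none taken from the paper):

```python
# Minimal hand-to-mouth exposure sketch with hypothetical values (the paper's
# measured concentrations and exposure factors are NOT reproduced here).
releasable_ug_per_towel = 50.0   # leachable metal per towel (hypothetical)
towel_to_hand = 0.10             # fraction transferred to hands per use
hand_to_mouth = 0.05             # fraction of hand loading later ingested
towels_per_day = 12              # high-end shop-towel use (hypothetical)
body_weight_kg = 80.0

daily_dose_ug_per_kg = (releasable_ug_per_towel * towel_to_hand *
                        hand_to_mouth * towels_per_day) / body_weight_kg

# Compare with a hypothetical reference dose to form a hazard quotient.
rfd_ug_per_kg_day = 5.0
hazard_quotient = daily_dose_ug_per_kg / rfd_ug_per_kg_day
assert hazard_quotient < 1.0     # below the benchmark in this illustration
```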

  5. Characterizing vaccine-associated risks using cubic smoothing splines.

    PubMed

    Brookhart, M Alan; Walker, Alexander M; Lu, Yun; Polakowski, Laura; Li, Jie; Paeglow, Corrie; Puenpatom, Tosmai; Izurieta, Hector; Daniel, Gregory W

    2012-11-15

    Estimating risks associated with the use of childhood vaccines is challenging. The authors propose a new approach for studying short-term vaccine-related risks. The method uses a cubic smoothing spline to flexibly estimate the daily risk of an event after vaccination. The predicted incidence rates from the spline regression are then compared with the expected rates under a log-linear trend that excludes the days surrounding vaccination. The 2 models are then used to estimate the excess cumulative incidence attributable to the vaccination during the 42-day period after vaccination. Confidence intervals are obtained using a model-based bootstrap procedure. The method is applied to a study of known effects (positive controls) and expected noneffects (negative controls) of the measles, mumps, and rubella and measles, mumps, rubella, and varicella vaccines among children who are 1 year of age. The splines revealed well-resolved spikes in fever, rash, and adenopathy diagnoses, with the maximum incidence occurring between 9 and 11 days after vaccination. For the negative control outcomes, the spline model yielded a predicted incidence more consistent with the modeled day-specific risks, although there was evidence of increased risk of diagnoses of congenital malformations after vaccination, possibly because of a "provider visit effect." The proposed approach may be useful for vaccine safety surveillance.
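
    The comparison step of the method (excess cumulative incidence relative to a log-linear background fitted away from the vaccination window) can be sketched on simulated daily rates. For brevity the cubic-spline smoothing is omitted here, and observed daily rates stand in for the spline fit.

```python
import math

# Sketch of the comparison step only: fit a log-linear background trend to
# daily event rates excluding the post-vaccination window, then sum the
# excess over the 42-day window. Data are simulated; the paper's spline
# smoothing is replaced by the observed rates themselves.
days = list(range(60))
# Simulated daily rates: flat background of 2/day with a bump on days 9-11,
# matching the 9-11 day peak reported for fever/rash above.
observed = [2.0 + (5.0 if 9 <= d <= 11 else 0.0) for d in days]

# Log-linear fit log(rate) = a + b*day on days outside the 0-42 risk window.
bg = [(d, math.log(observed[d])) for d in days if d > 42]
mx = sum(d for d, _ in bg) / len(bg)
my = sum(v for _, v in bg) / len(bg)
b = sum((d - mx) * (v - my) for d, v in bg) / sum((d - mx) ** 2 for d, _ in bg)
a = my - b * mx
expected = [math.exp(a + b * d) for d in days]

# Excess cumulative incidence attributable to vaccination, days 0-42.
excess = sum(observed[d] - expected[d] for d in range(43))
assert 14.0 < excess < 16.0   # the injected bump: 3 days x 5 extra events
```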

  6. Spatio-temporal population estimates for risk management

    NASA Astrophysics Data System (ADS)

    Cockings, Samantha; Martin, David; Smith, Alan; Martin, Rebecca

    2013-04-01

    Accurate estimation of population at risk from hazards and effective emergency management of events require not just appropriate spatio-temporal modelling of hazards but also of population. While much recent effort has been focused on improving the modelling and predictions of hazards (both natural and anthropogenic), there has been little parallel advance in the measurement or modelling of population statistics. Different hazard types occur over diverse temporal cycles, are of varying duration and differ significantly in their spatial extent. Even events of the same hazard type, such as flood events, vary markedly in their spatial and temporal characteristics. Conceptually and pragmatically then, population estimates should also be available for similarly varying spatio-temporal scales. Routine population statistics derived from traditional censuses or surveys are usually static representations in both space and time, recording people at their place of usual residence on census/survey night and presenting data for administratively defined areas. Such representations effectively fix the scale of population estimates in both space and time, which is unhelpful for meaningful risk management. Over recent years, the Pop24/7 programme of research, based at the University of Southampton (UK), has developed a framework for spatio-temporal modelling of population, based on gridded population surfaces. Based on a data model which is fully flexible in terms of space and time, the framework allows population estimates to be produced for any time slice relevant to the data contained in the model. It is based around a set of origin and destination centroids, which have capacities, spatial extents and catchment areas, all of which can vary temporally, such as by time of day, day of week, season. 
A background layer, containing information on features such as transport networks and landuse, provides information on the likelihood of people being in certain places at specific times. Unusual patterns associated with special events can also be modelled and the framework is fully volume preserving. Outputs from the model are gridded population surfaces for the specified time slice, either for total population or by sub-groups (e.g. age). Software to implement the models (SurfaceBuilder247) has been developed and pre-processed layers for typical time slices for England and Wales in 2001 and 2006 are available for UK academic purposes. The outputs and modelling framework from the Pop24/7 programme provide significant opportunities for risk management applications. For estimates of mid- to long-term cumulative population exposure to hazards, such as in flood risk mapping, populations can be produced for numerous time slices and integrated with flood models. For applications in emergency response/ management, time-specific population models can be used as seeds for agent-based models or other response/behaviour models. Estimates for sub-groups of the population also permit exploration of vulnerability through space and time. This paper outlines the requirements for effective spatio-temporal population models for risk management. It then describes the Pop24/7 framework and illustrates its potential for risk management through presentation of examples from natural and anthropogenic hazard applications. The paper concludes by highlighting key challenges for future research in this area.
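A minimal, volume-preserving time-slice calculation in the spirit of this framework might look as follows. All centroids, capacities, and away-from-home fractions below are invented for illustration; SurfaceBuilder247 itself models catchments, background layers, and population sub-groups far more richly.

```python
import numpy as np

# Hypothetical origin (residential) and destination (workplace) centroids
origin_pop = np.array([1200.0, 800.0, 500.0])   # usual-residence populations
dest_capacity = np.array([900.0, 600.0])        # workplace capacities

def time_slice(frac_away):
    """Population at origins/destinations when `frac_away` of residents are out."""
    moved = origin_pop * frac_away
    at_origin = origin_pop - moved
    # allocate movers to destinations in proportion to capacity
    at_dest = moved.sum() * dest_capacity / dest_capacity.sum()
    return at_origin, at_dest

day_origin, day_dest = time_slice(0.6)       # e.g. 2 pm on a working weekday
night_origin, night_dest = time_slice(0.05)  # e.g. 3 am
total = origin_pop.sum()
```

Volume preservation means each time slice redistributes, but never creates or loses, population: the day and night surfaces both sum to `total`.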

  7. Risk Prediction Models for Other Cancers or Multiple Sites

    Cancer.gov

    Developing statistical models that estimate the probability of developing cancers at other or multiple sites over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling on behavioral changes to decrease risk.

  8. [Application of Competing Risks Model in Predicting Smoking Relapse Following Ischemic Stroke].

    PubMed

    Hou, Li-Sha; Li, Ji-Jie; Du, Xu-Dong; Yan, Pei-Jing; Zhu, Cai-Rong

    2017-07-01

    To determine factors associated with smoking relapse in men who survived their first stroke. Data were collected through face-to-face interviews with stroke patients in the hospital, then repeated every three months via telephone from 2010 to 2014. The Kaplan-Meier method and a competing risks model were adopted to estimate and predict smoking relapse rates. The Kaplan-Meier method estimated a higher relapse rate than the competing risks model. The four-year relapse rate was 43.1% after adjustment for competing risks. Exposure to environmental tobacco smoke outside the home and workplace (such as bars and restaurants) (P = 0.01), being single (P < 0.01), and a prior history of smoking at least 20 cigarettes per day (P = 0.02) were significant predictors of smoking relapse. When competing risks exist, a competing risks model should be used in data analyses. Smoking interventions should give priority to those without a spouse and those with a heavy smoking history. A smoking ban in public settings can reduce smoking relapse in stroke patients.
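The gap between the Kaplan-Meier complement and the competing-risks (cumulative incidence) estimate can be reproduced on toy data. The event times below are invented, and tied times are processed sequentially to keep the sketch short, which differs slightly from the usual tied-event handling.

```python
import numpy as np

# Toy data: months to event; 1 = smoking relapse, 2 = death (competing risk), 0 = censored
times  = np.array([3, 6, 6, 9, 12, 12, 15, 18, 24, 30, 36, 48])
events = np.array([1, 2, 1, 0,  1,  2,  1,  0,  2,  1,  0,  0])
order = np.argsort(times, kind="stable")
times, events = times[order], events[order]

n_at_risk = len(times)
surv = 1.0      # all-cause event-free survival S(t-)
km_surv = 1.0   # KM "survival" when deaths are (wrongly) treated as censoring
cif = 0.0       # cumulative incidence of relapse (Aalen-Johansen form)
for e in events:
    if e == 1:
        cif += surv / n_at_risk          # S(t-) times the cause-specific hazard
        km_surv *= 1 - 1 / n_at_risk
    if e in (1, 2):
        surv *= 1 - 1 / n_at_risk        # any event removes the subject from risk
    n_at_risk -= 1

naive_relapse = 1 - km_surv              # overstates relapse when deaths compete
```

As in the study, the naive Kaplan-Meier complement exceeds the competing-risks estimate, because subjects who die can no longer relapse.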

  9. Fine-Scale Mapping by Spatial Risk Distribution Modeling for Regional Malaria Endemicity and Its Implications under the Low-to-Moderate Transmission Setting in Western Cambodia

    PubMed Central

    Okami, Suguru; Kohtake, Naohiko

    2016-01-01

    The disease burden of malaria has decreased as malaria elimination efforts progress. The mapping approach that uses spatial risk distribution modeling needs some adjustment and reinvestigation in accordance with situational changes. Here we applied a mathematical modeling approach for the standardized morbidity ratio (SMR), calculated from annual parasite incidence, using routinely aggregated surveillance reports, environmental data such as remote sensing data, and non-environmental anthropogenic data to create fine-scale spatial risk distribution maps of western Cambodia. Furthermore, we incorporated a combination of containment status indicators into the model to demonstrate spatial heterogeneities of the relationship between containment status and risks. The explanatory model was fitted to estimate the SMR of each area (adjusted R2 = 0.774; Akaike information criterion AIC = 149.423). A Bayesian modeling framework was applied to estimate the uncertainty of the model and cross-scale predictions. Fine-scale maps were created by the spatial interpolation of estimated SMRs at each village. Compared with geocoded case data, corresponding predicted values showed conformity [Spearman’s rank correlation r = 0.662 for inverse distance weighted interpolation and 0.645 for ordinary kriging (95% confidence intervals of 0.414–0.827 and 0.368–0.813, respectively); Welch’s t-test, not significant]. The proposed approach successfully explained regional malaria risks, and fine-scale risk maps were created under low-to-moderate malaria transmission settings where reinvestigations of existing risk modeling approaches were needed. Moreover, different representations of simulated outcomes of containment status indicators for respective areas provided useful insights for tailored interventional planning, considering regional malaria endemicity. PMID:27415623
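Of the two interpolators compared in this abstract, inverse distance weighting is simple enough to sketch directly. The village coordinates and SMR values below are hypothetical, not the Cambodian data.

```python
import numpy as np

# Hypothetical village coordinates (km) with model-estimated SMRs
pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [5.0, 5.0]])
smr = np.array([0.6, 1.4, 1.0, 2.1])

def idw(grid_xy, power=2.0):
    """Inverse-distance-weighted SMR surface evaluated at query points."""
    d = np.linalg.norm(grid_xy[:, None, :] - pts[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)                # avoid division by zero at data points
    w = 1.0 / d ** power
    return (w * smr).sum(axis=1) / w.sum(axis=1)

query = np.array([[2.0, 0.0], [0.0, 0.0], [5.0, 5.0]])
est = idw(query)
```

IDW is an exact interpolator (it honors the data at village locations) and its estimates always lie within the range of the observed SMRs; kriging additionally supplies a prediction variance, which is why the two are often compared.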

  10. Personalized long-term prediction of cognitive function: Using sequential assessments to improve model performance.

    PubMed

    Chi, Chih-Lin; Zeng, Wenjun; Oh, Wonsuk; Borson, Soo; Lenskaia, Tatiana; Shen, Xinpeng; Tonellato, Peter J

    2017-12-01

    Prediction of onset and progression of cognitive decline and dementia is important both for understanding the underlying disease processes and for planning health care for populations at risk. Predictors identified in research studies are typically assessed at one point in time. In this manuscript, we argue that an accurate model for predicting cognitive status over relatively long periods requires inclusion of time-varying components that are sequentially assessed at multiple time points (e.g., in multiple follow-up visits). We developed a pilot model to test the feasibility of using either estimated or observed risk factors to predict cognitive status. We developed two models, the first using a sequential estimation of risk factors originally obtained from 8 years prior, then improved by optimization. This model can predict how cognition will change over relatively long time periods. The second model uses observed rather than estimated time-varying risk factors and, as expected, results in better prediction. This model can generate predictions when newly observed data are acquired at a follow-up visit. The performance of both models, evaluated in 10-fold cross-validation and in various patient subgroups, provides supporting evidence for these pilot models. Each model consists of multiple base prediction units (BPUs), which were trained using the same set of data. The difference in usage and function between the two models is the source of input data: either estimated or observed data. In the next step of model refinement, we plan to integrate the two types of data together to flexibly predict dementia status and changes over time, when some time-varying predictors are measured only once and others are measured repeatedly. Computationally, the two data sources provide upper and lower bounds for predictive performance. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Project risk management in the construction of high-rise buildings

    NASA Astrophysics Data System (ADS)

    Titarenko, Boris; Hasnaoui, Amir; Titarenko, Roman; Buzuk, Liliya

    2018-03-01

    This paper presents project risk management methods that allow risks in the construction of high-rise buildings to be better identified and managed throughout the life cycle of the project. One of the project risk management processes is quantitative risk analysis, which usually includes assessing the potential impact of project risks and their probabilities. This paper reviews the most popular methods of risk probability assessment and indicates the advantages of the robust approach over traditional methods. Within the framework of the project risk management model, the robust approach of P. Huber is applied and extended to the regression analysis of project data. The suggested algorithms for estimating the parameters of statistical models yield reliable estimates. The theoretical problems of developing robust models built on the methodology of minimax estimates were reviewed, and an algorithm for the situation of asymmetric "contamination" was developed.

  12. Estimating outcomes in newborn infants using fuzzy logic

    PubMed Central

    Chaves, Luciano Eustáquio; Nascimento, Luiz Fernando C.

    2014-01-01

    OBJECTIVE: To build a linguistic model using the properties of fuzzy logic to estimate the risk of death of neonates admitted to a Neonatal Intensive Care Unit. METHODS: Computational model using fuzzy logic. The input variables of the model were birth weight, gestational age, 5th-minute Apgar score and inspired fraction of oxygen in newborn infants admitted to a Neonatal Intensive Care Unit of Taubaté, Southeast Brazil. The output variable was the risk of death, estimated as a percentage. Three membership functions related to birth weight, gestational age and 5th-minute Apgar score were built, as well as two functions related to the inspired fraction of oxygen; the risk presented five membership functions. The model was developed using Mamdani inference by means of Matlab® software. The model values were compared with those provided by experts and its performance was estimated by ROC curve. RESULTS: 100 newborns were included, and eight of them died. The model estimated an average possibility of death of 49.7±29.3%, and the possibility of hospital discharge was 24±17.5%. These values are different when compared by Student's t-test (p<0.001). The correlation test revealed r=0.80 and the performance of the model was 81.9%. CONCLUSIONS: This predictive, non-invasive and low-cost model showed good accuracy and can be applied in neonatal care, given its ease of use. PMID:25119746
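A reduced Mamdani-style sketch with two of the four inputs and two rules. The membership functions, rule base, and thresholds below are invented for illustration; the published model uses four inputs, five output membership functions, and a fuller rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def death_risk(birth_weight_g, apgar5):
    """Toy Mamdani inference: min as AND, clipping as implication, centroid defuzzification."""
    low_bw   = tri(birth_weight_g, 0, 800, 2500)
    ok_bw    = tri(birth_weight_g, 1500, 3200, 4500)
    low_apg  = tri(apgar5, 0, 1, 6)
    good_apg = tri(apgar5, 4, 9, 10)

    risk_axis = np.linspace(0, 100, 501)        # output: risk of death, percent
    high_risk = tri(risk_axis, 40, 80, 100)
    low_risk  = tri(risk_axis, 0, 10, 40)

    # Rule 1: low weight AND low Apgar -> high risk
    r1 = np.minimum(min(low_bw, low_apg), high_risk)
    # Rule 2: normal weight AND good Apgar -> low risk
    r2 = np.minimum(min(ok_bw, good_apg), low_risk)
    agg = np.maximum(r1, r2)                    # max aggregation of rule outputs
    if agg.sum() == 0:
        return 50.0                             # no rule fired: indeterminate
    return float((risk_axis * agg).sum() / agg.sum())  # centroid

sick = death_risk(700, 1)      # very low birth weight, Apgar 1
healthy = death_risk(3300, 9)  # term weight, Apgar 9
```

The centroid step is what turns the clipped fuzzy output sets into the single percentage risk the abstract describes.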

  13. Estimates of present and future flood risk in the conterminous United States

    NASA Astrophysics Data System (ADS)

    Wing, Oliver E. J.; Bates, Paul D.; Smith, Andrew M.; Sampson, Christopher C.; Johnson, Kris A.; Fargione, Joseph; Morefield, Philip

    2018-03-01

    Past attempts to estimate rainfall-driven flood risk across the US either have incomplete coverage, coarse resolution or use overly simplified models of the flooding process. In this paper, we use a new 30 m resolution model of the entire conterminous US with a 2D representation of flood physics to produce estimates of flood hazard, which match to within 90% accuracy the skill of local models built with detailed data. These flood depths are combined with exposure datasets of commensurate resolution to calculate current and future flood risk. Our data show that the total US population exposed to serious flooding is 2.6-3.1 times higher than previous estimates, and that nearly 41 million Americans live within the 1% annual exceedance probability floodplain (compared to only 13 million when calculated using FEMA flood maps). We find that population and GDP growth alone are expected to lead to significant future increases in exposure, and this change may be exacerbated in the future by climate change.
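Once a hazard grid exists, the exposure calculation itself reduces to overlaying rasters of commensurate resolution. A toy version with synthetic grids (the depth and population distributions below are assumptions, not the study's 30 m data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical co-registered grids: flood depth (m) for the 1% AEP event, and population counts
depth = np.clip(rng.normal(-0.5, 1.0, size=(200, 200)), 0, None)  # mostly dry cells
pop = rng.poisson(2.0, size=(200, 200)).astype(float)

exposed = pop[depth > 0].sum()   # people inside the 1% annual exceedance probability floodplain
share = exposed / pop.sum()
```

The study's headline numbers come from exactly this kind of overlay, repeated with different hazard layers (FEMA maps versus the new 2D model) and different exposure layers (current versus projected population and GDP).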

  14. Estimated population mixing by country and risk cohort for the HIV/AIDS epidemic in Western Europe

    NASA Astrophysics Data System (ADS)

    Thomas, Richard

    This paper applies a compartmental epidemic model to estimating the mixing relations that support the transfer of HIV infection between risk populations within the countries of Western Europe. To this end, a space-time epidemic model with compartments representing countries with populations specified to be at high (gay men and intravenous drug injectors ever with AIDS) and low (the remainder who are sexually active) risk is described. This model also allows for contacts between susceptible and infectious individuals by both local and international travel. This system is calibrated to recorded AIDS incidence and the best-fit solution provides estimates of variations in the rates of mixing between the compartments together with a reconstruction of the transmission pathway. This solution indicates that, for all the countries, AIDS incidence among those at low risk is expected to remain extremely small relative to their total number. A sensitivity analysis of the low risk partner acquisition rate, however, suggests this endemic state might be fragile within Europe during this century. The discussion examines the relevance of these mixing relationships for the maintenance of disease control.
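A stripped-down two-compartment sketch of the high/low risk mixing idea: an SI-type toy with invented transmission rates, not the paper's calibrated space-time model. With weak cross-group mixing, cumulative incidence in the low-risk group stays small relative to the high-risk group, echoing the abstract's conclusion.

```python
import numpy as np

# Transmission rates per year: rows = susceptible group, columns = infectious group
beta = np.array([[0.50, 0.002],     # high-risk group: strong within-group mixing
                 [0.002, 0.001]])   # low-risk group: weak mixing everywhere
N = np.array([0.05, 0.95])          # population shares: high risk, low risk
I = np.array([1e-4, 0.0])           # initial infectious fractions
S = N - I

dt, years = 0.01, 30
for _ in range(int(years / dt)):    # forward-Euler integration
    foi = beta @ (I / N)            # force of infection on each group
    new_inf = S * foi * dt
    S, I = S - new_inf, I + new_inf

high_attack = I[0] / N[0]           # cumulative incidence in the high-risk group
low_attack = I[1] / N[1]
```

The fragility noted in the sensitivity analysis corresponds to raising the low-risk partner acquisition rate, i.e. the bottom-row entries of the mixing matrix.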

  15. A forecasting method to reduce estimation bias in self-reported cell phone data.

    PubMed

    Redmayne, Mary; Smith, Euan; Abramson, Michael J

    2013-01-01

    There is ongoing concern that extended exposure to cell phone electromagnetic radiation could be related to an increased risk of negative health effects. Epidemiological studies seek to assess this risk, usually relying on participants' recalled use, but recall is notoriously poor. Our objectives were primarily to produce a forecast method, for use by such studies, to reduce estimation bias in the recalled extent of cell phone use. The method we developed, using Bayes' rule, is modelled with data we collected in a cross-sectional cluster survey exploring cell phone user-habits among New Zealand adolescents. Participants recalled their recent extent of SMS-texting and retrieved from their provider the current month's actual use-to-date. Actual use was taken as the gold standard in the analyses. Estimation bias arose from a large random error, as observed in all cell phone validation studies. We demonstrate that this seriously exaggerates upper-end forecasts of use when used in regression models. This means that calculations using a regression model will lead to underestimation of heavy-users' relative risk. Our Bayesian method substantially reduces estimation bias. In cases where other studies' data conforms to our method's requirements, application should reduce estimation bias, leading to a more accurate relative risk calculation for mid-to-heavy users.
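The Bayes'-rule correction can be illustrated with a discrete toy version. The usage bands, prior, and recall-error matrix below are invented; the paper works from its own survey and provider-billing data.

```python
import numpy as np

bands = np.array([25, 75, 150, 300, 600])          # texts/week, band midpoints
prior = np.array([0.35, 0.30, 0.20, 0.10, 0.05])   # from billing ("actual") data

# P(recalled band j | true band i): large random error, as validation studies find
like = np.array([
    [0.55, 0.25, 0.12, 0.05, 0.03],
    [0.20, 0.40, 0.25, 0.10, 0.05],
    [0.08, 0.22, 0.40, 0.20, 0.10],
    [0.04, 0.10, 0.26, 0.40, 0.20],
    [0.02, 0.06, 0.12, 0.30, 0.50],
])

def forecast(recall_band):
    """Posterior-mean true use for a respondent recalling band `recall_band`."""
    post = prior * like[:, recall_band]
    post /= post.sum()
    return float(post @ bands)

heavy = forecast(4)   # someone recalling ~600 texts/week
```

The posterior mean shrinks the heaviest recalled use toward the population distribution, which is precisely how the method tempers the upper-end exaggeration that would otherwise dilute heavy-users' relative risk.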

  16. Inter-Individual Variability in High-Throughput Risk ...

    EPA Pesticide Factsheets

    We incorporate realistic human variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most which have little or no existing TK data. Chemicals are prioritized based on model estimates of hazard and exposure, to decide which chemicals should be first in line for further study. Hazard may be estimated with in vitro HT screening assays, e.g., U.S. EPA’s ToxCast program. Bioactive ToxCast concentrations can be extrapolated to doses that produce equivalent concentrations in body tissues using a reverse TK approach in which generic TK models are parameterized with 1) chemical-specific parameters derived from in vitro measurements and predicted from chemical structure; and 2) with physiological parameters for a virtual population. Here we draw physiological parameters from realistic estimates of distributions of demographic and anthropometric quantities in the modern U.S. population, based on the most recent CDC NHANES data. A Monte Carlo approach, accounting for the correlation structure in physiological parameters, is used to estimate ToxCast equivalent doses for the most sensitive portion of the population. To quantify risk, ToxCast equivalent doses are compared to estimates of exposure rates based on Bayesian inferences drawn from NHANES urinary analyte biomonitoring data. The inclusion

  17. ASSESSMENT OF DYNAMIC PRA TECHNIQUES WITH INDUSTRY AVERAGE COMPONENT PERFORMANCE DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yadav, Vaibhav; Agarwal, Vivek; Gribok, Andrei V.

    In the nuclear industry, risk monitors are intended to provide a point-in-time estimate of the system risk given the current plant configuration. Current risk monitors are limited in that they do not properly take into account the deteriorating states of plant equipment, which are unit-specific. Current approaches to computing risk monitors use probabilistic risk assessment (PRA) techniques, but the assessment is typically a snapshot in time. Living PRA models attempt to address limitations of traditional PRA models in a limited sense by including temporary changes in plant and system configurations. However, information on plant component health is not considered. This often leaves risk monitors using living PRA models incapable of conducting evaluations with dynamic degradation scenarios evolving over time. There is a need to develop enabling approaches to solidify risk monitors to provide time- and condition-dependent risk by integrating traditional PRA models with condition monitoring and prognostic techniques. This paper presents estimation of system risk evolution over time by integrating plant risk monitoring data with dynamic PRA methods incorporating aging and degradation. Several online, non-destructive approaches have been developed for diagnosing plant component conditions in the nuclear industry, e.g., a condition indication index using vibration analysis, current signatures, and operational history [1]. In this work the component performance measures at U.S. commercial nuclear power plants (NPP) [2] are incorporated within the various dynamic PRA methodologies [3] to provide better estimates of probability of failures. Aging and degradation are modeled within the Level-1 PRA framework and applied to several failure modes of pumps, and can be extended to a range of components, viz. valves, generators, batteries, and pipes.

  18. Modeling perceptions of climatic risk in crop production.

    PubMed

    Reinmuth, Evelyn; Parker, Phillip; Aurbacher, Joachim; Högy, Petra; Dabbert, Stephan

    2017-01-01

    In agricultural production, land-use decisions are components of economic planning that result in the strategic allocation of fields. Climate variability represents an uncertainty factor in crop production. Considering yield impact, climatic influence is perceived during and evaluated at the end of crop production cycles. In practice, this information is then incorporated into planning for the upcoming season. This process contributes to attitudes toward climate-induced risk in crop production. In the literature, however, the subjective valuation of risk is modeled as a risk attitude toward variations in (monetary) outcomes. Consequently, climatic influence may be obscured by political and market influences so that risk perceptions during the production process are neglected. We present a utility concept that allows the inclusion of annual risk scores based on mid-season risk perceptions that are incorporated into field-planning decisions. This approach is exemplified and implemented for winter wheat production in the Kraichgau, a region in Southwest Germany, using the integrated bio-economic simulation model FarmActor and empirical data from the region. Survey results indicate that a profitability threshold for this crop, the level of "still-good yield" (sgy), is 69 dt ha-1 (regional mean Kraichgau sample) for a given season. This threshold governs the monitoring process and risk estimators. We tested the modeled estimators against simulation results using ten projected future weather time series for winter wheat production. The mid-season estimators generally proved to be effective. This approach can be used to improve the modeling of planning decisions by providing a more comprehensive evaluation of field-crop response to climatic changes from an economic risk point of view. 
The methodology further provides economic insight in an agrometeorological context where prices for crops or inputs are lacking, but farmer attitudes toward risk should still be included in the analysis.

  19. Modeling perceptions of climatic risk in crop production

    PubMed Central

    Parker, Phillip; Aurbacher, Joachim; Högy, Petra; Dabbert, Stephan

    2017-01-01

    In agricultural production, land-use decisions are components of economic planning that result in the strategic allocation of fields. Climate variability represents an uncertainty factor in crop production. Considering yield impact, climatic influence is perceived during and evaluated at the end of crop production cycles. In practice, this information is then incorporated into planning for the upcoming season. This process contributes to attitudes toward climate-induced risk in crop production. In the literature, however, the subjective valuation of risk is modeled as a risk attitude toward variations in (monetary) outcomes. Consequently, climatic influence may be obscured by political and market influences so that risk perceptions during the production process are neglected. We present a utility concept that allows the inclusion of annual risk scores based on mid-season risk perceptions that are incorporated into field-planning decisions. This approach is exemplified and implemented for winter wheat production in the Kraichgau, a region in Southwest Germany, using the integrated bio-economic simulation model FarmActor and empirical data from the region. Survey results indicate that a profitability threshold for this crop, the level of “still-good yield” (sgy), is 69 dt ha-1 (regional mean Kraichgau sample) for a given season. This threshold governs the monitoring process and risk estimators. We tested the modeled estimators against simulation results using ten projected future weather time series for winter wheat production. The mid-season estimators generally proved to be effective. This approach can be used to improve the modeling of planning decisions by providing a more comprehensive evaluation of field-crop response to climatic changes from an economic risk point of view. 
The methodology further provides economic insight in an agrometeorological context where prices for crops or inputs are lacking, but farmer attitudes toward risk should still be included in the analysis. PMID:28763471

  20. Household-level disparities in cancer risks from vehicular air pollution in Miami

    NASA Astrophysics Data System (ADS)

    Collins, Timothy W.; Grineski, Sara E.; Chakraborty, Jayajit

    2015-09-01

    Environmental justice (EJ) research has relied on ecological analyses of socio-demographic data from areal units to determine if particular populations are disproportionately burdened by toxic risks. This article advances quantitative EJ research by (a) examining whether statistical associations found for geographic units translate to relationships at the household level; (b) testing alternative explanations for distributional injustices never before investigated; and (c) applying a novel statistical technique appropriate for geographically-clustered data. Our study makes these advances by using generalized estimating equations to examine distributive environmental inequities in the Miami (Florida) metropolitan area, based on primary household-level survey data and census block-level cancer risk estimates of hazardous air pollutant (HAP) exposure from on-road mobile emission sources. In addition to modeling determinants of on-road HAP cancer risk among all survey participants, two subgroup models are estimated to examine whether determinants of risk differ based on disadvantaged minority (Hispanic and non-Hispanic Black) versus non-Hispanic white racial/ethnic status. Results reveal multiple determinants of risk exposure disparities. In the model including all survey participants, renter-occupancy, Hispanic and non-Hispanic black race/ethnicity, the desire to live close to work/urban services or public transportation, and higher risk perception are associated with greater on-road HAP cancer risk; the desire to live in an amenity-rich environment is associated with less risk. Divergent subgroup model results shed light on the previously unexamined role of racial/ethnic status in shaping determinants of risk exposures. 
While lower socioeconomic status and higher risk perception predict significantly greater on-road HAP cancer risk among disadvantaged minorities, the desire to live near work/urban services or public transport predict significantly greater risk among non-Hispanic whites. Findings have important implications for EJ research and practice in Miami and elsewhere.

  1. FDA-iRISK--a comparative risk assessment system for evaluating and ranking food-hazard pairs: case studies on microbial hazards.

    PubMed

    Chen, Yuhuan; Dennis, Sherri B; Hartnett, Emma; Paoli, Greg; Pouillot, Régis; Ruthman, Todd; Wilson, Margaret

    2013-03-01

    Stakeholders in the system of food safety, in particular federal agencies, need evidence-based, transparent, and rigorous approaches to estimate and compare the risk of foodborne illness from microbial and chemical hazards and the public health impact of interventions. FDA-iRISK (referred to here as iRISK), a Web-based quantitative risk assessment system, was developed to meet this need. The modeling tool enables users to assess, compare, and rank the risks posed by multiple food-hazard pairs at all stages of the food supply system, from primary production, through manufacturing and processing, to retail distribution and, ultimately, to the consumer. Using standard data entry templates, built-in mathematical functions, and Monte Carlo simulation techniques, iRISK integrates data and assumptions from seven components: the food, the hazard, the population of consumers, process models describing the introduction and fate of the hazard up to the point of consumption, consumption patterns, dose-response curves, and health effects. Beyond risk ranking, iRISK enables users to estimate and compare the impact of interventions and control measures on public health risk. iRISK provides estimates of the impact of proposed interventions in various ways, including changes in the mean risk of illness and burden of disease metrics, such as losses in disability-adjusted life years. Case studies for Listeria monocytogenes and Salmonella were developed to demonstrate the application of iRISK for the estimation of risks and the impact of interventions for microbial hazards. iRISK was made available to the public at http://irisk.foodrisk.org in October 2012.
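A schematic of the Monte Carlo farm-to-fork chain that iRISK automates. Every distribution and the dose-response parameter below are invented for illustration, not values from the Listeria or Salmonella case studies.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000   # Monte Carlo iterations (servings)

# Hypothetical process model for one food-hazard pair, in log10 CFU/g
log_conc = rng.normal(-1.0, 1.0, n)             # contamination at end of processing
log_conc += rng.uniform(0.0, 2.0, n)            # growth during retail distribution
log_conc -= rng.triangular(1.0, 3.0, 6.0, n)    # log-reduction from a cooking step
dose = 10 ** log_conc * 50                      # dose ingested with a 50 g serving

r = 2e-7                                        # exponential dose-response parameter
p_ill = 1.0 - np.exp(-r * dose)                 # probability of illness per serving
mean_risk = p_ill.mean()

# Candidate intervention: an extra 1-log reduction at processing
p_ill_intervention = 1.0 - np.exp(-r * dose / 10.0)
```

Ranking food-hazard pairs then amounts to repeating this chain per pair and comparing `mean_risk` (or a burden metric such as DALYs) with and without each intervention.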

  2. A framework for global river flood risk assessment

    NASA Astrophysics Data System (ADS)

    Winsemius, H. C.; Van Beek, L. P. H.; Bouwman, A.; Ward, P. J.; Jongman, B.

    2012-04-01

    There is an increasing need for strategic global assessments of flood risks. Such assessments may be required by: (a) International Financing Institutes and Disaster Management Agencies to evaluate where, when, and which investments in flood risk mitigation are most required; (b) (re-)insurers, who need to determine their required coverage capital; and (c) large companies to account for risks of regional investments. In this contribution, we propose a framework for global river flood risk assessment. The framework combines coarse-resolution hazard probability distributions, derived from global hydrological model runs (typically about 0.5 degree resolution), with high resolution estimates of exposure indicators. The high resolution is required because floods typically occur at a much smaller scale than the typical resolution of global hydrological models, and exposure indicators such as population, land use and economic value generally are strongly variable in space and time. The framework therefore estimates hazard at a high resolution (~1 km2) by using a) global forcing data sets of the current (or in scenario mode, future) climate; b) a global hydrological model; c) a global flood routing model, and d) importantly, a flood spatial downscaling routine. This results in probability distributions of annual flood extremes as an indicator of flood hazard, at the appropriate resolution. A second component of the framework combines the hazard probability distribution with classical flood impact models (e.g. damage, affected GDP, affected population) to establish indicators for flood risk. The framework can be applied with a large number of datasets and models and sensitivities of such choices can be evaluated by the user. The framework is applied using the global hydrological model PCR-GLOBWB, combined with a global flood routing model. 
Downscaling of the hazard probability distributions to 1 km2 resolution is performed with a new downscaling algorithm, applied on a number of target regions. We demonstrate the use of impact models in these regions based on global GDP, population, and land use maps. In this application, we show sensitivities of the estimated risks with regard to the use of different climate input datasets, decisions made in the downscaling algorithm, and different approaches to establish distributed estimates of GDP and asset exposure to flooding.
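Once the hazard probability distribution is combined with an impact model, risk at a location collapses to an integral of damage over annual exceedance probability. A hypothetical worked example (return periods and damages invented):

```python
import numpy as np

return_periods = np.array([2.0, 5.0, 10.0, 50.0, 100.0, 500.0])  # years
damage = np.array([0.0, 1.2, 3.5, 12.0, 20.0, 45.0])             # damage per event, M$
aep = 1.0 / return_periods                                       # annual exceedance probability

# Sort by increasing probability, then trapezoidal integration of the risk curve
order = np.argsort(aep)
p, d = aep[order], damage[order]
ead = float(np.sum(np.diff(p) * (d[1:] + d[:-1]) / 2.0))         # expected annual damage, M$/yr
```

Swapping in damage curves from different climate forcings or downscaling choices, as the abstract describes, changes `damage` and hence the integrated risk, which is how the sensitivity comparisons are made.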

  3. A probabilistic assessment of health risks associated with short-term exposure to tropospheric ozone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitfield, R.G; Biller, W.F.; Jusko, M.J.

    1996-06-01

    The work described in this report is part of a larger risk assessment sponsored by the U.S. Environmental Protection Agency. Earlier efforts developed exposure-response relationships for acute health effects among populations engaged in heavy exertion. Those efforts also developed a probabilistic national ambient air quality standards exposure model and a general methodology for integrating probabilistic exposure-response relationships and exposure estimates to calculate overall risk results. Recently published data make it possible to model additional health endpoints (for exposure at moderate exertion), including hospital admissions. New air quality and exposure estimates for alternative national ambient air quality standards for ozone are combined with exposure-response models to produce the risk results for hospital admissions and acute health effects. Sample results explain the methodology and introduce risk output formats.

  4. Reentry survivability modeling

    NASA Astrophysics Data System (ADS)

    Fudge, Michael L.; Maher, Robert L.

    1997-10-01

    Statistical methods for expressing the impact risk posed to space systems in general [and the International Space Station (ISS) in particular] by other resident space objects have been examined. One of the findings of this investigation is that there are legitimate physical modeling reasons for the common statistical expression of the collision risk. A combination of statistical methods and physical modeling is also used to express the impact risk posed by re-entering space systems to objects of interest (e.g., people and property) on Earth. One of the largest uncertainties in expressing this risk is the estimation of the material that survives reentry to impact Earth's surface. This point was recently demonstrated in dramatic fashion by the impact of an intact expendable launch vehicle (ELV) upper stage near a private residence in the continental United States. Since approximately half of the missions supporting ISS will utilize ELVs, it is appropriate to examine the methods used to estimate the amount and physical characteristics of ELV debris surviving reentry to impact Earth's surface. This paper examines reentry survivability estimation methodology, including the specific methodology used by Kaman Sciences' 'Survive' model. Comparisons between empirical results (observations of objects recovered on Earth after surviving reentry) and Survive estimates are presented for selected upper stage or spacecraft components and a Delta launch vehicle second stage.

  5. Long working hours may increase risk of coronary heart disease.

    PubMed

    Kang, Mo-Yeol; Cho, Soo-Hun; Yoo, Min-Sang; Kim, Taeshik; Hong, Yun-Chul

    2014-11-01

    To evaluate the association between long working hours and risk of coronary heart disease (CHD) estimated by the Framingham risk score (FRS) in Korean adults. This study evaluated adult participants in the Korean National Health and Nutrition Examination Survey IV (2007-2009). After inclusion and exclusion criteria were applied, the final sample size for this study model was 8,350. Subjects were asked about working hours and health status. Participants also completed physical examinations and the biochemical measurements necessary for estimation of the FRS. Multiple logistic regression was conducted to investigate the association between working hours and 10-year risk of CHD estimated by the FRS. Compared to those who worked 31-40 hr, significantly higher 10-year risk was estimated among subjects working longer hours. As working hours increased, the odds ratio (OR) for falling in the upper 10 percent of estimated 10-year CHD risk increased up to 1.94. Long working hours are significantly related to risk of coronary heart disease. © 2014 Wiley Periodicals, Inc.
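    For readers unfamiliar with the effect measure reported here, a crude odds ratio with a Wald 95% confidence interval can be computed from a 2x2 table. This is only a sketch with invented counts; the study itself used multiple logistic regression, which adjusts for covariates beyond this unadjusted calculation.

```python
import math

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Crude OR from a 2x2 table, with a Wald 95% CI on the log-odds scale."""
    or_ = (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)
    se_log = math.sqrt(sum(1.0 / n for n in (exposed_cases, exposed_noncases,
                                             unexposed_cases, unexposed_noncases)))
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical counts: high-risk outcome among long-hours vs. 31-40 hr workers.
or_, ci = odds_ratio(97, 403, 50, 450)
```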

  6. Using a High-Resolution Ensemble Modeling Method to Inform Risk-Based Decision-Making at Taylor Park Dam, Colorado

    NASA Astrophysics Data System (ADS)

    Mueller, M.; Mahoney, K. M.; Holman, K. D.

    2015-12-01

    The Bureau of Reclamation (Reclamation) is responsible for the safety of Taylor Park Dam, located in central Colorado at an elevation of 9300 feet. A key aspect of dam safety is anticipating extreme precipitation, runoff and the associated inflow of water to the reservoir within a probabilistic framework for risk analyses. The Cooperative Institute for Research in Environmental Sciences (CIRES) has partnered with Reclamation to improve understanding and estimation of precipitation in the western United States, including the Taylor Park watershed. A significant challenge is that Taylor Park Dam is located in a relatively data-sparse region, surrounded by mountains exceeding 12,000 feet. To better estimate heavy precipitation events in this basin, a high-resolution modeling approach is used. The Weather Research and Forecasting (WRF) model is employed to simulate events that have produced observed peaks in streamflow at the location of interest. Importantly, an ensemble of model simulations is run for each event so that uncertainty bounds (i.e., forecast error) may be provided and the model outputs used more effectively in Reclamation's risk assessment framework. Model estimates of precipitation (and the uncertainty thereof) are then used in rainfall-runoff models to determine the probability of inflows to the reservoir for use in Reclamation's dam safety risk analyses.
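    The ensemble step reduces, at its simplest, to summarizing per-event precipitation across members with a central estimate and quantile bounds. A minimal sketch with invented values (a real WRF ensemble would supply them):

```python
# Summarize an ensemble of event-precipitation simulations (illustrative values).
def ensemble_bounds(members, lo_q=0.1, hi_q=0.9):
    """Return (mean, lower, upper) using linearly interpolated quantiles."""
    xs = sorted(members)
    n = len(xs)
    def quantile(q):
        pos = q * (n - 1)
        i = int(pos)
        frac = pos - i
        return xs[i] if i + 1 >= n else xs[i] * (1 - frac) + xs[i + 1] * frac
    return sum(xs) / n, quantile(lo_q), quantile(hi_q)

mean, lo, hi = ensemble_bounds([42.0, 55.0, 48.0, 61.0, 39.0])  # mm per event
```

    The (lo, hi) interval is what a risk framework can carry forward as forecast error instead of a single deterministic value.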

  7. Costing the satellite power system

    NASA Technical Reports Server (NTRS)

    Hazelrigg, G. A., Jr.

    1978-01-01

    The paper presents a methodology for satellite power system costing, places approximate limits on the accuracy possible in cost estimates made at this time, and outlines the use of probabilistic cost information in support of the decision-making process. Reasons for using probabilistic costing or risk analysis procedures instead of standard deterministic costing procedures are considered. Components of cost, cost estimating relationships, grass-roots costing, and risk analysis are discussed. Risk analysis using a Monte Carlo simulation model is used to estimate future costs.
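    The Monte Carlo costing idea can be sketched as follows: each cost component gets a (low, most likely, high) triangular distribution, and summing random draws yields a distribution of total cost rather than a single deterministic figure. The component figures below are placeholders, not satellite power system estimates.

```python
import random

def simulate_total_cost(components, n=20000, seed=1):
    """components: [(low, most_likely, high)]; returns sorted simulated totals."""
    rng = random.Random(seed)
    return sorted(sum(rng.triangular(lo, hi, mode) for lo, mode, hi in components)
                  for _ in range(n))

components = [(8.0, 10.0, 15.0), (3.0, 4.0, 7.0), (1.0, 2.0, 2.5)]  # illustrative $B
totals = simulate_total_cost(components)
median = totals[len(totals) // 2]
p90 = totals[int(0.9 * len(totals))]   # 90th-percentile cost for risk reporting
```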

  8. Testing prediction capabilities of an 131I terrestrial transport model by using measurements collected at the Hanford nuclear facility.

    PubMed

    Apostoaei, A Iulian

    2005-05-01

    A model describing transport of 131I in the environment was developed by SENES Oak Ridge, Inc., for assessment of radiation doses and excess lifetime risk from 131I atmospheric releases from the Oak Ridge Reservation in Oak Ridge, Tennessee, and from the Idaho National Engineering and Environmental Laboratory in southeast Idaho. This paper describes the results of an exercise designed to test the reliability of this model and to identify the main sources of uncertainty in the doses and risks it estimates. The testing of the model was based on materials published by the International Atomic Energy Agency BIOMASS program, specifically environmental data collected after the release into the atmosphere of 63 curies of 131I during 2-5 September 1963, following an accident at the Hanford PUREX Chemical Separations Plant in Hanford, Washington. Measurements of activity in air, vegetation, and milk were collected in nine counties around Hanford during the first couple of months after the accident. The activity of 131I in the thyroid glands of two children was measured 47 d after the accident. The model developed by SENES Oak Ridge, Inc., was used to estimate concentrations of 131I in environmental media, thyroid doses for the general population, and the activity of 131I in the thyroid glands of the two children. Predicted concentrations of 131I in pasture grass and milk and thyroid doses were compared with similar estimates produced by other modelers. The SENES model was also used to estimate excess lifetime risk of thyroid cancer due to the September 1963 releases of 131I from Hanford. The SENES model was first calibrated and then applied to all locations of interest around Hanford without fitting the model parameters to a given location. Predictions showed that the SENES model reproduces satisfactorily the time-dependent and the time-integrated measured concentrations in vegetation and milk, and provides reliable estimates of 131I activity in the thyroids of children.
The SENES model generated concentrations of 131I closer to the observed concentrations than did the predictions produced with other models. The inter-model comparison showed that the variation of thyroid doses among all participating models (SENES model included) was a factor of 3 for the general population, but a factor of 10 for the two studied children. Unlike other models, the SENES model allows a complete analysis of uncertainties in every predicted quantity, including estimated thyroid doses and risk of thyroid cancer. The uncertainties in the risk-per-unit-dose and the dose-per-unit-intake coefficients are major contributors to the uncertainty in the estimated lifetime risk and thyroid dose, respectively. The largest contributors to the uncertainty in the estimated concentration in milk are the feed-to-milk transfer factor (F(m)), the dry deposition velocity (V(d)), and the mass interception factor (r/Y)dry for the elemental form of iodine (I2). Exposure to the 1963 PUREX/Hanford accident produced low doses and risks for people living at the studied locations. The upper 97.5th percentile of the excess lifetime risk of thyroid cancer for the most extreme situations is about 10(-4). Measurements in pasture grass and milk at all locations around Hanford indicate a very low transfer of 131I from pasture to cow's milk (e.g., a feed-to-milk transfer coefficient, F(m), for commercial cows of about 0.0022 d L(-1)). These values are towards the low end of F(m) values measured elsewhere, and they are low compared to the F(m) values used in other dose reconstruction studies, including the Hanford Environmental Dose Reconstruction.

  9. A multilevel model for cardiovascular disease prevalence in the US and its application to micro area prevalence estimates.

    PubMed

    Congdon, Peter

    2009-01-30

    Estimates of disease prevalence for small areas are increasingly required for the allocation of health funds according to local need. Both individual level and geographic risk factors are likely to be relevant to explaining prevalence variations, and in turn relevant to the procedure for small area prevalence estimation. Prevalence estimates are of particular importance for major chronic illnesses such as cardiovascular disease. A multilevel prevalence model for cardiovascular outcomes is proposed that incorporates both survey information on patient risk factors and the effects of geographic location. The model is applied to derive micro area prevalence estimates, specifically estimates of cardiovascular disease for Zip Code Tabulation Areas in the USA. The model incorporates prevalence differentials by age, sex, ethnicity and educational attainment from the 2005 Behavioral Risk Factor Surveillance System survey. Influences of geographic context are modelled at both county and state level, with the county effects relating to poverty and urbanity. State level influences are modelled using a random effects approach that allows both for spatial correlation and spatial isolates. To assess the importance of geographic variables, three types of model are compared: a model with person level variables only; a model with geographic effects that do not interact with person attributes; and a full model, allowing for state level random effects that differ by ethnicity. There is clear evidence that geographic effects improve statistical fit. Geographic variations in disease prevalence partly reflect the demographic composition of area populations. However, prevalence variations may also show distinct geographic 'contextual' effects. The present study demonstrates by formal modelling methods that improved explanation is obtained by allowing for distinct geographic effects (for counties and states) and for interaction between geographic and person variables. 
Thus an appropriate methodology to estimate prevalence at small area level should include geographic effects as well as person level demographic variables.

  10. A multilevel model for cardiovascular disease prevalence in the US and its application to micro area prevalence estimates

    PubMed Central

    Congdon, Peter

    2009-01-01

    Background Estimates of disease prevalence for small areas are increasingly required for the allocation of health funds according to local need. Both individual level and geographic risk factors are likely to be relevant to explaining prevalence variations, and in turn relevant to the procedure for small area prevalence estimation. Prevalence estimates are of particular importance for major chronic illnesses such as cardiovascular disease. Methods A multilevel prevalence model for cardiovascular outcomes is proposed that incorporates both survey information on patient risk factors and the effects of geographic location. The model is applied to derive micro area prevalence estimates, specifically estimates of cardiovascular disease for Zip Code Tabulation Areas in the USA. The model incorporates prevalence differentials by age, sex, ethnicity and educational attainment from the 2005 Behavioral Risk Factor Surveillance System survey. Influences of geographic context are modelled at both county and state level, with the county effects relating to poverty and urbanity. State level influences are modelled using a random effects approach that allows both for spatial correlation and spatial isolates. Results To assess the importance of geographic variables, three types of model are compared: a model with person level variables only; a model with geographic effects that do not interact with person attributes; and a full model, allowing for state level random effects that differ by ethnicity. There is clear evidence that geographic effects improve statistical fit. Conclusion Geographic variations in disease prevalence partly reflect the demographic composition of area populations. However, prevalence variations may also show distinct geographic 'contextual' effects. 
The present study demonstrates by formal modelling methods that improved explanation is obtained by allowing for distinct geographic effects (for counties and states) and for interaction between geographic and person variables. Thus an appropriate methodology to estimate prevalence at small area level should include geographic effects as well as person level demographic variables. PMID:19183458

  11. A screening-level modeling approach to estimate nitrogen ...

    EPA Pesticide Factsheets

    This paper presents a screening-level modeling approach that can be used to rapidly estimate nutrient loading and assess the risk of numerical nutrient standard exceedance in surface waters, leading to potential classification as impaired for designated use. It can also be used to explore best management practice (BMP) implementation to reduce loading. The modeling framework uses a hybrid statistical and process-based approach to estimate sources of pollutants and their transport and decay in the terrestrial and aquatic parts of watersheds. The framework is developed in the ArcGIS environment and is based on the total maximum daily load (TMDL) balance model. Nitrogen (N) is currently addressed in the framework, referred to as WQM-TMDL-N. Loading for each catchment includes non-point sources (NPS) and point sources (PS). NPS loading is estimated using export coefficient or event mean concentration methods depending on the temporal scale, i.e., annual or daily. Loading from atmospheric deposition is also included. The probability of a nutrient load exceeding a target load is evaluated using probabilistic risk assessment, by including the uncertainty associated with export coefficients of various land uses. The computed risk data can be visualized as spatial maps which show the load exceedance probability for all stream segments. In an application of this modeling approach to the Tippecanoe River watershed in Indiana, USA, total nitrogen (TN) loading and risk of standard exceedance ...
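    The load-exceedance probability described above can be sketched with a small Monte Carlo over uncertain export coefficients. The land-use areas, coefficient ranges, and target load below are invented placeholders, and the uniform distributions are an assumption; the framework's actual distributions may differ.

```python
import random

def exceedance_probability(landuse, target_load, n=20000, seed=7):
    """landuse: {name: (area_ha, (ec_low, ec_high))} with export coefficients
    in kg N/ha/yr, sampled uniformly; returns P(annual load > target_load)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        load = sum(area * rng.uniform(lo, hi) for area, (lo, hi) in landuse.values())
        hits += load > target_load
    return hits / n

landuse = {"row_crop": (5000.0, (10.0, 30.0)),
           "pasture":  (2000.0, (2.0, 8.0)),
           "forest":   (3000.0, (1.0, 3.0))}
p_exceed = exceedance_probability(landuse, target_load=130000.0)  # kg N/yr
```

    Mapping `p_exceed` per catchment is what produces the spatial exceedance-risk maps mentioned in the abstract.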

  12. Security Events and Vulnerability Data for Cybersecurity Risk Estimation.

    PubMed

    Allodi, Luca; Massacci, Fabio

    2017-08-01

    Current industry standards for estimating cybersecurity risk are based on qualitative risk matrices as opposed to quantitative risk estimates. In contrast, risk assessment in most other industry sectors aims at deriving quantitative risk estimates (e.g., Basel II in finance). This article presents a model and methodology to leverage the large amount of data available from the IT infrastructure of an organization's security operation center to quantitatively estimate the probability of attack. Our methodology specifically addresses untargeted attacks delivered by automatic tools that make up the vast majority of attacks in the wild against users and organizations. We consider two-stage attacks whereby the attacker first breaches an Internet-facing system, and then escalates the attack to internal systems by exploiting local vulnerabilities in the target. Our methodology factors in the power of the attacker as the number of "weaponized" vulnerabilities he/she can exploit, and can be adjusted to match the risk appetite of the organization. We illustrate our methodology by using data from a large financial institution, and discuss the significant mismatch between traditional qualitative risk assessments and our quantitative approach. © 2017 Society for Risk Analysis.
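    One simple reading of the two-stage structure (a sketch of the idea, not the authors' published model): each stage succeeds if at least one of the attacker's weaponized vulnerabilities applies, and the overall attack requires both stages. Per-vulnerability probabilities and the independence assumption are illustrative.

```python
def stage_success(per_vuln_probs):
    """P(at least one weaponized vulnerability succeeds), assuming independence."""
    p_all_fail = 1.0
    for p in per_vuln_probs:
        p_all_fail *= 1.0 - p
    return 1.0 - p_all_fail

def attack_probability(perimeter_vulns, local_vulns):
    """Two-stage attack: breach the perimeter, then escalate locally."""
    return stage_success(perimeter_vulns) * stage_success(local_vulns)

# Hypothetical per-vulnerability success probabilities.
p = attack_probability([0.10, 0.05], [0.20])  # 0.145 * 0.2 = 0.029
```

    Growing the vulnerability lists models a more powerful attacker, which is one way the estimate can be tuned to an organization's risk appetite.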

  13. An Integrated and Interdisciplinary Model for Predicting the Risk of Injury and Death in Future Earthquakes.

    PubMed

    Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor

    2016-01-01

    A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and to the built environment in order to improve communities' preparedness and response capabilities and to mitigate future consequences. An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model's algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher proportions of at-risk populations were found more vulnerable in this regard. The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to the occurrence of an earthquake could lead to a possible decrease in the expected number of casualties.
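    The mechanics of folding epidemiological effect measures into a loss model can be sketched on the odds scale: odds ratios multiply the baseline odds, which are then converted back to a probability. The baseline rate and ORs below are placeholders, not the study's fitted values.

```python
def adjusted_probability(baseline_p, odds_ratios):
    """Modulate a baseline casualty probability by risk-factor odds ratios
    via the logistic (odds) scale."""
    odds = baseline_p / (1.0 - baseline_p)
    for or_ in odds_ratios:
        odds *= or_
    return odds / (1.0 + odds)

# e.g. elderly (hypothetical OR 2.0) with low socioeconomic status (OR 1.5)
p_adj = adjusted_probability(0.02, [2.0, 1.5])
```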

  14. Qualitative risk assessment in a data-scarce environment: a model to assess the impact of control measures on spread of African Swine Fever.

    PubMed

    Wieland, Barbara; Dhollander, Sofie; Salman, Mo; Koenen, Frank

    2011-04-01

    In the absence of data, qualitative risk assessment frameworks have proved useful to assess risks associated with animal health diseases. As part of a scientific opinion for the European Commission (EC) on African Swine Fever (ASF), a working group of the European Food Safety Authority (EFSA) assessed the risk of ASF remaining endemic in Trans Caucasus Countries (TCC) and the Russian Federation (RF) and the risk of ASF becoming endemic in the EU if the disease were introduced. The aim was to develop a tool to evaluate how current control or preventive measures mitigate the risk of spread, and to give decision makers the means to review how strengthening of surveillance and control measures would further mitigate that risk. Based on a generic model outlining disease introduction, spread and endemicity in a region, the impact of risk mitigation measures on spread of disease was assessed for specific risk questions. The resulting hierarchical models consisted of key steps containing several sub-steps. For each step of the risk pathways, risk estimates were determined by the expert group based on existing data or through expert opinion elicitation. Risk estimates were combined using two different combination matrices, one to combine estimates of independent steps and one to combine conditional probabilities. The qualitative risk assessment indicated a moderate risk that ASF will remain endemic in currently affected areas in the TCC and RF and a high risk of spread to currently unaffected areas. If introduced into the EU, ASF is likely to be controlled effectively in the production sector with high or limited biosecurity. In the free-range production sector, however, there is a moderate risk of ASF becoming endemic due to wild boar contact, non-compliance with animal movement bans, and difficult access to all individual pigs upon implementation of control measures.
This study demonstrated the advantages of a systematic framework to assist an expert panel to carry out a risk assessment as it helped experts to disassociate steps in the risk pathway and to overcome preconceived notions of final risk estimates. The approach presented here shows how a qualitative risk assessment framework can address animal diseases with complexity in their spread and control measures and how transparency of the resulting estimates was achieved. Copyright © 2011 Elsevier B.V. All rights reserved.
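    The combination-matrix idea can be illustrated with ordered qualitative levels. The rule below (take the lower level when two sequential steps must both occur) is one plausible convention for illustration only; the actual EFSA matrices differ in detail and should be consulted in practice.

```python
# Illustrative qualitative combination rule, not EFSA's published matrices.
LEVELS = ["negligible", "low", "moderate", "high"]

def combine_sequential(a, b):
    """Combine estimates of two steps that must both occur for the pathway to
    complete: the combined risk cannot exceed the lower of the two levels."""
    return LEVELS[min(LEVELS.index(a), LEVELS.index(b))]

risk = combine_sequential("high", "moderate")
```

    Encoding the rule as an explicit lookup is what gives such frameworks their transparency: every combined estimate can be traced back through the matrix.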

  15. Iowa radon leukaemia study: a hierarchical population risk model for spatially correlated exposure measured with error.

    PubMed

    Smith, Brian J; Zhang, Lixun; Field, R William

    2007-11-10

    This paper presents a Bayesian model that allows for the joint prediction of county-average radon levels and estimation of the associated leukaemia risk. The methods are motivated by radon data from an epidemiologic study of residential radon in Iowa that include 2726 outdoor and indoor measurements. Prediction of county-average radon is based on a geostatistical model for the radon data which assumes an underlying continuous spatial process. In the radon model, we account for uncertainties due to incomplete spatial coverage, spatial variability, characteristic differences between homes, and detector measurement error. The predicted radon averages are, in turn, included as a covariate in Poisson models for incident cases of acute lymphocytic (ALL), acute myelogenous (AML), chronic lymphocytic (CLL), and chronic myelogenous (CML) leukaemias reported to the Iowa cancer registry from 1973 to 2002. Since radon and leukaemia risk are modelled simultaneously in our approach, the resulting risk estimates accurately reflect uncertainties in the predicted radon exposure covariate. Posterior mean (95 per cent Bayesian credible interval) estimates of the relative risk associated with a 1 pCi/L increase in radon for ALL, AML, CLL, and CML are 0.91 (0.78-1.03), 1.01 (0.92-1.12), 1.06 (0.96-1.16), and 1.12 (0.98-1.27), respectively. Copyright 2007 John Wiley & Sons, Ltd.
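    As a point of arithmetic only: under a log-linear dose-response (a common assumption for such Poisson models, not stated verbatim here), a relative risk per 1 pCi/L scales multiplicatively with exposure, so the posterior mean RR of 1.12 for CML would imply, at a county average of 3 pCi/L:

```python
# Illustrative scaling of a per-unit relative risk under a log-linear model.
rr_per_unit = 1.12          # posterior mean RR per 1 pCi/L (CML, from the abstract)
rr_3pci = rr_per_unit ** 3  # implied RR at 3 pCi/L average radon
```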

  16. Benchmark dose analysis via nonparametric regression modeling

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits’ small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty. PMID:23683057
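    The nonparametric building block here, pool-adjacent-violators (PAVA), fits a monotone dose-response to quantal data; a BMD can then be read off by inverse interpolation at a benchmark response. This sketch uses invented data and the added-risk BMR definition; the paper's bootstrap confidence limits are not reproduced.

```python
def pava(y, w):
    """Weighted isotonic (non-decreasing) regression via pool-adjacent-violators."""
    blocks = [[v, wt, 1] for v, wt in zip(y, w)]  # [mean, weight, count]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0] + 1e-12:
            m0, w0, n0 = blocks[i]
            m1, w1, n1 = blocks[i + 1]
            blocks[i:i + 2] = [[(m0 * w0 + m1 * w1) / (w0 + w1), w0 + w1, n0 + n1]]
            i = max(i - 1, 0)  # pooling may create a new violation to the left
        else:
            i += 1
    out = []
    for m, _, n in blocks:
        out.extend([m] * n)
    return out

def bmd(doses, fit, bmr=0.10):
    """Dose where fitted added risk over background reaches the BMR (linear interp.)."""
    target = fit[0] + bmr
    pts = list(zip(doses, fit))
    for (d0, f0), (d1, f1) in zip(pts, pts[1:]):
        if f1 >= target:
            return d0 + (d1 - d0) * (target - f0) / (f1 - f0)
    return None

doses = [0.0, 1.0, 2.0, 4.0]
resp = [0.05, 0.02, 0.30, 0.60]     # observed response proportions (invented)
fit = pava(resp, [50, 50, 50, 50])  # non-monotone start gets pooled to 0.035
bmd10 = bmd(doses, fit)
```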

  17. Which population groups should be targeted for cardiovascular prevention? A modelling study based on the Norwegian Hordaland Health Study (HUSK).

    PubMed

    Brekke, Mette; Rekdal, Magne; Straand, Jørund

    2007-06-01

    To assess the level of cardiovascular risk factors in a non-selected, middle-aged population. To estimate the proportion targeted for risk intervention according to present guidelines and according to different cut-off levels for two risk algorithms. Population survey, modelling study. The Norwegian Hordaland Health Study (HUSK) 1997-99. A total of 22 289 persons born in 1950-57. Own and relatives' cardiovascular morbidity, antihypertensive and lipid-lowering treatment, smoking, blood pressure, cholesterol. Framingham and Systematic Coronary Risk Evaluation (SCORE) algorithms. The European guidelines on CVD prevention in clinical practice were applied to estimate the size of risk groups. Some 9.7% of men and 7.6% of women had CVD, diabetes mellitus, a high level of one specific risk factor, or received lipid-lowering or antihypertensive treatment. Applying a SCORE (60 years) cut-off level at 5% to the rest of the population selected 52.4% of men and 0.8% of women into a primary prevention group, while a cut-off level at 8% included 22.0% and 0.06% respectively. A cut-off level for the Framingham score (60 years) of 20% selected 43.6% of men and 4.7% of women, while a cut-off level of 25% selected 25.6% of men and 1.8% of women. The findings illustrate how choices regarding risk estimation strongly affect the size of the target population. Modelling studies are important when preparing guidelines, to address implications for resource allocation and risk of medicalization. The population share to be targeted for primary prevention ought to be estimated, including the impact of various cut-off points for risk algorithms on the size of the risk population.
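    The cut-off arithmetic behind these percentages is simply the fraction of individual risk scores at or above the chosen threshold. The scores below are invented to show the mechanics:

```python
def share_selected(risk_scores, cutoff):
    """Fraction of the population at or above the risk cut-off."""
    return sum(r >= cutoff for r in risk_scores) / len(risk_scores)

scores = [0.02, 0.04, 0.05, 0.07, 0.09, 0.12, 0.20, 0.26]  # hypothetical 10-yr risks
share_5 = share_selected(scores, 0.05)  # 6/8 selected at a 5% cut-off
share_8 = share_selected(scores, 0.08)  # 4/8 selected at an 8% cut-off
```

    Moving the cut-off a few percentage points changes the selected share substantially, which is exactly the sensitivity the study reports.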

  18. Multi crop model climate risk country-level management design: case study on the Tanzanian maize production system

    NASA Astrophysics Data System (ADS)

    Chavez, E.

    2015-12-01

    Future climate projections indicate that a very serious consequence of post-industrial anthropogenic global warming is the likely greater frequency and intensity of extreme hydrometeorological events such as heat waves, droughts, storms, and floods. The design of national and international policies targeted at building more resilient and environmentally sustainable food systems needs to rely on access to robust and reliable data, which is largely absent. In this context, improving the modelling of current and future agricultural production losses using the unifying language of risk is paramount. In this study, we use a methodology that allows the integration of the current understanding of the various interacting systems of climate, agro-environment, crops, and the economy to determine short- to long-term risk estimates of crop production loss in different environmental, climate, and adaptation scenarios. This methodology is applied to Tanzania to assess optimum risk reduction and maize production increase paths in different climate scenarios. The simulations carried out use inputs from three different crop models (DSSAT, APSIM, WRSI) run in different technological scenarios, thus allowing estimation of the crop-model-driven bias in risk exposure estimates. The results obtained also allow distinguishing different region-specific optimum climate risk reduction policies subject to historical as well as RCP2.6 and RCP8.5 climate scenarios. The region-specific risk profiles obtained provide a simple framework to determine cost-effective risk management policies for Tanzania and allow investments in risk reduction and risk transfer to be combined optimally.

  19. Predicted Risk of Radiation-Induced Cancers After Involved Field and Involved Node Radiotherapy With or Without Intensity Modulation for Early-Stage Hodgkin Lymphoma in Female Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, Damien C., E-mail: damien.weber@unige.ch; Johanson, Safora; Peguret, Nicolas

    2011-10-01

    Purpose: To assess the excess relative risk (ERR) of radiation-induced cancers (RIC) in female patients with Hodgkin lymphoma (HL) treated with conformal (3DCRT), intensity-modulated (IMRT), or volumetric modulated arc (RA) radiation therapy. Methods and Materials: Plans for 10 early-stage HL female patients were computed for 3DCRT, IMRT, and RA with involved-field RT (IFRT) and involved-node RT (INRT) radiation fields. Organ-at-risk dose-volume histograms were computed and inter-compared for IFRT vs. INRT and 3DCRT vs. IMRT/RA, respectively. The ERR for cancer induction in breasts, lungs, and thyroid was estimated using both linear and nonlinear models. Results: The mean estimated ERRs for breast, lung, and thyroid were significantly lower (p < 0.01) with INRT than with IFRT planning, regardless of the radiation delivery technique used, assuming a linear dose-risk relationship. We found that using the nonlinear model, the mean ERR values were significantly (p < 0.01) increased with IMRT or RA compared to those with 3DCRT planning for the breast, lung, and thyroid, using an IFRT paradigm. With INRT planning, IMRT or RA increased the risk of RIC for lung and thyroid only. Conclusions: In this comparative planning study, using a nonlinear dose-risk model, IMRT or RA increased the estimated risk of RIC for breast, lung, and thyroid for HL female patients. This study also suggests that INRT planning, compared to IFRT planning, may reduce the ERR of RIC when risk is predicted using a linear model. Observing the opposite effect with a nonlinear model, however, questions the validity of these biologically parameterized models.
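    The contrast between the two dose-risk shapes can be sketched over dose-volume histogram bins: a linear model grows with dose, while a linear-exponential (bell-shaped) model damps risk at high doses to represent cell sterilization. The coefficients `alpha` and `beta` and the DVH below are placeholders, not fitted values from the study.

```python
import math

def err_linear(dvh, alpha=0.5):
    """dvh: [(dose_Gy, volume_fraction)]; linear model: ERR = alpha * mean dose."""
    return alpha * sum(d * v for d, v in dvh)

def err_linear_exponential(dvh, alpha=0.5, beta=0.05):
    """Linear-exponential model: alpha * d * exp(-beta * d), volume-weighted."""
    return sum(alpha * d * math.exp(-beta * d) * v for d, v in dvh)

dvh = [(2.0, 0.6), (10.0, 0.3), (30.0, 0.1)]  # illustrative organ DVH bins
lin = err_linear(dvh)                  # 0.5 * (1.2 + 3.0 + 3.0) = 3.6
nonlin = err_linear_exponential(dvh)   # smaller: high-dose bins are damped
```

    Because the two shapes weight low-dose and high-dose volumes differently, techniques that spread low doses widely (IMRT/RA) can rank differently under each model, which is the tension the abstract highlights.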

  20. Adaptation of a Biomarker-Based Sepsis Mortality Risk Stratification Tool for Pediatric Acute Respiratory Distress Syndrome.

    PubMed

    Yehya, Nadir; Wong, Hector R

    2018-01-01

    The original Pediatric Sepsis Biomarker Risk Model and revised (Pediatric Sepsis Biomarker Risk Model-II) biomarker-based risk prediction models have demonstrated utility for estimating baseline 28-day mortality risk in pediatric sepsis. Given the paucity of prediction tools in pediatric acute respiratory distress syndrome, and given the overlapping pathophysiology between sepsis and acute respiratory distress syndrome, we tested the utility of Pediatric Sepsis Biomarker Risk Model and Pediatric Sepsis Biomarker Risk Model-II for mortality prediction in a cohort of pediatric acute respiratory distress syndrome, with an a priori plan to revise the model if these existing models performed poorly. Prospective observational cohort study. University affiliated PICU. Mechanically ventilated children with acute respiratory distress syndrome. Blood collection within 24 hours of acute respiratory distress syndrome onset and biomarker measurements. In 152 children with acute respiratory distress syndrome, Pediatric Sepsis Biomarker Risk Model performed poorly and Pediatric Sepsis Biomarker Risk Model-II performed modestly (areas under the receiver operating characteristic curve of 0.61 and 0.76, respectively). Therefore, we randomly selected 80% of the cohort (n = 122) to rederive a risk prediction model for pediatric acute respiratory distress syndrome. We used classification and regression tree methodology, considering the Pediatric Sepsis Biomarker Risk Model biomarkers in addition to variables relevant to acute respiratory distress syndrome. The final model comprised three biomarkers and age, and more accurately estimated baseline mortality risk (area under the receiver operating characteristic curve 0.85; p < 0.001 and p = 0.053 compared with Pediatric Sepsis Biomarker Risk Model and Pediatric Sepsis Biomarker Risk Model-II, respectively). The model was tested in the remaining 20% of subjects (n = 30) and demonstrated similar test characteristics.
A validated, biomarker-based risk stratification tool designed for pediatric sepsis was adapted for use in pediatric acute respiratory distress syndrome. The newly derived Pediatric Acute Respiratory Distress Syndrome Biomarker Risk Model demonstrates good test characteristics internally and requires external validation in a larger cohort. Tools such as Pediatric Acute Respiratory Distress Syndrome Biomarker Risk Model have the potential to provide improved risk stratification and prognostic enrichment for future trials in pediatric acute respiratory distress syndrome.
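
    The rederivation described above uses classification and regression tree (CART) methodology. As an illustration of the core CART step, the sketch below (with entirely hypothetical biomarker values and outcomes, not the study's data) searches for the single biomarker threshold that minimizes weighted Gini impurity:

```python
def gini(labels):
    """Gini impurity of a list of 0/1 outcome labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def best_split(values, outcomes):
    """Find the biomarker threshold minimizing weighted Gini impurity.

    Candidate thresholds are midpoints between consecutive sorted values;
    returns (threshold, impurity).
    """
    pairs = sorted(zip(values, outcomes))
    n = len(pairs)
    best = (None, float("inf"))
    for i in range(1, n):
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2.0
        left = [y for x, y in pairs if x <= thr]
        right = [y for x, y in pairs if x > thr]
        imp = (len(left) * gini(left) + len(right) * gini(right)) / n
        if imp < best[1]:
            best = (thr, imp)
    return best

# Hypothetical biomarker concentrations and 28-day mortality outcomes
marker = [1.2, 0.8, 3.5, 4.1, 0.9, 3.9, 1.1, 4.4]
died = [0, 0, 1, 1, 0, 1, 0, 1]
thr, imp = best_split(marker, died)
```

    A full CART implementation recurses on each resulting subgroup and prunes by cross-validation; in practice libraries handle this, and the study additionally validated the tree on a held-out 20% split.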

  1. EXPOSURE RELATED DOSE ESTIMATING MODEL (ERDEM): A PHYSIOLOGICALLY-BASED PHARMACOKINETIC AND PHARMACODYNAMIC (PBPK/PD) MODEL FOR ASSESSING HUMAN EXPOSURE AND RISK

    EPA Science Inventory

    The Exposure Related Dose Estimating Model (ERDEM) is a PBPK/PD modeling system that was developed by EPA's National Exposure Research Laboratory (NERL). The ERDEM framework provides the flexibility either to use existing models or to build new PBPK and PBPK/PD models to address...

  2. Large-scale application of the flood damage model RAilway Infrastructure Loss (RAIL)

    NASA Astrophysics Data System (ADS)

    Kellermann, Patric; Schönberger, Christine; Thieken, Annegret H.

    2016-11-01

    Experience has shown that river floods can significantly hamper the reliability of railway networks and cause extensive structural damage and disruption. As a result, the national railway operator in Austria had to cope with financial losses of more than EUR 100 million due to flooding in recent years. Comprehensive information on potential flood risk hot spots as well as on expected flood damage in Austria is therefore needed for strategic flood risk management. In view of this, the flood damage model RAIL (RAilway Infrastructure Loss) was applied to estimate (1) the expected structural flood damage and (2) the resulting repair costs of railway infrastructure due to a 30-, 100- and 300-year flood in the Austrian Mur River catchment. The results were then used to calculate the expected annual damage of the railway subnetwork and subsequently analysed in terms of their sensitivity to key model assumptions. Additionally, the impact of risk aversion on the estimates was investigated, and the overall results were briefly discussed against the background of climate change and possibly resulting changes in flood risk. The findings indicate that the RAIL model is capable of supporting decision-making in risk management by providing comprehensive risk information on the catchment level. It is furthermore demonstrated that an increased risk aversion of the railway operator has a marked influence on flood damage estimates for the study area and, hence, should be considered with regard to the development of risk management strategies.
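
    The expected annual damage reported in studies like this one is typically obtained by integrating damage over annual exceedance probability. A minimal sketch using the trapezoidal rule, with purely illustrative repair costs rather than the RAIL model's estimates:

```python
def expected_annual_damage(return_periods, damages):
    """Approximate expected annual damage by trapezoidal integration
    of damage over annual exceedance probability (p = 1/T)."""
    probs = [1.0 / t for t in return_periods]   # e.g. 1/30, 1/100, 1/300
    pts = sorted(zip(probs, damages))           # ascending probability
    ead = 0.0
    for (p0, d0), (p1, d1) in zip(pts, pts[1:]):
        ead += 0.5 * (d0 + d1) * (p1 - p0)
    return ead

# Hypothetical repair costs (million EUR) for 30-, 100- and 300-year floods
ead = expected_annual_damage([30, 100, 300], [10.0, 25.0, 60.0])
```

    Damage outside the modeled range of return periods is ignored here, so this truncated integral understates the full expected annual damage.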

  3. Demonstration of the Web-based Interspecies Correlation Estimation (Web-ICE) modeling application

    EPA Science Inventory

    The Web-based Interspecies Correlation Estimation (Web-ICE) modeling application is available to the risk assessment community through a user-friendly internet platform (http://epa.gov/ceampubl/fchain/webice/). ICE models are log-linear least square regressions that predict acute...

  4. Wildfire risk and housing prices: a case study from Colorado Springs.

    Treesearch

    G.H. Donovan; P.A. Champ; D.T. Butry

    2007-01-01

    Unlike other natural hazards such as floods, hurricanes, and earthquakes, wildfire risk has not previously been examined using a hedonic property value model. In this article, we estimate a hedonic model based on parcel-level wildfire risk ratings from Colorado Springs. We found that providing homeowners with specific information about the wildfire risk rating of their...
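
    A hedonic property value model regresses (log) sale price on property attributes, here reduced to a single attribute, the wildfire risk rating. The parcel data below are hypothetical, and a real hedonic model would control for many other attributes:

```python
import math

def ols_fit(x, y):
    """Least-squares fit of y = a + b * x; returns (a, b)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical parcel-level wildfire risk ratings (0 = low, 4 = extreme)
# and sale prices in dollars; hedonic models conventionally use log price.
risk = [0, 1, 1, 2, 2, 3, 3, 4]
price = [320000, 310000, 305000, 295000, 300000, 285000, 280000, 270000]
a, b = ols_fit(risk, [math.log(p) for p in price])
implied_discount = 1.0 - math.exp(b)   # price discount per extra risk point
```

    A negative slope on the risk rating implies a price discount for riskier parcels; the study's actual finding concerns how disclosed risk information is capitalized into prices.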

  5. Reduced Risk of Importing Ebola Virus Disease because of Travel Restrictions in 2014: A Retrospective Epidemiological Modeling Study.

    PubMed

    Otsuki, Shiori; Nishiura, Hiroshi

    An epidemic of Ebola virus disease (EVD) from 2013-16 posed a serious risk of global spread during its early growth phase. A post-epidemic evaluation of the effectiveness of travel restrictions has yet to be conducted. The present study aimed to estimate the effectiveness of travel restrictions in reducing the risk of importation from mid-August to September, 2014, using a simple hazard-based statistical model. The hazard rate was modeled as an inverse function of the effective distance, an excellent predictor of disease spread, which was calculated from the airline transportation network. By analyzing datasets of the date of EVD case importation from the 15th of July to the 15th of September 2014, and assuming that the network structure changed from the 8th of August 2014 because of travel restrictions, parameters that characterized the hazard rate were estimated. The absolute risk reduction and relative risk reductions due to travel restrictions were estimated to be less than 1% and about 20%, respectively, for all models tested. Effectiveness estimates among African countries were greater than those for other countries outside Africa. The travel restrictions were not effective enough to expect the prevention of global spread of Ebola virus disease. It is more efficient to control the spread of disease locally during an early phase of an epidemic than to attempt to control the epidemic at international borders. Capacity building for local containment and coordinated and expedited international cooperation are essential to reduce the risk of global transmission.
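
    The paper models the hazard rate as an inverse function of effective distance computed from the airline network. A simplified sketch of the idea, assuming a constant per-day hazard and illustrative parameter values (the paper's actual functional form and estimates differ):

```python
import math

def importation_probability(effective_distance, scale, days):
    """P(at least one imported case within `days`) under a constant
    hazard modeled as inversely proportional to effective distance."""
    hazard = scale / effective_distance      # per-day hazard rate
    return 1.0 - math.exp(-hazard * days)

# Hypothetical values: travel restrictions raise the effective distance
before = importation_probability(effective_distance=5.0, scale=0.01, days=30)
after = importation_probability(effective_distance=6.5, scale=0.01, days=30)
relative_risk_reduction = 1.0 - after / before
```

    On these made-up inputs, raising the effective distance lowers the 30-day importation probability by roughly the same order of magnitude as the ~20% relative risk reduction reported above.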

  6. Reduced Risk of Importing Ebola Virus Disease because of Travel Restrictions in 2014: A Retrospective Epidemiological Modeling Study

    PubMed Central

    Otsuki, Shiori

    2016-01-01

    Background An epidemic of Ebola virus disease (EVD) from 2013–16 posed a serious risk of global spread during its early growth phase. A post-epidemic evaluation of the effectiveness of travel restrictions has yet to be conducted. The present study aimed to estimate the effectiveness of travel restrictions in reducing the risk of importation from mid-August to September, 2014, using a simple hazard-based statistical model. Methodology/Principal Findings The hazard rate was modeled as an inverse function of the effective distance, an excellent predictor of disease spread, which was calculated from the airline transportation network. By analyzing datasets of the date of EVD case importation from the 15th of July to the 15th of September 2014, and assuming that the network structure changed from the 8th of August 2014 because of travel restrictions, parameters that characterized the hazard rate were estimated. The absolute risk reduction and relative risk reductions due to travel restrictions were estimated to be less than 1% and about 20%, respectively, for all models tested. Effectiveness estimates among African countries were greater than those for other countries outside Africa. Conclusions The travel restrictions were not effective enough to expect the prevention of global spread of Ebola virus disease. It is more efficient to control the spread of disease locally during an early phase of an epidemic than to attempt to control the epidemic at international borders. Capacity building for local containment and coordinated and expedited international cooperation are essential to reduce the risk of global transmission. PMID:27657544

  7. Multi-hazard risk analysis related to hurricanes

    NASA Astrophysics Data System (ADS)

    Lin, Ning

    Hurricanes present major hazards to the United States. Associated with extreme winds, heavy rainfall, and storm surge, landfalling hurricanes often cause enormous structural damage to coastal regions. Hurricane damage risk assessment provides the basis for loss mitigation and related policy-making. Current hurricane risk models, however, often oversimplify the complex processes of hurricane damage. This dissertation aims to improve existing hurricane risk assessment methodology by coherently modeling the spatial-temporal processes of storm landfall, hazards, and damage. Numerical modeling technologies are used to investigate the multiplicity of hazards associated with landfalling hurricanes. The application and effectiveness of current weather forecasting technologies to predict hurricane hazards are investigated. In particular, the Weather Research and Forecasting model (WRF), with Geophysical Fluid Dynamics Laboratory (GFDL)'s hurricane initialization scheme, is applied to the simulation of the wind and rainfall environment during hurricane landfall. The WRF model is further coupled with the Advanced Circulation (AD-CIRC) model to simulate storm surge in coastal regions. A case study examines the multiple hazards associated with Hurricane Isabel (2003). Also, a risk assessment methodology is developed to estimate the probability distribution of hurricane storm surge heights along the coast, particularly for data-scarce regions, such as New York City. This methodology makes use of relatively simple models, specifically a statistical/deterministic hurricane model and the Sea, Lake and Overland Surges from Hurricanes (SLOSH) model, to simulate large numbers of synthetic surge events, and conducts statistical analysis. The estimates of hurricane landfall probability and hazards are combined with structural vulnerability models to estimate hurricane damage risk. Wind-induced damage mechanisms are extensively studied.
An innovative windborne debris risk model is developed based on the theory of Poisson random measure, substantiated by a large amount of empirical data. An advanced vulnerability assessment methodology is then developed, by integrating this debris risk model and a component-based pressure damage model, to predict storm-specific or annual damage to coastal residential neighborhoods. The uniqueness of this vulnerability model lies in its detailed description of the interaction between wind pressure and windborne debris effects over periods of strong winds, which is a major mechanism leading to structural failures during hurricanes.
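
    The debris model above rests on the theory of Poisson random measures: if debris impacts on a building component arrive as a Poisson count, the chance of at least one impact follows directly from the mean count. A toy sketch with entirely hypothetical inputs:

```python
import math

def impact_probability(debris_density, target_area, trajectory_fraction):
    """P(>= 1 windborne-debris impact) when impact counts are Poisson.

    mean impacts = (flying debris per unit area) x (exposed envelope area)
                   x (fraction of trajectories reaching the target);
    all inputs here are illustrative, not empirical values.
    """
    mean_impacts = debris_density * target_area * trajectory_fraction
    return 1.0 - math.exp(-mean_impacts)

p = impact_probability(debris_density=0.2, target_area=12.0,
                       trajectory_fraction=0.1)
```

    The dissertation's model goes much further, coupling such debris impact probabilities with a component-based pressure damage model over the duration of strong winds.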

  8. The Effects of Vehicle Redesign on the Risk of Driver Death.

    PubMed

    Farmer, Charles M; Lund, Adrian K

    2015-01-01

    This study updates a 2006 report that estimated the historical effects of vehicle design changes on driver fatality rates in the United States, separate from the effects of environmental and driver behavior changes during the same period. In addition to extending the period covered by 8 years, this study estimated the effect of design changes by model year and vehicle type. Driver death rates for consecutive model years of vehicle models without design changes were used to estimate the vehicle aging effect and the death rates that would have been expected if the entire fleet had remained unchanged from the 1985 calendar year. These calendar year estimates are taken to be the combined effect of road environment and motorist behavioral changes, with the difference between them and the actual calendar year driver fatality rates reflecting the effect of changes in vehicle design and distribution of vehicle types. The effects of vehicle design changes by model year were estimated for cars, SUVs, and pickups by computing driver death rates for model years 1984-2009 during each of their first 3 full calendar years of exposure and comparing with the expected rates if there had been no design changes. As reported in the 2006 study, had there been no changes in the vehicle fleet, driver death risk would have declined during calendar years 1985-1993 and then slowly increased from 1993 to 2004. The updated results indicate that the gradual increase would have continued through 2006, after which driver fatality rates again would have declined through 2012. Overall, it is estimated that there were 7,700 fewer driver deaths in 2012 than there would have been had vehicle designs not changed. Cars were the first vehicle type whose design safety generally exceeded that of the 1984 model year (starting in model year 1996), followed by SUVs (1998 models) and pickups (2002 models). 
By the 2009 model year, car driver fatality risk had declined 51% from its high in 1994, pickup driver fatality risk had declined 61% from its high in 1988, and SUV risk had declined 79% from its high in 1988. The risk of driver death in 2009 model passenger vehicles was 8% lower than that in 2008 models and about half that in 1984 models. Changes in vehicles, whether from government regulations and consumer testing that led to advanced safety designs or from other factors such as consumer demand for different sizes and types of vehicles, have been key contributors to the decline in U.S. motor vehicle occupant crash death rates since the mid-1990s. Environmental and behavioral risk factors, by contrast, showed no comparable improvement from the early 1990s until the recession of 2007, even though many empirically proven countermeasures remain inadequately applied.

  9. Cross-sectional study to assess the association of population density with predicted breast cancer risk.

    PubMed

    Lee, Jeannette Y; Klimberg, Suzanne; Bondurant, Kristina L; Phillips, Martha M; Kadlubar, Susan A

    2014-01-01

    The Gail and CARE models estimate breast cancer risk for white and African-American (AA) women, respectively. The aims of this study were to compare metropolitan and nonmetropolitan women with respect to predicted breast cancer risks based on known risk factors, and to determine if population density was an independent risk factor for breast cancer risk. A cross-sectional survey was completed by 15,582 women between 35 and 85 years of age with no history of breast cancer. Metropolitan and nonmetropolitan women were compared with respect to risk factors and breast cancer risk estimates, using general linear models adjusted for age. For both white and AA women, risk factors used to estimate breast cancer risk included age at menarche, history of breast biopsies, and family history. For white women, age at first childbirth was an additional risk factor. In comparison to their nonmetropolitan counterparts, metropolitan white women were more likely to report having a breast biopsy, have a family history of breast cancer, and delay childbirth. Among white metropolitan and nonmetropolitan women, mean estimated 5-year risks were 1.44% and 1.32% (p < 0.001), and lifetime risks of breast cancer were 10.81% and 10.01% (p < 0.001), respectively. AA metropolitan residents were more likely than those from nonmetropolitan areas to have had a breast biopsy. Among AA metropolitan and nonmetropolitan women, mean estimated 5-year risks were 1.16% and 1.12% (p = 0.039) and lifetime risks were 8.94% and 8.85% (p = 0.344). Metropolitan residence was associated with higher predicted breast cancer risks for white women. Among AA women, metropolitan residence was associated with a higher predicted breast cancer risk at 5 years, but not over a lifetime. Population density was not an independent risk factor for breast cancer. © 2014 Wiley Periodicals, Inc.

  10. Assessing Risks to Sea Otters and the Exxon Valdez Oil Spill: New Scenarios, Attributable Risk, and Recovery

    PubMed Central

    Harwell, Mark A.; Gentile, John H.

    2014-01-01

    The Exxon Valdez oil spill occurred more than two decades ago, and the Prince William Sound ecosystem has essentially recovered. Nevertheless, discussion continues on whether or not localized effects persist on sea otters (Enhydra lutris) at northern Knight Island (NKI) and, if so, what the associated attributable risks are. A recent study estimated new rates of sea otter encounters with subsurface oil residues (SSOR) from the oil spill. We previously demonstrated that a potential pathway existed for exposures to polycyclic aromatic hydrocarbons (PAHs) and conducted a quantitative ecological risk assessment using an individual-based model that simulated this and other plausible exposure pathways. Here we quantitatively update the potential for this exposure pathway to constitute an ongoing risk to sea otters using the new estimates of SSOR encounters. Our conservative model predicted that the assimilated doses of PAHs to the 1-in-1000th most-exposed sea otters would remain 1–2 orders of magnitude below the chronic effects thresholds. We re-examine the baseline estimates, post-spill surveys, recovery status, and attributable risks for this subpopulation. We conclude that the new estimated frequencies of encountering SSOR do not constitute a plausible risk for sea otters at NKI and these sea otters have fully recovered from the oil spill. PMID:24587690

  11. Predictive Modeling of Risk Associated with Temperature Extremes over Continental US

    NASA Astrophysics Data System (ADS)

    Kravtsov, S.; Roebber, P.; Brazauskas, V.

    2016-12-01

    We build an extremely statistically accurate, essentially bias-free empirical emulator of atmospheric surface temperature and apply it for meteorological risk assessment over the domain of continental US. The resulting prediction scheme achieves an order-of-magnitude or larger gain of numerical efficiency compared with the schemes based on high-resolution dynamical atmospheric models, leading to unprecedented accuracy of the estimated risk distributions. The empirical model construction methodology is based on our earlier work, but is further modified to account for the influence of large-scale, global climate change on regional US weather and climate. The resulting estimates of the time-dependent, spatially extended probability of temperature extremes over the simulation period can be used as a risk management tool by insurance companies and regulatory governmental agencies.

  12. Estimating Longitudinal Risks and Benefits From Cardiovascular Preventive Therapies Among Medicare Patients: The Million Hearts Longitudinal ASCVD Risk Assessment Tool

    PubMed Central

    Lloyd-Jones, Donald M.; Huffman, Mark D.; Karmali, Kunal N.; Sanghavi, Darshak M.; Wright, Janet S.; Pelser, Colleen; Gulati, Martha; Masoudi, Frederick A.; Goff, David C.

    2016-01-01

    The Million Hearts Initiative has a goal of preventing 1 million heart attacks and strokes (the leading causes of mortality) through several public health and healthcare strategies by 2017. The American Heart Association and American College of Cardiology support the program. The Cardiovascular Risk Reduction Model was developed by Million Hearts and the Centers for Medicare & Medicaid Services as a strategy to assess a value-based payment approach toward reduction in 10-year predicted risk of atherosclerotic cardiovascular disease (ASCVD) by implementing cardiovascular preventive strategies to manage the “ABCS” (aspirin therapy in appropriate patients, blood pressure control, cholesterol management, and smoking cessation). The purpose of this special report is to describe the development and intended use of the Million Hearts Longitudinal ASCVD Risk Assessment Tool. The Million Hearts Tool reinforces and builds on the “2013 ACC/AHA Guideline on the Assessment of Cardiovascular Risk” by allowing clinicians to generate baseline and updated 10-year ASCVD risk estimates for primary prevention patients adhering to the appropriate ABCS over time, alone or in combination. The tool provides updated risk estimates based on evidence from high-quality systematic reviews and meta-analyses of the ABCS therapies. This novel approach to personalized estimation of benefits from risk-reducing therapies in primary prevention may help target therapies to those in whom they will provide the greatest benefit, and serves as the basis for a Centers for Medicare & Medicaid Services program designed to evaluate the Million Hearts Cardiovascular Risk Reduction Model. PMID:27825770
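
    The tool's core operation, updating a 10-year ASCVD risk estimate for adherence to preventive therapies, can be sketched as follows, assuming multiplicative relative risks. The RR values below are hypothetical placeholders, not the meta-analytic estimates the tool actually uses:

```python
def updated_risk(baseline_risk, relative_risks):
    """Update a 10-year ASCVD risk estimate for adherence to one or
    more preventive therapies, assuming multiplicative relative risks."""
    risk = baseline_risk
    for rr in relative_risks:
        risk *= rr
    return risk

# Hypothetical relative risks for blood-pressure control (0.8)
# and cholesterol management (0.75)
base = 0.15                      # 15% 10-year baseline risk
new = updated_risk(base, [0.8, 0.75])
absolute_risk_reduction = base - new
```

    The actual tool derives its relative risks from systematic reviews and handles the timing and combinations of the ABCS interventions; this sketch only shows why higher-baseline-risk patients gain larger absolute benefits from the same therapies.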

  13. Overview of the Special Issue: A Multi-Model Framework to Achieve Consistent Evaluation of Climate Change Impacts in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waldhoff, Stephanie T.; Martinich, Jeremy; Sarofim, Marcus

    2015-07-01

    The Climate Change Impacts and Risk Analysis (CIRA) modeling exercise is a unique contribution to the scientific literature on climate change impacts, economic damages, and risk analysis that brings together multiple, national-scale models of impacts and damages in an integrated and consistent fashion to estimate climate change impacts, damages, and the benefits of greenhouse gas (GHG) mitigation actions in the United States. The CIRA project uses three consistent socioeconomic, emissions, and climate scenarios across all models to estimate the benefits of GHG mitigation policies: a Business As Usual (BAU) and two policy scenarios with radiative forcing (RF) stabilization targets of 4.5 W/m2 and 3.7 W/m2 in 2100. CIRA was also designed to specifically examine the sensitivity of results to uncertainties around climate sensitivity and differences in model structure. The goals of the CIRA project are to 1) build a multi-model framework to produce estimates of multiple risks and impacts in the U.S., 2) determine to what degree risks and damages across sectors may be lowered from a BAU to policy scenarios, 3) evaluate key sources of uncertainty along the causal chain, and 4) provide information for multiple audiences and clearly communicate the risks and damages of climate change and the potential benefits of mitigation. This paper describes the motivations, goals, and design of the CIRA modeling exercise and introduces the subsequent papers in this special issue.

  14. Bayesian dose-response analysis for epidemiological studies with complex uncertainty in dose estimation.

    PubMed

    Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L

    2016-02-10

    Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure. Copyright © 2015 John Wiley & Sons, Ltd.
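
    The essence of using multiple dose realizations can be sketched as follows: fit a dose-response slope to each Monte Carlo draw of the dose vector and pool the estimates. This is a deliberate simplification (frequentist pooling on synthetic data, rather than the paper's Bayesian model averaging), with shared and unshared multiplicative errors as placeholders:

```python
import random
import statistics

def fit_slope(doses, responses):
    """Ordinary least-squares slope of response on dose."""
    n = len(doses)
    mx = sum(doses) / n
    my = sum(responses) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(doses, responses))
    sxx = sum((x - mx) ** 2 for x in doses)
    return sxy / sxx

random.seed(1)
true_doses = [random.uniform(0.1, 2.0) for _ in range(200)]
# Synthetic linear dose-response: baseline 0.02, true slope 0.05
responses = [0.02 + 0.05 * d + random.gauss(0, 0.01) for d in true_doses]

# Multiple realizations of the dose vector: a shared multiplicative
# error (same for all subjects) and unshared per-subject errors.
slopes = []
for _ in range(100):
    shared = random.lognormvariate(0, 0.1)
    draw = [d * shared * random.lognormvariate(0, 0.1) for d in true_doses]
    slopes.append(fit_slope(draw, responses))

pooled = statistics.mean(slopes)
```

    Increasing the unshared error attenuates the pooled slope toward zero (classical regression dilution), which is precisely the bias the paper's Bayesian method is designed to correct.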

  15. Challenges of Modeling Flood Risk at Large Scales

    NASA Astrophysics Data System (ADS)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public sector or in the private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale such that all inter-dependencies in a river network are well understood. From an insurance perspective the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at the scale suitable for a viable insurance market for flood peril. In order to serve the insurance market, a model has to be event-simulation based and has to provide financial risk estimation that forms the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years, which are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km.
A dynamical flow routing algorithm propagates the flows for each simulated event. The model incorporates a digital terrain model (DTM) at 10m horizontal resolution, which is used to extract flood plain cross-sections such that a one-dimensional hydraulic model can be used to estimate extent and elevation of flooding. In doing so, the effect of flood defenses in mitigating floods is accounted for. Finally, a suite of vulnerability relationships has been developed to estimate flood losses for a portfolio of properties that are exposed to flood hazard. Historical experience indicates that for recent floods in Great Britain more than 50% of insurance claims occur outside the flood plain, and these are primarily a result of excess surface flow, hillside flooding, and flooding due to inadequate drainage. A sub-component of the model addresses this issue by considering several parameters that best explain the variability of claims off the flood plain. The challenges of modeling such a complex phenomenon at a large scale largely dictate the choice of modeling approaches that need to be adopted for each of these model components. While detailed numerically-based physical models exist and have been used for conducting flood hazard studies, they are generally restricted to small geographic regions. In a probabilistic risk estimation framework like our current model, a blend of deterministic and statistical techniques has to be employed such that each model component is independent, physically sound and is able to maintain the statistical properties of observed historical data. This is particularly important because of the highly non-linear behavior of the flooding process. With respect to vulnerability modeling, both on and off the flood plain, the challenges include the appropriate scaling of a damage relationship when applied to a portfolio of properties.
This arises from the fact that the estimated hazard parameter used for damage assessment, namely maximum flood depth has considerable uncertainty. The uncertainty can be attributed to various sources among which are imperfections in the hazard modeling, inherent errors in the DTM, lack of accurate information on the properties that are being analyzed, imperfections in the vulnerability relationships, inability of the model to account for local mitigation measures that are usually undertaken when a real event is unfolding and lack of details in the claims data that are used for model calibration. Nevertheless, the model once calibrated provides a very robust framework for analyzing relative and absolute risk. The paper concludes with key economic statistics of flood risk for Great Britain as a whole including certain large loss-causing scenarios affecting the greater London region. The model estimates a total financial loss of 5.6 billion GBP to all properties at a 1% annual aggregate exceedance probability level.
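
    A loss at a 1% annual aggregate exceedance probability, like the figure quoted above, can be estimated empirically from simulated annual losses. A sketch with synthetic loss data rather than the model's actual output:

```python
import random

def loss_at_exceedance(annual_losses, probability):
    """Empirical loss threshold exceeded with the given annual
    probability, from simulated annual aggregate losses."""
    ranked = sorted(annual_losses, reverse=True)
    index = max(0, int(probability * len(ranked)) - 1)
    return ranked[index]

# Synthetic annual aggregate losses (billion GBP) for 1,000 simulated years
random.seed(7)
losses = [random.expovariate(1.0) for _ in range(1000)]
loss_1pct = loss_at_exceedance(losses, 0.01)   # 1% AEP loss level
```

    Rarer exceedance probabilities correspond to larger loss thresholds, which is why event-simulation models need many thousands of simulated years to pin down the tail.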

  16. Boosting structured additive quantile regression for longitudinal childhood obesity data.

    PubMed

    Fenske, Nora; Fahrmeir, Ludwig; Hothorn, Torsten; Rzehak, Peter; Höhle, Michael

    2013-07-25

    Childhood obesity and the investigation of its risk factors has become an important public health issue. Our work is based on and motivated by a German longitudinal study including 2,226 children with up to ten measurements on their body mass index (BMI) and risk factors from birth to the age of 10 years. We introduce boosting of structured additive quantile regression as a novel distribution-free approach for longitudinal quantile regression. The quantile-specific predictors of our model include conventional linear population effects, smooth nonlinear functional effects, varying-coefficient terms, and individual-specific effects, such as intercepts and slopes. Estimation is based on boosting, a computer intensive inference method for highly complex models. We propose a component-wise functional gradient descent boosting algorithm that allows for penalized estimation of the large variety of different effects, particularly leading to individual-specific effects shrunken toward zero. This concept allows us to flexibly estimate the nonlinear age curves of upper quantiles of the BMI distribution, both on population and on individual-specific level, adjusted for further risk factors and to detect age-varying effects of categorical risk factors. Our model approach can be regarded as the quantile regression analog of Gaussian additive mixed models (or structured additive mean regression models), and we compare both model classes with respect to our obesity data.
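
    Boosting for quantile regression performs functional gradient descent on the check (pinball) loss. The sketch below uses the simplest possible base learner, an intercept, so that the fit converges to the requested quantile; the BMI-like values are hypothetical:

```python
def boost_quantile(values, tau, steps=2000, nu=0.1):
    """Estimate the tau-quantile by functional gradient descent on the
    check (pinball) loss, using an intercept-only base learner."""
    fit = sum(values) / len(values)          # start from the mean
    for _ in range(steps):
        # negative gradient of the check loss at the current fit:
        # tau above the fit, tau - 1 at or below it
        grad = [tau if v > fit else tau - 1.0 for v in values]
        fit += nu * sum(grad) / len(grad)
    return fit

# Hypothetical BMI measurements
data = [14.1, 15.0, 15.3, 15.8, 16.2, 16.9, 17.4, 18.0, 19.5, 22.3]
q90 = boost_quantile(data, 0.9)
```

    Replacing the intercept with penalized smooth, varying-coefficient, or individual-specific base learners, updated component-wise, yields the structured additive quantile predictors described above.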

  17. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    NASA Astrophysics Data System (ADS)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected count in an area; it has the greatest uncertainty when the disease is rare or the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which might solve the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to the data using WinBUGS software. The study starts with a brief review of these models, starting with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates compared to the classical method. The log-normal model can overcome the SMR problem when there are no observed bladder cancer cases in an area.
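
    The two estimators can be sketched side by side: the raw SMR, and a crude log-normal-style shrinkage of log relative risks toward the global mean (a simplified stand-in for the full Bayesian model fitted in WinBUGS). The counts below are hypothetical:

```python
import math

def smr(observed, expected):
    """Standardized Morbidity Ratio: observed over expected cases."""
    return observed / expected

def lognormal_smoothed(observed, expected, prior_weight=1.0):
    """Crude log-normal-style smoothing: shrink each log relative risk
    toward the global mean, weighting areas by their expected counts."""
    # Add 0.5 to observed counts so zero-case areas stay defined
    logs = [math.log((o + 0.5) / e) for o, e in zip(observed, expected)]
    global_log = sum(logs) / len(logs)
    out = []
    for lg, e in zip(logs, expected):
        w = e / (e + prior_weight)        # more data -> less shrinkage
        out.append(math.exp(w * lg + (1 - w) * global_log))
    return out

observed = [0, 3, 12]      # hypothetical bladder cancer counts per area
expected = [1.5, 2.0, 8.0]
raw = [smr(o, e) for o, e in zip(observed, expected)]
smoothed = lognormal_smoothed(observed, expected)
```

    On these numbers the zero-count area gets a raw SMR of exactly 0, while the smoothed estimate stays positive, illustrating how a log-normal model handles areas with no observed cases.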

  18. 12 CFR 217.153 - Internal models approach (IMA).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...-regulated institution must have one or more models that: (i) Assess the potential decline in value of its... idiosyncratic risk. (2) The Board-regulated institution's model must produce an estimate of potential losses for its modeled equity exposures that is no less than the estimate of potential losses produced by a VaR...

  19. Estimating the personal cure rate of cancer patients using population-based grouped cancer survival data.

    PubMed

    Yu, Binbing; Tiwari, Ram C; Feuer, Eric J

    2011-06-01

    Cancer patients are subject to multiple competing risks of death and may die from causes other than the cancer diagnosed. The probability of not dying from the cancer diagnosed, which is one of the patients' main concerns, is sometimes called the 'personal cure' rate. Two approaches, namely the cause-specific hazards approach and the mixture model approach, have been used to model competing-risk survival data. In this article, we first show the connection and differences between crude cause-specific survival in the presence of other causes and net survival in the absence of other causes. The mixture survival model is extended to population-based grouped survival data to estimate the personal cure rate. Using the colorectal cancer survival data from the Surveillance, Epidemiology and End Results Programme, we estimate the probabilities of dying from colorectal cancer, heart disease, and other causes by age at diagnosis, race and American Joint Committee on Cancer stage.

  20. Deterministic SLIR model for tuberculosis disease mapping

    NASA Astrophysics Data System (ADS)

    Aziz, Nazrina; Diah, Ijlal Mohd; Ahmad, Nazihah; Kasim, Maznah Mat

    2017-11-01

    Tuberculosis (TB) occurs worldwide. It can be transmitted to others directly through the air when persons with active TB sneeze, cough or spit. In Malaysia, TB has been recognized as one of the most infectious diseases leading to death. Disease mapping is one of the methods that can be used in prevention strategies, since it displays a clear picture of the high- and low-risk areas. An important consideration when studying disease occurrence is relative risk estimation. The transmission of TB is studied through a mathematical model; in this study, deterministic SLIR models are used to estimate the relative risk of TB disease transmission.
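    The abstract does not spell out the model equations, but a deterministic SLIR system is conventionally written with Susceptible, Latent, Infectious and Recovered compartments; a forward-Euler sketch under that assumption is below. The transmission, progression and recovery rates and the population sizes are hypothetical placeholders, not estimates from the study.

```python
def slir_step(S, L, I, R, beta, sigma, gamma, dt=0.1):
    """One forward-Euler step of a deterministic SLIR system:
    S' = -beta*S*I/N,  L' = beta*S*I/N - sigma*L,
    I' = sigma*L - gamma*I,  R' = gamma*I."""
    N = S + L + I + R
    new_inf = beta * S * I / N     # new latent infections per unit time
    dS = -new_inf
    dL = new_inf - sigma * L       # sigma: progression from latent to active TB
    dI = sigma * L - gamma * I     # gamma: recovery/removal rate
    dR = gamma * I
    return S + dS * dt, L + dL * dt, I + dI * dt, R + dR * dt

# Hypothetical parameters and initial population for one district
S, L, I, R = 990.0, 0.0, 10.0, 0.0
for _ in range(1000):
    S, L, I, R = slir_step(S, L, I, R, beta=0.4, sigma=0.1, gamma=0.2)
print(round(I, 2), round(S + L + I + R, 2))
```

    The derivatives sum to zero, so total population is conserved at every step; a district-level relative risk can then be read off as the modelled infectious fraction relative to the overall average.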

  1. Genetic markers enhance coronary risk prediction in men: the MORGAM prospective cohorts.

    PubMed

    Hughes, Maria F; Saarela, Olli; Stritzke, Jan; Kee, Frank; Silander, Kaisa; Klopp, Norman; Kontto, Jukka; Karvanen, Juha; Willenborg, Christina; Salomaa, Veikko; Virtamo, Jarmo; Amouyel, Phillippe; Arveiler, Dominique; Ferrières, Jean; Wiklund, Per-Gunner; Baumert, Jens; Thorand, Barbara; Diemert, Patrick; Trégouët, David-Alexandre; Hengstenberg, Christian; Peters, Annette; Evans, Alun; Koenig, Wolfgang; Erdmann, Jeanette; Samani, Nilesh J; Kuulasmaa, Kari; Schunkert, Heribert

    2012-01-01

    More accurate coronary heart disease (CHD) prediction, specifically in middle-aged men, is needed to reduce the burden of disease more effectively. We hypothesised that a multilocus genetic risk score could refine CHD prediction beyond classic risk scores and obtain more precise risk estimates using a prospective cohort design. Using data from nine prospective European cohorts, including 26,221 men, we selected in a case-cohort setting 4,818 healthy men at baseline, and used Cox proportional hazards models to examine associations between CHD and risk scores based on genetic variants representing 13 genomic regions. Over follow-up (range: 5-18 years), 1,736 incident CHD events occurred. Genetic risk scores were validated in men with at least 10 years of follow-up (632 cases, 1,361 non-cases). Genetic risk score 1 (GRS1) combined 11 SNPs and two haplotypes, with effect estimates from previous genome-wide association studies. GRS2 combined 11 SNPs plus 4 SNPs from the haplotypes with coefficients estimated from these prospective cohorts using 10-fold cross-validation. Scores were added to a model adjusted for classic risk factors comprising the Framingham risk score and 10-year risks were derived. Both scores improved net reclassification (NRI) over the Framingham score (7.5%, p = 0.017 for GRS1; 6.5%, p = 0.044 for GRS2) but GRS2 also improved discrimination (c-index improvement 1.11%, p = 0.048). Subgroup analysis on men aged 50-59 (436 cases, 603 non-cases) improved net reclassification for GRS1 (13.8%) and GRS2 (12.5%). Net reclassification improvement remained significant for both scores when family history of CHD was added to the baseline model for this male subgroup, improving prediction of early onset CHD events. Genetic risk scores add precision to risk estimates for CHD and improve prediction beyond classic risk factors, particularly for middle-aged men.
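    As an illustration of the score construction described above: a multilocus genetic risk score is a weighted sum of risk-allele counts, with per-allele weights taken from prior genome-wide association studies (as in GRS1) or estimated in the cohorts by cross-validation (as in GRS2). The SNP weights and genotype below are hypothetical, not the 13 loci used in MORGAM.

```python
def genetic_risk_score(allele_counts, weights):
    """Weighted allele score: sum over SNPs of the risk-allele count
    (0, 1 or 2) times the per-allele log hazard ratio."""
    assert len(allele_counts) == len(weights)
    return sum(c * w for c, w in zip(allele_counts, weights))

# Hypothetical per-allele log hazard ratios for five illustrative SNPs
weights = [0.18, 0.10, 0.07, 0.12, 0.05]
person = [2, 1, 0, 1, 2]   # risk-allele counts per SNP for one man
print(round(genetic_risk_score(person, weights), 3))
```

    The score then enters a Cox proportional hazards model alongside the classic risk factors; exp(beta * score) gives the relative hazard attributable to the genetic component.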

  2. Risk Estimation for Lung Cancer in Libya: Analysis Based on Standardized Morbidity Ratio, Poisson-Gamma Model, BYM Model and Mixture Model

    PubMed

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-03-01

    Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya, and it represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Disease mapping has therefore recently become an important method in public health research and disease epidemiology, and the correct choice of statistical model is a very important step in producing a good disease map. Libya was selected for this work to examine the geographical variation in the incidence of lung cancer. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models were used to estimate the relative risk for lung cancer, together with population censuses of the study area for the period 2006 to 2011: the Standardized Morbidity Ratio (SMR), the most popular statistic in the field of disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollié (BYM) model; and the Mixture model. As an initial step, this study provides a review of all the proposed models, which we then apply to lung cancer data in Libya. Maps, tables, graphs and goodness-of-fit (GOF) criteria, which are common in statistical modelling for comparing fitted models, were used to compare and present the preliminary results. The main results show that the Poisson-gamma, BYM, and Mixture models can overcome the problem of the first model (SMR) when there are no observed lung cancer cases in certain districts. The Mixture model is the most robust and provides better relative risk estimates across the range of models. Creative Commons Attribution License
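    Of the four models, the Poisson-gamma has the simplest closed form, and a minimal sketch shows why it repairs the SMR's zero-count problem: with O_i ~ Poisson(E_i * RR_i) and a Gamma(alpha, beta) prior on RR_i, the posterior mean relative risk is (alpha + O_i) / (beta + E_i). The prior parameters and district counts below are illustrative assumptions, not the study's data.

```python
def poisson_gamma_rr(observed, expected, alpha=1.0, beta=1.0):
    """Posterior-mean relative risk under a Gamma(alpha, beta) prior:
    RR_i | O_i ~ Gamma(alpha + O_i, beta + E_i), so the posterior
    mean is (alpha + O_i) / (beta + E_i)."""
    return [(alpha + o) / (beta + e) for o, e in zip(observed, expected)]

obs = [0, 2, 15, 6]           # observed lung cancer cases per district
exp_ = [3.2, 4.1, 10.5, 5.9]  # expected cases from reference rates
print([round(r, 2) for r in poisson_gamma_rr(obs, exp_)])
```

    The district with zero observed cases, for which the raw SMR is zero, receives a small positive posterior relative risk, shrunk toward the prior mean.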

  3. Risk Estimation for Lung Cancer in Libya: Analysis Based on Standardized Morbidity Ratio, Poisson-Gamma Model, BYM Model and Mixture Model

    PubMed Central

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-01-01

    Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya, and it represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Disease mapping has therefore recently become an important method in public health research and disease epidemiology, and the correct choice of statistical model is a very important step in producing a good disease map. Libya was selected for this work to examine the geographical variation in the incidence of lung cancer. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models were used to estimate the relative risk for lung cancer, together with population censuses of the study area for the period 2006 to 2011: the Standardized Morbidity Ratio (SMR), the most popular statistic in the field of disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollié (BYM) model; and the Mixture model. As an initial step, this study provides a review of all the proposed models, which we then apply to lung cancer data in Libya. Maps, tables, graphs and goodness-of-fit (GOF) criteria, which are common in statistical modelling for comparing fitted models, were used to compare and present the preliminary results. The main results show that the Poisson-gamma, BYM, and Mixture models can overcome the problem of the first model (SMR) when there are no observed lung cancer cases in certain districts. The Mixture model is the most robust and provides better relative risk estimates across the range of models. PMID:28440974

  4. A framework for estimating radiation-related cancer risks in Japan from the 2011 Fukushima nuclear accident.

    PubMed

    Walsh, L; Zhang, W; Shore, R E; Auvinen, A; Laurier, D; Wakeford, R; Jacob, P; Gent, N; Anspaugh, L R; Schüz, J; Kesminiene, A; van Deventer, E; Tritscher, A; del Rosario Pérez, M

    2014-11-01

    We present here a methodology for health risk assessment adopted by the World Health Organization that provides a framework for estimating risks from the Fukushima nuclear accident after the major Japanese earthquake and tsunami of March 11, 2011. Substantial attention has been given to the possible health risks associated with human exposure to radiation from damaged reactors at the Fukushima Daiichi nuclear power station. Cumulative doses were estimated and applied for each post-accident year of life, based on a reference level of exposure during the first year after the earthquake. A lifetime cumulative dose of twice the first year dose was estimated for the primary radionuclide contaminants (¹³⁴Cs and ¹³⁷Cs), based on Chernobyl data, relative abundances of cesium isotopes, and cleanup efforts. Risks for particularly radiosensitive cancer sites (leukemia, thyroid and breast cancer), as well as the combined risk for all solid cancers, were considered. The male and female cumulative risks of cancer incidence attributed to radiation doses from the accident, for those exposed at various ages, were estimated in terms of the lifetime attributable risk (LAR). Calculations of LAR were based on recent Japanese population statistics for cancer incidence and current radiation risk models from the Life Span Study of Japanese A-bomb survivors. Cancer risks over an initial period of 15 years after first exposure were also considered. LAR results were also given as a percentage of the lifetime baseline risk (i.e., the cancer risk in the absence of radiation exposure from the accident). The LAR results were based on either a reference first year dose (10 mGy) or a reference lifetime dose (20 mGy) so that risk assessment may be applied for relocated and non-relocated members of the public, as well as for adult male emergency workers. The results show that the major contribution to LAR from the reference lifetime dose comes from the first year dose.
For a dose of 10 mGy in the first year and continuing exposure, the lifetime radiation-related cancer risks based on lifetime dose (which are highest for children under 5 years of age at initial exposure) are small, and much smaller than the lifetime baseline cancer risks. For example, after initial exposure at age 1 year, the lifetime excess radiation risk and baseline risk of all solid cancers in females were estimated to be 0.7 × 10⁻² and 29.0 × 10⁻², respectively. The 15 year risks based on the lifetime reference dose are very small. However, for initial exposure in childhood, the 15 year risks based on the lifetime reference dose are up to 33% and 88% as large as the 15 year baseline risks for leukemia and thyroid cancer, respectively. The results may be scaled to particular dose estimates after consideration of caveats. One caveat is related to the lack of epidemiological evidence defining risks at low doses, because the predicted risks come from cancer risk models fitted to a wide dose range (0-4 Gy), which assume that the solid cancer and leukemia lifetime risks for doses less than about 0.5 Gy and 0.2 Gy, respectively, are proportional to organ/tissue doses: this is unlikely to seriously underestimate risks, but may overestimate risks. This WHO-HRA framework may be used to update the risk estimates when new population health statistics data, dosimetry information and radiation risk models become available.

  5. Probabilistic flood damage modelling at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2014-05-01

    Decisions on flood risk management and adaptation are usually based on risk analyses. Such analyses are associated with significant uncertainty, all the more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice in flood risk assessments. Most damage models have in common that complex damaging processes are described by simple, deterministic approaches such as stage-damage functions. Novel probabilistic, multi-variate flood damage models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we show how the model BT-FLEMO (Bagging decision Tree based Flood Loss Estimation MOdel) can be applied on the meso-scale, namely on the basis of ATKIS land-use units. The model is applied to 19 municipalities which were affected by the 2002 flood of the River Mulde in Saxony, Germany. The application of BT-FLEMO provides a probability distribution of estimated damage to residential buildings per municipality. Validation is undertaken, on the one hand, via a comparison with eight other damage models, including stage-damage functions as well as multi-variate models, and, on the other hand, by comparing the results with official damage data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of damage estimation remain high. A significant advantage of the probabilistic flood loss estimation model BT-FLEMO is therefore that it inherently provides quantitative information about the uncertainty of its predictions. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64.

  6. The risk of airborne influenza transmission in passenger cars.

    PubMed

    Knibbs, L D; Morawska, L; Bell, S C

    2012-03-01

    Travel in passenger cars is a ubiquitous aspect of the daily activities of many people. During the 2009 influenza A(H1N1) pandemic a case of probable transmission during car travel was reported in Australia, to which spread via the airborne route may have contributed. However, there are no data to indicate the likely risks of such events, and how they may vary and be mitigated. To address this knowledge gap, we estimated the risk of airborne influenza transmission in two cars (1989 model and 2005 model) by employing ventilation measurements and a variation of the Wells-Riley model. Results suggested that infection risk can be reduced by not recirculating air; however, estimated risk ranged from 59% to 99.9% for a 90-min trip when air was recirculated in the newer vehicle. These results have implications for interrupting in-car transmission of other illnesses spread by the airborne route.
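    The Wells-Riley model referenced above gives the airborne infection probability as P = 1 - exp(-I*q*p*t/Q). The quanta generation rate, breathing rate, trip length and ventilation flows below are hypothetical round numbers for illustration, not the ventilation measurements from the two cars studied.

```python
import math

def wells_riley_risk(infectors, quanta_rate, breathing_rate, time_h, vent_flow):
    """Wells-Riley probability of airborne infection:
    P = 1 - exp(-I*q*p*t / Q), with I infectors, q quanta/h,
    p breathing rate (m^3/h), t exposure time (h), and Q the
    outdoor-air ventilation flow (m^3/h)."""
    return 1.0 - math.exp(-infectors * quanta_rate * breathing_rate * time_h / vent_flow)

# Hypothetical scenario: one infectious passenger, quanta generation
# 67/h, breathing 0.54 m^3/h, 1.5 h trip, low vs high outdoor airflow.
low_vent = wells_riley_risk(1, 67.0, 0.54, 1.5, vent_flow=10.0)    # recirculation
high_vent = wells_riley_risk(1, 67.0, 0.54, 1.5, vent_flow=120.0)  # fresh-air mode
print(round(low_vent, 3), round(high_vent, 3))
```

    The exponential form makes the role of ventilation explicit: increasing the outdoor-air flow Q shrinks the exponent and hence the infection risk, which is the mechanism behind the recirculation result reported above.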

  7. Air pollution and health risks due to vehicle traffic.

    PubMed

    Zhang, Kai; Batterman, Stuart

    2013-04-15

    Traffic congestion increases vehicle emissions and degrades ambient air quality, and recent studies have shown excess morbidity and mortality for drivers, commuters and individuals living near major roadways. Presently, our understanding of the air pollution impacts from congestion on roads is very limited. This study demonstrates an approach to characterize risks of traffic for on- and near-road populations. Simulation modeling was used to estimate on- and near-road NO2 concentrations and health risks for freeway and arterial scenarios attributable to traffic for different traffic volumes during rush hour periods. The modeling used emission factors from two different models (Comprehensive Modal Emissions Model and Motor Vehicle Emissions Factor Model version 6.2), an empirical traffic speed-volume relationship, the California Line Source Dispersion Model, an empirical NO2-NOx relationship, estimated travel time changes during congestion, and concentration-response relationships from the literature, which give emergency doctor visits, hospital admissions and mortality attributed to NO2 exposure. An incremental analysis, which expresses the change in health risks for small increases in traffic volume, showed non-linear effects. For a freeway, "U" shaped trends of incremental risks were predicted for on-road populations, and incremental risks are flat at low traffic volumes for near-road populations. For an arterial road, incremental risks increased sharply for both on- and near-road populations as traffic increased. These patterns result from changes in emission factors, the NO2-NOx relationship, the travel delay for the on-road population, and the extended duration of rush hour for the near-road population. This study suggests that health risks from congestion are potentially significant, and that additional traffic can significantly increase risks, depending on the type of road and other factors. 
Further, evaluations of risk associated with congestion must consider travel time, the duration of rush-hour, congestion-specific emission estimates, and uncertainties. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Air pollution and health risks due to vehicle traffic

    PubMed Central

    Zhang, Kai; Batterman, Stuart

    2014-01-01

    Traffic congestion increases vehicle emissions and degrades ambient air quality, and recent studies have shown excess morbidity and mortality for drivers, commuters and individuals living near major roadways. Presently, our understanding of the air pollution impacts from congestion on roads is very limited. This study demonstrates an approach to characterize risks of traffic for on- and near-road populations. Simulation modeling was used to estimate on- and near-road NO2 concentrations and health risks for freeway and arterial scenarios attributable to traffic for different traffic volumes during rush hour periods. The modeling used emission factors from two different models (Comprehensive Modal Emissions Model and Motor Vehicle Emissions Factor Model version 6.2), an empirical traffic speed–volume relationship, the California Line Source Dispersion Model, an empirical NO2–NOx relationship, estimated travel time changes during congestion, and concentration–response relationships from the literature, which give emergency doctor visits, hospital admissions and mortality attributed to NO2 exposure. An incremental analysis, which expresses the change in health risks for small increases in traffic volume, showed non-linear effects. For a freeway, “U” shaped trends of incremental risks were predicted for on-road populations, and incremental risks are flat at low traffic volumes for near-road populations. For an arterial road, incremental risks increased sharply for both on- and near-road populations as traffic increased. These patterns result from changes in emission factors, the NO2–NOx relationship, the travel delay for the on-road population, and the extended duration of rush hour for the near-road population. This study suggests that health risks from congestion are potentially significant, and that additional traffic can significantly increase risks, depending on the type of road and other factors. 
Further, evaluations of risk associated with congestion must consider travel time, the duration of rush-hour, congestion-specific emission estimates, and uncertainties. PMID:23500830

  9. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    PubMed

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%, respectively. For the propensity score model, this was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates became less than the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage became less than nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite the better performance of disease risk score methods than logistic regression and propensity score models in small events per coefficient settings, bias and coverage still deviated from nominal levels.

  10. Grading the probabilities of credit default risk for Malaysian listed companies by using the KMV-Merton model

    NASA Astrophysics Data System (ADS)

    Anuwar, Muhammad Hafidz; Jaffar, Maheran Mohd

    2017-08-01

    This paper provides an overview of the assessment of credit risk, specifically for banks. In finance, risk is a term that reflects the potential for financial loss. The risk of default on a loan increases when a company fails to make payments on that loan as they come due. Hence, this framework analyses the KMV-Merton model to estimate the probabilities of default for Malaysian listed companies. In this way, banks can verify the ability of companies to meet their loan commitments in order to avoid bad investments and financial losses. The model was applied to all Malaysian listed companies in Bursa Malaysia to estimate their credit default probabilities, and the results were compared with the ratings issued by the rating agency RAM Holdings Berhad to check their correspondence with reality. The significance of this study is that a credit risk grade based on the KMV-Merton model is proposed for Malaysian listed companies.
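    At its core, the KMV-Merton machinery reduces to a distance-to-default formula; a minimal sketch is below. In the full KMV approach the unobservable asset value and asset volatility are backed out of equity prices and the capital structure; here they are taken as given, and all firm numbers are hypothetical.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_default_probability(V, D, mu, sigma, T=1.0):
    """Distance to default and default probability in the Merton model:
    DD = [ln(V/D) + (mu - sigma^2/2)*T] / (sigma*sqrt(T)),  PD = N(-DD),
    where V is asset value, D the default point (debt due at T),
    mu the asset drift and sigma the asset volatility."""
    dd = (math.log(V / D) + (mu - 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return dd, norm_cdf(-dd)

# Hypothetical firm: assets 150M, debt barrier 100M, 8% drift, 25% volatility
dd, pd = merton_default_probability(V=150.0, D=100.0, mu=0.08, sigma=0.25, T=1.0)
print(round(dd, 2), round(pd, 4))
```

    A credit risk grade like the one the study proposes can then be formed by binning these default probabilities into rating bands.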

  11. A New View of Radiation-Induced Cancer: Integrating Short- and Long-Term Processes. Part II: Second Cancer Risk Estimation

    NASA Technical Reports Server (NTRS)

    Shuryak, Igor; Brenner, David J.; Hahnfeldt, Philip; Hlatky, Lynn; Sachs, Rainer K.

    2009-01-01

    As the number of cancer survivors grows, prediction of radiotherapy-induced second cancer risks becomes increasingly important. Because the latency period for solid tumors is long, the risks of recently introduced radiotherapy protocols are not yet directly measurable. In the accompanying article, we presented a new biologically based mathematical model, which, in principle, can estimate second cancer risks for any protocol. The novelty of the model is that it integrates, into a single formalism, mechanistic analyses of pre-malignant cell dynamics on two different time scales: short-term during radiotherapy and recovery; long-term during the entire life span. Here, we apply the model to nine solid cancer types (stomach, lung, colon, rectal, pancreatic, bladder, breast, central nervous system, and thyroid) using data on radiotherapy-induced second malignancies, on Japanese atomic bomb survivors, and on background US cancer incidence. Potentially, the model can be incorporated into radiotherapy treatment planning algorithms, adding second cancer risk as an optimization criterion.

  12. A new view of radiation-induced cancer: integrating short- and long-term processes. Part II: second cancer risk estimation.

    PubMed

    Shuryak, Igor; Hahnfeldt, Philip; Hlatky, Lynn; Sachs, Rainer K; Brenner, David J

    2009-08-01

    As the number of cancer survivors grows, prediction of radiotherapy-induced second cancer risks becomes increasingly important. Because the latency period for solid tumors is long, the risks of recently introduced radiotherapy protocols are not yet directly measurable. In the accompanying article, we presented a new biologically based mathematical model, which, in principle, can estimate second cancer risks for any protocol. The novelty of the model is that it integrates, into a single formalism, mechanistic analyses of pre-malignant cell dynamics on two different time scales: short-term during radiotherapy and recovery; long-term during the entire life span. Here, we apply the model to nine solid cancer types (stomach, lung, colon, rectal, pancreatic, bladder, breast, central nervous system, and thyroid) using data on radiotherapy-induced second malignancies, on Japanese atomic bomb survivors, and on background US cancer incidence. Potentially, the model can be incorporated into radiotherapy treatment planning algorithms, adding second cancer risk as an optimization criterion.

  13. Idiosyncratic risk in the Dow Jones Eurostoxx50 Index

    NASA Astrophysics Data System (ADS)

    Daly, Kevin; Vo, Vinh

    2008-07-01

    Recent evidence by Campbell et al. [J.Y. Campbell, M. Lettau, B.G. Malkiel, Y. Xu, Have individual stocks become more volatile? An empirical exploration of idiosyncratic risk, The Journal of Finance (February) (2001)] shows an increase in firm-level volatility and a decline of the correlation among stock returns in the US. In relation to the Euro-Area stock markets, we find that both aggregate firm-level volatility and average stock market correlation have trended upwards. We estimate a linear model of the market risk-return relationship nested in an EGARCH(1, 1)-M model for conditional second moments. We then show that traditional estimates of the conditional risk-return relationship, which use ex-post excess returns as the conditioning information set, lead to joint tests of the theoretical model (usually the ICAPM) and of the Efficient Market Hypothesis in its strong form. To overcome this problem we propose alternative measures of expected market risk based on implied volatility extracted from traded option prices, and we discuss the conditions under which implied volatility depends solely on expected risk. We then regress market excess returns on lagged implied market variance to estimate the relationship between expected market excess returns and expected market risk. We investigate whether, as predicted by the ICAPM, the expected market risk is the main factor in explaining the market risk premium, and whether the latter is independent of aggregate idiosyncratic risk.
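    The final step described above, regressing market excess returns on lagged implied variance, can be sketched with a plain OLS fit. The six monthly observations below are hypothetical numbers invented purely to illustrate the mechanics; the resulting slope says nothing about the actual Eurostoxx50 estimates.

```python
def ols_slope(x, y):
    """Intercept and slope of y = a + b*x by ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) /
         sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical monthly data: lagged implied market variance (regressor)
# against realized market excess return (response)
implied_var = [0.010, 0.020, 0.015, 0.030, 0.025, 0.012]
excess_ret = [0.008, 0.015, 0.010, 0.022, 0.018, 0.009]
a, b = ols_slope(implied_var, excess_ret)
print(round(a, 4), round(b, 3))
```

    A positive and significant slope b would be consistent with the ICAPM prediction that expected market risk is priced in the market risk premium.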

  14. Technical Evaluation of the NASA Model for Cancer Risk to Astronauts Due to Space Radiation

    NASA Technical Reports Server (NTRS)

    2012-01-01

    At the request of NASA, the National Research Council's (NRC's) Committee for Evaluation of Space Radiation Cancer Risk Model reviewed a number of changes that NASA proposes to make to its model for estimating the risk of radiation-induced cancer in astronauts. The NASA model in current use was last updated in 2005, and the proposed model would incorporate recent research directed at improving the quantification and understanding of the health risks posed by the space radiation environment. NASA's proposed model is defined by the 2011 NASA report Space Radiation Cancer Risk Projections and Uncertainties–2010. The committee's evaluation is based primarily on this source, which is referred to hereafter as the 2011 NASA report, with mention of specific sections or tables. The overall process for estimating cancer risks due to low linear energy transfer (LET) radiation exposure has been fully described in reports by a number of organizations, and the approaches described by all of these expert groups are quite similar. NASA's proposed space radiation cancer risk assessment model calculates, as its main output, age- and gender-specific risk of exposure-induced death (REID) for use in the estimation of mission- and astronaut-specific cancer risk. The model also calculates the associated uncertainties in REID. The general approach for estimating risk and uncertainty in the proposed model is broadly similar to that used for the current (2005) NASA model and is based on recommendations by the National Council on Radiation Protection and Measurements.
However, NASA's proposed model has significant changes with respect to the following: the integration of new findings and methods into its components by taking into account newer epidemiological data and analyses, new radiobiological data indicating that quality factors differ for leukemia and solid cancers, an improved method for specifying quality factors in terms of radiation track structure concepts as opposed to the previous approach based on linear energy transfer, the development of a new solar particle event (SPE) model, and the updates to galactic cosmic ray (GCR) and shielding transport models. The newer epidemiological information includes updates to the cancer incidence rates from the life span study (LSS) of the Japanese atomic bomb survivors, transferred to the U.S. population and converted to cancer mortality rates from U.S. population statistics. In addition, the proposed model provides an alternative analysis applicable to lifetime never-smokers (NSs). Details of the uncertainty analysis in the model have also been updated and revised. NASA's proposed model and associated uncertainties are complex in their formulation and as such require a very clear and precise set of descriptions. The committee found the 2011 NASA report challenging to review largely because of the lack of clarity in the model descriptions and derivation of the various parameters used. The committee requested some clarifications from NASA throughout its review and was able to resolve many, but not all, of the ambiguities in the written description.

  15. Application of geostatistics to risk assessment.

    PubMed

    Thayer, William C; Griffith, Daniel A; Goodrum, Philip E; Diamond, Gary L; Hassett, James M

    2003-10-01

    Geostatistics offers two fundamental contributions to environmental contaminant exposure assessment: (1) a group of methods to quantitatively describe the spatial distribution of a pollutant and (2) the ability to improve estimates of the exposure point concentration by exploiting the geospatial information present in the data. The second contribution is particularly valuable when exposure estimates must be derived from small data sets, which is often the case in environmental risk assessment. This article addresses two topics related to the use of geostatistics in human and ecological risk assessments performed at hazardous waste sites: (1) the importance of assessing model assumptions when using geostatistics and (2) the use of geostatistics to improve estimates of the exposure point concentration (EPC) in the limited data scenario. The latter topic is approached here by comparing design-based estimators that are familiar to environmental risk assessors (e.g., Land's method) with geostatistics, a model-based estimator. In this report, we summarize the basics of spatial weighting of sample data, kriging, and geostatistical simulation. We then explore the two topics identified above in a case study, using soil lead concentration data from a Superfund site (a skeet and trap range). We also describe several areas where research is needed to advance the use of geostatistics in environmental risk assessment.

  16. Estimating Skin Cancer Risk: Evaluating Mobile Computer-Adaptive Testing.

    PubMed

    Djaja, Ngadiman; Janda, Monika; Olsen, Catherine M; Whiteman, David C; Chien, Tsair-Wei

    2016-01-22

    Response burden is a major detriment to questionnaire completion rates. Computer adaptive testing may offer advantages over non-adaptive testing, including a reduction in the number of items required for precise measurement. Our aim was to compare the efficiency of non-adaptive testing (NAT) and computer adaptive testing (CAT) facilitated by Partial Credit Model (PCM)-derived calibration to estimate skin cancer risk. We used a random sample from a population-based Australian cohort study of skin cancer risk (N=43,794). All 30 items of the skin cancer risk scale were calibrated with the Rasch PCM. A total of 1000 cases were generated from a normal distribution (mean [SD] 0 [1]) and simulated under three Rasch models with three fixed-item scenarios (dichotomous, rating scale, and partial credit), respectively. We calculated the comparative efficiency and precision of CAT and NAT (shortening of questionnaire length and a count difference number ratio less than 5%, using independent t tests). We found that use of CAT led to a smaller person standard error of the estimated measure than NAT, with substantially higher efficiency but no loss of precision, reducing response burden by 48%, 66%, and 66% for the dichotomous, Rating Scale Model, and PCM models, respectively. CAT-based administration of the skin cancer risk scale could substantially reduce participant burden without compromising measurement precision. A mobile computer adaptive test was developed to help people efficiently assess their skin cancer risk.
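
    A dichotomous-Rasch sketch of the adaptive loop is below; the item bank, EAP grid, and stopping rule are illustrative assumptions (the study calibrated polytomous items with the PCM).

```python
import numpy as np

def rasch_prob(theta, b):
    """Dichotomous Rasch model: P(endorse) given ability theta, difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def eap(responses, difficulties):
    """EAP ability estimate and posterior SE under a standard-normal prior."""
    grid = np.linspace(-4.0, 4.0, 81)
    post = np.exp(-0.5 * grid**2)
    for x, b in zip(responses, difficulties):
        p = rasch_prob(grid, b)
        post = post * (p if x else (1.0 - p))
    post /= post.sum()
    theta = float((grid * post).sum())
    se = float(np.sqrt(((grid - theta) ** 2 * post).sum()))
    return theta, se

def cat_session(true_theta, bank, se_stop=0.45, seed=7):
    """Adaptive loop: ask the max-information item, update EAP, stop at target SE."""
    rng = np.random.default_rng(seed)
    asked, resp = [], []
    theta, se = 0.0, np.inf
    while se > se_stop and len(asked) < len(bank):
        remaining = [i for i in range(len(bank)) if i not in asked]
        p = np.array([rasch_prob(theta, bank[i]) for i in remaining])
        nxt = remaining[int(np.argmax(p * (1.0 - p)))]  # Fisher information p(1-p)
        asked.append(nxt)
        resp.append(int(rng.random() < rasch_prob(true_theta, bank[nxt])))
        theta, se = eap(resp, [bank[i] for i in asked])
    return theta, se, len(asked)
```

    Selecting each item at the maximum of the Fisher information is what shortens the test relative to administering all 30 items.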

  17. Total Ionizing Dose Influence on the Single Event Effect Sensitivity in Samsung 8Gb NAND Flash Memories

    NASA Astrophysics Data System (ADS)

    Edmonds, Larry D.; Irom, Farokh; Allen, Gregory R.

    2017-08-01

    A recent model provides risk estimates for the deprogramming of initially programmed floating gates via prompt charge loss produced by an ionizing radiation environment. The environment can be a mixture of electrons, protons, and heavy ions. The model requires several input parameters. This paper extends the model to include TID effects in the control circuitry by including one additional parameter. Parameters intended to produce conservative risk estimates for the Samsung 8 Gb SLC NAND flash memory are given, subject to some qualifications.

  18. Assessing and Refining Myocardial Infarction Risk Estimation among Patients with Human Immunodeficiency Virus

    PubMed Central

    Feinstein, Matthew J.; Nance, Robin M.; Drozd, Daniel R.; Ning, Hongyan; Delaney, Joseph A.; Heckbert, Susan R.; Budoff, Matthew J.; Mathews, William C.; Kitahata, Mari M.; Saag, Michael S.; Eron, Joseph J.; Moore, Richard D.; Achenbach, Chad J.; Lloyd-Jones, Donald M.; Crane, Heidi M.

    2017-01-01

    Importance Persons with human immunodeficiency virus (HIV) treated with antiretroviral therapy (ART) have improved longevity but are at elevated risk for myocardial infarction (MI) due to common MI risk factors and HIV-specific factors. Despite these elevated MI rates, optimal methods to predict MI risks for HIV-infected persons remain unclear. Objective To determine the extent to which existing and de novo estimation tools predict MI in a multi-center HIV cohort with rigorous MI adjudication. Design We evaluated the performance of standard-of-care and two new data-derived MI risk estimation models in the Centers for AIDS Research Network of Integrated Clinical Systems (CNICS) multi-center prospective clinical cohort. The new risk estimation models were validated in a cohort separate from the derivation cohort. Setting Clinical sites across the U.S. where HIV-infected adults receive medical care in inpatient and outpatient settings. Participants HIV-infected adults receiving care anytime since 1995 at 5 CNICS sites where MIs were adjudicated (N=19829). Exposures Common cardiovascular risk factors, HIV viral load, CD4 count, and medication use were used to calculate predicted event rates. Main Outcome and Measures Observed MI rates over the course of follow-up, scaled to 10 years using an observed prime approach to account for dropout and loss to follow-up prior to 10 years. Results MI rates were higher among blacks, older participants, and participants who were not virally suppressed. The 2013 Pooled Cohort Equations (PCEs), which predict composite rates of MI and stroke, adequately discriminated MI risk (Harrell’s C Statistic = 0.75). Two data-derived models incorporating HIV-specific covariates exhibited weak calibration in a validation sample and did not discriminate risk any better (Harrell’s C Statistic = 0.72 and 0.73) than the PCEs. The PCEs were moderately calibrated in CNICS but predicted consistently lower than observed prime rates of MI. Conclusions and Relevance The PCEs discriminated MI risk and were moderately calibrated in this multi-center HIV cohort. Adding HIV-specific factors did not improve model performance. As HIV-infected cohorts capture and assess outcomes of MI and stroke, the performance of risk estimation tools should be revisited. PMID:28002550

  19. Exploring the Specifications of Spatial Adjacencies and Weights in Bayesian Spatial Modeling with Intrinsic Conditional Autoregressive Priors in a Small-area Study of Fall Injuries

    PubMed Central

    Law, Jane

    2016-01-01

    Intrinsic conditional autoregressive modeling in a Bayesian hierarchical framework has been increasingly applied in small-area ecological studies. This study explores the specification of spatial structure in this Bayesian framework in two aspects: adjacency, i.e., the set of neighbor(s) for each area; and the (spatial) weight for each pair of neighbors. Our analysis was based on a small-area study of fall injuries among people aged 65 and older in Ontario, Canada, which aimed to estimate risks and identify risk factors for such falls. In the case study, we observed incorrect adjacency information caused by deficiencies in the digital map itself. Further, when equal weights were replaced by weights based on a variable of expected count, the range of estimated risks increased, the number of areas with a probability of estimated risk greater than one exceeding different probability thresholds increased, and model fit improved. More importantly, the significance of a risk factor diminished. Further research is recommended to thoroughly investigate different methods of variable weights, quantify the influence of specifications of spatial weights, and develop strategies for better defining the spatial structure of a map in small-area analysis with Bayesian hierarchical spatial modeling. PMID:29546147
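
    The adjacency and weight choices discussed above enter the ICAR prior through its precision matrix. A minimal sketch of building Q = D - W from an adjacency list, with optional variable weights, follows; the three-area map in the usage test is hypothetical.

```python
import numpy as np

def icar_precision(adjacency, weights=None):
    """ICAR precision structure Q = D - W from an adjacency list.
    `weights[(i, j)]` optionally sets a variable weight per neighbor pair;
    the default reproduces the equal (binary) weighting scheme."""
    n = len(adjacency)
    W = np.zeros((n, n))
    for i, nbrs in adjacency.items():
        for j in nbrs:
            W[i, j] = 1.0 if weights is None else weights[(i, j)]
    return np.diag(W.sum(axis=1)) - W
```

    Misspecified adjacencies (e.g., from map deficiencies) change W directly, which is why the paper's findings on adjacency errors and variable weights matter for the fitted risks.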

  20. Overcoming bias in estimating the volume-outcome relationship.

    PubMed

    Tsai, Alexander C; Votruba, Mark; Bridges, John F P; Cebul, Randall D

    2006-02-01

    To examine the effect of hospital volume on 30-day mortality for patients with congestive heart failure (CHF) using administrative and clinical data in conventional regression and instrumental variables (IV) estimation models. The primary data consisted of longitudinal information on comorbid conditions, vital signs, clinical status, and laboratory test results for 21,555 Medicare-insured patients aged 65 years and older hospitalized for CHF in northeast Ohio in 1991-1997. The patient was the primary unit of analysis. We fit a linear probability model to the data to assess the effects of hospital volume on patient mortality within 30 days of admission. Both administrative and clinical data elements were included for risk adjustment. Linear distances between patients and hospitals were used to construct the instrument, which was then used to assess the endogeneity of hospital volume. When only administrative data elements were included in the risk adjustment model, the estimated volume-outcome effect was statistically significant (p=.029) but small in magnitude. The estimate was markedly attenuated in magnitude and statistical significance when clinical data were added to the model as risk adjusters (p=.39). IV estimation shifted the estimate in a direction consistent with selective referral, but we were unable to reject the consistency of the linear probability estimates. Use of only administrative data for volume-outcomes research may generate spurious findings. The IV analysis further suggests that conventional estimates of the volume-outcome relationship may be contaminated by selective referral effects. Taken together, our results suggest that efforts to concentrate hospital-based CHF care in high-volume hospitals may not reduce mortality among elderly patients.
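
    The instrumental-variables step can be sketched with a generic two-stage least squares estimator. The synthetic data in the usage test (a distance instrument and an unobserved severity factor) is purely illustrative and not derived from the study's data.

```python
import numpy as np

def two_sls(y, X, Z):
    """Two-stage least squares: regress X on instruments Z, then y on fitted X."""
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
    return np.linalg.lstsq(Xhat, y, rcond=None)[0]    # second stage
```

    When an unobserved severity factor drives both hospital volume and mortality, OLS is biased while the distance-instrumented estimate remains consistent, which is the logic behind the paper's selective-referral check.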

  1. A model of the costs of community and nosocomial pediatric respiratory syncytial virus infections in Canadian hospitals

    PubMed Central

    Jacobs, Philip; Lier, Douglas; Gooch, Katherine; Buesch, Katharina; Lorimer, Michelle; Mitchell, Ian

    2013-01-01

    BACKGROUND: Approximately one in 10 hospitalized patients will acquire a nosocomial infection (NI) after admission to hospital, of which 71% are due to respiratory viruses, including the respiratory syncytial virus (RSV). NIs are concerning and lead to prolonged hospitalizations. The economics of NIs are typically described in generalized terms and specific cost data are lacking. OBJECTIVE: To develop an evidence-based model for predicting the risk and cost of nosocomial RSV infection in pediatric settings. METHODS: A model was developed, from a Canadian perspective, to capture all costs related to an RSV infection hospitalization, including the risk and cost of an NI, diagnostic testing and infection control. All data inputs were derived from published literature. Deterministic sensitivity analyses were performed to evaluate the uncertainty associated with the estimates and to explore the impact of changes to key variables. A probabilistic sensitivity analysis was performed to estimate a confidence interval for the overall cost estimate. RESULTS: The estimated cost of nosocomial RSV infection adds approximately 30.5% to the hospitalization costs for the treatment of community-acquired severe RSV infection. The net benefits of the prevention activities were estimated to be equivalent to 9% of the total RSV-related costs. Changes in the estimated hospital infection transmission rates did not have a significant impact on the base-case estimate. CONCLUSIONS: The risk and cost of nosocomial RSV infection contributes to the overall burden of RSV. The present model, which was developed to estimate this burden, can be adapted to other countries with different disease epidemiology, costs and hospital infection transmission rates. PMID:24421788

  2. Estimating Longitudinal Risks and Benefits From Cardiovascular Preventive Therapies Among Medicare Patients: The Million Hearts Longitudinal ASCVD Risk Assessment Tool: A Special Report From the American Heart Association and American College of Cardiology.

    PubMed

    Lloyd-Jones, Donald M; Huffman, Mark D; Karmali, Kunal N; Sanghavi, Darshak M; Wright, Janet S; Pelser, Colleen; Gulati, Martha; Masoudi, Frederick A; Goff, David C

    2017-03-28

    The Million Hearts Initiative has a goal of preventing 1 million heart attacks and strokes-the leading causes of mortality-through several public health and healthcare strategies by 2017. The American Heart Association and American College of Cardiology support the program. The Cardiovascular Risk Reduction Model was developed by Million Hearts and the Center for Medicare & Medicaid Services as a strategy to assess a value-based payment approach toward reduction in 10-year predicted risk of atherosclerotic cardiovascular disease (ASCVD) by implementing cardiovascular preventive strategies to manage the "ABCS" (aspirin therapy in appropriate patients, blood pressure control, cholesterol management, and smoking cessation). The purpose of this special report is to describe the development and intended use of the Million Hearts Longitudinal ASCVD Risk Assessment Tool. The Million Hearts Tool reinforces and builds on the "2013 ACC/AHA Guideline on the Assessment of Cardiovascular Risk" by allowing clinicians to estimate baseline and updated 10-year ASCVD risk estimates for primary prevention patients adhering to the appropriate ABCS over time, alone or in combination. The tool provides updated risk estimates based on evidence from high-quality systematic reviews and meta-analyses of the ABCS therapies. This novel approach to personalized estimation of benefits from risk-reducing therapies in primary prevention may help target therapies to those in whom they will provide the greatest benefit, and serves as the basis for a Center for Medicare & Medicaid Services program designed to evaluate the Million Hearts Cardiovascular Risk Reduction Model. Copyright © 2017 American Heart Association, Inc., and the American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  3. A comparison of imputation techniques for handling missing predictor values in a risk model with a binary outcome.

    PubMed

    Ambler, Gareth; Omar, Rumana Z; Royston, Patrick

    2007-06-01

    Risk models that aim to predict the future course and outcome of disease processes are increasingly used in health research, and it is important that they are accurate and reliable. Most of these risk models are fitted using routinely collected data in hospitals or general practices. Clinical outcomes such as short-term mortality will be near-complete, but many of the predictors may have missing values. A common approach to dealing with this is to perform a complete-case analysis. However, this may lead to overfitted models and biased estimates if entire patient subgroups are excluded. The aim of this paper is to investigate a number of methods for imputing missing data to evaluate their effect on risk model estimation and the reliability of the predictions. Multiple imputation methods, including hotdecking and multiple imputation by chained equations (MICE), were investigated along with several single imputation methods. A large national cardiac surgery database was used to create simulated yet realistic datasets. The results suggest that complete case analysis may produce unreliable risk predictions and should be avoided. Conditional mean imputation performed well in our scenario, but may not be appropriate if using variable selection methods. MICE was amongst the best performing multiple imputation methods with regards to the quality of the predictions. Additionally, it produced the least biased estimates, with good coverage, and hence is recommended for use in practice.

  4. Modeling Joint Exposures and Health Outcomes for Cumulative Risk Assessment: The Case of Radon and Smoking

    PubMed Central

    Chahine, Teresa; Schultz, Bradley D.; Zartarian, Valerie G.; Xue, Jianping; Subramanian, SV; Levy, Jonathan I.

    2011-01-01

    Community-based cumulative risk assessment requires characterization of exposures to multiple chemical and non-chemical stressors, with consideration of how the non-chemical stressors may influence risks from chemical stressors. Residential radon provides an interesting case example, given its large attributable risk, effect modification due to smoking, and significant variability in radon concentrations and smoking patterns. In spite of this fact, no study to date has estimated geographic and sociodemographic patterns of both radon and smoking in a manner that would allow for inclusion of radon in community-based cumulative risk assessment. In this study, we apply multi-level regression models to explain variability in radon based on housing characteristics and geological variables, and construct a regression model predicting housing characteristics using U.S. Census data. Multi-level regression models of smoking based on predictors common to the housing model allow us to link the exposures. We estimate county-average lifetime lung cancer risks from radon ranging from 0.15 to 1.8 in 100, with high-risk clusters in areas and for subpopulations with high predicted radon and smoking rates. Our findings demonstrate the viability of screening-level assessment to characterize patterns of lung cancer risk from radon, with an approach that can be generalized to multiple chemical and non-chemical stressors. PMID:22016710

  5. Risk adjustment model of credit life insurance using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Saputra, A.; Sukono; Rusyaman, E.

    2018-03-01

    In managing the risk of credit life insurance, an insurance company should understand the character of the risks in order to predict future losses. Risk characteristics can be learned from a claim distribution model. There are two standard approaches to designing the distribution model of claims over the insurance period: the collective risk model and the individual risk model. In the collective risk model, a claim that arises when a risk event occurs is called an individual claim, and the accumulation of individual claims during an insurance period is called an aggregate claim. The aggregate claim model may be formed from the number and sizes of the individual claims. We examine how insurance risk can be measured with a premium model approach and whether this approach is appropriate for estimating potential future losses. To solve this problem, a genetic algorithm with roulette wheel selection is used.
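
    The collective risk model described above can be sketched as a short Monte Carlo simulation of the aggregate claim S = X_1 + ... + X_N with N ~ Poisson; the exponential claim-size distribution and all parameter values are illustrative assumptions.

```python
import numpy as np

def aggregate_claims(lam, mean_claim, n_periods=20_000, seed=42):
    """Simulate the aggregate claim per insurance period: S = sum of N
    individual claims, N ~ Poisson(lam), claim sizes exponential (assumed)."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam, n_periods)
    return np.array([rng.exponential(mean_claim, k).sum() for k in counts])
```

    A pure premium is then the mean of S; a loaded premium adds a risk margin, for example a standard-deviation loading.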

  6. Additive mixed effect model for recurrent gap time data.

    PubMed

    Ding, Jieli; Sun, Liuquan

    2017-04-01

    Gap times between recurrent events are often of primary interest in medical and observational studies. The additive hazards model, focusing on risk differences rather than risk ratios, has been widely used in practice. However, the marginal additive hazards model does not take the dependence among gap times into account. In this paper, we propose an additive mixed effect model to analyze gap time data, and the proposed model includes a subject-specific random effect to account for the dependence among the gap times. Estimating equation approaches are developed for parameter estimation, and the asymptotic properties of the resulting estimators are established. In addition, some graphical and numerical procedures are presented for model checking. The finite sample behavior of the proposed methods is evaluated through simulation studies, and an application to a data set from a clinic study on chronic granulomatous disease is provided.
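
    One plausible form of the model described above, stated as an illustration rather than the paper's exact specification, augments a marginal additive hazards structure with a subject-specific random effect:

```latex
\lambda_{ij}(t \mid Z_{ij}, v_i) = \lambda_0(t) + \beta^{\top} Z_{ij}(t) + v_i,
\qquad \mathbb{E}[v_i] = 0,
```

    where \lambda_{ij} is the hazard for the j-th gap time of subject i, \lambda_0 an unspecified baseline hazard, \beta the covariate effects on the risk-difference scale, and v_i a mean-zero random effect inducing dependence among a subject's gap times.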

  7. Interplay Between Genetic Substrate, QTc Duration, and Arrhythmia Risk in Patients With Long QT Syndrome.

    PubMed

    Mazzanti, Andrea; Maragna, Riccardo; Vacanti, Gaetano; Monteforte, Nicola; Bloise, Raffaella; Marino, Maira; Braghieri, Lorenzo; Gambelli, Patrick; Memmi, Mirella; Pagan, Eleonora; Morini, Massimo; Malovini, Alberto; Ortiz, Martin; Sacilotto, Luciana; Bellazzi, Riccardo; Monserrat, Lorenzo; Napolitano, Carlo; Bagnardi, Vincenzo; Priori, Silvia G

    2018-04-17

    Long QT syndrome (LQTS) is a common inheritable arrhythmogenic disorder, often secondary to mutations in the KCNQ1, KCNH2, and SCN5A genes. The disease is characterized by a prolonged ventricular repolarization (QTc interval) that confers susceptibility to life-threatening arrhythmic events (LAEs). This study sought to create an evidence-based risk stratification scheme to personalize the quantification of the arrhythmic risk in patients with LQTS. Data from 1,710 patients with LQTS followed up for a median of 7.1 years (interquartile range [IQR]: 2.7 to 13.4 years) were analyzed to estimate the 5-year risk of LAEs based on QTc duration and genotype and to assess the antiarrhythmic efficacy of beta-blockers. The relationship between QTc duration and risk of events was investigated by comparison of linear and cubic spline models, and the linear model provided the best fit. The 5-year risk of LAEs while patients were off therapy was then calculated in a multivariable Cox model with QTc and genotype considered as independent factors. The estimated risk of LAEs increased by 15% for every 10-ms increment of QTc duration for all genotypes. Intergenotype comparison showed that the risk for patients with LQT2 and LQT3 increased by 130% and 157% at any QTc duration versus patients with LQT1. Analysis of response to beta-blockers showed that only nadolol reduced the arrhythmic risk in all genotypes significantly compared with no therapy (hazard ratio: 0.38; 95% confidence interval: 0.15 to 0.93; p = 0.03). The study provides an estimator of risk of LAEs in LQTS that allows a granular estimate of 5-year arrhythmic risk and demonstrates the superiority of nadolol in reducing the risk of LAEs in LQTS. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
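
    The reported effect sizes imply a simple multiplicative relative-risk score; in the sketch below the reference QTc anchor is an assumption, and the genotype multipliers are taken from the abstract's percentages.

```python
def lqts_relative_risk(qtc_ms, genotype, ref_qtc=440.0):
    """Relative 5-year risk of life-threatening arrhythmic events versus an
    LQT1 patient at ref_qtc. +15% per 10-ms QTc increment for all genotypes;
    LQT2 +130% and LQT3 +157% versus LQT1 at any QTc (per the abstract).
    ref_qtc is an assumed anchor; the output is relative, not absolute, risk."""
    genotype_mult = {"LQT1": 1.0, "LQT2": 2.30, "LQT3": 2.57}
    return genotype_mult[genotype] * 1.15 ** ((qtc_ms - ref_qtc) / 10.0)
```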

  8. Application of Ecosystem Models to Assess Environmental Drivers of Mosquito Abundance and Virus Transmission Risk and Associated Public Health Implications of Climate and Land Use Change

    NASA Astrophysics Data System (ADS)

    Melton, F.; Barker, C.; Park, B.; Reisen, W.; Michaelis, A.; Wang, W.; Hashimoto, H.; Milesi, C.; Hiatt, S.; Nemani, R.

    2008-12-01

    The NASA Terrestrial Observation and Prediction System (TOPS) is a modeling framework that integrates satellite observations, meteorological observations, and ancillary data to support monitoring and modeling of ecosystem and land surface conditions in near real-time. TOPS provides spatially continuous gridded estimates of a suite of measurements describing environmental conditions, and these data products are currently being applied to support the development of new models capable of forecasting estimated mosquito abundance and transmission risk for mosquito-borne diseases such as West Nile virus. We present results from the modeling analyses, describe their incorporation into the California Vectorborne Disease Surveillance System, and describe possible implications of projected climate and land use change for patterns in mosquito abundance and transmission risk for West Nile virus in California.

  9. One vs. Two Breast Density Measures to Predict 5- and 10- Year Breast Cancer Risk

    PubMed Central

    Kerlikowske, Karla; Gard, Charlotte C.; Sprague, Brian L.; Tice, Jeffrey A.; Miglioretti, Diana L.

    2015-01-01

    Background One measure of Breast Imaging Reporting and Data System (BI-RADS) breast density improves 5-year breast cancer risk prediction, but the value of sequential measures is unknown. We determined if two BI-RADS density measures improves the predictive accuracy of the Breast Cancer Surveillance Consortium 5-year risk model compared to one measure. Methods We included 722,654 women aged 35–74 years with two mammograms with BI-RADS density measures on average 1.8 years apart; 13,715 developed invasive breast cancer. We used Cox regression to estimate the relative hazards of breast cancer for age, race/ethnicity, family history of breast cancer, history of breast biopsy, and one or two density measures. We developed a risk prediction model by combining these estimates with 2000–2010 Surveillance, Epidemiology, and End Results incidence and 2010 vital statistics for competing risk of death. Results The two-measure density model had marginally greater discriminatory accuracy than the one-measure model (AUC=0.640 vs. 0.635). Of 18.6% of women (134,404/722,654) who decreased density categories, 15.4% (20,741/134,404) of women whose density decreased from heterogeneously or extremely dense to a lower density category with one other risk factor had a clinically meaningful increase in 5-year risk from <1.67% with the one-density model to ≥1.67% with the two-density model. Conclusion The two-density model has similar overall discrimination to the one-density model for predicting 5-year breast cancer risk and improves risk classification for women with risk factors and a decrease in density. Impact A two-density model should be considered for women whose density decreases when calculating breast cancer risk. PMID:25824444

  10. One versus Two Breast Density Measures to Predict 5- and 10-Year Breast Cancer Risk.

    PubMed

    Kerlikowske, Karla; Gard, Charlotte C; Sprague, Brian L; Tice, Jeffrey A; Miglioretti, Diana L

    2015-06-01

    One measure of Breast Imaging Reporting and Data System (BI-RADS) breast density improves 5-year breast cancer risk prediction, but the value of sequential measures is unknown. We determined whether two BI-RADS density measures improve the predictive accuracy of the Breast Cancer Surveillance Consortium 5-year risk model compared with one measure. We included 722,654 women of ages 35 to 74 years with two mammograms with BI-RADS density measures on average 1.8 years apart; 13,715 developed invasive breast cancer. We used Cox regression to estimate the relative hazards of breast cancer for age, race/ethnicity, family history of breast cancer, history of breast biopsy, and one or two density measures. We developed a risk prediction model by combining these estimates with 2000-2010 Surveillance, Epidemiology, and End Results incidence and 2010 vital statistics for competing risk of death. The two-measure density model had marginally greater discriminatory accuracy than the one-measure model (AUC, 0.640 vs. 0.635). Of 18.6% of women (134,404 of 722,654) who decreased density categories, 15.4% (20,741 of 134,404) of women whose density decreased from heterogeneously or extremely dense to a lower density category with one other risk factor had a clinically meaningful increase in 5-year risk from <1.67% with the one-density model to ≥1.67% with the two-density model. The two-density model has similar overall discrimination to the one-density model for predicting 5-year breast cancer risk and improves risk classification for women with risk factors and a decrease in density. A two-density model should be considered for women whose density decreases when calculating breast cancer risk. ©2015 American Association for Cancer Research.
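
    The general recipe of combining Cox relative hazards with a baseline cumulative hazard can be sketched as follows. This simplification ignores the competing risk of death that the BCSC model incorporates, and the numbers in the usage test are illustrative, not model coefficients.

```python
import math

def absolute_risk(baseline_cum_hazard, hazard_ratios):
    """5-year absolute risk from a baseline 5-year cumulative hazard and a set
    of Cox relative hazards (multiplied together)."""
    hr = math.prod(hazard_ratios)
    return 1.0 - math.exp(-baseline_cum_hazard * hr)
```

    The clinically meaningful threshold in the abstract corresponds to `absolute_risk(...) >= 0.0167`.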

  11. Health-seeking behavior and transmission dynamics in the control of influenza infection among different age groups

    PubMed Central

    You, Shu-Han; Chen, Szu-Chieh; Liao, Chung-Min

    2018-01-01

    Background It has been found that health-seeking behavior has a certain impact on influenza infection. However, the effect of behaviors with/without risk perception on the control of influenza transmission among age groups has not been well quantified. Objectives The purpose of this study was to assess the extent to which age-specific, network-driven risk perception influences influenza infection under scenarios with and without control and with preventive/protective behaviors. Materials and methods A behavior-influenza model was used to estimate the spread rate of age-specific risk perception in response to an influenza outbreak. A network-based information model was used to assess the effect of network-driven risk perception information transmission on influenza infection. A probabilistic risk model was used to assess the infection risk effect of risk perception with a health behavior change. Results The age-specific overlapping percentage was estimated to be 40%–43%, 55%–60%, and 19%–35% for the child, teenage and adult, and elderly age groups, respectively. Perceived preventive behavior improved risk perception information transmission in the teenage and adult and elderly age groups, but not in the child age group. The population with perceived health behaviors could not effectively decrease the infection risk in the child age group, whereas for the elderly age group the decrease in infection risk was more significant, with a 97.5th percentile estimate of 97%. Conclusion The present integrated behavior-infection model can help health authorities communicate health messages for an intertwined belief network in which health-seeking behavior plays a key role in controlling influenza infection. PMID:29563814

  12. STakeholder-Objective Risk Model (STORM): Determining the aggregated risk of multiple contaminant hazards in groundwater well catchments

    NASA Astrophysics Data System (ADS)

    Enzenhoefer, R.; Binning, P. J.; Nowak, W.

    2015-09-01

    Risk is often defined as the product of probability, vulnerability and value. Drinking water supply from groundwater abstraction is often at risk due to multiple hazardous land use activities in the well catchment. Each hazard might or might not introduce contaminants into the subsurface at any point in time, which then affects the pumped quality upon transport through the aquifer. In such situations, estimating the overall risk is not trivial, and three key questions emerge: (1) How to aggregate the impacts from different contaminants and spill locations to an overall, cumulative impact on the value at risk? (2) How to properly account for the stochastic nature of spill events when converting the aggregated impact to a risk estimate? (3) How will the overall risk and subsequent decision making depend on stakeholder objectives, where stakeholder objectives refer to the values at risk, risk attitudes and risk metrics that can vary between stakeholders? In this study, we provide a STakeholder-Objective Risk Model (STORM) for assessing the total aggregated risk. Our concept is a quantitative, probabilistic and modular framework for simulation-based risk estimation. It rests on the source-pathway-receptor concept, uses mass-discharge-based aggregation of stochastically occurring spill events, accounts for uncertainties in the involved flow and transport models through Monte Carlo simulation, and can address different stakeholder objectives. We illustrate the application of STORM in a numerical test case inspired by a German drinking water catchment. As one may expect, the results depend strongly on the chosen stakeholder objectives, but they are equally sensitive to different approaches for risk aggregation across different hazards, contaminant types, and over time.
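
    The aggregation logic (stochastic spill events, additive mass-discharge impacts, a stakeholder-chosen threshold) can be sketched as a small Monte Carlo routine; all distributions and parameters here are illustrative and not taken from the German test case.

```python
import numpy as np

def storm_like_risk(p_spill, mean_impact, threshold, n=50_000, seed=1):
    """Fraction of simulated periods in which the aggregated impact at the well
    exceeds a stakeholder-defined threshold. Hazards spill independently with
    probability p_spill; impacts are exponential and add (mass-discharge style)."""
    rng = np.random.default_rng(seed)
    k = len(p_spill)
    spills = rng.random((n, k)) < np.asarray(p_spill)
    impacts = spills * rng.exponential(mean_impact, (n, k))
    return float((impacts.sum(axis=1) > threshold).mean())
```

    Different stakeholder objectives would enter through the threshold, the impact metric, and the chosen risk statistic (mean exceedance probability here).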

  13. A quantitative risk assessment model for Vibrio parahaemolyticus in raw oysters in Sao Paulo State, Brazil.

    PubMed

    Sobrinho, Paulo de S Costa; Destro, Maria T; Franco, Bernadette D G M; Landgraf, Mariza

    2014-06-16

    A risk assessment of Vibrio parahaemolyticus associated with raw oysters produced and consumed in São Paulo State was developed. The model was built according to the United States Food and Drug Administration framework for risk assessment. The outcome of the exposure assessment estimated the prevalence and density of pathogenic V. parahaemolyticus in raw oysters from harvest to consumption. The result of the exposure step was combined with a Beta-Poisson dose-response model to estimate the probability of illness. The model predicted that the average risks per serving of raw oysters were 4.7×10⁻⁴, 6.0×10⁻⁴, 4.7×10⁻⁴ and 3.1×10⁻⁴ for spring, summer, fall and winter, respectively. Sensitivity analyses indicated that the most influential variables on the risk of illness were the total density of V. parahaemolyticus at harvest, transport temperature, relative prevalence of pathogenic strains and storage time at retail. Only storage time under refrigeration at retail showed negative correlation with the risk of illness. Copyright © 2014 Elsevier B.V. All rights reserved.
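
    The Beta-Poisson dose-response step uses the familiar approximate closed form; the alpha and beta defaults below are hypothetical placeholders, not the parameters used in the assessment.

```python
def beta_poisson_risk(dose, alpha=0.3, beta=1.0e5):
    """Approximate Beta-Poisson dose-response: P(illness | ingested dose).
    alpha and beta here are hypothetical placeholder values."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)
```

    Averaging this function over a Monte Carlo distribution of per-serving doses (pathogen density at consumption times grams consumed) yields risk-per-serving figures of the kind reported above.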

  14. Development and validation of a Markov microsimulation model for the economic evaluation of treatments in osteoporosis.

    PubMed

    Hiligsmann, Mickaël; Ethgen, Olivier; Bruyère, Olivier; Richy, Florent; Gathon, Henry-Jean; Reginster, Jean-Yves

    2009-01-01

    Markov models are increasingly used in economic evaluations of treatments for osteoporosis. Most of the existing evaluations are cohort-based Markov models missing comprehensive memory management and versatility. In this article, we describe and validate an original Markov microsimulation model to accurately assess the cost-effectiveness of prevention and treatment of osteoporosis. We developed a Markov microsimulation model with a lifetime horizon and a direct health-care cost perspective. The patient history was recorded and was used in calculations of transition probabilities, utilities, and costs. To test the internal consistency of the model, we carried out an example calculation for alendronate therapy. Then, external consistency was investigated by comparing absolute lifetime risk of fracture estimates with epidemiologic data. For women at age 70 years, with a twofold increase in the fracture risk of the average population, the costs per quality-adjusted life-year gained for alendronate therapy versus no treatment were estimated at €9105 and €15,325, respectively, under full and realistic adherence assumptions. All the sensitivity analyses in terms of model parameters and modeling assumptions were coherent with expected conclusions and absolute lifetime risk of fracture estimates were within the range of previous estimates, which confirmed both internal and external consistency of the model. Microsimulation models present some major advantages over cohort-based models, increasing the reliability of the results and being largely compatible with the existing state of the art, evidence-based literature. The developed model appears to be a valid model for use in economic evaluations in osteoporosis.
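
    The microsimulation idea (individual trajectories whose transition probabilities depend on recorded patient history) can be sketched with a three-state toy model; every probability and utility below is a hypothetical placeholder, not a value from the paper.

```python
import numpy as np

def mean_qalys(n=10_000, years=30, p_frac=0.02, rr_treated=0.5,
               p_die=0.03, p_die_post=0.05, treated=False, seed=0):
    """Annual-cycle microsimulation: states 0 = well, 1 = post-fracture, 2 = dead.
    Fracture history raises subsequent mortality and lowers utility,
    illustrating the 'memory' that cohort Markov models lack."""
    rng = np.random.default_rng(seed)
    state = np.zeros(n, dtype=int)
    util = np.array([1.0, 0.8, 0.0])
    qaly = np.zeros(n)
    frac_p = p_frac * (rr_treated if treated else 1.0)
    for _ in range(years):
        qaly += util[state]
        r1, r2 = rng.random(n), rng.random(n)
        state = np.where((state == 0) & (r1 < frac_p), 1, state)
        die_p = np.where(state == 1, p_die_post,
                         np.where(state == 0, p_die, 0.0))
        state = np.where(r2 < die_p, 2, state)
    return qaly.mean()
```

    A cost-effectiveness ratio would compare the QALY gain of the treated arm against its incremental cost.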

  15. The estimated effect of mass or footprint reduction in recent light-duty vehicles on U.S. societal fatality risk per vehicle mile traveled.

    PubMed

    Wenzel, Tom

    2013-10-01

    The National Highway Traffic Safety Administration (NHTSA) recently updated its 2003 and 2010 logistic regression analyses of the effect of a reduction in light-duty vehicle mass on US societal fatality risk per vehicle mile traveled (VMT; Kahane, 2012). Societal fatality risk includes the risk to the occupants of the case vehicle as well as to any crash partner or pedestrians. The current analysis is the most thorough investigation of this issue to date. This paper replicates the Kahane analysis and extends it by testing the sensitivity of his results to changes in the definition of risk, and in the data and control variables used in the regression models. An assessment by Lawrence Berkeley National Laboratory (LBNL) indicates that the estimated effect of mass reduction on risk is smaller than in Kahane's previous studies, and is statistically non-significant for all but the lightest cars (Wenzel, 2012a). The estimated effects of a reduction in mass or footprint (i.e. wheelbase times track width) are small relative to those of other vehicle, driver, and crash variables used in the regression models. The recent historical correlation between mass and footprint is not so large as to prohibit including both variables in the same regression model; excluding footprint from the model, i.e. allowing footprint to decrease with mass, increases the estimated detrimental effect of mass reduction on risk in cars and crossover utility vehicles (CUVs)/minivans, but has virtually no effect on light trucks. Analysis by footprint deciles indicates that risk does not consistently increase with reduced mass for vehicles of similar footprint. Finally, the estimated effects of mass and footprint reduction are sensitive to the measure of exposure used (fatalities per induced-exposure crash, rather than per VMT), as well as to other changes in the data or control variables used. It appears that the safety penalty from lower mass can be mitigated with careful vehicle design, and that manufacturers can reduce mass as a strategy to increase their vehicles' fuel economy and reduce greenhouse gas emissions without necessarily compromising societal safety. Published by Elsevier Ltd.

  16. Quantifying the predictive accuracy of time-to-event models in the presence of competing risks.

    PubMed

    Schoop, Rotraut; Beyersmann, Jan; Schumacher, Martin; Binder, Harald

    2011-02-01

    Prognostic models for time-to-event data play a prominent role in therapy assignment, risk stratification and inter-hospital quality assurance. The assessment of their prognostic value is vital not only for responsible resource allocation, but also for their widespread acceptance. The additional presence of competing risks to the event of interest requires proper handling not only on the model building side, but also during assessment. Research into methods for the evaluation of the prognostic potential of models accounting for competing risks is still needed, as most proposed methods measure either their discrimination or calibration, but do not examine both simultaneously. We adapt the prediction error proposal of Graf et al. (Statistics in Medicine 1999, 18, 2529–2545) and Gerds and Schumacher (Biometrical Journal 2006, 48, 1029–1040) to handle models with competing risks, i.e. more than one possible event type, and introduce a consistent estimator. A simulation study investigating the behaviour of the estimator in small sample size situations and for different levels of censoring together with a real data application follows.
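The prediction-error measure being adapted is the time-dependent Brier score: the mean squared difference between the observed event status at a horizon t and the model's predicted cumulative incidence. The sketch below illustrates this for the cause of interest under competing risks, ignoring censoring for brevity (the cited adaptation additionally reweights by the inverse probability of censoring); all hazard values are illustrative.

```python
import math, random

# Time-dependent Brier score sketch for competing risks (censoring
# ignored for brevity). Cause-specific hazards h1, h2 are illustrative.

h1, h2 = 0.10, 0.05   # hypothetical constant cause-specific hazards
t_star = 5.0          # prediction horizon (years)

def cif1(t):
    """True cumulative incidence of cause 1 under constant hazards."""
    h = h1 + h2
    return h1 / h * (1.0 - math.exp(-h * t))

rng = random.Random(1)
n = 20_000
obs = []
for _ in range(n):
    t = rng.expovariate(h1 + h2)                        # time of first event
    cause = 1 if rng.random() < h1 / (h1 + h2) else 2   # which event occurred
    obs.append(1.0 if (t <= t_star and cause == 1) else 0.0)

pred = cif1(t_star)  # model-based predicted cumulative incidence at t_star
brier_model = sum((y - pred) ** 2 for y in obs) / n
brier_null = sum((y - 0.5) ** 2 for y in obs) / n  # uninformative predictor
```

A well-calibrated, discriminating model yields a lower Brier score than the uninformative benchmark, which is why the score captures calibration and discrimination simultaneously.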

  17. Advances in Inhalation Dosimetry Models and Methods for Occupational Risk Assessment and Exposure Limit Derivation

    PubMed Central

    Kuempel, Eileen D.; Sweeney, Lisa M.; Morris, John B.; Jarabek, Annie M.

    2015-01-01

    The purpose of this article is to provide an overview and practical guide to occupational health professionals concerning the derivation and use of dose estimates in risk assessment for development of occupational exposure limits (OELs) for inhaled substances. Dosimetry is the study and practice of measuring or estimating the internal dose of a substance in individuals or a population. Dosimetry thus provides an essential link to understanding the relationship between an external exposure and a biological response. Use of dosimetry principles and tools can improve the accuracy of risk assessment, and reduce the uncertainty, by providing reliable estimates of the internal dose at the target tissue. This is accomplished through specific measurement data or predictive models, when available, or the use of basic dosimetry principles for broad classes of materials. Accurate dose estimation is essential not only for dose-response assessment, but also for interspecies extrapolation and for risk characterization at given exposures. Inhalation dosimetry is the focus of this paper since it is a major route of exposure in the workplace. Practical examples of dose estimation and OEL derivation are provided for inhaled gases and particulates. PMID:26551218
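The kind of internal-dose calculation the article describes can be as simple as combining airborne concentration, ventilation rate, exposure duration, and a regional deposition fraction. A minimal sketch follows; every numerical value is an illustrative assumption, not a value from the article.

```python
# Basic inhaled-dose estimate of the kind used in OEL derivation.
# All numerical values below are illustrative assumptions.

c_air = 0.5          # airborne concentration, mg/m^3 (assumed)
vent_rate = 1.25     # ventilation rate during light work, m^3/h (assumed)
hours = 8.0          # work-shift duration, h/day
dep_fraction = 0.3   # assumed fraction depositing in the target region
body_weight = 70.0   # reference body weight, kg

deposited_dose = c_air * vent_rate * hours * dep_fraction  # mg/day
dose_per_kg = deposited_dose / body_weight                 # mg/kg-day
```

The deposition fraction is the dosimetric step: it is what predictive models or measurement data refine, and it drives both interspecies extrapolation and the link from external exposure to target-tissue dose.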

  18. EVALUATING RISK-PREDICTION MODELS USING DATA FROM ELECTRONIC HEALTH RECORDS.

    PubMed

    Wang, L E; Shaw, Pamela A; Mathelier, Hansie M; Kimmel, Stephen E; French, Benjamin

    2016-03-01

    The availability of data from electronic health records facilitates the development and evaluation of risk-prediction models, but estimation of prediction accuracy could be limited by outcome misclassification, which can arise if events are not captured. We evaluate the robustness of prediction accuracy summaries, obtained from receiver operating characteristic curves and risk-reclassification methods, if events are not captured (i.e., "false negatives"). We derive estimators for sensitivity and specificity if misclassification is independent of marker values. In simulation studies, we quantify the potential for bias in prediction accuracy summaries if misclassification depends on marker values. We compare the accuracy of alternative prognostic models for 30-day all-cause hospital readmission among 4548 patients discharged from the University of Pennsylvania Health System with a primary diagnosis of heart failure. Simulation studies indicate that if misclassification depends on marker values, then the estimated accuracy improvement is also biased, but the direction of the bias depends on the direction of the association between markers and the probability of misclassification. In our application, 29% of the 1143 readmitted patients were readmitted to a hospital elsewhere in Pennsylvania, which reduced prediction accuracy. Outcome misclassification can result in erroneous conclusions regarding the accuracy of risk-prediction models.
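The paper's central distinction can be seen in a small simulation: when events are missed with a probability that does not depend on the marker, sensitivity at a fixed threshold is essentially unaffected, while specificity is biased because missed events (which tend to have high marker values) are counted among the non-events. The parameters below are illustrative.

```python
import random

# Simulation sketch: outcome misclassification independent of the marker.
# Events are missed ("false negatives") with probability q regardless of
# marker value. All parameter values are illustrative.

rng = random.Random(3)
n, q, cutoff = 50_000, 0.3, 1.0

true_pos = obs_pos = 0    # marker >= cutoff among (true / observed) events
true_neg = obs_neg = 0    # marker <  cutoff among (true / observed) non-events
n_true_event = n_obs_event = 0

for _ in range(n):
    event = rng.random() < 0.2
    marker = rng.gauss(1.5 if event else 0.0, 1.0)
    captured = event and rng.random() >= q       # was the event recorded?
    if event:
        n_true_event += 1
        true_pos += marker >= cutoff
    else:
        true_neg += marker < cutoff
    if captured:
        n_obs_event += 1
        obs_pos += marker >= cutoff
    else:                                        # non-events + missed events
        obs_neg += marker < cutoff

sens_true = true_pos / n_true_event
sens_obs = obs_pos / n_obs_event
spec_true = true_neg / (n - n_true_event)
spec_obs = obs_neg / (n - n_obs_event)
```

When the probability of being missed instead depends on the marker, as in the paper's second scenario, both quantities become biased and the direction of the bias follows the direction of that dependence.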

  19. Health effects models for nuclear power plant accident consequence analysis: Low LET radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, J.S.

    1990-01-01

    This report describes dose-response models intended to be used in estimating the radiological health effects of nuclear power plant accidents. Models of early and continuing effects, cancers and thyroid nodules, and genetic effects are provided. Weibull dose-response functions are recommended for evaluating the risks of early and continuing health effects. Three potentially lethal early effects -- the hematopoietic, pulmonary, and gastrointestinal syndromes -- are considered. In addition, models are included for assessing the risks of several nonlethal early and continuing effects -- including prodromal vomiting and diarrhea, hypothyroidism and radiation thyroiditis, skin burns, reproductive effects, and pregnancy losses. Linear and linear-quadratic models are recommended for estimating cancer risks. Parameters are given for analyzing the risks of seven types of cancer in adults -- leukemia, bone, lung, breast, gastrointestinal, thyroid, and "other." The category of "other" cancers is intended to reflect the combined risks of multiple myeloma, lymphoma, and cancers of the bladder, kidney, brain, ovary, uterus and cervix. Models of childhood cancers due to in utero exposure are also developed. For most cancers, both incidence and mortality are addressed. The models of cancer risk are derived largely from information summarized in BEIR III -- with some adjustment to reflect more recent studies. 64 refs., 18 figs., 46 tabs.
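A Weibull dose-response function for early (deterministic) effects is commonly written as risk(D) = 1 − exp(−ln(2)·(D/D50)^V), where D50 is the dose at which half of those exposed experience the effect and V is a shape parameter. The sketch below uses that general form; the D50 and V values are illustrative, not the report's fitted parameters.

```python
import math

# Weibull dose-response sketch for an early (deterministic) effect:
#   risk(D) = 1 - exp(-ln(2) * (D / D50)**V)
# D50 and the shape parameter V below are illustrative assumptions.

def weibull_risk(dose_gy, d50=3.0, shape=5.0):
    """Probability of the early effect at a given absorbed dose (Gy)."""
    if dose_gy <= 0:
        return 0.0
    return 1.0 - math.exp(-math.log(2.0) * (dose_gy / d50) ** shape)

risk_at_d50 = weibull_risk(3.0)   # by construction, exactly 0.5 at D50
risk_low = weibull_risk(1.0)      # steep function: near zero at low dose
risk_high = weibull_risk(6.0)     # near one well above D50
```

The large shape parameter gives the steep, threshold-like curve characteristic of deterministic effects, in contrast to the linear and linear-quadratic forms used for cancer risk.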

  20. Estimating community health needs against a Triple Aim background: What can we learn from current predictive risk models?

    PubMed

    Elissen, Arianne M J; Struijs, Jeroen N; Baan, Caroline A; Ruwaard, Dirk

    2015-05-01

    To support providers and commissioners in accurately assessing their local populations' health needs, this study produces an overview of Dutch predictive risk models for health care, focusing specifically on the type, combination and relevance of included determinants for achieving the Triple Aim (improved health, better care experience, and lower costs). We conducted a mixed-methods study combining document analyses, interviews and a Delphi study. Predictive risk models were identified based on a web search and expert input. Participating in the study were Dutch experts in predictive risk modelling (interviews; n=11) and experts in healthcare delivery, insurance and/or funding methodology (Delphi panel; n=15). Ten predictive risk models were analysed, comprising 17 unique determinants. Twelve were considered relevant by experts for estimating community health needs. Although some compositional similarities were identified between models, the combination and operationalisation of determinants varied considerably. Existing predictive risk models provide a good starting point, but optimally balancing resources and targeting interventions on the community level will likely require a more holistic approach to health needs assessment. Development of additional determinants, such as measures of people's lifestyle and social network, may require policies pushing the integration of routine data from different (healthcare) sources. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  1. How rapidly does the excess risk of lung cancer decline following quitting smoking? A quantitative review using the negative exponential model.

    PubMed

    Fry, John S; Lee, Peter N; Forey, Barbara A; Coombs, Katharine J

    2013-10-01

    The excess lung cancer risk from smoking declines with time quit, but the shape of the decline has never been precisely modelled or meta-analyzed. From a database of studies with at least 100 cases, we extracted 106 blocks of RRs (from 85 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls (or at-risk) formed the data for fitting the negative exponential model. We estimated the half-life (H, time in years when the excess risk becomes half that for a continuing smoker) for each block, investigated model fit, and studied heterogeneity in H. We also conducted sensitivity analyses allowing for reverse causation, either ignoring short-term quitters (S1) or considering them smokers (S2). Model fit was poor ignoring reverse causation, but much improved for both sensitivity analyses. Estimates of H were similar for all three analyses. For the best-fitting analysis (S1), H was 9.93 (95% CI 9.31-10.60), but varied by sex (females 7.92, males 10.71) and age (<50 years 6.98, 70+ years 12.99). Provided that reverse causation is taken into account, the model adequately describes the decline in excess risk. However, estimates of H may be biased by factors including misclassification of smoking status. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
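The negative exponential model described above says the excess risk (RR − 1) of a former smoker decays with half-life H, so the relative risk at t years quit is RR(t) = 1 + (RR_current − 1)·exp(−ln(2)·t/H). The sketch below uses the abstract's best-fitting H = 9.93 years; the current-smoker RR is an illustrative value.

```python
import math

# Negative exponential model of declining excess risk after quitting:
#   RR(t) = 1 + (RR_current - 1) * exp(-ln(2) * t / H)
# H is the half-life from the abstract; RR_CURRENT is illustrative.

H = 9.93           # half-life of excess risk, years (best-fitting S1 estimate)
RR_CURRENT = 10.0  # illustrative relative risk for continuing smokers

def rr_former(years_quit):
    """Relative risk (vs never smokers) after quitting for t years."""
    decay = math.exp(-math.log(2.0) * years_quit / H)
    return 1.0 + (RR_CURRENT - 1.0) * decay

rr_at_half_life = rr_former(H)   # excess risk halves: 1 + 9 * 0.5 = 5.5
rr_long_quit = rr_former(40.0)   # approaches the never-smoker RR of 1
```

At t = H the excess risk is exactly half the continuing-smoker excess, which is the operational meaning of the half-life estimated in the meta-analysis.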

  2. Transferability and robustness of real-time freeway crash risk assessment.

    PubMed

    Shew, Cameron; Pande, Anurag; Nuworsoo, Cornelius

    2013-09-01

    This study examines the data from single loop detectors on northbound (NB) US-101 in San Jose, California to estimate real-time crash risk assessment models. The classification tree and neural network based crash risk assessment models developed with data from NB US-101 are applied to data from the same freeway, as well as to the data from nearby segments of the SB US-101, NB I-880, and SB I-880 corridors. The performance of crash risk assessment models on these nearby segments is the focus of this research. The model applications show that it is in fact possible to use the same model for multiple freeways, as the underlying relationships between traffic data and crash risk remain similar. The framework provided here may be helpful to authorities for freeway segments with newly installed traffic surveillance apparatuses, since the real-time crash risk assessment models from nearby freeways with existing infrastructure would be able to provide a reasonable estimate of crash risk. The robustness of the model output is also assessed by location, time of day, and day of week. The analysis shows that at some locations the models may require further learning due to higher-than-expected false positive (e.g., at the I-680/I-280 interchange on US-101 NB) or false negative rates. The approach for post-processing the results from the model provides ideas to refine the model prior to or during implementation. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.

  3. Assessment of ecologic regression in the study of lung cancer and indoor radon.

    PubMed

    Stidley, C A; Samet, J M

    1994-02-01

    Ecologic regression studies conducted to assess the cancer risk of indoor radon to the general population are subject to methodological limitations, and they have given seemingly contradictory results. The authors use simulations to examine the effects of two major methodological problems that affect these studies: measurement error and misspecification of the risk model. In a simulation study of the effect of measurement error caused by the sampling process used to estimate radon exposure for a geographic unit, both the effect of radon and the standard error of the effect estimate were underestimated, with greater bias for smaller sample sizes. In another simulation study, which addressed the consequences of uncontrolled confounding by cigarette smoking, even small negative correlations between county geometric mean annual radon exposure and the proportion of smokers resulted in negative average estimates of the radon effect. A third study considered consequences of using simple linear ecologic models when the true underlying model relation between lung cancer and radon exposure is nonlinear. These examples quantify potential biases and demonstrate the limitations of estimating risks from ecologic studies of lung cancer and indoor radon.
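The first simulation result above, underestimation of the radon effect when area exposure is estimated from a small sample of homes, is classical attenuation from measurement error. The sketch below reproduces the mechanism with illustrative parameters: regressing disease rates on sample-based exposure means yields a slope pulled toward zero.

```python
import random

# Simulation sketch of attenuation bias in an ecologic regression when
# each area's radon exposure is estimated from a small sample of homes.
# All parameter values are illustrative.

rng = random.Random(7)
beta_true = 2.0              # true slope: rate increase per unit radon
n_areas, homes_per_area = 200, 5
sigma_within = 2.0           # home-to-home radon variation within an area

xs_obs, ys = [], []
for _ in range(n_areas):
    true_mean = rng.uniform(1.0, 5.0)           # area's true mean exposure
    # observed exposure = mean of a small sample of homes (with error)
    sample_mean = sum(rng.gauss(true_mean, sigma_within)
                      for _ in range(homes_per_area)) / homes_per_area
    rate = beta_true * true_mean + rng.gauss(0.0, 1.0)
    xs_obs.append(sample_mean)
    ys.append(rate)

# ordinary least squares slope of rate on the error-prone exposure
mx = sum(xs_obs) / n_areas
my = sum(ys) / n_areas
beta_hat = (sum((x - mx) * (y - my) for x, y in zip(xs_obs, ys))
            / sum((x - mx) ** 2 for x in xs_obs))
```

With these values the expected attenuation factor is var(true)/(var(true) + σ²/n) ≈ 0.63, so the fitted slope lands well below the true value of 2, and the bias worsens as the per-area sample shrinks, matching the abstract's finding.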

  4. PREDICTIVE ORGANOPHOSPHORUS (OP) PESTICIDE QSARS AND PBPK/PD MODELS FOR RISK ASSESSMENT OF SUSCEPTIBLE SUB-POPULATIONS

    EPA Science Inventory

    Successful use of the Exposure Related Dose Estimating Model (ERDEM) in risk assessment of susceptible human sub-populations, e.g., infants and children, requires input of quality experimental data. In the clear absence of quality data, PBPK models can be developed and possibl...

  5. Repeated holdout Cross-Validation of Model to Estimate Risk of Lyme Disease by Landscape Attributes

    EPA Science Inventory

    We previously modeled Lyme disease (LD) risk at the landscape scale; here we evaluate the model's overall goodness-of-fit using holdout validation. Landscapes were characterized within road-bounded analysis units (AU). Observed LD cases (obsLD) were ascertained per AU. Data were ...

  6. Probability of Causation for Space Radiation Carcinogenesis Following International Space Station, Near Earth Asteroid, and Mars Missions

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Kim, Myung-Hee Y.; Chappell, Lori J.

    2012-01-01

    Cancer risk is an important concern for International Space Station (ISS) missions and future exploration missions. An important question concerns the likelihood of a causal association between a crew member's radiation exposure and the occurrence of cancer. The probability of causation (PC), also denoted as attributable risk, is used to make such an estimate. This report summarizes the NASA model of space radiation cancer risks and uncertainties, including improvements to represent uncertainties in tissue-specific cancer incidence models for never-smokers and the U.S. average population. We report on tissue-specific cancer incidence estimates and PC for different post-mission times for ISS and exploration missions. An important conclusion from our analysis is that the NASA policy to limit the risk of exposure-induced death to 3% at the 95% confidence level largely ensures that estimates of the PC for most cancer types would not reach a level of significance. Reducing uncertainties through radiobiological research remains the most efficient method to extend mission length and establish effective mitigators for cancer risks. Efforts to establish biomarkers of space radiation-induced tumors and to estimate PC for rarer tumor types are briefly discussed.
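The probability of causation has a standard definition in terms of the excess relative risk (ERR) attributable to the exposure: PC = ERR/(1 + ERR). A minimal sketch, with illustrative ERR values:

```python
# Probability of causation from its standard definition:
#   PC = ERR / (1 + ERR)
# where ERR is the excess relative risk attributable to the exposure.
# The ERR inputs below are illustrative values.

def probability_of_causation(err):
    """PC (attributable fraction among the exposed) from excess relative risk."""
    return err / (1.0 + err)

pc_small = probability_of_causation(0.1)  # ERR of 10% -> PC of about 9%
pc_unit = probability_of_causation(1.0)   # a doubling of risk -> PC = 50%
```

This is why small PC estimates follow from NASA's 3% risk limit: capping the risk of exposure-induced death indirectly caps the ERR, and hence the PC, for most individual cancer types.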

  7. Comparison of cluster-based and source-attribution methods for estimating transmission risk using large HIV sequence databases.

    PubMed

    Le Vu, Stéphane; Ratmann, Oliver; Delpech, Valerie; Brown, Alison E; Gill, O Noel; Tostevin, Anna; Fraser, Christophe; Volz, Erik M

    2018-06-01

    Phylogenetic clustering of HIV sequences from a random sample of patients can reveal epidemiological transmission patterns, but interpretation is hampered by limited theoretical support, and the statistical properties of clustering analysis remain poorly understood. Alternatively, source attribution methods allow fitting of HIV transmission models and thereby quantify aspects of disease transmission. A simulation study was conducted to assess error rates of clustering methods for detecting transmission risk factors. We modeled HIV epidemics among men who have sex with men and generated phylogenies comparable to those that can be obtained from HIV surveillance data in the UK. Clustering and source attribution approaches were applied to evaluate their ability to identify patient attributes as transmission risk factors. We find that commonly used methods show a misleading association between cluster size or odds of clustering and covariates that are correlated with time since infection, regardless of their influence on transmission. Clustering methods usually have higher error rates and lower sensitivity than source attribution methods for identifying transmission risk factors, but neither approach provides robust estimates of transmission risk ratios. Source attribution methods can alleviate the drawbacks of phylogenetic clustering, but formal population genetic modeling may be required to estimate quantitative transmission risk factors. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Analysis of dengue fever risk using geostatistics model in bone regency

    NASA Astrophysics Data System (ADS)

    Amran, Stang, Mallongi, Anwar

    2017-03-01

    This research aims to analyze dengue fever risk based on a geostatistics model in Bone Regency. Risk levels of dengue fever are denoted by the parameter of a Binomial distribution. The effects of temperature, rainfall, elevation, and larvae abundance are investigated through the geostatistics model. A Bayesian hierarchical method is used in the estimation process. Using dengue fever data from eleven locations, this research shows that temperature and rainfall have a significant effect on dengue fever risk in Bone Regency.

  9. Application of the Flood-IMPAT procedure in the Valle d'Aosta Region, Italy

    NASA Astrophysics Data System (ADS)

    Minucci, Guido; Mendoza, Marina Tamara; Molinari, Daniela; Atun, Funda; Menoni, Scira; Ballio, Francesco

    2016-04-01

    Flood Risk Management Plans (FRMPs), established by the European "Floods" Directive (Directive 2007/60/EU), require Member States to address all aspects of flood risk management, taking into account the costs and benefits of proposed mitigation tools; the same Directive requires that the plans be reviewed every six years. This review cycle is aimed at continuously increasing the effectiveness of risk management on the basis of the most advanced knowledge of flood risk and the most (economically) feasible solutions, also taking into consideration the achievements of the previous management cycle. Within this context, the Flood-IMPAT (Integrated Meso-scale Procedure to Assess Territorial flood risk) procedure has been developed with the aim of overcoming the limits of the risk maps produced by the Po River Basin Authority and adopted for the first version of the Po River FRMP. The procedure allows the estimation of flood risk at the meso-scale and is characterized by three main features. The first is its applicability to the entire Italian territory. The second is the possibility of expressing risk in monetary terms (i.e. expected damage), at least for those categories of damage for which suitable models are available. Finally, the procedure is composed of independent modules: each module allows the estimation of a certain type of damage (i.e. direct, indirect, intangible) for a certain sector (e.g. residential, industrial, agriculture, environment) separately, guaranteeing flexibility in the implementation. This paper presents the application of the Flood-IMPAT procedure and recent advancements aimed at increasing its reliability and usability. Through a further implementation of the procedure in the Dora Baltea River Basin (northern Italy), it was possible to test the sensitivity of the risk estimates supplied by Flood-IMPAT with respect to different damage models and different approaches for the estimation of assets at risk. Risk estimates were also compared with observed damage data in the investigated areas to identify the most suitable damage model/exposure assessment approach to be implemented in the procedure. Finally, the procedure was adapted for application at the micro-scale, in such a way as to supply risk estimates that are coherent with those at the meso-scale. This way the procedure can first be implemented across the whole catchment to identify hotspots; the micro-scale approach can then be applied in a second run to investigate in depth (i) the most risk-prone areas and (ii) possible risk mitigation strategies.

  10. Predicting the cumulative risk of death during hospitalization by modeling weekend, weekday and diurnal mortality risks.

    PubMed

    Coiera, Enrico; Wang, Ying; Magrabi, Farah; Concha, Oscar Perez; Gallego, Blanca; Runciman, William

    2014-05-21

    Current prognostic models factor in patient- and disease-specific variables but do not consider the cumulative risks of hospitalization over time. We developed risk models of the likelihood of death associated with cumulative exposure to hospitalization, based on time-varying risks of hospitalization over any given day, as well as day of the week. Model performance was evaluated alone, and in combination with simple disease-specific models. Patients admitted between 2000 and 2006 from 501 public and private hospitals in NSW, Australia were used for training, and 2007 data for evaluation. The impact of hospital care delivered over different days of the week and times of the day was modeled by separating hospitalization risk into 21 separate time periods (morning, day, night across the days of the week). Three models were developed to predict death up to 7 days post-discharge: (1) a simple background risk model using age and gender; (2) a time-varying risk model for exposure to hospitalization (admission time, days in hospital); and (3) disease-specific models (Charlson comorbidity index, DRG). Combining these three generated a full model. Models were evaluated by accuracy, AUC, and the Akaike and Bayesian information criteria. There was a clear diurnal rhythm to hospital mortality in the data set, peaking in the evening, as well as the well-known 'weekend effect' in which mortality peaks with weekend admissions. Individual models had modest performance on the test data set (AUC 0.71, 0.79 and 0.79, respectively). The combined model, which included time-varying risk, however yielded an average AUC of 0.92. This model performed best for stays up to 7 days (93% of admissions), peaking at days 3 to 5 (AUC 0.94). Risks of hospitalization vary not just with the day of the week but also with the time of the day, and can be used to make predictions about the cumulative risk of death associated with an individual's hospitalization. Combining disease-specific models with such time-varying estimates appears to result in robust predictive performance. Such risk exposure models should find utility both in enhancing standard prognostic models and in estimating the risk of continued hospitalization.
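The AUC figures quoted above are concordance probabilities: the chance that a randomly chosen death received a higher predicted risk than a randomly chosen survivor. The sketch below computes this rank-based AUC directly on simulated risk scores standing in for a combined model's output.

```python
import random

# Rank-based AUC (concordance) sketch: the probability that a randomly
# chosen death has a higher predicted risk score than a randomly chosen
# survivor. The simulated scores stand in for a risk model's output.

rng = random.Random(11)
deaths = [rng.gauss(1.0, 1.0) for _ in range(500)]      # scores of those who died
survivors = [rng.gauss(0.0, 1.0) for _ in range(2000)]  # scores of survivors

def auc(pos, neg):
    """Concordance P(score_pos > score_neg), with ties counted as 1/2."""
    wins = 0.0
    for p in pos:
        for s in neg:
            if p > s:
                wins += 1.0
            elif p == s:
                wins += 0.5
    return wins / (len(pos) * len(neg))

auc_hat = auc(deaths, survivors)
```

A score with no discriminating information yields an AUC of 0.5, which is why the jump from roughly 0.79 for the individual models to 0.92 for the combined model is a substantial gain.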

  11. A statistical regression model for the estimation of acrylamide concentrations in French fries for excess lifetime cancer risk assessment.

    PubMed

    Chen, Ming-Jen; Hsu, Hui-Tsung; Lin, Cheng-Li; Ju, Wei-Yuan

    2012-10-01

    Human exposure to acrylamide (AA) through consumption of French fries and other foods has been recognized as a potential health concern. Here, we used a statistical non-linear regression model, based on the two most influential factors, cooking temperature and time, to estimate AA concentrations in French fries. The R² of the predictive model is 0.83, suggesting the developed model was significant and valid. Based on French fry intake survey data collected in this study and eight frying temperature-time schemes that produce tasty and visually appealing French fries, the Monte Carlo simulation results showed that if the AA concentration is higher than 168 ppb, the estimated cancer risk for adolescents aged 13-18 years in Taichung City would already be higher than the target excess lifetime cancer risk (ELCR), even when considering only this limited life span. In order to reduce the cancer risk associated with AA intake, the AA levels in French fries might have to be reduced even further if the epidemiological observations are valid. Our mathematical model can serve as a basis for further investigations of ELCR covering different life stages, behaviors, and population groups. Copyright © 2012 Elsevier Ltd. All rights reserved.
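A Monte Carlo ELCR calculation of the kind described chains together a concentration distribution, an intake distribution, and a cancer slope factor. The sketch below illustrates the structure only; the distributions, body weight, and slope factor are all assumed values, not the study's fitted inputs.

```python
import random

# Monte Carlo sketch of an excess lifetime cancer risk (ELCR) calculation
# for dietary acrylamide (AA). All distributions and constants below are
# illustrative assumptions, not the study's fitted values.

rng = random.Random(5)
SLOPE_FACTOR = 0.5   # assumed oral cancer slope factor, (mg/kg-day)^-1
BW = 55.0            # assumed adolescent body weight, kg
N = 50_000

risks = []
for _ in range(N):
    aa_ppb = rng.lognormvariate(5.0, 0.5)         # AA in fries, ug/kg (~ppb)
    intake_g_day = max(rng.gauss(20.0, 10.0), 0)  # fries eaten, g/day
    # ppb = ug AA per kg fries; g/1000 = kg fries; ug/1000 = mg
    aa_mg_per_day = aa_ppb * (intake_g_day / 1000.0) / 1000.0
    ladd = aa_mg_per_day / BW                     # dose, mg/kg-day
    risks.append(ladd * SLOPE_FACTOR)             # excess lifetime cancer risk

mean_elcr = sum(risks) / N
p95_elcr = sorted(risks)[int(0.95 * N)]
```

Comparing the simulated risk distribution against a target ELCR (commonly 10⁻⁶ to 10⁻⁴) is what lets the study translate a concentration threshold such as 168 ppb into a risk statement.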

  12. Cryptosporidiosis susceptibility and risk: a case study.

    PubMed

    Makri, Anna; Modarres, Reza; Parkin, Rebecca

    2004-02-01

    Regional estimates of cryptosporidiosis risks from drinking water exposure were developed and validated, accounting for AIDS status and age. We constructed a model with probability distributions and point estimates representing Cryptosporidium in tap water, tap water consumed per day (exposure characterization); dose response, illness given infection, prolonged illness given illness; and three conditional probabilities describing the likelihood of case detection by active surveillance (health effects characterization). The model predictions were combined with population data to derive expected case numbers and incidence rates per 100,000 population, by age and AIDS status, borough specific and for New York City overall in 2000 (risk characterization). They were compared with same-year surveillance data, assumed to represent the true incidence of waterborne cryptosporidiosis, to evaluate predictive ability. The predicted mean risks, similar to previously published estimates for this region, overpredicted observed incidence, most extensively when accounting for AIDS status. The results suggest that overprediction may be due to conservative parameters applied to both non-AIDS and AIDS populations, and that biological differences for children need to be incorporated. Interpretations are limited by the unknown accuracy of available surveillance data, in addition to variability and uncertainty of model predictions. The model appears sensitive to geographical differences in AIDS prevalence. The use of surveillance data for validation and model parameters pertinent to susceptibility are discussed.
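The exposure-to-risk chain described above can be sketched with the exponential dose-response model commonly applied to Cryptosporidium, P(infection) = 1 − exp(−r·dose). The concentration and intake distributions and the r parameter below are illustrative assumptions, not the study's values.

```python
import math, random

# QMRA sketch: tap-water Cryptosporidium exposure combined with an
# exponential dose-response model, P(infection) = 1 - exp(-r * dose).
# All distributions and parameters below are illustrative assumptions.

rng = random.Random(9)
R_PARAM = 0.004   # assumed per-oocyst infectivity parameter
N = 100_000

daily_risks = []
for _ in range(N):
    oocysts_per_l = rng.lognormvariate(-5.0, 1.0)  # oocysts per litre, tap water
    litres = max(rng.gauss(1.0, 0.5), 0.0)         # tap water consumed per day
    dose = oocysts_per_l * litres                  # ingested oocysts per day
    daily_risks.append(1.0 - math.exp(-R_PARAM * dose))

mean_daily_risk = sum(daily_risks) / N
# approximate annual infection risk assuming independent daily exposures
annual_risk = 1.0 - (1.0 - mean_daily_risk) ** 365
```

Multiplying the resulting infection risks by the conditional probabilities of illness and of case detection, and then by population counts, yields the expected surveillance case numbers against which the study validated its predictions.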

  13. Risk to life due to flooding in post-Katrina New Orleans

    NASA Astrophysics Data System (ADS)

    Miller, A.; Jonkman, S. N.; Van Ledden, M.

    2015-01-01

    Since the catastrophic flooding of New Orleans due to Hurricane Katrina in 2005, the city's hurricane protection system has been improved to provide protection against a hurricane load with a 1/100 per year exceedance frequency. This paper investigates the risk to life in post-Katrina New Orleans. In a flood risk analysis the probabilities and consequences of various flood scenarios have been analyzed for the central area of the city (the metro bowl) to give a preliminary estimate of the risk to life in the post-Katrina situation. A two-dimensional hydrodynamic model has been used to simulate flood characteristics of various breaches. The model for estimation of fatality rates is based on the loss of life data for Hurricane Katrina. Results indicate that - depending on the flood scenario - the estimated loss of life in case of flooding ranges from about 100 to nearly 500, with the highest life loss due to breaching of the river levees leading to large flood depths. The probability and consequence estimates are combined to determine the individual risk and societal risk for New Orleans. When compared to risks of other large-scale engineering systems (e.g., other flood prone areas, dams and the nuclear sector) and acceptable risk criteria found in literature, the risks for the metro bowl are found to be relatively high. Thus, despite major improvements to the flood protection system, the flood risk to life of post-Katrina New Orleans is still expected to be significant. Indicative effects of reduction strategies on the risk level are discussed as a basis for further evaluation and discussion.
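The individual and societal risk measures combined above have a simple structure: individual risk is the annual scenario probability times the local fatality rate, and societal risk aggregates expected fatalities over scenarios. The sketch below uses illustrative numbers consistent in scale with the abstract, not the study's computed values.

```python
# Sketch of how individual and societal flood risk measures are composed.
# All numerical values below are illustrative, not the study's results.

p_flood = 1.0 / 100.0   # annual exceedance probability of the design load
fatality_rate = 0.01    # assumed mortality fraction at a given location

individual_risk = p_flood * fatality_rate   # probability of death per year

# societal risk: expected annual fatalities over (probability, loss of life)
# pairs for distinct breach scenarios -- illustrative values
scenarios = [(0.005, 100.0), (0.002, 300.0), (0.001, 500.0)]
expected_fatalities = sum(p * n for p, n in scenarios)
```

Societal risk is usually also displayed as an FN curve (exceedance frequency of N or more fatalities), which is the form compared against acceptable-risk criteria from the dam and nuclear sectors.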

  14. Risk to life due to flooding in post-Katrina New Orleans

    NASA Astrophysics Data System (ADS)

    Miller, A.; Jonkman, S. N.; Van Ledden, M.

    2014-01-01

    After the catastrophic flooding of New Orleans due to hurricane Katrina in the year 2005, the city's hurricane protection system has been improved to provide protection against a hurricane load with a 1/100 per year exceedance frequency. This paper investigates the risk to life in post-Katrina New Orleans. In a risk-based approach the probabilities and consequences of various flood scenarios have been analyzed for the central area of the city (the metro bowl) to give a preliminary estimate of the risk to life in the post-Katrina situation. A two-dimensional hydrodynamic model has been used to simulate flood characteristics of various breaches. The model for estimation of fatality rates is based on the loss of life data for Hurricane Katrina. Results indicate that - depending on the flood scenario - the estimated loss of life in case of flooding ranges from about 100 to nearly 500, with the highest life loss due to breaching of the river levees leading to large flood depths. The probability and consequence estimates are combined to determine the individual risk and societal risk for New Orleans. When compared to risks of other large scale engineering systems (e.g. other flood prone areas, dams and the nuclear sector) and acceptable risk criteria found in literature, the risks for the metro bowl are found to be relatively high. Thus, despite major improvements to the flood protection system, the flood risk of post-Katrina New Orleans is still expected to be significant. Effects of reduction strategies on the risk level are discussed as a basis for further evaluation.

  15. Modeling the environmental suitability of anthrax in Ghana and estimating populations at risk: Implications for vaccination and control.

    PubMed

    Kracalik, Ian T; Kenu, Ernest; Ayamdooh, Evans Nsoh; Allegye-Cudjoe, Emmanuel; Polkuu, Paul Nokuma; Frimpong, Joseph Asamoah; Nyarko, Kofi Mensah; Bower, William A; Traxler, Rita; Blackburn, Jason K

    2017-10-01

    Anthrax is hyper-endemic in West Africa. Despite the effectiveness of livestock vaccines in controlling anthrax, underreporting, logistics, and limited resources make implementing vaccination campaigns difficult. To better understand the geographic limits of anthrax, elucidate environmental factors related to its occurrence, and identify human and livestock populations at risk, we developed predictive models of the environmental suitability of anthrax in Ghana. We obtained data on the location and date of livestock anthrax from veterinary and outbreak response records in Ghana during 2005-2016, as well as livestock vaccination registers and population estimates of characteristically high-risk groups. To predict the environmental suitability of anthrax, we used an ensemble of random forest (RF) models built using a combination of climatic and environmental factors. From 2005 through the first six months of 2016, there were 67 anthrax outbreaks (851 cases) in livestock; outbreaks showed a seasonal peak during February through April and primarily involved cattle. There was a median of 19,709 vaccine doses [range: 0-175 thousand] administered annually. Results from the RF model suggest a marked ecological divide separating the broad areas of environmental suitability in northern Ghana from the southern part of the country. Increasing alkaline soil pH was associated with a higher probability of anthrax occurrence. We estimated 2.2 (95% CI: 2.0, 2.5) million livestock and 805 (95% CI: 519, 890) thousand low-income rural livestock keepers were located in anthrax risk areas. Based on our estimates, the current anthrax vaccination efforts in Ghana cover a fraction of the livestock potentially at risk; thus, control efforts should be focused on improving vaccine coverage among high-risk groups.
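    The ensemble idea in this record can be sketched minimally: average the occurrence probabilities predicted by several models trained with different random draws, and map cells above/below the averaged suitability. Real random forests grow decision trees on bootstrap samples; the toy linear "models" below, the NDVI covariate, and every weight are illustrative assumptions (only the positive role of alkaline soil pH comes from the record).

```python
import random

def toy_model(seed):
    """One ensemble member: a randomly weighted scorer standing in for a tree
    trained on a bootstrap sample."""
    rng = random.Random(seed)
    w_ph, w_ndvi = rng.uniform(0.5, 1.5), rng.uniform(-0.5, 0.5)
    def predict(cell):
        # Alkaline pH (above 7) raises the score, per the record's finding.
        score = w_ph * max(0.0, cell["soil_ph"] - 7.0) + w_ndvi * cell["ndvi"]
        return min(1.0, max(0.0, score))  # clamp to [0, 1]
    return predict

ensemble = [toy_model(s) for s in range(25)]

def suitability(cell):
    """Mean predicted occurrence probability across the ensemble."""
    return sum(m(cell) for m in ensemble) / len(ensemble)

alkaline = {"soil_ph": 8.2, "ndvi": 0.3}
acidic = {"soil_ph": 5.5, "ndvi": 0.3}
```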

  16. Modeling the environmental suitability of anthrax in Ghana and estimating populations at risk: Implications for vaccination and control

    PubMed Central

    Allegye-Cudjoe, Emmanuel; Polkuu, Paul Nokuma; Frimpong, Joseph Asamoah; Nyarko, Kofi Mensah; Bower, William A.; Traxler, Rita

    2017-01-01

    Anthrax is hyper-endemic in West Africa. Despite the effectiveness of livestock vaccines in controlling anthrax, underreporting, logistics, and limited resources make implementing vaccination campaigns difficult. To better understand the geographic limits of anthrax, elucidate environmental factors related to its occurrence, and identify human and livestock populations at risk, we developed predictive models of the environmental suitability of anthrax in Ghana. We obtained data on the location and date of livestock anthrax from veterinary and outbreak response records in Ghana during 2005–2016, as well as livestock vaccination registers and population estimates of characteristically high-risk groups. To predict the environmental suitability of anthrax, we used an ensemble of random forest (RF) models built using a combination of climatic and environmental factors. From 2005 through the first six months of 2016, there were 67 anthrax outbreaks (851 cases) in livestock; outbreaks showed a seasonal peak during February through April and primarily involved cattle. There was a median of 19,709 vaccine doses [range: 0–175 thousand] administered annually. Results from the RF model suggest a marked ecological divide separating the broad areas of environmental suitability in northern Ghana from the southern part of the country. Increasing alkaline soil pH was associated with a higher probability of anthrax occurrence. We estimated 2.2 (95% CI: 2.0, 2.5) million livestock and 805 (95% CI: 519, 890) thousand low-income rural livestock keepers were located in anthrax risk areas. Based on our estimates, the current anthrax vaccination efforts in Ghana cover a fraction of the livestock potentially at risk; thus, control efforts should be focused on improving vaccine coverage among high-risk groups. PMID:29028799

  17. Modeling Freedom From Progression for Standard-Risk Medulloblastoma: A Mathematical Tumor Control Model With Multiple Modes of Failure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodin, N. Patrik, E-mail: nils.patrik.brodin@rh.dk; Niels Bohr Institute, University of Copenhagen, Copenhagen; Vogelius, Ivan R.

    2013-10-01

    Purpose: As pediatric medulloblastoma (MB) is a relatively rare disease, it is important to extract the maximum information from trials and cohort studies. Here, a framework was developed for modeling tumor control with multiple modes of failure and time-to-progression for standard-risk MB, using published pattern of failure data. Methods and Materials: Outcome data for standard-risk MB published after 1990 with pattern of relapse information were used to fit a tumor control dose-response model addressing failures in both the high-dose boost volume and the elective craniospinal volume. Estimates of 5-year event-free survival from 2 large randomized MB trials were used to model the time-to-progression distribution. Uncertainty in freedom from progression (FFP) was estimated by Monte Carlo sampling over the statistical uncertainty in input data. Results: The estimated 5-year FFP (95% confidence intervals [CI]) for craniospinal doses of 15, 18, 24, and 36 Gy while maintaining 54 Gy to the posterior fossa was 77% (95% CI, 70%-81%), 78% (95% CI, 73%-81%), 79% (95% CI, 76%-82%), and 80% (95% CI, 77%-84%), respectively. The uncertainty in FFP was considerably larger for craniospinal doses below 18 Gy, reflecting the lack of data in the lower dose range. Conclusions: Estimates of tumor control and time-to-progression for standard-risk MB provide a data-driven setting for hypothesis generation or power calculations for prospective trials, taking the uncertainties into account. The presented methods can also be applied to incorporate further risk stratification, for example based on molecular biomarkers, when the necessary data become available.
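    The Monte Carlo step in this record (sampling over statistical uncertainty in input data to put an interval on FFP) can be sketched as follows, assuming a toy logistic dose-response in place of the fitted tumor-control model; the parameter values and their uncertainties are illustrative, not the study's estimates.

```python
import math
import random

def ffp(dose_gy, d50, gamma):
    """Toy logistic tumor-control curve: FFP as a function of craniospinal dose.
    d50 is the dose giving 50% control; gamma sets the slope."""
    return 1.0 / (1.0 + math.exp(-4.0 * gamma * (dose_gy - d50) / d50))

def monte_carlo_ffp(dose_gy, n=5000, seed=1):
    """Sample uncertain parameters, return (2.5th percentile, median, 97.5th)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        d50 = rng.gauss(10.0, 1.5)   # assumed parameter uncertainty
        gamma = rng.gauss(0.8, 0.1)  # assumed parameter uncertainty
        samples.append(ffp(dose_gy, d50, gamma))
    samples.sort()
    return samples[int(0.025 * n)], samples[n // 2], samples[int(0.975 * n)]

lo, mid, hi = monte_carlo_ffp(24.0)
```

    As in the record, the interval widens where the sampled parameters matter most, i.e., on the steep part of the dose-response curve.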

  18. Mammographic density, breast cancer risk and risk prediction

    PubMed Central

    Vachon, Celine M; van Gils, Carla H; Sellers, Thomas A; Ghosh, Karthik; Pruthi, Sandhya; Brandt, Kathleen R; Pankratz, V Shane

    2007-01-01

    In this review, we examine the evidence for mammographic density as an independent risk factor for breast cancer, describe the risk prediction models that have incorporated density, and discuss the current and future implications of using mammographic density in clinical practice. Mammographic density is a consistent and strong risk factor for breast cancer in several populations and across age at mammogram. Recently, this risk factor has been added to existing breast cancer risk prediction models, increasing the discriminatory accuracy with its inclusion, albeit slightly. With validation, these models may replace the existing Gail model for clinical risk assessment. However, absolute risk estimates resulting from these improved models are still limited in their ability to characterize an individual's probability of developing cancer. Promising new measures of mammographic density, including volumetric density, which can be standardized using full-field digital mammography, will likely result in a stronger risk factor and improve accuracy of risk prediction models. PMID:18190724

  19. Comparing risk estimates following diagnostic CT radiation exposures employing different methodological approaches.

    PubMed

    Kashcheev, Valery V; Pryakhin, Evgeny A; Menyaylo, Alexander N; Chekin, Sergey Yu; Ivanov, Viktor K

    2014-06-01

    The current study has two aims: the first is to quantify the difference between radiation risks estimated with the use of organ or effective doses, particularly when planning pediatric and adult computed tomography (CT) examinations. The second aim is to determine the method of calculating organ doses and cancer risk using dose-length product (DLP) for typical routine CT examinations. In both cases, the radiation-induced cancer risks from medical CT examinations were evaluated as a function of gender and age. Lifetime attributable risk (LAR) values from CT scanning were estimated with the use of ICRP (Publication 103) risk models and Russian national medical statistics data. For populations under the age of 50 y, the risk estimates based on organ doses usually are 30% higher than estimates based on effective doses. In older populations, the difference can be up to a factor of 2.5. The typical distributions of organ doses were defined for Chest Routine, Abdominal Routine, and Head Routine examinations. The distributions of organ doses were dependent on the anatomical region of scanning. The most exposed organs/tissues were thyroid, breast, esophagus, and lungs in cases of Chest Routine examination; liver, stomach, colon, ovaries, and bladder in cases of Abdominal Routine examination; and brain for Head Routine examinations. The conversion factors for calculating typical doses to organs or tissues at risk from DLP were determined. LAR of cancer estimated with organ doses calculated from DLP was compared with the risk estimated on the basis of organ doses measured with the use of silicon photodiode dosimeters. The estimated difference in LAR is less than 29%.
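    The DLP-to-risk chain this record describes can be sketched in two steps: per-organ conversion factors turn a dose-length product into organ doses, and linear risk coefficients turn those doses into a total lifetime attributable risk. All numeric factors below are invented placeholders, not the paper's conversion factors or ICRP coefficients.

```python
# Assumed per-organ conversion factors for a chest scan: mGy per mGy*cm of DLP.
organ_dose_per_dlp = {
    "lung": 0.030, "breast": 0.028, "thyroid": 0.015, "esophagus": 0.020,
}
# Assumed linear risk coefficients: excess cases per 100,000 exposed per mGy.
lar_per_mgy = {
    "lung": 0.5, "breast": 0.4, "thyroid": 0.1, "esophagus": 0.05,
}

def organ_doses(dlp):
    """Organ doses (mGy) from a scan's dose-length product (mGy*cm)."""
    return {organ: f * dlp for organ, f in organ_dose_per_dlp.items()}

def total_lar(dlp):
    """Sum of per-organ lifetime attributable risks, per 100,000 exposed."""
    doses = organ_doses(dlp)
    return sum(lar_per_mgy[organ] * d for organ, d in doses.items())
```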

  20. Application of the Rosner-Wei Risk-Prediction Model to Estimate Sexual Orientation Patterns in Colon Cancer Risk in a Prospective Cohort of U.S. Women

    PubMed Central

    Austin, S. Bryn; Pazaris, Mathew J.; Wei, Esther K.; Rosner, Bernard; Kennedy, Grace A.; Bowen, Deborah; Spiegelman, Donna

    2014-01-01

    Purpose We examined whether lesbian and bisexual women may be at greater risk of colon cancer (CC) than heterosexual women. Methods Working with a large cohort of U.S. women ages 25-64 years, we analyzed 20 years of prospective data to estimate CC incidence based on known risk factors by applying the Rosner-Wei CC risk-prediction model. For lesbian and bisexual women relative to heterosexual women, we calculated the predicted one-year incidence rate (IR) per 100,000 person-years and estimated incidence rate ratios (IRR) and 95% confidence intervals (CI), based on each woman’s comprehensive risk factor profile. Results Analyses included 1,373,817 person-years of data from 66,257 women. The mean predicted one-year CC IR was slightly over 12 cases per 100,000 person-years for each sexual orientation group. After controlling for confounders in fully adjusted models and compared with heterosexuals, no significant differences in IRR were observed for lesbians (IRR 1.01; 95% CI 0.99, 1.04) or bisexuals (IRR 1.01; 95% CI 0.98, 1.04). Conclusions CC risk is similar across all sexual orientation subgroups. Health professionals must ensure that prevention, screening, and treatment programs are adequately reaching each of these communities. PMID:24852207

  1. Large-scale model-based assessment of deer-vehicle collision risk.

    PubMed

    Hothorn, Torsten; Brandl, Roland; Müller, Jörg

    2012-01-01

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining hunting quotas. Open-source software implementing the model can be used to transfer our modelling approach to wildlife-vehicle collisions elsewhere.
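    The count-modelling idea in this record can be illustrated with a much simpler Poisson-style regression: a log link, a road-length offset, and a quadratic term standing in for a nonlinear environment-deer relationship. The coefficients below are invented, and the authors' actual approach is a richer, spatially structured model.

```python
import math

def expected_collisions(road_km, browsing_intensity,
                        b0=-1.0, b1=0.8, b2=-0.05):
    """Expected annual deer-vehicle collisions in a municipality.
    Log-link with log(road km) as an exposure offset and a quadratic
    (nonlinear) effect of browsing intensity; coefficients are illustrative."""
    log_mu = (math.log(road_km) + b0
              + b1 * browsing_intensity
              + b2 * browsing_intensity ** 2)
    return math.exp(log_mu)
```

    The offset makes expected counts scale proportionally with road length, so municipalities are comparable as rates per kilometre of road.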

  2. Large-Scale Model-Based Assessment of Deer-Vehicle Collision Risk

    PubMed Central

    Hothorn, Torsten; Brandl, Roland; Müller, Jörg

    2012-01-01

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer–vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer–vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer–vehicle collisions and to investigate the relationship between deer–vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer–vehicle collisions, which allows nonlinear environment–deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new “deer–vehicle collision index” for deer management. We show that the risk of deer–vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer–vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer–vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining hunting quotas. Open-source software implementing the model can be used to transfer our modelling approach to wildlife–vehicle collisions elsewhere. PMID:22359535

  3. Estimating the return-on-investment from changes in employee health risks on the Dow Chemical Company's health care costs.

    PubMed

    Goetzel, Ron Z; Ozminkowski, Ronald J; Baase, Catherine M; Billotti, Gary M

    2005-08-01

    We sought to estimate the impact of corporate health-management and risk-reduction programs for The Dow Chemical Company by using a prospective return-on-investment (ROI) model. The risk and expenditure estimates were derived from multiple regression analyses showing relationships between worker demographics, health risks, and medical expenditures. A "break-even" scenario would require Dow to reduce each of 10 population health risks by 0.17 percentage points per year over the course of 10 years. More successful efforts at reducing health risks in the population would produce a more significant ROI for the company. Findings from this study were incorporated into other components of a business case for health and productivity management, and these supported continued investments in health improvement programs designed to achieve risk reduction and cost savings.
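    The break-even logic can be illustrated with a toy cumulative ROI calculation in which annual reductions in risk prevalence accumulate over the program's life. The population size, program cost, number of risks, and saving per avoided risk factor below are invented assumptions chosen only to show the mechanics; only the 0.17-percentage-point reduction rate echoes the record.

```python
POPULATION = 40_000                # employees, assumed
COST_PER_EMPLOYEE_YEAR = 100.0     # program cost per employee-year, assumed
SAVING_PER_RISK_YEAR = 1200.0      # medical saving per avoided risk factor-year, assumed
N_RISKS = 10                       # population health risks tracked
REDUCTION_PER_RISK = 0.0017        # 0.17 percentage points of prevalence per year

def cumulative_roi(years):
    """Ratio of cumulative savings to cumulative program cost after `years`.
    Prevalence reductions persist, so avoided risk factor-years accumulate."""
    cost = POPULATION * COST_PER_EMPLOYEE_YEAR * years
    savings = 0.0
    for y in range(1, years + 1):
        avoided = POPULATION * N_RISKS * REDUCTION_PER_RISK * y
        savings += avoided * SAVING_PER_RISK_YEAR
    return savings / cost
```

    With these assumed inputs the program crosses break-even (ROI of 1.0) late in the 10-year horizon, mirroring the "break-even scenario" framing of the record.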

  4. An Integrated and Interdisciplinary Model for Predicting the Risk of Injury and Death in Future Earthquakes

    PubMed Central

    Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor

    2016-01-01

    Background A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and to the built environment in order to improve communities’ preparedness and response capabilities and to mitigate future consequences. Methods An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model’s algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. Results The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher rates of at-risk populations were found to be more vulnerable in this regard. Conclusion The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to an occurrence of an earthquake could lead to a possible decrease in the expected number of casualties. PMID:26959647
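    The integration step in this record, combining effect measures for human-related factors with a baseline casualty probability via logistic-regression-style equations, can be sketched by applying odds ratios on the logit scale. The odds-ratio values below are illustrative placeholders, not the meta-analysis estimates.

```python
import math

# Assumed odds ratios for human-related risk factors (placeholders).
ODDS_RATIOS = {"age_over_65": 2.0, "female": 1.3, "disability": 2.5, "low_ses": 1.6}

def adjusted_probability(baseline_p, factors):
    """Adjust a baseline casualty probability (e.g., from a structural loss
    model) for each present risk factor by adding log odds ratios."""
    logit = math.log(baseline_p / (1.0 - baseline_p))
    for f in factors:
        logit += math.log(ODDS_RATIOS[f])
    return 1.0 / (1.0 + math.exp(-logit))
```

    With no factors present the baseline probability is returned unchanged; each factor shifts the logit additively, which is how logistic-regression effect measures combine.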

  5. Detection, Emission Estimation and Risk Prediction of Forest Fires in China Using Satellite Sensors and Simulation Models in the Past Three Decades—An Overview

    PubMed Central

    Zhang, Jia-Hua; Yao, Feng-Mei; Liu, Cheng; Yang, Li-Min; Boken, Vijendra K.

    2011-01-01

    Forest fires have a major impact on ecosystems and greatly affect the amount of greenhouse gases and aerosols in the atmosphere. This paper presents an overview of forest fire detection, emission estimation, and fire risk prediction in China using satellite imagery, climate data, and various simulation models over the past three decades. Since the 1980s, remotely-sensed data acquired by many satellites, such as NOAA/AVHRR, FY-series, MODIS, CBERS, and ENVISAT, have been widely utilized for detecting forest fire hot spots and burned areas in China. Some developed algorithms have been utilized for detecting the forest fire hot spots at a sub-pixel level. With respect to modeling the forest burning emission, a remote sensing data-driven Net Primary productivity (NPP) estimation model was developed for estimating forest biomass and fuel. In order to improve the forest fire risk modeling in China, real-time meteorological data, such as surface temperature, relative humidity, wind speed and direction, have been used as the model input for improving prediction of forest fire occurrence and its behavior. Shortwave infrared (SWIR) and near infrared (NIR) channels of satellite sensors have been employed for detecting live fuel moisture content (FMC), and the Normalized Difference Water Index (NDWI) was used for evaluating the forest vegetation condition and its moisture status. PMID:21909297

  6. Detection, emission estimation and risk prediction of forest fires in China using satellite sensors and simulation models in the past three decades--an overview.

    PubMed

    Zhang, Jia-Hua; Yao, Feng-Mei; Liu, Cheng; Yang, Li-Min; Boken, Vijendra K

    2011-08-01

    Forest fires have a major impact on ecosystems and greatly affect the amount of greenhouse gases and aerosols in the atmosphere. This paper presents an overview of forest fire detection, emission estimation, and fire risk prediction in China using satellite imagery, climate data, and various simulation models over the past three decades. Since the 1980s, remotely-sensed data acquired by many satellites, such as NOAA/AVHRR, FY-series, MODIS, CBERS, and ENVISAT, have been widely utilized for detecting forest fire hot spots and burned areas in China. Some developed algorithms have been utilized for detecting the forest fire hot spots at a sub-pixel level. With respect to modeling the forest burning emission, a remote sensing data-driven Net Primary productivity (NPP) estimation model was developed for estimating forest biomass and fuel. In order to improve the forest fire risk modeling in China, real-time meteorological data, such as surface temperature, relative humidity, wind speed and direction, have been used as the model input for improving prediction of forest fire occurrence and its behavior. Shortwave infrared (SWIR) and near infrared (NIR) channels of satellite sensors have been employed for detecting live fuel moisture content (FMC), and the Normalized Difference Water Index (NDWI) was used for evaluating the forest vegetation condition and its moisture status.

  7. A study of lens opacification for a Mars mission

    NASA Technical Reports Server (NTRS)

    Shinn, J. L.; Wilson, J. W.; Cox, A. B.; Lett, J. T.

    1991-01-01

    A method based on risk-related cross sections is used to estimate risks of 'stationary' cataracts caused by radiation exposures during extended missions in deep space. Estimates of the even more important risk of late degenerative cataractogenesis are made on the basis of the limited data available. Data on lenticular opacification in the New Zealand white rabbit, an animal model from which such results can be extrapolated to humans, are analyzed by the Langley cosmic ray shielding code (HZETRN) to generate estimates of stationary cataract formation resulting from a Mars mission. The effects of the composition of shielding material and the relationship between risk and LET are given, and the effects of target fragmentation on the risk coefficients are evaluated explicitly.

  8. Functional Risk Modeling for Lunar Surface Systems

    NASA Technical Reports Server (NTRS)

    Thomson, Fraser; Mathias, Donovan; Go, Susie; Nejad, Hamed

    2010-01-01

    We introduce an approach to risk modeling that we call functional modeling, which we have developed to estimate the capabilities of a lunar base. The functional model tracks the availability of functions provided by systems, in addition to the operational state of those systems' constituent strings. By tracking functions, we are able to identify cases where identical functions are provided by elements (rovers, habitats, etc.) that are connected together on the lunar surface. We credit functional diversity in those cases, and in doing so compute more realistic estimates of operational mode availabilities. The functional modeling approach yields more realistic estimates of the availability of the various operational modes provided to astronauts by the ensemble of surface elements included in a lunar base architecture. By tracking functional availability, the effects of diverse backup, which often exists when two or more independent elements are connected together, are properly accounted for.
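    The diversity credit described in this record reduces, in its simplest form, to standard redundancy arithmetic: a function is unavailable only if every independent element providing it is down. A minimal sketch, with invented availabilities:

```python
def function_availability(provider_availabilities):
    """Availability of a function given the availabilities of the independent
    elements that each provide it: 1 minus the probability all are down."""
    p_all_down = 1.0
    for a in provider_availabilities:
        p_all_down *= (1.0 - a)
    return 1.0 - p_all_down

# Illustrative case: a habitat alone vs. a habitat with a connected rover
# providing the same function (e.g., life support), availabilities assumed.
habitat_only = function_availability([0.95])
habitat_plus_rover = function_availability([0.95, 0.80])
```

    Crediting the second provider raises the function's availability above either element's alone, which is why element-level models understate operational mode availability.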

  9. Development and External Validation of a Melanoma Risk Prediction Model Based on Self-assessed Risk Factors.

    PubMed

    Vuong, Kylie; Armstrong, Bruce K; Weiderpass, Elisabete; Lund, Eiliv; Adami, Hans-Olov; Veierod, Marit B; Barrett, Jennifer H; Davies, John R; Bishop, D Timothy; Whiteman, David C; Olsen, Catherine M; Hopper, John L; Mann, Graham J; Cust, Anne E; McGeechan, Kevin

    2016-08-01

    Identifying individuals at high risk of melanoma can optimize primary and secondary prevention strategies. To develop and externally validate a risk prediction model for incident first-primary cutaneous melanoma using self-assessed risk factors. We used unconditional logistic regression to develop a multivariable risk prediction model. Relative risk estimates from the model were combined with Australian melanoma incidence and competing mortality rates to obtain absolute risk estimates. A risk prediction model was developed using the Australian Melanoma Family Study (629 cases and 535 controls) and externally validated using 4 independent population-based studies: the Western Australia Melanoma Study (511 case-control pairs), Leeds Melanoma Case-Control Study (960 cases and 513 controls), Epigene-QSkin Study (44,544 participants, of whom 766 had melanoma), and Swedish Women's Lifestyle and Health Cohort Study (49,259 women, of whom 273 had melanoma). We validated model performance internally and externally by assessing discrimination using the area under the receiver operating characteristic curve (AUC). Additionally, using the Swedish Women's Lifestyle and Health Cohort Study, we assessed model calibration and clinical usefulness. The risk prediction model included hair color, nevus density, first-degree family history of melanoma, previous nonmelanoma skin cancer, and lifetime sunbed use. On internal validation, the AUC was 0.70 (95% CI, 0.67-0.73). On external validation, the AUC was 0.66 (95% CI, 0.63-0.69) in the Western Australia Melanoma Study, 0.67 (95% CI, 0.65-0.70) in the Leeds Melanoma Case-Control Study, 0.64 (95% CI, 0.62-0.66) in the Epigene-QSkin Study, and 0.63 (95% CI, 0.60-0.67) in the Swedish Women's Lifestyle and Health Cohort Study. Model calibration showed close agreement between predicted and observed numbers of incident melanomas across all deciles of predicted risk. In the external validation setting, there was higher net benefit when using the risk prediction model to classify individuals as high risk compared with classifying all individuals as high risk. The melanoma risk prediction model performs well and may be useful in prevention interventions reliant on a risk assessment using self-assessed risk factors.
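    The step of combining a relative risk with incidence and competing mortality rates to obtain an absolute risk, as described in this record, can be sketched in discrete time; the baseline incidence and mortality rates below are illustrative, not Australian registry values.

```python
def absolute_risk(rel_risk, years, incidence=0.0005, competing_mortality=0.01):
    """Cumulative probability of melanoma over `years`.
    Each year: hazard = baseline incidence * relative risk, applied to the
    fraction still alive and melanoma-free; competing mortality thins that
    fraction so later years contribute less."""
    surv, risk = 1.0, 0.0
    for _ in range(years):
        hazard = incidence * rel_risk
        risk += surv * hazard
        surv *= (1.0 - hazard) * (1.0 - competing_mortality)
    return risk
```

    Competing mortality is why the absolute risk grows by slightly less than the annual hazard each year, and why the same relative risk implies different absolute risks at different ages.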

  10. Acute Gastrointestinal Illness Risks in North Carolina Community Water Systems: A Methodological Comparison.

    PubMed

    DeFelice, Nicholas B; Johnston, Jill E; Gibson, Jacqueline MacDonald

    2015-08-18

    The magnitude and spatial variability of acute gastrointestinal illness (AGI) cases attributable to microbial contamination of U.S. community drinking water systems are not well characterized. We compared three approaches (drinking water attributable risk, quantitative microbial risk assessment, and population intervention model) to estimate the annual number of emergency department visits for AGI attributable to microorganisms in North Carolina community water systems. All three methods used 2007-2013 water monitoring and emergency department data obtained from state agencies. The drinking water attributable risk method, which was the basis for previous U.S. Environmental Protection Agency national risk assessments, estimated that 7.9% of annual emergency department visits for AGI are attributable to microbial contamination of community water systems. However, the other methods' estimates were more than 2 orders of magnitude lower, each attributing 0.047% of annual emergency department visits for AGI to community water system contamination. The differences in results between the drinking water attributable risk method, which has been the main basis for previous national risk estimates, and the other two approaches highlight the need to improve methods for estimating endemic waterborne disease risks, in order to prioritize investments to improve community drinking water systems.
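    The size of the gap between the methods in this record can be illustrated with toy versions of two of them; the 7.9% attributable fraction echoes the record's figure, while the visit total and all QMRA-style inputs below are invented placeholders.

```python
TOTAL_AGI_ED_VISITS = 100_000  # annual AGI emergency department visits, assumed

def ar_method(attributable_fraction=0.079):
    """Drinking water attributable risk method: scale total AGI visits by an
    assumed attributable fraction (7.9% per the record)."""
    return TOTAL_AGI_ED_VISITS * attributable_fraction

def qmra_method(exposed_pop=2_000_000, daily_infection_risk=1e-7,
                p_ed_visit_given_infection=0.05):
    """QMRA-style forward estimate: annual infections from exposure, scaled
    by the probability an infection leads to an ED visit. All inputs assumed."""
    return exposed_pop * daily_infection_risk * 365 * p_ed_visit_given_infection
```

    Even with generous assumed inputs, the forward QMRA-style estimate lands orders of magnitude below the attributable-fraction estimate, which is the qualitative discrepancy the record reports.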

  11. Sensitivity to Uncertainty in Asteroid Impact Risk Assessment

    NASA Astrophysics Data System (ADS)

    Mathias, D.; Wheeler, L.; Prabhu, D. K.; Aftosmis, M.; Dotson, J.; Robertson, D. K.

    2015-12-01

    The Engineering Risk Assessment (ERA) team at NASA Ames Research Center is developing a physics-based impact risk model for probabilistically assessing threats from potential asteroid impacts on Earth. The model integrates probabilistic sampling of asteroid parameter ranges with physics-based analyses of entry, breakup, and impact to estimate damage areas and casualties from various impact scenarios. Assessing these threats is a highly coupled, dynamic problem involving significant uncertainties in the range of expected asteroid characteristics, how those characteristics may affect the level of damage, and the fidelity of various modeling approaches and assumptions. The presented model is used to explore the sensitivity of impact risk estimates to these uncertainties in order to gain insight into what additional data or modeling refinements are most important for producing effective, meaningful risk assessments. In the extreme cases of very small or very large impacts, the results are generally insensitive to many of the characterization and modeling assumptions. However, the nature of the sensitivity can change across moderate-sized impacts. Results will focus on the value of additional information in this critical, mid-size range, and how this additional data can support more robust mitigation decisions.

  12. A spatial Bayesian network model to assess the benefits of early warning for urban flood risk to people

    NASA Astrophysics Data System (ADS)

    Balbi, Stefano; Villa, Ferdinando; Mojtahed, Vahid; Hegetschweiler, Karin Tessa; Giupponi, Carlo

    2016-06-01

    This article presents a novel methodology to assess flood risk to people by integrating people's vulnerability and ability to cushion hazards through coping and adapting. The proposed approach extends traditional risk assessments beyond material damages; complements quantitative and semi-quantitative data with subjective and local knowledge, improving the use of commonly available information; and produces estimates of model uncertainty by providing probability distributions for all of its outputs. Flood risk to people is modeled using a spatially explicit Bayesian network model calibrated on expert opinion. Risk is assessed in terms of (1) likelihood of non-fatal physical injury, (2) likelihood of post-traumatic stress disorder and (3) likelihood of death. The study area covers the lower part of the Sihl valley (Switzerland) including the city of Zurich. The model is used to estimate the effect of improving an existing early warning system, taking into account the reliability, lead time and scope (i.e., coverage of people reached by the warning). Model results indicate that the potential benefits of an improved early warning in terms of avoided human impacts are particularly relevant in case of a major flood event.
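    A minimal discrete sketch of how a Bayesian-network-style model can quantify early-warning benefit: the probability of death is marginalized over warning receipt and hazard level, so widening the warning's scope shifts weight to the lower-risk branch. All conditional probabilities below are invented placeholders, not calibrated expert-opinion values.

```python
# Assumed conditional probabilities P(death | warned, hazard) for an
# exposed person in a flooded cell.
P_DEATH = {(True, "high"): 0.002, (True, "low"): 0.0002,
           (False, "high"): 0.01, (False, "low"): 0.001}

def p_death(p_warned, p_high_hazard):
    """Marginal death probability over warning receipt and hazard level."""
    total = 0.0
    for warned in (True, False):
        pw = p_warned if warned else 1.0 - p_warned
        for hazard in ("high", "low"):
            ph = p_high_hazard if hazard == "high" else 1.0 - p_high_hazard
            total += pw * ph * P_DEATH[(warned, hazard)]
    return total
```

    Comparing `p_death` under current and improved warning scope gives the avoided-impact benefit; a full network would also condition on reliability and lead time, as the record notes.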

  13. Estimating wildfire risk on a Mojave Desert landscape using remote sensing and field sampling

    USGS Publications Warehouse

    Van Linn, Peter F.; Nussear, Kenneth E.; Esque, Todd C.; DeFalco, Lesley A.; Inman, Richard D.; Abella, Scott R.

    2013-01-01

    Predicting wildfires that affect broad landscapes is important for allocating suppression resources and guiding land management. Wildfire prediction in the south-western United States is of specific concern because of the increasing prevalence and severe effects of fire on desert shrublands and the current lack of accurate fire prediction tools. We developed a fire risk model to predict fire occurrence in a north-eastern Mojave Desert landscape. First we developed a spatial model using remote sensing data to predict fuel loads based on field estimates of fuels. We then modelled fire risk (interactions of fuel characteristics and environmental conditions conducive to wildfire) using satellite imagery, our model of fuel loads, and spatial data on ignition potential (lightning strikes and distance to roads), topography (elevation and aspect) and climate (maximum and minimum temperatures). The risk model was developed during a fire year at our study landscape and validated at a nearby landscape; model performance was accurate and similar at both sites. This study demonstrates that remote sensing techniques used in combination with field surveys can accurately predict wildfire risk in the Mojave Desert and may be applicable to other arid and semiarid lands where wildfires are prevalent.

  14. Estimating Rates of Motor Vehicle Crashes Using Medical Encounter Data: A Feasibility Study

    DTIC Science & Technology

    2015-11-05

    used to develop more detailed predictive risk models as well as strategies for preventing specific types of MVCs. Systematic Review of Evidence... used to estimate rates of accident-related injuries more generally,9 but not with specific reference to MVCs. For the present report, rates of...precise rate estimates based on person-years rather than active duty strength, (e) multivariable effects of specific risk /protective factors after
    used to develop more detailed predictive risk models as well as strategies for preventing specific types of MVCs. Systematic Review of Evidence... used to estimate rates of accident-related injuries more generally,9 but not with specific reference to MVCs. For the present report, rates of... precise rate estimates based on person-years rather than active duty strength, (e) multivariable effects of specific risk/protective factors after

  15. Lunar Landing Operational Risk Model

    NASA Technical Reports Server (NTRS)

    Mattenberger, Chris; Putney, Blake; Rust, Randy; Derkowski, Brian

    2010-01-01

    Characterizing the risk of spacecraft goes beyond simply modeling equipment reliability. Some portions of the mission require complex interactions between system elements that can lead to failure without an actual hardware fault. Landing risk is currently the least characterized aspect of the Altair lunar lander and appears to result from complex temporal interactions between pilot, sensors, surface characteristics and vehicle capabilities rather than hardware failures. The Lunar Landing Operational Risk Model (LLORM) seeks to provide rapid and flexible quantitative insight into the risks driving the landing event and to gauge sensitivities of the vehicle to changes in system configuration and mission operations. The LLORM takes a Monte Carlo based approach to estimate the operational risk of the Lunar Landing Event and calculates estimates of the risk of Loss of Mission (LOM) - Abort Required and is Successful, Loss of Crew (LOC) - Vehicle Crashes or Cannot Reach Orbit, and Success. The LLORM is meant to be used during the conceptual design phase to inform decision makers transparently of the reliability impacts of design decisions, to identify areas of the design which may require additional robustness, and to aid in the development and flow-down of requirements.
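    A Monte Carlo outcome model of this shape can be sketched in a few lines. The event probabilities and the fuel-margin distribution below are hypothetical placeholders, not Altair figures; the point is the partition of each simulated landing into Success, LOM, and LOC:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Hypothetical event probabilities and margins (not Altair values):
hazard = rng.random(N) < 0.05            # unsafe terrain at the target site
sensor_ok = rng.random(N) < 0.98         # hazard-detection sensor works
fuel_margin = rng.normal(60.0, 15.0, N)  # seconds of hover/divert budget

abort_needed = hazard & sensor_ok        # hazard detected -> abort commanded
divert_ok = fuel_margin > 30.0           # enough propellant to reach orbit
crash = hazard & ~sensor_ok              # undetected hazard -> vehicle crash

loc = crash | (abort_needed & ~divert_ok)  # Loss of Crew
lom = abort_needed & divert_ok             # abort required and successful
success = ~hazard

print(f"P(success) = {success.mean():.4f}, "
      f"P(LOM) = {lom.mean():.4f}, P(LOC) = {loc.mean():.4f}")
```

    Sensitivity to a design change (say, a better sensor) is gauged by rerunning with the corresponding probability shifted and comparing the outcome rates.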

  16. Development of a Risk-Based Comparison Methodology of Carbon Capture Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel, David W.; Dalton, Angela C.; Dale, Crystal

    2014-06-01

    Given the varying degrees of maturity among existing carbon capture (CC) technology alternatives, an understanding of the inherent technical and financial risk and uncertainty associated with these competing technologies is requisite to the success of carbon capture as a viable solution to the greenhouse gas emission challenge. The availability of tools and capabilities to conduct rigorous, risk–based technology comparisons is thus highly desirable for directing valuable resources toward the technology option(s) with a high return on investment, superior carbon capture performance, and minimum risk. To address this research need, we introduce a novel risk-based technology comparison method supported by an integrated multi-domain risk model set to estimate risks related to technological maturity, technical performance, and profitability. Through a comparison between solid sorbent and liquid solvent systems, we illustrate the feasibility of estimating risk and quantifying uncertainty in a single domain (modular analytical capability) as well as across multiple risk dimensions (coupled analytical capability) for comparison. This method brings technological maturity and performance to bear on profitability projections, and carries risk and uncertainty modeling across domains via inter-model sharing of parameters, distributions, and input/output. The integration of the models facilitates multidimensional technology comparisons within a common probabilistic risk analysis framework. This approach and model set can equip potential technology adopters with the necessary computational capabilities to make risk-informed decisions about CC technology investment. The method and modeling effort can also be extended to other industries where robust tools and analytical capabilities are currently lacking for evaluating nascent technologies.

  17. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. 
By modelling instrument imprecision and spatial variability as different error types, we estimate direction and magnitude of the effects of error over a range of error types.
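    The attenuation effect of classical-type error can be reproduced on synthetic data: add noise to the true log-exposure, exponentiate conceptually into the observed series, and fit a Poisson regression to both versions. This is a sketch with invented data and a hand-rolled Newton fit, not the Atlanta analysis:

```python
import numpy as np

def poisson_fit(x, y, iters=25):
    """Newton-Raphson fit of the Poisson GLM log E[y] = b0 + b1*x."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(iters):
        mu = np.exp(X @ b)
        H = X.T @ (X * mu[:, None])     # Fisher information
        g = X.T @ (y - mu)              # score
        b = b + np.linalg.solve(H, g)
    return b

rng = np.random.default_rng(1)
n = 50_000
log_z_true = rng.normal(0.0, 0.5, n)                # true log-exposure
y = rng.poisson(np.exp(0.1 + 0.4 * log_z_true))     # daily counts

# Classical-type error: observed = true + noise on the log scale
log_z_obs = log_z_true + rng.normal(0.0, 0.5, n)

b_true = poisson_fit(log_z_true, y)
b_obs = poisson_fit(log_z_obs, y)
```

    With equal signal and noise variances the fitted slope on the error-contaminated exposure is attenuated by roughly a factor of two, mirroring the bias toward the null reported for classical-type error.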

  18. Monte Carlo mixture model of lifetime cancer incidence risk from radiation exposure on shuttle and international space station

    NASA Technical Reports Server (NTRS)

    Peterson, L. E.; Cucinotta, F. A.; Wilson, J. W. (Principal Investigator)

    1999-01-01

    Estimating uncertainty in lifetime cancer risk for human exposure to space radiation is a unique challenge. Conventional risk assessment with low-linear-energy-transfer (LET)-based risk from Japanese atomic bomb survivor studies may be inappropriate for relativistic protons and nuclei in space due to track structure effects. This paper develops a Monte Carlo mixture model (MCMM) for transferring additive, National Institutes of Health multiplicative, and multiplicative excess cancer incidence risks based on Japanese atomic bomb survivor data to determine excess incidence risk for various US astronaut exposure profiles. The MCMM serves as an anchor point for future risk projection methods involving biophysical models of DNA damage from space radiation. Lifetime incidence risks of radiation-induced cancer for the MCMM based on low-LET Japanese data for nonleukemia (all cancers except leukemia) were 2.77 (90% confidence limit, 0.75-11.34) for males exposed to 1 Sv at age 45 and 2.20 (90% confidence limit, 0.59-10.12) for males exposed at age 55. For females, mixture model risks for nonleukemia exposed separately to 1 Sv at ages of 45 and 55 were 2.98 (90% confidence limit, 0.90-11.70) and 2.44 (90% confidence limit, 0.70-10.30), respectively. Risks for high-LET 200 MeV protons (LET=0.45 keV/micrometer), 1 MeV alpha-particles (LET=100 keV/micrometer), and 600 MeV iron particles (LET=180 keV/micrometer) were scored on a per particle basis by determining the particle fluence required for an average of one particle per cell nucleus of area 100 micrometer(2). Lifetime risk per proton was 2.68x10(-2)% (90% confidence limit, 0.79x10(-3)%-0.514x10(-2)%). For alpha-particles, lifetime risk was 14.2% (90% confidence limit, 2.5%-31.2%). Conversely, lifetime risk per iron particle was 23.7% (90% confidence limit, 4.5%-53.0%). 
Uncertainty in the DDREF for high-LET particles may be less than that for low-LET radiation because typically there is very little dose-rate dependence. Probability density functions for high-LET radiation quality and dose-rate may be preferable to conventional risk assessment approaches. Nuclear reactions and track structure effects in tissue may not be properly estimated by existing data using in vitro models for estimating RBEs. The method used here is being extended to estimate uncertainty in spacecraft shielding effectiveness in various space radiation environments.
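    The core of a Monte Carlo mixture model is simple: on each draw, randomly select one transfer model (multiplicative or additive) and one set of sampled risk coefficients, then summarize the resulting distribution of excess risk. The baseline rate, coefficient distributions, and mixture weight below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Hypothetical inputs (not the paper's fitted values):
baseline = 0.20                              # lifetime background incidence
err = rng.lognormal(np.log(0.5), 0.4, N)     # excess relative risk per Sv
ear = rng.lognormal(np.log(0.04), 0.4, N)    # excess absolute risk per Sv
dose = 1.0                                   # Sv

# Mixture: pick the multiplicative or additive transfer model per draw
use_mult = rng.random(N) < 0.5
excess = np.where(use_mult, baseline * err * dose, ear * dose)

lo, med, hi = np.percentile(excess, [5, 50, 95])
print(f"excess risk: {med:.3f} (90% interval {lo:.3f}-{hi:.3f})")
```

    The wide confidence limits quoted in the abstract arise exactly this way: the interval reflects both coefficient uncertainty and disagreement between transfer models.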

  19. Patients with Testicular Cancer Undergoing CT Surveillance Demonstrate a Pitfall of Radiation-induced Cancer Risk Estimates: The Timing Paradox

    PubMed Central

    Eisenberg, Jonathan D.; Lee, Richard J.; Gilmore, Michael E.; Turan, Ekin A.; Singh, Sarabjeet; Kalra, Mannudeep K.; Liu, Bob; Kong, Chung Yin; Gazelle, G. Scott

    2013-01-01

    Purpose: To demonstrate a limitation of lifetime radiation-induced cancer risk metrics in the setting of testicular cancer surveillance—in particular, their failure to capture the delayed timing of radiation-induced cancers over the course of a patient’s lifetime. Materials and Methods: Institutional review board approval was obtained for the use of computed tomographic (CT) dosimetry data in this study. Informed consent was waived. This study was HIPAA compliant. A Markov model was developed to project outcomes in patients with testicular cancer who were undergoing CT surveillance in the decade after orchiectomy. To quantify effects of early versus delayed risks, life expectancy losses and lifetime mortality risks due to testicular cancer were compared with life expectancy losses and lifetime mortality risks due to radiation-induced cancers from CT. Projections of life expectancy loss, unlike lifetime risk estimates, account for the timing of risks over the course of a lifetime, which enabled evaluation of the described limitation of lifetime risk estimates. Markov chain Monte Carlo methods were used to estimate the uncertainty of the results. Results: As an example of evidence yielded, 33-year-old men with stage I seminoma who were undergoing CT surveillance were projected to incur a slightly higher lifetime mortality risk from testicular cancer (598 per 100 000; 95% uncertainty interval [UI]: 302, 894) than from radiation-induced cancers (505 per 100 000; 95% UI: 280, 730). However, life expectancy loss attributable to testicular cancer (83 days; 95% UI: 42, 124) was more than three times greater than life expectancy loss attributable to radiation-induced cancers (24 days; 95% UI: 13, 35). Trends were consistent across modeled scenarios. Conclusion: Lifetime radiation risk estimates, when used for decision making, may overemphasize radiation-induced cancer risks relative to short-term health risks. 
© RSNA, 2012 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12121015/-/DC1 PMID:23249573
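    The timing paradox can be illustrated with a deliberately crude calculation: two causes of death with similar lifetime probabilities produce very different life-expectancy losses when one strikes early and the other late. The numbers below are hypothetical, chosen only to echo the paper's qualitative finding, and ignore discounting and competing risks that a full Markov model would handle:

```python
# Hypothetical illustration of the timing paradox (not the paper's Markov model)
life_expectancy = 80.0

def le_loss(lifetime_risk, mean_age_at_death):
    """Expected life-years lost = P(death from cause) * years remaining."""
    return lifetime_risk * (life_expectancy - mean_age_at_death)

cancer_now = le_loss(0.006, 38.0)      # testicular cancer deaths occur early
radiation_late = le_loss(0.005, 70.0)  # radiation-induced deaths occur late

print(f"life-years lost: cancer {cancer_now:.3f}, radiation {radiation_late:.3f}")
```

    Although the two lifetime risks differ by only about 20%, the early cause costs several times more life expectancy, which is why lifetime risk alone overstates the relative importance of delayed radiation-induced cancers.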

  20. Cardiorespiratory fitness and classification of risk of cardiovascular disease mortality.

    PubMed

    Gupta, Sachin; Rohatgi, Anand; Ayers, Colby R; Willis, Benjamin L; Haskell, William L; Khera, Amit; Drazner, Mark H; de Lemos, James A; Berry, Jarett D

    2011-04-05

    Cardiorespiratory fitness (fitness) is associated with cardiovascular disease (CVD) mortality. However, the extent to which fitness improves risk classification when added to traditional risk factors is unclear. Fitness was measured by the Balke protocol in 66 371 subjects without prior CVD enrolled in the Cooper Center Longitudinal Study between 1970 and 2006; follow-up was extended through 2006. Cox proportional hazards models were used to estimate the risk of CVD mortality with a traditional risk factor model (age, sex, systolic blood pressure, diabetes mellitus, total cholesterol, and smoking) with and without the addition of fitness. The net reclassification improvement and integrated discrimination improvement were calculated at 10 and 25 years. Ten-year risk estimates for CVD mortality were categorized as <1%, 1% to <5%, and ≥5%, and 25-year risk estimates were categorized as <8%, 8% to 30%, and ≥30%. During a median follow-up period of 16 years, there were 1621 CVD deaths. The addition of fitness to the traditional risk factor model resulted in reclassification of 10.7% of the men, with significant net reclassification improvement at both 10 years (net reclassification improvement=0.121) and 25 years (net reclassification improvement=0.041) (P<0.001 for both). The integrated discrimination improvement was 0.010 at 10 years (P<0.001), and the relative integrated discrimination improvement was 29%. Similar findings were observed for women at 25 years. A single measurement of fitness significantly improves classification of both short-term (10-year) and long-term (25-year) risk for CVD mortality when added to traditional risk factors.
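    The categorical net reclassification improvement used above has a compact definition that is easy to compute. The toy cohort below is invented purely to exercise the formula:

```python
import numpy as np

def net_reclassification_improvement(old_cat, new_cat, event):
    """Categorical NRI = (P(up|event) - P(down|event))
                       + (P(down|non-event) - P(up|non-event))."""
    old_cat = np.asarray(old_cat)
    new_cat = np.asarray(new_cat)
    event = np.asarray(event, dtype=bool)
    up = new_cat > old_cat        # moved to a higher risk category
    down = new_cat < old_cat      # moved to a lower risk category
    nri_events = up[event].mean() - down[event].mean()
    nri_nonevents = down[~event].mean() - up[~event].mean()
    return nri_events + nri_nonevents

# Toy data: risk categories 0/1/2 before and after adding a marker
old = [0, 0, 1, 1, 2, 0, 1, 2]
new = [1, 0, 2, 1, 2, 0, 0, 1]
died = [1, 1, 1, 1, 1, 0, 0, 0]
nri = net_reclassification_improvement(old, new, died)
```

    A positive NRI means the new marker tends to move events up and non-events down across the category boundaries, which is the sense in which fitness "improves classification" in the study.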

  1. Evidence that breast tissue stiffness is associated with risk of breast cancer.

    PubMed

    Boyd, Norman F; Li, Qing; Melnichouk, Olga; Huszti, Ella; Martin, Lisa J; Gunasekara, Anoma; Mawdsley, Gord; Yaffe, Martin J; Minkin, Salomon

    2014-01-01

    Evidence from animal models shows that tissue stiffness increases the invasion and progression of cancers, including mammary cancer. We here use measurements of the volume and the projected area of the compressed breast during mammography to derive estimates of breast tissue stiffness and examine the relationship of stiffness to risk of breast cancer. Mammograms were used to measure the volume and projected areas of total and radiologically dense breast tissue in the unaffected breasts of 362 women with newly diagnosed breast cancer (cases) and 656 women of the same age who did not have breast cancer (controls). Measures of breast tissue volume and the projected area of the compressed breast during mammography were used to calculate the deformation of the breast during compression and, with the recorded compression force, to estimate the stiffness of breast tissue. Stiffness was compared in cases and controls, and associations with breast cancer risk examined after adjustment for other risk factors. After adjustment for percent mammographic density by area measurements, and other risk factors, our estimate of breast tissue stiffness was significantly associated with breast cancer (odds ratio = 1.21, 95% confidence interval = 1.03, 1.43, p = 0.02) and improved breast cancer risk prediction in models with percent mammographic density, by both area and volume measurements. An estimate of breast tissue stiffness was associated with breast cancer risk and improved risk prediction based on mammographic measures and other risk factors. Stiffness may provide an additional mechanism by which breast tissue composition is associated with risk of breast cancer and merits examination using more direct methods of measurement.

  2. Assessing the impact of a cattle risk-based trading scheme on the movement of bovine tuberculosis infected animals in England and Wales.

    PubMed

    Adkin, A; Brouwer, A; Downs, S H; Kelly, L

    2016-01-01

    The adoption of bovine tuberculosis (bTB) risk-based trading (RBT) schemes has the potential to reduce the risk of bTB spread. However, any scheme will have cost implications that need to be balanced against its likely success in reducing bTB. This paper describes the first stochastic quantitative model assessing the impact of the implementation of a cattle risk-based trading scheme to inform policy makers and contribute to cost-benefit analyses. A risk assessment for England and Wales was developed to estimate the number of infected cattle traded using historic movement data recorded between July 2010 and June 2011. Three scenarios were implemented: cattle traded with no RBT scheme in place, voluntary provision of the score and a compulsory, statutory scheme applying a bTB risk score to each farm. For each scenario, changes in trade were estimated due to provision of the risk score to potential purchasers. An estimated mean of 3981 bTB infected animals were sold to purchasers with no RBT scheme in place in one year, with 90% confidence the true value was between 2775 and 5288. This result is dependent on the estimated between herd prevalence used in the risk assessment which is uncertain. With the voluntary provision of the risk score by farmers, on average, 17% of movements was affected (purchaser did not wish to buy once the risk score was available), with a reduction of 23% in infected animals being purchased initially. The compulsory provision of the risk score in a statutory scheme resulted in an estimated mean change to 26% of movements, with a reduction of 37% in infected animals being purchased initially, increasing to a 53% reduction in infected movements from higher risk sellers (score 4 and 5). 
The estimated mean reduction in infected animals being purchased could be improved to 45% given a 10% reduction in risky purchase behaviour by farmers which may be achieved through education programmes, or to an estimated mean of 49% if a rule was implemented preventing farmers from the purchase of animals of higher risk than their own herd. Given voluntary trials currently taking place of a trading scheme, recommendations for future work include the monitoring of initial uptake and changes in the purchase patterns of farmers. Such data could be used to update the risk assessment to reduce uncertainty associated with model estimates. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.

  3. Cancer Risk Prediction and Assessment

    Cancer.gov

    Cancer prediction models provide an important approach to assessing risk and prognosis by identifying individuals at high risk, facilitating the design and planning of clinical cancer trials, fostering the development of benefit-risk indices, and enabling estimates of the population burden and cost of cancer.

  4. Integration of second cancer risk calculations in a radiotherapy treatment planning system

    NASA Astrophysics Data System (ADS)

    Hartmann, M.; Schneider, U.

    2014-03-01

    Second cancer risk in patients, in particular in children, who were treated with radiotherapy is an important side effect. It should be minimized by selecting an appropriate treatment plan for the patient. The objectives of this study were to integrate a risk model for radiation-induced cancer into a treatment planning system, allowing different treatment plans to be compared with regard to second cancer induction, and to quantify the potential reduction in predicted risk. A model for radiation-induced cancer including fractionation effects, valid for doses in the radiotherapy range, was integrated into a treatment planning system. From the three-dimensional (3D) dose distribution, the 3D risk-equivalent dose (RED) was calculated on an organ-specific basis. In addition to RED, further risk coefficients such as OED (organ equivalent dose), EAR (excess absolute risk) and LAR (lifetime attributable risk) are computed. A risk model for radiation-induced cancer was successfully integrated into a treatment planning system. Several risk coefficients can be viewed and used to identify critical situations where a plan can be optimised. Risk-volume histograms and organ-specific risks were calculated for different treatment plans and were used in combination with NTCP estimates for plan evaluation. It is concluded that the integration of second cancer risk estimates in a commercial treatment planning system is feasible. It can be used in addition to NTCP modelling to optimise treatment plans so that they result in the lowest possible second cancer risk for a patient.
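    An organ equivalent dose of the kind computed here is a volume-weighted average of a risk-equivalent dose over the organ's voxels. The sketch below uses a linear-exponential RED shape and an assumed cell-sterilization parameter and risk coefficient; the four-voxel dose arrays are placeholders for real 3D dose grids:

```python
import numpy as np

def oed(dose_gy, voxel_volumes, alpha=0.065):
    """Organ equivalent dose: volume-weighted mean of RED(d) = d*exp(-alpha*d),
    a linear-exponential dose-response (alpha value assumed here)."""
    red = dose_gy * np.exp(-alpha * dose_gy)
    return np.average(red, weights=voxel_volumes)

# Hypothetical per-voxel organ doses (Gy) for two candidate plans
plan_a = np.array([0.5, 1.0, 2.0, 4.0])
plan_b = np.array([1.5, 2.0, 2.0, 2.0])
vols = np.ones_like(plan_a)               # equal voxel volumes

ear_per_oed = 8.0   # assumed excess absolute risk per 10,000 PY per Gy of OED
ear_a = ear_per_oed * oed(plan_a, vols)
ear_b = ear_per_oed * oed(plan_b, vols)
```

    Comparing `ear_a` and `ear_b` alongside NTCP values is the plan-evaluation step the paper describes; risk-volume histograms follow by binning RED per voxel instead of averaging it.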

  5. Essays in applied microeconomics

    NASA Astrophysics Data System (ADS)

    Davis, Lucas William

    2005-11-01

    The first essay measures the impact of an outbreak of pediatric leukemia on local housing values. A model of residential location choice is used to describe conditions under which the gradient of the hedonic price function with respect to health risk is equal to household marginal willingness to pay to avoid pediatric leukemia risk. This equalizing differential is estimated using property-level sales records from a county in Nevada where residents have recently experienced a severe increase in pediatric leukemia. Housing values are compared before and after the increase with a nearby county acting as a control group. The results indicate that housing values decreased 15.6% during the period of maximum risk. Results are similar for alternative measures of risk and across houses of different sizes. With risk estimates derived using a Bayesian learning model the results imply a statistical value of pediatric leukemia of $5.6 million. The results from the paper provide some of the first market-based estimates of the value of health for children. The second essay evaluates the cost-effectiveness of public incentives that encourage households to purchase high-efficiency durable goods. The demand for durable goods and the demand for energy and other inputs are modeled jointly as the solution to a household production problem. The empirical analysis focuses on the case of clothes washers. The production technology and utilization decision are estimated using household-level data from field trials in which participants received front-loading clothes washers free of charge. The estimation strategy exploits this quasi-random replacement of washers to derive robust estimates of the utilization decision. The results indicate a price elasticity, -.06, that is statistically different from zero across specifications. The parameters from the utilization decision are used to estimate the purchase decision using data from the Consumer Expenditure Survey, 1994-2002. 
Households consider optimal utilization levels, purchase prices, water rates, energy rates and other factors when deciding which clothes washer to purchase. The complete model is used to simulate the effects of rebate programs and other policies on adoption patterns of clothes washers and household demand for water and energy.

  6. Tier I Rice Model - Version 1.0 - Guidance for Estimating Pesticide Concentrations in Rice Paddies

    EPA Pesticide Factsheets

    Describes a Tier I Rice Model (Version 1.0) for estimating surface water exposure from the use of pesticides in rice paddies. The concentration calculated can be used for aquatic ecological risk and drinking water exposure assessments.

  7. Fire spread estimation on forest wildfire using ensemble kalman filter

    NASA Astrophysics Data System (ADS)

    Syarifah, Wardatus; Apriliani, Erna

    2018-04-01

    Wildfire is one of the most frequent disasters in the world; forest wildfires, for example, cause forest populations to decline. Forest wildfires, whether naturally occurring or prescribed, pose potential risks to ecosystems and human settlements. These risks can be managed by monitoring the weather, prescribing fires to limit available fuel, and creating firebreaks. With computer simulations we can predict and explore how fires may spread. A model of fire spread in forest wildfire was established to determine the fire properties. The fire spread model is based on a diffusion-reaction equation. There are many methods to estimate the spread of fire. The Ensemble Kalman Filter is a modified estimation method based on the Kalman Filter algorithm that can be used to estimate linear and non-linear system models. In this research we apply the Ensemble Kalman Filter (EnKF) method to estimate the spread of fire in a forest wildfire. Before applying the EnKF method, the fire spread model is discretized using the finite difference method. Finally, the analysis is illustrated by numerical simulation. The simulation results show that the EnKF estimate tracks the system model more closely as the ensemble size increases and as the covariances of the system model and the measurement noise decrease.
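    A minimal version of this pipeline: a 1-D diffusion model discretized by finite differences, with an EnKF assimilating sparse noisy observations of the field. Grid size, ensemble size, noise levels, and the inflation factor are illustrative assumptions, not values from the paper (which uses a full diffusion-reaction model):

```python
import numpy as np

rng = np.random.default_rng(3)
nx, n_ens, n_steps = 50, 40, 60
dt, dx, D = 0.1, 1.0, 1.0                 # dt*D/dx^2 = 0.1, explicitly stable

def step(u):
    """Explicit finite-difference step of 1-D diffusion (periodic boundary)."""
    lap = np.roll(u, 1, axis=-1) - 2 * u + np.roll(u, -1, axis=-1)
    return u + dt * D * lap / dx**2

obs_idx = np.arange(0, nx, 5)             # sparse sensor locations
obs_sd = 0.05

truth = np.exp(-0.5 * ((np.arange(nx) - 25) / 3.0) ** 2)  # initial hot spot
ens = rng.normal(0.0, 0.3, (n_ens, nx))   # ensemble starts from a poor guess
free = ens.copy()                         # free run with no assimilation

H = np.zeros((obs_idx.size, nx))
H[np.arange(obs_idx.size), obs_idx] = 1.0  # observation operator

for _ in range(n_steps):
    truth, ens, free = step(truth), step(ens), step(free)
    y = truth[obs_idx] + rng.normal(0, obs_sd, obs_idx.size)
    # EnKF analysis: gain from ensemble covariance, perturbed observations
    A = ens - ens.mean(0)
    P = A.T @ A / (n_ens - 1)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_sd**2 * np.eye(obs_idx.size))
    y_pert = y + rng.normal(0, obs_sd, (n_ens, obs_idx.size))
    ens = ens + (y_pert - ens[:, obs_idx]) @ K.T
    m = ens.mean(0)
    ens = m + 1.05 * (ens - m)            # mild covariance inflation

rmse_enkf = np.sqrt(np.mean((ens.mean(0) - truth) ** 2))
rmse_free = np.sqrt(np.mean((free.mean(0) - truth) ** 2))
```

    The assimilated ensemble mean tracks the true field far better than the free run, which is the behaviour the abstract reports; increasing `n_ens` tightens the sampled covariance and improves the gain.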

  8. Modelling the risk of Taenia solium exposure from pork produced in western Kenya.

    PubMed

    Thomas, Lian F; de Glanville, William A; Cook, Elizabeth A J; Bronsvoort, Barend M De C; Handel, Ian; Wamae, Claire N; Kariuki, Samuel; Fèvre, Eric M

    2017-02-01

    The tapeworm Taenia solium is the parasite responsible for neurocysticercosis, a neglected tropical disease of public health importance, thought to cause approximately 1/3 of epilepsy cases across endemic regions. The consumption of undercooked infected pork perpetuates the parasite's life-cycle through the establishment of adult tapeworm infections in the community. Reducing the risk associated with pork consumption in the developing world is therefore a public health priority. The aim of this study was to estimate the risk of any one pork meal in western Kenya containing a potentially infective T. solium cysticercus at the point of consumption, an aspect of the parasite transmission that has not been estimated before. To estimate this, we used a quantitative food chain risk assessment model built in the @RISK add-on to Microsoft Excel. This model indicates that any one pork meal consumed in western Kenya has a 0.006 (99% Uncertainty Interval (U.I.) 0.0002-0.0164) probability of containing at least one viable T. solium cysticercus at the point of consumption and therefore being potentially infectious to humans. This equates to 22,282 (99% U.I. 622-64,134) potentially infective pork meals consumed in the course of one year within Busia District alone. This model indicates a high risk of T. solium infection associated with pork consumption in western Kenya and the work presented here can be built upon to investigate the efficacy of various mitigation strategies for this locality.
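    A quantitative food-chain model of this sort chains sampled distributions from prevalence to the plate. The sketch below mimics the structure in plain Python; every distribution and parameter value is a hypothetical placeholder, not a value fitted by the authors:

```python
import numpy as np

rng = np.random.default_rng(11)
N = 100_000

# Hypothetical food-chain parameters (illustrative, not the fitted values):
prev = rng.beta(4, 96, N)                    # slaughter-pig prevalence
portion_frac = 0.002                         # fraction of a carcase per meal
cysts = rng.lognormal(np.log(200), 1.0, N)   # cysts per infected carcase
p_survive = rng.beta(2, 18, N)               # a cyst survives cooking

# P(>= 1 viable cyst | infected pig): Poisson approximation for cyst counts
mean_viable = cysts * portion_frac * p_survive
p_meal = prev * (1.0 - np.exp(-mean_viable))

lo, med, hi = np.percentile(p_meal, [0.5, 50, 99.5])
print(f"P(infective meal): {med:.4f} (99% interval {lo:.5f}-{hi:.4f})")
```

    Multiplying the per-meal probability by annual meal counts gives the district-level burden quoted in the abstract; mitigation scenarios (better cooking, meat inspection) are tested by shifting the relevant input distribution.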

  10. Cost Estimation and Control for Flight Systems

    NASA Technical Reports Server (NTRS)

    Hammond, Walter E.; Vanhook, Michael E. (Technical Monitor)

    2002-01-01

    Good program management practices, cost analysis, cost estimation, and cost control for aerospace flight systems are interrelated and depend upon each other. The best cost control process cannot overcome poor design or poor systems trades that lead to the wrong approach. The project needs robust Technical, Schedule, Cost, Risk, and Cost Risk practices before it can incorporate adequate Cost Control. Cost analysis both precedes and follows cost estimation -- the two are closely coupled with each other and with Risk analysis. Parametric cost estimating relationships and computerized models are most often used. NASA has learned some valuable lessons in controlling cost problems, and recommends use of a summary Project Manager's checklist as shown here.

  11. Geographic exposure risk of variant Creutzfeldt-Jakob disease in US blood donors: a risk-ranking model to evaluate alternative donor-deferral policies.

    PubMed

    Yang, Hong; Huang, Yin; Gregori, Luisa; Asher, David M; Bui, Travis; Forshee, Richard A; Anderson, Steven A

    2017-04-01

    Variant Creutzfeldt-Jakob disease (vCJD) has been transmitted by blood transfusion (TTvCJD). The US Food and Drug Administration (FDA) recommends deferring blood donors who resided in or traveled to 30 European countries where they may have been exposed to bovine spongiform encephalopathy (BSE) through beef consumption. Those recommendations warrant re-evaluation, because new cases of BSE and vCJD have markedly abated. The FDA developed a risk-ranking model to calculate the geographic vCJD risk using country-specific case rates and person-years of exposure of US blood donors. We used the reported country vCJD case rates, when available, or imputed vCJD case rates from reported BSE and UK beef exports during the risk period. We estimated the risk reduction and donor loss should the deferral be restricted to a few high-risk countries. We also estimated additional risk reduction by leukocyte reduction (LR) of red blood cells (RBCs). The United Kingdom, Ireland, and France had the greatest vCJD risk, contributing approximately 95% of the total risk. The model estimated that deferring US donors who spent extended periods of time in these three countries, combined with currently voluntary LR (95% of RBC units), would reduce the vCJD risk by 89.3%, a reduction similar to that achieved under the current policy (89.8%). Limiting deferrals to exposure in these three countries would potentially allow donations from an additional 100,000 donors who are currently deferred. Our analysis suggests that a deferral option focusing on the three highest risk countries would achieve a level of blood safety similar to that achieved by the current policy. © 2016 AABB.
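    The arithmetic of such a risk-ranking model is straightforward: each country's share of total risk is its case rate weighted by donor person-years of exposure, and policy options are scored by the share of risk they remove. All figures below are hypothetical placeholders, not the FDA's inputs:

```python
# Hypothetical donor person-years and relative per-person-year vCJD risk:
exposure = {"UK": 1.2e6, "Ireland": 0.4e6, "France": 2.5e6, "Other27": 6.0e6}
case_rate = {"UK": 1.0, "Ireland": 0.15, "France": 0.05, "Other27": 0.002}

risk = {c: exposure[c] * case_rate[c] for c in exposure}
total = sum(risk.values())
share_top3 = sum(risk[c] for c in ("UK", "Ireland", "France")) / total

# Residual risk from the other countries is further cut by leukoreduction
lr_effect = 0.5    # assumed vCJD risk reduction per leukoreduced RBC unit
coverage = 0.95    # assumed fraction of RBC units leukoreduced
reduction = share_top3 + (1 - share_top3) * coverage * lr_effect

print(f"top-3 risk share: {share_top3:.3f}, total risk reduction: {reduction:.3f}")
```

    With risk this concentrated, deferring only the top three countries plus voluntary leukoreduction removes nearly all of the modeled risk, which is the qualitative result the paper reports.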

  12. Applicability and feasibility of systematic review for performing evidence-based risk assessment in food and feed safety.

    PubMed

    Aiassa, E; Higgins, J P T; Frampton, G K; Greiner, M; Afonso, A; Amzal, B; Deeks, J; Dorne, J-L; Glanville, J; Lövei, G L; Nienstedt, K; O'connor, A M; Pullin, A S; Rajić, A; Verloo, D

    2015-01-01

    Food and feed safety risk assessment uses multi-parameter models to evaluate the likelihood of adverse events associated with exposure to hazards in human health, plant health, animal health, animal welfare, and the environment. Systematic review and meta-analysis are established methods for answering questions in health care, and can be implemented to minimize biases in food and feed safety risk assessment. However, no methodological frameworks exist for refining risk assessment multi-parameter models into questions suitable for systematic review, and use of meta-analysis to estimate all parameters required by a risk model may not always be feasible. This paper describes novel approaches for determining question suitability and for prioritizing questions for systematic review in this area. Risk assessment questions that aim to estimate a parameter are likely to be suitable for systematic review. Such questions can be structured by their "key elements" [e.g., for intervention questions, the population(s), intervention(s), comparator(s), and outcome(s)]. Prioritization of questions to be addressed by systematic review relies on the likely impact and related uncertainty of individual parameters in the risk model. This approach to planning and prioritizing systematic review has useful implications for producing evidence-based food and feed safety risk assessment.

  13. The neural representation of unexpected uncertainty during value-based decision making.

    PubMed

    Payzan-LeNestour, Elise; Dunne, Simon; Bossaerts, Peter; O'Doherty, John P

    2013-07-10

    Uncertainty is an inherent property of the environment and a central feature of models of decision-making and learning. Theoretical propositions suggest that one form, unexpected uncertainty, may be used to rapidly adapt to changes in the environment, while being influenced by two other forms: risk and estimation uncertainty. While previous studies have reported neural representations of estimation uncertainty and risk, relatively little is known about unexpected uncertainty. Here, participants performed a decision-making task while undergoing functional magnetic resonance imaging (fMRI), which, in combination with a Bayesian model-based analysis, enabled us to examine each form of uncertainty separately. We found representations of unexpected uncertainty in multiple cortical areas, as well as the noradrenergic brainstem nucleus locus coeruleus. Other unique cortical regions were found to encode risk, estimation uncertainty, and learning rate. Collectively, these findings support theoretical models in which several formally separable uncertainty computations determine the speed of learning. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Air pollution, health and social deprivation: A fine-scale risk assessment.

    PubMed

    Morelli, Xavier; Rieux, Camille; Cyrys, Josef; Forsberg, Bertil; Slama, Rémy

    2016-05-01

    Risk assessment studies often ignore within-city variations of air pollutants. Our objective was to quantify the risk associated with fine particulate matter (PM2.5) exposure in 2 urban areas using fine-scale air pollution modeling and to characterize how this risk varied according to social deprivation. In the Grenoble and Lyon areas (0.4 and 1.2 million inhabitants, respectively) in 2012, PM2.5 exposure was estimated on a 10 m × 10 m grid by coupling a dispersion model to population density. Outcomes were mortality, lung cancer and term low birth weight incidences. Cases attributable to air pollution were estimated overall and by stratifying areas according to the European Deprivation Index (EDI), taking a 10 µg/m³ yearly average as the reference (counterfactual) level. Estimations were repeated assuming spatial homogeneity of air pollutants within each urban area. Median PM2.5 levels were 18.1 and 19.6 µg/m³ in the Grenoble and Lyon urban areas, respectively, corresponding to 114 (5.1% of total, 95% confidence interval, CI, 3.2-7.0%) and 491 non-accidental deaths (6.0% of total, 95% CI 3.7-8.3%) attributable to long-term exposure to PM2.5, respectively. Attributable term low birth weight cases represented 23.6% of total cases (9.0-37.1%) in Grenoble and 27.6% of cases (10.7-42.6%) in Lyon. In Grenoble, 6.8% of incident lung cancer cases were attributable to air pollution (95% CI 3.1-10.1%). Risk was lower by 8 to 20% when estimating exposure through background stations. Risk was highest in neighborhoods with intermediate to higher social deprivation. Risk assessment studies relying on background stations to estimate air pollution levels may underestimate the attributable risk. Copyright © 2016 Elsevier Inc. All rights reserved.
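    Attributable-case figures of this kind typically come from a standard health-impact-assessment calculation: a log-linear concentration-response function converts the excess concentration over the counterfactual into a relative risk, and the resulting attributable fraction is applied to observed case counts. A minimal sketch, with an assumed coefficient and invented case totals rather than the study's inputs:

```python
import math

# Hedged sketch of a standard attributable-cases calculation for PM2.5 above
# a counterfactual level. The coefficient and case counts are illustrative,
# not the values used in the study above.

def attributable_cases(conc, counterfactual, beta, total_cases):
    """Attributable fraction via a log-linear concentration-response function."""
    delta = max(conc - counterfactual, 0.0)
    rr = math.exp(beta * delta)          # relative risk for the excess exposure
    af = (rr - 1.0) / rr                 # population attributable fraction
    return af, af * total_cases

# Assumed coefficient: a relative risk of ~1.06 per 10 ug/m3 of PM2.5,
# a commonly cited magnitude, used here purely for illustration.
beta = math.log(1.06) / 10.0
af, cases = attributable_cases(conc=18.1, counterfactual=10.0,
                               beta=beta, total_cases=2240)
print(round(af, 3), round(cases, 1))
```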

  15. Non-Targeted Effects and the Dose Response for Heavy Ion Tumorigenesis

    NASA Technical Reports Server (NTRS)

    Chappell, Lori J.; Cucinotta, Francis A.

    2010-01-01

    There is no human epidemiology data available to estimate the heavy ion cancer risks experienced by astronauts in space. Studies of tumor induction in mice are a necessary step to estimate risks to astronauts. Previous experimental data can be better utilized to model dose response for heavy ion tumorigenesis and plan future low dose studies.

  16. The 2006 William Feinberg lecture: shifting the paradigm from stroke to global vascular risk estimation.

    PubMed

    Sacco, Ralph L

    2007-06-01

    By the year 2010, it is estimated that 18.1 million people worldwide will die annually because of cardiovascular diseases and stroke. "Global vascular risk" more broadly includes the multiple overlapping disease silos of stroke, myocardial infarction, peripheral arterial disease, and vascular death. Estimation of global vascular risk requires consideration of a variety of variables including demographics, environmental behaviors, and risk factors. Data from multiple studies suggest continuous linear relationships between vascular risk and the physiological risk modulators blood pressure, lipids, and blood glucose, rather than treating these conditions as categorical risk factors. Constellations of risk factors may be more relevant than individual categorical components. Exciting work with novel risk factors may also have predictive value in estimates of global vascular risk. Advances in imaging have led to the measurement of subclinical conditions such as carotid intima-media thickness and subclinical brain conditions such as white matter hyperintensities and silent infarcts. These subclinical measurements may be intermediate stages in the transition from asymptomatic to symptomatic vascular events, appear to be associated with the fundamental vascular risk factors, and represent opportunities to more precisely quantitate disease progression. The expansion of studies in molecular epidemiology and detection of genetic markers underlying vascular risks also promises to extend our precision of global vascular risk estimation. Global vascular risk estimation will require quantitative methods that bundle these multi-dimensional data into more precise estimates of future risk. The power of genetic information coupled with data on demographics, risk-inducing behaviors, vascular risk modulators, biomarkers, and measures of subclinical conditions should provide the most realistic approximation of an individual's future global vascular risk.
The ultimate public health benefit, however, will depend on not only identification of global vascular risk but also the realization that we can modify this risk and prove the prediction models wrong.

  17. Identifying crash-prone traffic conditions under different weather on freeways.

    PubMed

    Xu, Chengcheng; Wang, Wei; Liu, Pan

    2013-09-01

    Understanding the relationships between traffic flow characteristics and crash risk under adverse weather conditions will help highway agencies develop proactive safety management strategies to improve traffic safety in adverse weather conditions. The primary objective is to develop separate crash risk prediction models for different weather conditions. The crash data, weather data, and traffic data used in this study were collected on the I-880N freeway in California in 2008 and 2010. This study considered three different weather conditions: clear weather, rainy weather, and reduced visibility weather. The preliminary analysis showed that there was some heterogeneity in the risk estimates for traffic flow characteristics by weather conditions, and that the crash risk prediction model for all weather conditions cannot capture the impacts of the traffic flow variables on crash risk under adverse weather conditions. The Bayesian random intercept logistic regression models were applied to link the likelihood of crash occurrence with various traffic flow characteristics under different weather conditions. The crash risk prediction models were compared to their corresponding logistic regression model. It was found that the random intercept model improved the goodness-of-fit of the crash risk prediction models. The model estimation results showed that the traffic flow characteristics contributing to crash risk were different across different weather conditions. The speed difference between upstream and downstream stations was found to be significant in each crash risk prediction model. Speed difference between upstream and downstream stations had the largest impact on crash risk in reduced visibility weather, followed by that in rainy weather. The ROC curves were further developed to evaluate the predictive performance of the crash risk prediction models under different weather conditions. 
The predictive performance of the crash risk model for clear weather was better than those of the crash risk models for adverse weather conditions. The research results could promote a better understanding of the impacts of traffic flow characteristics on crash risk under adverse weather conditions, which will help transportation professionals to develop better crash prevention strategies in adverse weather. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.
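    A minimal sketch of the kind of logistic link described above, relating the upstream-downstream speed difference to crash odds with weather-specific coefficients. The coefficients below are hypothetical illustrations, and the paper itself uses Bayesian random intercept models rather than this plain logistic form.

```python
import math

# Toy weather-specific logistic crash-risk model. Each weather condition gets
# its own (intercept, slope) pair on the upstream-downstream speed difference.
# Coefficient values are invented for illustration, not the paper's estimates.

COEF = {  # weather: (intercept, slope per km/h of speed difference)
    "clear":              (-6.0, 0.04),
    "rain":               (-5.5, 0.07),
    "reduced_visibility": (-5.0, 0.10),
}

def crash_prob(speed_diff, weather):
    """Probability of crash occurrence from the logistic link."""
    b0, b1 = COEF[weather]
    z = b0 + b1 * speed_diff
    return 1.0 / (1.0 + math.exp(-z))

# The same speed difference implies higher risk in worse weather:
for w in COEF:
    print(w, round(crash_prob(20.0, w), 4))
```

    With coefficients of this shape, a given speed differential raises crash odds most under reduced visibility, matching the ordering the abstract reports.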

  18. Does Implementation of Biomathematical Models Mitigate Fatigue and Fatigue-related Risks in Emergency Medical Services Operations? A Systematic Review

    DOT National Transportation Integrated Search

    2018-01-11

    Background: Work schedules like those of Emergency Medical Services (EMS) personnel have been associated with increased risk of fatigue-related impairment. Biomathematical modeling is a means of objectively estimating the potential impacts of fatigue...

  19. Characterizing Uncertainty and Variability in PBPK Models: State of the Science and Needs for Research and Implementation

    EPA Science Inventory

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variabilit...

  20. PROPOSED SUITE OF MODELS FOR ESTIMATING DOSE RESULTING FROM EXPOSURES BY THE DERMAL ROUTE

    EPA Science Inventory

    Recent risk assessment guidance emphasizes consideration of mechanistic factors for influencing disposition of a toxicant. To incorporate mechanistic information into risk assessment, a suite of models is proposed for use in characterizing and quantifying dosimetry of toxic age...

  1. MODELING APPROACHES FOR ESTIMATING THE DOSIMETRY OF INHALED TOXICANTS IN CHILDREN

    EPA Science Inventory

    Risk assessment of inhaled toxicants has typically focused upon adults, with modeling used to extrapolate dosimetry and risks from laboratory animals to humans. However, behavioral factors such as time spent playing outdoors can lead to more exposure to inhaled toxicants in chil...

  2. Quantifying Cancer Risk from Radiation.

    PubMed

    Keil, Alexander P; Richardson, David B

    2017-12-06

    Complex statistical models fitted to data from studies of atomic bomb survivors are used to estimate the human health effects of ionizing radiation exposures. We describe and illustrate an approach to estimate population risks from ionizing radiation exposure that relaxes many assumptions about radiation-related mortality. The approach draws on developments in methods for causal inference. The results offer a different way to quantify radiation's effects and show that conventional estimates of the population burden of excess cancer at high radiation doses are driven strongly by projecting outside the range of current data. Summary results obtained using the proposed approach are similar in magnitude to those obtained using conventional methods, although estimates of radiation-related excess cancers differ for many age, sex, and dose groups. At low doses relevant to typical exposures, the strength of evidence in data is surprisingly weak. Statements regarding human health effects at low doses rely strongly on the use of modeling assumptions. © 2017 Society for Risk Analysis.

  3. Use of Longitudinal Data in Genetic Studies in the Genome-wide Association Studies Era: Summary of Group 14

    PubMed Central

    Kerner, Berit; North, Kari E; Fallin, M Daniele

    2010-01-01

    Participants analyzed actual and simulated longitudinal data from the Framingham Heart Study for various metabolic and cardiovascular traits. The genetic information incorporated into these investigations ranged from selected single-nucleotide polymorphisms to genome-wide association arrays. Genotypes were incorporated using a broad range of methodological approaches including conditional logistic regression, linear mixed models, generalized estimating equations, linear growth curve estimation, growth modeling, growth mixture modeling, population attributable risk fraction based on survival functions under the proportional hazards models, and multivariate adaptive splines for the analysis of longitudinal data. The specific scientific questions addressed by these different approaches also varied, ranging from a more precise definition of the phenotype, bias reduction in control selection, estimation of effect sizes and genotype associated risk, to direct incorporation of genetic data into longitudinal modeling approaches and the exploration of population heterogeneity with regard to longitudinal trajectories. The group reached several overall conclusions: 1) The additional information provided by longitudinal data may be useful in genetic analyses. 2) The precision of the phenotype definition as well as control selection in nested designs may be improved, especially if traits demonstrate a trend over time or have strong age-of-onset effects. 3) Analyzing genetic data stratified for high-risk subgroups defined by a unique development over time could be useful for the detection of rare mutations in common multi-factorial diseases. 4) Estimation of the population impact of genomic risk variants could be more precise. The challenges and computational complexity demanded by genome-wide single-nucleotide polymorphism data were also discussed. PMID:19924713

  4. Do causal concentration-response functions exist? A critical review of associational and causal relations between fine particulate matter and mortality.

    PubMed

    Cox, Louis Anthony Tony

    2017-08-01

    Concentration-response (C-R) functions relating concentrations of pollutants in ambient air to mortality risks or other adverse health effects provide the basis for many public health risk assessments, benefits estimates for clean air regulations, and recommendations for revisions to existing air quality standards. The assumption that C-R functions relating levels of exposure and levels of response estimated from historical data usefully predict how future changes in concentrations would change risks has seldom been carefully tested. This paper critically reviews literature on C-R functions for fine particulate matter (PM2.5) and mortality risks. We find that most of them describe historical associations rather than valid causal models for predicting effects of interventions that change concentrations. The few papers that explicitly attempt to model causality rely on unverified modeling assumptions, casting doubt on their predictions about effects of interventions. A large literature on modern causal inference algorithms for observational data has been little used in C-R modeling. Applying these methods to publicly available data from Boston and the South Coast Air Quality Management District around Los Angeles shows that C-R functions estimated for one do not hold for the other. Changes in month-specific PM2.5 concentrations from one year to the next do not help to predict corresponding changes in average elderly mortality rates in either location. Thus, the assumption that estimated C-R relations predict effects of pollution-reducing interventions may not be true. Better causal modeling methods are needed to better predict how reducing air pollution would affect public health.

  5. CURRENT USE AND FUTURE NEEDS OF BIODOSIMETRY IN STUDIES OF LONG-TERM HEALTH RISK FOLLOWING RADIATION EXPOSURE

    PubMed Central

    Simon, Steven L.; Bouville, André; Kleinerman, Ruth

    2009-01-01

    Biodosimetry measurements can potentially be an important and integral part of the dosimetric methods used in long-term studies of health risk following radiation exposure. Such studies rely on accurate estimation of doses to the whole body or to specific organs of individuals in order to derive reliable estimates of cancer risk. However, dose estimates based on analytical dose reconstruction (i.e., models) or personnel monitoring measurements, e.g., film badges, can have substantial uncertainty. Biodosimetry can potentially reduce uncertainty in health risk studies by corroborating model-based dose estimates or by assessing bias in dose models. While biodosimetry has begun to play a more significant role in long-term health risk studies, its use in that context is still generally limited due to one or more factors, including inadequate limits of detection, large inter-individual variability of the signal measured, high per-sample cost, and invasiveness. Presently, the most suitable biodosimetry methods for epidemiologic studies are chromosome aberration frequencies from fluorescence in situ hybridization (FISH) of peripheral blood lymphocytes and electron paramagnetic resonance (EPR) measurements made on tooth enamel. Both types of measurements, however, are usually invasive and require difficult-to-obtain biological samples. Moreover, doses derived from these methods are not always directly relevant to the tissues of interest. To increase the value of biodosimetry to epidemiologic studies, a number of issues need to be considered, including limits of detection, effects of inhomogeneous exposure of the body, how to extrapolate from the tissue sampled to the tissues of interest, and how to adjust dosimetry models applied to large populations based on sparse biodosimetry measurements. 
The requirements of health risk studies suggest a set of characteristics that, if satisfied by new biodosimetry methods, would increase the overall usefulness of biodosimetry to determining radiation health risks. PMID:20065672

  6. Influence of Taxonomic Relatedness and Chemical Mode of Action in Acute Interspecies Estimation Models for Aquatic species

    EPA Science Inventory

    Ecological risks to aquatic organisms are typically assessed using toxicity data for relatively few species and with limited understanding of relative species sensitivity. We developed a comprehensive set of interspecies correlation estimation (ICE) models for aquatic organisms a...

  7. The Effects of Climate Model Similarity on Local, Risk-Based Adaptation Planning

    NASA Astrophysics Data System (ADS)

    Steinschneider, S.; Brown, C. M.

    2014-12-01

    The climate science community has recently proposed techniques to develop probabilistic projections of climate change from ensemble climate model output. These methods provide a means to incorporate the formal concept of risk, i.e., the product of impact and probability, into long-term planning assessments for local systems under climate change. However, approaches for pdf development often assume that different climate models provide independent information for the estimation of probabilities, despite model similarities that stem from a common genealogy. Here we utilize an ensemble of projections from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to develop probabilistic climate information, with and without an accounting of inter-model correlations, and use it to estimate climate-related risks to a local water utility in Colorado, U.S. We show that the tail risk of extreme climate changes in both mean precipitation and temperature is underestimated if model correlations are ignored. When coupled with impact models of the hydrology and infrastructure of the water utility, the underestimation of extreme climate changes substantially alters the quantification of risk for water supply shortages by mid-century. We argue that progress in climate change adaptation for local systems requires the recognition that there is less information in multi-model climate ensembles than previously thought. Importantly, adaptation decisions cannot be limited to the spread in one generation of climate models.
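    The underestimation mechanism is easy to see in a toy calculation: for n projections with pairwise correlation rho, the variance of the ensemble mean is sigma^2/n times (1 + (n - 1) rho), so treating correlated models as independent shrinks the spread and, with it, the probability assigned to extreme changes. The numbers below are illustrative, not CMIP5 statistics.

```python
import math

# Toy demonstration: ignoring inter-model correlation understates tail risk.
# Var(ensemble mean) = sigma^2/n * (1 + (n-1)*rho); this collapses to
# sigma^2/n only when rho = 0. All numbers are illustrative.

def ensemble_mean_sd(sigma, n, rho):
    """Standard deviation of the mean of n equally correlated projections."""
    return sigma * math.sqrt((1.0 + (n - 1) * rho) / n)

def tail_prob(threshold, mean, sd):
    """P(change > threshold) under a normal approximation."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))

sigma, n = 1.0, 30                       # per-model spread (deg C), ensemble size
sd_indep = ensemble_mean_sd(sigma, n, rho=0.0)   # independence assumed
sd_corr  = ensemble_mean_sd(sigma, n, rho=0.5)   # shared-genealogy correlation
p_indep = tail_prob(2.0, mean=1.0, sd=sd_indep)
p_corr  = tail_prob(2.0, mean=1.0, sd=sd_corr)
print(round(sd_indep, 3), round(sd_corr, 3), p_indep < p_corr)
```

    The correlated case keeps a much wider spread, so the probability of an extreme change stays non-negligible instead of vanishing, which is the sense in which there is "less information" in the multi-model ensemble than an independence assumption implies.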

  8. A framework for quantifying net benefits of alternative prognostic models.

    PubMed

    Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G

    2012-01-30

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd.

  9. Modelling the economic impact of three lameness causing diseases using herd and cow level evidence.

    PubMed

    Ettema, Jehan; Østergaard, Søren; Kristensen, Anders Ringgaard

    2010-06-01

    Diseases of the cow's hoof, interdigital skin and legs are highly prevalent and of large economic impact in modern dairy farming. In order to support farmers' decisions on preventing and treating lameness and its underlying causes, decision support models can be used to predict the economic profitability of such actions. An existing approach of modelling lameness as one health disorder in a dynamic, stochastic and mechanistic simulation model has been improved in two ways. First, three underlying diseases causing lameness were modelled: digital dermatitis, interdigital hyperplasia and claw horn diseases. Second, the existing simulation model was set up in such a way that it uses hyper-distributions describing the disease risk of the three lameness-causing diseases. By combining information on herd-level risk factors with the prevalence of lameness or the prevalence of underlying diseases among cows, marginal posterior probability distributions for disease prevalence in the specific herd are created in a Bayesian network. Random draws from these distributions are used by the simulation model to describe disease risk. In this way, field data on prevalence are used systematically and the uncertainty around herd-specific risk is represented. Besides the fact that the estimated profitability of halving disease risk depended on the hyper-distributions used, the estimates differed for herds with different levels of disease risk and reproductive efficiency. (c) 2010 Elsevier B.V. All rights reserved.

  10. NASA Space Radiation Program Integrative Risk Model Toolkit

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Hu, Shaowen; Plante, Ianik; Ponomarev, Artem L.; Sandridge, Chris

    2015-01-01

    NASA Space Radiation Program Element scientists have been actively involved in development of an integrative risk model toolkit that includes models for acute radiation risk and organ dose projection (ARRBOD), NASA space radiation cancer risk projection (NSCR), hemocyte dose estimation (HemoDose), the GCR event-based risk model code (GERMcode), relativistic ion tracks (RITRACKS), NASA radiation track image (NASARTI), and the On-Line Tool for the Assessment of Radiation in Space (OLTARIS). This session will introduce the components of the risk toolkit with opportunities for hands-on demonstrations. Brief descriptions of the tools: ARRBOD for organ dose projection and acute radiation risk calculation from exposure to solar particle events; NSCR for projection of cancer risk from exposure to space radiation; HemoDose for retrospective dose estimation using multi-type blood cell counts; GERMcode for basic physical and biophysical properties of an ion beam, and biophysical and radiobiological properties of beam transport to the target in the NASA Space Radiation Laboratory beam line; RITRACKS for simulation of heavy ion and delta-ray track structure, radiation chemistry, DNA structure and DNA damage at the molecular scale; NASARTI for modeling the effects of space radiation on human cells and tissue by incorporating a physical model of tracks, cell nucleus, and DNA damage foci with image segmentation for automated counting; and OLTARIS, an integrated tool set utilizing HZETRN (High Charge and Energy Transport) intended to help scientists and engineers study the effects of space radiation on shielding materials, electronics, and biological systems.

  11. A Robust Approach to Risk Assessment Based on Species Sensitivity Distributions.

    PubMed

    Monti, Gianna S; Filzmoser, Peter; Deutsch, Roland C

    2018-05-03

    The guidelines for setting environmental quality standards are increasingly based on probabilistic risk assessment due to a growing general awareness of the need for probabilistic procedures. One of the commonly used tools in probabilistic risk assessment is the species sensitivity distribution (SSD), which represents the proportion of affected species in a biological assemblage as a function of exposure to a specific toxicant. Our focus is on the inverse use of the SSD curve with the aim of estimating the concentration, HCp, of a toxic compound that is hazardous to p% of the biological community under study. Toward this end, we propose the use of robust statistical methods in order to take into account the presence of outliers or apparent skew in the data, which may occur without any ecological basis. A robust approach exploits the full neighborhood of a parametric model, enabling the analyst to account for the typical real-world deviations from ideal models. We examine two classic HCp estimation approaches and consider robust versions of these estimators. In addition, we also use data transformations in conjunction with robust estimation methods in case of heteroscedasticity. Different scenarios using real data sets as well as simulated data are presented in order to illustrate and compare the proposed approaches. These scenarios illustrate that the use of robust estimation methods enhances HCp estimation. © 2018 Society for Risk Analysis.
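    For context, the classical (non-robust) log-normal SSD estimator that robust methods are usually benchmarked against can be sketched as follows: fit a normal distribution to log10 toxicity values and invert it at p. The toxicity values are invented, and a robust variant would swap the mean and standard deviation for, e.g., median- and MAD-based estimates.

```python
import math
import statistics

# Classical log-normal SSD inversion: HCp = 10^(mu + z_p * sd), where mu and
# sd are the mean and standard deviation of log10 toxicity values. A robust
# variant would replace mean/stdev with outlier-resistant estimates.

def hc_p(toxicity_values, p=0.05):
    """Concentration hazardous to a fraction p of species (log-normal SSD)."""
    logs = [math.log10(v) for v in toxicity_values]
    mu = statistics.mean(logs)
    sd = statistics.stdev(logs)
    z = statistics.NormalDist().inv_cdf(p)   # standard-normal quantile at p
    return 10.0 ** (mu + z * sd)

# Invented acute toxicity values (e.g. LC50s, ug/L) for eight species:
data = [12.0, 25.0, 40.0, 55.0, 90.0, 150.0, 300.0, 800.0]
print(round(hc_p(data, 0.05), 2))
```

    The HC5 lands below the most sensitive species' value, as expected for a 5th-percentile estimate; a single aberrant datum can drag this figure substantially, which motivates the robust approaches the paper proposes.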

  12. A probabilistic method for the estimation of residual risk in donated blood.

    PubMed

    Bish, Ebru K; Ragavan, Prasanna K; Bish, Douglas R; Slonim, Anthony D; Stramer, Susan L

    2014-10-01

    The residual risk (RR) of transfusion-transmitted infections, including the human immunodeficiency virus and hepatitis B and C viruses, is typically estimated by the incidence × window-period model, which relies on the following restrictive assumptions: Each screening test, with probability 1, (1) detects an infected unit outside of the test's window period; (2) fails to detect an infected unit within the window period; and (3) correctly identifies an infection-free unit. These assumptions need not hold in practice due to random or systemic errors and individual variations in the window period. We develop a probability model that accurately estimates the RR by relaxing these assumptions, and quantify their impact using a published cost-effectiveness study and also within an optimization model. These assumptions lead to inaccurate estimates in cost-effectiveness studies and to sub-optimal solutions in the optimization model. The testing solution generated by the optimization model translates into fewer expected infections without an increase in the testing cost. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
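    The baseline incidence × window-period model that this paper generalizes reduces to a one-line product: residual risk per donation is approximately the incidence rate among donors times the length of the test's window period. The incidence and window length below are hypothetical illustrations.

```python
# Sketch of the baseline incidence x window-period model. Residual risk per
# donation ~= incidence rate x window period (in years). The figures below
# are hypothetical illustrations, not published surveillance values.

def residual_risk(incidence_per_100k_py, window_days):
    """Residual risk of an infectious donation slipping through screening."""
    incidence_per_py = incidence_per_100k_py / 100_000.0
    return incidence_per_py * (window_days / 365.25)

# e.g. an assumed incidence of 3 per 100,000 person-years and a 9-day window:
rr = residual_risk(3.0, 9.0)
print(f"~1 in {1 / rr:,.0f} donations")
```

    The paper's contribution is to relax the model's implicit perfect-detection assumptions, which this one-liner bakes in.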

  13. External validation of the Garvan nomograms for predicting absolute fracture risk: the Tromsø study.

    PubMed

    Ahmed, Luai A; Nguyen, Nguyen D; Bjørnerem, Åshild; Joakimsen, Ragnar M; Jørgensen, Lone; Størmer, Jan; Bliuc, Dana; Center, Jacqueline R; Eisman, John A; Nguyen, Tuan V; Emaus, Nina

    2014-01-01

    Absolute risk estimation is a preferred approach for assessing fracture risk and treatment decision making. This study aimed to evaluate and validate the predictive performance of the Garvan Fracture Risk Calculator in a Norwegian cohort. The analysis included 1637 women and 1355 men aged 60+ years from the Tromsø study. All incident fragility fractures between 2001 and 2009 were registered. The predicted probabilities of non-vertebral osteoporotic and hip fractures were determined using models with and without BMD. The discrimination and calibration of the models were assessed. Reclassification analysis was used to compare the models' performance. The incidence of osteoporotic and hip fracture was 31.5 and 8.6 per 1000 population in women, respectively; in men the corresponding incidences were 12.2 and 5.1. The predicted 5-year and 10-year probabilities of fractures were consistently higher in the fracture group than in the non-fracture group for all models. The 10-year predicted probabilities of hip fracture in those with fracture were 2.8 (women) to 3.1 (men) times higher than in those without fracture. There was close agreement between predicted and observed risk in both sexes up to the fifth quintile of risk. Among those in the highest quintile of risk, the models over-estimated the risk of fracture. Models with BMD performed better than models with body weight in correctly classifying risk in individuals with and without fracture. The overall net decrease in reclassification of the model with weight compared to the model with BMD was 10.6% (p = 0.008) in women and 17.2% (p = 0.001) in men for osteoporotic fractures, and 13.3% (p = 0.07) in women and 17.5% (p = 0.09) in men for hip fracture. The Garvan Fracture Risk Calculator is valid and clinically useful in identifying individuals at high risk of fracture. The models with BMD performed better than those with body weight in fracture risk prediction.
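    Reclassification comparisons like the ones reported above are commonly summarized by a net reclassification index: net correct upward moves among cases plus net correct downward moves among non-cases, each as a proportion of its group. A minimal sketch with invented counts, not the study's data:

```python
# Minimal net reclassification index (NRI) sketch for comparing two risk
# models across predefined risk categories. All counts are invented.

def nri(up_ev, down_ev, n_ev, up_non, down_non, n_non):
    """NRI = net correct upward reclassification among events
           + net correct downward reclassification among non-events."""
    return (up_ev - down_ev) / n_ev + (down_non - up_non) / n_non

# e.g. model B moves 30 of 200 fracture cases up and 10 down, and moves
# 80 of 1800 non-cases down and 40 up, relative to model A:
value = nri(up_ev=30, down_ev=10, n_ev=200, up_non=40, down_non=80, n_non=1800)
print(round(value, 3))
```

    A positive value favors the second model; the signed percentages quoted in the abstract are differences of this kind expressed per hundred.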

  14. Chapter 7: Description of MISCAN-lung, the Erasmus MC Lung Cancer microsimulation model for evaluating cancer control interventions.

    PubMed

    Schultz, F W; Boer, R; de Koning, H J

    2012-07-01

    The MISCAN-lung model was designed to simulate population trends in lung cancer (LC) for comprehensive surveillance of the disease, to relate past exposure to risk factors to (observed) LC incidence and mortality, and to estimate the impact of cancer-control interventions. MISCAN-lung employs the technique of stochastic microsimulation of life histories affected by risk factors. It includes the two-stage clonal expansion model for carcinogenesis and a detailed LC progression model; the latter is specifically intended for the evaluation of screening. This article further elucidates the principles of MISCAN-lung and describes its application to a comparative study within the CISNET Lung Working Group on the impact of tobacco control on U.S. LC mortality. MISCAN-lung yields an estimate of the number of LC deaths avoided during 1975-2000. The potential number of avoidable LC deaths, had everybody quit smoking in 1965, is 2.2 million; 750,000 deaths (30%) were avoided in the United States due to actual tobacco control interventions. The model fits the actual tobacco-control scenario well, lending credibility to the estimates for the other scenarios, although relying on survey-reported smoking trends alone has limitations. © 2012 Society for Risk Analysis.

  15. Development of a method to rate the primary safety of vehicles using linked New Zealand crash and vehicle licensing data.

    PubMed

    Keall, Michael D; Newstead, Stuart

    2016-01-01

    Vehicle safety rating systems aim firstly to inform consumers about safe vehicle choices and, secondly, to encourage vehicle manufacturers to aspire to safer levels of vehicle performance. Primary rating systems (that measure the ability of a vehicle to assist the driver in avoiding crashes) have not been developed for a variety of reasons, mainly associated with the difficult task of disassociating driver behavior and vehicle exposure characteristics from the estimation of crash involvement risk specific to a given vehicle. The aim of the current study was to explore different approaches to primary safety estimation, identifying which approaches (if any) may be most valid and most practical, given typical data that may be available for producing ratings. Data analyzed consisted of crash data and motor vehicle registration data for the period 2003 to 2012: 21,643,864 observations (representing vehicle-years) and 135,578 crashed vehicles. Various logistic models were tested as a means to estimate primary safety: Conditional models (conditioning on the vehicle owner over all vehicles owned); full models not conditioned on the owner, with all available owner and vehicle data; reduced models with few variables; induced exposure models; and models that synthesised elements from the latter two models. It was found that excluding young drivers (aged 25 and under) from all primary safety estimates attenuated some high risks estimated for make/model combinations favored by young people. The conditional model had clear biases that made it unsuitable. Estimates from a reduced model based just on crash rates per year (but including an owner location variable) produced estimates that were generally similar to the full model, although there was more spread in the estimates. The best replication of the full model estimates was generated by a synthesis of the reduced model and an induced exposure model. 
This study compared approaches to estimating primary safety that could mimic an analysis based on a very rich data set, using variables that are commonly available when registered fleet data are linked to crash data. This exploratory study has highlighted promising avenues for developing primary safety rating systems for vehicle makes and models.

  16. Sea-Level Rise and Land Subsidence in Deltas: Estimating Future Flood Risk Through Integrated Natural and Human System Modeling

    NASA Astrophysics Data System (ADS)

    Tessler, Z. D.; Vorosmarty, C. J.

    2016-12-01

    Deltas are highly sensitive to local human activities, land subsidence, regional water management, global sea-level rise, and climate extremes. We present a new delta flood exposure and risk framework for estimating the sensitivity of deltas to relative sea-level rise. We have applied this framework to a set of global environmental, geophysical, and social indicators over 48 major river deltas to quantify how contemporary risks vary across delta systems. The risk modeling framework incorporates upstream sediment flux and coastal land subsidence models, global empirical estimates of contemporary storm surge exposure, and population distribution and growth. Future scenarios are used to test the impacts on coastal flood risk of upstream dam construction, coastal population growth, accelerated sea-level rise, and enhanced storm surge. Results suggest a wide range of outcomes across different delta systems within each scenario. Deltas in highly engineered watersheds (Mississippi, Rhine) exhibit less sensitivity to increased dams due to saturation of sediment retention effects, though planned or under-construction dams are expected to have a substantial impact in the Yangtze, Irrawaddy, and Magdalena deltas. Population growth and sea-level rise are expected to be the dominant drivers of increased human risk in most deltas, with important exceptions in several countries, particularly China, where populations are forecast to contract over the next several decades.

  17. Accounting for pH heterogeneity and variability in modelling human health risks from cadmium in contaminated land.

    PubMed

    Gay, J Rebecca; Korre, Anna

    2009-07-01

    The authors have previously published a methodology which combines quantitative probabilistic human health risk assessment and spatial statistical methods (geostatistics) to produce an assessment, incorporating uncertainty, of risks to human health from exposure to contaminated land. The model assumes a constant soil to plant concentration factor (CF(veg)) when calculating intake of contaminants. This model is modified here to enhance its use in a situation where CF(veg) varies according to soil pH, as is the case for cadmium. The original methodology uses sequential indicator simulation (SIS) to map soil concentration estimates for one contaminant across a site. A real, age-stratified population is mapped across the contaminated area, and intake of soil contaminants by individuals is calculated probabilistically using an adaptation of the Contaminated Land Exposure Assessment (CLEA) model. The proposed improvement involves not only the geostatistical estimation of the contaminant concentration, but also that of soil pH, which in turn leads to a variable CF(veg) estimate which influences the human intake results. The results presented demonstrate that taking pH into account can influence the outcome of the risk assessment greatly. It is proposed that a similar adaptation could be used for other combinations of soil variables which influence CF(veg).
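    The pH-dependent soil-to-plant step described above can be sketched as below; the log-linear CF(veg)-pH relation and all coefficients are illustrative placeholders, not the fitted cadmium values from the study:

```python
def cf_veg(soil_ph, a=2.0, b=-0.5):
    """Illustrative soil-to-plant concentration factor for cadmium:
    log10(CF) is assumed linear in soil pH (higher pH -> lower plant
    uptake). Coefficients a and b are placeholders, not fitted values."""
    return 10 ** (a + b * soil_ph)

def daily_intake(soil_conc_mg_kg, soil_ph, veg_consumption_kg=0.3):
    """Contaminant intake (mg/day) via home-grown vegetables, combining a
    geostatistically estimated soil concentration and a pH-dependent CF."""
    return soil_conc_mg_kg * cf_veg(soil_ph) * veg_consumption_kg
```

In the full methodology both `soil_conc_mg_kg` and `soil_ph` would come from simulated geostatistical realizations, so the intake itself carries spatial uncertainty.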

  18. Use of logistic regression for modelling risk factors: with application to non-melanoma skin cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitaliano, P.P.

    Logistic regression was used to estimate the relative risk of basal and squamous skin cancer for such factors as cumulative lifetime solar exposure, age, complexion, and tannability. In previous reports, a subject's exposure was estimated indirectly, by latitude, or by the number of sun days in a subject's habitat. In contrast, these results are based on interview data gathered for each subject. A relatively new technique was used to estimate relative risk by controlling for confounding and testing for effect modification. A linear effect for the relative risk of cancer versus exposure was found. Tannability was shown to be a more important risk factor than complexion. This result is consistent with the work of Silverstone and Searle.
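    Logistic regression controls for confounding via covariate adjustment; as a minimal stdlib illustration of the underlying idea of estimating relative risk while controlling for a confounder, here is a Mantel-Haenszel summary odds ratio across confounder strata (a classical alternative, not the author's model):

```python
def mantel_haenszel_or(strata):
    """Summary odds ratio across confounder strata (e.g., age groups).
    Each stratum is a 2x2 table (a, b, c, d):
    exposed cases, exposed controls, unexposed cases, unexposed controls."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den
```

With rare outcomes the odds ratio approximates the relative risk, which is why case-control skin-cancer studies report it.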

  19. Nonparametric estimation of benchmark doses in environmental risk assessment

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Summary An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
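    The isotonic dose-response fit and BMD inversion can be sketched with a pool-adjacent-violators (PAVA) routine; the quantal data and extra-risk benchmark below are illustrative, and the paper's bootstrap confidence limits are omitted:

```python
def pava(y, w):
    """Weighted isotonic (nondecreasing) regression by
    pool-adjacent-violators: merge blocks until monotone."""
    merged = []  # blocks of [value, weight, count]
    for yi, wi in zip(y, w):
        merged.append([yi, wi, 1])
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            v2, w2, c2 = merged.pop()
            v1, w1, c1 = merged.pop()
            merged.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
    fit = []
    for val, _, cnt in merged:
        fit.extend([val] * cnt)
    return fit

def benchmark_dose(doses, responders, n, bmr=0.1):
    """Dose at which extra risk over background equals `bmr`, by linear
    interpolation on the isotonic fit to quantal-response data."""
    p = pava([r / m for r, m in zip(responders, n)], n)
    target = p[0] + bmr * (1 - p[0])  # extra-risk definition of the BMR
    for (d0, p0), (d1, p1) in zip(zip(doses, p), zip(doses[1:], p[1:])):
        if p0 <= target <= p1 and p1 > p0:
            return d0 + (target - p0) / (p1 - p0) * (d1 - d0)
    return None  # benchmark response not reached in the tested dose range
```

Because the fit is nonparametric, the BMD does not depend on a possibly misspecified parametric dose-response form.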

  20. VULNERABILITY TO HURRICANE DAMAGE ON THE U.S. GULF COAST SINCE 1950

    PubMed Central

    LOGAN, JOHN R.; XU, ZENGWANG

    2015-01-01

    We study hurricane risk on the U.S. Gulf Coast during 1950–2005, estimating the wind damage and storm surge from every hurricane in this extended period. Wind damage is estimated from the known path and wind speeds of individual storms and calibrated to fit actual damage reports for a sample of Gulf Coast storms. Storm surge is estimated using the SLOSH model developed by NOAA. These models provide the first comprehensive overview of the hurricane storm hazard as it has been experienced over a fifty-six-year period. We link the estimated damage with information on the population and specific socio-demographic components of the population (by age, race, and poverty status). Results show that white, young adult, and nonpoor populations have shifted over time away from zones with higher risk of wind damage, while more vulnerable population groups (the elderly, African Americans, and the poor) have moved in the opposite direction. All groups have moved away from areas with high risk of storm surge since 1970. But in this case, perhaps because living near the water is still perceived as an amenity, those at highest risk are whites, elderly, and nonpoor households. Here exposure represents a trade-off between the risk and the amenity. PMID:25926706

  1. Common-Cause Failure Treatment in Event Assessment: Basis for a Proposed New Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana Kelly; Song-Hua Shen; Gary DeMoss

    2010-06-01

    Event assessment is an application of probabilistic risk assessment in which observed equipment failures and outages are mapped into the risk model to obtain a numerical estimate of the event’s risk significance. In this paper, we focus on retrospective assessments to estimate the risk significance of degraded conditions such as equipment failure accompanied by a deficiency in a process such as maintenance practices. In modeling such events, the basic events in the risk model that are associated with observed failures and other off-normal situations are typically configured to be failed, while those associated with observed successes and unchallenged components are assumed capable of failing, typically with their baseline probabilities. This is referred to as the failure memory approach to event assessment. The conditioning of common-cause failure probabilities for the common cause component group associated with the observed component failure is particularly important, as it is insufficient to simply leave these probabilities at their baseline values, and doing so may result in a significant underestimate of risk significance for the event. Past work in this area has focused on the mathematics of the adjustment. In this paper, we review the Basic Parameter Model for common-cause failure, which underlies most current risk modelling, discuss the limitations of this model with respect to event assessment, and introduce a proposed new framework for common-cause failure, which uses a Bayesian network to model underlying causes of failure, and which has the potential to overcome the limitations of the Basic Parameter Model with respect to event assessment.
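    Under the Basic Parameter Model reviewed above, the total failure probability of one component in a common-cause group of size m is Q_t = sum over k of C(m-1, k-1) Q_k, where Q_k is the probability of a basic event failing a specific set of k components. A direct sketch (the numeric Q_k values below are illustrative):

```python
from math import comb

def total_component_failure_prob(q):
    """Basic Parameter Model: q[k-1] is the probability of a basic event
    that fails a *specific* set of k components in a group of size m.
    Returns the total failure probability of one given component, summing
    over every basic event that includes it."""
    m = len(q)
    return sum(comb(m - 1, k - 1) * q[k - 1] for k in range(1, m + 1))
```

For a three-component group, for example, a given component fails through its own independent event, two double events, and one triple event, so the common-cause terms cannot simply be left at baseline once one component is observed failed.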

  2. Cost Modeling for low-cost planetary missions

    NASA Technical Reports Server (NTRS)

    Kwan, Eric; Habib-Agahi, Hamid; Rosenberg, Leigh

    2005-01-01

    This presentation will provide an overview of the JPL parametric cost models used to estimate flight science spacecraft and instruments. This material will emphasize the cost model approaches used to estimate low-cost flight hardware, sensors, and instrumentation, and to perform cost-risk assessments. This presentation will also discuss JPL approaches to cost modeling and the methodologies and analyses used to capture low-cost vs. key cost drivers.

  3. Assessing Progress in Reducing the At-Risk Population after 13 Years of the Global Programme to Eliminate Lymphatic Filariasis

    PubMed Central

    Hooper, Pamela J.; Chu, Brian K.; Mikhailov, Alexei; Ottesen, Eric A.; Bradley, Mark

    2014-01-01

    Background In 1997, the World Health Assembly adopted Resolution 50.29, committing to the elimination of lymphatic filariasis (LF) as a public health problem, subsequently targeted for 2020. The initial estimates were that 1.2 billion people were at-risk for LF infection globally. Now, 13 years after the Global Programme to Eliminate Lymphatic Filariasis (GPELF) began implementing mass drug administration (MDA) against LF in 2000—during which over 4.4 billion treatments have been distributed in 56 endemic countries—it is most appropriate to estimate the impact that the MDA has had on reducing the population at risk of LF. Methodology/Principal Findings To assess GPELF progress in reducing the population at-risk for LF, we developed a model based on defining reductions in risk of infection among cohorts of treated populations following each round of MDA. The model estimates that the number of people currently at risk of infection decreased by 46% to 789 million through 2012. Conclusions/Significance Important progress has been made in the global efforts to eliminate LF, but significant scale-up is required over the next 8 years to reach the 2020 elimination goal. PMID:25411843
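    A toy version of a cohort-based risk-reduction calculation (not the GPELF model's actual structure or fitted parameters) might treat an individual as no longer at risk once a minimum number of MDA rounds has been received:

```python
from math import comb

def at_risk_after_mda(initial_at_risk, coverage, rounds, rounds_needed=5):
    """Toy cohort model: an individual who receives at least
    `rounds_needed` treatment rounds is considered no longer at risk.
    Assumes each round independently reaches a fraction `coverage` of the
    population. All parameters are illustrative placeholders."""
    p_protected = sum(
        comb(rounds, k) * coverage**k * (1 - coverage)**(rounds - k)
        for k in range(rounds_needed, rounds + 1)
    )
    return initial_at_risk * (1 - p_protected)
```

The published model defines risk reductions per treated cohort and MDA round; this sketch only illustrates how coverage and round counts combine to shrink the at-risk denominator.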

  4. Using a Modification of the Capture-Recapture Model To Estimate the Need for Substance Abuse Treatment.

    ERIC Educational Resources Information Center

    Maxwell, Jane Carlisle; Pullum, Thomas W.

    2001-01-01

    Applied the capture-recapture model, through a Poisson regression to a time series of data for admissions to treatment from 1987 to 1996 to estimate the number of heroin addicts in Texas who are "at-risk" for treatment. The entire data set produced estimates that were lower and more plausible than those produced by drawing samples,…
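    The record above fits a Poisson regression to a time series of treatment admissions; as a simpler, related capture-recapture idea, Chao's lower-bound estimator recovers the unobserved population from the counts of once- and twice-captured individuals (illustrative data, not the Texas estimates):

```python
def chao_lower_bound(capture_counts):
    """Chao's lower-bound estimator of total population size.
    capture_counts: number of times each *observed* individual appeared
    in the data (e.g., admissions per person). Individuals never captured
    are estimated from the singleton/doubleton frequencies."""
    n = len(capture_counts)
    f1 = sum(1 for c in capture_counts if c == 1)  # seen exactly once
    f2 = sum(1 for c in capture_counts if c == 2)  # seen exactly twice
    if f2 == 0:
        return float("inf")  # estimator undefined without doubletons
    return n + f1 * f1 / (2 * f2)
```

Many singletons relative to doubletons imply a large hidden population that never entered treatment.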

  5. WRF-based fire risk modelling and evaluation for years 2010 and 2012 in Poland

    NASA Astrophysics Data System (ADS)

    Stec, Magdalena; Szymanowski, Mariusz; Kryza, Maciej

    2016-04-01

    Wildfires are one of the main disturbances of forested, seminatural and agricultural ecosystems. They generate significant economic loss, especially in forest management and agriculture. Forest fire risk modelling is therefore essential, e.g. for forestry administration. In August 2015 a new method of forest fire risk forecasting entered into force in Poland. The method predicts a fire risk level on a 4-degree scale (0 - no risk, 3 - highest risk) and consists of a set of linearized regression equations. Meteorological information is used as the predictors in the regression equations: air temperature, relative humidity, average wind speed, cloudiness and rainfall. The equations also include pine litter humidity as a measure of potential fuel characteristics. All these parameters are measured routinely in Poland at 42 basic and 94 auxiliary sites. The fire risk level is estimated for the current day (based on morning measurements) or the next day (based on midday measurements). The entire country is divided into 42 prognostic zones, and the fire risk level for each zone is taken from the closest measuring site. The first goal of this work is to assess whether the measurements needed for fire risk forecasting can be replaced by data from a mesoscale meteorological model. In addition, a meteorological model would capture a much more realistic spatial differentiation of the weather elements determining the fire risk level than discrete point measurements can. Meteorological data have been calculated using the Weather Research and Forecasting (WRF) model. For the purpose of this study the WRF model is run in reanalysis mode, providing all required meteorological data on a 5-kilometre grid. The only parameter that cannot be directly calculated using WRF is the litter humidity, which has been estimated using an empirical formula developed by Sakowska (2007). The experiments are carried out for two selected years: 2010 and 2012. The year 2010 had the smallest number of wildfires and the smallest burnt area, whereas 2012 had the largest number of fires and the largest burnt area. The data on the time, localization, scale and causes of individual wildfire occurrences in these years are taken from the National Forest Fire Information System (KSIPL), administered by the Forest Fire Protection Department of the Polish Forest Research Institute. The database is part of the European Forest Fire Information System (EFFIS). Based on these data and on the WRF-based fire risk modelling, we pursue the second goal of the study: evaluating the forecasted fire risk against the occurrence of wildfires. Special attention is paid to the number, time and spatial distribution of wildfires that occurred when a low fire risk level was predicted. The results obtained reveal the effectiveness of the new forecasting method, and our investigation suggests adjustments that could improve the efficiency of the fire-risk estimation method.

  6. Developing small-area predictions for smoking and obesity prevalence in the United States for use in Environmental Public Health Tracking.

    PubMed

    Ortega Hinojosa, Alberto M; Davies, Molly M; Jarjour, Sarah; Burnett, Richard T; Mann, Jennifer K; Hughes, Edward; Balmes, John R; Turner, Michelle C; Jerrett, Michael

    2014-10-01

    Globally and in the United States, smoking and obesity are leading causes of death and disability. Reliable estimates of prevalence for these risk factors are often missing variables in public health surveillance programs. This may limit the capacity of public health surveillance to target interventions or to assess associations between other environmental risk factors (e.g., air pollution) and health because smoking and obesity are often important confounders. To generate prevalence estimates of smoking and obesity rates over small areas for the United States (i.e., at the ZIP code and census tract levels). We predicted smoking and obesity prevalence using a combined approach first using a lasso-based variable selection procedure followed by a two-level random effects regression with a Poisson link clustered on state and county. We used data from the Behavioral Risk Factor Surveillance System (BRFSS) from 1991 to 2010 to estimate the model. We used 10-fold cross-validated mean squared errors and the variance of the residuals to test our model. To downscale the estimates we combined the prediction equations with 1990 and 2000 U.S. Census data for each of the four five-year time periods in this time range at the ZIP code and census tract levels. Several sensitivity analyses were conducted using models that included only basic terms, that accounted for spatial autocorrelation, and used Generalized Linear Models that did not include random effects. The two-level random effects model produced improved estimates compared to the fixed effects-only models. Estimates were particularly improved for the two-thirds of the conterminous U.S. where BRFSS data were available to estimate the county level random effects. We downscaled the smoking and obesity rate predictions to derive ZIP code and census tract estimates. To our knowledge these smoking and obesity predictions are the first to be developed for the entire conterminous U.S. for census tracts and ZIP codes. 
Our estimates could have significant utility for public health surveillance. Copyright © 2014. Published by Elsevier Inc.

  7. A consistent NPMLE of the joint distribution function with competing risks data under the dependent masking and right-censoring model.

    PubMed

    Li, Jiahui; Yu, Qiqing

    2016-01-01

    Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under an extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. Simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.

  8. Radiation-Induced Leukemia at Doses Relevant to Radiation Therapy: Modeling Mechanisms and Estimating Risks

    NASA Technical Reports Server (NTRS)

    Shuryak, Igor; Sachs, Rainer K.; Hlatky, Lynn; Mark P. Little; Hahnfeldt, Philip; Brenner, David J.

    2006-01-01

    Because many cancer patients are diagnosed earlier and live longer than in the past, second cancers induced by radiation therapy have become a clinically significant issue. An earlier biologically based model that was designed to estimate risks of high-dose radiation-induced solid cancers included initiation of stem cells to a premalignant state, inactivation of stem cells at high radiation doses, and proliferation of stem cells during cellular repopulation after inactivation. This earlier model predicted the risks of solid tumors induced by radiation therapy but overestimated the corresponding leukemia risks. Methods: To extend the model to radiation-induced leukemias, we analyzed, in addition to cellular initiation, inactivation, and proliferation, a repopulation mechanism specific to the hematopoietic system: long-range migration through the blood stream of hematopoietic stem cells (HSCs) from distant locations. Parameters for the model were derived from HSC biologic data in the literature and from leukemia risks among atomic bomb survivors who were subjected to much lower radiation doses. Results: Proliferating HSCs that migrate from sites distant from the high-dose region include few preleukemic HSCs, thus decreasing the high-dose leukemia risk. The extended model for leukemia provides risk estimates that are consistent with epidemiologic data for leukemia risk associated with radiation therapy over a wide dose range. For example, when applied to an earlier case-control study of 110,000 women undergoing radiotherapy for uterine cancer, the model predicted an excess relative risk (ERR) of 1.9 for leukemia among women who received a large inhomogeneous fractionated external beam dose to the bone marrow (mean = 14.9 Gy), consistent with the measured ERR (2.0, 95% confidence interval [CI] = 0.2 to 6.4; from 3.6 cases expected and 11 cases observed). As a corresponding example for brachytherapy, the predicted ERR of 0.80 among women who received an inhomogeneous low-dose-rate dose to the bone marrow (mean = 2.5 Gy) was consistent with the measured ERR (0.62, 95% CI = -0.2 to 1.9). Conclusions: An extended, biologically based model for leukemia that includes HSC initiation, inactivation, proliferation, and, uniquely for leukemia, long-range HSC migration predicts, with reasonable accuracy, risks for radiation-induced leukemia associated with exposure to therapeutic doses of radiation.

  9. Using Web-based Interspecies Correlation Estimation (Web-ICE) models as a tool for acute toxicity prediction

    EPA Science Inventory

    In order to assess risk of contaminants to taxa with limited or no toxicity data available, Interspecies Correlation Estimation (ICE) models have been developed by the U.S. Environmental Protection Agency to extrapolate contaminant sensitivity predictions based on data from commo...

  10. Web-based Interspecies Correlation Estimation (Web-ICE) for Acute Toxicity: User Manual Version 3.1

    EPA Science Inventory

    Predictive toxicological models are integral to ecological risk assessment because data for most species are limited. Web-based Interspecies Correlation Estimation (Web-ICE) models are least square regressions that predict acute toxicity (LC50/LD50) of a chemical to a species, ge...

  11. WEB-BASED INTERSPECIES CORRELATION ESTIMATION (WEB-ICE) FOR ACUTE TOXICITY: USER MANUAL V2

    EPA Science Inventory

    Predictive toxicological models are integral to environmental risk Assessment where data for most species are limited. Web-based Interspecies Correlation Estimation (Web-ICE) models are least square regressions that predict acute toxicity (LC50/LD50) of a chemical to a species, ...

  12. Using multiple decrement models to estimate risk and morbidity from specific AIDS illnesses. Multicenter AIDS Cohort Study (MACS).

    PubMed

    Hoover, D R; Peng, Y; Saah, A J; Detels, R R; Day, R S; Phair, J P

    A simple non-parametric approach is developed to simultaneously estimate net incidence and morbidity time from specific AIDS illnesses in populations at high risk for death from these illnesses and other causes. The disease-death process has four stages that can be recast as two sandwiching three-state multiple decrement processes. Non-parametric estimation of net incidence and morbidity time with error bounds is achieved from these sandwiching models through modification of methods from Aalen and Greenwood, and bootstrapping. An application to immunosuppressed HIV-1 infected homosexual men reveals that cytomegalovirus disease, Kaposi's sarcoma and Pneumocystis pneumonia are likely to occur and cause significant morbidity time.
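    The competing-risks quantity underlying such estimates, the cumulative incidence of one cause in the presence of others, can be sketched nonparametrically as below (censoring omitted for brevity; this is not the paper's four-stage estimator):

```python
def cumulative_incidence(times, causes, cause_of_interest, horizon):
    """Nonparametric cumulative incidence of one cause in the presence
    of competing causes, uncensored data for simplicity.
    times: event times; causes: event cause labels (same order)."""
    data = sorted(zip(times, causes))
    n = len(data)
    surv = 1.0      # probability of being event-free just before t
    cif = 0.0       # cumulative incidence of the cause of interest
    at_risk = n
    i = 0
    while i < n and data[i][0] <= horizon:
        t = data[i][0]
        d_int = d_all = 0
        while i < n and data[i][0] == t:   # group tied event times
            d_all += 1
            if data[i][1] == cause_of_interest:
                d_int += 1
            i += 1
        cif += surv * d_int / at_risk      # chance of reaching t, then failing from this cause
        surv *= 1 - d_all / at_risk
        at_risk -= d_all
    return cif
```

Unlike one minus a cause-specific Kaplan-Meier curve, this quantity correctly accounts for subjects removed by the competing causes.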

  13. A Joint Model for Longitudinal Measurements and Survival Data in the Presence of Multiple Failure Types

    PubMed Central

    Elashoff, Robert M.; Li, Gang; Li, Ning

    2009-01-01

    Summary In this article we study a joint model for longitudinal measurements and competing risks survival data. Our joint model provides a flexible approach to handle possible nonignorable missing data in the longitudinal measurements due to dropout. It is also an extension of previous joint models with a single failure type, offering a possible way to model informatively censored events as a competing risk. Our model consists of a linear mixed effects submodel for the longitudinal outcome and a proportional cause-specific hazards frailty submodel (Prentice et al., 1978, Biometrics 34, 541-554) for the competing risks survival data, linked together by some latent random effects. We propose to obtain the maximum likelihood estimates of the parameters by an expectation maximization (EM) algorithm and estimate their standard errors using a profile likelihood method. The developed method works well in our simulation studies and is applied to a clinical trial for the scleroderma lung disease. PMID:18162112

  14. Finely Resolved On-Road PM2.5 and Estimated Premature Mortality in Central North Carolina.

    PubMed

    Chang, Shih Ying; Vizuete, William; Serre, Marc; Vennam, Lakshmi Pradeepa; Omary, Mohammad; Isakov, Vlad; Breen, Michael; Arunachalam, Saravanan

    2017-12-01

    To quantify the on-road PM2.5-related premature mortality at a national scale, previous approaches to estimate concentrations at a 12-km × 12-km or larger grid cell resolution may not fully characterize concentration hotspots that occur near roadways and thus the areas of highest risk. Spatially resolved concentration estimates from on-road emissions to capture these hotspots may improve characterization of the associated risk, but are rarely used for estimating premature mortality. In this study, we compared the on-road PM2.5-related premature mortality in central North Carolina with two different concentration estimation approaches: (i) using the Community Multiscale Air Quality (CMAQ) model to model concentration at a coarser resolution of a 36-km × 36-km grid resolution, and (ii) using a hybrid of a Gaussian dispersion model, CMAQ, and a space-time interpolation technique to provide annual average PM2.5 concentrations at a Census-block level (∼105,000 Census blocks). The hybrid modeling approach estimated 24% more on-road PM2.5-related premature mortality than CMAQ. The major difference is from the primary on-road PM2.5, where the hybrid approach estimated 2.5 times more primary on-road PM2.5-related premature mortality than CMAQ due to predicted exposure hotspots near roadways that coincide with high population areas. The results show that 72% of primary on-road PM2.5 premature mortality occurs within 1,000 m from roadways where 50% of the total population resides, highlighting the importance to characterize near-road primary PM2.5 and suggesting that previous studies may have underestimated premature mortality due to PM2.5 from traffic-related emissions. © 2017 Society for Risk Analysis.
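    Premature-mortality estimates of this kind typically combine a concentration increment with a log-linear concentration-response function; a generic sketch with an illustrative coefficient (not the study's fitted value):

```python
from math import exp

def attributable_deaths(baseline_deaths, beta, delta_c):
    """Log-linear health impact function commonly used with PM2.5
    concentration-response coefficients: deaths attributable to a
    concentration increment delta_c (ug/m3) over the baseline mortality
    count. beta here is an illustrative placeholder."""
    return baseline_deaths * (1 - exp(-beta * delta_c))
```

Because the function is convex in delta_c, averaging concentrations over a coarse grid before applying it (rather than applying it block by block near roads) tends to understate the total burden, which is the effect the hybrid approach exposes.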

  15. Building and validating a prediction model for paediatric type 1 diabetes risk using next generation targeted sequencing of class II HLA genes.

    PubMed

    Zhao, Lue Ping; Carlsson, Annelie; Larsson, Helena Elding; Forsander, Gun; Ivarsson, Sten A; Kockum, Ingrid; Ludvigsson, Johnny; Marcus, Claude; Persson, Martina; Samuelsson, Ulf; Örtqvist, Eva; Pyo, Chul-Woo; Bolouri, Hamid; Zhao, Michael; Nelson, Wyatt C; Geraghty, Daniel E; Lernmark, Åke

    2017-11-01

    It is of interest to predict possible lifetime risk of type 1 diabetes (T1D) in young children for recruiting high-risk subjects into longitudinal studies of effective prevention strategies. Utilizing a case-control study in Sweden, we applied a recently developed next generation targeted sequencing technology to genotype class II genes and applied an object-oriented regression to build and validate a prediction model for T1D. In the training set, estimated risk scores were significantly different between patients and controls (P = 8.12 × 10^-92), and the area under the curve (AUC) from the receiver operating characteristic (ROC) analysis was 0.917. Using the validation data set, we validated the result with AUC of 0.886. Combining both training and validation data resulted in a predictive model with AUC of 0.903. Further, we performed a "biological validation" by correlating risk scores with 6 islet autoantibodies, and found that the risk score was significantly correlated with IA-2A (Z-score = 3.628, P < 0.001). When applying this prediction model to the Swedish population, where the lifetime T1D risk ranges from 0.5% to 2%, we anticipate identifying approximately 20 000 high-risk subjects after testing all newborns, and this calculation would identify approximately 80% of all patients expected to develop T1D in their lifetime. Through both empirical and biological validation, we have established a prediction model for estimating lifetime T1D risk, using class II HLA. This prediction model should prove useful for future investigations to identify high-risk subjects for prevention research in high-risk populations. Copyright © 2017 John Wiley & Sons, Ltd.
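    The AUC reported above is equivalent to the Mann-Whitney probability that a randomly chosen case receives a higher risk score than a randomly chosen control; a direct sketch (scores are illustrative):

```python
def auc(scores_cases, scores_controls):
    """AUC as the Mann-Whitney probability that a random case scores
    higher than a random control; ties count as 1/2."""
    wins = 0.0
    for s in scores_cases:
        for t in scores_controls:
            if s > t:
                wins += 1.0
            elif s == t:
                wins += 0.5
    return wins / (len(scores_cases) * len(scores_controls))
```

This O(n*m) pairwise form is fine for small validation sets; for large cohorts the same quantity is computed from rank sums.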

  16. Residual lifetime and 10 year absolute risks of osteoporotic fractures in Chinese men and women.

    PubMed

    Si, Lei; Winzenberg, Tania M; Chen, Mingsheng; Jiang, Qicheng; Palmer, Andrew J

    2015-06-01

    To determine the residual lifetime and 10 year absolute risks of osteoporotic fractures in Chinese men and women. A validated state-transition microsimulation model was used. Microsimulation and probabilistic sensitivity analyses were performed to address the uncertainties in the model. All parameters including fracture incidence rates and mortality rates were retrieved from published literature. Simulated subjects were run through the model until they died to estimate the residual lifetime fracture risks. A 10 year time horizon was used to determine the 10 year fracture risks. We estimated the risk of only the first osteoporotic fracture during the simulation time horizon. The residual lifetime and 10 year risks of having the first osteoporotic (hip, clinical vertebral or wrist) fracture for Chinese women aged 50 years were 40.9% (95% CI: 38.3-44.0%) and 8.2% (95% CI: 6.8-9.3%) respectively. For men, the residual lifetime and 10 year fracture risks were 8.7% (95% CI: 7.5-9.8%) and 1.2% (95% CI: 0.8-1.7%) respectively. The residual lifetime fracture risks declined with age, whilst the 10 year fracture risks increased with age until the short-term mortality risks outstripped the fracture risks. Residual lifetime and 10 year clinical vertebral fracture risks were higher than those of hip and wrist fractures in both sexes. More than one third of Chinese women and approximately one tenth of Chinese men aged 50 years are expected to sustain a major osteoporotic fracture in their remaining lifetimes. Due to increased fracture risks and a rapidly ageing population, osteoporosis will present a great challenge to the Chinese healthcare system. While national data were used wherever possible, hip and clinical vertebral fracture incidence rates came from regional Chinese studies, and wrist fracture rates were taken from a Norwegian study and calibrated to the Chinese population. Other fracture sites such as the tibia, humerus, ribs and pelvis were not included in the analysis; thus, these risks are likely to be underestimates. Fracture risk factors other than age and sex were not included in the model. Point estimates were used for fracture incidence rates, osteoporosis prevalence and mortality rates for the general population.
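
    A state-transition microsimulation of this kind can be sketched in a few lines. The annual hazards below are invented for illustration, not the published Chinese rates; subjects are advanced year by year until first fracture or death:

```python
import random

def lifetime_first_fracture_risk(age0, frac_rate, death_rate,
                                 n_sims=10000, max_age=100, seed=1):
    """Fraction of simulated subjects sustaining a first fracture before
    dying; frac_rate/death_rate map age -> annual event probability."""
    rng = random.Random(seed)
    fractured = 0
    for _ in range(n_sims):
        for age in range(age0, max_age):
            if rng.random() < frac_rate(age):   # first osteoporotic fracture
                fractured += 1
                break
            if rng.random() < death_rate(age):  # death before any fracture
                break
    return fractured / n_sims

# Illustrative age-increasing hazards for a subject aged 50.
risk = lifetime_first_fracture_risk(
    50,
    frac_rate=lambda a: 0.002 * 1.08 ** (a - 50),
    death_rate=lambda a: 0.005 * 1.09 ** (a - 50),
)
print(risk)
```

    Restricting the horizon (max_age = age0 + 10) yields the 10 year risk rather than the residual lifetime risk, mirroring the two quantities reported above.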

  17. Epidemiology of hip fracture and the development of FRAX in Ukraine.

    PubMed

    Povoroznyuk, V V; Grygorieva, N V; Kanis, J A; McCloskey, E V; Johansson, H; Harvey, N C; Korzh, M O; Strafun, S S; Vaida, V M; Klymovytsky, F V; Vlasenko, R O; Forosenko, V S

    2017-12-01

    A country-specific FRAX model has been developed for Ukraine to replace the Austrian model hitherto used. Comparison of the Austrian and Ukrainian models indicated that the former markedly overestimated fracture probability whilst correctly stratifying risk. FRAX has been used to estimate osteoporotic fracture risk since 2009. Rather than using a surrogate model, the Austrian version of FRAX was adopted for clinical practice. Since then, data have become available on hip fracture incidence in Ukraine. The incidence of hip fracture was computed from three regional estimates and used to construct a country-specific FRAX model for Ukraine. The model characteristics were compared with those of the Austrian FRAX model, previously used in Ukraine, using all combinations of six risk factors and eight values of BMD (total number of combinations = 512). The relationship between the probabilities of a major fracture derived from the two versions of FRAX indicated a close correlation between the two estimates (r > 0.95). The Ukrainian version, however, gave markedly lower probabilities than the Austrian model at all ages. For a major osteoporotic fracture, the median probability was lower by 25% at age 50 years, and the difference increased with age. At the ages of 60, 70 and 80 years, the median value was lower by 30, 53 and 65%, respectively. Similar findings were observed for men and for hip fracture. The Ukrainian FRAX model should enhance the accuracy of determining fracture probability in the Ukrainian population and help to guide decisions about treatment. The study also indicates that the use of surrogate FRAX models or models from other countries, whilst correctly stratifying risk, may markedly over- or underestimate the absolute fracture probability.

  18. Inferring the risk factors behind the geographical spread and transmission of Zika in the Americas.

    PubMed

    Gardner, Lauren M; Bóta, András; Gangavarapu, Karthik; Kraemer, Moritz U G; Grubaugh, Nathan D

    2018-01-01

    An unprecedented Zika virus epidemic occurred in the Americas during 2015-2016. The size of the epidemic, in conjunction with newly recognized health risks associated with the virus, attracted significant attention across the research community. Our study complements several recent studies that have mapped epidemiological elements of Zika by introducing a newly proposed methodology to simultaneously estimate the contribution of various risk factors for geographic spread resulting in local transmission and to compute the risk of spread (or re-introductions) between each pair of regions. The focus of our analysis is on the Americas, where the set of regions includes all countries, overseas territories, and the states of the US. We present a novel application of the Generalized Inverse Infection Model (GIIM). The GIIM uses real observations from the outbreak and seeks to estimate the risk factors driving transmission. The observations are derived from the dates of reported local transmission of Zika virus in each region, the network structure is defined by the passenger air travel movements between all pairs of regions, and the risk factors considered include regional socioeconomic factors, vector habitat suitability, travel volumes, and epidemiological data. The GIIM relies on a multi-agent-based optimization method to estimate the parameters, and utilizes a data-driven stochastic-dynamic epidemic model for evaluation. As expected, we found that mosquito abundance, incidence rate at the origin region, and human population density are risk factors for Zika virus transmission and spread. Surprisingly, air passenger volume was less impactful, and the most significant factor was (a negative relationship with) the regional gross domestic product (GDP) per capita.
Our model generates country level exportation and importation risk profiles over the course of the epidemic and provides quantitative estimates for the likelihood of introduced Zika virus resulting in local transmission, between all origin-destination travel pairs in the Americas. Our findings indicate that local vector control, rather than travel restrictions, will be more effective at reducing the risks of Zika virus transmission and establishment. Moreover, the inverse relationship between Zika virus transmission and GDP suggests that Zika cases are more likely to occur in regions where people cannot afford to protect themselves from mosquitoes. The modeling framework is not specific for Zika virus, and could easily be employed for other vector-borne pathogens with sufficient epidemiological and entomological data.

  19. Quantitative farm-to-fork risk assessment model for norovirus and hepatitis A virus in European leafy green vegetable and berry fruit supply chains.

    PubMed

    Bouwknegt, Martijn; Verhaelen, Katharina; Rzeżutka, Artur; Kozyra, Iwona; Maunula, Leena; von Bonsdorff, Carl-Henrik; Vantarakis, Apostolos; Kokkinos, Petros; Petrovic, Tamas; Lazic, Sava; Pavlik, Ivo; Vasickova, Petra; Willems, Kris A; Havelaar, Arie H; Rutjes, Saskia A; de Roda Husman, Ana Maria

    2015-04-02

    Fresh produce that is contaminated with viruses may lead to infection and viral gastroenteritis or hepatitis when consumed raw. It is thus important to reduce virus numbers on these foods. Prevention of virus contamination in fresh produce production and processing may be more effective than treatment, as sufficient virus removal or inactivation by post-harvest treatment requires high doses that may adversely affect food quality. To date, knowledge of the contribution of the various potential contamination routes is lacking. A risk assessment model was developed for human norovirus, hepatitis A virus and human adenovirus in raspberry and salad vegetable supply chains to quantify the contributions of potential contamination sources to the contamination of produce at retail. These models were used to estimate public health risks. Model parameterization was based on monitoring data from European supply chains and literature data. No human pathogenic viruses were found in the soft fruit supply chains; human adenovirus (hAdV) was detected, which was additionally monitored as an indicator of fecal pollution to assess the contribution of potential contamination points. Estimated risks per serving of lettuce based on the models were 3×10⁻⁴ (6×10⁻⁶-5×10⁻³) for NoV infection and 3×10⁻⁸ (7×10⁻¹⁰-3×10⁻⁶) for hepatitis A jaundice. The contribution of hand contact to virus contamination was larger than that of irrigation, the conveyor belt or the water used for produce rinsing. In conclusion, viral contamination in the lettuce and soft fruit supply chains occurred and estimated health risks were generally low. Nevertheless, the 97.5% upper limit for the estimated NoV contamination of lettuce suggested that infection risks up to 50% per serving might occur. Our study suggests that, given the monitoring results, full compliance with hand hygiene will do more to improve the virus safety of fresh produce than attention to the other examined sources. This effect will be further aided by compliance with other hygiene and water quality regulations in production and processing facilities. Copyright © 2015 Elsevier B.V. All rights reserved.
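
    The abstract does not state which dose-response relation underlies the per-serving estimates; a minimal sketch with the single-parameter exponential model, using purely illustrative parameter values, looks like this:

```python
import math

def exponential_dose_response(dose, r):
    """P(infection) = 1 - exp(-r * dose): each ingested virus particle
    independently initiates infection with probability r."""
    return 1.0 - math.exp(-r * dose)

def risk_per_serving(genome_copies, infectious_fraction, r):
    """Scale detected genome copies per serving to infectious units, then
    apply the dose-response relation (all parameter values illustrative)."""
    return exponential_dose_response(genome_copies * infectious_fraction, r)

print(risk_per_serving(10.0, 0.01, 0.005))
```

    In a full QMRA the dose itself is a distribution propagated from monitoring data, which is what produces interval estimates like those quoted above.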

  20. Failure dynamics of the global risk network.

    PubMed

    Szymanski, Boleslaw K; Lin, Xin; Asztalos, Andrea; Sreenivasan, Sameet

    2015-06-18

    Risks threatening modern societies form an intricately interconnected network that often underlies crisis situations. Yet, little is known about how risk materializations in distinct domains influence each other. Here we present an approach in which expert assessments of the likelihoods and influence of risks underlie a quantitative model of the global risk network dynamics. The modeled risks range from environmental to economic and technological, and include difficult-to-quantify risks, such as geo-political and social. Using maximum likelihood estimation, we find the optimal model parameters and demonstrate that the model including network effects significantly outperforms the others, uncovering the full value of the expert-collected data. We analyze the model dynamics and study its resilience and stability. Our findings include such risk properties as contagion potential, persistence, roles in cascades of failures and the identity of risks most detrimental to system stability. The model provides quantitative means for measuring the adverse effects of risk interdependencies and the materialization of risks in the network.

  1. Quantitative risk assessment for a glass fiber insulation product.

    PubMed

    Fayerweather, W E; Bender, J R; Hadley, J G; Eastes, W

    1997-04-01

    California Proposition 65 (Prop65) provides a mechanism by which the manufacturer may perform a quantitative risk assessment to be used in determining the need for cancer warning labels. This paper presents a risk assessment under this regulation for professional and do-it-yourself insulation installers. It determines the level of insulation glass fiber exposure (specifically Owens Corning's R-25 PinkPlus with Miraflex) that, assuming a working lifetime exposure, poses no significant cancer risk under Prop65's regulations. "No significant risk" is defined under Prop65 as a lifetime risk of no more than one additional cancer case per 100,000 exposed persons, and nonsignificant exposure is defined as a working lifetime exposure associated with "no significant risk." This determination can be carried out despite the fact that the relevant underlying studies (i.e., chronic inhalation bioassays) of comparable glass wool fibers do not show tumorigenic activity. Nonsignificant exposures are estimated from (1) the most recent RCC chronic inhalation bioassay of nondurable fiberglass in rats; (2) intraperitoneal fiberglass injection studies in rats; (3) a distributional, decision analysis approach applied to four chronic inhalation rat bioassays of conventional fiberglass; (4) an extrapolation from the RCC chronic rat inhalation bioassay of durable refractory ceramic fibers; and (5) an extrapolation from the IOM chronic rat inhalation bioassay of durable E glass microfibers. When the EPA linear nonthreshold model is used, central estimates of nonsignificant exposure range from 0.36 fibers/cc (for the RCC chronic inhalation bioassay of fiberglass) through 21 fibers/cc (for the i.p. fiberglass injection studies). Lower 95% confidence bounds on these estimates vary from 0.17 fibers/cc through 13 fibers/cc. 
Estimates derived from the distributional approach or from applying the EPA linear nonthreshold model to chronic bioassays of durable fibers such as refractory ceramic fiber or E glass microfibers are intermediate to the other approaches. Estimates based on the Weibull 1.5-hit nonthreshold and 2-hit threshold models exceed by at least a factor of 10 the corresponding EPA linear nonthreshold estimates. The lowest nonsignificant exposures derived in this assessment are at least a factor of two higher than field exposures measured for professionals installing the R-25 fiberglass insulation product and are orders of magnitude higher than the estimated lifetime exposures for do-it-yourselfers.
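
    Under the EPA linear nonthreshold model the arithmetic is direct: excess lifetime risk is proportional to exposure, so the nonsignificant-exposure level is Prop65's risk limit divided by the unit risk. The unit-risk value below is hypothetical, chosen only to show the calculation:

```python
def nonsignificant_exposure(unit_risk, risk_limit=1e-5):
    """Linear nonthreshold model: excess lifetime risk = unit_risk * exposure,
    so the exposure at the risk limit is risk_limit / unit_risk (fibers/cc)."""
    return risk_limit / unit_risk

# Hypothetical unit risk of 2.8e-5 per fiber/cc of working-lifetime exposure:
print(nonsignificant_exposure(2.8e-5))  # ~0.36 fibers/cc
```

    The spread of estimates in the abstract (0.36 to 21 fibers/cc) comes from plugging different bioassay-derived unit risks, and different dose-response models, into this kind of inversion.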

  2. The Role of Inertia in Modeling Decisions from Experience with Instance-Based Learning

    PubMed Central

    Dutt, Varun; Gonzalez, Cleotilde

    2012-01-01

    One form of inertia is the tendency to repeat the last decision irrespective of the obtained outcomes while making decisions from experience (DFE). A number of computational models based upon the Instance-Based Learning Theory, a theory of DFE, have included different inertia implementations and have shown to simultaneously account for both risk-taking and alternations between alternatives. The role that inertia plays in these models, however, is unclear as the same model without inertia is also able to account for observed risk-taking quite well. This paper demonstrates the predictive benefits of incorporating one particular implementation of inertia in an existing IBL model. We use two large datasets, estimation and competition, from the Technion Prediction Tournament involving a repeated binary-choice task to show that incorporating an inertia mechanism in an IBL model enables it to account for the observed average risk-taking and alternations. Including inertia, however, does not help the model to account for the trends in risk-taking and alternations over trials compared to the IBL model without the inertia mechanism. We generalize the two IBL models, with and without inertia, to the competition set by using the parameters determined in the estimation set. The generalization process demonstrates both the advantages and disadvantages of including inertia in an IBL model. PMID:22685443

  4. Evidence That Breast Tissue Stiffness Is Associated with Risk of Breast Cancer

    PubMed Central

    Boyd, Norman F.; Li, Qing; Melnichouk, Olga; Huszti, Ella; Martin, Lisa J.; Gunasekara, Anoma; Mawdsley, Gord; Yaffe, Martin J.; Minkin, Salomon

    2014-01-01

    Background: Evidence from animal models shows that tissue stiffness increases the invasion and progression of cancers, including mammary cancer. Here we use measurements of the volume and the projected area of the compressed breast during mammography to derive estimates of breast tissue stiffness and examine the relationship of stiffness to risk of breast cancer. Methods: Mammograms were used to measure the volume and projected areas of total and radiologically dense breast tissue in the unaffected breasts of 362 women with newly diagnosed breast cancer (cases) and 656 women of the same age who did not have breast cancer (controls). Measures of breast tissue volume and the projected area of the compressed breast during mammography were used to calculate the deformation of the breast during compression and, with the recorded compression force, to estimate the stiffness of breast tissue. Stiffness was compared in cases and controls, and associations with breast cancer risk were examined after adjustment for other risk factors. Results: After adjustment for percent mammographic density by area measurements, and other risk factors, our estimate of breast tissue stiffness was significantly associated with breast cancer (odds ratio = 1.21, 95% confidence interval = 1.03, 1.43, p = 0.02) and improved breast cancer risk prediction in models with percent mammographic density, by both area and volume measurements. Conclusion: An estimate of breast tissue stiffness was associated with breast cancer risk and improved risk prediction based on mammographic measures and other risk factors. Stiffness may provide an additional mechanism by which breast tissue composition is associated with risk of breast cancer and merits examination using more direct methods of measurement. PMID:25010427

  5. Evaluating alternative prescribed burning policies to reduce net economic damages from wildfire

    Treesearch

    D. Evan Mercer; Jeffrey P. Prestemon; David T. Butry; John M. Pye

    2007-01-01

    We estimate a wildfire risk model with a new measure of wildfire output, intensity-weighted risk, and use it in Monte Carlo simulations to estimate welfare changes from alternative prescribed burning policies. Using Volusia County, Florida, as a case study, an annual prescribed burning rate of 13% of all forest lands maximizes net welfare; ignoring the effects on...

  6. A simulation of probabilistic wildfire risk components for the continental United States

    Treesearch

    Mark A. Finney; Charles W. McHugh; Isaac C. Grenfell; Karin L. Riley; Karen C. Short

    2011-01-01

    This simulation research was conducted in order to develop a large-fire risk assessment system for the contiguous land area of the United States. The modeling system was applied to each of 134 Fire Planning Units (FPUs) to estimate burn probabilities and fire size distributions. To obtain stable estimates of these quantities, fire ignition and growth was simulated for...

  7. Risk Estimation and Sexual Behaviour: A Longitudinal Study of 16-21-year-olds.

    PubMed

    Breakwell, G M

    1996-01-01

    The relationships among risk estimation, impulsivity and patterns of sexual risk-taking in 16-21-year-olds are examined. A sample of 236 males and 340 females completed a postal questionnaire on three occasions at annual intervals. They reported their assessment of their own risk of HIV infection, the risk of HIV infection associated with six types of sexual activity, their likelihood of engaging in each of these activities, and whether they had participated in these activities between the first and second data collections. Impulsivity was indexed using a standard test. The data support the conclusion that strong social representations of sexual risks exist which do not markedly change during late adolescence. These risk estimates predict behavioural expectations, primarily for the riskiest behaviours and, for females, actual participation in vaginal sex; for males, however, risk estimates fail to predict behaviour. Evidence here for a rational model of individual decision-making in relation to sexual risk-taking is sparse. Impulsivity was not a good predictor of expected or actual patterns of sexual behaviour, though higher impulsivity was associated with having more sexual partners and, in females, with starting to have sex younger.

  8. Leptospirosis disease mapping with standardized morbidity ratio and Poisson-Gamma model: An analysis of Leptospirosis disease in Kelantan, Malaysia

    NASA Astrophysics Data System (ADS)

    Che Awang, Aznida; Azah Samat, Nor

    2017-09-01

    Leptospirosis is a disease caused by infection with pathogenic species from the genus Leptospira. Humans can be infected with leptospirosis through direct or indirect exposure to the urine of infected animals. The excretion of urine from animal hosts that carry pathogenic Leptospira contaminates soil and water. People can therefore become infected when they are exposed to contaminated soil or water through a cut or open wound on the skin. The bacteria can also enter the human body through mucous membranes such as the nose, eyes and mouth, for example by splashing contaminated water or urine into the eyes or swallowing contaminated water or food. Currently, no vaccine is available for the prevention of leptospirosis, but the disease can be treated if it is diagnosed early, avoiding complications. Disease risk mapping is important for disease control and prevention, and a good choice of statistical model produces a good disease risk map. Therefore, the aim of this study is to estimate the relative risk for leptospirosis disease based initially on the most common statistic used in disease mapping, the Standardized Morbidity Ratio (SMR), and on the Poisson-gamma model. This paper begins by providing a review of the SMR method and the Poisson-gamma model, which we then applied to leptospirosis data from Kelantan, Malaysia. Both results are displayed and compared using graphs, tables and maps. The results show that the Poisson-gamma model produces better relative risk estimates than the SMR method, because it overcomes the drawback of the SMR, whose relative risk estimate becomes zero when a region has no observed leptospirosis cases. However, the Poisson-gamma model has its own limitations: covariate adjustment is difficult, and it cannot allow for spatial correlation between risks in neighbouring areas. These problems have motivated researchers to introduce alternative methods for estimating the risk.
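
    The shrinkage behaviour described above can be made concrete. For a region with observed count O and expected count E, the SMR is O/E, while the Poisson-gamma model (O ~ Poisson(E·θ), θ ~ Gamma(a, b)) gives the posterior-mean relative risk (a + O)/(b + E). The prior parameters below are illustrative:

```python
def smr(obs, expected):
    """Standardized Morbidity Ratio: observed over expected cases."""
    return obs / expected

def poisson_gamma_rr(obs, expected, a, b):
    """Posterior-mean relative risk under O ~ Poisson(E*theta),
    theta ~ Gamma(a, b): shrinks the SMR toward the prior mean a/b."""
    return (a + obs) / (b + expected)

# A region with no observed cases: the SMR collapses to zero,
# but the Poisson-gamma estimate stays positive.
print(smr(0, 4.0))                         # -> 0.0
print(poisson_gamma_rr(0, 4.0, 2.0, 2.0))  # shrunk toward a/b = 1
```

    With large counts the Poisson-gamma estimate converges back to the SMR, so the smoothing mainly affects sparse regions, which is exactly the drawback of the SMR noted above.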

  9. Empirical Bayes Estimation of Semi-parametric Hierarchical Mixture Models for Unbiased Characterization of Polygenic Disease Architectures

    PubMed Central

    Nishino, Jo; Kochi, Yuta; Shigemizu, Daichi; Kato, Mamoru; Ikari, Katsunori; Ochi, Hidenori; Noma, Hisashi; Matsui, Kota; Morizono, Takashi; Boroevich, Keith A.; Tsunoda, Tatsuhiko; Matsui, Shigeyuki

    2018-01-01

    Genome-wide association studies (GWAS) suggest that the genetic architecture of complex diseases consists of unexpectedly numerous variants with small effect sizes. However, the polygenic architectures of many diseases have not been well characterized due to the lack of simple and fast methods for unbiased estimation of the underlying proportion of disease-associated variants and their effect-size distribution. Applying empirical Bayes estimation of semi-parametric hierarchical mixture models to GWAS summary statistics, we confirmed that schizophrenia was extremely polygenic [~40% of independent genome-wide SNPs are risk variants, most with odds ratios (OR) within 1.03], whereas rheumatoid arthritis was less polygenic (~4 to 8% risk variants, with a significant portion reaching OR = 1.05 to 1.1). For rheumatoid arthritis, stratified estimations revealed that expression quantitative trait loci in blood explained a large share of the genetic variance, and that low- and high-frequency derived alleles were prone to be risk and protective, respectively, suggesting a predominance of deleterious-risk and advantageous-protective mutations. Despite genetic correlation, effect-size distributions for schizophrenia and bipolar disorder differed across allele frequency. These analyses distinguished disease polygenic architectures and provided clues for etiological differences in complex diseases. PMID:29740473

  10. Chronic and Acute Ozone Exposure in the Week Prior to Delivery Is Associated with the Risk of Stillbirth

    PubMed Central

    Ha, Sandie; Pollack, Anna Z.; Zhu, Yeyi; Seeni, Indulaxmi; Kim, Sung Soo; Sherman, Seth; Liu, Danping

    2017-01-01

    Chronic and acute air pollution has been studied in relation to stillbirth with inconsistent findings. We examined stillbirth risk in a retrospective cohort of 223,375 singleton deliveries from 12 clinical sites across the United States. Average criteria air pollutant exposure was calculated using modified Community Multiscale Air Quality models for the day of delivery and each of the seven days prior, the whole pregnancy, and the first trimester. Poisson regression models using generalized estimating equations estimated the relative risk (RR) of stillbirth and 95% confidence intervals (CI) in relation to an interquartile range increase in pollutant, with adjustment for temperature, clinical, and demographic factors. Ozone (O3) was associated with a 13–22% increased risk of stillbirth on days 2, 3, and 5–7 prior to delivery in single pollutant models, and these findings persisted in multi-pollutant models for days 5 (RR = 1.22, CI = 1.07–1.38) and 6 (RR = 1.18, CI = 1.04–1.33). Whole-pregnancy and first-trimester O3 exposure increased risk by 18–39% in single pollutant models. Maternal asthma increased the stillbirth risk associated with chronic PM2.5 and carbon monoxide exposures. Both chronic and acute O3 exposure consistently increased stillbirth risk, while the role of other pollutants varied. Approximately 8000 stillbirths per year in the US may be attributable to O3 exposure. PMID:28684711
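
    The closing estimate of attributable stillbirths follows standard attributable-fraction arithmetic; the inputs below are illustrative, since the abstract does not report the exact relative risk or annual stillbirth total used:

```python
def attributable_fraction(rr):
    """Attributable fraction among the exposed: (RR - 1) / RR."""
    return (rr - 1.0) / rr

def attributable_number(total_events, rr):
    """Annual events attributable to the exposure, assuming the whole
    population is exposed (an illustrative simplification)."""
    return total_events * attributable_fraction(rr)

# Illustrative only: 24,000 stillbirths/year and RR = 1.2.
print(round(attributable_number(24000, 1.2)))  # -> 4000
```

    In practice the calculation is stratified by exposure level and population, but the (RR - 1)/RR core is the same.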

  11. Establishing endangered species recovery criteria using predictive simulation modeling

    USGS Publications Warehouse

    McGowan, Conor P.; Catlin, Daniel H.; Shaffer, Terry L.; Gratto-Trevor, Cheri L.; Aron, Carol

    2014-01-01

    Listing a species under the Endangered Species Act (ESA) and developing a recovery plan requires U.S. Fish and Wildlife Service to establish specific and measurable criteria for delisting. Generally, species are listed because they face (or are perceived to face) elevated risk of extinction due to issues such as habitat loss, invasive species, or other factors. Recovery plans identify recovery criteria that reduce extinction risk to an acceptable level. It logically follows that the recovery criteria, the defined conditions for removing a species from ESA protections, need to be closely related to extinction risk. Extinction probability is a population parameter estimated with a model that uses current demographic information to project the population into the future over a number of replicates, calculating the proportion of replicated populations that go extinct. We simulated extinction probabilities of piping plovers in the Great Plains and estimated the relationship between extinction probability and various demographic parameters. We tested the fit of regression models linking initial abundance, productivity, or population growth rate to extinction risk, and then, using the regression parameter estimates, determined the conditions required to reduce extinction probability to some pre-defined acceptable threshold. Binomial regression models with mean population growth rate and the natural log of initial abundance were the best predictors of extinction probability 50 years into the future. For example, based on our regression models, an initial abundance of approximately 2400 females with an expected mean population growth rate of 1.0 will limit extinction risk for piping plovers in the Great Plains to less than 0.048. Our method provides a straightforward way of developing specific and measurable recovery criteria linked directly to the core issue of extinction risk. Published by Elsevier Ltd.
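
    The regression-inversion step described above (solving the fitted binomial model for the conditions that cap extinction risk) can be sketched directly. The coefficients here are hypothetical, not the published piping plover estimates:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def extinction_prob(n0, lam, b0, b1, b2):
    """Binomial-regression form: logit(p_ext) = b0 + b1*lam + b2*ln(n0)."""
    eta = b0 + b1 * lam + b2 * math.log(n0)
    return 1.0 / (1.0 + math.exp(-eta))

def abundance_for_risk(p_target, lam, b0, b1, b2):
    """Invert the regression: initial abundance keeping risk at p_target."""
    return math.exp((logit(p_target) - b0 - b1 * lam) / b2)

# Hypothetical coefficients (b2 < 0: higher abundance, lower risk).
b0, b1, b2 = 10.0, -8.0, -0.6
n_req = abundance_for_risk(0.048, 1.0, b0, b1, b2)
print(n_req, extinction_prob(n_req, 1.0, b0, b1, b2))  # second value ~0.048
```

    This is the sense in which a recovery criterion ("at least N females with growth rate 1.0") maps one-to-one onto an acceptable extinction probability.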

  12. The use of cluster analysis techniques in spaceflight project cost risk estimation

    NASA Technical Reports Server (NTRS)

    Fox, G.; Ebbeler, D.; Jorgensen, E.

    2003-01-01

    Project cost risk is the uncertainty in final project cost, contingent on initial budget, requirements and schedule. For a proposed mission, a dynamic simulation model relying for some of its input on a simple risk elicitation is used to identify and quantify systemic cost risk.

  13. Ultraviolet Radiation: Human Exposure and Health Risks.

    ERIC Educational Resources Information Center

    Tenkate, Thomas D.

    1998-01-01

    Provides an overview of human exposure to ultraviolet radiation and associated health effects as well as risk estimates for acute and chronic conditions resulting from such exposure. Demonstrates substantial reductions in health risk that can be achieved through preventive actions. Also includes a risk assessment model for skin cancer. Contains 36…

  14. Associations between air emissions from sour gas processing plants and indices of cow retainment and survival in dairy herds in Alberta

    PubMed Central

    Scott, H. Morgan; Soskolne, Colin L.; Lissemore, Kerry D.; Martin, S. Wayne; Shoukri, Mohamed M.; Coppock, Robert W.; Guidotti, Tee L.

    2003-01-01

    This paper describes the results of an investigation into the effects of air emissions from sour gas processing plants on indices of retainment or survival of adult female dairy cattle on farms in Alberta; namely, the productive lifespan of individual animals, and annual herd-level risks for culling and mortality. Using a geographical information system, 2 dispersion models — 1 simple and 1 complex — were used to assess historical exposures to sour gas emissions at 1382 dairy farm sites from 1985 through to 1994. Multivariable survival models, adjusting for the dependence of survival responses within a herd over time, as well as potential confounding variables, were utilized to determine associations between sour gas exposure estimates and the time from the first calving date to either death or culling of 150 210 dairy cows. Generalized linear models were used to model the relationship between herd-level risks for culling and mortality and levels of sour gas exposure. No significant (P < 0.05) associations were found with the time to culling (n = 70 052). However, both dispersion model exposure estimates were significantly associated (P < 0.05) with a decreased hazard for mortality; that is, in cases where cattle had died on-farm (n = 8743). There were no significant associations (P > 0.05) between herd culling risks and the 2 dispersion model exposure estimates. There was no measurable impact of plant emissions on the annual herd risk of mortality. PMID:12528823

  15. Uncertainty analysis in vulnerability estimations for elements at risk- a review of concepts and some examples on landslides

    NASA Astrophysics Data System (ADS)

    Ciurean, R. L.; Glade, T.

    2012-04-01

    Decision under uncertainty is a constant of everyday life and an important component of risk management and governance. Recently, experts have emphasized the importance of quantifying uncertainty in all phases of landslide risk analysis. Due to its multi-dimensional and dynamic nature, (physical) vulnerability is inherently complex, and "degree of loss" estimates are imprecise and to some extent even subjective. Uncertainty analysis introduces quantitative modeling approaches that allow for more objective outputs, improving the risk management process as well as enhancing communication between various stakeholders for better risk governance. This study presents a review of concepts for uncertainty analysis in the vulnerability of elements at risk to landslides. Different semi-quantitative and quantitative methods are compared based on their feasibility in real-world situations, hazard dependency, process stage in vulnerability assessment (i.e. input data, model, output), and applicability within an integrated landslide hazard and risk framework. The resulting observations will help to identify current gaps and future needs in vulnerability assessment, including estimation of uncertainty propagation, transferability of the methods, and development of visualization tools, but they also address basic questions such as what uncertainty is and how it can be quantified or treated in a reliable and reproducible way.

  16. Impact of earthquake source complexity and land elevation data resolution on tsunami hazard assessment and fatality estimation

    NASA Astrophysics Data System (ADS)

    Muhammad, Ario; Goda, Katsuichiro

    2018-03-01

    This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution; hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than to the grid resolution. Thus, uniform slip models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From a tsunami risk management perspective, this approach generates large ensembles of results that are useful for making effective and robust decisions.

  17. A framework for quantifying net benefits of alternative prognostic models

    PubMed Central

    Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G

    2012-01-01

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd. PMID:21905066
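
    The life-years framework above builds on the decision-analytic notion of net benefit. As a point of reference, here is a minimal event-count sketch of the underlying idea (the classic Vickers-Elkin decision-curve form, not the life-years extension the paper develops); the data and threshold are hypothetical:

```python
def net_benefit(y_true, risk_scores, threshold):
    """Decision-curve net benefit: treat everyone whose predicted risk
    exceeds the threshold; false positives are penalised by the odds of
    the threshold probability. Event-count analogue of the life-years
    framework described in the abstract."""
    n = len(y_true)
    treated = [r >= threshold for r in risk_scores]
    tp = sum(1 for y, t in zip(y_true, treated) if y and t)
    fp = sum(1 for y, t in zip(y_true, treated) if not y and t)
    return tp / n - (fp / n) * (threshold / (1.0 - threshold))

# Hypothetical outcomes and predicted risks at a 50% treatment threshold.
nb = net_benefit([1, 1, 0, 0], [0.9, 0.8, 0.1, 0.6], 0.5)
```

Comparing this quantity between two models at clinically relevant thresholds is what "change in net benefit" operationalises; the paper additionally weights events by cardiovascular-disease-free life years.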

  18. BOADICEA breast cancer risk prediction model: updates to cancer incidences, tumour pathology and web interface

    PubMed Central

    Lee, A J; Cunningham, A P; Kuchenbaecker, K B; Mavaddat, N; Easton, D F; Antoniou, A C

    2014-01-01

    Background: The Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm (BOADICEA) is a risk prediction model that is used to compute probabilities of carrying mutations in the high-risk breast and ovarian cancer susceptibility genes BRCA1 and BRCA2, and to estimate the future risks of developing breast or ovarian cancer. In this paper, we describe updates to the BOADICEA model that extend its capabilities, make it easier to use in a clinical setting and yield more accurate predictions. Methods: We describe: (1) updates to the statistical model to include cancer incidences from multiple populations; (2) updates to the distributions of tumour pathology characteristics using new data on BRCA1 and BRCA2 mutation carriers and women with breast cancer from the general population; (3) improvements to the computational efficiency of the algorithm so that risk calculations now run substantially faster; and (4) updates to the model's web interface to accommodate these new features and to make it easier to use in a clinical setting. Results: We present results derived using the updated model, and demonstrate that the changes have a significant impact on risk predictions. Conclusion: All updates have been implemented in a new version of the BOADICEA web interface that is now available for general use: http://ccge.medschl.cam.ac.uk/boadicea/. PMID:24346285

  19. Modeling for the allocation of oil spill recovery capacity considering environmental and economic factors.

    PubMed

    Ha, Min-Jae

    2018-01-01

    This study presents a regional oil spill risk assessment and capacities for marine oil spill response in Korea. The risk assessment of oil spill is carried out using both causal factors and environmental/economic factors. The weight of each parameter is calculated using the Analytic Hierarchy Process (AHP). Final regional risk degrees of oil spill are estimated by combining the degree and weight of each existing parameter. From these estimated risk levels, oil recovery capacities were determined with reference to the recovery target of 7500kl specified in existing standards. The estimates were deemed feasible, and provided a more balanced distribution of resources than existing capacities set according to current standards. Copyright © 2017 Elsevier Ltd. All rights reserved.
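
    The AHP weighting step described above can be sketched as follows. The comparison matrix and the criteria it compares are hypothetical, and the row geometric-mean approximation stands in for whatever prioritisation procedure (e.g. principal eigenvector) the study actually used:

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights from a reciprocal pairwise
    comparison matrix via the row geometric-mean method; weights sum to 1."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]  # row geometric means
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical 3x3 comparison of spill-risk criteria
# (traffic volume vs. environmental sensitivity vs. economic value).
A = [
    [1.0, 3.0, 5.0],
    [1.0 / 3.0, 1.0, 2.0],
    [1.0 / 5.0, 1.0 / 2.0, 1.0],
]
weights = ahp_weights(A)  # first criterion dominates in this example
```

A regional risk degree is then a weighted sum of the (normalised) criterion scores for that region.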

  20. An emission-weighted proximity model for air pollution exposure assessment.

    PubMed

    Zou, Bin; Wilson, J Gaines; Zhan, F Benjamin; Zeng, Yongnian

    2009-08-15

    Among the most common spatial models for estimating personal exposure are Traditional Proximity Models (TPMs). Though TPMs are straightforward to configure and interpret, they are prone to extensive errors in exposure estimates and do not provide prospective estimates. To resolve these inherent problems with TPMs, we introduce a novel Emission Weighted Proximity Model (EWPM), which takes into consideration the emissions from all sources potentially influencing the receptors. EWPM performance was evaluated by comparing the normalized exposure risk values of sulfur dioxide (SO2) calculated by EWPM with those calculated by TPM and with monitored observations over a one-year period in two large Texas counties. To investigate whether the limitations of TPM in predicting potential exposure risk without recorded incidence can be overcome, we also introduce a hybrid framework, a 'Geo-statistical EWPM': a synthesis of Ordinary Kriging geo-statistical interpolation and EWPM. The prediction results are presented as two potential exposure risk prediction maps. The performance of these two exposure maps in predicting individual SO2 exposure risk was validated with 10 virtual cases in prospective exposure scenarios. Risk values for EWPM agreed clearly better with the observed concentrations than those from TPM. Over the entire study area, the mean SO2 exposure risk from EWPM was higher relative to TPM (1.00 vs. 0.91). The mean bias of the exposure risk values of the 10 virtual cases between EWPM and 'Geo-statistical EWPM' was much smaller than that between TPM and 'Geo-statistical TPM' (5.12 vs. 24.63). EWPM appears to portray individual exposure more accurately than TPM. The 'Geo-statistical EWPM' effectively augments the role of the standard proximity model and makes it possible to predict individual risk in future exposure scenarios resulting in adverse health effects from environmental pollution.
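
    The contrast between TPM and EWPM can be illustrated with a minimal sketch. The inverse-distance kernel and the source data below are assumptions for illustration; the paper's exact weighting function is not given in the abstract:

```python
import math

def tpm_exposure(receptor, sources):
    """Traditional proximity model (TPM): exposure is indexed by the
    inverse distance to the nearest source only, ignoring emission rates."""
    nearest = min(math.dist(receptor, xy) for xy, _ in sources)
    return 1.0 / max(nearest, 1e-9)

def ewpm_exposure(receptor, sources):
    """Emission-weighted proximity model (EWPM): every source contributes,
    with its contribution weighted by its emission rate q."""
    return sum(q / max(math.dist(receptor, xy), 1e-9) for xy, q in sources)

# Two hypothetical SO2 sources: a small emitter nearby, a large one farther off.
sources = [((0.0, 1.0), 10.0), ((0.0, 5.0), 500.0)]
receptor = (0.0, 0.0)
# TPM sees only the nearest (small) source; EWPM is dominated by the large one.
tpm = tpm_exposure(receptor, sources)
ewpm = ewpm_exposure(receptor, sources)
```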

  1. A spatial Bayesian network model to assess the benefits of early warning for urban flood risk to people

    NASA Astrophysics Data System (ADS)

    Balbi, S.; Villa, F.; Mojtahed, V.; Hegetschweiler, K. T.; Giupponi, C.

    2015-10-01

    This article presents a novel methodology to assess flood risk to people by integrating people's vulnerability and ability to cushion hazards through coping and adapting. The proposed approach extends traditional risk assessments beyond material damages; complements quantitative and semi-quantitative data with subjective and local knowledge, improving the use of commonly available information; produces estimates of model uncertainty by providing probability distributions for all of its outputs. Flood risk to people is modeled using a spatially explicit Bayesian network model calibrated on expert opinion. Risk is assessed in terms of: (1) likelihood of non-fatal physical injury; (2) likelihood of post-traumatic stress disorder; (3) likelihood of death. The study area covers the lower part of the Sihl valley (Switzerland) including the city of Zurich. The model is used to estimate the benefits of improving an existing Early Warning System, taking into account the reliability, lead-time and scope (i.e. coverage of people reached by the warning). Model results indicate that the potential benefits of an improved early warning in terms of avoided human impacts are particularly relevant in case of a major flood event: about 75 % of fatalities, 25 % of injuries and 18 % of post-traumatic stress disorders could be avoided.
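
    A Bayesian network of this kind produces its outputs by marginalising over chains of conditional probabilities. A toy two-node chain (warning received, then evacuation, then fatality) conveys the mechanics; the probability tables here are purely illustrative, not the expert-calibrated ones used in the study:

```python
def p_fatality(p_warned, p_evac_warned, p_evac_unwarned,
               p_death_evac, p_death_stay):
    """Marginal fatality probability from a tiny discrete Bayesian chain:
    warning -> evacuation -> death. All inputs are illustrative."""
    p = 0.0
    for warned, pw in ((True, p_warned), (False, 1.0 - p_warned)):
        p_evac = p_evac_warned if warned else p_evac_unwarned
        for evac, pe in ((True, p_evac), (False, 1.0 - p_evac)):
            p_death = p_death_evac if evac else p_death_stay
            p += pw * pe * p_death  # sum over all joint states
    return p

# Improving the Early Warning System's scope raises the share of people warned.
baseline = p_fatality(0.50, 0.6, 0.1, 0.001, 0.020)
improved = p_fatality(0.90, 0.6, 0.1, 0.001, 0.020)
avoided_fraction = 1.0 - improved / baseline
```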

  2. The Preventable Risk Integrated ModEl and Its Use to Estimate the Health Impact of Public Health Policy Scenarios

    PubMed Central

    Scarborough, Peter; Harrington, Richard A.; Mizdrak, Anja; Zhou, Lijuan Marissa; Doherty, Aiden

    2014-01-01

    Noncommunicable disease (NCD) scenario models are an essential part of the public health toolkit, allowing for an estimate of the health impact of population-level interventions that are not amenable to assessment by standard epidemiological study designs (e.g., health-related food taxes and physical infrastructure projects) and extrapolating results from small samples to the whole population. The PRIME (Preventable Risk Integrated ModEl) is an openly available NCD scenario model that estimates the effect of population-level changes in diet, physical activity, and alcohol and tobacco consumption on NCD mortality. The structure and methods employed in the PRIME are described here in detail, including the development of open source code that will support a PRIME web application to be launched in 2015. This paper reviews scenario results from eleven papers that have used the PRIME, including estimates of the impact of achieving government recommendations for healthy diets, health-related food taxes and subsidies, and low-carbon diets. Future challenges for NCD scenario modelling, including the need for more comparisons between models and the improvement of future prediction of NCD rates, are also discussed. PMID:25328757

  3. Risk assessment of TBT in the Japanese short-neck clam ( Ruditapes philippinarum) of Tokyo Bay using a chemical fate model

    NASA Astrophysics Data System (ADS)

    Horiguchi, Fumio; Nakata, Kisaburo; Ito, Naganori; Okawa, Ken

    2006-12-01

    A risk assessment of tributyltin (TBT) in Tokyo Bay was conducted at the species level with the Margin of Exposure (MOE) method, using the Japanese short-neck clam, Ruditapes philippinarum. The assessment endpoint was defined to protect R. philippinarum in Tokyo Bay from the growth effects of TBT. A No Observed Effect Concentration (NOEC) for this species with respect to growth reduction induced by TBT was estimated from experimental results published in the scientific literature. Sources of TBT in this study were assumed to be commercial vessels in harbors and navigation routes. Concentrations of TBT in Tokyo Bay were estimated using a three-dimensional hydrodynamic model, an ecosystem model and a chemical fate model. Estimated MOEs for R. philippinarum for 1990, 2000, and 2007 were approximately 1-3, 10, and 100, respectively, indicating a declining temporal trend in the probability of adverse growth effects. A simplified software package called RAMTB was developed by incorporating the chemical fate model and databases of seasonal flow fields and distributions of organic substances (phytoplankton and detritus) in Tokyo Bay, simulated by the hydrodynamic and ecological models, respectively.
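
    The MOE calculation itself is a simple ratio of the no-effect concentration to the modelled exposure concentration; the numbers below are illustrative only, chosen to reproduce the kind of rising trend reported in the abstract:

```python
def margin_of_exposure(noec, predicted_conc):
    """MOE = NOEC / predicted environmental concentration;
    a larger MOE implies a lower probability of adverse effects."""
    return noec / predicted_conc

# Illustrative: a fixed NOEC against declining modelled TBT concentrations.
noec = 1.0  # arbitrary units
moes = [margin_of_exposure(noec, c) for c in (0.5, 0.1, 0.01)]
```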

  4. Drawing the line on the sand

    NASA Astrophysics Data System (ADS)

    Ranasinghe, R.; Jongejan, R.; Wainwright, D.; Callaghan, D. P.

    2016-02-01

    Up to 70% of the world's sandy coastlines are eroding, resulting in gradual and continuous coastline recession. The rate of coastline recession is likely to increase due to the projected impacts of climate change on mean sea levels, offshore wave climate and storm surges. At the same time, rapid development in the world's coastal zones continues to increase potential damages, while often reducing the resilience of coastal systems. The risks associated with coastline recession are thus likely to increase over the coming decades, unless effective risk management plans are put in place. Land-use restrictions are a key component of coastal zone risk management plans. These involve the use of coastal setback lines, which are mainly established by linearly adding the impacts of storms, recession due to sea level rise, and ambient long-term trends in shoreline evolution. This approach does not differentiate between uncertainties that develop differently over time, nor does it take into account the value and lifetime of property developments. Both shortcomings could entail considerable social cost. For balancing risk and reward, probabilistic estimates of coastline recession are a prerequisite, yet the presently adopted deterministic methods for establishing setback lines are unable to provide such estimates. Here, we present a quantitative risk analysis (QRA) model, underpinned by a multi-scale, physics-based coastal recession model capable of providing time-dependent risk estimates. The modelling approach presented enables the determination of setback lines in terms of exceedance probabilities, a quantity that directly feeds into risk evaluations and economic optimizations. As a demonstration, the risk-informed approach is applied to Narrabeen beach, Sydney, Australia.
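
    A setback line expressed as an exceedance probability reduces, in the simplest reading, to an empirical quantile of simulated recession distances. A minimal sketch under that assumption, with hypothetical Monte Carlo samples (the paper's physics-based model generates these distances; here they are just a placeholder list):

```python
def setback_distance(recession_samples, exceedance_prob):
    """Setback distance whose probability of being exceeded by coastline
    recession equals exceedance_prob: the (1 - p) empirical quantile of
    simulated recession distances."""
    xs = sorted(recession_samples)
    k = round((1.0 - exceedance_prob) * (len(xs) - 1))
    return xs[k]

# Hypothetical ensemble: 0..100 m of simulated recession over the planning
# horizon; place the setback line at a 1-in-10 exceedance probability.
samples = list(range(101))
line = setback_distance(samples, 0.10)
```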

  5. Heart age differentials and general cardiovascular risk profiles for persons with varying disabilities: NHANES 2001-2010.

    PubMed

    Hollar, David W; Lewis, Jennifer S

    2015-01-01

    Persons with disabilities are at risk for secondary conditions, including allostatic load contributing to cardiovascular disease. The General Cardiovascular Risk Profile (GCRP) estimates cardiovascular disease risk for individuals. The GCRP variables are present in the National Health and Nutrition Examination Survey (NHANES) for the Healthy People 2010 decade. The objective of this study was to compare persons with varying disabilities versus persons without disabilities on GCRP cardiovascular disease risk estimates across the Healthy People 2010 decade. Weighted cross-sectional one-way Analyses of Variance (ANOVA) and non-parametric Kruskal-Wallis analyses compared persons with each of eight disability types versus persons without disabilities for point estimate GCRP heart vascular age differential and Cox regression model ten-year risk estimate in each NHANES survey year for 2001-2010. Persons with mobility or vision disabilities had significantly (p < .025) greater ten-year percent risks for cardiovascular disease and negative heart vascular age differentials (with respect to actual age, therefore "older" hearts) than persons without disabilities. The GCRP dual models conflict for certain disabilities (e.g., hearing, physical/mental/emotional) but are consistently reliable measures of GCRP for persons with mobility limitations and vision disabilities. With higher CVD risk among persons with disabilities, there is a clear need for increased interventions to benefit the health of persons with disabilities. The GCRP represents a valuable, simple measurement that uses routinely collected examination data. Physicians and nurses can use the GCRP to make immediate CVD assessments and to provide point-of-contact counseling to patients with and without disabilities. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Development and validation of a melanoma risk score based on pooled data from 16 case-control studies

    PubMed Central

    Davies, John R; Chang, Yu-mei; Bishop, D Timothy; Armstrong, Bruce K; Bataille, Veronique; Bergman, Wilma; Berwick, Marianne; Bracci, Paige M; Elwood, J Mark; Ernstoff, Marc S; Green, Adele; Gruis, Nelleke A; Holly, Elizabeth A; Ingvar, Christian; Kanetsky, Peter A; Karagas, Margaret R; Lee, Tim K; Le Marchand, Loïc; Mackie, Rona M; Olsson, Håkan; Østerlind, Anne; Rebbeck, Timothy R; Reich, Kristian; Sasieni, Peter; Siskind, Victor; Swerdlow, Anthony J; Titus, Linda; Zens, Michael S; Ziegler, Andreas; Gallagher, Richard P.; Barrett, Jennifer H; Newton-Bishop, Julia

    2015-01-01

    Background We report the development of a cutaneous melanoma risk algorithm based upon 7 factors: hair colour, skin type, family history, freckling, nevus count, number of large nevi and history of sunburn, intended to form the basis of a self-assessment webtool for the general public. Methods Predicted odds of melanoma were estimated by analysing a pooled dataset from 16 case-control studies using logistic random coefficients models. Risk categories were defined based on the distribution of the predicted odds in the controls from these studies. Imputation was used to estimate missing data in the pooled datasets. The 30th, 60th and 90th centiles were used to distribute individuals into four risk groups for their age, sex and geographic location. Cross-validation was used to test the robustness of the thresholds for each group by leaving out each study one by one. Performance of the model was assessed in an independent UK case-control study dataset. Results Cross-validation confirmed the robustness of the threshold estimates. Cases and controls were well discriminated in the independent dataset (area under the curve 0.75, 95% CI 0.73-0.78). 29% of cases were in the highest risk group compared with 7% of controls, and 43% of controls were in the lowest risk group compared with 13% of cases. Conclusion We have identified a composite score representing an estimate of relative risk and successfully validated this score in an independent dataset. Impact This score may be a useful tool to inform members of the public about their melanoma risk. PMID:25713022
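
    The centile-based grouping can be sketched directly: take the 30th, 60th and 90th centiles of predicted odds among controls as cut points, then bin each individual's predicted odds. The control odds below are synthetic:

```python
import bisect
import statistics

def risk_group_cutpoints(control_odds):
    """30th, 60th and 90th centiles of predicted odds among controls,
    defining the four risk groups described in the abstract."""
    deciles = statistics.quantiles(control_odds, n=10, method="inclusive")
    return [deciles[2], deciles[5], deciles[8]]  # 30%, 60%, 90% cut points

def risk_group(predicted_odds, cutpoints):
    """0 = lowest-risk group ... 3 = highest-risk group."""
    return bisect.bisect_right(cutpoints, predicted_odds)

# Synthetic control distribution; real cut points are stratified by
# age, sex and geographic location.
controls = [float(i) for i in range(1, 101)]
cuts = risk_group_cutpoints(controls)
```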

  7. Risk model for estimating the 1-year risk of deferred lesion intervention following deferred revascularization after fractional flow reserve assessment.

    PubMed

    Depta, Jeremiah P; Patel, Jayendrakumar S; Novak, Eric; Gage, Brian F; Masrani, Shriti K; Raymer, David; Facey, Gabrielle; Patel, Yogesh; Zajarias, Alan; Lasala, John M; Amin, Amit P; Kurz, Howard I; Singh, Jasvindar; Bach, Richard G

    2015-02-21

    Although lesions for which revascularization is deferred following fractional flow reserve (FFR) assessment have a low risk of adverse cardiac events, variability in the risk of deferred lesion intervention (DLI) has not been previously evaluated. The aim of this study was to develop a prediction model to estimate the 1-year risk of DLI for coronary lesions where revascularization was not performed following FFR assessment. A prediction model for DLI was developed from a cohort of 721 patients with 882 coronary lesions where revascularization was deferred based on FFR between 10/2002 and 7/2010. Deferred lesion intervention was defined as any revascularization of a lesion previously deferred following FFR. The final DLI model was developed using stepwise Cox regression and validated using bootstrapping techniques. An algorithm was constructed to predict the 1-year risk of DLI. During a mean (±SD) follow-up period of 4.0 ± 2.3 years, 18% of lesions deferred after FFR underwent DLI; the 1-year incidence of DLI was 5.3%, while the predicted risk of DLI varied from 1 to 40%. The final Cox model included the FFR value, age, current or former smoking, history of coronary artery disease (CAD) or prior percutaneous coronary intervention, multi-vessel CAD, and serum creatinine. The c statistic for the DLI prediction model was 0.66 (95% confidence interval, CI: 0.61-0.70). Patients for whom revascularization is deferred based on FFR vary in their risk of DLI. A clinical prediction model consisting of five clinical variables and the FFR value can help predict the risk of DLI in the first year following FFR assessment. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2014. For permissions please email: journals.permissions@oup.com.
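
    A Cox model turns a linear predictor into a 1-year risk via the baseline survival function: risk = 1 - S0(1 yr) ** exp(lp). The sketch below shows only these general mechanics; the coefficient values, covariate coding and baseline survival are placeholders, not the published DLI model:

```python
import math

def one_year_risk(baseline_survival_1yr, coefs, covariates):
    """1-year event risk from a Cox model: 1 - S0(1) ** exp(lp)."""
    lp = sum(coefs[name] * covariates[name] for name in coefs)
    return 1.0 - baseline_survival_1yr ** math.exp(lp)

# Hypothetical coefficients for the FFR value plus five clinical variables.
coefs = {"ffr": -4.0, "age_per_10yr": 0.10, "smoker": 0.30,
         "cad_or_prior_pci": 0.25, "multivessel_cad": 0.20,
         "creatinine": 0.15}
patient = {"ffr": 0.84, "age_per_10yr": 6.5, "smoker": 1,
           "cad_or_prior_pci": 1, "multivessel_cad": 0, "creatinine": 1.0}
risk = one_year_risk(0.97, coefs, patient)  # higher FFR -> lower risk here
```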

  8. Joint Estimation of Cardiac Toxicity and Recurrence Risks After Comprehensive Nodal Photon Versus Proton Therapy for Breast Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stick, Line B.; Yu, Jen

    Purpose: The study aims to perform joint estimation of the risk of recurrence caused by inadequate radiation dose coverage of lymph node targets and the risk of cardiac toxicity caused by radiation exposure to the heart. Delivered photon plans are compared with realistic proton plans, thereby providing evidence-based estimates of the heterogeneity of treatment effects in consecutive cases for the 2 radiation treatment modalities. Methods and Materials: Forty-one patients referred for postlumpectomy comprehensive nodal photon irradiation for left-sided breast cancer were included. Comparative proton plans were optimized by a spot scanning technique with single-field optimization from 2 en face beams. Cardiotoxicity risk was estimated with the model of Darby et al, and risk of recurrence following a compromise of lymph node coverage was estimated by a linear dose-response model fitted to the recurrence data from the recently published EORTC (European Organisation for Research and Treatment of Cancer) 22922/10925 and NCIC-CTG (National Cancer Institute of Canada Clinical Trials Group) MA.20 randomized controlled trials. Results: Excess absolute risk of cardiac morbidity was small with photon therapy at an attained age of 80 years, with median values of 1.0% (range, 0.2%-2.9%) and 0.5% (range, 0.03%-1.0%) with and without cardiac risk factors, respectively, but even lower with proton therapy (0.13% [range, 0.02%-0.5%] and 0.06% [range, 0.004%-0.3%], respectively). The median estimated excess absolute risk of breast cancer recurrence after 10 years was 0.10% (range, 0.0%-0.9%) with photons and 0.02% (range, 0.0%-0.07%) with protons. The association between age of the patient and benefit from proton therapy was weak, almost nonexistent (Spearman rank correlations of −0.15 and −0.30 with and without cardiac risk factors, respectively). Conclusions: Modern photon therapy yields limited risk of cardiac toxicity in most patients, but proton therapy can reduce the predicted risk of cardiac toxicity by up to 2.9% and the risk of breast cancer recurrence by 0.9% in individual patients. Predicted benefit correlates weakly with age. Combined assessment of the risk from cardiac exposure and inadequate target coverage is desirable for rational consideration of competing photon and proton therapy plans.
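
    The Darby et al. dose-response referenced above is linear in mean heart dose, with the rate of major coronary events rising by roughly 7.4% per Gy. A hedged sketch of how an excess absolute risk follows from it; the baseline lifetime risk and dose values are illustrative, not the study's:

```python
def excess_relative_risk(mean_heart_dose_gy, err_per_gy=0.074):
    """Linear Darby-style dose-response: ~7.4% relative increase in major
    coronary events per Gy of mean heart dose."""
    return err_per_gy * mean_heart_dose_gy

def excess_absolute_risk(baseline_risk, mean_heart_dose_gy):
    """Excess absolute cardiac risk given an (illustrative) baseline risk."""
    return baseline_risk * excess_relative_risk(mean_heart_dose_gy)

# e.g. dropping mean heart dose from 4 Gy (photons) to 0.5 Gy (protons),
# against a hypothetical 5% baseline lifetime cardiac risk:
photon = excess_absolute_risk(0.05, 4.0)
proton = excess_absolute_risk(0.05, 0.5)
```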

  9. Environmental Risk Assessment of dredging processes - application to Marin harbour (NW Spain)

    NASA Astrophysics Data System (ADS)

    Gómez, A. G.; García Alba, J.; Puente, A.; Juanes, J. A.

    2014-04-01

    A methodological procedure to estimate the environmental risk of dredging operations in aquatic systems has been developed. Environmental risk estimates are based on numerical model results, which provide an appropriate spatio-temporal framework of analysis to support an effective decision-making process. The methodological procedure was applied to a real dredging operation in the port of Marin (NW Spain). Results from Marin harbour confirmed the suitability of the developed methodology and its conceptual approach as a comprehensive and practical management tool.

  10. A fault tree model to assess probability of contaminant discharge from shipwrecks.

    PubMed

    Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I

    2014-11-15

    Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation and identification of parameters to be investigated further in order to obtain a more reliable risk calculation. Copyright © 2014 Elsevier Ltd. All rights reserved.
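
    Under an independence assumption, fault tree gates combine basic-event probabilities as follows; the events and numbers in the example are hypothetical, not taken from the paper:

```python
def or_gate(probs):
    """Probability that at least one independent basic event occurs."""
    complement = 1.0
    for p in probs:
        complement *= 1.0 - p
    return 1.0 - complement

def and_gate(probs):
    """Probability that all independent basic events occur."""
    product = 1.0
    for p in probs:
        product *= p
    return product

# Hypothetical top event: an annual discharge requires a hull breach
# (corrosion OR external damage) AND failure of remaining containment.
p_breach = or_gate([0.05, 0.01])
p_discharge = and_gate([p_breach, 0.4])
```

Propagating distributions instead of point values through the same gates is what enables the uncertainty and sensitivity analyses the abstract mentions.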

  11. Breeding objectives for pigs in Kenya. II: economic values incorporating risks in different smallholder production systems.

    PubMed

    Mbuthia, Jackson Mwenda; Rewe, Thomas Odiwuor; Kahi, Alexander Kigunzu

    2015-02-01

    This study estimated economic values for production traits (dressing percentage (DP), %; live weight for growers (LWg), kg; live weight for sows (LWs), kg) and functional traits (feed intake for growers (FEEDg), feed intake for sows (FEEDs), preweaning survival rate (PrSR), %; postweaning survival (PoSR), %; sow survival rate (SoSR), %; total number of piglets born (TNB) and farrowing interval (FI), days) under different smallholder pig production systems in Kenya. Economic values were estimated considering two production circumstances: fixed-herd and fixed-feed. Under the fixed-herd scenario, economic values were estimated assuming a situation where the herd cannot be increased due to constraints other than feed resources. The fixed-feed input scenario assumed that the herd size is restricted by the limitation of available feed resources. In addition to the traditional profit model, a risk-rated bio-economic model was used to derive risk-rated economic values. This model accounted for imperfect knowledge concerning the risk attitudes of farmers and the variance of input and output prices. Positive economic values obtained for the traits DP, LWg, LWs, PoSR, PrSR, SoSR and TNB indicate that targeting them in improvement would positively impact profitability in pig breeding programmes. Under the fixed-feed basis, the risk-rated economic values for DP, LWg, LWs and SoSR were similar to those obtained under the fixed-herd situation. Accounting for risks in the economic values did not yield errors greater than ±50 % in any of the production systems or bases of evaluation, meaning there would be relatively little effect on the real genetic gain of a selection index. Therefore, both traditional and risk-rated models can be satisfactorily used to predict profitability in pig breeding programmes.

  12. Model-based approach for quantitative estimates of skin, heart, and lung toxicity risk for left-side photon and proton irradiation after breast-conserving surgery.

    PubMed

    Tommasino, Francesco; Durante, Marco; D'Avino, Vittoria; Liuzzi, Raffaele; Conson, Manuel; Farace, Paolo; Palma, Giuseppe; Schwarz, Marco; Cella, Laura; Pacelli, Roberto

    2017-05-01

    Proton beam therapy represents a promising modality for left-side breast cancer (BC) treatment, but concerns have been raised about skin toxicity and poor cosmesis. The aim of this study is to apply a skin normal tissue complication probability (NTCP) model for intensity modulated proton therapy (IMPT) optimization in left-side BC. Ten left-side BC patients undergoing photon irradiation after breast-conserving surgery were randomly selected from our clinical database. Intensity modulated photon (IMRT) and IMPT plans were calculated with iso-tumor-coverage criteria and according to RTOG 1005 guidelines. Proton plans were computed with and without skin optimization. Published NTCP models were employed to estimate the risk of different toxicity endpoints for the skin, lung, heart and its substructures. Acute skin NTCP evaluation suggests a lower toxicity level with IMPT compared to IMRT when the skin is included in the proton optimization strategy (0.1% versus 1.7%, p < 0.001). Dosimetric results show that, with the same level of tumor coverage, IMPT attains significant heart and lung dose sparing compared with IMRT. By NTCP model-based analysis, an overall reduction in the predicted cardiopulmonary toxicity risk can be observed for all IMPT plans compared to IMRT plans: the relative risk reduction from protons varies between 0.1 and 0.7 depending on the toxicity endpoint considered. Our analysis suggests that IMPT might be safely applied without increasing the risk of severe acute radiation-induced skin toxicity. The quantitative risk estimates also support the potential clinical benefits of IMPT for left-side BC irradiation due to the lower risk of cardiac and pulmonary morbidity. The applied approach might be relevant in the long term for the setup of cost-effectiveness evaluation strategies based on NTCP predictions.
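
    NTCP models of the kind applied here often take the Lyman-Kutcher-Burman (LKB) form; the abstract does not name the specific model family used, so this is a generic sketch with illustrative parameters (TD50, m, n are organ- and endpoint-specific fitted values):

```python
import math

def gen_eud(doses_gy, volume_fractions, n):
    """Generalised equivalent uniform dose from a differential DVH
    (volume fractions sum to 1); n is the volume-effect parameter."""
    return sum(v * d ** (1.0 / n)
               for d, v in zip(doses_gy, volume_fractions)) ** n

def ntcp_lkb(eud_gy, td50_gy, m):
    """LKB NTCP: standard-normal CDF of t = (EUD - TD50) / (m * TD50)."""
    t = (eud_gy - td50_gy) / (m * td50_gy)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# A plan delivering a lower EUD to the organ yields a lower NTCP,
# which is the comparison drawn between the IMPT and IMRT plans above.
ntcp = ntcp_lkb(gen_eud([50.0, 20.0], [0.3, 0.7], 0.1), 55.0, 0.3)
```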

  13. Effect of increased concentrations of atmospheric carbon dioxide on the global threat of zinc deficiency: a modelling study.

    PubMed

    Myers, Samuel S; Wessells, K Ryan; Kloog, Itai; Zanobetti, Antonella; Schwartz, Joel

    2015-10-01

    Increasing concentrations of atmospheric carbon dioxide (CO2) lower the content of zinc and other nutrients in important food crops. Zinc deficiency is currently responsible for large burdens of disease globally, and the populations who are at highest risk of zinc deficiency also receive most of their dietary zinc from crops. By modelling dietary intake of bioavailable zinc for the populations of 188 countries under both an ambient CO2 and elevated CO2 scenario, we sought to estimate the effect of anthropogenic CO2 emissions on the global risk of zinc deficiency. We estimated per capita per day bioavailable intake of zinc for the populations of 188 countries at ambient CO2 concentrations (375-384 ppm) using food balance sheet data for 2003-07 from the Food and Agriculture Organization. We then used previously published data from free air CO2 enrichment and open-top chamber experiments to model zinc intake at elevated CO2 concentrations (550 ppm, which is the concentration expected by 2050). Estimates developed by the International Zinc Nutrition Consultative Group were used for country-specific theoretical mean daily per-capita physiological requirements for zinc. Finally, we used these data on zinc bioavailability and population-weighted estimated average zinc requirements to estimate the risk of inadequate zinc intake among the populations of the different nations under the two scenarios (ambient and elevated CO2). The difference between the population at risk at elevated and ambient CO2 concentrations (ie, population at new risk of zinc deficiency) was our measure of impact. The total number of people estimated to be placed at new risk of zinc deficiency by 2050 was 138 million (95% CI 120-156). The people likely to be most affected live in Africa and South Asia, with nearly 48 million (32-63) residing in India alone. Global maps of increased risk show significant heterogeneity. 
Our results indicate that one heretofore unquantified human health effect associated with anthropogenic CO2 emissions will be a significant increase in the human population at risk of zinc deficiency. Our country-specific findings can be used to help guide interventions aimed at reducing this vulnerability. Bill & Melinda Gates Foundation, Winslow Foundation. Copyright © 2015 Myers et al. Open access article published under the terms of CC BY-NC-ND. Published by Elsevier Ltd. All rights reserved.
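The study's impact measure is the difference between the population at risk of inadequate intake under elevated and ambient CO2. A simplified sketch of that calculation using the standard EAR cut-point approximation (fraction of the population whose usual intake falls below the estimated average requirement, assuming normally distributed intakes); function names and numbers are illustrative, not the paper's full bioavailability model:

```python
import math

def fraction_at_risk(mean_intake, sd_intake, ear):
    # EAR cut-point: share of the population with usual intake below
    # the estimated average requirement, assuming intakes are
    # approximately normally distributed.
    z = (ear - mean_intake) / sd_intake
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def newly_at_risk(population, mean_ambient, mean_elevated, sd, ear):
    # People placed at new risk when elevated CO2 lowers the mean
    # bioavailable zinc intake of a country's population.
    return population * (fraction_at_risk(mean_elevated, sd, ear)
                         - fraction_at_risk(mean_ambient, sd, ear))
```

Summing `newly_at_risk` over countries gives the headline figure; the paper reports 138 million people globally under a 550 ppm scenario.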

  14. Ecological risk assessment of depleted uranium in the environment at Aberdeen Proving Ground

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clements, W.H.; Kennedy, P.L.; Myers, O.B.

    1993-01-01

A preliminary ecological risk assessment was conducted to evaluate the effects of depleted uranium (DU) in the Aberdeen Proving Ground (APG) ecosystem and its potential for human health effects. An ecological risk assessment of DU should include the processes of hazard identification, dose-response assessment, exposure assessment, and risk characterization. Ecological risk assessments also should explicitly examine risks incurred by nonhuman as well as human populations, because risk assessments based only on human health do not always protect other species. To begin to assess the potential ecological risk of DU release to the environment, we modeled DU transport through the principal components of the aquatic ecosystem at APG. We focused on the APG aquatic system because of the close proximity of the Chesapeake Bay and concerns about potential impacts on this ecosystem. Our objective in using a model to estimate environmental fate of DU is to ultimately reduce the uncertainty about predicted ecological risks due to DU from APG. The model functions to summarize information on the structure and functional properties of the APG aquatic system, to provide an exposure assessment by estimating the fate of DU in the environment, and to evaluate the sources of uncertainty about DU transport.

  15. Ecological risk assessment of depleted uranium in the environment at Aberdeen Proving Ground. Annual report, 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clements, W.H.; Kennedy, P.L.; Myers, O.B.

    1993-03-01

A preliminary ecological risk assessment was conducted to evaluate the effects of depleted uranium (DU) in the Aberdeen Proving Ground (APG) ecosystem and its potential for human health effects. An ecological risk assessment of DU should include the processes of hazard identification, dose-response assessment, exposure assessment, and risk characterization. Ecological risk assessments also should explicitly examine risks incurred by nonhuman as well as human populations, because risk assessments based only on human health do not always protect other species. To begin to assess the potential ecological risk of DU release to the environment, we modeled DU transport through the principal components of the aquatic ecosystem at APG. We focused on the APG aquatic system because of the close proximity of the Chesapeake Bay and concerns about potential impacts on this ecosystem. Our objective in using a model to estimate environmental fate of DU is to ultimately reduce the uncertainty about predicted ecological risks due to DU from APG. The model functions to summarize information on the structure and functional properties of the APG aquatic system, to provide an exposure assessment by estimating the fate of DU in the environment, and to evaluate the sources of uncertainty about DU transport.

  16. Skin cancer incidence among atomic bomb survivors from 1958 to 1996.

    PubMed

    Sugiyama, Hiromi; Misumi, Munechika; Kishikawa, Masao; Iseki, Masachika; Yonehara, Shuji; Hayashi, Tomayoshi; Soda, Midori; Tokuoka, Shoji; Shimizu, Yukiko; Sakata, Ritsu; Grant, Eric J; Kasagi, Fumiyoshi; Mabuchi, Kiyohiko; Suyama, Akihiko; Ozasa, Kotaro

    2014-05-01

    The radiation risk of skin cancer by histological types has been evaluated in the atomic bomb survivors. We examined 80,158 of the 120,321 cohort members who had their radiation dose estimated by the latest dosimetry system (DS02). Potential skin tumors diagnosed from 1958 to 1996 were reviewed by a panel of pathologists, and radiation risk of the first primary skin cancer was analyzed by histological types using a Poisson regression model. A significant excess relative risk (ERR) of basal cell carcinoma (BCC) (n = 123) was estimated at 1 Gy (0.74, 95% confidence interval (CI): 0.26, 1.6) for those age 30 at exposure and age 70 at observation based on a linear-threshold model with a threshold dose of 0.63 Gy (95% CI: 0.32, 0.89) and a slope of 2.0 (95% CI: 0.69, 4.3). The estimated risks were 15, 5.7, 1.3 and 0.9 for age at exposure of 0-9, 10-19, 20-39, over 40 years, respectively, and the risk increased 11% with each one-year decrease in age at exposure. The ERR for squamous cell carcinoma (SCC) in situ (n = 64) using a linear model was estimated as 0.71 (95% CI: 0.063, 1.9). However, there were no significant dose responses for malignant melanoma (n = 10), SCC (n = 114), Paget disease (n = 10) or other skin cancers (n = 15). The significant linear radiation risk for BCC with a threshold at 0.63 Gy suggested that the basal cells of the epidermis had a threshold sensitivity to ionizing radiation, especially for young persons at the time of exposure.
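The BCC dose response above is a linear-threshold model, and the quoted point estimates are enough to reproduce the headline number: with a threshold of 0.63 Gy and a slope of 2.0, the ERR at 1 Gy is 2.0 × (1 − 0.63) = 0.74. A small sketch using those published point estimates (the age scaling is a naive extrapolation of the reported ~11% increase per one-year decrease in age at exposure, for illustration only):

```python
def err_linear_threshold(dose_gy, threshold=0.63, slope=2.0):
    # Linear-threshold excess relative risk: zero below the threshold
    # dose, linear in (dose - threshold) above it. Defaults are the
    # point estimates quoted for basal cell carcinoma.
    return slope * max(dose_gy - threshold, 0.0)

def err_at_age(dose_gy, age_at_exposure, per_year=0.11):
    # Scale the age-30 ERR by ~11% per one-year decrease in age at
    # exposure (illustrative extrapolation of the reported trend).
    return err_linear_threshold(dose_gy) * (1 + per_year) ** (30 - age_at_exposure)
```

`err_linear_threshold(1.0)` returns 0.74, matching the ERR reported for age 30 at exposure.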

  17. Impact of risk factors on cardiovascular risk: a perspective on risk estimation in a Swiss population.

    PubMed

    Chrubasik, Sigrun A; Chrubasik, Cosima A; Piper, Jörg; Schulte-Moenting, Juergen; Erne, Paul

    2015-01-01

In models and scores for estimating cardiovascular risk (CVR), the relative weightings given to blood pressure measurements (BPMs), and biometric and laboratory variables are such that even large differences in blood pressure lead to rather low differences in the resulting total risk when compared with other concurrent risk factors. We evaluated this phenomenon based on the PROCAM score, using BPMs made by volunteer subjects at home (HBPMs) and automated ambulatory BPMs (ABPMs) carried out in the same subjects. A total of 153 volunteers provided the data needed to estimate their CVR by means of the PROCAM formula. Differences (deltaCVR) between the risk estimated by entering the ABPM and that estimated with the HBPM were compared with the differences (deltaBPM) between the ABPM and the corresponding HBPM. In addition to the median values (= second quartile), the first and third quartiles of blood pressure profiles were also considered. PROCAM risk values were converted to European Society of Cardiology (ESC) risk values and all participants were assigned to the low-, medium- and high-risk groups. Based on the PROCAM score, 132 participants had a low risk for suffering myocardial infarction, 16 a medium risk and 5 a high risk. The calculated ESC scores classified 125 participants into the low-risk group, 26 into the medium- and 2 into the high-risk group for death from a cardiovascular event. Mean ABPM tended to be higher than mean HBPM. Use of mean systolic ABPM or HBPM in the PROCAM formula had no major impact on the risk level. Our observations are in agreement with the rather low weighting of blood pressure as a risk determinant in the PROCAM score. BPMs assessed with different methods had relatively little impact on estimation of cardiovascular risk in the given context of other important determinants. The risk calculations in our unselected population reflect the given classification of Switzerland as a so-called cardiovascular "low risk country".

  18. Assessing uncertainty in published risk estimates using ...

    EPA Pesticide Factsheets

Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment. This analysis considers uncertainty across model forms and model parameterizations with hexavalent chromium [Cr(VI)] and lung cancer mortality as an example. The objective is to characterize model uncertainty by evaluating estimates across published epidemiologic studies of the same cohort. Methods: This analysis was based on 5 studies analyzing a cohort of 2,357 workers employed from 1950-74 in a chromate production plant in Maryland. Cox and Poisson models were the only model forms considered by study authors to assess the effect of Cr(VI) on lung cancer mortality. All models adjusted for smoking and included a 5-year exposure lag; however, other latency periods and model covariates such as age and race were considered. Published effect estimates were standardized to the same units and normalized by their variances to produce a standardized metric to compare variability within and between model forms. A total of 5 similarly parameterized analyses were considered across model form, and 16 analyses with alternative parameterizations were considered within model form (10 Cox; 6 Poisson). Results: Across Cox and Poisson model forms, adjusted cumulative exposure coefficients (betas) for 5 similar analyses ranged from 2.47 to 4.33 (mean=2.97, σ2=0.63). Within the 10 Cox models, coefficients ranged from 2.53 to 4.42 (mean=3.29, σ2=0.

  19. NASA Models of Space Radiation Induced Cancer, Circulatory Disease, and Central Nervous System Effects

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Chappell, Lori J.; Kim, Myung-Hee Y.

    2013-01-01

The risks of late effects from galactic cosmic rays (GCR) and solar particle events (SPE) are potentially a limitation to long-term space travel. The late effects of highest concern have significant lethality, including cancer, effects on the central nervous system (CNS), and circulatory diseases (CD). For cancer and CD, the use of age- and gender-specific models with uncertainty assessments based on human epidemiology data for low LET radiation, combined with relative biological effectiveness factors (RBEs) and dose- and dose-rate reduction effectiveness factors (DDREF) to extrapolate these results to space radiation exposures, is considered the current "state-of-the-art". The revised NASA Space Risk Model (NSRM-2014) is based on recent radio-epidemiology data for cancer and CD; however, a key feature of the NSRM-2014 is the formulation of particle fluence and track-structure-based radiation quality factors for solid cancer and leukemia risk estimates, which are distinct from the ICRP quality factors and shown to lead to smaller uncertainties in risk estimates. Many persons exposed to radiation on Earth, as well as astronauts, are lifetime never-smokers, which is estimated to significantly modify radiation cancer and CD risk estimates. A key feature of the NASA radiation protection model is the classification of radiation workers by smoking history in setting dose limits. Possible qualitative differences between GCR and low LET radiation increase uncertainties and are not included in previous risk estimates. Two important qualitative differences are emerging from research studies. The first is the increased lethality of tumors observed in animal models compared to low LET radiation or background tumors. The second is Non-Targeted Effects (NTE), which include bystander effects and genomic instability and have been observed in cell and animal models of cancer risks.
NTEs could lead to significant changes in RBE and DDREF estimates for GCR particles, and in the potential effectiveness of radiation mitigators. The NSRM-2014 approaches to modeling radiation-quality-dependent lethality and NTEs will be described. CNS effects include both early changes that may occur during long space missions and late effects such as Alzheimer's disease (AD). AD affects 50% of the population above age 80 years; it is a degenerative disease that worsens with time after initial onset, leading to death, and has no known cure. AD is difficult to detect at early stages, and the small number of low LET epidemiology studies undertaken has not identified an association with low dose radiation. However, experimental studies in mice suggest GCR may lead to early onset AD. We discuss modeling approaches that consider mechanisms whereby radiation would lead to earlier onset of AD. Biomarkers of AD include amyloid beta (A(Beta)) plaques and neurofibrillary tangles (NFT) made up of aggregates of the hyperphosphorylated form of the microtubule-associated tau protein. Related markers include synaptic degeneration, dendritic spine loss, and neuronal cell loss through apoptosis. Radiation may affect these processes by causing oxidative stress, aberrant signaling following DNA damage, and chronic neuroinflammation. Cell types to be considered in multi-scale models are neurons, astrocytes, and microglia. We developed biochemical and cell kinetics models of DNA damage signaling related to glycogen synthase kinase-3(Beta) (GSK3(Beta)) and neuroinflammation, and considered multi-scale modeling approaches to develop computer simulations of cell interactions and their relationships to A(Beta) plaques and NFTs. Comparison of model results to experimental data for the age-specific development of A(Beta) plaques in transgenic mice will be discussed.

  20. Connecting the Dots: Linking Environmental Justice Indicators to Daily Dose Model Estimates

    EPA Science Inventory

    Many different quantitative techniques have been developed to either assess Environmental Justice (EJ) issues or estimate exposure and dose for risk assessment. However, very few approaches have been applied to link EJ factors to exposure dose estimate and identify potential impa...

  1. Competing risks regression for clustered data

    PubMed Central

    Zhou, Bingqing; Fine, Jason; Latouche, Aurelien; Labopin, Myriam

    2012-01-01

    A population average regression model is proposed to assess the marginal effects of covariates on the cumulative incidence function when there is dependence across individuals within a cluster in the competing risks setting. This method extends the Fine–Gray proportional hazards model for the subdistribution to situations, where individuals within a cluster may be correlated due to unobserved shared factors. Estimators of the regression parameters in the marginal model are developed under an independence working assumption where the correlation across individuals within a cluster is completely unspecified. The estimators are consistent and asymptotically normal, and variance estimation may be achieved without specifying the form of the dependence across individuals. A simulation study evidences that the inferential procedures perform well with realistic sample sizes. The practical utility of the methods is illustrated with data from the European Bone Marrow Transplant Registry. PMID:22045910
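The quantity the Fine-Gray subdistribution model acts on is the cumulative incidence function (CIF) for one event type in the presence of competing events. In the censoring-free case the CIF reduces to a simple empirical fraction, which this sketch computes; the regression machinery for covariates, censoring, and within-cluster correlation described above is not reproduced here:

```python
def cumulative_incidence(times, causes, cause, t):
    # Empirical cumulative incidence of one event type at time t:
    # the fraction of subjects who experienced the given cause by t.
    # Assumes complete follow-up (no censoring); real competing-risks
    # estimators (Aalen-Johansen, Fine-Gray) handle censoring too.
    n = len(times)
    events = sum(1 for ti, ci in zip(times, causes)
                 if ti <= t and ci == cause)
    return events / n
```

Note that summing the CIFs over all causes at large t gives the overall event probability, which is what distinguishes this target from a cause-specific hazard analysis.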

  2. Space Radiation Cancer Risks

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.

    2007-01-01

Space radiation presents major challenges to astronauts on the International Space Station and for future missions to the Earth's moon or Mars. Methods used to project risks on Earth need to be modified because of the large uncertainties in projecting cancer risks from space radiation, and thus impact safety factors. We describe NASA's unique approach to radiation safety that applies uncertainty-based criteria within the occupational health program for astronauts: the two terrestrial criteria of a point estimate of maximum acceptable level of risk and application of the principle of As Low As Reasonably Achievable (ALARA) are supplemented by a third requirement that protects against risk projection uncertainties using the upper 95% confidence level (CL) in the radiation cancer projection model. NASA's acceptable level of risk for the ISS and the new lunar program has been set at the point estimate of a 3-percent risk of exposure induced death (REID). Tissue-averaged organ dose-equivalents are combined with age at exposure and gender-dependent risk coefficients to project the cumulative occupational radiation risks incurred by astronauts. The 95% CL criterion is in practice a stronger criterion than ALARA, but not an absolute cut-off as is applied to a point projection of a 3% REID. We describe the most recent astronaut dose limits, and present a historical review of astronaut organ dose estimates from the Mercury through the current ISS program, and future projections for lunar and Mars missions. NASA's 95% CL criterion is linked to a vibrant ground-based radiobiology program investigating the radiobiology of high-energy protons and heavy ions. The near-term goal of research is new knowledge leading to the reduction of uncertainties in projection models. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge.
The current model for projecting space radiation cancer risk relies on the three assumptions of linearity, additivity, and scaling along with the use of population averages. We describe uncertainty estimates for this model, and new experimental data that sheds light on the accuracy of the underlying assumptions. These methods make it possible to express risk management objectives in terms of quantitative metrics, i.e., the number of days in space without exceeding a given risk level within well defined confidence limits. The resulting methodology is applied to several human space exploration mission scenarios including lunar station, deep space outpost, and a Mars mission. Factors that dominate risk projection uncertainties and application of this approach to assess candidate mitigation approaches are described.
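The dual criterion described above (a 3% REID limit applied both to the point estimate and to its upper 95% confidence level) can be sketched in a few lines. The lognormal uncertainty shape and the `gsd` parameter here are illustrative assumptions, not NASA's actual uncertainty propagation, which combines many factor-level distributions:

```python
import math

def passes_risk_criteria(reid_point, gsd, limit=0.03, z95=1.645):
    # Dual check: both the REID point estimate and its upper 95% CL
    # must stay below the limit. For illustration, uncertainty is taken
    # as lognormal with geometric standard deviation `gsd` around the
    # point estimate, so upper95 = point * gsd**z95.
    upper95 = reid_point * math.exp(z95 * math.log(gsd))
    return reid_point < limit and upper95 < limit
```

This makes concrete why the 95% CL requirement is the binding one: a mission with a 1% point-estimate REID passes the point criterion easily, yet fails once the uncertainty factor pushes the upper confidence level past 3%.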

  3. Second cancer risk after 3D-CRT, IMRT and VMAT for breast cancer.

    PubMed

    Abo-Madyan, Yasser; Aziz, Muhammad Hammad; Aly, Moamen M O M; Schneider, Frank; Sperk, Elena; Clausen, Sven; Giordano, Frank A; Herskind, Carsten; Steil, Volker; Wenz, Frederik; Glatting, Gerhard

    2014-03-01

Second cancer risk after breast conserving therapy is becoming more important due to improved long term survival rates. In this study, we estimate the risks for developing a solid second cancer after radiotherapy of breast cancer using the concept of organ equivalent dose (OED). Computed tomography scans of 10 representative breast cancer patients were selected for this study. Three-dimensional conformal radiotherapy (3D-CRT), tangential intensity modulated radiotherapy (t-IMRT), multibeam intensity modulated radiotherapy (m-IMRT), and volumetric modulated arc therapy (VMAT) were planned to deliver a total dose of 50 Gy in 2 Gy fractions. Differential dose volume histograms (dDVHs) were created and the OEDs calculated. Second cancer risks of the ipsilateral lung, contralateral lung and contralateral breast were estimated using linear, linear-exponential and plateau models for second cancer risk. Compared to 3D-CRT, cumulative excess absolute risks (EAR) for t-IMRT, m-IMRT and VMAT were increased by 2 ± 15%, 131 ± 85%, 123 ± 66% for the linear-exponential risk model, 9 ± 22%, 82 ± 96%, 71 ± 82% for the linear and 3 ± 14%, 123 ± 78%, 113 ± 61% for the plateau model, respectively. Second cancer risk after 3D-CRT or t-IMRT is lower than for m-IMRT or VMAT by about 34% for the linear model and 50% for the linear-exponential and plateau models, respectively. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
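The OED concept reduces a differential DVH to a single dose value by weighting each dose bin with a dose-response function; the three model variants named above (linear, linear-exponential, plateau) differ only in that response function. A sketch following the commonly used Schneider-style forms, with an illustrative organ-dependent parameter `alpha` (the published fitted values are not reproduced here):

```python
import math

def oed(dose_bins_gy, volume_fractions, model="linear", alpha=0.044):
    # Organ equivalent dose from a differential DVH.
    # Dose-response per bin (illustrative Schneider-style forms):
    #   linear:             d
    #   linear-exponential: d * exp(-alpha * d)
    #   plateau:            (1 - exp(-alpha * d)) / alpha
    # volume_fractions should sum to 1 (normalized dDVH).
    def resp(d):
        if model == "linear":
            return d
        if model == "linear-exponential":
            return d * math.exp(-alpha * d)
        if model == "plateau":
            return (1.0 - math.exp(-alpha * d)) / alpha
        raise ValueError(f"unknown model: {model}")
    return sum(v * resp(d) for d, v in zip(dose_bins_gy, volume_fractions))
```

Because the linear-exponential and plateau responses turn over or saturate at high dose, the ranking of treatment plans can differ between models, which is why the abstract reports results for all three.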

  4. Venous thromboembolism prevention guidelines for medical inpatients: mind the (implementation) gap.

    PubMed

    Maynard, Greg; Jenkins, Ian H; Merli, Geno J

    2013-10-01

    Hospital-associated nonsurgical venous thromboembolism (VTE) is an important problem addressed by new guidelines from the American College of Physicians (ACP) and American College of Chest Physicians (AT9). Narrative review and critique. Both guidelines discount asymptomatic VTE outcomes and caution against overprophylaxis, but have different methodologies and estimates of risk/benefit. Guideline complexity and lack of consensus on VTE risk assessment contribute to an implementation gap. Methods to estimate prophylaxis benefit have significant limitations because major trials included mostly screening-detected events. AT9 relies on a single Italian cohort study to conclude that those with a Padua score ≥4 have a very high VTE risk, whereas patients with a score <4 (60% of patients) have a very small risk. However, the cohort population has less comorbidity than US inpatients, and over 1% of patients with a score of 3 suffered pulmonary emboli. The ACP guideline does not endorse any risk-assessment model. AT9 includes the Padua model and Caprini point-based system for nonsurgical inpatients and surgical inpatients, respectively, but there is no evidence they are more effective than simpler risk-assessment models. New VTE prevention guidelines provide varied guidance on important issues including risk assessment. If Padua is used, a threshold of 3, as well as 4, should be considered. Simpler VTE risk-assessment models may be superior to complicated point-based models in environments without sophisticated clinical decision support. © 2013 Society of Hospital Medicine.
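The Padua threshold debate above (cut-off of 4 versus 3) comes down to a weighted sum of risk factors. A sketch of that scoring step; the weights follow the commonly cited Padua items but should be verified against the primary source before any clinical use:

```python
# Commonly cited Padua Prediction Score items and weights
# (illustrative; verify against the primary source before use).
PADUA_WEIGHTS = {
    "active_cancer": 3, "previous_vte": 3, "reduced_mobility": 3,
    "known_thrombophilia": 3, "recent_trauma_or_surgery": 2,
    "age_70_or_older": 1, "heart_or_respiratory_failure": 1,
    "acute_mi_or_stroke": 1, "acute_infection_or_rheumatologic": 1,
    "obesity_bmi_30": 1, "hormonal_treatment": 1,
}

def padua_high_risk(factors, threshold=4):
    # Sum the weights of the present risk factors and compare with the
    # threshold; the guideline debate is whether 3 or 4 is the better
    # cut-off for prophylaxis.
    score = sum(PADUA_WEIGHTS[f] for f in factors)
    return score, score >= threshold
```

A patient with reduced mobility alone scores 3: below the usual threshold of 4 yet flagged if the threshold of 3 suggested in the review is used, which is exactly the group (over 1% pulmonary embolism incidence) the authors highlight.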

  5. Spatial analysis and risk mapping of soil-transmitted helminth infections in Brazil, using Bayesian geostatistical models.

    PubMed

    Scholte, Ronaldo G C; Schur, Nadine; Bavia, Maria E; Carvalho, Edgar M; Chammartin, Frédérique; Utzinger, Jürg; Vounatsou, Penelope

    2013-11-01

Soil-transmitted helminths (Ascaris lumbricoides, Trichuris trichiura and hookworm) negatively impact the health and wellbeing of hundreds of millions of people, particularly in tropical and subtropical countries, including Brazil. Reliable maps of the spatial distribution and estimates of the number of infected people are required for the control and eventual elimination of soil-transmitted helminthiasis. We used advanced Bayesian geostatistical modelling, coupled with geographical information systems and remote sensing to visualize the distribution of the three soil-transmitted helminth species in Brazil. Remotely sensed climatic and environmental data, along with socioeconomic variables from readily available databases were employed as predictors. Our models provided mean prevalence estimates for A. lumbricoides, T. trichiura and hookworm of 15.6%, 10.1% and 2.5%, respectively. By considering infection risk and population numbers at the unit of the municipality, we estimate that 29.7 million Brazilians are infected with A. lumbricoides, 19.2 million with T. trichiura and 4.7 million with hookworm. Our model-based maps identified important risk factors related to the transmission of soil-transmitted helminths and confirm that environmental variables are closely associated with indices of poverty. Our smoothed risk maps, including uncertainty, highlight areas where soil-transmitted helminthiasis control interventions are most urgently required, namely in the North and along most of the coastal areas of Brazil. We believe that our predictive risk maps are useful for disease control managers for prioritising control interventions and for providing a tool for more efficient surveillance-response mechanisms.

  6. Risk assessment of fungal spoilage: A case study of Aspergillus niger on yogurt.

    PubMed

    Gougouli, Maria; Koutsoumanis, Konstantinos P

    2017-08-01

A quantitative risk assessment model of yogurt spoilage by Aspergillus niger was developed based on a stochastic modeling approach for mycelium growth by taking into account the important sources of variability such as time-temperature conditions during the different stages of chill chain and individual spore behavior. Input parameters were fitted to the appropriate distributions and A. niger colony's diameter at each stage of the chill chain was estimated using Monte Carlo simulation. By combining the output of the growth model with the fungus prevalence, which can be estimated by the industry using challenge tests, the risk of spoilage translated to number of yogurt cups in which a visible mycelium of A. niger is being formed at the time of consumption was assessed. The risk assessment output showed that for a batch of 100,000 cups in which the percentage of contaminated cups with A. niger was 1% the predicted numbers (median (5th, 95th percentiles)) of the cups with a visible mycelium at consumption time were 8 (5, 14). For higher percentages of 3%, 5% and 10% the predicted numbers (median (5th, 95th percentiles)) of the spoiled cups at consumption time were estimated to be 24 (16, 35), 39 (29, 52) and 80 (64, 94), respectively. The developed model can lead to a more effective risk-based quality management of yogurt and support the decision making in yogurt production. Copyright © 2017 Elsevier Ltd. All rights reserved.
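The Monte Carlo structure of the assessment can be sketched compactly. Here the full stochastic growth model (spore behavior under a random time-temperature history) is collapsed into a single illustrative probability `p_visible` that a contaminated cup develops a visible mycelium by consumption time; all numbers are placeholders, chosen only so the expected count (1,000 contaminated cups × 0.008 = 8) is of the same order as the reported median of 8:

```python
import random

def spoiled_cups(n_cups=100_000, contaminated_frac=0.01,
                 p_visible=0.008, seed=1):
    # Toy Monte Carlo: each contaminated cup independently develops a
    # visible mycelium by consumption time with probability p_visible.
    # In the full model this probability emerges from simulated colony
    # growth along a random chill-chain temperature profile.
    rng = random.Random(seed)
    contaminated = int(n_cups * contaminated_frac)
    return sum(rng.random() < p_visible for _ in range(contaminated))
```

Repeating the simulation over many seeds and taking the median and 5th/95th percentiles of the counts mirrors how the abstract's interval estimates, such as 8 (5, 14), are produced.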

  7. People's Risk Recognition Preceding Evacuation and Its Role in Demand Modeling and Planning.

    PubMed

    Urata, Junji; Pel, Adam J

    2018-05-01

Evacuation planning and management involves estimating the travel demand in the event that such action is required. This is usually done as a function of people's decision to evacuate, which we show is strongly linked to their risk awareness. We use an empirical data set, which shows tsunami evacuation behavior, to demonstrate that risk recognition is not synonymous with objective risk, but is instead determined by a combination of factors including risk education, information, and sociodemographics, and that it changes dynamically over time. Based on these findings, we formulate an ordered logit model to describe risk recognition combined with a latent class model to describe evacuation choices. Our proposed evacuation choice model along with a risk recognition class can evaluate quantitatively the influence of disaster mitigation measures, risk education, and risk information. The results obtained from the risk recognition model show that risk information has a greater impact, in the sense that it leads people to recognize that they are at high risk. The results of the evacuation choice model show that people who are unaware of their risk take a longer time to evacuate. © 2017 Society for Risk Analysis.

  8. Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.

    PubMed

    Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih

    2016-10-01

    In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.
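The paper's procedure weights candidate logistic models using data perturbation and an approximately unbiased Kullback-Leibler loss estimate; that machinery is not reproduced here. As a stand-in, this sketch shows the generic averaging step with information-criterion (Akaike-style) weights, which illustrates the same idea of combining rare-event probability estimates across models instead of committing to a single selected model:

```python
import math

def akaike_weights(aics):
    # Turn information-criterion values into averaging weights:
    # smaller AIC -> larger weight. The shift by the minimum keeps
    # the exponentials numerically stable.
    best = min(aics)
    raw = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

def averaged_rare_event_prob(model_probs, aics):
    # Model-averaged probability estimate: a weighted combination of
    # the per-model rare-event probability estimates.
    w = akaike_weights(aics)
    return sum(wi * pi for wi, pi in zip(w, model_probs))
```

When one model fits much better, its weight dominates; when the models are comparable, the average pools their estimates, damping the severe small-sample bias any single rare-event fit can carry.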

  9. A spatially explicit model for estimating risks of pesticide exposure on bird populations

    EPA Science Inventory

    Product Description (FY17 Key Product): Current ecological risk assessment for pesticides under FIFRA relies on risk quotients (RQs), which suffer from significant methodological shortcomings. For example, RQs do not integrate adverse effects arising from multiple demographic pr...

  10. Development and validation of Prediction models for Risks of complications in Early-onset Pre-eclampsia (PREP): a prospective cohort study.

    PubMed

    Thangaratinam, Shakila; Allotey, John; Marlin, Nadine; Mol, Ben W; Von Dadelszen, Peter; Ganzevoort, Wessel; Akkermans, Joost; Ahmed, Asif; Daniels, Jane; Deeks, Jon; Ismail, Khaled; Barnard, Ann Marie; Dodds, Julie; Kerry, Sally; Moons, Carl; Riley, Richard D; Khan, Khalid S

    2017-04-01

The prognosis of early-onset pre-eclampsia (before 34 weeks' gestation) is variable. Accurate prediction of complications is required to plan appropriate management in high-risk women. To develop and validate prediction models for outcomes in early-onset pre-eclampsia. Prospective cohort for model development, with validation in two external data sets. Model development: 53 obstetric units in the UK. Model transportability: PIERS (Pre-eclampsia Integrated Estimate of RiSk for mothers) and PETRA (Pre-Eclampsia TRial Amsterdam) studies. Pregnant women with early-onset pre-eclampsia. Nine hundred and forty-six women in the model development data set and 850 women (634 in PIERS, 216 in PETRA) in the transportability (external validation) data sets. The predictors were identified from systematic reviews of tests to predict complications in pre-eclampsia and were prioritised by Delphi survey. The primary outcome was the composite of adverse maternal outcomes established using Delphi surveys. The secondary outcome was the composite of fetal and neonatal complications. We developed two prediction models: a logistic regression model (PREP-L) to assess the overall risk of any maternal outcome until postnatal discharge and a survival analysis model (PREP-S) to obtain individual risk estimates at daily intervals from diagnosis until 34 weeks. Shrinkage was used to adjust for overoptimism of predictor effects. For internal validation (of the full models in the development data) and external validation (of the reduced models in the transportability data), we computed the ability of the models to discriminate between those with and without poor outcomes (c-statistic), and the agreement between predicted and observed risk (calibration slope).
The PREP-L model included maternal age, gestational age at diagnosis, medical history, systolic blood pressure, urine protein-to-creatinine ratio, platelet count, serum urea concentration, oxygen saturation, baseline treatment with antihypertensive drugs and administration of magnesium sulphate. The PREP-S model additionally included exaggerated tendon reflexes and serum alanine aminotransferase and creatinine concentrations. Both models showed good discrimination for maternal complications, with an optimism-adjusted c-statistic of 0.82 [95% confidence interval (CI) 0.80 to 0.84] for PREP-L and 0.75 (95% CI 0.73 to 0.78) for the PREP-S model in the internal validation. External validation of the reduced PREP-L model showed good performance, with a c-statistic of 0.81 (95% CI 0.77 to 0.85) in the PIERS and 0.75 (95% CI 0.64 to 0.86) in the PETRA cohorts for maternal complications, and the model calibrated well, with slopes of 0.93 (95% CI 0.72 to 1.10) and 0.90 (95% CI 0.48 to 1.32), respectively. In the PIERS data set, the reduced PREP-S model had a c-statistic of 0.71 (95% CI 0.67 to 0.75) and a calibration slope of 0.67 (95% CI 0.56 to 0.79). Low gestational age at diagnosis, high urine protein-to-creatinine ratio, increased serum urea concentration, treatment with antihypertensive drugs, magnesium sulphate, abnormal uterine artery Doppler scan findings and estimated fetal weight below the 10th centile were associated with fetal complications. The PREP-L model provided individualised risk estimates in early-onset pre-eclampsia to plan the management of high- or low-risk individuals. The PREP-S model has the potential to be used as a triage tool for risk assessment. The impact of model use on outcomes needs further evaluation. Current Controlled Trials ISRCTN40384046. Funding: the National Institute for Health Research Health Technology Assessment programme.
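The discrimination measure used above has a simple pairwise interpretation: the c-statistic is the probability that a randomly chosen woman who experienced the outcome received a higher predicted risk than a randomly chosen woman who did not. A minimal pure-Python sketch (the data are invented for illustration, not taken from the PREP models):

```python
def c_statistic(risks, outcomes):
    """Concordance (c) statistic: fraction of case/non-case pairs in
    which the case received the higher predicted risk (ties count 0.5)."""
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    controls = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = sum(
        1.0 if rc > rn else 0.5 if rc == rn else 0.0
        for rc in cases for rn in controls
    )
    return concordant / (len(cases) * len(controls))

# A model that ranks every case above every non-case discriminates perfectly:
perfect = c_statistic([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])  # 1.0
# A model assigning everyone the same risk is no better than chance:
chance = c_statistic([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0])   # 0.5
```

A c-statistic of 0.82, as reported for PREP-L, means the model orders a random case above a random non-case 82% of the time.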

  11. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    NASA Astrophysics Data System (ADS)

    Cifter, Atilla

    2011-06-01

This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, making the model suitable for use by financial institutions.
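The EVT step rests on the peaks-over-threshold result: losses exceeding a threshold u are approximately generalized Pareto distributed, which yields a closed-form tail quantile for Value-at-Risk. The sketch below shows that standard quantile formula with illustrative (not fitted) parameter values; in the paper the threshold itself comes from the wavelet decomposition, which is not reproduced here:

```python
def gpd_var(p, u, sigma, xi, n, n_exceed):
    """Value-at-Risk at confidence level p from a generalized Pareto fit
    to the n_exceed losses (out of n observations) above threshold u:
        VaR_p = u + (sigma/xi) * [((n/n_exceed) * (1 - p))**(-xi) - 1]
    Assumes xi != 0 (the heavy-tailed case)."""
    return u + (sigma / xi) * (((n / n_exceed) * (1.0 - p)) ** (-xi) - 1.0)

# Illustrative daily-loss setting: 5% of 1,000 returns exceed a 2% loss,
# with assumed GPD scale 0.01 and shape 0.2.
var_99 = gpd_var(0.99, u=0.02, sigma=0.01, xi=0.2, n=1000, n_exceed=50)
```

In practice sigma and xi would come from a maximum-likelihood fit to the exceedances; higher confidence levels give strictly larger VaR.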

  12. A Nuclear Interaction Model for Understanding Results of Single Event Testing with High Energy Protons

    NASA Technical Reports Server (NTRS)

Culpepper, William X.; O'Neill, Pat; Nicholson, Leonard L.

    2000-01-01

An internuclear cascade and evaporation model has been adapted to estimate the LET spectrum generated during testing with 200 MeV protons. The model-generated heavy ion LET spectrum is compared to the heavy ion LET spectrum seen on orbit. This comparison is the basis for predicting single event failure rates from heavy ions using results from a single proton test. Of equal importance, this spectrum comparison also establishes an estimate of the risk of encountering a failure mode on orbit that was not detected during proton testing. Verification of the general results of the model is presented based on experiments, individual part test results, and flight data. Acceptance of this model and its estimate of remaining risk opens the hardware verification philosophy to the consideration of radiation testing with high energy protons at the board and box level instead of the more standard method of individual part testing with low energy heavy ions.

  13. Risk analysis for autonomous underwater vehicle operations in extreme environments.

    PubMed

    Brito, Mario Paulo; Griffiths, Gwyn; Challenor, Peter

    2010-12-01

    Autonomous underwater vehicles (AUVs) are used increasingly to explore hazardous marine environments. Risk assessment for such complex systems is based on subjective judgment and expert knowledge as much as on hard statistics. Here, we describe the use of a risk management process tailored to AUV operations, the implementation of which requires the elicitation of expert judgment. We conducted a formal judgment elicitation process where eight world experts in AUV design and operation were asked to assign a probability of AUV loss given the emergence of each fault or incident from the vehicle's life history of 63 faults and incidents. After discussing methods of aggregation and analysis, we show how the aggregated risk estimates obtained from the expert judgments were used to create a risk model. To estimate AUV survival with mission distance, we adopted a statistical survival function based on the nonparametric Kaplan-Meier estimator. We present theoretical formulations for the estimator, its variance, and confidence limits. We also present a numerical example where the approach is applied to estimate the probability that the Autosub3 AUV would survive a set of missions under Pine Island Glacier, Antarctica in January-March 2009. © 2010 Society for Risk Analysis.
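The survival function referred to above is the standard product-limit (Kaplan-Meier) estimator, with mission distance taking the place of time; missions that end without loss are treated as censored observations. A minimal sketch (the fault data are invented for illustration, not Autosub3's record):

```python
def kaplan_meier(records):
    """Product-limit (Kaplan-Meier) estimator. records: (distance_km, lost)
    pairs, lost=1 if the vehicle was lost at that distance, 0 if the
    mission ended there (censored). Returns (distance, survival) steps."""
    s, curve = 1.0, []
    at_risk = len(records)
    for d in sorted({dist for dist, _ in records}):
        losses = sum(1 for dist, lost in records if dist == d and lost)
        if losses:
            s *= 1.0 - losses / at_risk   # survival drops only at losses
            curve.append((d, s))
        at_risk -= sum(1 for dist, _ in records if dist == d)
    return curve

# Four missions: losses at 10 km and 30 km, censored at 20 km and 40 km.
steps = kaplan_meier([(10, 1), (20, 0), (30, 1), (40, 0)])
# steps == [(10, 0.75), (30, 0.375)]
```

The estimator drops only at loss distances, and censored missions still contribute to the at-risk count up to the distance they reached.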

  14. Evaluation and Enhancement of Calibration in the American College of Surgeons NSQIP Surgical Risk Calculator.

    PubMed

    Liu, Yaoming; Cohen, Mark E; Hall, Bruce L; Ko, Clifford Y; Bilimoria, Karl Y

    2016-08-01

The American College of Surgeons (ACS) NSQIP Surgical Risk Calculator has been widely adopted as a decision aid and informed consent tool by surgeons and patients. Previous evaluations showed excellent discrimination and combined discrimination and calibration, but model calibration alone, and potential benefits of recalibration, were not explored. Because lack of calibration can lead to systematic errors in assessing surgical risk, our objective was to assess calibration and determine whether spline-based adjustments could improve it. We evaluated Surgical Risk Calculator model calibration, as well as discrimination, for each of 11 outcomes modeled from nearly 3 million patients (2010 to 2014). Using independent random subsets of data, we evaluated model performance for the Development (60% of records), Validation (20%), and Test (20%) datasets, where prediction equations from the Development dataset were recalibrated using restricted cubic splines estimated from the Validation dataset. We also evaluated performance on data subsets composed of higher-risk operations. The nonrecalibrated Surgical Risk Calculator performed well, but there was a slight tendency for predicted risk to be overestimated for lowest- and highest-risk patients and underestimated for moderate-risk patients. After recalibration, this distortion was eliminated, and p values for miscalibration were most often nonsignificant. Calibration was also excellent for subsets of higher-risk operations, though observed calibration was reduced due to instability associated with smaller sample sizes. Performance of NSQIP Surgical Risk Calculator models was shown to be excellent and improved with recalibration. Surgeons and patients can rely on the calculator to provide accurate estimates of surgical risk. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  15. Estimating the number of injecting drug users in Scotland's HCV-diagnosed population using capture-recapture methods.

    PubMed

    McDonald, S A; Hutchinson, S J; Schnier, C; McLeod, A; Goldberg, D J

    2014-01-01

In countries maintaining national hepatitis C virus (HCV) surveillance systems, a substantial proportion of individuals report no risk factors for infection. Our goal was to estimate the proportion of diagnosed HCV antibody-positive persons in Scotland (1991-2010) who probably acquired infection through injecting drug use (IDU), by combining data on IDU risk from four linked data sources using log-linear capture-recapture methods. Of 25,521 HCV-diagnosed individuals, 14,836 (58%) reported IDU risk with their HCV diagnosis. Log-linear modelling estimated a further 2484 HCV-diagnosed individuals with IDU risk, giving an estimated prevalence of 83%. Stratified analyses indicated variation across birth cohort, with estimated prevalence as low as 49% in persons born before 1960 and greater than 90% for those born since 1960. These findings provide public-health professionals with a more complete profile of Scotland's HCV-infected population in terms of transmission route, which is essential for targeting educational, prevention and treatment interventions.
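The log-linear models used in the study generalise the classical two-source capture-recapture idea to four linked sources, allowing for dependence between sources. The two-source (Chapman) estimator below conveys the underlying principle only; the counts are hypothetical, not Scotland's:

```python
def chapman_estimate(n1, n2, m):
    """Chapman's nearly unbiased two-source capture-recapture estimate of
    total population size, where n1 and n2 individuals appear in sources
    1 and 2 respectively, and m appear in both (assumes independent
    sources, unlike the log-linear models used for dependent sources)."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts: 100 in register A, 80 in register B, 40 in both.
# Only 100 + 80 - 40 = 140 are observed at least once, but the overlap
# implies a larger underlying population:
total = chapman_estimate(100, 80, 40)  # ~198.5
```

Low overlap between sources signals many unobserved individuals, which is exactly how the study infers additional HCV-diagnosed persons with unrecorded IDU risk.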

  16. Estimation of Second Primary Cancer Risk After Treatment with Radioactive Iodine for Differentiated Thyroid Carcinoma.

    PubMed

    Corrêa, Nilton Lavatori; de Sá, Lidia Vasconcellos; de Mello, Rossana Corbo Ramalho

    2017-02-01

An increase in the incidence of second primary cancers is the late effect of greatest concern in differentiated thyroid carcinoma (DTC) patients treated with radioactive iodine (RAI). The decision to treat a patient with RAI should therefore incorporate a careful risk-benefit analysis. The objective of this work was to adapt the risk-estimation models developed by the Biological Effects of Ionizing Radiation Committee to local epidemiological characteristics in order to assess the carcinogenic risk from radiation in a population of Brazilian DTC patients treated with RAI. Absorbed radiation doses in critical organs were also estimated to determine whether they exceeded the thresholds for deterministic effects. A total of 416 DTC patients treated with RAI were retrospectively studied. Four organs were selected for absorbed dose estimation and subsequent calculation of carcinogenic risk: the kidney, stomach, salivary glands, and bone marrow. Absorbed doses were calculated using previously established dose factors (absorbed dose per unit of administered activity) based on standard human models. The lifetime attributable risk (LAR) of cancer incidence as a function of age, sex, and organ-specific dose was estimated, relating it to the activity of RAI administered in the initial treatment. The salivary glands received the greatest absorbed doses of radiation, followed by the stomach, kidney, and bone marrow. None of these, however, surpassed the threshold for deterministic effects for a single administration of RAI. Younger patients received the same level of absorbed dose in the critical organs as older patients did. The lifetime attributable risk for stomach cancer incidence was by far the highest, followed in descending order by salivary-gland cancer, leukemia, and kidney cancer.
RAI in a single administration is safe in terms of deterministic effects, because even high administered activities do not result in absorbed doses that exceed the thresholds for significant tissue reactions. The Biological Effects of Ionizing Radiation Committee mathematical models are a practical method of quantifying the risks of a second primary cancer, demonstrating a marked decrease in risk for younger patients with the administration of lower RAI activities and suggesting that only the smallest activities necessary to promote effective ablation should be administered in low-risk DTC patients.

  17. Aggregate exposure modelling of zinc pyrithione in rinse-off personal cleansing products using a person-orientated approach with market share refinement.

    PubMed

    Tozer, Sarah A; Kelly, Seamus; O'Mahony, Cian; Daly, E J; Nash, J F

    2015-09-01

Realistic estimates of chemical aggregate exposure are needed to ensure consumer safety. Because exposure estimates are a critical part of the equation used to calculate acceptable "safe levels" and conduct quantitative risk assessments, methods are needed to produce realistic exposure estimations. To this end, a probabilistic aggregate exposure model was developed to estimate consumer exposure from several rinse-off personal cleansing products containing the anti-dandruff preservative zinc pyrithione. The model incorporates large habits-and-practices surveys, containing data on frequency of use, amount applied and co-use, along with market share, and combines these data at the level of the individual, based on subject demographics, to better estimate exposure. The daily applied exposure (i.e., the amount applied to the skin) was 3.79 mg/kg/day for the 95th percentile consumer. The estimated internal dose for the 95th percentile exposure ranged from 0.01 to 1.29 μg/kg/day after accounting for retention following rinsing and dermal penetration of ZnPt. This probabilistic aggregate exposure model can be used in the human safety assessment of ingredients in multiple rinse-off technologies (e.g., shampoo, bar soap, body wash, and liquid hand soap). In addition, this model may be used in other situations where a refined exposure assessment is required to support a chemical risk assessment. Copyright © 2015 Elsevier Ltd. All rights reserved.
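A person-orientated probabilistic model of this kind can be sketched as a Monte Carlo loop in which each simulated subject draws their own body weight and product-use pattern, and co-use is decided per person rather than summing worst cases across products. All product parameters below are invented placeholders, not the surveyed values, and dermal penetration is omitted for brevity:

```python
import random

def p95_aggregate_exposure(n_subjects=10_000, seed=1):
    """95th-percentile retained exposure (mg/kg bw/day) across simulated
    subjects. Product rows (all values hypothetical): (name, P(user),
    uses/day, grams/use, ingredient fraction, fraction retained after rinsing)."""
    products = [
        ("shampoo",   0.90, 1.0, 10.0, 0.010, 0.10),
        ("body wash", 0.60, 1.0, 12.0, 0.005, 0.05),
        ("bar soap",  0.40, 2.0,  1.5, 0.002, 0.05),
    ]
    rng = random.Random(seed)
    exposures = []
    for _ in range(n_subjects):
        body_weight = max(40.0, rng.gauss(70.0, 12.0))  # kg
        dose_mg = sum(
            freq * grams * 1000.0 * frac * retained      # mg ingredient/day
            for _, p_user, freq, grams, frac, retained in products
            if rng.random() < p_user                     # per-subject co-use
        )
        exposures.append(dose_mg / body_weight)
    exposures.sort()
    return exposures[int(0.95 * n_subjects)]
```

Because non-users of each product contribute zero for that product, the simulated 95th percentile sits well below the sum of single-product worst cases, which is the point of person-orientated aggregation.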

  18. Assessing evidence for behaviour change affecting the course of HIV epidemics: a new mathematical modelling approach and application to data from Zimbabwe.

    PubMed

    Hallett, Timothy B; Gregson, Simon; Mugurungi, Owen; Gonese, Elizabeth; Garnett, Geoff P

    2009-06-01

Determining whether interventions to reduce HIV transmission have worked is essential, but complicated by the potential for generalised epidemics to evolve over time without individuals changing risk behaviour. We aimed to develop a method to evaluate evidence for changes in risk behaviour altering the course of an HIV epidemic. We developed a mathematical model of HIV transmission, incorporating the potential for natural changes in the epidemic as it matures and the introduction of antiretroviral treatment, and applied a Bayesian melding framework in which the model and observed trends in prevalence can be compared. We applied the model to Zimbabwe, using HIV prevalence estimates from antenatal clinic surveillance and household-based surveys, and basing model parameters on data from sexual behaviour surveys. There was strong evidence for reductions in risk behaviour stemming HIV transmission. We estimate these changes occurred between 1999 and 2004 and averted 660,000 (95% credible interval: 460,000-860,000) infections by 2008. The model and associated analysis framework provide a robust way to evaluate the evidence for changes in risk behaviour affecting the course of HIV epidemics, avoiding confounding by the natural evolution of HIV epidemics.

  19. A spatially-dynamic preliminary risk assessment of the American peregrine falcon at the Los Alamos National Laboratory (version 1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallegos, A.F.; Gonzales, G.J.; Bennett, K.D.

    1997-06-01

The Endangered Species Act and the Record of Decision on the Dual Axis Radiographic Hydrodynamic Test Facility at the Los Alamos National Laboratory require protection of the American peregrine falcon. A preliminary risk assessment of the peregrine was performed using a custom FORTRAN model and a geographical information system. Estimated doses to the falcon were compared against toxicity reference values to generate hazard indices. Hazard index results indicated no unacceptable risk to the falcon from the soil ingestion pathway, including a measure of cumulative effects from multiple contaminants that assumes a linear additive toxicity type. Scaling home ranges on the basis of maximizing falcon height for viewing prey decreased estimated risk by 69% in a canyons-based home range and increased estimated risk by 40% in a river-based home range. Improving model realism by weighting simulated falcon foraging based on distance from potential nest sites decreased risk by 93% in one exposure unit and by 82% in a second exposure unit. It was demonstrated that the choice of toxicity reference values can have a substantial impact on risk estimates. Adding bioaccumulation factors for several organics increased partial hazard quotients by a factor of 110, but increased the mean hazard index by only 0.02 units. Adding a food consumption exposure pathway in the form of biomagnification factors for 15 contaminants of potential ecological concern increased the mean hazard index to 1.16 (± 1.0), which is above the level of acceptability (1.0). Aroclor-1254, dichlorodiphenyltrichloroethane (DDT) and dichlorodiphenyldichloroethylene (DDE) accounted for 81% of the estimated risk that includes soil ingestion and food consumption contaminant pathways and a biomagnification component. Information on risk by specific geographical location was generated, which can be used to manage contaminated areas, falcon habitat, facility siting, and/or facility operations. 123 refs., 10 figs., 2 tabs.
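The screening arithmetic behind these hazard indices is simple: each contaminant's estimated dose is divided by its toxicity reference value to give a hazard quotient, and the quotients are summed under the linear additive-toxicity assumption described above. A minimal sketch with hypothetical doses and reference values (not those of the Los Alamos assessment):

```python
def hazard_index(dose, trv):
    """Sum of hazard quotients (estimated dose / toxicity reference value)
    over contaminants, assuming linear additive toxicity. An index above
    1.0 flags potentially unacceptable risk."""
    return sum(dose[c] / trv[c] for c in dose)

# Hypothetical doses and TRVs (mg/kg/day) for three contaminants:
hi = hazard_index(
    dose={"Aroclor-1254": 0.09, "DDT": 0.03, "DDE": 0.04},
    trv={"Aroclor-1254": 0.10, "DDT": 0.05, "DDE": 0.05},
)  # 0.9 + 0.6 + 0.8 = 2.3, above the 1.0 acceptability level
```

Note how each individual hazard quotient here is below 1.0, yet the additive index exceeds the acceptability level; that is the multi-contaminant effect the abstract describes.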

  20. On the validity of within-nuclear-family genetic association analysis in samples of extended families.

    PubMed

    Bureau, Alexandre; Duchesne, Thierry

    2015-12-01

Splitting extended families into their component nuclear families in order to apply a genetic association method designed for nuclear families is a widespread practice in familial genetic studies. Dependence among the genotypes and phenotypes of nuclear families from the same extended family arises because of genetic linkage of the tested marker with a risk variant or because of familial specificity of genetic effects due to gene-environment interaction. This raises concerns about the validity of inference conducted under the assumption of independence of the nuclear families. We indeed prove theoretically that, in a conditional logistic regression analysis applicable to disease cases and their genotyped parents, the naive model-based estimator of the variance of the coefficient estimates underestimates the true variance. However, simulations with realistic effect sizes of risk variants and variation of this effect from family to family reveal that the underestimation is negligible. The simulations also show the greater efficiency of the model-based variance estimator compared to a robust empirical estimator. Our recommendation is therefore to use the model-based estimator of variance for inference on the effects of genetic variants.

  1. An administrative claims model for profiling hospital 30-day mortality rates for pneumonia patients.

    PubMed

    Bratzler, Dale W; Normand, Sharon-Lise T; Wang, Yun; O'Donnell, Walter J; Metersky, Mark; Han, Lein F; Rapp, Michael T; Krumholz, Harlan M

    2011-04-12

Outcome measures for patients hospitalized with pneumonia may complement process measures in characterizing quality of care. We sought to develop and validate a hierarchical regression model using Medicare claims data that produces hospital-level, risk-standardized 30-day mortality rates useful for public reporting for patients hospitalized with pneumonia. Retrospective study of fee-for-service Medicare beneficiaries age 66 years and older with a principal discharge diagnosis of pneumonia. Candidate risk-adjustment variables included patient demographics, administrative diagnosis codes from the index hospitalization, and all inpatient and outpatient encounters from the year before admission. The model derivation cohort included 224,608 pneumonia cases admitted to 4,664 hospitals in 2000, and validation cohorts included cases from each of years 1998-2003. We compared model-derived state-level standardized mortality estimates with medical record-derived state-level standardized mortality estimates using data from the Medicare National Pneumonia Project on 50,858 patients hospitalized from 1998-2001. The final model included 31 variables and had an area under the Receiver Operating Characteristic curve of 0.72. In each administrative claims validation cohort, model fit was similar to the derivation cohort. The distribution of standardized mortality rates among hospitals ranged from 13.0% to 23.7%, with 25th, 50th, and 75th percentiles of 16.5%, 17.4%, and 18.3%, respectively. Comparing model-derived risk-standardized state mortality rates with medical record-derived estimates, the correlation coefficient was 0.86 (Standard Error = 0.032). An administrative claims-based model for profiling hospitals for pneumonia mortality performs consistently over several years and produces hospital estimates close to those using a medical record model.
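A hierarchical model of this kind yields a risk-standardized rate per hospital by, roughly, comparing the deaths predicted with the hospital's own random effect against the deaths expected for the same case mix at an average hospital, then scaling by the overall rate. A sketch of that final step with hypothetical numbers (the model fit itself is not reproduced):

```python
def risk_standardized_rate(predicted, expected, overall_rate):
    """Risk-standardized mortality rate: model-predicted deaths (with the
    hospital-specific random effect included) over deaths expected for
    the same patients at an average hospital, scaled by the overall rate."""
    return predicted / expected * overall_rate

# A hospital predicted to lose 22 patients where an average hospital
# would lose 20, against a cohort-wide rate of 17.4% (the median above):
rsmr = risk_standardized_rate(22.0, 20.0, 0.174)  # 0.1914, i.e. about 19.1%
```

The predicted/expected ratio, rather than the raw observed rate, is what shrinks unstable small-hospital estimates toward the overall mean.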

  3. Estimating the effects of Cry1F Bt-maize pollen on non-target Lepidoptera using a mathematical model of exposure

    PubMed Central

    Perry, Joe N; Devos, Yann; Arpaia, Salvatore; Bartsch, Detlef; Ehlert, Christina; Gathmann, Achim; Hails, Rosemary S; Hendriksen, Niels B; Kiss, Jozsef; Messéan, Antoine; Mestdagh, Sylvie; Neemann, Gerd; Nuti, Marco; Sweet, Jeremy B; Tebbe, Christoph C

    2012-01-01

In farmland biodiversity, a potential risk to the larvae of non-target Lepidoptera from genetically modified (GM) Bt-maize expressing insecticidal Cry1 proteins is the ingestion of harmful amounts of pollen deposited on their host plants. A previous mathematical model of exposure quantified this risk for Cry1Ab protein. We extend this model to quantify the risk for sensitive species exposed to pollen containing Cry1F protein from maize event 1507 and to provide recommendations for management to mitigate this risk. A 14-parameter mathematical model integrating small- and large-scale exposure was used to estimate the larval mortality of hypothetical species with a range of sensitivities, and under a range of simulated mitigation measures consisting of non-Bt maize strips of different widths placed around the field edge. The greatest source of variability in estimated mortality was species sensitivity. Before allowance for effects of large-scale exposure, with moderate within-crop host-plant density and with no mitigation, estimated mortality locally was <10% for species of average sensitivity. For the worst-case extreme sensitivity considered, estimated mortality locally was 99.6% with no mitigation, although this estimate was reduced to below 40% with mitigation of 24-m-wide strips of non-Bt maize. For highly sensitive species, a 12-m-wide strip reduced estimated local mortality to below 1.5% when within-crop host-plant density was zero. Allowance for large-scale exposure effects would reduce these estimates of local mortality by a highly variable amount, but typically of the order of 50-fold. Mitigation efficacy depended critically on assumed within-crop host-plant density; if this could be assumed negligible, then the estimated effect of mitigation would reduce local mortality below 1% even for very highly sensitive species. Synthesis and applications.
Mitigation measures of risks of Bt-maize to sensitive larvae of non-target lepidopteran species can be effective, but depend on host-plant densities which are in turn affected by weed-management regimes. We discuss the relevance for management of maize events where cry1F is combined (stacked) with a herbicide-tolerance trait. This exemplifies how interactions between biota may occur when different traits are stacked irrespective of interactions between the proteins themselves and highlights the importance of accounting for crop management in the assessment of the ecological impact of GM plants. PMID:22496596

  4. Salmonella risk to consumers via pork is related to the Salmonella prevalence in pig feed.

    PubMed

    Rönnqvist, M; Välttilä, V; Ranta, J; Tuominen, P

    2018-05-01

Pigs are an important source of human infections with Salmonella, one of the most common causes of sporadic gastrointestinal infections and foodborne outbreaks in the European region. Feed has been estimated to be a significant source of Salmonella in piggeries in countries with a low Salmonella prevalence. To estimate the Salmonella risk to consumers via the pork production chain, including feed production, a quantitative risk assessment model was constructed. The Salmonella prevalence in feeds and in animals was estimated to be generally low in Finland, but the relative importance of feed as a source of Salmonella in pigs was estimated as potentially high. Discontinuation of the present strict Salmonella control could increase the risk of Salmonella in slaughter pigs and consequent infections in consumers. The increased use of low-risk, controlled feed ingredients could result in consistently lower residual contamination in pigs and help the tracing and control of the sources of infections. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    PubMed

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

Frailty models are useful for measuring unobserved heterogeneity in the risk of failure across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243), in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates a complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.

  6. Radiation exposure and circulatory disease risk: Hiroshima and Nagasaki atomic bomb survivor data, 1950-2003.

    PubMed

    Shimizu, Yukiko; Kodama, Kazunori; Nishi, Nobuo; Kasagi, Fumiyoshi; Suyama, Akihiko; Soda, Midori; Grant, Eric J; Sugiyama, Hiromi; Sakata, Ritsu; Moriwaki, Hiroko; Hayashi, Mikiko; Konda, Manami; Shore, Roy E

    2010-01-14

To investigate the degree to which ionising radiation confers risk of mortality from heart disease and stroke. Prospective cohort study with more than 50 years of follow-up. Atomic bomb survivors in Hiroshima and Nagasaki, Japan. 86 611 Life Span Study cohort members with individually estimated radiation doses from 0 to >3 Gy (86% received <0.2 Gy). Mortality from stroke or heart disease as the underlying cause of death, and dose-response relations with atomic bomb radiation. About 9600 participants died of stroke and 8400 died of heart disease between 1950 and 2003. For stroke, the estimated excess relative risk per gray was 9% (95% confidence interval 1% to 17%, P=0.02) on the basis of a linear dose-response model, but an indication of possible upward curvature suggested relatively little risk at low doses. For heart disease, the estimated excess relative risk per gray was 14% (6% to 23%, P<0.001); a linear model provided the best fit, suggesting excess risk even at lower doses. However, the dose-response effect over the restricted dose range of 0 to 0.5 Gy was not significant. Prospective data on smoking, alcohol intake, education, occupation, obesity, and diabetes had almost no impact on the radiation risk estimates for either stroke or heart disease, and misdiagnosis of cancers as circulatory diseases could not account for the associations seen. Doses above 0.5 Gy are associated with an elevated risk of both stroke and heart disease, but the degree of risk at lower doses is unclear. Stroke and heart disease together account for about one third as many radiation-associated excess deaths as do cancers among atomic bomb survivors.
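Under the linear dose-response model quoted above, the excess relative risk scales in direct proportion to dose: RR(D) = 1 + beta * D. A small sketch using the point estimates reported for this cohort (14%/Gy for heart disease, 9%/Gy for stroke), ignoring the confidence intervals and the possible upward curvature for stroke:

```python
def relative_risk(dose_gy, err_per_gy):
    """Linear excess-relative-risk model: RR(D) = 1 + beta * D,
    where beta is the excess relative risk per gray."""
    return 1.0 + err_per_gy * dose_gy

# Point estimates from the Life Span Study analysis above:
heart_1gy = relative_risk(1.0, 0.14)   # RR = 1.14 at 1 Gy
stroke_05 = relative_risk(0.5, 0.09)   # RR = 1.045 at 0.5 Gy
```

The linearity is exactly why the abstract's caveat matters: the same beta implies a small but non-zero excess at low doses, which the restricted 0-0.5 Gy analysis could not confirm.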

  7. Serum uric acid and cancer mortality and incidence: a systematic review and meta-analysis.

    PubMed

    Dovell, Frances; Boffetta, Paolo

    2018-07-01

Elevated serum uric acid (SUA) is a marker of chronic inflammation and has been suggested to be associated with increased risk of cancer, although its antioxidant capacity would justify an anticancer effect. Previous meta-analyses did not include all available results. We conducted a systematic review of prospective studies on SUA level and risk of all cancers and specific cancers, and conducted a meta-analysis based on random-effects models for high versus low SUA level as well as for a 1 mg/dl increase in SUA. The relative risk of all cancers for high versus low SUA level was 1.11 (95% confidence interval: 0.94-1.27; 11 risk estimates); that for a 1 mg/dl increase in SUA level was 1.03 (95% confidence interval: 0.99-1.07). Similar results were obtained for lung cancer (six risk estimates) and colon cancer (four risk estimates). Results for other cancers were sparse. Elevated SUA levels appear to be associated with a modest increase in overall cancer risk, although the combined risk estimate did not reach the formal level of statistical significance. Results for specific cancers were limited and mainly negative.
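Random-effects pooling of this kind is typically done with the DerSimonian-Laird estimator: each study's log relative risk is weighted by the inverse of its total variance, where a moment-based between-study variance tau^2 is added to each within-study variance. A compact sketch (the study inputs are invented, not the meta-analysis data):

```python
import math

def dersimonian_laird(log_rrs, ses):
    """DerSimonian-Laird random-effects pooling of study log relative
    risks with standard errors ses. Returns (pooled RR, SE of log RR)."""
    w = [1.0 / se ** 2 for se in ses]                 # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_rrs) - 1)) / c)     # between-study variance
    w_re = [1.0 / (se ** 2 + tau2) for se in ses]     # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    return math.exp(pooled), math.sqrt(1.0 / sum(w_re))

# Three hypothetical studies, all reporting log RR = 0.1 (RR ~ 1.105):
rr, se = dersimonian_laird([0.1, 0.1, 0.1], [0.2, 0.2, 0.2])
```

When the studies agree exactly, Q = 0, tau^2 is truncated to zero, and the result reduces to the fixed-effect pooled estimate; heterogeneity inflates tau^2 and widens the pooled interval instead.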

  8. Latent Trajectories of Common Mental Health Disorder Risk Across 3 Decades of Adulthood in a Population-Based Cohort.

    PubMed

    Paksarian, Diana; Cui, Lihong; Angst, Jules; Ajdacic-Gross, Vladeta; Rössler, Wulf; Merikangas, Kathleen R

    2016-10-01

    Epidemiologic evidence indicates that most of the general population will experience a mental health disorder at some point in their lives. However, few prospective population-based studies have estimated trajectories of risk for mental disorders from young through middle adulthood to estimate the proportion of individuals who experience persistent mental disorder across this age period. To describe the proportion of the population who experience persistent mental disorder across adulthood and to estimate latent trajectories of disorder risk across this age period. A population-based, prospective cohort study was conducted between 1979 and 2008 in the canton of Zurich, Switzerland. A stratified random sample of 591 Swiss citizens was enrolled in 1978 at ages 19 years (men) and 20 years (women); 7 interviews were performed during a 29-year period. Men were sampled from military enrollment records and women from electoral records. From those initially enrolled, participants with high levels of psychiatric symptoms were oversampled for follow-up. Data analysis was performed from July 28, 2015, to June 8, 2016. Latent trajectories, estimated using growth mixture modeling, of past-year mood/anxiety disorder (ie, major depressive episode, phobias, panic, generalized anxiety disorder, and obsessive-compulsive disorder), substance use disorder (ie, drug abuse or dependence and alcohol abuse or dependence), and any mental disorder (ie, any of the above) assessed during in-person semistructured interviews at each wave. Diagnoses were based on DSM-III, DSM-III-R, and DSM-IV criteria. Of the 591 participants at baseline, 299 (50.6%) were female. Persistent mental health disorder across multiple study waves was rare. Among 252 individuals (42.6%) who participated in all 7 study waves, only 1.2% met criteria for disorder every time. 
Growth mixture modeling identified 3 classes of risk for any disorder across adulthood: low (estimated prevalence, 40.0%; 95% CI, -8.7% to 88.9%), increasing-decreasing (estimated prevalence, 15.3%; 95% CI, 1.0% to 29.6%), and increasing (estimated prevalence, 44.7%; 95% CI, -0.9% to 90.1%). Although no classes were characterized by persistently high disorder risk, for those in the increasing-decreasing class, risk was high from the late 20s to early 40s. Sex-specific models indicated 4 trajectory classes for women but only 3 for men. Persistently high mental health disorder risk across 3 decades of adulthood was rare in this population-based sample. Identifying early determinants of sex-specific risk trajectories would benefit prevention efforts.

  9. Incidence and Residual Risk of HIV, HBV and HCV Infections Among Blood Donors in Tehran.

    PubMed

    Saber, Hamid Reza; Tabatabaee, Seyed Morteza; Abasian, Ali; Jamali, Mostafa; SalekMoghadam, Ebadollah; Hajibeigi, Bashir; Alavian, Seyed Moayed; Mirrezaie, Seyed Mohammad

    2017-09-01

    Estimation of residual risk is essential to monitor and improve blood safety. Epidemiologic knowledge of transfusion-transmitted viral infections (TTIs) in the Iranian donor population is confined to a few studies based on prevalence rates, and there are no reports on the residual risk of TTIs in Iran. In the present survey, a software database of donor records from the Tehran Blood Transfusion Center (TBTC) was used to estimate the incidence and residual risk of hepatitis B virus (HBV), hepatitis C virus (HCV) and human immunodeficiency virus (HIV) infections by applying the incidence rate/window period (IR-WP) model. A total of 1,207,155 repeat donations were included in the analysis, representing a mean of 8.4 donations per donor over 6 years. Incidence among repeat donors was estimated by dividing the number of confirmed seroconverting donors by the total number of person-years at risk, and the residual risk was calculated using the incidence/window period model. Incidence rates and residual risks for HBV, HCV and HIV infections were calculated for the total study period (2005-2010) and for two consecutive subperiods (2005-2007 and 2008-2010). According to the IR-WP model, the overall residual risk for HIV and HCV in the total study period was 0.4 and 12.5 per million units, respectively, and for HBV 4.57 per 100,000 donations. The incidence and residual risk of TTIs calculated for TBTC's blood supply were low and comparable with those of developed countries for HIV infection, but high for HCV and HBV infections. Blood safety may therefore be better managed by applying additional techniques such as nucleic acid amplification testing.
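The IR-WP model described above multiplies the seroconversion incidence among repeat donors by the length of the infectious window during which screening tests remain negative. A minimal sketch with hypothetical inputs, not TBTC figures:

```python
def residual_risk_per_donation(incidence_per_100k_py: float,
                               window_days: float) -> float:
    """IR-WP model: residual risk ~ incidence rate x window period (years)."""
    incidence_per_py = incidence_per_100k_py / 100_000.0
    return incidence_per_py * (window_days / 365.0)

# Hypothetical: 10 seroconversions per 100,000 person-years and a
# 59-day serological window.
risk = residual_risk_per_donation(10.0, 59.0)
risk_per_million_units = risk * 1e6
```

Shortening the window (e.g. by nucleic acid testing, as the abstract suggests) reduces the residual risk proportionally.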

  10. Risk of African swine fever introduction into the European Union through transport-associated routes: returning trucks and waste from international ships and planes.

    PubMed

    Mur, Lina; Martínez-López, Beatriz; Sánchez-Vizcaíno, José Manuel

    2012-08-30

    The uncontrolled presence of African swine fever (ASF) in the Russian Federation (RF) poses a serious risk to the whole European Union (EU) pig industry. Although trade in pigs and their products has been banned since the official notification in June 2007, ASF virus (ASFV) may still be introduced by other routes, which are very frequent in ASF and more difficult to control, such as contaminated waste or infected vehicles. This study was intended to estimate the risk of ASFV introduction into the EU through three types of transport routes: returning trucks, waste from international ships and waste from international planes, referred to here as transport-associated routes (TAR). Since no detailed and official information was available for these routes, a semi-quantitative model based on the weighted combination of risk factors was developed to estimate the risk of ASFV introduction by TAR. Relative weights for the combination of the different risk factors, as well as validation of the model results, were obtained by expert opinion elicitation. Model results indicate that the relative risk of ASFV introduction through TAR is low for most EU countries (16), although some countries, specifically Poland and Lithuania, concentrate high levels of risk, with returning trucks being the TAR that currently poses the highest risk of ASFV introduction into the EU. The spatial distribution of the risk of ASFV introduction varies considerably between the analyzed introduction routes. The results also highlight the need to increase awareness and precautions for ASF prevention, particularly ensuring truck disinfection, to minimize the potential risk of entry into the EU. This study presents the first assessment of ASF introduction into the EU through TAR. The innovative model developed here could be used in data-scarce situations for estimating the relative risk associated with each EU country. This simple methodology provides rapid and easy-to-interpret risk results that may be used for a targeted and cost-effective allocation of resources to prevent disease introduction.

  11. Risk of African swine fever introduction into the European Union through transport-associated routes: returning trucks and waste from international ships and planes

    PubMed Central

    2012-01-01

    Background The uncontrolled presence of African swine fever (ASF) in the Russian Federation (RF) poses a serious risk to the whole European Union (EU) pig industry. Although trade in pigs and their products has been banned since the official notification in June 2007, ASF virus (ASFV) may still be introduced by other routes, which are very frequent in ASF and more difficult to control, such as contaminated waste or infected vehicles. This study was intended to estimate the risk of ASFV introduction into the EU through three types of transport routes: returning trucks, waste from international ships and waste from international planes, referred to here as transport-associated routes (TAR). Since no detailed and official information was available for these routes, a semi-quantitative model based on the weighted combination of risk factors was developed to estimate the risk of ASFV introduction by TAR. Relative weights for the combination of the different risk factors, as well as validation of the model results, were obtained by expert opinion elicitation. Results Model results indicate that the relative risk of ASFV introduction through TAR is low for most EU countries (16), although some countries, specifically Poland and Lithuania, concentrate high levels of risk, with returning trucks being the TAR that currently poses the highest risk of ASFV introduction into the EU. The spatial distribution of the risk of ASFV introduction varies considerably between the analyzed introduction routes. The results also highlight the need to increase awareness and precautions for ASF prevention, particularly ensuring truck disinfection, to minimize the potential risk of entry into the EU. Conclusions This study presents the first assessment of ASF introduction into the EU through TAR. The innovative model developed here could be used in data-scarce situations for estimating the relative risk associated with each EU country. This simple methodology provides rapid and easy-to-interpret risk results that may be used for a targeted and cost-effective allocation of resources to prevent disease introduction. PMID:22935221

  12. Bayesian averaging over Decision Tree models for trauma severity scoring.

    PubMed

    Schetinin, V; Jakaite, L; Krzanowski, W

    2018-01-01

    Health care practitioners analyse the possible risks of misleading decisions and need to estimate and quantify the uncertainty in predictions. We have examined the "gold standard" of screening a patient's condition for predicting survival probability, based on logistic regression modelling, which is used in trauma care for clinical purposes and quality audit. This methodology rests on theoretical assumptions about the data and their uncertainties. Models induced within such an approach have exhibited a number of problems, such as unexplained fluctuation of predicted survival and low accuracy of the uncertainty intervals within which predictions are made. The Bayesian method, which in theory is capable of providing accurate predictions and uncertainty estimates, has been adopted in our study using Decision Tree models. Our approach has been tested on a large set of patients registered in the US National Trauma Data Bank and has outperformed the standard method in terms of prediction accuracy, thereby providing practitioners with accurate estimates of the predictive posterior densities of interest that are required for making risk-aware decisions. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Modeling the survival kinetics of Salmonella in tree nuts for use in risk assessment.

    PubMed

    Santillana Farakos, Sofia M; Pouillot, Régis; Anderson, Nathan; Johnson, Rhoma; Son, Insook; Van Doren, Jane

    2016-06-16

    Salmonella has been shown to survive in tree nuts over long periods of time. This survival capacity and its variability are key elements for risk assessment of Salmonella in tree nuts. The aim of this study was to develop a mathematical model to predict survival of Salmonella in tree nuts at ambient storage temperatures that considers variability and uncertainty separately and can easily be incorporated into a risk assessment model. Data on Salmonella survival on raw almonds, pecans, pistachios and walnuts were collected from the peer reviewed literature. The Weibull model was chosen as the baseline model and various fixed effect and mixed effect models were fit to the data. The best model identified through statistical analysis testing was then used to develop a hierarchical Bayesian model. Salmonella in tree nuts showed slow declines at temperatures ranging from 21°C to 24°C. A high degree of variability in survival was observed across tree nut studies reported in the literature. Statistical analysis results indicated that the best applicable model was a mixed effect model that included a fixed and random variation of δ per tree nut (which is the time it takes for the first log10 reduction) and a fixed variation of ρ per tree nut (parameter which defines the shape of the curve). Higher estimated survival rates (δ) were obtained for Salmonella on pistachios, followed in decreasing order by pecans, almonds and walnuts. The posterior distributions obtained from Bayesian inference were used to estimate the variability in the log10 decrease levels in survival for each tree nut, and the uncertainty of these estimates. These modeled uncertainty and variability distributions of the estimates can be used to obtain a complete exposure assessment of Salmonella in tree nuts when including time-temperature parameters for storage and consumption data. 
The statistical approach presented in this study may be applied to any studies that aim to develop predictive models to be implemented in a probabilistic exposure assessment or a quantitative microbial risk assessment. Published by Elsevier B.V.
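The Weibull survival model used above expresses the log10 reduction at time t through δ (the time to the first log10 reduction) and ρ (the shape of the curve). A minimal sketch with hypothetical parameter values, not the fitted tree-nut estimates:

```python
def log10_reduction(t: float, delta: float, rho: float) -> float:
    """Weibull survival model: log10(N_t / N_0) = -(t / delta) ** rho."""
    return -((t / delta) ** rho)

# Hypothetical ambient-storage parameters: delta = 100 days to the first
# log10 reduction, rho = 0.8 (a concave, tailing survival curve).
drop_at_delta = log10_reduction(100.0, 100.0, 0.8)   # exactly -1 at t = delta
drop_at_300 = log10_reduction(300.0, 100.0, 0.8)
```

By construction the curve passes through one log10 reduction at t = δ for any ρ, which is why the study could let δ vary by tree nut while sharing the shape parameter.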

  14. Lifetime Incidence of CKD Stages 3–5 in the United States

    PubMed Central

    Grams, Morgan E.; Chow, Eric K.H.; Segev, Dorry L.; Coresh, Josef

    2013-01-01

    Background Lifetime risk estimates of chronic kidney disease (CKD) can motivate preventative behaviors at the individual level and forecast disease burden and health care utilization at the population level. Study Design Markov Monte Carlo model simulation study. Setting & Population Current U.S. black and white population. Model, Perspective, & Timeframe Markov models simulating kidney disease development, using an individual perspective and lifetime horizon. Outcomes Age-, sex- and race-specific residual lifetime risks of CKD stages 3a+ (eGFR<60 ml/min/1.73m2), 3b+ (eGFR<45 ml/min/1.73 m2), and 4+ (eGFR<30 ml/min/1.73m2), and end stage renal disease (ESRD). Measurements State transition probabilities of developing CKD and of dying prior to its development were modeled using: 1) mortality rates from National Vital Statistics Report, 2) mortality risk estimates from a 2-million person meta-analysis, and 3) CKD prevalence from National Health and Nutrition Examination Surveys. Incidence, prevalence, and mortality related to ESRD were supplied by the US Renal Disease System. Results At birth, the overall lifetime risks of CKD stages 3a+, 3b+, 4+, and ESRD were 59.1%, 33.6%, 11.5%, and 3.6%, respectively. Women experienced greater CKD risk yet lower ESRD risk than men; blacks of both sexes had markedly higher CKD stage 4+ and ESRD risk (lifetime risks for white men, white women, black men, and black women, respectively: 53.6%, 64.9%, 51.8%, and 63.6% [CKD stage 3a+]; 29.0%, 36.7%, 33.7%, and 40.2% [CKD stage 3b+]; 9.3%, 11.4%, 15.8%, and 18.5% [CKD stage 4+]; and 3.3%, 2.2%, 8.5%, and 7.8% [ESRD]). Risk of CKD increased with age, with approximately one-half of CKD stage 3a+ cases developing after 70 years of age. Limitations CKD incidence estimates were modeled from prevalence in the U.S. population. 
Conclusions In the U.S., the lifetime risk of developing CKD stage 3a+ is high, underscoring the importance of primary prevention and effective therapy to reduce CKD-related morbidity and mortality. PMID:23566637
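The Markov simulation described above amounts to yearly competing transitions out of a healthy state, either to CKD or to death. A toy constant-hazard sketch (the probabilities are hypothetical; the study used age-, sex- and race-specific rates):

```python
def lifetime_risk(p_ckd: float, p_die: float, years: int = 120) -> float:
    """Discrete-time Markov chain with states healthy / CKD / dead.

    Each year a healthy person develops CKD with probability p_ckd or dies
    with probability p_die; lifetime risk is the probability of ever
    reaching the CKD state before death."""
    healthy, ever_ckd = 1.0, 0.0
    for _ in range(years):
        ever_ckd += healthy * p_ckd
        healthy *= 1.0 - p_ckd - p_die
    return ever_ckd

# With constant hazards the lifetime risk approaches p_ckd / (p_ckd + p_die)
# as the horizon grows.
risk = lifetime_risk(0.01, 0.02)
```

The competing-death term is what makes lifetime CKD risk lower for groups with higher mortality, mirroring the sex and race patterns reported above.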

  15. Effectiveness and cost-effectiveness of sentinel lymph node biopsy compared with axillary node dissection in patients with early-stage breast cancer: a decision model analysis.

    PubMed

    Verry, H; Lord, S J; Martin, A; Gill, G; Lee, C K; Howard, K; Wetzig, N; Simes, J

    2012-03-13

    Sentinel lymph node biopsy (SLNB) is less invasive than axillary lymph node dissection (ALND) for staging early breast cancer, and has a lower risk of arm lymphoedema and similar rates of locoregional recurrence up to 8 years. This study estimates the longer-term effectiveness and cost-effectiveness of SLNB. A Markov decision model was developed to estimate the incremental quality-adjusted life years (QALYs) and costs of an SLNB-based staging and management strategy compared with ALND over 20 years' follow-up. The probability and quality-of-life weighting (utility) of outcomes were estimated from published data and population statistics. Costs were estimated from the perspective of the Australian health care system. The model was used to identify key factors affecting treatment decisions. The SLNB was more effective and less costly than the ALND over 20 years, with 8 QALYs gained and $883,000 saved per 1000 patients. The SLNB was less effective when: SLNB false negative (FN) rate >13%; 5-year incidence of axillary recurrence after an SLNB FN>19%; risk of an SLNB-positive result >48%; lymphoedema prevalence after ALND <14%; or lymphoedema utility decrement <0.012. The long-term advantage of SLNB over ALND was modest and sensitive to variations in key assumptions, indicating a need for reliable information on lymphoedema incidence and disutility following SLNB. In addition to awaiting longer-term trial data, risk models to better identify patients at high risk of axillary metastasis will be valuable to inform decision-making.

  16. Improvement in spine bone density and reduction in risk of vertebral fractures during treatment with antiresorptive drugs.

    PubMed

    Cummings, Steven R; Karpf, David B; Harris, Fran; Genant, Harry K; Ensrud, Kristine; LaCroix, Andrea Z; Black, Dennis M

    2002-03-01

    To estimate how much the improvement in bone mass accounts for the reduction in risk of vertebral fracture that has been observed in randomized trials of antiresorptive treatments for osteoporosis. After a systematic search, we conducted a meta-analysis of 12 trials to describe the relation between improvement in spine bone mineral density and reduction in risk of vertebral fracture in postmenopausal women. We also used logistic models to estimate the proportion of the reduction in risk of vertebral fracture observed with alendronate in the Fracture Intervention Trial that was due to improvement in bone mineral density. Across the 12 trials, a 1% improvement in spine bone mineral density was associated with a 0.03 decrease (95% confidence interval [CI]: 0.02 to 0.05) in the relative risk (RR) of vertebral fracture. The reductions in risk were greater than predicted from improvement in bone mineral density; for example, the model estimated that treatments predicted to reduce fracture risk by 20% (RR = 0.80), based on improvement in bone mineral density, actually reduce the risk of fracture by about 45% (RR = 0.55). In the Fracture Intervention Trial, improvement in spine bone mineral density explained 16% (95% CI: 11% to 27%) of the reduction in the risk of vertebral fracture with alendronate. Improvement in spine bone mineral density during treatment with antiresorptive drugs accounts for a predictable but small part of the observed reduction in the risk of vertebral fracture.
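The pooled relation above, a 0.03 decrease in relative risk per 1% gain in spine BMD, can be written as a simple linear predictor. A sketch using only the slope quoted in the abstract:

```python
def predicted_rr(bmd_gain_pct: float, slope: float = 0.03) -> float:
    """Predicted relative risk of vertebral fracture from the pooled
    linear relation RR = 1 - slope * (% spine BMD gain)."""
    return max(0.0, 1.0 - slope * bmd_gain_pct)

rr_for_1pct = predicted_rr(1.0)   # 0.97, i.e. a 3% risk reduction
# A predicted RR of 0.80 corresponds to roughly a 6.7% BMD gain, yet the
# abstract reports observed risk reductions near RR = 0.55 for such
# treatments, illustrating that BMD explains only part of the effect.
```

This gap between predicted and observed reduction is the abstract's central finding.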

  17. A tool for efficient, model-independent management optimization under uncertainty

    USGS Publications Warehouse

    White, Jeremy; Fienen, Michael N.; Barlow, Paul M.; Welter, Dave E.

    2018-01-01

    To fill a need for risk-based environmental management optimization, we have developed PESTPP-OPT, a model-independent tool for resource management optimization under uncertainty. PESTPP-OPT solves a sequential linear programming (SLP) problem and also implements (optional) efficient, “on-the-fly” (without user intervention) first-order, second-moment (FOSM) uncertainty techniques to estimate model-derived constraint uncertainty. Combined with a user-specified risk value, the constraint uncertainty estimates are used to form chance-constraints for the SLP solution process, so that any optimal solution includes contributions from model input and observation uncertainty. In this way, a “single answer” that includes uncertainty is yielded from the modeling analysis. PESTPP-OPT uses the familiar PEST/PEST++ model interface protocols, which makes it widely applicable to many modeling analyses. The use of PESTPP-OPT is demonstrated with a synthetic, integrated surface-water/groundwater model. The function and implications of chance constraints for this synthetic model are discussed.
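The chance-constraint construction described above tightens each deterministic constraint by a multiple of the FOSM standard deviation, so that the constraint holds with a user-specified probability. A minimal sketch with hypothetical numbers, not PESTPP-OPT's implementation:

```python
from statistics import NormalDist

def chance_constraint_bound(rhs: float, sigma: float, risk: float) -> float:
    """Tighten a '<= rhs' constraint so it holds with probability 1 - risk,
    assuming the model-derived constraint value is normally distributed
    with standard deviation sigma (the FOSM approximation)."""
    z = NormalDist().inv_cdf(1.0 - risk)
    return rhs - z * sigma

# Hypothetical: a simulated head must stay below 10.0 m, and FOSM gives
# sigma = 0.5 m; at 5% risk the effective bound shrinks to about 9.18 m.
tight_rhs = chance_constraint_bound(10.0, 0.5, 0.05)
```

At risk = 0.5 the z-score is zero and the chance constraint collapses back to the deterministic one, which matches the intuition that a 50% risk tolerance buys no safety margin.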

  18. Ecological risk assessment in a large river-reservoir. 5: Aerial insectivorous wildlife

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baron, L.A.; Sample, B.E.; Suter, G.W. II

    Risks to aerial insectivores (e.g., rough-winged swallows, little brown bats, and endangered gray bats) were assessed for the remedial investigation of the Clinch River/Poplar Creek (CR/PC) system. Adult mayflies and sediment were collected from three locations and analyzed for contaminants. Sediment-to-mayfly contaminant uptake factors were generated from these data and used to estimate contaminant concentrations in mayflies from 13 additional locations. Contaminants of potential ecological concern (COPECs) were identified by comparing exposure estimates generated using point estimates of parameter values to NOAELs. To incorporate the variation in exposure parameters and to provide a better estimate of the potential exposure, the exposure model was recalculated using Monte Carlo methods. The potential for adverse effects was estimated based on the comparison of the exposure distribution and the LOAEL. The results of this assessment suggest that population-level effects on rough-winged swallows and little brown bats are unlikely. However, because gray bats are endangered, effects on individuals foraging in limited subreaches of the CR/PC system may be significant. This assessment illustrates the advantage of an iterative approach to ecological risk assessments, using fewer conservative assumptions and more realistic modeling of exposure.

  19. Using the Bivariate Dale Model to jointly estimate predictors of frequency and quantity of alcohol use.

    PubMed

    McMillan, Garnett P; Hanson, Tim; Bedrick, Edward J; Lapham, Sandra C

    2005-09-01

    This study demonstrates the usefulness of the Bivariate Dale Model (BDM) as a method for estimating the relationship between risk factors and the quantity and frequency of alcohol use, as well as the degree of association between these highly correlated drinking measures. The BDM is used to evaluate childhood sexual abuse, along with age and gender, as risk factors for the quantity and frequency of beer consumption in a sample of driving-while-intoxicated (DWI) offenders (N = 1,964; 1,612 men). The BDM allows one to estimate the relative odds of drinking up to each level of ordinal-scaled quantity and frequency of alcohol use, as well as to model the degree of association between quantity and frequency of alcohol consumption as a function of covariates. Individuals who experienced childhood sexual abuse have increased risks of higher quantity and frequency of beer consumption. A history of childhood sexual abuse has a greater effect on women, leading them to drink larger quantities of beer per drinking occasion. The BDM is a useful method for evaluating predictors of the quantity-frequency of alcohol consumption. SAS macro code for fitting the BDM is provided.

  20. Estimate of Space Radiation-Induced Cancer Risks for International Space Station Orbits

    NASA Technical Reports Server (NTRS)

    Wu, Honglu; Atwell, William; Cucinotta, Francis A.; Yang, Chui-hsu

    1996-01-01

    Excess cancer risks from exposures to space radiation are estimated for various orbits of the International Space Station (ISS). Organ exposures are computed with the transport codes, BRYNTRN and HZETRN, and the computerized anatomical male and computerized anatomical female models. Cancer risk coefficients in the National Council on Radiation Protection and Measurements report No. 98 are used to generate lifetime excess cancer incidence and cancer mortality after a one-month mission to ISS. The generated data are tabulated to serve as a quick reference for assessment of radiation risk to astronauts on ISS missions.

  1. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    NASA Astrophysics Data System (ADS)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed, including event sequence modeling, system modeling, and quantification. To support the quantification of the identified accident sequences, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors differs significantly from that in power reactors, a time-oriented HRA model (reliability physics model) was applied to estimate the human error probability (HEP) of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. Response surface and direct Monte Carlo simulation with Latin hypercube sampling were applied to estimate the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The HEP for the core relocation was estimated from these two competing quantities, and the sensitivity of each probability distribution in the human reliability estimate was investigated. To quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected for its capability of incorporating uncertainties in the model itself and in its parameters. The HEP from the current time-oriented model was compared with that from the ASEP approach, and both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability, and its potential usefulness for quantifying model uncertainty as a sensitivity analysis in the PRA model.
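In a reliability physics model of this kind, the human error probability is the chance that the operators' performance time exceeds the phenomenological time available before core damage. A Monte Carlo sketch with illustrative distributions and parameters, not those of the MNR study:

```python
import random

def hep_monte_carlo(mu_perform: float = 2.9, n: int = 50_000,
                    seed: int = 0) -> float:
    """Estimate HEP = P(performance time > phenomenological time) by
    sampling the two competing random variables. The normal and lognormal
    choices and all parameters here are hypothetical."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        t_phenom = rng.gauss(30.0, 5.0)                   # minutes available
        t_perform = rng.lognormvariate(mu_perform, 0.4)   # operator action time
        if t_perform > t_phenom:
            failures += 1
    return failures / n

hep = hep_monte_carlo()
```

Slower operator response (a larger lognormal location parameter) raises the estimated HEP, which is the kind of sensitivity the study examined across candidate distributions.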

  2. Logistic regression of family data from retrospective study designs.

    PubMed

    Whittemore, Alice S; Halpern, Jerry

    2003-11-01

    We wish to study the effects of genetic and environmental factors on disease risk, using data from families ascertained because they contain multiple cases of the disease. To do so, we must account for the way participants were ascertained, and for within-family correlations in both disease occurrences and covariates. We model the joint probability distribution of the covariates of ascertained family members, given family disease occurrence and pedigree structure. We describe two such covariate models: the random effects model and the marginal model. Both models assume a logistic form for the distribution of one person's covariates that involves a vector beta of regression parameters. The components of beta in the two models have different interpretations, and they differ in magnitude when the covariates are correlated within families. We describe the ascertainment assumptions needed to consistently estimate the parameters beta(RE) of the random effects model and the parameters beta(M) of the marginal model. Under the ascertainment assumptions for the random effects model, we show that conditional logistic regression (CLR) of matched family data gives a consistent estimate of beta(RE) and a consistent estimate of its covariance matrix. Under the ascertainment assumptions for the marginal model, we show that unconditional logistic regression (ULR) gives a consistent estimate of beta(M), and we give a consistent estimator for its covariance matrix. The random effects/CLR approach is simple to use and to interpret, but it can use data only from families containing both affected and unaffected members. The marginal/ULR approach uses data from all individuals, but its variance estimates require special computations. A C program to compute these variance estimates is available at http://www.stanford.edu/dept/HRP/epidemiology. We illustrate these pros and cons by application to data on the effects of parity on ovarian cancer risk in mother/daughter pairs, and use simulations to study the performance of the estimates. Copyright 2003 Wiley-Liss, Inc.

  3. Estimating associations of mobile phone use and brain tumours taking into account laterality: a comparison and theoretical evaluation of applied methods.

    PubMed

    Frederiksen, Kirsten; Deltour, Isabelle; Schüz, Joachim

    2012-12-10

    Estimating exposure-outcome associations using laterality information on exposure and on outcome is an issue when estimating associations between mobile phone use and brain tumour risk. The exposure is localized; therefore, a potential risk is expected to exist primarily on the side of the head where the phone is usually held (ipsilateral exposure), and to a lesser extent on the opposite side of the head (contralateral exposure). Several measures of the associations with ipsilateral and contralateral exposure, dealing with different sampling designs, have been presented in the literature. This paper presents a general framework for the analysis of such studies using a likelihood-based approach in a competing risks model setting. The approach clarifies the implicit assumptions required for the validity of the presented estimators, particularly that some approaches assume the risk with contralateral exposure to be zero. The performance of the estimators is illustrated in a simulation study, showing, for instance, that while some scenarios entail a loss of statistical power, others, in the case of a positive ipsilateral exposure-outcome association, would result in a negatively biased estimate of the contralateral exposure parameter, irrespective of any additional recall bias. In conclusion, our theoretical evaluations and the results of the simulation study emphasize the importance of setting up a formal model, which furthermore allows estimation in more complicated and perhaps more realistic exposure settings, such as taking into account exposure to both sides of the head. Copyright © 2012 John Wiley & Sons, Ltd.

  4. Probabilistic, meso-scale flood loss modelling

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2016-04-01

    Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, all the more so when changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments, and even less so for flood loss modelling. The state of the art in flood loss modelling is still the use of simple, deterministic approaches such as stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al. submitted). The application of bagging decision tree based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken on the one hand via a comparison with eight deterministic loss models, including stage-damage functions as well as multi-variate models, and on the other hand by comparing the results with official loss data provided by the Saxon Relief Bank (SAB). The results show that the uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto A, Kreibich H, Merz B, Schröter K (submitted) Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
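Bagging yields a loss *distribution* rather than a point estimate because each bootstrap resample trains its own tree and contributes one prediction. A self-contained sketch using depth-1 regression trees (stumps) and toy depth-loss data; the real models (BT-FLEMO) use deeper multi-variate trees:

```python
import random
import statistics

def fit_stump(xs, ys):
    """Depth-1 regression tree: best single threshold on x minimizing SSE."""
    best = (float("inf"), None)
    for thr in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= thr]
        right = [y for x, y in zip(xs, ys) if x > thr]
        if not left or not right:
            continue
        ml, mr = statistics.mean(left), statistics.mean(right)
        sse = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if sse < best[0]:
            best = (sse, (thr, ml, mr))
    if best[1] is None:                       # no valid split: predict the mean
        m = statistics.mean(ys)
        return lambda x: m
    thr, ml, mr = best[1]
    return lambda x: ml if x <= thr else mr

def bagged_loss_distribution(xs, ys, x_new, n_trees=200, seed=1):
    """Bagging: each bootstrap resample yields one stump and one prediction,
    so the estimate for x_new comes with a whole distribution."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in xs]
        tree = fit_stump([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(tree(x_new))
    return preds

# Toy data: flood water depth (m) vs. observed loss (arbitrary units).
depth = [0.2, 0.5, 0.8, 1.0, 1.5, 2.0, 2.5, 3.0]
loss = [1.0, 2.0, 4.0, 5.0, 9.0, 12.0, 14.0, 15.0]
dist = bagged_loss_distribution(depth, loss, x_new=1.8)
```

The spread of `dist` is the "inherent" uncertainty information the abstract highlights; quantiles of this distribution replace the single number a stage-damage function would return.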

  5. Crossover effect of spouse weekly working hours on estimated 10-year risk of cardiovascular disease.

    PubMed

    Kang, Mo-Yeol; Hong, Yun-Chul

    2017-01-01

    To investigate the association between spouse weekly working hours (SWWH) and the estimated 10-year risk of cardiovascular disease (CVD). This cross-sectional study was based on data obtained from the Korean National Health and Nutrition Examination Survey 2007-2012. Data from 16,917 participants (8,330 husbands, 8,587 wives) were used for this analysis. Participants' clinical data were collected to estimate the 10-year risk of CVD, along with their weekly working hours. Multiple logistic regression was conducted to investigate the association between SWWH and the estimated 10-year risk of CVD. We also performed a stratified analysis according to each participant's and their spouse's employment status. Compared to those whose spouses worked ≤30 hours per week, the estimated 10-year risk of CVD was significantly higher, and rose as SWWH increased, among those whose spouses worked >30 hours per week. After adjusting for covariates, the odds ratio for high CVD risk increased with SWWH, up to 2.52 among husbands and 2.43 among wives. We also found that the association between SWWH and the estimated 10-year risk of CVD varied according to employment status. Analysis of each component included in the CVD appraisal model showed that SWWH was closely related to diabetes in men and to smoking habits in women. A spouse's long working hours are associated with an individual's future risk of CVD, especially among husbands.
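
    For readers unfamiliar with the reported odds ratios, here is a minimal sketch of how an unadjusted OR and its 95% confidence interval are computed from a 2×2 table. The counts are invented for illustration and are not KNHANES data:

```python
import math

# Hypothetical 2x2 table -- rows: spouse works >30 h/week vs <=30 h/week;
# columns: high estimated CVD risk yes/no. Counts are illustrative only.
a, b = 240, 1760   # exposed:   high risk / not high risk
c, d = 130, 1870   # unexposed: high risk / not high risk

or_ = (a * d) / (b * c)                    # unadjusted odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # standard error of log(OR)
lo, hi = (math.exp(math.log(or_) + z * se) for z in (-1.96, 1.96))
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 1.96 1.57 2.45
```

    The study's covariate-adjusted ORs come from multiple logistic regression rather than this raw table, but the interpretation of the resulting odds ratio is the same.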

  6. Estimating Wisconsin asthma prevalence using clinical electronic health records and public health data.

    PubMed

    Tomasallo, Carrie D; Hanrahan, Lawrence P; Tandias, Aman; Chang, Timothy S; Cowan, Kelly J; Guilbert, Theresa W

    2014-01-01

    We compared a statewide telephone health survey with electronic health record (EHR) data from a large Wisconsin health system to estimate asthma prevalence in Wisconsin. We developed frequency tables and logistic regression models using Wisconsin Behavioral Risk Factor Surveillance System and University of Wisconsin primary care clinic data. We compared adjusted odds ratios (AORs) from each model. Between 2007 and 2009, the EHR database contained 376,000 patients (30,000 with asthma), and 23,000 (1850 with asthma) responded to the Behavioral Risk Factor Surveillance System telephone survey. AORs for asthma were similar in magnitude and direction for the majority of covariates, including gender, age, and race/ethnicity, between survey and EHR models. The EHR data had greater statistical power to detect associations than did survey data, especially in pediatric and ethnic populations, because of larger sample sizes. EHRs can be used to estimate asthma prevalence in Wisconsin adults and children. EHR data may improve public health chronic disease surveillance using high-quality data at the local level to better identify areas of disparity and risk factors and guide education and health care interventions.

  7. iCARE

    Cancer.gov

    The iCARE R package allows researchers to quickly build absolute risk models and apply them to estimate an individual's risk of developing disease during a specified time interval, based on a set of user-defined input parameters.
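
    iCARE's actual interface is documented with the R package itself; as a rough illustration only, the absolute-risk idea — a baseline hazard scaled by a log-linear relative risk and accumulated over the interval — can be sketched as follows. All numbers and parameter names here are hypothetical, not iCARE's API:

```python
import math

# Illustrative baseline disease hazard per year of age (hypothetical values).
baseline_hazard = {50: 0.002, 51: 0.002, 52: 0.003, 53: 0.003, 54: 0.004}

def absolute_risk(betas, covariates, hazard_by_age):
    """Absolute risk over the interval: 1 - exp(-cumulative hazard x RR)."""
    rr = math.exp(sum(b * x for b, x in zip(betas, covariates)))  # relative risk
    cumulative = sum(h * rr for h in hazard_by_age.values())      # cumulative hazard
    return 1.0 - math.exp(-cumulative)

risk = absolute_risk(betas=[0.4, 0.1], covariates=[1, 2],
                     hazard_by_age=baseline_hazard)
print(round(risk, 4))  # → 0.0252
```

    A full absolute-risk model additionally accounts for competing mortality and calibrates the baseline hazard so population-average risk matches registry rates; the sketch omits both.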

  8. Estimating the Return on Investment From a Health Risk Management Program Offered to Small Colorado-Based Employers

    PubMed Central

    Goetzel, Ron Z.; Tabrizi, Maryam; Henke, Rachel Mosher; Benevent, Richele; Brockbank, Claire v. S.; Stinson, Kaylan; Trotter, Margo; Newman, Lee S.

    2015-01-01

    Objective To determine whether changes in health risks for workers in small businesses can produce medical and productivity cost savings. Methods A 1-year pre- and posttest study tracked changes in 10 modifiable health risks for 2458 workers at 121 Colorado businesses that participated in a comprehensive worksite health promotion program. Risk reductions were entered into a return-on-investment (ROI) simulation model. Results Reductions were recorded in 10 risk factors examined, including obesity (−2.0%), poor eating habits (−5.8%), poor physical activity (−6.5%), tobacco use (−1.3%), high alcohol consumption (−1.7%), high stress (−3.5%), depression (−2.3%), high blood pressure (−0.3%), high total cholesterol (−0.9%), and high blood glucose (−0.2%). The ROI model estimated medical and productivity savings of $2.03 for every $1.00 invested. Conclusions Pooled data suggest that small businesses can realize a positive ROI from effective risk reduction programs. PMID:24806569
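
    The reported ROI is simple arithmetic: total (medical plus productivity) savings divided by program investment. A sketch with hypothetical cost figures chosen to echo the reported $2.03 per $1.00 ratio:

```python
# All cost figures below are hypothetical; only the resulting ratio
# mirrors the $2.03 : $1.00 ROI reported in the study.
program_cost = 100_000.0          # program investment
medical_savings = 125_000.0       # medical cost savings
productivity_savings = 78_000.0   # productivity savings

roi = (medical_savings + productivity_savings) / program_cost
print(f"${roi:.2f} saved per $1.00 invested")
```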

  9. Projecting Sexual and Injecting HIV Risks into Future Outcomes with Agent-Based Modeling

    NASA Astrophysics Data System (ADS)

    Bobashev, Georgiy V.; Morris, Robert J.; Zule, William A.

    Longitudinal studies of HIV health outcomes can be very costly, cumbersome, and unrepresentative of the at-risk population. Conversely, cross-sectional approaches can be representative but rely on retrospective information to estimate prevalence and incidence. We present an Agent-based Modeling (ABM) approach in which we use behavioral data from a cross-sectional representative study and project the behavior into the future, so that the risks of acquiring HIV can be studied in a dynamic, temporal sense. We show how the blend of behavioral and contact-network factors (sexual, injecting) plays a role in the risk of future HIV acquisition and the time until acquiring HIV. We show which subjects are the most likely to acquire HIV in the next year, and whom they are likely to infect. We examine how different behaviors relate to increases or decreases in HIV risk, and how to estimate quantifiable risk measures such as HIV-free survival.
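
    A minimal agent-based sketch of the kind of projection described — agents on a static contact network, with infection crossing contacts each period — under assumptions (random network, fixed per-contact transmission probability, weekly time step) that are illustrative rather than the authors':

```python
import random

rng = random.Random(7)
n_agents = 200
# Random static contact network (two random partners per agent; illustrative).
edges = [(i, rng.randrange(n_agents)) for i in range(n_agents) for _ in range(2)]
edges = [(a, b) for a, b in edges if a != b]

infected = {0}          # one seed case
transmit_p = 0.05       # hypothetical per-contact, per-period transmission prob.
history = []
for period in range(52):  # weekly steps over one projected year
    new = {b for a, b in edges if a in infected and rng.random() < transmit_p}
    new |= {a for a, b in edges if b in infected and rng.random() < transmit_p}
    infected |= new
    history.append(len(infected))

print(history[0], history[-1])  # projected prevalence at start vs end of year
```

    In a real projection the network structure and per-agent behaviors would come from the cross-sectional survey data, which is exactly what lets the ABM attribute future risk to specific individuals and contact patterns.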

  10. Developing ecologically based PCB, pesticide, and metal remedial goals for an impacted northeast wooded swamp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rury, P.M.; Turton, D.J.

    Historically, remedial goals at hazardous waste sites have been developed based on human health risk estimates. As the disciplines of remedial investigation, risk assessment, and remedial design have evolved, there has been a shift toward the development of remedial goals that are protective of both human health and the environment. This has increased the need for sound quantitative ecological risk methodologies from which to derive ecologically protective remedial goals. The foundation of many ecological risk assessment models is the bioconcentration or bioaccumulation factor, which estimates the partitioning of the compound of concern between the media (e.g., water, soil, or food) and the organism. Simple dietary food-chain models are then used to estimate the dose and resulting risk to higher trophic levels. For a Superfund site that encompassed a northeastern wooded swamp, a PCB, pesticide, and metal uptake and toxicity study was conducted on the earthworm commonly known as the red wiggler (Eisenia foetida). The study resulted in site-specific sediment-to-earthworm bioconcentration factors for PCBs and a range of pesticides and metals. In addition, largemouth bass and yellow perch were collected from an impacted pond to identify PCB and pesticide concentrations in mink (Mustela vison) prey. Utilizing the empirical data and site-specific bioconcentration factors in food-chain models, potential risks to the American woodcock (Scolopax minor) and mink were assessed, and ecologically protective PCB, pesticide, and metal remedial goals for the sediments of the wooded swamp were developed.

  11. The conversion of exposures due to radon into the effective dose: the epidemiological approach.

    PubMed

    Beck, T R

    2017-11-01

    The risks and dose conversion coefficients for residential and occupational exposures due to radon were determined by applying the epidemiological risk models to ICRP representative populations. The dose conversion coefficient for residential radon was estimated at 1.6 mSv year⁻¹ per 100 Bq m⁻³ (3.6 mSv per WLM), which is significantly lower than the corresponding value derived from the biokinetic and dosimetric models. The dose conversion coefficient for occupational exposures, applying the risk models for miners, was estimated at 14 mSv per WLM, which is in good accordance with the results of the dosimetric models. To resolve the discrepancy regarding residential radon, the ICRP approaches for the determination of risks and doses were reviewed. It could be shown that ICRP overestimates the risk of lung cancer caused by residential radon. This can be attributed to an incorrect population weighting of the radon-induced risks in its epidemiological approach. With the approach in this work, the average risks of lung cancer were determined, taking into account the age-specific risk contributions of all individuals in the population. As a result, a lower risk coefficient for residential radon was obtained. The results from the ICRP biokinetic and dosimetric models, for both the occupationally exposed working-age population and the whole population exposed to residential radon, can be brought into better accordance with the corresponding results of the epidemiological approach if the respective relative radiation detriments and a radiation weighting factor for alpha particles of about ten are used.

  12. Spatial variability of excess mortality during prolonged dust events in a high-density city: a time-stratified spatial regression approach.

    PubMed

    Wong, Man Sing; Ho, Hung Chak; Yang, Lin; Shi, Wenzhong; Yang, Jinxin; Chan, Ta-Chien

    2017-07-24

    Dust events have long been recognized to be associated with a higher mortality risk. However, no study has investigated how prolonged dust events affect the spatial variability of mortality across districts in a downwind city. In this study, we applied a spatial regression approach to estimate district-level mortality during two extreme dust events in Hong Kong. We compared spatial and non-spatial models to evaluate the ability of each regression to estimate mortality. We also compared prolonged dust events with non-dust events to determine the influences of community factors on mortality across the city. The density of the built environment (estimated by the sky view factor) had a positive association with excess mortality in each district, while socioeconomic deprivation, reflected in lower income and lower education, was associated with a higher mortality impact in each territory planning unit during a prolonged dust event. Based on the model comparison, spatial error modelling with first-order queen contiguity consistently outperformed the other models. The high-risk areas with the largest increases in mortality were located in an urban high-density environment with higher socioeconomic deprivation. Our model design shows the ability to predict spatial variability of mortality risk during an extreme weather event that cannot be estimated with traditional time-series analysis or ecological studies. Our spatial protocol can be used for public health surveillance, sustainable planning and disaster preparation when relevant data are available.
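
    The "first-order queen contiguity" used to specify the spatial weights can be illustrated on a toy grid of districts: two units are neighbours if they touch at an edge or a corner. In a spatial error model y = Xβ + u, u = λWu + ε, the row-standardised weights matrix W is built from exactly such neighbour lists (the 3×3 grid below is a made-up example, not Hong Kong's planning units):

```python
# First-order queen contiguity on a toy 3x3 grid of districts.
rows, cols = 3, 3

def queen_neighbours(r, c):
    """All cells sharing an edge or a corner with (r, c)."""
    return [(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0) and 0 <= r + dr < rows and 0 <= c + dc < cols]

# Row-standardised spatial weights, as used in spatial error models.
weights = {}
for r in range(rows):
    for c in range(cols):
        nbrs = queen_neighbours(r, c)
        weights[(r, c)] = {n: 1.0 / len(nbrs) for n in nbrs}

print(len(weights[(1, 1)]), len(weights[(0, 0)]))  # centre vs corner → 8 3
```

    Real analyses build W from district polygons (e.g. with a spatial statistics library) rather than a regular grid, but the row-standardisation shown — each unit's weights summing to one — is the same.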

  13. Combining operational models and data into a dynamic vessel risk assessment tool for coastal regions

    NASA Astrophysics Data System (ADS)

    Fernandes, R.; Braunschweig, F.; Lourenço, F.; Neves, R.

    2015-07-01

    The technological evolution in computational capacity, data acquisition systems, numerical modelling and operational oceanography is creating opportunities to design and build holistic approaches and complex tools for newer and more efficient management (planning, prevention and response) of coastal water pollution risk events. A combined methodology to dynamically estimate time- and space-variable shoreline risk levels from ships has been developed, integrating numerical metocean forecasts and oil spill simulations with vessel tracking from automatic identification systems (AIS). The risk rating combines the likelihood of an oil spill occurring from a vessel navigating in a study area - the Portuguese continental shelf - with the assessed consequences to the shoreline. The spill likelihood is based on dynamic marine weather conditions and statistical information from previous accidents. The shoreline consequences reflect the virtual spilled oil amount reaching the shoreline and its environmental and socio-economic vulnerabilities. The oil reaching the shoreline is quantified with an oil spill fate and behaviour model running multiple virtual spills from vessels over time. Shoreline risks can be computed in real time or from previously obtained data. Results show the ability of the proposed methodology to estimate risk with proper sensitivity to dynamic metocean conditions and to oil transport behaviour. The integration of metocean and oil spill models with coastal vulnerability and AIS data in the quantification of risk enhances maritime situational awareness and decision support, providing a more realistic approach to the assessment of shoreline impacts. Risk assessment from historical data can help identify typical risk patterns and "hot spots" or support sensitivity analyses for specific conditions, whereas real-time risk levels can be used in the prioritization of individual ships and geographical areas, strategic tug positioning, and the implementation of dynamic risk-based vessel traffic monitoring.
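
    The risk rating described — spill likelihood combined with shoreline consequence — reduces to a product once each factor is scored. A sketch with entirely hypothetical numbers (the real system derives each factor from metocean forecasts, AIS tracks, spill simulations and vulnerability maps):

```python
# Hypothetical risk rating: likelihood x consequence, with consequence
# scaled by shoreline vulnerability. All values are illustrative.
def shoreline_risk(p_spill, oil_ashore_t, vulnerability):
    """Risk index for one vessel under given conditions."""
    consequence = oil_ashore_t * vulnerability   # tonnes ashore, weighted
    return p_spill * consequence

# The same vessel under calm vs stormy metocean conditions:
calm = shoreline_risk(p_spill=1e-5, oil_ashore_t=40.0, vulnerability=0.3)
storm = shoreline_risk(p_spill=8e-5, oil_ashore_t=220.0, vulnerability=0.9)
print(storm > calm)  # → True: the rating rises with worsening conditions
```

    The dynamic character of the methodology comes from recomputing both factors continuously as forecasts and vessel positions update.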

  14. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
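
    The PFA combination of analysis-based failure estimates with test and flight experience is documented in the report itself; as a loosely analogous illustration only (not the PFA procedure), a Bayesian beta-binomial update of a failure probability with operating experience looks like this, with hypothetical prior and test counts:

```python
# Hypothetical prior from engineering analysis: Beta(0.5, 49.5),
# i.e. a mean failure probability of 1% per trial.
alpha, beta = 0.5, 49.5
tests, failures = 30, 0        # operating experience: 30 trials, no failures

# Conjugate update: successes and failures shift the Beta parameters.
alpha_post = alpha + failures
beta_post = beta + tests - failures
posterior_mean = alpha_post / (alpha_post + beta_post)
print(posterior_mean)          # lower than the 1% prior mean
```

    The qualitative behaviour matches the abstract's point: accumulating favourable test experience tightens and lowers the failure-probability estimate relative to the analysis-only prior.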

  15. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  16. Modelling preventive effectiveness to estimate the equity tipping point: at what coverage can individual preventive interventions reduce socioeconomic disparities in diabetes risk?

    PubMed

    Manuel, D G; Ho, T H; Harper, S; Anderson, G M; Lynch, J; Rosella, L C

    2014-07-01

    Most individual preventive therapies potentially narrow or widen health disparities depending on the difference in community effectiveness across socioeconomic position (SEP). The equity tipping point (defined as the point at which health disparities become larger) can be calculated by varying components of community effectiveness such as baseline risk of disease, intervention coverage and/or intervention efficacy across SEP. We used a simple modelling approach to estimate the community effectiveness of diabetes prevention across SEP in Canada under different scenarios of intervention coverage. Five-year baseline diabetes risk differed between the lowest and highest income groups by 1.76%. Assuming complete coverage across all income groups, the difference was reduced to 0.90% (144 000 cases prevented) with lifestyle interventions and 1.24% (88 100 cases prevented) with pharmacotherapy. The equity tipping point was estimated to be a coverage difference of 30% for preventive interventions (100% and 70% coverage among the highest and lowest income earners, respectively). Disparities in diabetes risk could be measurably reduced if existing interventions were equally adopted across SEP. However, disparities in coverage could lead to increased inequity in risk. Simple modelling approaches can be used to examine the community effectiveness of individual preventive interventions and their potential to reduce (or increase) disparities. The equity tipping point can be used as a critical threshold for disparities analyses.
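
    The tipping-point logic is plain arithmetic on risk differences across socioeconomic groups. A sketch with hypothetical baseline risks and efficacy chosen to echo the reported 1.76% gap (the study's own tipping point of a 30% coverage difference depends on its full model, which these toy numbers do not reproduce):

```python
# Hypothetical 5-year diabetes risks by income group and intervention efficacy.
risk_low_sep, risk_high_sep = 0.0416, 0.0240   # lowest vs highest income
efficacy = 0.49                                 # relative risk reduction

def risk_gap(coverage_low, coverage_high):
    """Post-intervention risk difference between lowest and highest income groups."""
    r_low = risk_low_sep * (1 - efficacy * coverage_low)
    r_high = risk_high_sep * (1 - efficacy * coverage_high)
    return r_low - r_high

baseline_gap = risk_gap(0.0, 0.0)    # no intervention
equal_gap = risk_gap(1.0, 1.0)       # full, equal coverage narrows the gap
unequal_gap = risk_gap(0.70, 1.0)    # coverage shortfall among the poor widens it
print(round(baseline_gap, 4), equal_gap < baseline_gap, unequal_gap > equal_gap)
```

    Sweeping the coverage difference until the gap returns to (or exceeds) its baseline value is exactly how an equity tipping point is located.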

  17. Using the negative exponential distribution to quantitatively review the evidence on how rapidly the excess risk of ischaemic heart disease declines following quitting smoking.

    PubMed

    Lee, Peter N; Fry, John S; Hamling, Jan S

    2012-10-01

    No previous review has formally modelled the decline in IHD risk following quitting smoking. From PubMed searches and other sources, we identified 15 prospective and eight case-control studies that compared IHD risk in current smokers, never smokers, and quitters by time period since quitting, with some studies providing separate blocks of results by sex, age or amount smoked. For each of 41 independent blocks we estimated, using the negative exponential model, the time, H, at which the excess risk fell to half that caused by smoking. Goodness-of-fit to the model was adequate for 35 blocks; the others showed a non-monotonic pattern of decline following quitting, with a variable pattern of misfit. After omitting one block with a current-smoker RR of 1.0, the combined estimate of H was 4.40 (95% CI 3.26-5.95) years. There was considerable heterogeneity, H being <2 years for 10 blocks and >10 years for 12. H increased (p<0.001) with mean age at study start, but not clearly with other factors. Sensitivity analyses allowing for reverse causation, or varying the assumed midpoint times for the final open-ended quitting period, had little effect on goodness-of-fit or the combined estimate. The US Surgeon-General's view that excess risk approximately halves after a year's abstinence seems over-optimistic. Copyright © 2012 Elsevier Inc. All rights reserved.
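
    The negative exponential model used in the review implies that the excess relative risk halves every H years. With the combined estimate H = 4.40 years from the abstract and an illustrative current-smoker RR (the RR value below is hypothetical):

```python
import math

def relative_risk(t, rr_current, half_life):
    """RR t years after quitting: excess risk decays exponentially toward 1."""
    excess = (rr_current - 1.0) * math.exp(-math.log(2.0) * t / half_life)
    return 1.0 + excess

H = 4.40      # combined half-life estimate from the review (years)
rr0 = 2.0     # illustrative current-smoker RR
print(round(relative_risk(0, rr0, H), 2),   # 2.0 at quitting
      round(relative_risk(H, rr0, H), 2))   # 1.5: excess halved after H years
```

    Under this model, a half-life of 4.40 years means only about 15% of the excess risk remains a decade after quitting, yet far more than half remains after one year, which is the abstract's objection to the Surgeon-General's one-year figure.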

  18. Cost-effectiveness analysis of the use of high-flow oxygen through nasal cannula in intensive care units in NHS England.

    PubMed

    Eaton Turner, Emily; Jenks, Michelle

    2018-06-01

    To estimate the cost-effectiveness of Nasal High Flow (NHF) in the intensive care unit (ICU) compared with standard oxygen or non-invasive ventilation (NIV) from a UK NHS perspective. Three cost-effectiveness models were developed to reflect scenarios of NHF use: first-line therapy (pre-intubation model), and post-extubation in low-risk and in high-risk patients. All models used randomized controlled trial data on the incidence of intubation/re-intubation, events leading to intubation/re-intubation, mortality and complications. NHS reference costs were primarily used. Sensitivity analyses were conducted. When used as first-line therapy, Optiflow™ NHF gives an estimated cost-saving of £469 per patient compared with standard oxygen and £611 versus NIV. NHF cost-savings for the high-severity sub-group were £727 versus standard oxygen and £1,011 versus NIV. For low-risk post-extubation patients, NHF generates an estimated cost-saving of £156 versus standard oxygen. NHF decreases the number of re-intubations required in these scenarios. Results were robust in most sensitivity analyses. For high-risk post-extubation patients, NHF cost-savings were £104 versus NIV. NHF results in a non-significant increase in re-intubations required; however, a reduction in respiratory failure offsets this. For patients in the ICU who are at risk of intubation or re-intubation, NHF cannula is likely to be cost-saving.

  19. Applying the Seattle Heart Failure Model in the Office Setting in the Era of Electronic Medical Records.

    PubMed

    Williams, Brent A; Agarwal, Shikhar

    2018-02-23

    Prediction models such as the Seattle Heart Failure Model (SHFM) can help guide the management of heart failure (HF) patients, but the SHFM has not been validated in the office environment. This retrospective cohort study assessed the predictive performance of the SHFM among patients with new or pre-existing HF in the context of an office visit. Methods and Results: SHFM elements were ascertained through electronic medical records at an office visit. The primary outcome was all-cause mortality. A "warranty period" for the baseline SHFM risk estimate was sought by examining predictive performance over time through a series of landmark analyses. Discrimination and calibration were estimated according to the proposed warranty period. Low- and high-risk thresholds were proposed based on the distribution of SHFM estimates. Among 26,851 HF patients, 14,380 (54%) died over a mean 4.7-year follow-up period. The SHFM lost predictive performance over time, with C=0.69 within 3 months of baseline and C<0.65 beyond 12 months. The diminishing predictive value was attributed to modifiable SHFM elements. Discrimination (C=0.66) and calibration for 12-month mortality were acceptable. A low-risk threshold of ∼5% mortality risk within 12 months identifies the 10% of HF patients in the office setting with the lowest risk. The SHFM has utility in the office environment.
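
    The C-statistic used above to summarise SHFM discrimination is the fraction of death/survivor pairs in which the model assigned the death the higher risk score. A sketch on made-up scores and outcomes (not study data):

```python
from itertools import combinations

# Hypothetical predicted risk scores and observed outcomes (1 = died).
scores = [0.90, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20, 0.10]
died   = [1,    1,    0,    1,    0,    1,    0,    0   ]

concordant = ties = total = 0
for (s1, d1), (s2, d2) in combinations(zip(scores, died), 2):
    if d1 == d2:
        continue                      # only death/survivor pairs are informative
    total += 1
    death_score = s1 if d1 else s2
    surv_score = s2 if d1 else s1
    if death_score > surv_score:
        concordant += 1
    elif death_score == surv_score:
        ties += 1

c_stat = (concordant + 0.5 * ties) / total
print(c_stat)  # → 0.8125
```

    A C-statistic of 0.5 is chance-level discrimination, which is why the reported drift from C=0.69 toward C<0.65 over time signals a shrinking "warranty period" for the baseline estimate.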

  20. 77 FR 41985 - Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-17

    ... Evaluation and Research (CBER) and suggestions for further development. The public workshop will include... Evaluation and Research (HFM-210), Food and Drug Administration, 1401 Rockville Pike, suite 200N, Rockville... models to generate quantitative estimates of the benefits and risks of influenza vaccination. The public...
