Sample records for precise risk estimates

  1. The 2006 William Feinberg lecture: shifting the paradigm from stroke to global vascular risk estimation.

    PubMed

    Sacco, Ralph L

    2007-06-01

    By the year 2010, it is estimated that 18.1 million people worldwide will die annually because of cardiovascular diseases and stroke. "Global vascular risk" more broadly includes the multiple overlapping disease silos of stroke, myocardial infarction, peripheral arterial disease, and vascular death. Estimation of global vascular risk requires consideration of a variety of variables including demographics, environmental behaviors, and risk factors. Data from multiple studies suggest continuous linear relationships between vascular risk and the physiological risk modulators of blood pressure, lipids, and blood glucose, arguing against treating these conditions as categorical risk factors. Constellations of risk factors may be more relevant than individual categorical components. Exciting work on novel risk factors suggests they may also have predictive value in estimates of global vascular risk. Advances in imaging have led to the measurement of subclinical conditions such as carotid intima-media thickness and subclinical brain conditions such as white matter hyperintensities and silent infarcts. These subclinical measurements may be intermediate stages in the transition from asymptomatic to symptomatic vascular events, appear to be associated with the fundamental vascular risk factors, and represent opportunities to more precisely quantitate disease progression. The expansion of studies in molecular epidemiology and detection of genetic markers underlying vascular risks also promises to extend our precision of global vascular risk estimation. Global vascular risk estimation will require quantitative methods that bundle these multi-dimensional data into more precise estimates of future risk. The power of genetic information coupled with data on demographics, risk-inducing behaviors, vascular risk modulators, biomarkers, and measures of subclinical conditions should provide the most realistic approximation of an individual's future global vascular risk. The ultimate public health benefit, however, will depend on not only identification of global vascular risk but also the realization that we can modify this risk and prove the prediction models wrong.

  2. Effect of follow-up period on minimal-significant dose in the atomic-bomb survivor studies.

    PubMed

    Cologne, John; Preston, Dale L; Grant, Eric J; Cullings, Harry M; Ozasa, Kotaro

    2018-03-01

    It was recently suggested that earlier reports on solid-cancer mortality and incidence in the Life Span Study of atomic-bomb survivors contain still-useful information about low-dose risk that should not be ignored, because longer follow-up may lead to attenuated estimates of low-dose risk due to longer time since exposure. Here it is demonstrated, through the use of all follow-up data and risk models stratified on period of follow-up (as opposed to sub-setting the data by follow-up period), that the appearance of risk attenuation over time may be the result of less-precise risk estimation (in particular, imprecise estimation of effect-modification parameters) in the earlier periods. Longer follow-up, in addition to allowing more-precise estimation of risk due to larger numbers of radiation-related cases, provides more-precise adjustment for background mortality or incidence and more-accurate assessment of risk modification by age at exposure and attained age. It is concluded that the latest follow-up data are most appropriate for inferring low-dose risk. Furthermore, if researchers are interested in effects of time since exposure, the most-recent follow-up data should be considered rather than the results of earlier reports.

  3. Estimating Skin Cancer Risk: Evaluating Mobile Computer-Adaptive Testing.

    PubMed

    Djaja, Ngadiman; Janda, Monika; Olsen, Catherine M; Whiteman, David C; Chien, Tsair-Wei

    2016-01-22

    Response burden is a major detriment to questionnaire completion rates. Computer adaptive testing may offer advantages over non-adaptive testing, including reduction of numbers of items required for precise measurement. Our aim was to compare the efficiency of non-adaptive (NAT) and computer adaptive testing (CAT) facilitated by Partial Credit Model (PCM)-derived calibration to estimate skin cancer risk. We used a random sample from a population-based Australian cohort study of skin cancer risk (N=43,794). All 30 items of the skin cancer risk scale were calibrated with the Rasch PCM. A total of 1000 cases generated following a normal distribution (mean [SD] 0 [1]) were simulated using three Rasch models with three fixed-item (dichotomous, rating scale, and partial credit) scenarios, respectively. We calculated the comparative efficiency and precision of CAT and NAT (shortening of questionnaire length and the count difference number ratio less than 5% using independent t tests). We found that use of CAT led to smaller person standard error of the estimated measure than NAT, with substantially higher efficiency but no loss of precision, reducing response burden by 48%, 66%, and 66% for dichotomous, Rating Scale Model, and PCM models, respectively. CAT-based administrations of the skin cancer risk scale could substantially reduce participant burden without compromising measurement precision. A mobile computer adaptive test was developed to help people efficiently assess their skin cancer risk.
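
    For readers unfamiliar with how a CAT shortens a questionnaire, the following is a minimal sketch of adaptive administration under a dichotomous Rasch model: select the remaining item with maximum information at the current ability estimate, update the estimate with a Newton-Raphson step, and stop once the person standard error falls below a target. The study above used a Partial Credit Model and real calibration data; the dichotomous model, item difficulties, stopping rule, and simulated respondent here are illustrative assumptions only.

    ```python
    import numpy as np

    def prob_correct(theta, b):
        """Rasch (one-parameter logistic) probability of an endorsed response."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def simulate_cat(item_difficulties, responder, se_target=0.5, max_items=30):
        """Minimal dichotomous-Rasch CAT: maximum-information item selection,
        Newton-Raphson ability updates, stop once the person SE falls below se_target."""
        b = np.asarray(item_difficulties, dtype=float)
        remaining = list(range(len(b)))
        administered, responses = [], []
        theta, se = 0.0, np.inf
        for _ in range(max_items):
            # choose the remaining item with maximum Fisher information at current theta
            info_each = [prob_correct(theta, b[j]) * (1 - prob_correct(theta, b[j]))
                         for j in remaining]
            j = remaining.pop(int(np.argmax(info_each)))
            administered.append(j)
            responses.append(responder(b[j]))
            # one Newton-Raphson step toward the maximum-likelihood ability estimate
            p = prob_correct(theta, b[administered])
            info = np.sum(p * (1 - p))
            theta = float(np.clip(theta + (np.sum(responses) - np.sum(p)) / info, -4, 4))
            se = 1.0 / np.sqrt(info)
            if se < se_target:
                break
        return theta, se, len(administered)

    # Example: a simulated respondent with true ability 0.8 answering 30 calibrated items
    rng = np.random.default_rng(1)
    difficulties = np.linspace(-2, 2, 30)
    true_theta = 0.8
    responder = lambda b: int(rng.random() < prob_correct(true_theta, b))
    theta_hat, se, n_used = simulate_cat(difficulties, responder)
    print(f"ability estimate {theta_hat:.2f} (SE {se:.2f}) after {n_used} of 30 items")
    ```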

  4. Commentary on Holmes et al. (2007): resolving the debate on when extinction risk is predictable.

    PubMed

    Ellner, Stephen P; Holmes, Elizabeth E

    2008-08-01

    We reconcile the findings of Holmes et al. (Ecology Letters, 10, 2007, 1182) that 95% confidence intervals for quasi-extinction risk were narrow for many vertebrates of conservation concern, with previous theory predicting wide confidence intervals. We extend previous theory, concerning the precision of quasi-extinction estimates as a function of population dynamic parameters, prediction intervals and quasi-extinction thresholds, and provide an approximation that specifies the prediction interval and threshold combinations where quasi-extinction estimates are precise (vs. imprecise). This allows PVA practitioners to define the prediction interval and threshold regions of safety (low risk with high confidence), danger (high risk with high confidence), and uncertainty.
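
    The commentary builds on diffusion-approximation theory for population viability analysis. As a point of reference, below is a minimal sketch of the textbook first-passage result for the probability that log-abundance, modelled as Brownian motion with drift, falls to a quasi-extinction threshold within a prediction interval; it is not the authors' extended approximation, and the parameter values are illustrative.

    ```python
    from math import erf, exp, log, sqrt

    def norm_cdf(z):
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    def quasi_extinction_prob(n0, n_threshold, mu, sigma2, t_horizon):
        """Probability that log-abundance, modelled as Brownian motion with drift mu
        and variance sigma2, reaches the quasi-extinction threshold within t_horizon
        years (standard diffusion-approximation PVA result)."""
        d = log(n0 / n_threshold)            # log-distance to the threshold
        s = sqrt(sigma2 * t_horizon)
        return (norm_cdf((-d - mu * t_horizon) / s)
                + exp(-2.0 * mu * d / sigma2) * norm_cdf((-d + mu * t_horizon) / s))

    # Illustrative values only: a declining population (mu < 0) vs a stable one (mu = 0)
    for mu in (-0.02, 0.0):
        p = quasi_extinction_prob(n0=1000, n_threshold=100, mu=mu, sigma2=0.01, t_horizon=50)
        print(f"mu={mu:+.2f}: P(quasi-extinction within 50 yr) = {p:.3f}")
    ```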

  5. Risk assessment in the 21st century: roadmap and matrix.

    PubMed

    Embry, Michelle R; Bachman, Ammie N; Bell, David R; Boobis, Alan R; Cohen, Samuel M; Dellarco, Michael; Dewhurst, Ian C; Doerrer, Nancy G; Hines, Ronald N; Moretto, Angelo; Pastoor, Timothy P; Phillips, Richard D; Rowlands, J Craig; Tanir, Jennifer Y; Wolf, Douglas C; Doe, John E

    2014-08-01

    The RISK21 integrated evaluation strategy is a problem formulation-based exposure-driven risk assessment roadmap that takes advantage of existing information to graphically represent the intersection of exposure and toxicity data on a highly visual matrix. This paper describes in detail the process for using the roadmap and matrix. The purpose of this methodology is to optimize the use of prior information and testing resources (animals, time, facilities, and personnel) to efficiently and transparently reach a risk and/or safety determination. Based on the particular problem, exposure and toxicity data should have sufficient precision to make such a decision. Estimates of exposure and toxicity, bounded by variability and/or uncertainty, are plotted on the X- and Y-axes of the RISK21 matrix, respectively. The resulting intersection is a highly visual representation of estimated risk. Decisions can then be made to increase precision in the exposure or toxicity estimates or declare that the available information is sufficient. RISK21 represents a step forward in the goal to introduce new methodologies into 21st century risk assessment. Indeed, because of its transparent and visual process, RISK21 has the potential to widen the scope of risk communication beyond those with technical expertise.

  6. Estimating Rates of Motor Vehicle Crashes Using Medical Encounter Data: A Feasibility Study

    DTIC Science & Technology

    2015-11-05

    used to develop more detailed predictive risk models as well as strategies for preventing specific types of MVCs. Systematic Review of Evidence... used to estimate rates of accident-related injuries more generally, but not with specific reference to MVCs. For the present report, rates of... precise rate estimates based on person-years rather than active duty strength, (e) multivariable effects of specific risk/protective factors after

  7. OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN

    EPA Science Inventory

    An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...

  8. Use of Longitudinal Data in Genetic Studies in the Genome-wide Association Studies Era: Summary of Group 14

    PubMed Central

    Kerner, Berit; North, Kari E; Fallin, M Daniele

    2010-01-01

    Participants analyzed actual and simulated longitudinal data from the Framingham Heart Study for various metabolic and cardiovascular traits. The genetic information incorporated into these investigations ranged from selected single-nucleotide polymorphisms to genome-wide association arrays. Genotypes were incorporated using a broad range of methodological approaches including conditional logistic regression, linear mixed models, generalized estimating equations, linear growth curve estimation, growth modeling, growth mixture modeling, population attributable risk fraction based on survival functions under the proportional hazards models, and multivariate adaptive splines for the analysis of longitudinal data. The specific scientific questions addressed by these different approaches also varied, ranging from a more precise definition of the phenotype, bias reduction in control selection, estimation of effect sizes and genotype associated risk, to direct incorporation of genetic data into longitudinal modeling approaches and the exploration of population heterogeneity with regard to longitudinal trajectories. The group reached several overall conclusions: 1) The additional information provided by longitudinal data may be useful in genetic analyses. 2) The precision of the phenotype definition as well as control selection in nested designs may be improved, especially if traits demonstrate a trend over time or have strong age-of-onset effects. 3) Analyzing genetic data stratified for high-risk subgroups defined by a unique development over time could be useful for the detection of rare mutations in common multi-factorial diseases. 4) Estimation of the population impact of genomic risk variants could be more precise. The challenges and computational complexity demanded by genome-wide single-nucleotide polymorphism data were also discussed. PMID:19924713

  9. Evidence-based Guidelines for Precision Risk Stratification-Based Screening (PRSBS) for Colorectal Cancer: Lessons learned from the US Armed Forces: Consensus and Future Directions

    PubMed Central

    Avital, Itzhak; Langan, Russell C.; Summers, Thomas A.; Steele, Scott R.; Waldman, Scott A.; Backman, Vadim; Yee, Judy; Nissan, Aviram; Young, Patrick; Womeldorph, Craig; Mancusco, Paul; Mueller, Renee; Noto, Khristian; Grundfest, Warren; Bilchik, Anton J.; Protic, Mladjan; Daumer, Martin; Eberhardt, John; Man, Yan Gao; Brücher, Björn LDM; Stojadinovic, Alexander

    2013-01-01

    Colorectal cancer (CRC) is the third most common cause of cancer-related death in the United States (U.S.), with estimates of 143,460 new cases and 51,690 deaths for the year 2012. Numerous organizations have published guidelines for CRC screening; however, these numerical estimates of incidence and disease-specific mortality have remained stable from years prior. Technological, genetic profiling, molecular and surgical advances in our modern era should allow us to improve risk stratification of patients with CRC and identify those who may benefit from preventive measures, early aggressive treatment, alternative treatment strategies, and/or frequent surveillance for the early detection of disease recurrence. To better negotiate future economic constraints and enhance patient outcomes, ultimately, we propose to apply the principles of personalized and precise cancer care to risk-stratify patients for CRC screening (Precision Risk Stratification-Based Screening, PRSBS). We believe that genetic, molecular, ethnic and socioeconomic disparities impact oncological outcomes in general and those related to CRC in particular. This document highlights evidence-based screening recommendations and risk stratification methods in response to our CRC working group private-public consensus meeting held in March 2012. Our aim was to address how we could improve CRC risk stratification-based screening, and to provide a vision for the future for achieving superior survival rates for patients diagnosed with CRC. PMID:23459409

  10. The impact of different strategies to handle missing data on both precision and bias in a drug safety study: a multidatabase multinational population-based cohort study

    PubMed Central

    Martín-Merino, Elisa; Calderón-Larrañaga, Amaia; Hawley, Samuel; Poblador-Plou, Beatriz; Llorente-García, Ana; Petersen, Irene; Prieto-Alhambra, Daniel

    2018-01-01

    Background: Missing data are often an issue in electronic medical records (EMRs) research. However, there are many ways that people deal with missing data in drug safety studies. Aim: To compare the risk estimates resulting from different strategies for the handling of missing data in the study of venous thromboembolism (VTE) risk associated with antiosteoporotic medications (AOM). Methods: New users of AOM (alendronic acid, other bisphosphonates, strontium ranelate, selective estrogen receptor modulators, teriparatide, or denosumab) aged ≥50 years during 1998–2014 were identified in two Spanish (the Base de datos para la Investigación Farmacoepidemiológica en Atención Primaria [BIFAP] and EpiChron cohort) and one UK (Clinical Practice Research Datalink [CPRD]) EMR. Hazard ratios (HRs) according to AOM (with alendronic acid as reference) were calculated adjusting for VTE risk factors, body mass index (that was missing in 61% of patients included in the three databases), and smoking (that was missing in 23% of patients) in the year of AOM therapy initiation. HRs and standard errors obtained using cross-sectional multiple imputation (MI) (reference method) were compared to complete case (CC) analysis – using only patients with complete data – and longitudinal MI – adding to the cross-sectional MI model the body mass index/smoking values as recorded in the year before and after therapy initiation. Results: Overall, 422/95,057 (0.4%), 19/12,688 (0.1%), and 2,051/161,202 (1.3%) VTE cases/participants were seen in BIFAP, EpiChron, and CPRD, respectively. HRs moved from 100.00% underestimation to 40.31% overestimation in CC compared with cross-sectional MI, while longitudinal MI methods provided similar risk estimates compared with cross-sectional MI. Precision for HR improved in cross-sectional MI versus CC by up to 160.28%, while longitudinal MI improved precision (compared with cross-sectional) only minimally (up to 0.80%). Conclusion: CC may substantially affect relative risk estimation in EMR-based drug safety studies, since missing data are not often completely at random. Little improvement was seen in these data in terms of power with the inclusion of longitudinal MI compared with cross-sectional MI. The strategy for handling missing data in drug safety studies can have a large impact on both risk estimates and precision.
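
    The cross-sectional and longitudinal MI analyses above pool results across imputed datasets. A minimal sketch of that pooling step (Rubin's rules, with a normal-approximation confidence interval) is shown below; the log hazard ratios and standard errors are illustrative numbers rather than the study's results, and the imputation models themselves are omitted.

    ```python
    import numpy as np

    def pool_rubins_rules(estimates, variances):
        """Pool point estimates (e.g., log hazard ratios) and their squared standard
        errors from m imputed datasets using Rubin's rules."""
        estimates = np.asarray(estimates, dtype=float)
        variances = np.asarray(variances, dtype=float)
        m = len(estimates)
        q_bar = estimates.mean()                      # pooled estimate
        u_bar = variances.mean()                      # within-imputation variance
        b = estimates.var(ddof=1)                     # between-imputation variance
        total_var = u_bar + (1 + 1 / m) * b
        return q_bar, np.sqrt(total_var)

    # Illustrative only: log-HRs and standard errors from m = 5 imputed analyses
    log_hrs = [0.31, 0.27, 0.35, 0.29, 0.33]
    ses = [0.10, 0.11, 0.10, 0.12, 0.11]
    est, se = pool_rubins_rules(log_hrs, [s**2 for s in ses])
    print(f"pooled HR = {np.exp(est):.2f}, 95% CI "
          f"{np.exp(est - 1.96*se):.2f} to {np.exp(est + 1.96*se):.2f}")
    ```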

  11. The impact of different strategies to handle missing data on both precision and bias in a drug safety study: a multidatabase multinational population-based cohort study.

    PubMed

    Martín-Merino, Elisa; Calderón-Larrañaga, Amaia; Hawley, Samuel; Poblador-Plou, Beatriz; Llorente-García, Ana; Petersen, Irene; Prieto-Alhambra, Daniel

    2018-01-01

    Missing data are often an issue in electronic medical records (EMRs) research. However, there are many ways that people deal with missing data in drug safety studies. To compare the risk estimates resulting from different strategies for the handling of missing data in the study of venous thromboembolism (VTE) risk associated with antiosteoporotic medications (AOM). New users of AOM (alendronic acid, other bisphosphonates, strontium ranelate, selective estrogen receptor modulators, teriparatide, or denosumab) aged ≥50 years during 1998-2014 were identified in two Spanish (the Base de datos para la Investigación Farmacoepidemiológica en Atención Primaria [BIFAP] and EpiChron cohort) and one UK (Clinical Practice Research Datalink [CPRD]) EMR. Hazard ratios (HRs) according to AOM (with alendronic acid as reference) were calculated adjusting for VTE risk factors, body mass index (that was missing in 61% of patients included in the three databases), and smoking (that was missing in 23% of patients) in the year of AOM therapy initiation. HRs and standard errors obtained using cross-sectional multiple imputation (MI) (reference method) were compared to complete case (CC) analysis - using only patients with complete data - and longitudinal MI - adding to the cross-sectional MI model the body mass index/smoking values as recorded in the year before and after therapy initiation. Overall, 422/95,057 (0.4%), 19/12,688 (0.1%), and 2,051/161,202 (1.3%) VTE cases/participants were seen in BIFAP, EpiChron, and CPRD, respectively. HRs moved from 100.00% underestimation to 40.31% overestimation in CC compared with cross-sectional MI, while longitudinal MI methods provided similar risk estimates compared with cross-sectional MI. Precision for HR improved in cross-sectional MI versus CC by up to 160.28%, while longitudinal MI improved precision (compared with cross-sectional) only minimally (up to 0.80%). CC may substantially affect relative risk estimation in EMR-based drug safety studies, since missing data are not often completely at random. Little improvement was seen in these data in terms of power with the inclusion of longitudinal MI compared with cross-sectional MI. The strategy for handling missing data in drug safety studies can have a large impact on both risk estimates and precision.

  12. A precision medicine approach for psychiatric disease based on repeated symptom scores.

    PubMed

    Fojo, Anthony T; Musliner, Katherine L; Zandi, Peter P; Zeger, Scott L

    2017-12-01

    For psychiatric diseases, rich information exists in the serial measurement of mental health symptom scores. We present a precision medicine framework for using the trajectories of multiple symptoms to make personalized predictions about future symptoms and related psychiatric events. Our approach fits a Bayesian hierarchical model that estimates a population-average trajectory for all symptoms and individual deviations from the average trajectory, then fits a second model that uses individual symptom trajectories to estimate the risk of experiencing an event. The fitted models are used to make clinically relevant predictions for new individuals. We demonstrate this approach on data from a study of antipsychotic therapy for schizophrenia, predicting future scores for positive, negative, and general symptoms, and the risk of treatment failure in 522 schizophrenic patients with observations over 8 weeks. While precision medicine has focused largely on genetic and molecular data, the complementary approach we present illustrates that innovative analytic methods for existing data can extend its reach more broadly. The systematic use of repeated measurements of psychiatric symptoms offers the promise of precision medicine in the field of mental health. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
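
    A heavily simplified, two-stage sketch of the idea, assuming a per-patient least-squares slope in place of the full Bayesian hierarchical trajectory model and entirely simulated data: summarize each patient's symptom trajectory, then relate the trajectory summary to the risk of treatment failure with logistic regression.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n_patients, n_weeks = 300, 8
    weeks = np.arange(n_weeks)

    # Simulated symptom scores: shared average level plus a patient-specific slope
    true_slopes = rng.normal(-1.0, 0.6, n_patients)
    scores = 80 + np.outer(true_slopes, weeks) + rng.normal(0, 3, (n_patients, n_weeks))

    # Stage 1 (simplified): per-patient least-squares slope as the trajectory summary
    X_time = sm.add_constant(weeks.astype(float))
    slopes = np.array([sm.OLS(scores[i], X_time).fit().params[1] for i in range(n_patients)])

    # Stage 2: patients whose symptoms improve less are more likely to fail treatment
    fail = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.5 * (slopes + 1.0)))))
    fit = sm.Logit(fail, sm.add_constant(slopes)).fit(disp=0)
    print(f"odds ratio for treatment failure per unit of slope: {np.exp(fit.params[1]):.2f}")
    ```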

  13. Leveraging prognostic baseline variables to gain precision in randomized trials

    PubMed Central

    Colantuoni, Elizabeth; Rosenblum, Michael

    2015-01-01

    We focus on estimating the average treatment effect in a randomized trial. If baseline variables are correlated with the outcome, then appropriately adjusting for these variables can improve precision. An example is the analysis of covariance (ANCOVA) estimator, which applies when the outcome is continuous, the quantity of interest is the difference in mean outcomes comparing treatment versus control, and a linear model with only main effects is used. ANCOVA is guaranteed to be at least as precise as the standard unadjusted estimator, asymptotically, under no parametric model assumptions and also is locally semiparametric efficient. Recently, several estimators have been developed that extend these desirable properties to more general settings that allow any real-valued outcome (e.g., binary or count), contrasts other than the difference in mean outcomes (such as the relative risk), and estimators based on a large class of generalized linear models (including logistic regression). To the best of our knowledge, we give the first simulation study in the context of randomized trials that compares these estimators. Furthermore, our simulations are not based on parametric models; instead, our simulations are based on resampling data from completed randomized trials in stroke and HIV in order to assess estimator performance in realistic scenarios. We provide practical guidance on when these estimators are likely to provide substantial precision gains and describe a quick assessment method that allows clinical investigators to determine whether these estimators could be useful in their specific trial contexts. PMID:25872751
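
    A minimal simulation sketch of the precision gain from covariate adjustment, comparing the unadjusted difference in means with the ANCOVA estimator in repeated simulated trials with one prognostic baseline covariate. The data-generating model is an illustrative assumption and is unrelated to the resampled stroke and HIV trials used in the paper.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n, n_sims, true_effect = 200, 2000, 0.5
    unadj, ancova = [], []
    for _ in range(n_sims):
        x = rng.normal(size=n)                       # prognostic baseline covariate
        treat = rng.integers(0, 2, size=n)           # 1:1 randomization
        y = true_effect * treat + 0.8 * x + rng.normal(size=n)
        # unadjusted estimator: difference in mean outcomes
        unadj.append(y[treat == 1].mean() - y[treat == 0].mean())
        # ANCOVA: linear model with main effects for treatment and the baseline covariate
        X = sm.add_constant(np.column_stack([treat, x]))
        ancova.append(sm.OLS(y, X).fit().params[1])
    print(f"unadjusted: mean {np.mean(unadj):.3f}, SD {np.std(unadj):.3f}")
    print(f"ANCOVA:     mean {np.mean(ancova):.3f}, SD {np.std(ancova):.3f}")
    ```

    Both estimators are unbiased for the treatment effect; the smaller SD for ANCOVA is the precision gain the paper quantifies in realistic trial data.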

  14. An approach for filtering hyperbolically positioned underwater acoustic telemetry data with position precision estimates

    USGS Publications Warehouse

    Meckley, Trevor D.; Holbrook, Christopher M.; Wagner, C. Michael; Binder, Thomas R.

    2014-01-01

    The use of position precision estimates that reflect the confidence in the positioning process should be considered prior to the use of biological filters that rely on a priori expectations of the subject’s movement capacities and tendencies. Position confidence goals should be determined based upon the needs of the research questions and analysis requirements versus arbitrary selection, in which filters of previous studies are adopted. Data filtering with this approach ensures that data quality is sufficient for the selected analyses and presents the opportunity to adjust or identify a different analysis in the event that the requisite precision was not attained. Ignoring these steps puts a practitioner at risk of reporting errant findings.

  15. Genetic markers enhance coronary risk prediction in men: the MORGAM prospective cohorts.

    PubMed

    Hughes, Maria F; Saarela, Olli; Stritzke, Jan; Kee, Frank; Silander, Kaisa; Klopp, Norman; Kontto, Jukka; Karvanen, Juha; Willenborg, Christina; Salomaa, Veikko; Virtamo, Jarmo; Amouyel, Phillippe; Arveiler, Dominique; Ferrières, Jean; Wiklund, Per-Gunner; Baumert, Jens; Thorand, Barbara; Diemert, Patrick; Trégouët, David-Alexandre; Hengstenberg, Christian; Peters, Annette; Evans, Alun; Koenig, Wolfgang; Erdmann, Jeanette; Samani, Nilesh J; Kuulasmaa, Kari; Schunkert, Heribert

    2012-01-01

    More accurate coronary heart disease (CHD) prediction, specifically in middle-aged men, is needed to reduce the burden of disease more effectively. We hypothesised that a multilocus genetic risk score could refine CHD prediction beyond classic risk scores and obtain more precise risk estimates using a prospective cohort design. Using data from nine prospective European cohorts, including 26,221 men, we selected in a case-cohort setting 4,818 healthy men at baseline, and used Cox proportional hazards models to examine associations between CHD and risk scores based on genetic variants representing 13 genomic regions. Over follow-up (range: 5-18 years), 1,736 incident CHD events occurred. Genetic risk scores were validated in men with at least 10 years of follow-up (632 cases, 1361 non-cases). Genetic risk score 1 (GRS1) combined 11 SNPs and two haplotypes, with effect estimates from previous genome-wide association studies. GRS2 combined 11 SNPs plus 4 SNPs from the haplotypes with coefficients estimated from these prospective cohorts using 10-fold cross-validation. Scores were added to a model adjusted for classic risk factors comprising the Framingham risk score and 10-year risks were derived. Both scores improved net reclassification (NRI) over the Framingham score (7.5%, p = 0.017 for GRS1, 6.5%, p = 0.044 for GRS2) but GRS2 also improved discrimination (c-index improvement 1.11%, p = 0.048). Subgroup analysis on men aged 50-59 (436 cases, 603 non-cases) improved net reclassification for GRS1 (13.8%) and GRS2 (12.5%). Net reclassification improvement remained significant for both scores when family history of CHD was added to the baseline model for this male subgroup improving prediction of early onset CHD events. Genetic risk scores add precision to risk estimates for CHD and improve prediction beyond classic risk factors, particularly for middle aged men.
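
    The net reclassification improvement reported above compares risk categories with and without the genetic score. A minimal sketch of the categorical NRI calculation follows; the risk cutpoints, toy risks, and event indicator are illustrative assumptions, not the MORGAM data.

    ```python
    import numpy as np

    def categorical_nri(risk_old, risk_new, event, cutpoints=(0.1, 0.2)):
        """Net reclassification improvement: upward category moves should be
        concentrated among events and downward moves among non-events."""
        cat_old = np.digitize(risk_old, cutpoints)
        cat_new = np.digitize(risk_new, cutpoints)
        event = np.asarray(event, dtype=bool)
        up, down = cat_new > cat_old, cat_new < cat_old
        nri_events = up[event].mean() - down[event].mean()
        nri_nonevents = down[~event].mean() - up[~event].mean()
        return nri_events + nri_nonevents

    # Illustrative toy data: 10-year risks before and after adding a genetic risk score
    rng = np.random.default_rng(0)
    event = rng.random(1000) < 0.15
    risk_old = np.clip(rng.normal(0.12 + 0.05 * event, 0.05), 0, 1)
    risk_new = np.clip(risk_old + rng.normal(0.02 * np.where(event, 1, -1), 0.02), 0, 1)
    print(f"categorical NRI = {categorical_nri(risk_old, risk_new, event):.3f}")
    ```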

  16. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    USGS Publications Warehouse

    Graves, Tabitha A.; Royle, J. Andrew; Kendall, Katherine C.; Beier, Paul; Stetz, Jeffrey B.; Macleod, Amy C.

    2012-01-01

    Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed against those risks. The analysis framework presented here will be useful for other species exhibiting heterogeneity by detection method.

  17. Sonographic estimation of fetal weight: comparison of bias, precision and consistency using 12 different formulae.

    PubMed

    Anderson, N G; Jolley, I J; Wells, J E

    2007-08-01

    To determine the major sources of error in ultrasonographic assessment of fetal weight and whether they have changed over the last decade. We performed a prospective observational study in 1991 and again in 2000 of a mixed-risk pregnancy population, estimating fetal weight within 7 days of delivery. In 1991, the Rose and McCallum formula was used for 72 deliveries. Inter- and intraobserver agreement was assessed within this group. Bland-Altman measures of agreement from log data were calculated as ratios. We repeated the study in 2000 in 208 consecutive deliveries, comparing predicted and actual weights for 12 published equations using Bland-Altman and percentage error methods. We compared bias (mean percentage error), precision (SD percentage error), and their consistency across the weight ranges. 95% limits of agreement ranged from -4.4% to +3.3% for inter- and intraobserver estimates, but were -18.0% to +24.0% for estimated and actual birth weight. There was no improvement in accuracy between 1991 and 2000. In 2000, only six of the 12 published formulae had overall bias within 7% and precision within 15%. There was greater bias and poorer precision in nearly all equations if the birth weight was < 1,000 g. Observer error is a relatively minor component of the error in estimating fetal weight; error due to the equation is a larger source of error. Improvements in ultrasound technology have not improved the accuracy of estimating fetal weight. Comparison of methods of estimating fetal weight requires statistical methods that can separate out bias, precision and consistency. Estimating fetal weight in the very low birth weight infant is subject to much greater error than it is in larger babies. Copyright (c) 2007 ISUOG. Published by John Wiley & Sons, Ltd.
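
    A minimal sketch of the accuracy metrics used above, assuming a small set of made-up weights: bias as the mean percentage error, precision as its standard deviation, and Bland-Altman 95% limits of agreement computed on the log (ratio) scale as the paper describes.

    ```python
    import numpy as np

    def bias_precision_loa(estimated, actual):
        """Bias = mean percentage error, precision = SD of percentage error,
        plus Bland-Altman 95% limits of agreement computed on the log (ratio) scale."""
        estimated, actual = np.asarray(estimated, float), np.asarray(actual, float)
        pct_error = 100.0 * (estimated - actual) / actual
        log_ratio = np.log(estimated / actual)
        loa = np.exp(log_ratio.mean() + np.array([-1.96, 1.96]) * log_ratio.std(ddof=1))
        return pct_error.mean(), pct_error.std(ddof=1), 100.0 * (loa - 1.0)

    # Illustrative values only (grams): sonographic estimates vs actual birth weights
    est = [3200, 2850, 3600, 1450, 3900, 2500]
    act = [3050, 3000, 3480, 1600, 3700, 2620]
    bias, precision, (lo, hi) = bias_precision_loa(est, act)
    print(f"bias {bias:+.1f}%, precision (SD) {precision:.1f}%, 95% LoA {lo:+.1f}% to {hi:+.1f}%")
    ```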

  18. Limits of Risk Predictability in a Cascading Alternating Renewal Process Model.

    PubMed

    Lin, Xin; Moussawi, Alaa; Korniss, Gyorgy; Bakdash, Jonathan Z; Szymanski, Boleslaw K

    2017-07-27

    Most risk analysis models systematically underestimate the probability and impact of catastrophic events (e.g., economic crises, natural disasters, and terrorism) by not taking into account interconnectivity and interdependence of risks. To address this weakness, we propose the Cascading Alternating Renewal Process (CARP) to forecast interconnected global risks. However, assessments of the model's prediction precision are limited by lack of sufficient ground truth data. Here, we establish prediction precision as a function of input data size by using alternative long ground truth data generated by simulations of the CARP model with known parameters. We illustrate the approach on a model of fires in artificial cities assembled from basic city blocks with diverse housing. The results confirm that parameter recovery variance exhibits power law decay as a function of the length of available ground truth data. Using CARP, we also demonstrate estimation using a disparate dataset that also has dependencies: real-world prediction precision for the global risk model based on the World Economic Forum Global Risk Report. We conclude that the CARP model is an efficient method for predicting catastrophic cascading events with potential applications to emerging local and global interconnected risks.

  19. Using speeding detections and numbers of fatalities to estimate relative risk of a fatality for motorcyclists and car drivers.

    PubMed

    Huggins, Richard

    2013-10-01

    Precise estimation of the relative risk of motorcyclists being involved in a fatal accident compared to car drivers is difficult. Simple estimates based on the proportions of licenced drivers or riders that are killed in a fatal accident are biased as they do not take into account the exposure to risk. However, exposure is difficult to quantify. Here we adapt the ideas behind the well known induced exposure methods and use available summary data on speeding detections and fatalities for motorcycle riders and car drivers to estimate the relative risk of a fatality for motorcyclists compared to car drivers under mild assumptions. The method is applied to data on motorcycle riders and car drivers in Victoria, Australia in 2010 and a small simulation study is conducted. Copyright © 2013 Elsevier Ltd. All rights reserved.
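
    A minimal numeric sketch of the induced-exposure style calculation implied above, with made-up counts rather than the Victorian 2010 data: speeding detections stand in for exposure, and the relative risk is the ratio of fatality-per-detection rates.

    ```python
    # Illustrative counts only (not the Victorian 2010 data used in the paper)
    motorcycle_fatalities, motorcycle_speeding = 49, 12_000
    car_fatalities, car_speeding = 210, 480_000

    # Speeding detections proxy exposure, so each group's fatality "rate" is
    # fatalities per detection; the relative risk is the ratio of those rates.
    rate_motorcycle = motorcycle_fatalities / motorcycle_speeding
    rate_car = car_fatalities / car_speeding
    relative_risk = rate_motorcycle / rate_car
    print(f"estimated relative risk (motorcyclist vs car driver): {relative_risk:.1f}")
    ```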

  20. Estimation of the Standardized Risk Difference and Ratio in a Competing Risks Framework: Application to Injection Drug Use and Progression to AIDS After Initiation of Antiretroviral Therapy

    PubMed Central

    Cole, Stephen R.; Lau, Bryan; Eron, Joseph J.; Brookhart, M. Alan; Kitahata, Mari M.; Martin, Jeffrey N.; Mathews, William C.; Mugavero, Michael J.; Cole, Stephen R.; Brookhart, M. Alan; Lau, Bryan; Eron, Joseph J.; Kitahata, Mari M.; Martin, Jeffrey N.; Mathews, William C.; Mugavero, Michael J.

    2015-01-01

    There are few published examples of absolute risk estimated from epidemiologic data subject to censoring and competing risks with adjustment for multiple confounders. We present an example estimating the effect of injection drug use on 6-year risk of acquired immunodeficiency syndrome (AIDS) after initiation of combination antiretroviral therapy between 1998 and 2012 in an 8-site US cohort study with death before AIDS as a competing risk. We estimate the risk standardized to the total study sample by combining inverse probability weights with the cumulative incidence function; estimates of precision are obtained by bootstrap. In 7,182 patients (83% male, 33% African American, median age of 38 years), we observed 6-year standardized AIDS risks of 16.75% among 1,143 injection drug users and 12.08% among 6,039 nonusers, yielding a standardized risk difference of 4.68 (95% confidence interval: 1.27, 8.08) and a standardized risk ratio of 1.39 (95% confidence interval: 1.12, 1.72). Results may be sensitive to the assumptions of exposure-version irrelevance, no measurement bias, and no unmeasured confounding. These limitations suggest that results be replicated with refined measurements of injection drug use. Nevertheless, estimating the standardized risk difference and ratio is straightforward, and injection drug use appears to increase the risk of AIDS. PMID:24966220
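
    A simplified sketch of the standardization idea follows. It ignores censoring and the competing risk of death (which the paper handles through the cumulative incidence function) and uses a single simulated confounder: risks under exposure and non-exposure are standardized to the total sample with inverse probability weights, and the risk difference is bootstrapped.

    ```python
    import numpy as np
    import statsmodels.api as sm

    def standardized_risks(y, a, X):
        """IPW-standardized risks under exposure (a=1) and no exposure (a=0),
        weighting by the inverse of the estimated propensity score."""
        ps = sm.Logit(a, sm.add_constant(X)).fit(disp=0).predict(sm.add_constant(X))
        w = a / ps + (1 - a) / (1 - ps)
        r1 = np.sum(w * a * y) / np.sum(w * a)
        r0 = np.sum(w * (1 - a) * y) / np.sum(w * (1 - a))
        return r1, r0

    # Illustrative simulated cohort: one confounder, binary exposure, binary 6-year outcome
    rng = np.random.default_rng(7)
    n = 5000
    conf = rng.normal(size=n)
    a = rng.binomial(1, 1 / (1 + np.exp(-(conf - 1.5))))
    y = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 0.4 * a + 0.5 * conf))))

    r1, r0 = standardized_risks(y, a, conf)
    boot = []
    for _ in range(500):
        idx = rng.integers(0, n, n)
        boot.append(np.subtract(*standardized_risks(y[idx], a[idx], conf[idx])))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"risk difference {r1 - r0:.3f} (95% CI {lo:.3f}, {hi:.3f}); risk ratio {r1 / r0:.2f}")
    ```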

  1. Novel Equations for Estimating Lean Body Mass in Peritoneal Dialysis Patients

    PubMed Central

    Dong, Jie; Li, Yan-Jun; Xu, Rong; Yang, Zhi-Kai; Zheng, Ying-Dong

    2015-01-01

    ♦ Objectives: To develop and validate equations for estimating lean body mass (LBM) in peritoneal dialysis (PD) patients. ♦ Methods: Two equations for estimating LBM, one based on mid-arm muscle circumference (MAMC) and hand grip strength (HGS), i.e., LBM-M-H, and the other based on HGS, i.e., LBM-H, were developed and validated with LBM obtained by dual-energy X-ray absorptiometry (DEXA). The developed equations were compared to LBM estimated from creatinine kinetics (LBM-CK) and anthropometry (LBM-A) in terms of bias, precision, and accuracy. The prognostic values of LBM estimated from the equations in all-cause mortality risk were assessed. ♦ Results: The developed equations incorporated gender, height, weight, and dialysis duration. Compared to LBM-DEXA, the bias of the developed equations was lower than that of LBM-CK and LBM-A. Additionally, LBM-M-H and LBM-H had better accuracy and precision. The prognostic values of LBM in all-cause mortality risk based on LBM-M-H, LBM-H, LBM-CK, and LBM-A were similar. ♦ Conclusions: Lean body mass estimated by the new equations based on MAMC and HGS was correlated with LBM obtained by DEXA and may serve as practical surrogate markers of LBM in PD patients. PMID:26293839

  2. Novel Equations for Estimating Lean Body Mass in Peritoneal Dialysis Patients.

    PubMed

    Dong, Jie; Li, Yan-Jun; Xu, Rong; Yang, Zhi-Kai; Zheng, Ying-Dong

    2015-12-01

    ♦ To develop and validate equations for estimating lean body mass (LBM) in peritoneal dialysis (PD) patients. ♦ Two equations for estimating LBM, one based on mid-arm muscle circumference (MAMC) and hand grip strength (HGS), i.e., LBM-M-H, and the other based on HGS, i.e., LBM-H, were developed and validated with LBM obtained by dual-energy X-ray absorptiometry (DEXA). The developed equations were compared to LBM estimated from creatinine kinetics (LBM-CK) and anthropometry (LBM-A) in terms of bias, precision, and accuracy. The prognostic values of LBM estimated from the equations in all-cause mortality risk were assessed. ♦ The developed equations incorporated gender, height, weight, and dialysis duration. Compared to LBM-DEXA, the bias of the developed equations was lower than that of LBM-CK and LBM-A. Additionally, LBM-M-H and LBM-H had better accuracy and precision. The prognostic values of LBM in all-cause mortality risk based on LBM-M-H, LBM-H, LBM-CK, and LBM-A were similar. ♦ Lean body mass estimated by the new equations based on MAMC and HGS was correlated with LBM obtained by DEXA and may serve as practical surrogate markers of LBM in PD patients. Copyright © 2015 International Society for Peritoneal Dialysis.

  3. Suicide Risk among Violent and Sexual Criminal Offenders

    ERIC Educational Resources Information Center

    Webb, Roger T.; Shaw, Jenny; Stevens, Hanne; Mortensen, Preben B.; Appleby, Louis; Qin, Ping

    2012-01-01

    Risk of suicide in people who have perpetrated specific forms of violent or sexual criminal offenses has not been quantified accurately or precisely. Also, gender comparisons have not been possible due to sparse data problems in the smaller studies that have been conducted to date. We therefore aimed to estimate these effects in the whole Danish…

  4. Modelling characteristics to predict Legionella contamination risk - Surveillance of drinking water plumbing systems and identification of risk areas.

    PubMed

    Völker, Sebastian; Schreiber, Christiane; Kistemann, Thomas

    2016-01-01

    For the surveillance of drinking water plumbing systems (DWPS) and the identification of risk factors, there is a need for an early estimation of the risk of Legionella contamination within a building, using efficient and assessable parameters to estimate hazards and to prioritize risks. The precision, accuracy and effectiveness of ways of estimating the risk of higher Legionella numbers (temperature, stagnation, pipe materials, etc.) have only rarely been empirically assessed in practice, although there is a broad consensus about the impact of these risk factors. We collected n = 807 drinking water samples from 9 buildings which had had Legionella spp. occurrences of >100 CFU/100 mL within the last 12 months, and tested for Legionella spp., L. pneumophila, HPC 20°C and 36°C (culture-based). Each building was sampled for 6 months under standard operating conditions in the DWPS. We discovered high variability (up to 4 log10 steps) in the presence of Legionella spp. (CFU/100 mL) within all buildings over a half-year period as well as over the course of a day. Occurrences were significantly correlated with temperature, pipe length measures, and stagnation. Logistic regression modelling revealed three parameters (temperature after flushing until no significant changes in temperatures can be obtained, stagnation (low withdrawal, qualitatively assessed), pipe length proportion) to be the best predictors of Legionella contamination (>100 CFU/100 mL) at single outlets (precision = 66.7%; accuracy = 72.1%; F0.5 score = 0.59). Copyright © 2015 Elsevier GmbH. All rights reserved.
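
    A minimal sketch of how the reported classification metrics can be computed for a contamination classifier, using scikit-learn's standard metric functions; the predicted and observed outlet labels are illustrative, not the study data.

    ```python
    import numpy as np
    from sklearn.metrics import accuracy_score, fbeta_score, precision_score

    # Illustrative labels only: 1 = outlet predicted/observed above 100 CFU/100 mL
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0])

    # F0.5 weights precision more heavily than recall (beta < 1)
    print(f"precision = {precision_score(y_true, y_pred):.3f}")
    print(f"accuracy  = {accuracy_score(y_true, y_pred):.3f}")
    print(f"F0.5      = {fbeta_score(y_true, y_pred, beta=0.5):.3f}")
    ```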

  5. Precise estimates of gaming-related harm should guide regulation of gaming.

    PubMed

    Starcevic, Vladan; Billieux, Joël

    2018-06-13

    Regulation of gaming is largely based on the perception of gaming-related harm. This perception varies from one country to another and does not necessarily correspond to the real gaming-related harm. It is argued that there is a crucial need to define and assess domains of this harm in order to introduce policies that regulate gaming. Such policies would ideally be targeted at individuals at risk for problematic gaming and would be based more on educational efforts than on restrictive measures. The role of the gaming industry in the regulation of gaming would depend on more precise estimates of gaming-related harm.

  6. Comparing the cohort design and the nested case–control design in the presence of both time-invariant and time-dependent treatment and competing risks: bias and precision

    PubMed Central

    Austin, Peter C; Anderson, Geoffrey M; Cigsar, Candemir; Gruneir, Andrea

    2012-01-01

    Purpose: Observational studies using electronic administrative healthcare databases are often used to estimate the effects of treatments and exposures. Traditionally, a cohort design has been used to estimate these effects, but increasingly, studies are using a nested case–control (NCC) design. The relative statistical efficiency of these two designs has not been examined in detail. Methods: We used Monte Carlo simulations to compare these two designs in terms of the bias and precision of effect estimates. We examined three different settings: (A) treatment occurred at baseline, and there was a single outcome of interest; (B) treatment was time-varying, and there was a single outcome; and (C) treatment occurred at baseline, and there was a secondary event that competed with the primary event of interest. Comparisons were made of percentage bias, length of 95% confidence interval, and mean squared error (MSE) as a combined measure of bias and precision. Results: In Setting A, bias was similar between designs, but the cohort design was more precise and had a lower MSE in all scenarios. In Settings B and C, the cohort design was more precise and had a lower MSE in all scenarios. In both Settings B and C, the NCC design tended to result in estimates with greater bias compared with the cohort design. Conclusions: We conclude that in a range of settings and scenarios, the cohort design is superior in terms of precision and MSE. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22653805

  7. Estimates of Intraclass Correlation Coefficients from Longitudinal Group-Randomized Trials of Adolescent HIV/STI/Pregnancy Prevention Programs

    ERIC Educational Resources Information Center

    Glassman, Jill R.; Potter, Susan C.; Baumler, Elizabeth R.; Coyle, Karin K.

    2015-01-01

    Introduction: Group-randomized trials (GRTs) are one of the most rigorous methods for evaluating the effectiveness of group-based health risk prevention programs. Efficiently designing GRTs with a sample size that is sufficient for meeting the trial's power and precision goals while not wasting resources exceeding them requires estimates of the…
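
    Sample-size planning for GRTs typically inflates the individually randomized sample size by the design effect, which depends on the intraclass correlation coefficient estimates this paper provides. A small sketch of that standard calculation, with an illustrative cluster size and ICC rather than values from the paper:

    ```python
    def design_effect(cluster_size, icc):
        """Variance inflation for a group-randomized trial relative to individual randomization."""
        return 1 + (cluster_size - 1) * icc

    # Illustrative: 60 students per school, ICC = 0.02
    m, icc = 60, 0.02
    deff = design_effect(m, icc)
    print(f"design effect = {deff:.2f}; effective sample size per school = {m / deff:.1f}")
    ```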

  8. A unified framework for constructing, tuning and assessing photometric redshift density estimates in a selection bias setting

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Izbicki, R.; Lee, A. B.

    2017-07-01

    Photometric redshift estimation is an indispensable tool of precision cosmology. One problem that plagues the use of this tool in the era of large-scale sky surveys is that the bright galaxies that are selected for spectroscopic observation do not have properties that match those of (far more numerous) dimmer galaxies; thus, ill-designed empirical methods that produce accurate and precise redshift estimates for the former generally will not produce good estimates for the latter. In this paper, we provide a principled framework for generating conditional density estimates (i.e. photometric redshift PDFs) that takes into account selection bias and the covariate shift that this bias induces. We base our approach on the assumption that the probability that astronomers label a galaxy (i.e. determine its spectroscopic redshift) depends only on its measured (photometric and perhaps other) properties x and not on its true redshift. With this assumption, we can explicitly write down risk functions that allow us to both tune and compare methods for estimating importance weights (i.e. the ratio of densities of unlabelled and labelled galaxies for different values of x) and conditional densities. We also provide a method for combining multiple conditional density estimates for the same galaxy into a single estimate with better properties. We apply our risk functions to an analysis of ≈10^6 galaxies, mostly observed by the Sloan Digital Sky Survey, and demonstrate through multiple diagnostic tests that our method achieves good conditional density estimates for the unlabelled galaxies.

  9. Controversial issues confronting the BEIR III committee: implications for radiation protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fabrikant, J.I.

    1981-05-01

    This paper reviews the state of the art for conducting risk assessment studies, especially known and unknown factors relative to radiation-induced cancer or other diseases, sources of scientific and epidemiological data, dose-response models used, and uncertainties which limit precision of estimation of excess radiation risks. These are related to decision making for radiation protection policy. (PSB)

  10. Estimating micro area behavioural risk factor prevalence from large population-based surveys: a full Bayesian approach.

    PubMed

    Seliske, L; Norwood, T A; McLaughlin, J R; Wang, S; Palleschi, C; Holowaty, E

    2016-06-07

    An important public health goal is to decrease the prevalence of key behavioural risk factors, such as tobacco use and obesity. Survey information is often available at the regional level, but heterogeneity within large geographic regions cannot be assessed. Advanced spatial analysis techniques are demonstrated to produce sensible micro area estimates of behavioural risk factors that enable identification of areas with high prevalence. A spatial Bayesian hierarchical model was used to estimate the micro area prevalence of current smoking and excess bodyweight for the Erie-St. Clair region in southwestern Ontario. Estimates were mapped for male and female respondents of five cycles of the Canadian Community Health Survey (CCHS). The micro areas were 2006 Census Dissemination Areas, with an average population of 400-700 people. Two individual-level models were specified: one controlled for survey cycle and age group (model 1), and one controlled for survey cycle, age group and micro area median household income (model 2). Post-stratification was used to derive micro area behavioural risk factor estimates weighted to the population structure. SaTScan analyses were conducted on the granular, postal-code level CCHS data to corroborate findings of elevated prevalence. Current smoking was elevated in two urban areas for both sexes (Sarnia and Windsor), and an additional small community (Chatham) for males only. Areas of excess bodyweight were prevalent in an urban core (Windsor) among males, but not females. Precision of the posterior post-stratified current smoking estimates was improved in model 2, as indicated by narrower credible intervals and a lower coefficient of variation. For excess bodyweight, both models had similar precision. Aggregation of the micro area estimates to CCHS design-based estimates validated the findings. This is among the first studies to apply a full Bayesian model to complex sample survey data to identify micro areas with variation in risk factor prevalence, accounting for spatial correlation and other covariates. Application of micro area analysis techniques helps define areas for public health planning, and may be informative to surveillance and research modeling of relevant chronic disease outcomes.
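
    A minimal sketch of the post-stratification step described above: model-based prevalence estimates for demographic strata are weighted by the micro area's census population counts. The age groups, prevalences, and counts are illustrative assumptions, not CCHS or census figures.

    ```python
    import numpy as np

    # Model-based smoking prevalence by age group for one micro area (illustrative)
    prevalence_by_age = np.array([0.28, 0.24, 0.18, 0.12])    # 20-34, 35-49, 50-64, 65+
    # Census population counts for the same age groups in that micro area (illustrative)
    population_by_age = np.array([180, 210, 160, 90])

    # Post-stratified estimate: prevalence weighted to the area's population structure
    weights = population_by_age / population_by_age.sum()
    print(f"post-stratified prevalence = {np.sum(weights * prevalence_by_age):.3f}")
    ```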

  11. COBALT: A GN&C Payload for Testing ALHAT Capabilities in Closed-Loop Terrestrial Rocket Flights

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Amzajerdian, Farzin; Hines, Glenn D.; O'Neal, Travis V.; Robertson, Edward A.; Seubert, Carl; Trawny, Nikolas

    2016-01-01

    The COBALT (CoOperative Blending of Autonomous Landing Technology) payload is being developed within NASA as a risk reduction activity to mature, integrate and test ALHAT (Autonomous precision Landing and Hazard Avoidance Technology) systems targeted for infusion into near-term robotic and future human space flight missions. The initial COBALT payload instantiation is integrating the third-generation ALHAT Navigation Doppler Lidar (NDL) sensor, for ultra-high-precision velocity plus range measurements, with the passive-optical Lander Vision System (LVS) that provides Terrain Relative Navigation (TRN) global-position estimates. The COBALT payload will be integrated onboard a rocket-propulsive terrestrial testbed and will provide precise navigation estimates and guidance planning during two flight test campaigns in 2017 (one open-loop and one closed-loop). The NDL is targeting performance capabilities desired for future Mars and Moon Entry, Descent and Landing (EDL). The LVS is already baselined for TRN on the Mars 2020 robotic lander mission. The COBALT platform will provide NASA with a new risk-reduction capability to test integrated EDL Guidance, Navigation and Control (GN&C) components in closed-loop flight demonstrations prior to the actual mission EDL.

  12. Regression discontinuity was a valid design for dichotomous outcomes in three randomized trials.

    PubMed

    van Leeuwen, Nikki; Lingsma, Hester F; Mooijaart, Simon P; Nieboer, Daan; Trompet, Stella; Steyerberg, Ewout W

    2018-06-01

    Regression discontinuity (RD) is a quasi-experimental design that may provide valid estimates of treatment effects in case of continuous outcomes. We aimed to evaluate validity and precision in the RD design for dichotomous outcomes. We performed validation studies in three large randomized controlled trials (RCTs) (Corticosteroid Randomization After Significant Head injury [CRASH], the Global Utilization of Streptokinase and Tissue Plasminogen Activator for Occluded Coronary Arteries [GUSTO], and PROspective Study of Pravastatin in elderly individuals at risk of vascular disease [PROSPER]). To mimic the RD design, we selected patients above and below a cutoff (e.g., age 75 years) randomized to treatment and control, respectively. Adjusted logistic regression models using restricted cubic splines (RCS) and polynomials and local logistic regression models estimated the odds ratio (OR) for treatment, with 95% confidence intervals (CIs) to indicate precision. In CRASH, treatment increased mortality with OR 1.22 [95% CI 1.06-1.40] in the RCT. The RD estimates were 1.42 (0.94-2.16) and 1.13 (0.90-1.40) with RCS adjustment and local regression, respectively. In GUSTO, treatment reduced mortality (OR 0.83 [0.72-0.95]), with more extreme estimates in the RD analysis (OR 0.57 [0.35; 0.92] and 0.67 [0.51; 0.86]). In PROSPER, similar RCT and RD estimates were found, again with less precision in RD designs. We conclude that the RD design provides similar but substantially less precise treatment effect estimates compared with an RCT, with local regression being the preferred method of analysis. Copyright © 2018 Elsevier Inc. All rights reserved.
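
    A minimal sketch of the local logistic regression approach to RD for a dichotomous outcome, assuming a simulated mortality outcome, an age-75 cutoff, and an arbitrary bandwidth: restrict to observations near the cutoff, fit a logistic model with a treatment indicator, the centered running variable, and their interaction, and read off the treatment odds ratio.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 20_000
    age = rng.uniform(55, 95, n)                       # running variable
    treated = (age >= 75).astype(float)                # treatment assigned at the cutoff
    logit = -2.0 + 0.03 * (age - 75) + np.log(0.8) * treated
    died = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # true treatment OR = 0.8

    bandwidth = 5.0                                    # local window around the cutoff
    keep = np.abs(age - 75) <= bandwidth
    x = age[keep] - 75
    X = sm.add_constant(np.column_stack([treated[keep], x, treated[keep] * x]))
    fit = sm.Logit(died[keep], X).fit(disp=0)
    or_hat = np.exp(fit.params[1])
    lo, hi = np.exp(fit.conf_int()[1])
    print(f"RD estimate of treatment OR: {or_hat:.2f} (95% CI {lo:.2f}, {hi:.2f})")
    ```

    The wider confidence interval relative to analyzing the full randomized sample illustrates the precision loss the paper reports for RD designs.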

  13. The INHANCE consortium: toward a better understanding of the causes and mechanisms of head and neck cancer.

    PubMed

    Winn, D M; Lee, Y-C A; Hashibe, M; Boffetta, P

    2015-09-01

    The International Head and Neck Cancer Epidemiology (INHANCE) consortium is a collaboration of research groups leading large epidemiology studies to improve the understanding of the causes and mechanisms of head and neck cancer. The consortium includes investigators of 35 studies who have pooled their data on 25 500 patients with head and neck cancer (i.e., cancers of the oral cavity, oropharynx, hypopharynx, and larynx) and 37 100 controls. The INHANCE analyses have confirmed that tobacco use and alcohol intake are key risk factors of these diseases and have provided precise estimates of risk and dose response, the benefit of quitting, and the hazard of smoking even a few cigarettes per day. Other risk factors include short height, lean body mass, low education and income, and a family history of head and neck cancer. Risk factors are generally similar for oral cavity, pharynx, and larynx, although the magnitude of risk may vary. Some major strengths of pooling data across studies include more precise estimates of risk and the ability to control for potentially confounding factors and to examine factors that may interact with each other. The INHANCE consortium provides evidence of the scientific productivity and discoveries that can be obtained from data pooling projects. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Estimation of the standardized risk difference and ratio in a competing risks framework: application to injection drug use and progression to AIDS after initiation of antiretroviral therapy.

    PubMed

    Cole, Stephen R; Lau, Bryan; Eron, Joseph J; Brookhart, M Alan; Kitahata, Mari M; Martin, Jeffrey N; Mathews, William C; Mugavero, Michael J

    2015-02-15

    There are few published examples of absolute risk estimated from epidemiologic data subject to censoring and competing risks with adjustment for multiple confounders. We present an example estimating the effect of injection drug use on 6-year risk of acquired immunodeficiency syndrome (AIDS) after initiation of combination antiretroviral therapy between 1998 and 2012 in an 8-site US cohort study with death before AIDS as a competing risk. We estimate the risk standardized to the total study sample by combining inverse probability weights with the cumulative incidence function; estimates of precision are obtained by bootstrap. In 7,182 patients (83% male, 33% African American, median age of 38 years), we observed 6-year standardized AIDS risks of 16.75% among 1,143 injection drug users and 12.08% among 6,039 nonusers, yielding a standardized risk difference of 4.68 (95% confidence interval: 1.27, 8.08) and a standardized risk ratio of 1.39 (95% confidence interval: 1.12, 1.72). Results may be sensitive to the assumptions of exposure-version irrelevance, no measurement bias, and no unmeasured confounding. These limitations suggest that results be replicated with refined measurements of injection drug use. Nevertheless, estimating the standardized risk difference and ratio is straightforward, and injection drug use appears to increase the risk of AIDS. © The Author 2014. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. Consistently estimating absolute risk difference when translating evidence to jurisdictions of interest.

    PubMed

    Eckermann, Simon; Coory, Michael; Willan, Andrew R

    2011-02-01

    Economic analysis and assessment of net clinical benefit often requires estimation of absolute risk difference (ARD) for binary outcomes (e.g. survival, response, disease progression) given baseline epidemiological risk in a jurisdiction of interest and trial evidence of treatment effects. Typically, the assumption is made that relative treatment effects are constant across baseline risk, in which case relative risk (RR) or odds ratios (OR) could be applied to estimate ARD. The objective of this article is to establish whether such use of RR or OR allows consistent estimates of ARD. ARD is calculated from alternative framing of effects (e.g. mortality vs survival) applying standard methods for translating evidence with RR and OR. For RR, the RR is applied to baseline risk in the jurisdiction to estimate treatment risk; for OR, the baseline risk is converted to odds, the OR applied and the resulting treatment odds converted back to risk. ARD is shown to be consistently estimated with OR but changes with framing of effects using RR wherever there is a treatment effect and epidemiological risk differs from trial risk. Additionally, in indirect comparisons, ARD is shown to be consistently estimated with OR, while calculation with RR allows inconsistency, with alternative framing of effects in the direction, let alone the extent, of ARD. OR ensures consistent calculation of ARD in translating evidence from trial settings and across trials in direct and indirect comparisons, avoiding inconsistencies from RR with alternative outcome framing and associated biases. These findings are critical for consistently translating evidence to inform economic analysis and assessment of net clinical benefit, as translation of evidence is proposed precisely where the advantages of OR over RR arise.
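
    A small numeric illustration of the consistency argument, with made-up trial and jurisdiction risks: applying the odds ratio to the jurisdiction's baseline odds gives the same absolute risk difference whether effects are framed as mortality or survival, whereas applying the relative risk does not.

    ```python
    def ard_from_rr(baseline_risk, rr):
        """Change in risk from applying a relative risk to a baseline risk."""
        return baseline_risk * rr - baseline_risk

    def ard_from_or(baseline_risk, or_):
        """Change in risk from applying an odds ratio to the baseline odds."""
        odds = baseline_risk / (1 - baseline_risk) * or_
        return odds / (1 + odds) - baseline_risk

    # Trial (illustrative): control mortality 20%, treatment mortality 15%
    rr_death, or_death = 0.15 / 0.20, (0.15 / 0.85) / (0.20 / 0.80)
    rr_surv, or_surv = 0.85 / 0.80, (0.85 / 0.15) / (0.80 / 0.20)

    # Jurisdiction of interest (illustrative): baseline mortality 40%, survival 60%
    base_death = 0.40
    print("RR, death framing:    ARD =", round(ard_from_rr(base_death, rr_death), 4))
    print("RR, survival framing: ARD =", round(-ard_from_rr(1 - base_death, rr_surv), 4))
    print("OR, death framing:    ARD =", round(ard_from_or(base_death, or_death), 4))
    print("OR, survival framing: ARD =", round(-ard_from_or(1 - base_death, or_surv), 4))
    ```

    The two RR-based answers disagree (-0.10 vs about -0.04), while the two OR-based answers coincide (-0.08), which is the inconsistency the paper warns about when translating trial evidence to a jurisdiction with a different baseline risk.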

  16. Chair rise transfer detection and analysis using a pendant sensor: an algorithm for fall risk assessment in older people.

    PubMed

    Zhang, Wei; Regterschot, G Ruben H; Wahle, Fabian; Geraedts, Hilde; Baldus, Heribert; Zijlstra, Wiebren

    2014-01-01

    Falls result in substantial disability, morbidity, and mortality among older people. Early detection of fall risks and timely intervention can prevent falls and fall-related injuries. Simple field tests, such as the repeated chair rise test, are used in clinical assessment of fall risks in older people. The development of on-body sensors introduces potentially beneficial alternatives to traditional clinical methods. In this article, we present a pendant-sensor-based chair rise detection and analysis algorithm for fall risk assessment in older people. The recall and precision of transfer detection were 85% and 87% in the standard protocol, and 61% and 89% in daily-life activities. Estimation errors of the chair rise performance indicators (duration, maximum acceleration, peak power and maximum jerk) were tested in over 800 transfers. The median estimation error in transfer peak power ranged from 1.9% to 4.6% across tests. Among all the performance indicators, maximum acceleration had the lowest median estimation error (0%) and duration had the highest (24%) over all tests. The developed algorithm might be feasible for continuous fall risk assessment in older people.

  17. Assessing nanoparticle risk poses prodigious challenges.

    PubMed

    MacPhail, Robert C; Grulke, Eric A; Yokel, Robert A

    2013-01-01

    Risk assessment is used both formally and informally to estimate the likelihood of an adverse event occurring, for example, as a consequence of exposure to a hazardous chemical, drug, or other agent. Formal risk assessments in government regulatory agencies have a long history of practice. The precision with which risk can be estimated is inevitably constrained, however, by uncertainties arising from the lack of pertinent data. Developing accurate risk assessments for nanoparticles and nanoparticle-containing products may present further challenges because of the unique properties of the particles, uncertainties about their composition and the populations exposed to them, and how these may change throughout the particle's life cycle. This review introduces the evolving practice of risk assessment followed by some of the uncertainties that need to be addressed to improve our understanding of nanoparticle risks. Given the clarion call for life-cycle assessments of nanoparticles, an unprecedented degree of national and international coordination between scientific organizations, regulatory agencies, and stakeholders will be required to achieve this goal. Copyright © 2013 Wiley Periodicals, Inc.

  18. SOURCES OF VARIATION IN TOXICOLOGICAL STUDIES AND THEIR EFFECTS ON PRECISION OF RESULTS: INDIVIDUAL DIFFERENCES.

    EPA Science Inventory

    The ultimate goal of risk assessment is to estimate the adverse effects of exposures to environmental contaminants in the population. However, populations of humans and other species vary widely in many key factors such as age, genetic makeup, gender, and health status. Any or a...

  19. Status of risk-benefit analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Horn, A.J.; Wilson, R.

    1976-12-01

    The benefits and deficiencies of cost-benefit analysis are reviewed. It is pointed out that, if decision making involving risks and benefits is to improve, more attention must be paid to the clear presentation of the assumptions, values, and results. Reports need to present concise summaries which convey the uncertainties and limitations of the analysis in addition to the matrix of costs, risks, and benefits. As the field of risk-benefit analysis advances, the estimation of risks and benefits will become more precise and implicit valuations will be made more explicit. Corresponding improvements must also be made to enhance communications between the risk-benefit analyst and the accountable decision maker.

  20. The Magnitude of Mortality from Ischemic Heart Disease Attributed to Occupational Factors in Korea - Attributable Fraction Estimation Using Meta-analysis.

    PubMed

    Ha, Jaehyeok; Kim, Soo-Geun; Paek, Domyung; Park, Jungsun

    2011-03-01

    Ischemic heart disease (IHD) is a major cause of death in Korea and is known to result from several occupational factors. This study attempted to estimate the current magnitude of IHD mortality due to occupational factors in Korea. After selecting occupational risk factors by literature investigation, we calculated attributable fractions (AFs) from relative risks and exposure data for each factor. Relative risks were estimated using meta-analysis based on published research. Exposure data were collected from the 2006 Survey of Korean Working Conditions. Finally, we estimated 2006 occupation-related IHD mortality. For the factors considered, we estimated the following relative risks: noise 1.06, environmental tobacco smoke 1.19 (men) and 1.22 (women), shift work 1.12, and low job control 1.15 (men) and 1.08 (women). Combined AFs of these factors for IHD were estimated at 9.29% (0.3-18.51%) in men and 5.78% (-7.05-19.15%) in women. Based on these fractions, Korea's 2006 death toll from occupational IHD between the ages of 15 and 69 was calculated at 353 in men (total 3,804) and 72 in women (total 1,246). We estimated the occupational IHD mortality of Korea with updated data and more relevant evidence. Despite the efforts to obtain reliable estimates, there were many assumptions and limitations that remain to be overcome. Future research based on more precise designs and reliable evidence is required for more accurate estimates.
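
    The attributable-fraction arithmetic can be sketched with Levin's formula, the usual choice when an AF is computed from a relative risk and an exposure prevalence; the abstract does not state the exact formula used, and the exposure prevalences below are placeholders rather than values from the 2006 Survey of Korean Working Conditions.

    ```python
    def levin_af(prevalence, rr):
        """Population attributable fraction: Pe(RR - 1) / (Pe(RR - 1) + 1)."""
        return prevalence * (rr - 1.0) / (prevalence * (rr - 1.0) + 1.0)

    # factor: (assumed exposure prevalence, relative risk reported above, men)
    factors_men = {
        "noise":           (0.20, 1.06),
        "ETS":             (0.30, 1.19),
        "shift work":      (0.15, 1.12),
        "low job control": (0.40, 1.15),
    }

    afs = {name: levin_af(p, rr) for name, (p, rr) in factors_men.items()}

    # Combined AF under the common convention of independent factors: 1 - prod(1 - AF_i).
    combined = 1.0
    for af in afs.values():
        combined *= (1.0 - af)
    combined = 1.0 - combined

    attributable_deaths = combined * 3804   # applied to total male IHD deaths aged 15-69
    print(afs, round(combined, 4), round(attributable_deaths))
    ```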

  1. Impact of covariate models on the assessment of the air pollution-mortality association in a single- and multipollutant context.

    PubMed

    Sacks, Jason D; Ito, Kazuhiko; Wilson, William E; Neas, Lucas M

    2012-10-01

    With the advent of multicity studies, uniform statistical approaches have been developed to examine air pollution-mortality associations across cities. To assess the sensitivity of the air pollution-mortality association to different model specifications in a single and multipollutant context, the authors applied various regression models developed in previous multicity time-series studies of air pollution and mortality to data from Philadelphia, Pennsylvania (May 1992-September 1995). Single-pollutant analyses used daily cardiovascular mortality, fine particulate matter (particles with an aerodynamic diameter ≤2.5 µm; PM2.5), speciated PM2.5, and gaseous pollutant data, while multipollutant analyses used source factors identified through principal component analysis. In single-pollutant analyses, risk estimates were relatively consistent across models for most PM2.5 components and gaseous pollutants. However, risk estimates were inconsistent for ozone in all-year and warm-season analyses. Principal component analysis yielded factors with species associated with traffic, crustal material, residual oil, and coal. Risk estimates for these factors exhibited less sensitivity to alternative regression models compared with single-pollutant models. Factors associated with traffic and crustal material showed consistently positive associations in the warm season, while the coal combustion factor showed consistently positive associations in the cold season. Overall, mortality risk estimates examined using a source-oriented approach yielded more stable and precise risk estimates, compared with single-pollutant analyses.
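
    A rough sketch of the source-oriented step described above, under stated assumptions: principal components of standardized speciated PM2.5 concentrations serve as source factors, and their daily scores enter a Poisson time-series model of cardiovascular deaths. The column names, lag choice, and the simple polynomial terms standing in for smooth functions of time and weather are illustrative, not the authors' specification.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def source_factors(species_df, n_factors=4):
        """PCA via SVD on standardized daily species concentrations; returns scores."""
        X = (species_df - species_df.mean()) / species_df.std()
        u, s, vt = np.linalg.svd(X.fillna(0.0).to_numpy(), full_matrices=False)
        scores = u[:, :n_factors] * s[:n_factors]
        return pd.DataFrame(scores, index=species_df.index,
                            columns=[f"factor{i + 1}" for i in range(n_factors)])

    def fit_mortality_model(deaths, factors, temperature):
        """Poisson regression of daily deaths on 1-day-lagged source factors, with
        polynomial time and temperature terms standing in for smooth adjustments."""
        t = np.arange(len(factors), dtype=float)
        X = factors.shift(1).copy()                       # 1-day lagged factor scores
        X["temp"] = np.asarray(temperature, dtype=float)
        X["t"], X["t2"] = t, t ** 2
        X = X.dropna()
        y = pd.Series(np.asarray(deaths), index=factors.index).loc[X.index]
        model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson())
        return model.fit()
    ```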

  2. Impact of the 1980 BEIR-III report on low-level radiation risk assessment, radiation protection guides, and public health policy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fabrikant, J.I.

    1981-06-01

    The author deals with the scientific basis for establishing appropriate radiation protection guides, and its effect on the evaluation of societal activities concerned with the health effects in human populations exposed to low-level radiation. Methodology is discussed for estimating risks of radiation-induced cancer and genetically related ill-health in man, the sources of data, the dose-response models used, and the precision ascribed to the process. (PSB)

  3. Deaths by Suicide While on Active Duty, Active and Reserve Components, U.S. Armed Forces, 1998-2011

    DTIC Science & Technology

    2012-06-01

    ...have suggested that deployment to these conflicts increases a soldier’s risk of suicide and have...estimated a proportion of suicides that may be related to deployment.6,10 Such relationships are plausible but difficult to characterize precisely...because many correlates of risk for suicide are closely associated with wartime deployments (e.g., access to weapons, high operational tempos

  4. Validation study in four health-care databases: upper gastrointestinal bleeding misclassification affects precision but not magnitude of drug-related upper gastrointestinal bleeding risk.

    PubMed

    Valkhoff, Vera E; Coloma, Preciosa M; Masclee, Gwen M C; Gini, Rosa; Innocenti, Francesco; Lapi, Francesco; Molokhia, Mariam; Mosseveld, Mees; Nielsson, Malene Schou; Schuemie, Martijn; Thiessard, Frantz; van der Lei, Johan; Sturkenboom, Miriam C J M; Trifirò, Gianluca

    2014-08-01

    To evaluate the accuracy of disease codes and free text in identifying upper gastrointestinal bleeding (UGIB) from electronic health-care records (EHRs). We conducted a validation study in four European EHR databases, namely Integrated Primary Care Information (IPCI), Health Search/CSD Patient Database (HSD), ARS, and Aarhus, in which we identified UGIB cases using free text or disease codes: (1) International Classification of Disease (ICD)-9 (HSD, ARS); (2) ICD-10 (Aarhus); and (3) International Classification of Primary Care (ICPC) (IPCI). From each database, we randomly selected and manually reviewed 200 cases to calculate positive predictive values (PPVs). We employed different case definitions to assess the effect of outcome misclassification on estimation of the risk of drug-related UGIB. PPV was 22% [95% confidence interval (CI): 16, 28] and 21% (95% CI: 16, 28) in IPCI for free text and ICPC codes, respectively. PPV was 91% (95% CI: 86, 95) for ICD-9 codes and 47% (95% CI: 35, 59) for free text in HSD. PPV for ICD-9 codes in ARS was 72% (95% CI: 65, 78) and 77% (95% CI: 69, 83) for ICD-10 codes (Aarhus). More specific definitions did not have a significant impact on the estimated risk of drug-related UGIB, apart from yielding wider CIs. ICD-9-CM and ICD-10 disease codes have good PPV in identifying UGIB from EHRs; less granular terminology (ICPC) may require additional strategies. Use of more specific UGIB definitions affects precision, but not magnitude, of risk estimates. Copyright © 2014 Elsevier Inc. All rights reserved.
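
    The PPV calculations reported above amount to a binomial proportion from 200 manually reviewed cases per database; a minimal sketch with a Wilson 95% CI is shown below. The count of 182/200 is an assumed value chosen only to roughly reproduce the 91% PPV reported for ICD-9 codes in HSD.

    ```python
    from math import sqrt

    def ppv_wilson(confirmed, reviewed, z=1.96):
        """PPV (confirmed / reviewed) with a Wilson score 95% confidence interval."""
        p = confirmed / reviewed
        denom = 1 + z ** 2 / reviewed
        centre = (p + z ** 2 / (2 * reviewed)) / denom
        half = (z / denom) * sqrt(p * (1 - p) / reviewed + z ** 2 / (4 * reviewed ** 2))
        return p, centre - half, centre + half

    print(ppv_wilson(182, 200))   # approximately (0.91, 0.86, 0.94)
    ```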

  5. Bayesian Power Prior Analysis and Its Application to Operational Risk and Rasch Model

    ERIC Educational Resources Information Center

    Zhang, Honglian

    2010-01-01

    When sample size is small, informative priors can be valuable in increasing the precision of estimates. Pooling historical data and current data with equal weights under the assumption that both of them are from the same population may be misleading when heterogeneity exists between historical data and current data. This is particularly true when…

  6. Noninvasive control of dental calculus removal: qualification of two fluorescence methods

    NASA Astrophysics Data System (ADS)

    Gonchukov, S.; Sukhinina, A.; Bakhmutov, D.; Biryukova, T.

    2013-02-01

    The main condition for periodontitis prevention is the complete removal of calculus from the tooth surface. This procedure should be performed without harming the adjacent unaffected tooth tissues. Nevertheless, the problem of sensitively and precisely locating the tooth-calculus interface remains, along with the potential risk of hard tissue damage. In this work it was shown that fluorescence diagnostics during calculus removal can be successfully used for precise noninvasive detection of the calculus-tooth interface. A simple implementation of this method, which does not require a spectrometer, can be employed. Such a simple calculus detection set-up can be integrated with calculus removal devices.

  7. Environmental risk assessment of water quality in harbor areas: a new methodology applied to European ports.

    PubMed

    Gómez, Aina G; Ondiviela, Bárbara; Puente, Araceli; Juanes, José A

    2015-05-15

    This work presents a standard and unified procedure for assessment of environmental risks at the contaminant source level in port aquatic systems. Using this method, port managers and local authorities will be able to hierarchically classify environmental hazards and proceed with the most suitable management actions. This procedure combines rigorously selected parameters and indicators to estimate the environmental risk of each contaminant source based on its probability, consequences and vulnerability. The spatio-temporal variability of multiple stressors (agents) and receptors (endpoints) is taken into account to provide accurate estimations for application of precisely defined measures. The developed methodology is tested on a wide range of different scenarios via application in six European ports. The validation process confirms its usefulness, versatility and adaptability as a management tool for port water quality in Europe and worldwide. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Lung Cancer Risk Associated with Regulated and Unregulated Chrysotile Asbestos Fibers.

    PubMed

    Hamra, Ghassan B; Richardson, David B; Dement, John; Loomis, Dana

    2017-03-01

    Regulation of asbestos fibers in the workplace is partly determined by which fibers can be visually counted. However, a majority of fibers are too short and thin to count this way and are, consequently, not subject to regulation. We estimate lung cancer risk associated with asbestos fibers of varying length and width. We apply an order-constrained prior to leverage external information from toxicological studies of asbestos health effects. This prior assumes that risk from asbestos fibers increases with increasing length and decreases with increasing width. When we apply a shared mean for the effect of all asbestos fiber exposure groups, the rate ratios for each fiber group per unit exposure appear mostly equal. Rate ratio estimates for fibers of diameter <0.25 μm and length <1.5 and 1.5-5.0 μm are the most precise. When applying an order-constrained prior, we find that estimates of lung cancer rate ratio per unit of exposure to unregulated fibers 20-40 and >40 μm in the thinnest fiber group are similar in magnitude to estimates of risk associated with long fibers in the regulated fraction of airborne asbestos fibers. Rate ratio estimates for longer fibers are larger than those for shorter fibers, but thicker and thinner fibers do not differ as the toxicologically derived prior had expected. Credible intervals for fiber size-specific risk estimates overlap; thus, we cannot conclude that there are substantial differences in effect by fiber size. Nonetheless, our results suggest that some unregulated asbestos fibers may be associated with increased incidence of lung cancer.

  9. Historical estimation of diesel exhaust exposure in a cohort study of U.S. railroad workers and lung cancer.

    PubMed

    Laden, Francine; Hart, Jaime E; Eschenroeder, Alan; Smith, Thomas J; Garshick, Eric

    2006-09-01

    We have previously shown an elevated risk of lung cancer mortality in diesel exhaust exposed railroad workers. To reduce exposure misclassification, we obtained extensive historical information on diesel locomotives used by each railroad. Starting in 1945, we calculated the rate each railroad converted from steam to diesel, creating annual railroad-specific weighting factors for the probability of diesel exposure. We also estimated the average annual exposure intensity based on emission factors. The U.S. Railroad Retirement Board provided railroad assignment and work histories for 52,812 workers hired between 1939-1949, for whom we ascertained mortality from 1959-1996. Among workers hired after 1945, as diesel locomotives were introduced, the relative risk of lung cancer for any exposure was 1.77 (95% CI = 1.50-2.09), and there was evidence of an exposure-response relationship with exposure duration. Exposed workers hired before 1945 had a relative risk of 1.30 (95% CI = 1.19-1.43) for any exposure and there was no evidence of a dose response with duration. There was no evidence of increasing risk using estimated measures of intensity although the overall lung cancer risk remained elevated. In conclusion, although precise historical estimates of exposure are not available, weighting factors helped better define the exposure-response relationship of diesel exhaust with lung cancer mortality.

  10. Comparative precision of age estimates from two southern reservoir populations of paddlefish [Polyodon spathula (Walbaum, 1792)]

    USGS Publications Warehouse

    Long, James M.; Nealis, Ashley

    2017-01-01

    The aim of the study was to determine whether location and sex affected the precision of age estimates in two southern reservoir populations of paddlefish [Polyodon spathula (Walbaum, 1792)]. From 589 paddlefish collected in Grand Lake and Keystone Lake, Oklahoma in 2011, ages were estimated from dentaries by three independent readers, and precision was compared between locations and sexes using the coefficient of variation. Ages were more precisely estimated from Grand Lake and from females.

  11. Identifying optimal dosage regimes under safety constraints: An application to long term opioid treatment of chronic pain.

    PubMed

    Laber, Eric B; Wu, Fan; Munera, Catherine; Lipkovich, Ilya; Colucci, Salvatore; Ripa, Steve

    2018-04-30

    There is growing interest and investment in precision medicine as a means to provide the best possible health care. A treatment regime formalizes precision medicine as a sequence of decision rules, one per clinical intervention period, that specify if, when and how current treatment should be adjusted in response to a patient's evolving health status. It is standard to define a regime as optimal if, when applied to a population of interest, it maximizes the mean of some desirable clinical outcome, such as efficacy. However, in many clinical settings, a high-quality treatment regime must balance multiple competing outcomes; eg, when a high dose is associated with substantial symptom reduction but a greater risk of an adverse event. We consider the problem of estimating the most efficacious treatment regime subject to constraints on the risk of adverse events. We combine nonparametric Q-learning with policy-search to estimate a high-quality yet parsimonious treatment regime. This estimator applies to both observational and randomized data, as well as settings with variable, outcome-dependent follow-up, mixed treatment types, and multiple time points. This work is motivated by and framed in the context of dosing for chronic pain; however, the proposed framework can be applied generally to estimate a treatment regime which maximizes the mean of one primary outcome subject to constraints on one or more secondary outcomes. We illustrate the proposed method using data pooled from 5 open-label flexible dosing clinical trials for chronic pain. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons, Ltd.

  12. Evaluating cost-efficiency and accuracy of hunter harvest survey designs

    USGS Publications Warehouse

    Lukacs, P.M.; Gude, J.A.; Russell, R.E.; Ackerman, B.B.

    2011-01-01

    Effective management of harvested wildlife often requires accurate estimates of the number of animals harvested annually by hunters. A variety of techniques exist to obtain harvest data, such as hunter surveys, check stations, mandatory reporting requirements, and voluntary reporting of harvest. Agencies responsible for managing harvested wildlife such as deer (Odocoileus spp.), elk (Cervus elaphus), and pronghorn (Antilocapra americana) are challenged with balancing the cost of data collection versus the value of the information obtained. We compared precision, bias, and relative cost of several common strategies, including hunter self-reporting and random sampling, for estimating hunter harvest using a realistic set of simulations. Self-reporting with a follow-up survey of hunters who did not report produces the best estimate of harvest in terms of precision and bias, but it is also, by far, the most expensive technique. Self-reporting with no follow-up survey risks very large bias in harvest estimates, and the cost increases with increased response rate. Probability-based sampling provides a substantial cost savings, though accuracy can be affected by nonresponse bias. We recommend stratified random sampling with a calibration estimator used to reweight the sample based on the proportions of hunters responding in each covariate category as the best option for balancing cost and accuracy. © 2011 The Wildlife Society.
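
    A minimal sketch of the estimation step behind the recommended design: respondents in each covariate category are reweighted so the category contributes in proportion to the licensed-hunter population, which corrects for differential response before totaling harvest. All counts, rates, and category labels are invented for illustration.

    ```python
    import numpy as np
    import pandas as pd

    # Known population counts of licence holders by category (e.g., resident status).
    population_counts = {"resident": 80_000, "nonresident": 20_000}

    # Survey respondents: category and reported harvest (1 animal taken or 0).
    rng = np.random.default_rng(0)
    resp = pd.DataFrame({
        "category": ["resident"] * 600 + ["nonresident"] * 400,
        "harvest":  np.r_[rng.binomial(1, 0.25, 600), rng.binomial(1, 0.40, 400)],
    })

    # Calibration weight: population count / respondent count within each category.
    n_resp = resp.groupby("category").size()
    resp["weight"] = resp["category"].map(
        {c: population_counts[c] / n_resp[c] for c in population_counts})

    total_harvest = (resp["weight"] * resp["harvest"]).sum()
    print(round(total_harvest))   # estimated total harvest for the licensed population
    ```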

  13. Refined estimates of local recurrence risks by DCIS score adjusting for clinicopathological features: a combined analysis of ECOG-ACRIN E5194 and Ontario DCIS cohort studies.

    PubMed

    Rakovitch, E; Gray, R; Baehner, F L; Sutradhar, R; Crager, M; Gu, S; Nofech-Mozes, S; Badve, S S; Hanna, W; Hughes, L L; Wood, W C; Davidson, N E; Paszat, L; Shak, S; Sparano, J A; Solin, L J

    2018-06-01

    Better tools are needed to estimate local recurrence (LR) risk after breast-conserving surgery (BCS) for DCIS. The DCIS score (DS) was validated as a predictor of LR in E5194 and the Ontario DCIS cohort (ODC) after BCS. We combined data from E5194 and ODC, adjusting for clinicopathological factors, to provide refined estimates of the 10-year risk of LR after treatment by BCS alone. Data from E5194 and ODC were combined. Patients with positive margins or multifocality were excluded. Identical Cox regression models were fit for each study. Patient-specific meta-analysis was used to calculate precision-weighted estimates of 10-year LR risk by DS, age, tumor size and year of diagnosis. The combined cohort includes 773 patients. The DS, age at diagnosis, tumor size and year of diagnosis provided independent prognostic information on the 10-year LR risk (p ≤ 0.009). Hazard ratios from the E5194 and ODC cohorts were similar for the DS (2.48, 1.95 per 50 units), tumor size ≤ 1 versus > 1-2.5 cm (1.45, 1.47), age ≥ 50 versus < 50 years (0.61, 0.84) and year ≥ 2000 (0.67, 0.49). Utilization of the DS combined with tumor size and age at diagnosis identified more women with very low (≤ 8%) or higher (> 15%) 10-year LR risk after BCS alone than utilization of the DS alone or clinicopathological factors alone. The combined analysis provides refined estimates of 10-year LR risk after BCS for DCIS. Adding information on tumor size and age at diagnosis to the DS, adjusting for year of diagnosis, provides improved LR risk estimates to guide treatment decision making.
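
    "Precision-weighted" here presumably refers to inverse-variance weighting of the cohort-specific estimates for a given patient profile; a generic sketch under that assumption is shown below, with placeholder numbers rather than values from E5194 or the Ontario cohort.

    ```python
    import numpy as np

    def precision_weighted(estimates, std_errors):
        """Inverse-variance (precision-weighted) combination of estimates."""
        w = 1.0 / np.asarray(std_errors) ** 2          # precision = 1 / variance
        est = np.asarray(estimates)
        combined = np.sum(w * est) / np.sum(w)
        combined_se = np.sqrt(1.0 / np.sum(w))
        return combined, combined_se

    # e.g. 10-year LR risk of 0.12 (SE 0.03) in one cohort and 0.10 (SE 0.02) in the
    # other, for one DS / age / tumour-size profile, combined on the log scale.
    log_est = np.log([0.12, 0.10])
    log_se = np.array([0.03 / 0.12, 0.02 / 0.10])      # delta-method SEs on the log scale
    comb, se = precision_weighted(log_est, log_se)
    print(np.exp(comb), np.exp([comb - 1.96 * se, comb + 1.96 * se]))
    ```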

  14. Performance of Prognostic Risk Scores in Chronic Heart Failure Patients Enrolled in the European Society of Cardiology Heart Failure Long-Term Registry.

    PubMed

    Canepa, Marco; Fonseca, Candida; Chioncel, Ovidiu; Laroche, Cécile; Crespo-Leiro, Maria G; Coats, Andrew J S; Mebazaa, Alexandre; Piepoli, Massimo F; Tavazzi, Luigi; Maggioni, Aldo P

    2018-06-01

    This study compared the performance of major heart failure (HF) risk models in predicting mortality and examined their utilization using data from a contemporary multinational registry. Several prognostic risk scores have been developed for ambulatory HF patients, but their precision is still inadequate and their use limited. This registry enrolled patients with HF seen in participating European centers between May 2011 and April 2013. The following scores designed to estimate 1- to 2-year all-cause mortality were calculated in each participant: CHARM (Candesartan in Heart Failure-Assessment of Reduction in Mortality), GISSI-HF (Gruppo Italiano per lo Studio della Streptochinasi nell'Infarto Miocardico-Heart Failure), MAGGIC (Meta-analysis Global Group in Chronic Heart Failure), and SHFM (Seattle Heart Failure Model). Patients with hospitalized HF (n = 6,920) and ambulatory HF patients missing any variable needed to estimate each score (n = 3,267) were excluded, leaving a final sample of 6,161 patients. At 1-year follow-up, 5,653 of 6,161 patients (91.8%) were alive. The observed-to-predicted survival ratios (CHARM: 1.10, GISSI-HF: 1.08, MAGGIC: 1.03, and SHFM: 0.98) suggested some overestimation of mortality by all scores except the SHFM. Overprediction occurred steadily across levels of risk using both the CHARM and the GISSI-HF, whereas the SHFM underpredicted mortality in all risk groups except the highest. The MAGGIC showed the best overall accuracy (area under the curve [AUC] = 0.743), similar to the GISSI-HF (AUC = 0.739; p = 0.419) but better than the CHARM (AUC = 0.729; p = 0.068) and particularly better than the SHFM (AUC = 0.714; p = 0.018). Less than 1% of patients received a prognostic estimate from their enrolling physician. Performance of prognostic risk scores is still limited and physicians are reluctant to use them in daily practice. The need for contemporary, more precise prognostic tools should be considered. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  15. Estimating maneuvers for precise relative orbit determination using GPS

    NASA Astrophysics Data System (ADS)

    Allende-Alba, Gerardo; Montenbruck, Oliver; Ardaens, Jean-Sébastien; Wermuth, Martin; Hugentobler, Urs

    2017-01-01

    Precise relative orbit determination is an essential element for the generation of science products from distributed instrumentation of formation flying satellites in low Earth orbit. According to the mission profile, the required formation is typically maintained and/or controlled by executing maneuvers. In order to generate consistent and precise orbit products, a strategy for maneuver handling is mandatory in order to avoid discontinuities or precision degradation before, after and during maneuver execution. Precise orbit determination offers the possibility of maneuver estimation in an adjustment of single-satellite trajectories using GPS measurements. However, a consistent formulation of a precise relative orbit determination scheme requires the implementation of a maneuver estimation strategy which can be used, in addition, to improve the precision of maneuver estimates by drawing upon the use of differential GPS measurements. The present study introduces a method for precise relative orbit determination based on a reduced-dynamic batch processing of differential GPS pseudorange and carrier phase measurements, which includes maneuver estimation as part of the relative orbit adjustment. The proposed method has been validated using flight data from space missions with different rates of maneuvering activity, including the GRACE, TanDEM-X and PRISMA missions. The results show the feasibility of obtaining precise relative orbits without degradation in the vicinity of maneuvers as well as improved maneuver estimates that can be used for better maneuver planning in flight dynamics operations.

  16. A flexible Bayesian hierarchical model of preterm birth risk among US Hispanic subgroups in relation to maternal nativity and education

    PubMed Central

    2011-01-01

    Background Previous research has documented heterogeneity in the effects of maternal education on adverse birth outcomes by nativity and Hispanic subgroup in the United States. In this article, we considered the risk of preterm birth (PTB) using 9 years of vital statistics birth data from New York City. We employed finer categorizations of exposure than used previously and estimated the risk dose-response across the range of education by nativity and ethnicity. Methods Using Bayesian random effects logistic regression models with restricted quadratic spline terms for years of completed maternal education, we calculated and plotted the estimated posterior probabilities of PTB (gestational age < 37 weeks) for each year of education by ethnic and nativity subgroups adjusted for only maternal age, as well as with more extensive covariate adjustments. We then estimated the posterior risk difference between native and foreign born mothers by ethnicity over the continuous range of education exposures. Results The risk of PTB varied substantially by education, nativity and ethnicity. Native born groups showed higher absolute risk of PTB and declining risk associated with higher levels of education beyond about 10 years, as did foreign-born Puerto Ricans. For most other foreign born groups, however, risk of PTB was flatter across the education range. For Mexicans, Central Americans, Dominicans, South Americans and "Others", the protective effect of foreign birth diminished progressively across the educational range. Only for Puerto Ricans was there no nativity advantage for the foreign born, although small numbers of foreign born Cubans limited precision of estimates for that group. Conclusions Using flexible Bayesian regression models with random effects allowed us to estimate absolute risks without strong modeling assumptions. Risk comparisons for any sub-groups at any exposure level were simple to calculate. Shrinkage of posterior estimates through the use of random effects allowed for finer categorization of exposures without restricting joint effects to follow a fixed parametric scale. Although foreign born Hispanic women with the least education appeared to generally have low risk, this seems likely to be a marker for unmeasured environmental and behavioral factors, rather than a causally protective effect of low education itself. PMID:21504612

  17. A flexible Bayesian hierarchical model of preterm birth risk among US Hispanic subgroups in relation to maternal nativity and education.

    PubMed

    Kaufman, Jay S; MacLehose, Richard F; Torrone, Elizabeth A; Savitz, David A

    2011-04-19

    Previous research has documented heterogeneity in the effects of maternal education on adverse birth outcomes by nativity and Hispanic subgroup in the United States. In this article, we considered the risk of preterm birth (PTB) using 9 years of vital statistics birth data from New York City. We employed finer categorizations of exposure than used previously and estimated the risk dose-response across the range of education by nativity and ethnicity. Using Bayesian random effects logistic regression models with restricted quadratic spline terms for years of completed maternal education, we calculated and plotted the estimated posterior probabilities of PTB (gestational age < 37 weeks) for each year of education by ethnic and nativity subgroups adjusted for only maternal age, as well as with more extensive covariate adjustments. We then estimated the posterior risk difference between native and foreign born mothers by ethnicity over the continuous range of education exposures. The risk of PTB varied substantially by education, nativity and ethnicity. Native born groups showed higher absolute risk of PTB and declining risk associated with higher levels of education beyond about 10 years, as did foreign-born Puerto Ricans. For most other foreign born groups, however, risk of PTB was flatter across the education range. For Mexicans, Central Americans, Dominicans, South Americans and "Others", the protective effect of foreign birth diminished progressively across the educational range. Only for Puerto Ricans was there no nativity advantage for the foreign born, although small numbers of foreign born Cubans limited precision of estimates for that group. Using flexible Bayesian regression models with random effects allowed us to estimate absolute risks without strong modeling assumptions. Risk comparisons for any sub-groups at any exposure level were simple to calculate. Shrinkage of posterior estimates through the use of random effects allowed for finer categorization of exposures without restricting joint effects to follow a fixed parametric scale. Although foreign born Hispanic women with the least education appeared to generally have low risk, this seems likely to be a marker for unmeasured environmental and behavioral factors, rather than a causally protective effect of low education itself.

  18. Gene expression during blow fly development: improving the precision of age estimates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2011-01-01

    Forensic entomologists use size and developmental stage to estimate blow fly age, and from those, a postmortem interval. Since such estimates are generally accurate but often lack precision, particularly in the older developmental stages, alternative aging methods would be advantageous. Presented here is a means of incorporating developmentally regulated gene expression levels into traditional stage and size data, with a goal of more precisely estimating developmental age of immature Lucilia sericata. Generalized additive models of development showed improved statistical support compared to models that did not include gene expression data, resulting in an increase in estimate precision, especially for postfeeding third instars and pupae. The models were then used to make blind estimates of development for 86 immature L. sericata raised on rat carcasses. Overall, inclusion of gene expression data resulted in increased precision in aging blow flies. © 2010 American Academy of Forensic Sciences.

  19. Estimating the economic impact of seismic activity in Kyrgyzstan

    NASA Astrophysics Data System (ADS)

    Pittore, Massimiliano; Sousa, Luis; Grant, Damian; Fleming, Kevin; Parolai, Stefano; Free, Matthew; Moldobekov, Bolot; Takeuchi, Ko

    2017-04-01

    Estimating the short- and long-term economic impact of large-scale damaging events such as earthquakes, tsunamis or tropical storms is an important component of risk assessment, whose outcomes are routinely used to improve risk awareness, optimize investments in prevention and mitigation actions, as well as to customize insurance and reinsurance rates to specific geographical regions or single countries. Such estimations can be carried out by modeling the whole causal process, from hazard assessment to the estimation of loss for specific categories of assets. This approach allows a precise description of the various physical mechanisms contributing to direct seismic losses. However, it should reflect the underlying epistemic and random uncertainties in all involved components in a meaningful way. Within a project sponsored by the World Bank, a seismic risk study for the Kyrgyz Republic has been conducted, focusing on the assessment of social and economic impacts in terms of direct losses of the residential and public building stocks. Probabilistic estimates based on stochastic event catalogs have been computed and integrated with the simulation of specific earthquake scenarios. Although very few relevant data are available in the region on the economic consequences of past damaging events, the proposed approach sets a benchmark for decision makers and policy holders to better understand the short and long term consequences of earthquakes in the region. The presented results confirm the high level of seismic risk of the Kyrgyz Republic territory, outlining the most affected regions; thus advocating for significant Disaster Risk Reduction (DRR) measures to be implemented by local decision- and policy-makers.

  20. [Fluorescence control of dental calculus removal].

    PubMed

    Bakhmutov, D N; Gonchukov, S A; Lonkina, T V; Sukhinina, A V

    2012-01-01

    The main condition for periodontitis prevention is the complete removal of calculus from the tooth surface. This procedure should be performed without harming the adjacent unaffected tooth tissues. Nevertheless, the problem of sensitively and precisely locating the tooth-calculus interface remains, along with the potential risk of hard tissue damage. In this work it was shown that fluorescence diagnostics during calculus removal can be successfully used for precise detection of the tooth-calculus interface. A simple implementation of this method, which does not require a spectrometer, can be employed. Such a simple calculus detection set-up can be integrated with calculus removal devices (such as ultrasonic or laser devices).

  1. Carbon Monoxide Toxicity

    PubMed Central

    Aniol, Michael J.

    1992-01-01

    Of all fatal poisonings in the United States, an estimated half are due to carbon monoxide. The number of non-lethal poisonings due to carbon monoxide is difficult to estimate because signs and symptoms of carbon monoxide poisoning cover a wide spectrum and mimic other disorders. Misdiagnosis is serious, as the patient often returns to the contaminated environment. Those not receiving proper treatment are at significant risk, as high as 10% to 12%, of developing late neurological sequelae. The diagnosis of carbon monoxide poisoning depends upon precise history taking, careful physical examination, and a high index of suspicion. PMID:21221282

  2. Decision-making in an era of cancer prevention via aspirin: New Zealand needs updated guidelines and risk calculators.

    PubMed

    Wilson, Nick; Selak, Vanessa; Blakely, Tony; Leung, William; Clarke, Philip; Jackson, Rod; Knight, Josh; Nghiem, Nhung

    2016-03-11

    Based on new systematic reviews of the evidence, the US Preventive Services Task Force has drafted updated guidelines on the use of low-dose aspirin for the primary prevention of both cardiovascular disease (CVD) and cancer. The Task Force generally recommends consideration of aspirin in adults aged 50-69 years with 10-year CVD risk of at least 10%, in whom absolute health gain (reduction of CVD and cancer) is estimated to exceed absolute health loss (increase in bleeds). With the ongoing decline in CVD, current risk calculators for New Zealand are probably outdated, so it is difficult to be precise about what proportion of the population is in this risk category (roughly equivalent to 5-year CVD risk ≥5%). Nevertheless, we suspect that most smokers aged 50-69 years, and some non-smokers, would probably meet the new threshold for taking low-dose aspirin. The country therefore needs updated guidelines and risk calculators that are ideally informed by estimates of absolute net health gain (in quality-adjusted life-years (QALYs) per person) and cost-effectiveness. Other improvements to risk calculators include: epidemiological rigour (eg, by addressing competing mortality); providing enhanced graphical display of risk to enhance risk communication; and possibly capturing the issues of medication disutility and comparison with lifestyle changes.

  3. Working postures of dental students: ergonomic analysis using the Ovako Working Analysis System and rapid upper limb assessment.

    PubMed

    Petromilli Nordi Sasso Garcia, Patrícia; Polli, Gabriela Scatimburgo; Campos, Juliana Alvares Duarte Bonini

    2013-01-01

    As dentistry is a profession that demands precise, manipulative hand movements, musculoskeletal disorders are among its most common occupational diseases. This study estimated the risk of musculoskeletal disorders developing in dental students using the Ovako Working Analysis System (OWAS) and Rapid Upper Limb Assessment (RULA) methods, and assessed the diagnostic agreement between the 2 methods. Students (n = 75) enrolled in the final undergraduate year at the Araraquara School of Dentistry (UNESP) were studied. Photographs of students performing diverse clinical procedures (n = 283) were taken with a digital camera and assessed using OWAS and RULA. A risk score was attributed to each procedure performed by the student. The prevalence of the risk of musculoskeletal disorders was estimated as a point estimate with a 95% CI. To assess the agreement between the 2 methods, Kappa statistics with linear weighting were used. The level of significance adopted was 5%. There was a high prevalence of the medium risk score for musculoskeletal disorders in the dental students evaluated according to the OWAS method (p = 97.88%; 95% CI: 96.20-99.56%), and a high prevalence of the high risk score (p = 40.6%; 95% CI: 34.9-46.4%) and extremely high risk score (p = 59.4%; 95% CI: 53.6-65.1%) according to the RULA method. Null agreement (k = 0) was verified in the risk diagnoses of the tested methods. The risk of musculoskeletal disorders in dental students estimated by the OWAS method was medium, whereas the same risk by the RULA method was extremely high. There was no diagnostic agreement between the OWAS and RULA methods.
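
    The agreement statistic used above, kappa with linear weighting, can be computed from the cross-classification of the two methods' ordinal risk categories; the confusion matrix below is invented for illustration.

    ```python
    import numpy as np

    def weighted_kappa(confusion):
        """Linearly weighted kappa. confusion[i, j]: count of subjects rated
        category i by method 1 and category j by method 2 (ordinal categories)."""
        confusion = np.asarray(confusion, dtype=float)
        k = confusion.shape[0]
        i, j = np.indices((k, k))
        disagreement = np.abs(i - j) / (k - 1)            # linear disagreement weights
        observed = confusion / confusion.sum()
        expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
        return 1.0 - (disagreement * observed).sum() / (disagreement * expected).sum()

    # Hypothetical 4x4 cross-classification of OWAS vs RULA risk categories.
    example = [[10,  5,  1, 0],
               [ 6, 20,  8, 1],
               [ 2,  9, 15, 4],
               [ 0,  2,  6, 9]]
    print(round(weighted_kappa(example), 3))
    ```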

  4. Mapping and Modelling Malaria Risk Areas Using Climate, Socio-Demographic and Clinical Variables in Chimoio, Mozambique.

    PubMed

    Ferrao, Joao L; Niquisse, Sergio; Mendes, Jorge M; Painho, Marco

    2018-04-19

    Background: Malaria continues to be a major public health concern in Africa. Approximately 3.2 billion people worldwide are still at risk of contracting malaria, and 80% of deaths caused by malaria are concentrated in only 15 countries, most of which are in Africa. These high-burden countries have achieved a lower than average reduction of malaria incidence and mortality, and Mozambique is among these countries. Malaria eradication is therefore one of Mozambique’s main priorities. Few studies on malaria have been carried out in Chimoio, and there is no malaria risk map of the area. Such a map is important for identifying areas at risk and for applying precision public health approaches. Using GIS-based spatial modelling techniques, the goal of this article was to map and model malaria risk areas using climate, socio-demographic and clinical variables in Chimoio, Mozambique. Methods: A 30 m × 30 m Landsat image, ArcGIS 10.2 and BioclimData were used. A conceptual model for spatial problems was used to create the final risk map. The risk factors used were: mean temperature, precipitation, altitude, slope, distance to water bodies, distance to roads, NDVI, land use and land cover, malaria prevalence and population density. Layers were created in a raster dataset. For class value comparisons between layers, numeric values were assigned to the classes within each map layer, giving them the same importance. The input datasets were ranked and weighted according to their suitability, and the reclassified outputs were combined. Results: Chimoio presented 96% moderate-risk and 4% high-risk areas. The map showed that the central and south-west “Residential areas”, namely Centro Hipico, Trangapsso, Bairro 5 and 1° de Maio, had a high risk of malaria, while the rest of the residential areas had a moderate risk. Conclusions: The entire Chimoio population is at risk of contracting malaria; precise estimation of malaria risk therefore has important implications for precision public health and for planning effective control measures, such as the proper time and place to spray against vectors, the distribution of bed nets and other interventions.
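
    A toy version of the weighted-overlay step described in the Methods: each layer is reclassified onto a common ordinal scale and the layers are combined with weights reflecting their assumed suitability for malaria transmission. The arrays, class breaks and weights below are invented; the actual analysis used ArcGIS 10.2 with the layers listed above.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    shape = (100, 100)                        # stand-in for 30 m x 30 m raster cells

    layers = {
        "mean_temp":  rng.uniform(15, 30, shape),      # degrees C
        "precip":     rng.uniform(400, 1400, shape),   # mm / year
        "dist_water": rng.uniform(0, 5000, shape),     # metres to water bodies
    }

    def reclassify(arr, breaks):
        """Map continuous values to ordinal classes 1..len(breaks)+1."""
        return np.digitize(arr, breaks) + 1

    classified = {
        "mean_temp":  reclassify(layers["mean_temp"], [20, 25]),          # warmer -> higher class
        "precip":     reclassify(layers["precip"], [700, 1000]),
        "dist_water": 4 - reclassify(layers["dist_water"], [1000, 3000]), # closer -> higher class
    }

    weights = {"mean_temp": 0.40, "precip": 0.35, "dist_water": 0.25}     # assumed ranking

    risk = sum(weights[k] * classified[k] for k in weights)
    high_share = (risk >= np.percentile(risk, 96)).mean()   # share of cells in the top band
    print(round(high_share, 3))
    ```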

  5. Bias in diet determination: incorporating traditional methods in Bayesian mixing models.

    PubMed

    Franco-Trecu, Valentina; Drago, Massimiliano; Riet-Sapriza, Federico G; Parnell, Andrew; Frau, Rosina; Inchausti, Pablo

    2013-01-01

    There are no "universal methods" to determine the diet composition of predators. Most traditional methods are biased because of their reliance on differential digestibility and the recovery of hard items. By relying on assimilated food, stable isotope and Bayesian mixing models (SIMMs) resolve many biases of traditional methods. SIMMs can incorporate prior information (i.e. proportional diet composition) that may improve the precision in the estimated dietary composition. However, few studies have assessed the performance of traditional methods and SIMMs with and without informative priors to study the predators' diets. Here we compare the diet compositions of the South American fur seal and sea lions obtained by scat analysis and by SIMMs-UP (uninformative priors) and assess whether informative priors (SIMMs-IP) from the scat analysis improved the estimated diet composition compared to SIMMs-UP. According to the SIMM-UP, while pelagic species dominated the fur seal's diet, the sea lion's diet did not show a clear dominance of any prey. In contrast, SIMM-IP diet compositions were dominated by the same prey as in the scat analyses. When prior information influenced SIMMs' estimates, incorporating informative priors improved the precision in the estimated diet composition at the risk of inducing biases in the estimates. If prey isotopic data allow discriminating prey contributions to diets, informative priors should lead to more precise but unbiased estimates of diet composition. Just as estimates of diet composition obtained from traditional methods are critically interpreted because of their biases, care must be exercised when interpreting diet composition obtained by SIMMs-IP. The best approach to obtain a near-complete view of predators' diet composition should involve the simultaneous consideration of different sources of partial evidence (traditional methods, SIMM-UP and SIMM-IP) in the light of natural history of the predator species so as to reliably ascertain and weight the information yielded by each method.

  6. Bias in Diet Determination: Incorporating Traditional Methods in Bayesian Mixing Models

    PubMed Central

    Franco-Trecu, Valentina; Drago, Massimiliano; Riet-Sapriza, Federico G.; Parnell, Andrew; Frau, Rosina; Inchausti, Pablo

    2013-01-01

    There are no “universal methods” to determine the diet composition of predators. Most traditional methods are biased because of their reliance on differential digestibility and the recovery of hard items. By relying on assimilated food, stable isotope and Bayesian mixing models (SIMMs) resolve many biases of traditional methods. SIMMs can incorporate prior information (i.e. proportional diet composition) that may improve the precision in the estimated dietary composition. However, few studies have assessed the performance of traditional methods and SIMMs with and without informative priors to study the predators’ diets. Here we compare the diet compositions of the South American fur seal and sea lions obtained by scat analysis and by SIMMs-UP (uninformative priors) and assess whether informative priors (SIMMs-IP) from the scat analysis improved the estimated diet composition compared to SIMMs-UP. According to the SIMM-UP, while pelagic species dominated the fur seal’s diet, the sea lion’s diet did not show a clear dominance of any prey. In contrast, SIMM-IP diet compositions were dominated by the same prey as in the scat analyses. When prior information influenced SIMMs’ estimates, incorporating informative priors improved the precision in the estimated diet composition at the risk of inducing biases in the estimates. If prey isotopic data allow discriminating prey contributions to diets, informative priors should lead to more precise but unbiased estimates of diet composition. Just as estimates of diet composition obtained from traditional methods are critically interpreted because of their biases, care must be exercised when interpreting diet composition obtained by SIMMs-IP. The best approach to obtain a near-complete view of predators’ diet composition should involve the simultaneous consideration of different sources of partial evidence (traditional methods, SIMM-UP and SIMM-IP) in the light of natural history of the predator species so as to reliably ascertain and weight the information yielded by each method. PMID:24224031

  7. Reliability and precision of pellet-group counts for estimating landscape-level deer density

    Treesearch

    David S. deCalesta

    2013-01-01

    This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...

  8. Time Delay Embedding Increases Estimation Precision of Models of Intraindividual Variability

    ERIC Educational Resources Information Center

    von Oertzen, Timo; Boker, Steven M.

    2010-01-01

    This paper investigates the precision of parameters estimated from local samples of time dependent functions. We find that "time delay embedding," i.e., structuring data prior to analysis by constructing a data matrix of overlapping samples, increases the precision of parameter estimates and in turn statistical power compared to standard…

  9. Using the Lorenz Curve to Characterize Risk Predictiveness and Etiologic Heterogeneity

    PubMed Central

    Mauguen, Audrey; Begg, Colin B.

    2017-01-01

    The Lorenz curve is a graphical tool that is used widely in econometrics. It represents the spread of a probability distribution, and its traditional use has been to characterize population distributions of wealth or income, or more specifically, inequalities in wealth or income. However, its utility in public health research has not been broadly established. The purpose of this article is to explain its special usefulness for characterizing the population distribution of disease risks, and in particular for identifying the precise disease burden that can be predicted to occur in segments of the population that are known to have especially high (or low) risks, a feature that is important for evaluating the yield of screening or other disease prevention initiatives. We demonstrate that, although the Lorenz curve represents the distribution of predicted risks in a population at risk for the disease, in fact it can be estimated from a case–control study conducted in the population without the need for information on absolute risks. We explore two different estimation strategies and compare their statistical properties using simulations. The Lorenz curve is a statistical tool that deserves wider use in public health research. PMID:27096256
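
    A minimal sketch of the Lorenz-curve idea for disease risk: individuals are ordered by predicted risk and the cumulative share of expected cases is plotted against the cumulative share of the population, so the curve reads directly as the case yield of screening the riskiest x%. The predicted risks below are simulated for illustration.

    ```python
    import numpy as np

    def lorenz_curve(predicted_risk):
        """Returns (population share, cumulative case share), with individuals
        taken in decreasing order of predicted risk."""
        r = np.sort(np.asarray(predicted_risk))[::-1]
        case_share = np.cumsum(r) / r.sum()
        pop_share = np.arange(1, len(r) + 1) / len(r)
        return pop_share, case_share

    risk = np.random.default_rng(3).beta(0.5, 20, size=10_000)   # skewed risk distribution
    pop, cases = lorenz_curve(risk)
    top10 = cases[np.searchsorted(pop, 0.10)]
    print(round(top10, 2))   # expected fraction of cases arising in the riskiest 10%
    ```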

  10. A new quantitative approach to measure perceived work-related stress in Italian employees.

    PubMed

    Cevenini, Gabriele; Fratini, Ilaria; Gambassi, Roberto

    2012-09-01

    We propose a method for a reliable quantitative measure of subjectively perceived occupational stress applicable in any company to enhance occupational safety and psychosocial health, to enable precise prevention policies and intervention and to improve work quality and efficiency. A suitable questionnaire was telephonically administered to a stratified sample of the whole Italian population of employees. Combined multivariate statistical methods, including principal component, cluster and discriminant analyses, were used to identify risk factors and to design a causal model for understanding work-related stress. The model explained the causal links of stress through employee perception of imbalance between job demands and resources for responding appropriately, by supplying a reliable U-shaped nonlinear stress index, expressed in terms of values of human systolic arterial pressure. Low, intermediate and high values indicated demotivation (or inefficiency), well-being and distress, respectively. Costs for stress-dependent productivity shortcomings were estimated at about 3.7% of national income from employment. The method identified useful structured information able to supply a simple and precise interpretation of employees' well-being and stress risk. Results could be compared with estimated national benchmarks to enable targeted intervention strategies to protect the health and safety of workers, and to reduce unproductive costs for firms.

  11. Risks of CIN 2+, CIN 3+, and Cancer by Cytology and Human Papillomavirus Status: The Foundation of Risk-Based Cervical Screening Guidelines.

    PubMed

    Demarco, Maria; Lorey, Thomas S; Fetterman, Barbara; Cheung, Li C; Guido, Richard S; Wentzensen, Nicolas; Kinney, Walter K; Poitras, Nancy E; Befano, Brian; Castle, Philip E; Schiffman, Mark

    2017-10-01

    The next round of the American Society for Colposcopy and Cervical Pathology (ASCCP)-sponsored cervical cancer screening and management guidelines will recommend clinical actions based on risk, rather than test-based algorithms. This article gives preliminary risk estimates for the screening setting, showing combinations of the 2 most important predictors, human papillomavirus (HPV) status and cytology result. Among 1,262,713 women aged 25 to 77 years co-tested with HC2 (Qiagen) and cytology at Kaiser Permanente Northern California, we estimated 0-5-year cumulative risk of cervical intraepithelial neoplasia (CIN) 2+, CIN 3+, and cancer for combinations of cytology (negative for intraepithelial lesion or malignancy [NILM], atypical squamous cells of undetermined significance [ASC-US], low-grade squamous intraepithelial lesion [LSIL], atypical squamous cells cannot exclude HSIL [ASC-H], high-grade squamous intraepithelial lesion [HSIL], atypical glandular cells [AGC]) and HPV status. Ninety percent of screened women had HPV-negative NILM and an extremely low risk of subsequent cancer. Five-year risks of CIN 3+ were lower after HPV negativity (0.12%) than after NILM (0.25%). Among HPV-negative women, 5-year risks for CIN 3+ were 0.10% for NILM, 0.44% for ASC-US, 1.8% for LSIL, 3.0% for ASC-H, 1.2% for AGC, and 29% for HSIL+ cytology (which was very rare). Among HPV-positive women, 5-year risks were 4.0% for NILM, 6.8% for ASC-US, 6.1% for LSIL, 28% for ASC-H, 30% for AGC, and 50% for HSIL+ cytology. As a foundation for the next guidelines revision, we confirmed with additional precision the risk estimates previously reported for combinations of HPV and cytology. Future analyses will estimate risks for women being followed in colposcopy clinic and posttreatment and will consider the role of risk modifiers such as age, HPV vaccine status, HPV type, and screening and treatment history.

  12. Risks of CIN 2+, CIN 3+, and Cancer by Cytology and Human Papillomavirus Status: The Foundation of Risk-Based Cervical Screening Guidelines

    PubMed Central

    Demarco, Maria; Lorey, Thomas S.; Fetterman, Barbara; Cheung, Li C.; Guido, Richard S.; Wentzensen, Nicolas; Kinney, Walter K.; Poitras, Nancy E.; Befano, Brian; Castle, Philip E.; Schiffman, Mark

    2017-01-01

    Objectives The next round of the American Society for Colposcopy and Cervical Pathology (ASCCP)-sponsored cervical cancer screening and management guidelines will recommend clinical actions based on risk, rather than test-based algorithms. This article gives preliminary risk estimates for the screening setting, showing combinations of the 2 most important predictors, human papillomavirus (HPV) status and cytology result. Materials and Methods Among 1,262,713 women aged 25 to 77 years co-tested with HC2 (Qiagen) and cytology at Kaiser Permanente Northern California, we estimated 0–5-year cumulative risk of cervical intraepithelial neoplasia (CIN) 2+, CIN 3+, and cancer for combinations of cytology (negative for intraepithelial lesion or malignancy [NILM], atypical squamous cells of undetermined significance [ASC-US], low-grade squamous intraepithelial lesion [LSIL], atypical squamous cells cannot exclude HSIL [ASC-H], high-grade squamous intraepithelial lesion [HSIL], atypical glandular cells [AGC]) and HPV status. Results Ninety percent of screened women had HPV-negative NILM and an extremely low risk of subsequent cancer. Five-year risks of CIN 3+ were lower after HPV negativity (0.12%) than after NILM (0.25%). Among HPV-negative women, 5-year risks for CIN 3+ were 0.10% for NILM, 0.44% for ASC-US, 1.8% for LSIL, 3.0% for ASC-H, 1.2% for AGC, and 29% for HSIL+ cytology (which was very rare). Among HPV-positive women, 5-year risks were 4.0% for NILM, 6.8% for ASC-US, 6.1% for LSIL, 28% for ASC-H, 30% for AGC, and 50% for HSIL+ cytology. Conclusions As a foundation for the next guidelines revision, we confirmed with additional precision the risk estimates previously reported for combinations of HPV and cytology. Future analyses will estimate risks for women being followed in colposcopy clinic and posttreatment and will consider the role of risk modifiers such as age, HPV vaccine status, HPV type, and screening and treatment history. PMID:28953116

  13. Precision of four otolith techniques for estimating age of white perch from a thermally altered reservoir

    USGS Publications Warehouse

    Snow, Richard A.; Porta, Michael J.; Long, James M.

    2018-01-01

    The White Perch Morone americana is an invasive species in many Midwestern states and is widely distributed in reservoir systems, yet little is known about the species' age structure and population dynamics. White Perch were first observed in Sooner Reservoir, a thermally altered cooling reservoir in Oklahoma, by the Oklahoma Department of Wildlife Conservation in 2006. It is unknown how thermally altered systems like Sooner Reservoir may affect the precision of White Perch age estimates. Previous studies have found that age structures from Largemouth Bass Micropterus salmoides and Bluegills Lepomis macrochirus from thermally altered reservoirs had false annuli, which increased error when estimating ages. Our objective was to quantify the precision of White Perch age estimates using four sagittal otolith preparation techniques (whole, broken, browned, and stained). Because Sooner Reservoir is thermally altered, we also wanted to identify the best month to collect a White Perch age sample based on aging precision. Ages of 569 White Perch (20–308 mm TL) were estimated using the four techniques. Age estimates from broken, stained, and browned otoliths ranged from 0 to 8 years; whole‐view otolith age estimates ranged from 0 to 7 years. The lowest mean coefficient of variation (CV) was obtained using broken otoliths, whereas the highest CV was observed using browned otoliths. July was the most precise month (lowest mean CV) for estimating age of White Perch, whereas April was the least precise month (highest mean CV). These results underscore the importance of knowing the best method to prepare otoliths for achieving the most precise age estimates and the best time of year to obtain those samples, as these factors may affect other estimates of population dynamics.
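
    The between-reader precision metric used in studies like this one is typically the mean coefficient of variation across repeated age reads. A minimal sketch under that assumption, with hypothetical reads rather than the paper's data:

    ```python
    import numpy as np

    def mean_cv(age_matrix):
        """Mean coefficient of variation (Chang-style) across fish.

        age_matrix: rows = fish, columns = independent age reads
        (e.g., two readers, or repeated reads of one otolith preparation).
        """
        ages = np.asarray(age_matrix, dtype=float)
        means = ages.mean(axis=1)
        sds = ages.std(axis=1, ddof=1)
        # Guard against division by zero for age-0 fish with identical reads
        cv = np.divide(sds, means, out=np.zeros_like(sds), where=means > 0)
        return 100.0 * cv.mean()

    # Hypothetical reads for five fish from two independent readers
    reads = [[3, 3], [5, 4], [2, 2], [7, 6], [1, 1]]
    print(f"Mean CV = {mean_cv(reads):.1f}%")
    ```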

  14. Validating precision estimates in horizontal wind measurements from a Doppler lidar

    DOE PAGES

    Newsom, Rob K.; Brewer, W. Alan; Wilczak, James M.; ...

    2017-03-30

    Results from a recent field campaign are used to assess the accuracy of wind speed and direction precision estimates produced by a Doppler lidar wind retrieval algorithm. The algorithm, which is based on the traditional velocity-azimuth-display (VAD) technique, estimates the wind speed and direction measurement precision using standard error propagation techniques, assuming the input data (i.e., radial velocities) to be contaminated by random, zero-mean errors. For this study, the lidar was configured to execute an 8-beam plan-position-indicator (PPI) scan once every 12 min during the 6-week deployment period. Several wind retrieval trials were conducted using different schemes for estimating the precision in the radial velocity measurements. Here, the resulting wind speed and direction precision estimates were compared to differences in wind speed and direction between the VAD algorithm and sonic anemometer measurements taken on a nearby 300 m tower.
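
    To illustrate the general idea of a VAD retrieval with error propagation, the sketch below fits horizontal wind components to radial velocities from a single PPI scan and propagates an assumed radial-velocity noise level to speed and direction precisions. The scan geometry, noise level, and wind values are hypothetical, and the retrieval used in the cited study may differ in detail.

    ```python
    import numpy as np

    def vad_fit(azimuths_deg, elevation_deg, radial_vel, sigma_r):
        """Least-squares VAD wind retrieval with first-order error propagation.

        Assumes each radial velocity carries independent, zero-mean noise with
        standard deviation sigma_r (m/s). Returns wind speed, direction, and
        their propagated 1-sigma precisions. Illustrative sketch only.
        """
        az = np.radians(azimuths_deg)
        el = np.radians(elevation_deg)
        # Design matrix for v_r = u*sin(az)*cos(el) + v*cos(az)*cos(el) + w*sin(el)
        A = np.column_stack([np.sin(az) * np.cos(el),
                             np.cos(az) * np.cos(el),
                             np.full_like(az, np.sin(el))])
        x, *_ = np.linalg.lstsq(A, radial_vel, rcond=None)
        u, v, _w = x
        cov = sigma_r**2 * np.linalg.inv(A.T @ A)        # parameter covariance
        speed = np.hypot(u, v)
        # Meteorological direction (degrees the wind blows from)
        direction = (np.degrees(np.arctan2(u, v)) + 180.0) % 360.0
        # Jacobian of (speed, direction) with respect to (u, v)
        J = np.array([[u / speed, v / speed],
                      [np.degrees(v / speed**2), np.degrees(-u / speed**2)]])
        cov_sd = J @ cov[:2, :2] @ J.T
        return speed, direction, np.sqrt(cov_sd[0, 0]), np.sqrt(cov_sd[1, 1])

    # Hypothetical 8-beam PPI scan at 60 degrees elevation, noise-free radial winds
    az8 = np.arange(0.0, 360.0, 45.0)
    u_true, v_true = 3.0, -4.0
    vr = (u_true * np.sin(np.radians(az8)) + v_true * np.cos(np.radians(az8))) * np.cos(np.radians(60.0))
    print(vad_fit(az8, 60.0, vr, sigma_r=0.2))
    ```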

  15. The effect of Web-based Braden Scale training on the reliability and precision of Braden Scale pressure ulcer risk assessments.

    PubMed

    Magnan, Morris A; Maklebust, Joann

    2008-01-01

    To evaluate the effect of Web-based Braden Scale training on the reliability and precision of pressure ulcer risk assessments made by registered nurses (RNs) working in acute care settings. Pretest-posttest, 2-group, quasi-experimental design. Five hundred Braden Scale risk assessments were made on 102 acute care patients deemed to be at various levels of risk for pressure ulceration. Assessments were made by RNs working in acute care hospitals at 3 different medical centers where the Braden Scale was in regular daily use (2 medical centers) or new to the setting (1 medical center). The Braden Scale for Predicting Pressure Sore Risk was used to guide pressure ulcer risk assessments. A Web-based version of the Detroit Medical Center Braden Scale Computerized Training Module was used to teach nurses correct use of the Braden Scale and selection of risk-based pressure ulcer prevention interventions. In the aggregate, RNs generated reliable Braden Scale pressure ulcer risk assessments 65% of the time after training. The effect of Web-based Braden Scale training on reliability and precision of assessments varied according to familiarity with the scale. With training, new users of the scale made reliable assessments 84% of the time and significantly improved the precision of their assessments. The reliability and precision of Braden Scale risk assessments made by its regular users were unaffected by training. Technology-assisted Braden Scale training improved both the reliability and precision of risk assessments made by new users of the scale, but had virtually no effect on the reliability or precision of risk assessments made by regular users of the instrument. Further research is needed to determine the best approaches for improving the reliability and precision of Braden Scale assessments made by its regular users.

  16. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    PubMed

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of the recovery rate may underestimate the risk. The study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds shown by Moody's new data. To overcome this flaw, kernel density estimation is introduced, and the simulation results from the histogram, Beta distribution estimation, and kernel density estimation are compared, leading to the conclusion that the Gaussian kernel density estimate imitates the bimodal or multimodal recovery-rate samples of corporate loans and bonds more faithfully. Finally, a Chi-square test of the Gaussian kernel density estimate shows that it fits the curve of recovery rates of loans and bonds. Thus, using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
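
    A hedged sketch of the comparison the abstract describes: fit a single Beta density and a Gaussian kernel density estimate to a synthetic bimodal recovery-rate sample and compare in-sample log-likelihoods. The real study uses Moody's data and a Chi-square test; the data below are simulated, and the KDE's in-sample likelihood is optimistic.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Synthetic bimodal recovery rates in (0, 1): a cluster of deep losses and
    # a cluster of near-full recoveries, mimicking the shape described above
    recovery = np.concatenate([rng.beta(2, 8, 600), rng.beta(8, 2, 400)])

    # Single Beta fit on the unit interval; a Beta density cannot place two
    # interior modes, so it smooths the two clusters into one shape
    a, b, loc, scale = stats.beta.fit(recovery, floc=0, fscale=1)

    # Gaussian kernel density estimate; it can follow both modes
    kde = stats.gaussian_kde(recovery)

    print("Beta mean log-likelihood:", np.mean(stats.beta.logpdf(recovery, a, b, loc, scale)))
    print("KDE  mean log-likelihood:", np.mean(np.log(kde(recovery))))  # in-sample, optimistic
    ```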

  17. Curve Fitting of the Corporate Recovery Rates: The Comparison of Beta Distribution Estimation and Kernel Density Estimation

    PubMed Central

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of the recovery rate may underestimate the risk. The study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds shown by Moody's new data. To overcome this flaw, kernel density estimation is introduced, and the simulation results from the histogram, Beta distribution estimation, and kernel density estimation are compared, leading to the conclusion that the Gaussian kernel density estimate imitates the bimodal or multimodal recovery-rate samples of corporate loans and bonds more faithfully. Finally, a Chi-square test of the Gaussian kernel density estimate shows that it fits the curve of recovery rates of loans and bonds. Thus, using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558

  18. A phase match based frequency estimation method for sinusoidal signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao

    2015-04-01

    Accurate frequency estimation significantly affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars. To improve the ranging precision of LFMCW radars, a phase match based frequency estimation method is proposed. To obtain the frequency estimate, the linear prediction property, autocorrelation, and cross correlation of sinusoidal signals are utilized. The analysis of computational complexity shows that the computational load of the proposed method is smaller than those of two-stage autocorrelation (TSA) and maximum likelihood. Simulations and field experiments are performed to validate the proposed method, and the results demonstrate that the proposed method has better frequency estimation precision than the Pisarenko harmonic decomposition, modified covariance, and TSA methods, which contributes to effectively improving the precision of LFMCW radars.

  19. Probabilistic metrology or how some measurement outcomes render ultra-precise estimates

    NASA Astrophysics Data System (ADS)

    Calsamiglia, J.; Gendra, B.; Muñoz-Tapia, R.; Bagan, E.

    2016-10-01

    We show on theoretical grounds that, even in the presence of noise, probabilistic measurement strategies (which have a certain probability of failure or abstention) can provide, upon a heralded successful outcome, estimates with a precision that exceeds the deterministic bounds for the average precision. This establishes a new ultimate bound on the phase estimation precision of particular measurement outcomes (or sequence of outcomes). For probe systems subject to local dephasing, we quantify such precision limit as a function of the probability of failure that can be tolerated. Our results show that the possibility of abstaining can set back the detrimental effects of noise.

  20. Precise attitude rate estimation using star images obtained by mission telescope for satellite missions

    NASA Astrophysics Data System (ADS)

    Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi

    2015-02-01

    Recently, small satellites have been employed in various satellite missions such as astronomical observation and remote sensing. During these missions, the attitudes of small satellites should be stabilized to a higher accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites should estimate their attitude rate under the strict constraints of mass, space, and cost. This research presents a new method for small satellites to precisely estimate angular rate using blurred star images by employing a mission telescope to achieve precise attitude stabilization. In this method, the angular velocity is estimated by assessing the quality of a star image, based on how blurred it appears to be. Because the proposed method utilizes existing mission devices, a satellite does not require additional precise rate sensors, which makes it easier to achieve precise stabilization given the strict constraints possessed by small satellites. The research studied the relationship between estimation accuracy and parameters used to achieve an attitude rate estimation, which has a precision better than 1 × 10⁻⁶ rad/s. The method can be applied to all attitude sensors which use optical systems, such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano astrometry satellite Nano-JASMINE, and we investigate the problems that are expected to arise with real small satellites by performing numerical simulations.

  1. Stochastic precision analysis of 2D cardiac strain estimation in vivo

    NASA Astrophysics Data System (ADS)

    Bunting, E. A.; Provost, J.; Konofagou, E. E.

    2014-11-01

    Ultrasonic strain imaging has been applied to echocardiography and carries great potential to be used as a tool in the clinical setting. Two-dimensional (2D) strain estimation may be useful when studying the heart due to the complex, 3D deformation of the cardiac tissue. Increasing the framerate used for motion estimation, i.e. motion estimation rate (MER), has been shown to improve the precision of the strain estimation, although maintaining the spatial resolution necessary to view the entire heart structure in a single heartbeat remains challenging at high MERs. Two previously developed methods, the temporally unequispaced acquisition sequence (TUAS) and the diverging beam sequence (DBS), have been used in the past to successfully estimate in vivo axial strain at high MERs without compromising spatial resolution. In this study, a stochastic assessment of 2D strain estimation precision is performed in vivo for both sequences at varying MERs (65, 272, 544, 815 Hz for TUAS; 250, 500, 1000, 2000 Hz for DBS). 2D incremental strains were estimated during left ventricular contraction in five healthy volunteers using a normalized cross-correlation function and a least-squares strain estimator. Both sequences were shown capable of estimating 2D incremental strains in vivo. The conditional expected value of the elastographic signal-to-noise ratio (E(SNRe|ɛ)) was used to compare strain estimation precision of both sequences at multiple MERs over a wide range of clinical strain values. The results here indicate that axial strain estimation precision is much more dependent on MER than lateral strain estimation, while lateral estimation is more affected by strain magnitude. MER should be increased at least above 544 Hz to avoid suboptimal axial strain estimation. Radial and circumferential strain estimations were influenced by the axial and lateral strain in different ways. Furthermore, the TUAS and DBS were found to be of comparable precision at similar MERs.

  2. Creatinine-Based and Cystatin C-Based GFR Estimating Equations and Their Non-GFR Determinants in Kidney Transplant Recipients.

    PubMed

    Keddis, Mira T; Amer, Hatem; Voskoboev, Nikolay; Kremers, Walter K; Rule, Andrew D; Lieske, John C

    2016-09-07

    eGFR equations have been evaluated in kidney transplant recipients with variable performance. We assessed the performance of the Modification of Diet in Renal Disease equation and the Chronic Kidney Disease Epidemiology Collaboration equations on the basis of creatinine, cystatin C, and both (eGFR creatinine-cystatin C) compared with measured GFR by iothalamate clearance and evaluated their non-GFR determinants and associations across 15 cardiovascular risk factors. A cross-sectional cohort of 1139 kidney transplant recipients >1 year after transplant was analyzed. eGFR bias, precision, and accuracy (percentage of estimates within 30% of measured GFR) were assessed. Interaction of each cardiovascular risk factor with eGFR relative to measured GFR was determined. Median measured GFR was 55.0 ml/min per 1.73 m². eGFR creatinine overestimated measured GFR by 3.1% (percentage of estimates within 30% of measured GFR of 80.4%), and eGFR Modification of Diet in Renal Disease underestimated measured GFR by 2.2% (percentage of estimates within 30% of measured GFR of 80.4%). eGFR cystatin C underestimated measured GFR by -13.7% (percentage of estimates within 30% of measured GFR of 77.1%), and eGFR creatinine-cystatin C underestimated measured GFR by -8.1% (percentage of estimates within 30% of measured GFR of 86.5%). Lower measured GFR was associated with older age, female sex, obesity, longer time after transplant, lower HDL, lower hemoglobin, lower albumin, higher triglycerides, higher proteinuria, and an elevated cardiac troponin T level, but was not associated with diabetes, smoking, cardiovascular events, pretransplant dialysis, or hemoglobin A1c. These risk factor associations differed for five risk factors with eGFR creatinine, six risk factors for eGFR Modification of Diet in Renal Disease, ten risk factors for eGFR cystatin C, and four risk factors for eGFR creatinine-cystatin C. Thus, eGFR creatinine and eGFR creatinine-cystatin C are preferred over eGFR cystatin C in kidney transplant recipients because they are less biased, more accurate, and more consistently reflect the same risk factor associations seen with measured GFR. Copyright © 2016 by the American Society of Nephrology.
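
    The bias, precision, and P30 accuracy metrics referred to above can be computed from paired estimated and measured GFR values. A minimal sketch with hypothetical numbers and generic metric definitions (the paper's exact conventions may differ):

    ```python
    import numpy as np

    def egfr_performance(egfr, mgfr):
        """Bias, precision, and P30 accuracy of an eGFR equation vs measured GFR.

        Here bias is the median percentage difference, precision is the
        interquartile range of that difference, and P30 is the percentage of
        estimates within 30% of measured GFR (generic definitions; the cited
        paper's exact conventions may differ).
        """
        egfr, mgfr = np.asarray(egfr, float), np.asarray(mgfr, float)
        pct_diff = 100.0 * (egfr - mgfr) / mgfr
        bias = np.median(pct_diff)
        precision = np.percentile(pct_diff, 75) - np.percentile(pct_diff, 25)
        p30 = 100.0 * np.mean(np.abs(egfr - mgfr) <= 0.30 * mgfr)
        return bias, precision, p30

    # Hypothetical paired values (ml/min per 1.73 m^2)
    mgfr = np.array([55.0, 62.0, 40.0, 75.0, 30.0, 58.0, 49.0, 66.0])
    egfr = np.array([58.0, 60.0, 45.0, 70.0, 34.0, 55.0, 52.0, 72.0])
    print(egfr_performance(egfr, mgfr))
    ```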

  3. Precision of information, sensational information, and self-efficacy information as message-level variables affecting risk perceptions.

    PubMed

    Dahlstrom, Michael F; Dudo, Anthony; Brossard, Dominique

    2012-01-01

    Studies that investigate how the mass media cover risk issues often assume that certain characteristics of content are related to specific risk perceptions and behavioral intentions. However, these relationships have seldom been empirically assessed. This study tests the influence of three message-level media variables--risk precision information, sensational information, and self-efficacy information--on perceptions of risk, individual worry, and behavioral intentions toward a pervasive health risk. Results suggest that more precise risk information leads to increased risk perceptions and that the effect of sensational information is moderated by risk precision information. Greater self-efficacy information is associated with greater intention to change behavior, but none of the variables influence individual worry. The results provide a quantitative understanding of how specific characteristics of informational media content can influence individuals' responses to health threats of a global and uncertain nature. © 2011 Society for Risk Analysis.

  4. Brief report: How short is too short? An ultra-brief measure of the big-five personality domains implicates "agreeableness" as a risk for all-cause mortality.

    PubMed

    Chapman, Benjamin P; Elliot, Ari J

    2017-08-01

    Controversy exists over the use of brief Big Five scales in health studies. We investigated links between an ultra-brief measure, the Big Five Inventory-10, and mortality in the General Social Survey. The Agreeableness scale was associated with elevated mortality risk (hazard ratio = 1.26, p = .017). This effect was attributable to the reversed-scored item "Tends to find fault with others," so that greater fault-finding predicted lower mortality risk. The Conscientiousness scale approached meta-analytic estimates, which were not precise enough for significance. Those seeking Big Five measurement in health studies should be aware that the Big Five Inventory-10 may yield unusual results.

  5. Using published data in Mendelian randomization: a blueprint for efficient identification of causal risk factors.

    PubMed

    Burgess, Stephen; Scott, Robert A; Timpson, Nicholas J; Davey Smith, George; Thompson, Simon G

    2015-07-01

    Finding individual-level data for adequately-powered Mendelian randomization analyses may be problematic. As publicly-available summarized data on genetic associations with disease outcomes from large consortia are becoming more abundant, use of published data is an attractive analysis strategy for obtaining precise estimates of the causal effects of risk factors on outcomes. We detail the necessary steps for conducting Mendelian randomization investigations using published data, and present novel statistical methods for combining data on the associations of multiple (correlated or uncorrelated) genetic variants with the risk factor and outcome into a single causal effect estimate. A two-sample analysis strategy may be employed, in which evidence on the gene-risk factor and gene-outcome associations are taken from different data sources. These approaches allow the efficient identification of risk factors that are suitable targets for clinical intervention from published data, although the ability to assess the assumptions necessary for causal inference is diminished. Methods and guidance are illustrated using the example of the causal effect of serum calcium levels on fasting glucose concentrations. The estimated causal effect of a 1 standard deviation (0.13 mmol/L) increase in calcium levels on fasting glucose (mM) using a single lead variant from the CASR gene region is 0.044 (95 % credible interval -0.002, 0.100). In contrast, using our method to account for the correlation between variants, the corresponding estimate using 17 genetic variants is 0.022 (95 % credible interval 0.009, 0.035), a more clearly positive causal effect.
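
    A minimal sketch of a two-sample Mendelian randomization estimate from published summary statistics, using the standard inverse-variance weighted combination of per-variant ratio estimates. It assumes uncorrelated variants, first-order weights, and hypothetical summary data; the paper's method additionally handles correlated variants, which this sketch does not.

    ```python
    import numpy as np

    def ivw_mr(beta_x, beta_y, se_y):
        """Two-sample Mendelian randomization, inverse-variance weighted estimate.

        beta_x: per-variant associations with the risk factor
        beta_y, se_y: per-variant associations with the outcome
        Assumes independent variants and first-order weights (uncertainty in
        beta_x ignored); the cited paper also handles correlated variants.
        """
        beta_x, beta_y, se_y = (np.asarray(v, float) for v in (beta_x, beta_y, se_y))
        ratio = beta_y / beta_x                  # per-variant Wald ratios
        weights = (beta_x / se_y) ** 2           # inverse-variance weights
        estimate = np.sum(weights * ratio) / np.sum(weights)
        se = 1.0 / np.sqrt(np.sum(weights))
        return estimate, (estimate - 1.96 * se, estimate + 1.96 * se)

    # Hypothetical summary statistics for three uncorrelated variants
    print(ivw_mr(beta_x=[0.10, 0.08, 0.12],
                 beta_y=[0.004, 0.003, 0.006],
                 se_y=[0.002, 0.002, 0.003]))
    ```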

  6. Exploring association between statin use and breast cancer risk: an updated meta-analysis.

    PubMed

    Islam, Md Mohaimenul; Yang, Hsuan-Chia; Nguyen, Phung-Anh; Poly, Tahmina Nasrin; Huang, Chih-Wei; Kekade, Shwetambara; Khalfan, Abdulwahed Mohammed; Debnath, Tonmoy; Li, Yu-Chuan Jack; Abdul, Shabbir Syed

    2017-12-01

    The benefits of statin treatment for preventing cardiac disease are well established. However, preclinical studies suggested that statins may influence mammary cancer growth, but the clinical evidence is still inconsistent. We, therefore, performed an updated meta-analysis to provide a precise estimate of the risk of breast cancer in individuals undergoing statin therapy. For this meta-analysis, we searched PubMed, the Cochrane Library, Web of Science, Embase, and CINAHL for published studies up to January 31, 2017. Articles were included if they (1) were published in English; (2) had an observational study design with individual-level exposure and outcome data, examined the effect of statin therapy, and reported the incidence of breast cancer; and (3) reported estimates of either the relative risk, odds ratios, or hazard ratios with 95% confidence intervals (CIs). We used random-effect models to pool the estimates. Of 2754 unique abstracts, 39 were selected for full-text review, and 36 studies reporting on 121,399 patients met all inclusion criteria. The overall pooled risk of breast cancer in patients using statins was 0.94 (95% CI 0.86-1.03) in random-effect models with significant heterogeneity between estimates (I² = 83.79%, p = 0.0001). However, we also stratified by region, the duration of statin therapy, methodological design, statin properties, and individual statin use. Our results suggest that there is no association between statin use and breast cancer risk. However, observational studies cannot clarify whether the observed epidemiologic association is a causal effect or the result of some unmeasured confounding variable. Therefore, more research is needed.
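
    For readers unfamiliar with the random-effect pooling referred to above, the following is a generic DerSimonian-Laird sketch with hypothetical study-level log relative risks; it is not the study's actual data or software.

    ```python
    import numpy as np

    def dersimonian_laird(log_rr, se):
        """Random-effects pooling of study-level log relative risks."""
        log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
        w = 1.0 / se**2                                    # fixed-effect weights
        fixed = np.sum(w * log_rr) / np.sum(w)
        q = np.sum(w * (log_rr - fixed) ** 2)              # Cochran's Q
        df = len(log_rr) - 1
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)                      # between-study variance
        w_star = 1.0 / (se**2 + tau2)                      # random-effects weights
        pooled = np.sum(w_star * log_rr) / np.sum(w_star)
        se_pooled = 1.0 / np.sqrt(np.sum(w_star))
        i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
        return (np.exp(pooled),
                np.exp(pooled - 1.96 * se_pooled),
                np.exp(pooled + 1.96 * se_pooled),
                i2)

    # Hypothetical log relative risks and standard errors from five studies
    print(dersimonian_laird([-0.05, 0.02, -0.10, 0.08, -0.03],
                            [0.04, 0.06, 0.05, 0.07, 0.03]))
    ```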

  7. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis-Hastings Markov Chain Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen

    2017-06-01

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a relatively narrower credible interval than the MLE confidence interval and thus more precise estimation by using the related information from regional gage stations. The Bayesian MCMC method might be more favorable in uncertainty analysis and risk management.
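
    A minimal random-walk Metropolis-Hastings sketch, shown here against a toy one-parameter posterior rather than the hydrological model of the paper; the proposal scale, iteration count, and target are all illustrative assumptions.

    ```python
    import numpy as np

    def metropolis_hastings(log_post, theta0, n_iter=5000, step=0.1, seed=0):
        """Random-walk Metropolis-Hastings sampler (minimal sketch)."""
        rng = np.random.default_rng(seed)
        theta = np.atleast_1d(np.asarray(theta0, float))
        samples = np.empty((n_iter, theta.size))
        lp = log_post(theta)
        for i in range(n_iter):
            proposal = theta + step * rng.standard_normal(theta.size)
            lp_prop = log_post(proposal)
            # Accept with probability min(1, posterior ratio)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = proposal, lp_prop
            samples[i] = theta
        return samples

    # Toy posterior: Gaussian with mean 2.0 and standard deviation 0.5
    draws = metropolis_hastings(lambda t: -0.5 * ((t[0] - 2.0) / 0.5) ** 2, [0.0])
    burned = draws[1000:, 0]
    print(burned.mean(), np.percentile(burned, [2.5, 97.5]))
    ```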

  8. Risk of second primary cancers after testicular cancer in East and West Germany: A focus on contralateral testicular cancers

    PubMed Central

    Rusner, Carsten; Streller, Brigitte; Stegmaier, Christa; Trocchi, Pietro; Kuss, Oliver; McGlynn, Katherine A; Trabert, Britton; Stang, Andreas

    2014-01-01

    Testicular cancer survival rates improved dramatically after cisplatin-based therapy was introduced in the 1970s. However, chemotherapy and radiation therapy are potentially carcinogenic. The purpose of this study was to estimate the risk of developing second primary cancers including the risk associated with primary histologic type (seminoma and non-seminoma) among testicular cancer survivors in Germany. We identified 16 990 and 1401 cases of testicular cancer in population-based cancer registries of East Germany (1961–1989 and 1996–2008) and Saarland (a federal state in West Germany; 1970–2008), respectively. We estimated the risk of a second primary cancer using standardized incidence ratios (SIRs) with 95% confidence intervals (95% CIs). To determine trends, we plotted model-based estimated annual SIRs. In East Germany, a total of 301 second primary cancers of any location were observed between 1961 and 1989 (SIR: 1.9; 95% CI: 1.7–2.1), and 159 cancers (any location) were observed between 1996 and 2008 (SIR: 1.7; 95% CI: 1.4–2.0). The SIRs for contralateral testicular cancer were increased in the registries with a range from 6.0 in Saarland to 13.9 in East Germany. The SIR for seminoma, in particular, was higher in East Germany compared to the other registries. We observed constant trends in the model-based SIRs for contralateral testicular cancers. The majority of reported SIRs of other cancer sites including histology-specific risks showed low precisions of estimated effects, likely due to small sample sizes. Testicular cancer patients are at increased risk especially for cancers of the contralateral testis and should receive intensive follow-ups. PMID:24407180
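
    A standardized incidence ratio of the kind reported above is the ratio of observed to expected cases, with an exact Poisson confidence interval. A generic sketch; the expected count below is hypothetical, back-calculated only for illustration.

    ```python
    from scipy.stats import chi2

    def sir_with_ci(observed, expected, alpha=0.05):
        """Standardized incidence ratio with an exact Poisson confidence interval."""
        sir = observed / expected
        lower = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected) if observed > 0 else 0.0
        upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
        return sir, lower, upper

    # Hypothetical: 301 observed second cancers against about 158 expected
    print(sir_with_ci(301, 158.4))
    ```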

  9. Quantitative health impact of indoor radon in France.

    PubMed

    Ajrouche, Roula; Roudier, Candice; Cléro, Enora; Ielsch, Géraldine; Gay, Didier; Guillevic, Jérôme; Marant Micallef, Claire; Vacquier, Blandine; Le Tertre, Alain; Laurier, Dominique

    2018-05-08

    Radon is the second leading cause of lung cancer after smoking. Since the previous quantitative risk assessment of indoor radon conducted in France, input data have changed, such as estimates of indoor radon concentrations, lung cancer rates, and the prevalence of tobacco consumption. The aim of this work was to update the risk assessment of lung cancer mortality attributable to indoor radon in France using recent risk models and data, improving the consideration of smoking, and providing results at a fine geographical scale. The data used were population data (2012), vital statistics on death from lung cancer (2008-2012), domestic radon exposure from a recent database that combines measurement results of indoor radon concentration and the geogenic radon potential map for France (2015), and smoking prevalence (2010). The risk model used was derived from a European epidemiological study, considering that lung cancer risk increased by 16% per 100 becquerels per cubic meter (Bq/m³) of indoor radon concentration. The estimated number of lung cancer deaths attributable to indoor radon exposure is about 3000 (1000; 5000), which corresponds to about 10% of all lung cancer deaths each year in France. About 33% of lung cancer deaths attributable to radon are due to exposure levels above 100 Bq/m³. Considering the combined effect of tobacco and radon, the study shows that 75% of estimated radon-attributable lung cancer deaths occur among current smokers, 20% among ex-smokers and 5% among never-smokers. It is concluded that the results of this study, which are based on precise estimates of indoor radon concentrations at the finest geographical scale, can serve as a basis for defining French policy against radon risk.
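
    The attributable-fraction arithmetic implied by the 16%-per-100-Bq/m³ model can be sketched with Levin's formula over a population exposure distribution; the exposure strata and population shares below are hypothetical, not the French survey data.

    ```python
    import numpy as np

    def radon_attributable_fraction(conc_bq_m3, pop_shares, err_per_100bq=0.16):
        """Population attributable fraction under a linear excess-relative-risk model."""
        x = np.asarray(conc_bq_m3, float)
        p = np.asarray(pop_shares, float)
        p = p / p.sum()
        rr = 1.0 + err_per_100bq * x / 100.0        # relative risk in each stratum
        mean_rr = np.sum(p * rr)
        return (mean_rr - 1.0) / mean_rr            # Levin's formula

    # Hypothetical exposure strata (Bq/m3) and population shares
    print(radon_attributable_fraction([25, 60, 110, 220, 450],
                                      [0.45, 0.30, 0.15, 0.07, 0.03]))
    ```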

  10. Improved accuracy and precision of tracer kinetic parameters by joint fitting to variable flip angle and dynamic contrast enhanced MRI data.

    PubMed

    Dickie, Ben R; Banerji, Anita; Kershaw, Lucy E; McPartlin, Andrew; Choudhury, Ananya; West, Catharine M; Rose, Chris J

    2016-10-01

    To improve the accuracy and precision of tracer kinetic model parameter estimates for use in dynamic contrast enhanced (DCE) MRI studies of solid tumors. Quantitative DCE-MRI requires an estimate of precontrast T1, which is obtained prior to fitting a tracer kinetic model. As T1 mapping and tracer kinetic signal models are both a function of precontrast T1, it was hypothesized that its joint estimation would improve the accuracy and precision of both precontrast T1 and tracer kinetic model parameters. Accuracy and/or precision of two-compartment exchange model (2CXM) parameters were evaluated for standard and joint fitting methods in well-controlled synthetic data and for 36 bladder cancer patients. Methods were compared under a number of experimental conditions. In synthetic data, joint estimation led to statistically significant improvements in the accuracy of estimated parameters in 30 of 42 conditions (improvements between 1.8% and 49%). Reduced accuracy was observed in 7 of the remaining 12 conditions. Significant improvements in precision were observed in 35 of 42 conditions (between 4.7% and 50%). In clinical data, significant improvements in precision were observed in 18 of 21 conditions (between 4.6% and 38%). Accuracy and precision of DCE-MRI parameter estimates are improved when signal models are fit jointly rather than sequentially. Magn Reson Med 76:1270-1281, 2016. © 2015 Wiley Periodicals, Inc.

  11. Optimal reference polarization states for the calibration of general Stokes polarimeters in the presence of noise

    NASA Astrophysics Data System (ADS)

    Mu, Tingkui; Bao, Donghao; Zhang, Chunmin; Chen, Zeyu; Song, Jionghui

    2018-07-01

    During calibration of the system matrix of a Stokes polarimeter using reference polarization states (RPSs) and a pseudo-inverse estimation method, the measured intensities are usually corrupted by signal-independent additive Gaussian noise or signal-dependent Poisson shot noise, which degrades the precision of the estimated system matrix. In this paper, we present a paradigm for selecting RPSs to improve the precision of the estimated system matrix in the presence of both types of noise. The analytical solution for the precision of the system matrix estimated with the RPSs is derived. Experimental measurements from a general Stokes polarimeter show that an accurate system matrix is estimated with the optimal RPSs, which are generated using two rotating quarter-wave plates. The advantage of using optimal RPSs is a reduction in measurement time with high calibration precision.
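
    A minimal sketch of the pseudo-inverse calibration step the abstract refers to: simulate noisy intensities for a set of reference polarization states and recover the system matrix by least squares. The system matrix, RPS set, and noise level are illustrative assumptions, not the paper's optimal RPSs.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical "true" 4x4 system matrix of a Stokes polarimeter
    A_true = rng.uniform(-1.0, 1.0, size=(4, 4))
    A_true[:, 0] = 1.0   # assumed unit response to total intensity

    # Columns are the six canonical reference states: H, V, +45, -45, R, L.
    # Choosing which RPSs to use is exactly what the cited paper optimizes.
    S_ref = np.array([[1, 1, 1, 1, 1, 1],
                      [1, -1, 0, 0, 0, 0],
                      [0, 0, 1, -1, 0, 0],
                      [0, 0, 0, 0, 1, -1]], dtype=float)

    # Simulated calibration intensities with additive Gaussian noise
    I_meas = A_true @ S_ref + 0.01 * rng.standard_normal((4, S_ref.shape[1]))

    # Pseudo-inverse (least-squares) estimate of the system matrix
    A_hat = I_meas @ np.linalg.pinv(S_ref)
    print("max |error| in estimated system matrix:", np.max(np.abs(A_hat - A_true)))
    ```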

  12. Estimation of sport fish harvest for risk and hazard assessment of environmental contaminants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poston, T.M.; Strenge, D.L.

    1989-01-01

    Consumption of contaminated fish flesh can be a significant route of human exposure to hazardous chemicals. Estimation of exposure resulting from the consumption of fish requires knowledge of fish consumption and contaminant levels in the edible portion of fish. Realistic figures of sport fish harvest are needed to estimate consumption. Estimates of freshwater sport fish harvest were developed from a review of 72 articles and reports. Descriptive statistics based on fishing pressure were derived from harvest data for four distinct groups of freshwater sport fish in three water types: streams, lakes, and reservoirs. Regression equations were developed to relate harvest to surface area fished where data bases were sufficiently large. Other aspects of estimating human exposure to contaminants in fish flesh that are discussed include use of bioaccumulation factors for trace metals and organic compounds. Using the bioaccumulation factor and the concentration of contaminants in water as variables in the exposure equation may also lead to less precise estimates of tissue concentration. For instance, muscle levels of contaminants may not increase proportionately with increases in water concentrations, leading to overestimation of risk. In addition, estimates of water concentration may be variable or expressed in a manner that does not truly represent biological availability of the contaminant. These factors are discussed. 45 refs., 1 fig., 7 tabs.
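
    A screening-level sketch of the exposure equation discussed above, in which tissue concentration is approximated as water concentration times a bioaccumulation factor; all input values are hypothetical.

    ```python
    def fish_ingestion_dose(c_water_mg_l, baf_l_kg, intake_g_day, body_weight_kg=70.0):
        """Daily dose (mg/kg-day) from eating fish, using a bioaccumulation factor.

        Tissue concentration is approximated as C_fish = C_water * BAF, and the
        dose is C_fish * daily intake / body weight. Screening-level sketch only.
        """
        c_fish_mg_kg = c_water_mg_l * baf_l_kg         # edible-tissue concentration
        intake_kg_day = intake_g_day / 1000.0
        return c_fish_mg_kg * intake_kg_day / body_weight_kg

    # Hypothetical inputs: 0.0005 mg/L in water, BAF of 1000 L/kg, 30 g fish per day
    print(f"{fish_ingestion_dose(0.0005, 1000.0, 30.0):.6f} mg/kg-day")
    ```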

  13. Multi-objective optimization in quantum parameter estimation

    NASA Astrophysics Data System (ADS)

    Gong, BeiLi; Cui, Wei

    2018-04-01

    We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.

  14. Designing an effective mark-recapture study of Antarctic blue whales.

    PubMed

    Peel, David; Bravington, Mark; Kelly, Natalie; Double, Michael C

    2015-06-01

    To properly conserve and manage wild populations, it is important to have information on abundance and population dynamics. In the case of rare and cryptic species, especially in remote locations, surveys can be difficult and expensive, and run the risk of not yielding sample sizes large enough to produce precise estimates. Therefore, it is crucial to conduct preliminary analysis to determine if the study will produce usable estimates. The focus of this paper is a proposed mark-recapture study of Antarctic blue whales (Balaenoptera musculus intermedia). Antarctic blue whales were hunted to near extinction up until the mid-1960s, when commercial exploitation of this species ended. Current abundance estimates are a decade old. Furthermore, at present, there are no formal circumpolar-level cetacean surveys operating in Antarctic waters and, specifically, there is no strategy to monitor the potential recovery of Antarctic blue whales. Hence the work in this paper was motivated by the need to inform decisions on strategies for future monitoring of the Antarctic blue whale population. The paper describes a model to predict the precision and bias of estimates from a proposed survey program. The analysis showed that mark-recapture is indeed a suitable method to provide a circumpolar abundance estimate of Antarctic blue whales, with precision of the abundance, at the midpoint of the program, predicted to be between 0.2 and 0.3. However, this was only if passive acoustic tracking was utilized to increase the encounter rate. The analysis also provided guidance on general design for an Antarctic blue whale program, showing that it requires a 12-year duration; although surveys do not necessarily need to be run every year if multiple vessels are available to clump effort. Mark-recapture is based on a number of assumptions; it was evident from the analysis that ongoing analysis and monitoring of the data would be required to check that such assumptions hold (e.g., test for heterogeneity), with the modeling adjusted as needed.

  15. Quantitative comparisons of three automated methods for estimating intracranial volume: A study of 270 longitudinal magnetic resonance images.

    PubMed

    Shang, Xiaoyan; Carlson, Michelle C; Tang, Xiaoying

    2018-04-30

    Total intracranial volume (TIV) is often used as a measure of brain size to correct for individual variability in magnetic resonance imaging (MRI) based morphometric studies. An adjustment of TIV can greatly increase the statistical power of brain morphometry methods. As such, an accurate and precise TIV estimation is of great importance in MRI studies. In this paper, we compared three automated TIV estimation methods (multi-atlas likelihood fusion (MALF), Statistical Parametric Mapping 8 (SPM8) and FreeSurfer (FS)) using longitudinal T1-weighted MR images in a cohort of 70 older participants at elevated sociodemographic risk for Alzheimer's disease. Statistical group comparisons in terms of four different metrics were performed. Furthermore, sex, education level, and intervention status were investigated separately for their impacts on the TIV estimation performance of each method. According to our experimental results, MALF was the least susceptible to atrophy, while SPM8 and FS suffered a loss in precision. In group-wise analysis, MALF was the least sensitive method to group variation, whereas SPM8 was particularly sensitive to sex and FS was unstable with respect to education level. In terms of effectiveness, both MALF and SPM8 delivered a user-friendly performance, while FS was relatively computationally intensive. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. An Efficient Design Strategy for Logistic Regression Using Outcome- and Covariate-Dependent Pooling of Biospecimens Prior to Assay

    PubMed Central

    Lyles, Robert H.; Mitchell, Emily M.; Weinberg, Clarice R.; Umbach, David M.; Schisterman, Enrique F.

    2016-01-01

    Summary Potential reductions in laboratory assay costs afforded by pooling equal aliquots of biospecimens have long been recognized in disease surveillance and epidemiological research and, more recently, have motivated design and analytic developments in regression settings. For example, Weinberg and Umbach (1999, Biometrics 55, 718–726) provided methods for fitting set-based logistic regression models to case-control data when a continuous exposure variable (e.g., a biomarker) is assayed on pooled specimens. We focus on improving estimation efficiency by utilizing available subject-specific information at the pool allocation stage. We find that a strategy that we call “(y,c)-pooling,” which forms pooling sets of individuals within strata defined jointly by the outcome and other covariates, provides more precise estimation of the risk parameters associated with those covariates than does pooling within strata defined only by the outcome. We review the approach to set-based analysis through offsets developed by Weinberg and Umbach in a recent correction to their original paper. We propose a method for variance estimation under this design and use simulations and a real-data example to illustrate the precision benefits of (y,c)-pooling relative to y-pooling. We also note and illustrate that set-based models permit estimation of covariate interactions with exposure. PMID:26964741

  17. How rapidly does the excess risk of lung cancer decline following quitting smoking? A quantitative review using the negative exponential model.

    PubMed

    Fry, John S; Lee, Peter N; Forey, Barbara A; Coombs, Katharine J

    2013-10-01

    The excess lung cancer risk from smoking declines with time quit, but the shape of the decline has never been precisely modelled, or meta-analyzed. From a database of studies of at least 100 cases, we extracted 106 blocks of RRs (from 85 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls (or at-risk) formed the data for fitting the negative exponential model. We estimated the half-life (H, time in years when the excess risk becomes half that for a continuing smoker) for each block, investigated model fit, and studied heterogeneity in H. We also conducted sensitivity analyses allowing for reverse causation, either ignoring short-term quitters (S1) or considering them smokers (S2). Model fit was poor ignoring reverse causation, but much improved for both sensitivity analyses. Estimates of H were similar for all three analyses. For the best-fitting analysis (S1), H was 9.93 (95% CI 9.31-10.60), but varied by sex (females 7.92, males 10.71), and age (<50years 6.98, 70+years 12.99). Given that reverse causation is taken account of, the model adequately describes the decline in excess risk. However, estimates of H may be biased by factors including misclassification of smoking status. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
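
    The negative exponential model can be written so that the excess relative risk halves every H years after quitting. A small sketch using the fitted half-life of about 9.93 years from the abstract and a hypothetical current-smoker relative risk:

    ```python
    import numpy as np

    def relative_risk_after_quitting(years_quit, rr_current, half_life_years):
        """Negative exponential decline of the excess lung cancer risk.

        RR(t) = 1 + (RR_current - 1) * 0.5 ** (t / H): the excess risk halves
        every H years after quitting.
        """
        t = np.asarray(years_quit, float)
        return 1.0 + (rr_current - 1.0) * 0.5 ** (t / half_life_years)

    # Half-life of about 9.93 years from the abstract; RR of 20 is hypothetical
    for years in (0, 5, 10, 20, 40):
        print(years, round(float(relative_risk_after_quitting(years, 20.0, 9.93)), 2))
    ```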

  18. Effect of Risk of Bias on the Effect Size of Meta-Analytic Estimates in Randomized Controlled Trials in Periodontology and Implant Dentistry.

    PubMed

    Faggion, Clovis Mariano; Wu, Yun-Chun; Scheidgen, Moritz; Tu, Yu-Kang

    2015-01-01

    Risk of bias (ROB) may threaten the internal validity of a clinical trial by distorting the magnitude of treatment effect estimates, although some conflicting information on this assumption exists. The objective of this study was to evaluate the effect of ROB on the magnitude of treatment effect estimates in randomized controlled trials (RCTs) in periodontology and implant dentistry. A search for Cochrane systematic reviews (SRs), including meta-analyses of RCTs published in periodontology and implant dentistry fields, was performed in the Cochrane Library in September 2014. Random-effect meta-analyses were performed by grouping RCTs with different levels of ROBs in three domains (sequence generation, allocation concealment, and blinding of outcome assessment). To increase power and precision, only SRs with meta-analyses including at least 10 RCTs were included. Meta-regression was performed to investigate the association between ROB characteristics and the magnitudes of intervention effects in the meta-analyses. Of the 24 initially screened SRs, 21 SRs were excluded because they did not include at least 10 RCTs in the meta-analyses. Three SRs (two from the periodontology field) generated information for conducting 27 meta-analyses. Meta-regression did not reveal significant differences in the relationship of the ROB level with the size of treatment effect estimates, although a trend for inflated estimates was observed in domains with unclear ROBs. In this sample of RCTs, high and (mainly) unclear risks of selection and detection biases did not seem to influence the size of treatment effect estimates, although several confounders might have influenced the strength of the association.

  19. Using age on clothes size label to estimate weight in emergency paediatric patients.

    PubMed

    Elgie, Laura D; Williams, Andrew R

    2012-10-01

    To study formulae that estimate children's weight using their actual age. To determine whether using the age on their clothes size label in these formulae can estimate weight when their actual age is unknown. The actual age and age on the clothes labels of 188 children were inserted into formulae that estimate children's weight. These estimates were compared with their actual weight. Bland-Altman plots were used to assess the precision and accuracy of each of these estimates. In all formulae, using the age on the clothes size label provided a more precise estimate than the child's actual age. In emergencies where a child's age is unknown, use of the age on their clothes label in weight-estimating formulae yields acceptable weight estimates. Even in situations where a child's age is known, the age on their clothes label may provide a more accurate and precise weight estimate than the actual age.

  20. [Estimation of desert vegetation coverage based on multi-source remote sensing data].

    PubMed

    Wan, Hong-Mei; Li, Xia; Dong, Dao-Rui

    2012-12-01

    Taking the lower reaches of the Tarim River in Xinjiang, Northwest China, as the study area, and based on ground investigation and multi-source remote sensing data of different resolutions, estimation models for desert vegetation coverage were built and the precision of the different estimation methods and models was compared. The results showed that the precision of the estimation models increased with increasing spatial resolution of the remote sensing data. The estimation precision of the models based on high, middle-high, and middle-low resolution remote sensing data was 89.5%, 87.0%, and 84.56%, respectively, and the precision of the remote sensing models was higher than that of the vegetation index method. This study revealed the change patterns of the estimation precision of desert vegetation coverage based on remote sensing data of different spatial resolutions, and realized the quantitative conversion of parameters and scales among high, middle, and low spatial resolution remote sensing data of desert vegetation coverage, which provides direct evidence for establishing and implementing a comprehensive remote sensing monitoring scheme for ecological restoration in the study area.

  1. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis–Hastings Markov Chain Monte Carlo algorithm

    DOE PAGES

    Wang, Hongrui; Wang, Cheng; Wang, Ying; ...

    2017-04-05

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a relatively narrower credible interval than the MLE confidence interval and thus more precise estimation by using the related information from regional gage stations. As a result, the Bayesian MCMC method might be more favorable in uncertainty analysis and risk management.

  2. Seasonality in risk of pandemic influenza emergence

    PubMed Central

    Meyers, Lauren Ancel

    2017-01-01

    Influenza pandemics can emerge unexpectedly and wreak global devastation. However, each of the six pandemics since 1889 emerged in the Northern Hemisphere just after the flu season, suggesting that pandemic timing may be predictable. Using a stochastic model fit to seasonal flu surveillance data from the United States, we find that seasonal flu leaves a transient wake of heterosubtypic immunity that impedes the emergence of novel flu viruses. This refractory period provides a simple explanation for not only the spring-summer timing of historical pandemics, but also early increases in pandemic severity and multiple waves of transmission. Thus, pandemic risk may be seasonal and predictable, with the accuracy of pre-pandemic and real-time risk assessments hinging on reliable seasonal influenza surveillance and precise estimates of the breadth and duration of heterosubtypic immunity. PMID:29049288

  3. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method.

    PubMed

    Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and qPCR reaction would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator where a universal assay for fecal sources of that indicator exists. Copyright © 2010 Elsevier Ltd. All rights reserved.

  4. Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.

    PubMed

    Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich

    2016-01-01

    We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.

  5. Precision Medicine: Functional Advancements.

    PubMed

    Caskey, Thomas

    2018-01-29

    Precision medicine was conceptualized on the strength of genomic sequence analysis. High-throughput functional metrics have enhanced sequence interpretation and clinical precision. These technologies include metabolomics, magnetic resonance imaging, and iRhythm (cardiac monitoring), among others. These technologies are discussed and placed in clinical context for the medical specialties of internal medicine, pediatrics, obstetrics, and gynecology. Publications in these fields support the concept of a higher level of precision in identifying disease risk. Precise disease risk identification has the potential to enable intervention with greater specificity, resulting in disease prevention, an important goal of precision medicine.

  6. Do polymorphisms of 5,10-methylenetetrahydrofolate reductase (MTHFR) gene affect the risk of childhood acute lymphoblastic leukemia?

    PubMed

    Pereira, Tiago Veiga; Rudnicki, Martina; Pereira, Alexandre Costa; Pombo-de-Oliveira, Maria S; Franco, Rendrik França

    2006-01-01

    Meta-analysis has become an important statistical tool in genetic association studies, since it may provide more powerful and precise estimates. However, meta-analytic studies are prone to several potential biases, not only because of the preferential publication of "positive" studies but also because of difficulties in obtaining all relevant information during the study selection process. In this letter, we point out major problems in meta-analysis that may lead to biased conclusions, illustrated by an empirical example of two recent meta-analyses on the relation between MTHFR polymorphisms and the risk of acute lymphoblastic leukemia that, despite the similarity in statistical methods and period of study selection, provided partially conflicting results.

  7. Matching on the Disease Risk Score in Comparative Effectiveness Research of New Treatments

    PubMed Central

    Wyss, Richard; Ellis, Alan R.; Brookhart, M. Alan; Funk, Michele Jonsson; Girman, Cynthia J.; Simpson, Ross J.; Stürmer, Til

    2016-01-01

    Purpose We use simulations and an empirical example to evaluate the performance of disease risk score (DRS) matching compared with propensity score (PS) matching when controlling large numbers of covariates in settings involving newly introduced treatments. Methods We simulated a dichotomous treatment, a dichotomous outcome, and 100 baseline covariates that included both continuous and dichotomous random variables. For the empirical example, we evaluated the comparative effectiveness of dabigatran versus warfarin in preventing combined ischemic stroke and all-cause mortality. We matched treatment groups on a historically estimated DRS and again on the PS. We controlled for a high-dimensional set of covariates using 20% and 1% samples of Medicare claims data from October 2010 through December 2012. Results In simulations, matching on the DRS versus the PS generally yielded matches for more treated individuals and improved precision of the effect estimate. For the empirical example, PS and DRS matching in the 20% sample resulted in similar hazard ratios (0.88 and 0.87) and standard errors (0.04 for both methods). In the 1% sample, PS matching resulted in matches for only 92.0% of the treated population and a hazard ratio and standard error of 0.89 and 0.19, respectively, while DRS matching resulted in matches for 98.5% and a hazard ratio and standard error of 0.85 and 0.16, respectively. Conclusions When PS distributions are separated, DRS matching can improve the precision of effect estimates and allow researchers to evaluate the treatment effect in a larger proportion of the treated population. However, accurately modeling the DRS can be challenging compared with the PS. PMID:26112690

  8. Matching on the disease risk score in comparative effectiveness research of new treatments.

    PubMed

    Wyss, Richard; Ellis, Alan R; Brookhart, M Alan; Jonsson Funk, Michele; Girman, Cynthia J; Simpson, Ross J; Stürmer, Til

    2015-09-01

    We use simulations and an empirical example to evaluate the performance of disease risk score (DRS) matching compared with propensity score (PS) matching when controlling large numbers of covariates in settings involving newly introduced treatments. We simulated a dichotomous treatment, a dichotomous outcome, and 100 baseline covariates that included both continuous and dichotomous random variables. For the empirical example, we evaluated the comparative effectiveness of dabigatran versus warfarin in preventing combined ischemic stroke and all-cause mortality. We matched treatment groups on a historically estimated DRS and again on the PS. We controlled for a high-dimensional set of covariates using 20% and 1% samples of Medicare claims data from October 2010 through December 2012. In simulations, matching on the DRS versus the PS generally yielded matches for more treated individuals and improved precision of the effect estimate. For the empirical example, PS and DRS matching in the 20% sample resulted in similar hazard ratios (0.88 and 0.87) and standard errors (0.04 for both methods). In the 1% sample, PS matching resulted in matches for only 92.0% of the treated population and a hazard ratio and standard error of 0.89 and 0.19, respectively, while DRS matching resulted in matches for 98.5% and a hazard ratio and standard error of 0.85 and 0.16, respectively. When PS distributions are separated, DRS matching can improve the precision of effect estimates and allow researchers to evaluate the treatment effect in a larger proportion of the treated population. However, accurately modeling the DRS can be challenging compared with the PS. Copyright © 2015 John Wiley & Sons, Ltd.
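
    A rough sketch of historically estimated DRS matching: fit the outcome model in a historical (pre-launch) cohort, score the concurrent cohort, and match treated to untreated patients on the predicted risk. The data generation, model, and greedy matching below are illustrative simplifications, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical historical (pre-launch, all untreated) cohort with outcomes
    n_hist, n_cur, p = 2000, 1000, 10
    X_hist = rng.normal(size=(n_hist, p))
    y_hist = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X_hist[:, 0] - 1.5))))

    # Hypothetical concurrent cohort in which the new treatment is available
    X_cur = rng.normal(size=(n_cur, p))
    treated = rng.binomial(1, 1.0 / (1.0 + np.exp(-X_cur[:, 1]))).astype(bool)

    # 1. Fit the disease risk score (DRS) model on the historical cohort
    drs_model = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)

    # 2. Score everyone in the concurrent cohort
    drs = drs_model.predict_proba(X_cur)[:, 1]

    # 3. Greedy 1:1 nearest-neighbor matching of treated to untreated on the DRS
    available = set(np.flatnonzero(~treated).tolist())
    pairs = []
    for i in np.flatnonzero(treated):
        if not available:
            break
        j = min(available, key=lambda k: abs(drs[i] - drs[k]))
        pairs.append((i, j))
        available.remove(j)
    print(f"{len(pairs)} matched pairs formed on the disease risk score")
    ```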

  9. The Plus or Minus Game--Teaching Estimation, Precision, and Accuracy

    ERIC Educational Resources Information Center

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-01-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in "TPT" (Larry Weinstein's "Fermi…

  10. Opportunities and Challenges for Personal Heat Exposure Research

    PubMed Central

    Kuras, Evan R.; Richardson, Molly B.; Calkins, Miriam M.; Ebi, Kristie L.; Hess, Jeremy J.; Kintziger, Kristina W.; Jagger, Meredith A.; Middel, Ariane; Scott, Anna A.; Spector, June T.; Uejio, Christopher K.; Vanos, Jennifer K.; Zaitchik, Benjamin F.; Gohlke, Julia M.

    2017-01-01

    Background: Environmental heat exposure is a public health concern. The impacts of environmental heat on mortality and morbidity at the population scale are well documented, but little is known about specific exposures that individuals experience. Objectives: The first objective of this work was to catalyze discussion of the role of personal heat exposure information in research and risk assessment. The second objective was to provide guidance regarding the operationalization of personal heat exposure research methods. Discussion: We define personal heat exposure as realized contact between a person and an indoor or outdoor environment that poses a risk of increases in body core temperature and/or perceived discomfort. Personal heat exposure can be measured directly with wearable monitors or estimated indirectly through the combination of time–activity and meteorological data sets. Complementary information to understand individual-scale drivers of behavior, susceptibility, and health and comfort outcomes can be collected from additional monitors, surveys, interviews, ethnographic approaches, and additional social and health data sets. Personal exposure research can help reveal the extent of exposure misclassification that occurs when individual exposure to heat is estimated using ambient temperature measured at fixed sites and can provide insights for epidemiological risk assessment concerning extreme heat. Conclusions: Personal heat exposure research provides more valid and precise insights into how often people encounter heat conditions and when, where, to whom, and why these encounters occur. Published literature on personal heat exposure is limited to date, but existing studies point to opportunities to inform public health practice regarding extreme heat, particularly where fine-scale precision is needed to reduce health consequences of heat exposure. https://doi.org/10.1289/EHP556 PMID:28796630

  11. Meta-analysis of individual registry results enhances international registry collaboration.

    PubMed

    Paxton, Elizabeth W; Mohaddes, Maziar; Laaksonen, Inari; Lorimer, Michelle; Graves, Stephen E; Malchau, Henrik; Namba, Robert S; Kärrholm, John; Rolfson, Ola; Cafri, Guy

    2018-03-28

    Background and purpose - Although common in medical research, meta-analysis has not been widely adopted in registry collaborations. A meta-analytic approach in which each registry conducts a standardized analysis on its own data followed by a meta-analysis to calculate a weighted average of the estimates allows collaboration without sharing patient-level data. The value of meta-analysis as an alternative to individual patient data analysis is illustrated in this study by comparing the risk of revision of porous tantalum cups versus other uncemented cups in primary total hip arthroplasties from Sweden, Australia, and a US registry (2003-2015). Patients and methods - For both individual patient data analysis and meta-analysis approaches a Cox proportional hazard model was fit for time to revision, comparing porous tantalum (n = 23,201) with other uncemented cups (n = 128,321). Covariates included age, sex, diagnosis, head size, and stem fixation. In the meta-analysis approach, treatment effect size (i.e., Cox model hazard ratio) was calculated within each registry and a weighted average for the individual registries' estimates was calculated. Results - Patient-level data analysis and meta-analytic approaches yielded the same results with the porous tantalum cups having a higher risk of revision than other uncemented cups (HR (95% CI) 1.6 (1.4-1.7) and HR (95% CI) 1.5 (1.4-1.7), respectively). Adding the US cohort to the meta-analysis led to greater generalizability, increased precision of the treatment effect, and similar findings (HR (95% CI) 1.6 (1.4-1.7)) with increased risk of porous tantalum cups. Interpretation - The meta-analytic technique is a viable option to address privacy, security, and data ownership concerns allowing more expansive registry collaboration, greater generalizability, and increased precision of treatment effects.
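
    The "weighted average of the individual registries' estimates" step corresponds to a standard inverse-variance meta-analysis on the log hazard ratio scale. The sketch below shows a fixed-effect version of that calculation; the three registry hazard ratios and confidence intervals are invented for illustration and are not the study's data.

    ```python
    import numpy as np

    def inverse_variance_pool(hrs, ci_los, ci_his):
        """Fixed-effect pooling of hazard ratios reported with 95% CIs.
        Each registry's log-HR is weighted by the inverse of its variance."""
        log_hr = np.log(hrs)
        se = (np.log(ci_his) - np.log(ci_los)) / (2 * 1.96)   # SE recovered from CI width
        w = 1.0 / se**2
        pooled = np.sum(w * log_hr) / np.sum(w)
        pooled_se = np.sqrt(1.0 / np.sum(w))
        return (np.exp(pooled),
                np.exp(pooled - 1.96 * pooled_se),
                np.exp(pooled + 1.96 * pooled_se))

    # Hypothetical registry-level estimates (illustration only)
    hr, lo, hi = inverse_variance_pool(hrs=[1.6, 1.5, 1.7],
                                       ci_los=[1.3, 1.2, 1.3],
                                       ci_his=[2.0, 1.9, 2.2])
    print(f"pooled HR = {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```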

  12. An evaluation of portion size estimation aids: precision, ease of use and likelihood of future use.

    PubMed

    Faulkner, Gemma P; Livingstone, M Barbara E; Pourshahidi, L Kirsty; Spence, Michelle; Dean, Moira; O'Brien, Sinead; Gibney, Eileen R; Wallace, Julie Mw; McCaffrey, Tracy A; Kerr, Maeve A

    2016-09-01

    The present study aimed to evaluate the precision, ease of use and likelihood of future use of portion size estimation aids (PSEA). A range of PSEA were used to estimate the serving sizes of a range of commonly eaten foods and rated for ease of use and likelihood of future usage. For each food, participants selected their preferred PSEA from a range of options including: quantities and measures; reference objects; measuring; and indicators on food packets. These PSEA were used to serve out various foods (e.g. liquid, amorphous, and composite dishes). Ease of use and likelihood of future use were noted. The foods were weighed to determine the precision of each PSEA. Participants were males and females aged 18-64 years (n = 120). The quantities and measures were the most precise PSEA (lowest range of weights for estimated portion sizes). However, participants preferred household measures (e.g. 200 ml disposable cup) - deemed easy to use (median rating of 5), likely to use again in future (all scored either 4 or 5 on a scale from 1='not very likely' to 5='very likely to use again') and precise (narrow range of weights for estimated portion sizes). The majority indicated they would most likely use the PSEA when preparing a meal (94 %), particularly dinner (86 %) in the home (89 %; all P < 0.001) for amorphous grain foods. Household measures may be precise, easy to use and acceptable aids for estimating the appropriate portion size of amorphous grain foods.

  13. Maternal caffeine consumption and risk of cardiovascular malformations.

    PubMed

    Browne, Marilyn L; Bell, Erin M; Druschel, Charlotte M; Gensburg, Lenore J; Mitchell, Allen A; Lin, Angela E; Romitti, Paul A; Correa, Adolfo

    2007-07-01

    The physiologic effects and common use of caffeine during pregnancy call for examination of maternal caffeine consumption and risk of birth defects. Epidemiologic studies have yielded mixed results, but such studies have grouped etiologically different defects and have not evaluated effect modification. The large sample size and precise case classification of the National Birth Defects Prevention Study allowed us to examine caffeine consumption and specific cardiovascular malformation (CVM) case groups. We studied consumption of caffeinated coffee, tea, soda, and chocolate to estimate total caffeine intake and separately examined exposure to each caffeinated beverage. Smoking, alcohol, vasoactive medications, folic acid supplement use, and infant gender were evaluated for effect modification. Maternal interview reports for 4,196 CVM case infants overall and 3,957 control infants were analyzed. We did not identify any significant positive associations between maternal caffeine consumption and CVMs. For tetralogy of Fallot, nonsignificant elevations in risk were observed for moderate (but not high) caffeine intake overall and among nonsmokers (ORs of 1.3 to 1.5). Risk estimates for both smoking and consuming caffeine were less than the sum of the excess risks for each exposure. We observed an inverse trend between coffee intake and risk of atrial septal defect; however, this single significant pattern of association might have been a chance finding. Our study found no evidence for an appreciable teratogenic effect of caffeine with regard to CVMs. (c) 2007 Wiley-Liss, Inc.

  14. Precise Image-Based Motion Estimation for Autonomous Small Body Exploration

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew E.; Matthies, Larry H.

    1998-01-01

    Space science and solar system exploration are driving NASA to develop an array of small body missions ranging in scope from near body flybys to complete sample return. This paper presents an algorithm for onboard motion estimation that will enable the precision guidance necessary for autonomous small body landing. Our techniques are based on automatic feature tracking between a pair of descent camera images followed by two frame motion estimation and scale recovery using laser altimetry data. The output of our algorithm is an estimate of rigid motion (attitude and position) and motion covariance between frames. This motion estimate can be passed directly to the spacecraft guidance and control system to enable rapid execution of safe and precise trajectories.

  15. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    PubMed

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators that corrected for inter-transect correlation (ν₈ and ν(W)) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with a second, differently generated set of spatial point populations, ν₈ and ν(W) again being the best performers in the longer-range autocorrelated populations. However, no systematic variance estimators tested were free from bias. On balance, systematic designs bring narrower confidence intervals in clustered populations, while random designs permit unbiased estimates of (often wider) confidence intervals. The search continues for better estimators of sampling variance for the systematic survey mean.
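
    The core comparison in this abstract (repeated systematic grid surveys of a patchy population have a smaller variance of the mean than simple random surveys of the same size) can be reproduced qualitatively with a toy simulation. The clustering model below (a few Gaussian habitat patches driving Poisson counts) and all parameter values are placeholders, not the authors' Matérn-process populations or their variance estimators.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # One clustered population on a 100 x 100 lattice of potential transects:
    # expected density is high inside a few habitat patches, near zero elsewhere.
    xx, yy = np.meshgrid(np.arange(100), np.arange(100), indexing="ij")
    intensity = np.zeros((100, 100))
    for _ in range(8):                                   # 8 circular habitat patches
        cx, cy = rng.integers(0, 100, size=2)
        intensity += 20 * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * 8**2))
    counts = rng.poisson(intensity)                      # realized count on every transect

    def random_survey(n=100):
        idx = rng.choice(counts.size, size=n, replace=False)
        return counts.ravel()[idx].mean()

    def systematic_survey(step=10):
        ox, oy = rng.integers(0, step, size=2)           # random start, aligned grid
        return counts[ox::step, oy::step].mean()         # 10 x 10 = 100 transects

    rand_means = [random_survey() for _ in range(10_000)]
    sys_means = [systematic_survey() for _ in range(10_000)]
    print("true mean density :", round(counts.mean(), 3))
    print("var of random mean:", round(np.var(rand_means), 4))
    print("var of systematic :", round(np.var(sys_means), 4))
    ```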

  16. Aliasing, Ambiguities, and Interpolation in Wideband Direction-of-Arrival Estimation Using Antenna Arrays

    ERIC Educational Resources Information Center

    Ho, Chung-Cheng

    2016-01-01

    For decades, direction finding has been an important research topic in many applications such as radar, location services, and medical diagnosis for treatment. For those kinds of applications, the precision of location estimation plays an important role, so a higher-precision location estimation method is always desirable. Although…

  17. Unscented predictive variable structure filter for satellite attitude estimation with model errors when using low precision sensors

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Li, Hengnian

    2016-10-01

    For the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the Attitude Determination and Control System (ADCS), especially for a small satellite with low precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with an ability to estimate the model errors on-line, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noises; therefore, it has the advantage of dealing with various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low precision sensors compared with the traditional unscented Kalman filter (UKF).

  18. Effect of Risk of Bias on the Effect Size of Meta-Analytic Estimates in Randomized Controlled Trials in Periodontology and Implant Dentistry

    PubMed Central

    Faggion, Clovis Mariano; Wu, Yun-Chun; Scheidgen, Moritz; Tu, Yu-Kang

    2015-01-01

    Background Risk of bias (ROB) may threaten the internal validity of a clinical trial by distorting the magnitude of treatment effect estimates, although some conflicting information on this assumption exists. Objective The objective of this study was to evaluate the effect of ROB on the magnitude of treatment effect estimates in randomized controlled trials (RCTs) in periodontology and implant dentistry. Methods A search for Cochrane systematic reviews (SRs), including meta-analyses of RCTs published in periodontology and implant dentistry fields, was performed in the Cochrane Library in September 2014. Random-effect meta-analyses were performed by grouping RCTs with different levels of ROBs in three domains (sequence generation, allocation concealment, and blinding of outcome assessment). To increase power and precision, only SRs with meta-analyses including at least 10 RCTs were included. Meta-regression was performed to investigate the association between ROB characteristics and the magnitudes of intervention effects in the meta-analyses. Results Of the 24 initially screened SRs, 21 SRs were excluded because they did not include at least 10 RCTs in the meta-analyses. Three SRs (two from the periodontology field) generated information for conducting 27 meta-analyses. Meta-regression did not reveal significant differences in the relationship of the ROB level with the size of treatment effect estimates, although a trend for inflated estimates was observed in domains with unclear ROBs. Conclusion In this sample of RCTs, high and (mainly) unclear risks of selection and detection biases did not seem to influence the size of treatment effect estimates, although several confounders might have influenced the strength of the association. PMID:26422698

  19. Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.

    PubMed

    Obuchowski, Nancy A; Bullen, Jennifer

    2017-01-01

    Introduction Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
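
    A minimal sketch of the confidence-interval construction discussed here: estimate the within-subject precision from a test-retest study, build an interval for a new patient's true value under the no-bias assumption, and check its coverage by simulation when a fixed percentage bias is actually present. The interval form (measured value ± 1.96 times the within-subject SD) and all parameter values are simplifying assumptions for illustration, not the paper's exact formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    z = 1.96

    def within_subject_sd(y1, y2):
        """Within-subject SD estimated from paired test-retest measurements."""
        d = np.asarray(y1) - np.asarray(y2)
        return np.sqrt(np.mean(d**2) / 2.0)

    # 1) Small test-retest precision study (35 subjects, true within-subject SD = 2)
    true_wsd = 2.0
    truth = rng.uniform(20, 80, 35)
    y1 = truth + rng.normal(0, true_wsd, 35)
    y2 = truth + rng.normal(0, true_wsd, 35)
    wsd_hat = within_subject_sd(y1, y2)

    # 2) Coverage of the no-bias interval  measured ± z*wsd_hat  for new patients,
    #    when the measurement actually carries a small fixed percentage bias
    bias_frac = 0.05
    new_truth = rng.uniform(20, 80, 100_000)
    measured = new_truth * (1 + bias_frac) + rng.normal(0, true_wsd, new_truth.size)
    covered = np.abs(measured - new_truth) <= z * wsd_hat
    print(f"wsd_hat = {wsd_hat:.2f}, coverage = {covered.mean():.3f}")
    ```

    With no bias the empirical coverage sits near the nominal 95%; as the fixed bias grows relative to the within-subject SD, coverage drops, which is the mechanism behind the bias thresholds reported in the abstract.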

  20. Age estimation of burbot using pectoral fin rays, branchiostegal rays, and otoliths

    USGS Publications Warehouse

    Klein, Zachary B.; Terrazas, Marc M.; Quist, Michael C.

    2014-01-01

    Throughout much of its native distribution, burbot (Lota lota) is a species of conservation concern. Understanding dynamic rate functions is critical for the effective management of sensitive burbot populations, which necessitates accurate and precise age estimates; because otoliths can only be obtained lethally, managing sensitive populations also requires an accurate and precise non-lethal alternative. In an effort to identify a non-lethal ageing structure, we compared the precision of age estimates obtained from otoliths, pectoral fin rays, dorsal fin rays and branchiostegal rays from 208 burbot collected from the Green River drainage, Wyoming. Additionally, we compared the accuracy of age estimates from pectoral fin rays, dorsal fin rays and branchiostegal rays to those of otoliths. Dorsal fin rays were immediately deemed a poor ageing structure and removed from further analysis. Exact agreement between readers and reader confidence were highest for otoliths and lowest for branchiostegal rays, and age-bias plots indicated that age estimates obtained from branchiostegal rays and pectoral fin rays were substantially different from those obtained from otoliths. Our results indicate that otoliths provide the most precise age estimates for burbot.

  1. Variation of normal tissue complication probability (NTCP) estimates of radiation-induced hypothyroidism in relation to changes in delineation of the thyroid gland.

    PubMed

    Rønjom, Marianne F; Brink, Carsten; Lorenzen, Ebbe L; Hegedüs, Laszlo; Johansen, Jørgen

    2015-01-01

    To examine the variations of risk-estimates of radiation-induced hypothyroidism (HT) from our previously developed normal tissue complication probability (NTCP) model in patients with head and neck squamous cell carcinoma (HNSCC) in relation to variability of delineation of the thyroid gland. In a previous study for development of an NTCP model for HT, the thyroid gland was delineated in 246 treatment plans of patients with HNSCC. Fifty of these plans were randomly chosen for re-delineation for a study of the intra- and inter-observer variability of thyroid volume, Dmean and estimated risk of HT. Bland-Altman plots were used for assessment of the systematic (mean) and random [standard deviation (SD)] variability of the three parameters, and a method for displaying the spatial variation in delineation differences was developed. Intra-observer variability resulted in a mean difference in thyroid volume and Dmean of 0.4 cm³ (SD ± 1.6) and -0.5 Gy (SD ± 1.0), respectively, and 0.3 cm³ (SD ± 1.8) and 0.0 Gy (SD ± 1.3) for inter-observer variability. The corresponding mean differences of NTCP values for radiation-induced HT due to intra- and inter-observer variations were insignificantly small, -0.4% (SD ± 6.0) and -0.7% (SD ± 4.8), respectively, but as the SDs show, for some patients the difference in estimated NTCP was large. For the entire study population, the variation in predicted risk of radiation-induced HT in head and neck cancer was small and our NTCP model was robust against observer variations in delineation of the thyroid gland. However, for the individual patient, there may be large differences in estimated risk which calls for precise delineation of the thyroid gland to obtain correct dose and NTCP estimates for optimized treatment planning in the individual patient.

  2. Quantum metrology and estimation of Unruh effect

    PubMed Central

    Wang, Jieci; Tian, Zehua; Jing, Jiliang; Fan, Heng

    2014-01-01

    We study quantum metrology for a pair of entangled Unruh-DeWitt detectors when one of them is accelerated and coupled to a massless scalar field. Compared with previous schemes, our model requires only local interaction and avoids the use of cavities in the probe state preparation process. We show that the probe state preparation and the interaction between the accelerated detector and the external field have significant effects on the value of the quantum Fisher information and correspondingly set different ultimate limits of precision in the estimation of the Unruh effect. We find that the precision of the estimation can be improved by a larger effective coupling strength and a longer interaction time. Alternatively, the energy gap of the detector has a range that can provide better precision. Thus we may adjust those parameters and attain a higher precision in the estimation. We also find that an extremely high acceleration is not required in the quantum metrology process. PMID:25424772

  3. Spatial distribution, sampling precision and survey design optimisation with non-normal variables: The case of anchovy (Engraulis encrasicolus) recruitment in Spanish Mediterranean waters

    NASA Astrophysics Data System (ADS)

    Tugores, M. Pilar; Iglesias, Magdalena; Oñate, Dolores; Miquel, Joan

    2016-02-01

    In the Mediterranean Sea, the European anchovy (Engraulis encrasicolus) plays a key role in ecological and economic terms. Ensuring stock sustainability requires the provision of crucial information, such as species spatial distribution or unbiased abundance and precision estimates, so that management strategies can be defined (e.g. fishing quotas, temporal closure areas or marine protected areas, MPAs). Furthermore, the estimation of the precision of global abundance at different sampling intensities can be used for survey design optimisation. Geostatistics provide a priori unbiased estimations of the spatial structure, global abundance and precision for autocorrelated data. However, their application to non-Gaussian data introduces difficulties in the analysis in conjunction with low robustness or unbiasedness. The present study applied intrinsic geostatistics in two dimensions in order to (i) analyse the spatial distribution of anchovy in Spanish Western Mediterranean waters during the species' recruitment season, (ii) produce distribution maps, (iii) estimate global abundance and its precision, (iv) analyse the effect of changing the sampling intensity on the precision of global abundance estimates and (v) evaluate the effects of several methodological options on the robustness of all the analysed parameters. The results suggested that while the spatial structure was usually non-robust to the tested methodological options when working with the original dataset, it became more robust for the transformed datasets (especially for the log-backtransformed dataset). The global abundance was always highly robust and the global precision was highly or moderately robust to most of the methodological options, except for data transformation.

  4. An evaluation of the accuracy of small-area demographic estimates of population at risk and its effect on prevalence statistics

    PubMed Central

    2013-01-01

    Demographic estimates of population at risk often underpin epidemiologic research and public health surveillance efforts. In spite of their central importance to epidemiology and public-health practice, little previous attention has been paid to evaluating the magnitude of errors associated with such estimates or the sensitivity of epidemiologic statistics to these effects. In spite of the well-known observation that accuracy in demographic estimates declines as the size of the population to be estimated decreases, demographers continue to face pressure to produce estimates for increasingly fine-grained population characteristics at ever-smaller geographic scales. Unfortunately, little guidance on the magnitude of errors that can be expected in such estimates is currently available in the literature and available for consideration in small-area epidemiology. This paper attempts to fill this current gap by producing a Vintage 2010 set of single-year-of-age estimates for census tracts, then evaluating their accuracy and precision in light of the results of the 2010 Census. These estimates are produced and evaluated for 499 census tracts in New Mexico for single years of age from 0 to 21 and for each sex individually. The error distributions associated with these estimates are characterized statistically using non-parametric statistics including the median and 2.5th and 97.5th percentiles. The impact of these errors is considered through simulations in which observed and estimated 2010 population counts are used as alternative denominators and simulated event counts are used to compute a realistic range of prevalence values. The implications of the results of this study for small-area epidemiologic research in cancer and environmental health are considered. PMID:24359344

  5. The effectiveness of visitation proxy variables in improving recreation use estimates for the USDA Forest Service

    Treesearch

    Donald B.K. English; Susan M. Kocis; J. Ross Arnold; Stanley J. Zarnoch; Larry Warren

    2003-01-01

    In estimating recreation visitation at the National Forest level in the US, annual counts of a number of types of visitation proxy measures were used. The intent was to improve the overall precision of the visitation estimate by employing the proxy counts. The precision of visitation estimates at sites that had proxy information versus those that did not is examined....

  6. Using satellite imagery as ancillary data for increasing the precision of estimates for the Forest Inventory and Analysis program of the USDA Forest Service

    Treesearch

    Ronald E. McRoberts; Geoffrey R. Holden; Mark D. Nelson; Greg C. Liknes; Dale D. Gormanson

    2006-01-01

    Forest inventory programs report estimates of forest variables for areas of interest ranging in size from municipalities, to counties, to states or provinces. Because of numerous factors, sample sizes are often insufficient to estimate attributes as precisely as is desired, unless the estimation process is enhanced using ancillary data. Classified satellite imagery has...

  7. A method for estimating radioactive cesium concentrations in cattle blood using urine samples.

    PubMed

    Sato, Itaru; Yamagishi, Ryoma; Sasaki, Jun; Satoh, Hiroshi; Miura, Kiyoshi; Kikuchi, Kaoru; Otani, Kumiko; Okada, Keiji

    2017-12-01

    In the region contaminated by the Fukushima nuclear accident, radioactive contamination of live cattle should be checked before slaughter. In this study, we establish a precise method for estimating radioactive cesium concentrations in cattle blood using urine samples. Blood and urine samples were collected from a total of 71 cattle on two farms in the 'difficult-to-return zone'. Urine ¹³⁷Cs, specific gravity, electrical conductivity, pH, sodium, potassium, calcium, and creatinine were measured and various estimation methods for blood ¹³⁷Cs were tested. The average error rate of the estimation was 54.2% without correction. Correcting for urine creatinine, specific gravity, electrical conductivity, or potassium improved the precision of the estimation. Correcting for specific gravity using the following formula gave the most precise estimate (average error rate = 16.9%): [blood ¹³⁷Cs] = [urinary ¹³⁷Cs]/([specific gravity] - 1)/329. Urine samples are faster to measure than blood samples because urine can be obtained in larger quantities and has a higher ¹³⁷Cs concentration than blood. These advantages of urine and the estimation precision demonstrated in our study indicate that estimation of blood ¹³⁷Cs using urine samples is a practical means of monitoring radioactive contamination in live cattle. © 2017 Japanese Society of Animal Science.
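
    Because the best-performing correction above is a single algebraic formula, it translates directly into code. The sketch below simply applies the published specific-gravity correction; the input values in the example are invented.

    ```python
    def blood_cs137_from_urine(urine_cs137_bq_per_kg, urine_specific_gravity):
        """Estimate blood 137Cs (Bq/kg) from a urine sample using the
        specific-gravity correction reported in the study:
            blood = urine / (specific gravity - 1) / 329
        """
        return urine_cs137_bq_per_kg / (urine_specific_gravity - 1.0) / 329.0

    # Hypothetical sample: 150 Bq/kg in urine with specific gravity 1.030
    print(round(blood_cs137_from_urine(150.0, 1.030), 1))  # ≈ 15.2 Bq/kg
    ```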

  8. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

    Sampling by spatially replicated counts (point-count) is an increasingly popular method of estimating population size of organisms. Challenges exist when sampling by the point-count method, and it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than abundance estimates derived from sample scenarios with many sample units of small area. It is important to consider the accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; in practice, however, accuracy and precision are often an afterthought considered only during data analysis.

  9. Inter-examination Precision of Magnitude-based Magnetic Resonance Imaging for Estimation of Segmental Hepatic Proton Density Fat Fraction (PDFF) in Obese Subjects

    PubMed Central

    Negrete, Lindsey M.; Middleton, Michael S.; Clark, Lisa; Wolfson, Tanya; Gamst, Anthony C.; Lam, Jessica; Changchien, Chris; Deyoung-Dominguez, Ivan M.; Hamilton, Gavin; Loomba, Rohit; Schwimmer, Jeffrey; Sirlin, Claude B.

    2013-01-01

    Purpose To prospectively describe magnitude-based multi-echo gradient-echo hepatic proton density fat fraction (PDFF) inter-examination precision at 3T. Materials and Methods In this prospective, IRB approved, HIPAA compliant study, written informed consent was obtained from 29 subjects (body mass indexes > 30 kg/m²). Three 3T magnetic resonance imaging (MRI) examinations were obtained over 75-90 minutes. Segmental, lobar, and whole liver PDFF were estimated (using three, four, five, or six echoes) by magnitude-based multi-echo MRI in co-localized regions of interest (ROIs). For each estimate (using three, four, five, or six echoes), at each anatomic level (segmental, lobar, whole liver), three inter-examination precision metrics were computed: intra-class correlation coefficient (ICC), standard deviation (SD), and range. Results Magnitude-based PDFF estimates using each reconstruction method showed excellent inter-examination precision for each segment (ICC ≥ 0.992; SD ≤ 0.66%; range ≤ 1.24%), lobe (ICC ≥ 0.998; SD ≤ 0.34%; range ≤ 0.64%), and the whole liver (ICC = 0.999; SD ≤ 0.24%; range ≤ 0.45%). Inter-examination precision was unaffected by whether PDFF was estimated using three, four, five, or six echoes. Conclusion Magnitude-based PDFF estimation shows high inter-examination precision at segmental, lobar, and whole liver anatomic levels, supporting its use in clinical care or clinical trials. The results of this study suggest that longitudinal hepatic PDFF change greater than 1.6% is likely to represent signal rather than noise. PMID:24136736

  10. Peak Measurement for Vancomycin AUC Estimation in Obese Adults Improves Precision and Lowers Bias.

    PubMed

    Pai, Manjunath P; Hong, Joseph; Krop, Lynne

    2017-04-01

    Vancomycin area under the curve (AUC) estimates may be skewed in obese adults due to weight-dependent pharmacokinetic parameters. We demonstrate that peak and trough measurements reduce bias and improve the precision of vancomycin AUC estimates in obese adults (n = 75) and validate this in an independent cohort (n = 31). The precision and mean percent bias of Bayesian vancomycin AUC estimates are comparable between covariate-dependent (R² = 0.774, 3.55%) and covariate-independent (R² = 0.804, 3.28%) models when peaks and troughs are measured but not when measurements are restricted to troughs only (R² = 0.557, 15.5%). Copyright © 2017 American Society for Microbiology.
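
    As a rough illustration of why a measured peak adds information beyond a trough, the sketch below shows a simplified, non-Bayesian two-point steady-state AUC calculation (linear trapezoid during the infusion, log trapezoid during elimination). This is a generic textbook-style calculation, not the Bayesian covariate models evaluated in the study, and the dosing and concentration values are invented.

    ```python
    import numpy as np

    def vanco_auc24(c_peak, t_peak, c_trough, t_trough, t_inf, tau):
        """Simplified steady-state two-point vancomycin AUC (linear-up / log-down).

        c_peak   : post-distribution peak concentration (mg/L) drawn at t_peak h
        c_trough : trough concentration (mg/L) drawn at t_trough h
        t_inf    : infusion duration (h); tau : dosing interval (h)
        """
        ke = np.log(c_peak / c_trough) / (t_trough - t_peak)   # elimination rate constant
        cmax = c_peak * np.exp(ke * (t_peak - t_inf))          # back-extrapolate to end of infusion
        cmin = cmax * np.exp(-ke * (tau - t_inf))              # concentration at end of interval
        auc_inf = (cmin + cmax) / 2 * t_inf                    # linear trapezoid (infusion phase)
        auc_elim = (cmax - cmin) / ke                          # log trapezoid (elimination phase)
        return (auc_inf + auc_elim) * 24 / tau

    # Hypothetical regimen: 1 g over 1 h every 12 h; peak 30 mg/L at 2 h, trough 12 mg/L at 11.5 h
    print(round(vanco_auc24(30, 2, 12, 11.5, 1, 12)))          # ≈ 492 mg·h/L
    ```

    With only a trough available, the elimination rate cannot be computed from the data at hand and must come entirely from population priors, which is the mechanism behind the poorer trough-only performance reported above.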

  11. Achieving metrological precision limits through postselection

    NASA Astrophysics Data System (ADS)

    Alves, G. Bié; Pimentel, A.; Hor-Meyll, M.; Walborn, S. P.; Davidovich, L.; Filho, R. L. de Matos

    2017-01-01

    Postselection strategies have been proposed with the aim of amplifying weak signals, which may help to overcome detection thresholds associated with technical noise in high-precision measurements. Here we use an optical setup to experimentally explore two different postselection protocols for the estimation of a small parameter: a weak-value amplification procedure and an alternative method that does not provide amplification but nonetheless is shown to be more robust for the sake of parameter estimation. Each technique leads approximately to the saturation of quantum limits for the estimation precision, expressed by the Cramér-Rao bound. For both situations, we show that parameter estimation is improved when the postselection statistics are considered together with the measurement device.

  12. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    ERIC Educational Resources Information Center

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  13. Integrating acoustic telemetry into mark-recapture models to improve the precision of apparent survival and abundance estimates.

    PubMed

    Dudgeon, Christine L; Pollock, Kenneth H; Braccini, J Matias; Semmens, Jayson M; Barnett, Adam

    2015-07-01

    Capture-mark-recapture models are useful tools for estimating demographic parameters but often result in low precision when recapture rates are low. Low recapture rates are typical in many study systems including fishing-based studies. Incorporating auxiliary data into the models can improve precision and in some cases enable parameter estimation. Here, we present a novel application of acoustic telemetry for the estimation of apparent survival and abundance within capture-mark-recapture analysis using open population models. Our case study is based on simultaneously collecting longline fishing and acoustic telemetry data for a large mobile apex predator, the broadnose sevengill shark (Notorhynchus cepedianus), at a coastal site in Tasmania, Australia. Cormack-Jolly-Seber models showed that longline data alone had very low recapture rates while acoustic telemetry data for the same time period resulted in at least tenfold higher recapture rates. The apparent survival estimates were similar for the two datasets but the acoustic telemetry data showed much greater precision and enabled apparent survival parameter estimation for one dataset, which was inestimable using fishing data alone. Combined acoustic telemetry and longline data were incorporated into Jolly-Seber models using a Monte Carlo simulation approach. Abundance estimates were comparable to those with longline data only; however, the inclusion of acoustic telemetry data increased precision in the estimates. We conclude that acoustic telemetry is a useful tool for incorporating in capture-mark-recapture studies in the marine environment. Future studies should consider the application of acoustic telemetry within this framework when setting up the study design and sampling program.

  14. An estimation of burden of serious fungal infections in France.

    PubMed

    Gangneux, J-P; Bougnoux, M-E; Hennequin, C; Godet, C; Chandenier, J; Denning, D W; Dupont, B

    2016-12-01

    Estimates of the burden of serious fungal diseases in France are essential to inform public health priorities and the resources and research needed on these infections. In France, precise data are available for invasive fungal diseases, but estimates for several other diseases, such as chronic and immunoallergic diseases, are by contrast less well known. A systematic literature search was conducted using the Web of Science Platform. Published epidemiology papers reporting fungal infection rates from France were identified. Where no data existed, we used specific populations at risk and fungal infection frequencies in those populations to estimate national incidence or prevalence, depending on the condition. The model predicts high prevalences of severe asthma with fungal sensitization episodes (189 cases/100,000 adults per year), of allergic bronchopulmonary aspergillosis (145/100,000) and of chronic pulmonary aspergillosis (5.24/100,000). In addition, the estimated incidence of invasive aspergillosis is 1.8/100,000 annually based on classical high risk factors. Estimates for invasive mucormycosis, pneumocystosis and cryptococcosis are 0.12/100,000, 1/100,000 and 0.2/100,000, respectively. Regarding invasive candidiasis, more than 10,000 cases per year are estimated, and a much higher number of cases of recurrent vaginal candidiasis is probable but must be confirmed. Finally, this survey was an opportunity to report a first picture of the frequency of tinea capitis in France. Using local and literature data on the incidence or prevalence of fungal infections, approximately 1,000,000 (1.47%) people in France are estimated to suffer from serious fungal infections each year. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  15. Glutathione S-transferase M1 polymorphism and endometriosis susceptibility: a meta-analysis.

    PubMed

    Li, H; Zhang, Y

    2015-02-01

    Many studies have investigated the association between the glutathione S-transferase M1 (GSTM1) null genotype and the risk of endometriosis. However, the effect of the GSTM1 null genotype on endometriosis is still unclear because of apparent inconsistencies among those studies. To derive a more precise estimate of the relationship, a meta-analysis was performed. PubMed, Embase, and Web of Science were searched. We estimated the summary odds ratio (OR) with a 95% confidence interval (95% CI) to assess the association. A total of 24 case-control studies with 2,684 endometriosis cases and 3,119 controls were included in this meta-analysis. Meta-analysis of the 24 studies showed that the GSTM1 null genotype was associated with the risk of endometriosis (random effects OR=1.66, 95% CI 1.23 to 2.24). In the subgroup analysis by ethnicity, increased risks were found for both Caucasians (OR=1.26, 95% CI 1.04-1.51) and Asians (OR=1.28, 95% CI 1.06-1.55). No evidence of publication bias was observed. In conclusion, this meta-analysis suggests that the GSTM1 null genotype increases the overall risk of endometriosis. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  16. Sequential Feedback Scheme Outperforms the Parallel Scheme for Hamiltonian Parameter Estimation.

    PubMed

    Yuan, Haidong

    2016-10-14

    Measurement and estimation of parameters are essential for science and engineering, where the main quest is to find the highest achievable precision with the given resources and design schemes to attain it. Two schemes, the sequential feedback scheme and the parallel scheme, are usually studied in the quantum parameter estimation. While the sequential feedback scheme represents the most general scheme, it remains unknown whether it can outperform the parallel scheme for any quantum estimation tasks. In this Letter, we show that the sequential feedback scheme has a threefold improvement over the parallel scheme for Hamiltonian parameter estimations on two-dimensional systems, and an order of O(d+1) improvement for Hamiltonian parameter estimation on d-dimensional systems. We also show that, contrary to the conventional belief, it is possible to simultaneously achieve the highest precision for estimating all three components of a magnetic field, which sets a benchmark on the local precision limit for the estimation of a magnetic field.

  17. Novel biomarkers for prediabetes, diabetes, and associated complications

    PubMed Central

    Dorcely, Brenda; Katz, Karin; Jagannathan, Ram; Chiang, Stephanie S; Oluwadare, Babajide; Goldberg, Ira J; Bergman, Michael

    2017-01-01

    The number of individuals with prediabetes is expected to grow substantially and estimated to globally affect 482 million people by 2040. Therefore, effective methods for diagnosing prediabetes will be required to reduce the risk of progressing to diabetes and its complications. The current biomarkers, glycated hemoglobin (HbA1c), fructosamine, and glycated albumin have limitations including moderate sensitivity and specificity and are inaccurate in certain clinical conditions. Therefore, identification of additional biomarkers is being explored recognizing that any single biomarker will also likely have inherent limitations. Therefore, combining several biomarkers may more precisely identify those at high risk for developing prediabetes and subsequent progression to diabetes. This review describes recently identified biomarkers and their potential utility for addressing the burgeoning epidemic of dysglycemic disorders. PMID:28860833

  18. Challenges to studying the health effects of early life environmental chemical exposures on children's health.

    PubMed

    Braun, Joseph M; Gray, Kimberly

    2017-12-01

    Epidemiological studies play an important role in quantifying how early life environmental chemical exposures influence the risk of childhood diseases. These studies face at least four major challenges that can produce noise when trying to identify signals of associations between chemical exposure and childhood health. Challenges include accurately estimating chemical exposure, confounding from causes of both exposure and disease, identifying periods of heightened vulnerability to chemical exposures, and determining the effects of chemical mixtures. We provide recommendations that will aid in identifying these signals with more precision.

  19. Multilevel modeling of single-case data: A comparison of maximum likelihood and Bayesian estimation.

    PubMed

    Moeyaert, Mariola; Rindskopf, David; Onghena, Patrick; Van den Noortgate, Wim

    2017-12-01

    The focus of this article is to describe Bayesian estimation, including construction of prior distributions, and to compare parameter recovery under the Bayesian framework (using weakly informative priors) and the maximum likelihood (ML) framework in the context of multilevel modeling of single-case experimental data. Bayesian estimation results were found similar to ML estimation results in terms of the treatment effect estimates, regardless of the functional form and degree of information included in the prior specification in the Bayesian framework. In terms of the variance component estimates, both the ML and Bayesian estimation procedures result in biased and less precise variance estimates when the number of participants is small (i.e., 3). By increasing the number of participants to 5 or 7, the relative bias is close to 5% and more precise estimates are obtained for all approaches, except for the inverse-Wishart prior using the identity matrix. When a more informative prior was added, more precise estimates for the fixed effects and random effects were obtained, even when only 3 participants were included. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data in construction of a response surface and estimation of its precision interval.

  1. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.

    2014-04-15

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence.

  2. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    PubMed Central

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

    2014-01-01

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence. PMID:24694150
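
    The paper's scheme solves a constrained minimization with Lagrange multipliers, but its qualitative behavior (more scans are needed for tighter precision, higher confidence, or lower anticipated ED) can be illustrated with the elementary normal-theory sample-size formula for estimating a mean to within a given relative precision. This simplified stand-in is not the authors' method, and the scan-to-scan standard deviation assumed below is invented; with that choice it happens to roughly reproduce the 30-scan and 5-scan figures quoted for the 320-row scanner.

    ```python
    import math
    from statistics import NormalDist

    def n_for_relative_precision(sd_mSv, anticipated_ed_mSv, rel_precision=0.05, confidence=0.95):
        """Smallest n such that z * sd / sqrt(n) <= rel_precision * anticipated ED."""
        z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
        return math.ceil((z * sd_mSv / (rel_precision * anticipated_ed_mSv)) ** 2)

    # Assumed scan-to-scan SD of 0.55 mSv in the per-scan ED estimate (hypothetical)
    for ed in (4, 10):
        print(f"anticipated ED {ed} mSv -> n = {n_for_relative_precision(0.55, ed)}")
    ```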

  3. Use of the RISK21 roadmap and matrix: human health risk assessment of the use of a pyrethroid in bed netting

    PubMed Central

    Doe, John E.; Lander, Deborah R.; Doerrer, Nancy G.; Heard, Nina; Hines, Ronald N.; Lowit, Anna B.; Pastoor, Timothy; Phillips, Richard D.; Sargent, Dana; Sherman, James H.; Young Tanir, Jennifer; Embry, Michelle R.

    2016-01-01

    Abstract The HESI-coordinated RISK21 roadmap and matrix are tools that provide a transparent method to compare exposure and toxicity information and assess whether additional refinement is required to obtain the necessary precision level for a decision regarding safety. A case study of the use of a pyrethroid, “pseudomethrin,” in bed netting to control malaria is presented to demonstrate the application of the roadmap and matrix. The evaluation began with a problem formulation step. The first assessment utilized existing information pertaining to the use and the class of chemistry. At each stage of the step-wise approach, the precision of the toxicity and exposure estimates were refined as necessary by obtaining key data which enabled a decision on safety to be made efficiently and with confidence. The evaluation demonstrated the concept of using existing information within the RISK21 matrix to drive the generation of additional data using a value-of-information approach. The use of the matrix highlighted whether exposure or toxicity required further investigation and emphasized the need to address the default uncertainty factor of 100 at the highest tier of the evaluation. It also showed how new methodology such as the use of in vitro studies and assays could be used to answer the specific questions which arise through the use of the matrix. The matrix also serves as a useful means to communicate progress to stakeholders during an assessment of chemical use. PMID:26517449

  4. Analysis of the Precision of Variable Flip Angle T1 Mapping with Emphasis on the Noise Propagated from RF Transmit Field Maps.

    PubMed

    Lee, Yoojin; Callaghan, Martina F; Nagy, Zoltan

    2017-01-01

    In magnetic resonance imaging, precise measurements of the longitudinal relaxation time (T1) are crucial to acquire useful information that is applicable to numerous clinical and neuroscience applications. In this work, we investigated the precision of the T1 relaxation time as measured using the variable flip angle method with emphasis on the noise propagated from radiofrequency transmit field (B1+) measurements. The analytical solution for T1 precision was derived by standard error propagation methods incorporating the noise from the three input sources: two spoiled gradient echo (SPGR) images and a B1+ map. Repeated in vivo experiments were performed to estimate the total variance in T1 maps and we compared these experimentally obtained values with the theoretical predictions to validate the established theoretical framework. Both the analytical and experimental results showed that variance in the B1+ map propagated comparable noise levels into the T1 maps as either of the two SPGR images. Improving the precision of the B1+ measurements significantly reduced the variance in the estimated T1 map. The variance estimated from the repeatedly measured in vivo T1 maps agreed well with the theoretically-calculated variance in T1 estimates, thus validating the analytical framework for realistic in vivo experiments. We concluded that for T1 mapping experiments, the error propagated from the B1+ map must be considered. Optimizing the SPGR signals while neglecting to improve the precision of the B1+ map may result in grossly overestimating the precision of the estimated T1 values.
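
    The question studied here (how much of the variance in a variable flip angle T1 estimate is inherited from noise in the B1+ map versus noise in the two SPGR images) can also be explored numerically. The sketch below uses the standard two-flip-angle VFA estimator with simple Gaussian noise models; the TR, flip angles, and noise levels are illustrative assumptions, not the paper's acquisition protocol.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def spgr(t1, m0, tr, alpha):
        """Spoiled gradient echo signal for relaxation time t1 (ms), repetition time tr (ms)."""
        e1 = np.exp(-tr / t1)
        return m0 * np.sin(alpha) * (1 - e1) / (1 - e1 * np.cos(alpha))

    def vfa_t1(s1, s2, a1, a2, tr):
        """Two-point VFA: the slope of S/sin(a) vs S/tan(a) equals exp(-TR/T1)."""
        slope = (s2 / np.sin(a2) - s1 / np.sin(a1)) / (s2 / np.tan(a2) - s1 / np.tan(a1))
        return -tr / np.log(slope)

    t1_true, m0, tr = 1000.0, 1.0, 15.0                       # ms
    a_nom = np.deg2rad([4.0, 20.0])                           # nominal flip angles
    n = 100_000
    sigma_s, sigma_b1 = 0.0005, 0.02                          # image noise, B1+ map noise

    s = np.array([spgr(t1_true, m0, tr, a) for a in a_nom])   # noiseless signals (true B1 = 1)
    s_noisy = s[:, None] + rng.normal(0, sigma_s, (2, n))
    b1_meas = 1.0 + rng.normal(0, sigma_b1, n)                # noisy B1+ estimate
    t1_est = vfa_t1(s_noisy[0], s_noisy[1], a_nom[0] * b1_meas, a_nom[1] * b1_meas, tr)

    print("SD of T1 with image + B1 noise:", round(np.std(t1_est), 1), "ms")
    # Re-running with sigma_b1 = 0 isolates the contribution of the SPGR images alone,
    # giving a crude numerical version of the variance budget discussed in the paper.
    ```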

  5. Accuracy and practicality of a portable ozone monitor for personal exposure estimates

    NASA Astrophysics Data System (ADS)

    Sagona, Jessica A.; Weisel, Clifford P.; Meng, Qingyu

    2018-02-01

    Accurate measurements of personal exposure to atmospheric pollutants such as ozone are important for understanding health risks. We tested a new personal ozone monitor (POM; 2B Technologies) for accuracy, precision, and ease of use. The POM's measurements were compared to simultaneous ozone measurements from a 2B Model 205 monitor and a ThermoScientific 49i monitor, and multiple POMs were placed side-by-side to check precision. Tests were undertaken in a controlled environmental facility, outdoors, and in a private residence. Additionally, ten volunteers wore a POM for five days and answered a questionnaire about its ease of use. The POM measured ozone accurately compared to the 49i ozone monitor, with average relative differences of less than 8%. In the controlled environment tests, the POM's ozone measurements did not change in the presence of additional atmospheric constituents with similar absorption lines to ozone, though there may have been a small decrease in precision and accuracy. Precision between POMs varied by environment (r2 = 0.98 outdoors; r2 = 0.3 to 0.9 in controlled lab conditions). Volunteers reported that the POM was reasonably comfortable to wear, although all reported that they felt that it was too noisy. Overall, the POM is a viable option for personal ozone monitoring.

  6. Does choice of estimators influence conclusions from true metabolizable energy feeding trials?

    USGS Publications Warehouse

    Sherfy, M.H.; Kirkpatrick, R.L.; Webb, K.E.

    2005-01-01

    True metabolizable energy (TME) is a measure of avian dietary quality that accounts for metabolic fecal and endogenous urinary energy losses (EL) of non-dietary origin. The TME is calculated using a bird fed the test diet and an estimate of EL derived from another bird (Paired Bird Correction), the same bird (Self Correction), or several other birds (Group Mean Correction). We evaluated precision of these estimators by using each to calculate TME of three seed diets in blue-winged teal (Anas discors). The TME varied by <2% among estimators for all three diets, and Self Correction produced the least variable TMEs for each. The TME did not differ between estimators in nine paired comparisons within diets, but variation between estimators within individual birds was sufficient to be of practical consequence. Although differences in precision among methods were slight, Self Correction required the lowest sample size to achieve a given precision. Feeding trial methods that minimize variation among individuals have several desirable properties, including higher precision of TME estimates and more rigorous experimental control. Consequently, we believe that Self Correction is most likely to accurately represent nutritional value of food items and should be considered the standard method for TME feeding trials. © Dt. Ornithologen-Gesellschaft e.V. 2005.

  7. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
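
    For readers unfamiliar with the estimators being compared, the sketch below simulates a clumped population of counts over plots of varying area and contrasts a simple expansion estimator with a ratio estimator (area as the auxiliary variable) under simple random sampling, reporting the Monte Carlo CV of each. The synthetic population and sampling intensity are assumptions for illustration, not the pronghorn data.

```python
# Simple expansion vs. ratio estimator of a population total under simple
# random sampling of plots; the clumped population below is invented.
import numpy as np

rng = np.random.default_rng(1)
N = 200                                         # number of sampling units (assumed)
area = rng.uniform(5.0, 15.0, size=N)           # plot areas, km^2 (assumed)
# Clumped counts: most plots nearly empty, a few holding large groups.
counts = rng.negative_binomial(0.3, 0.3 / (0.3 + 0.05 * area))
TRUE_TOTAL = counts.sum()

def monte_carlo_cv(use_ratio, n_sample=66, reps=5000):
    """Mean and CV (%) of the estimated total over repeated samples."""
    est = []
    for _ in range(reps):
        idx = rng.choice(N, size=n_sample, replace=False)
        y_s, x_s = counts[idx], area[idx]
        if use_ratio:
            est.append(area.sum() * y_s.sum() / x_s.sum())   # ratio estimator, area auxiliary
        else:
            est.append(N * y_s.mean())                       # simple expansion estimator
    est = np.array(est)
    return est.mean(), 100.0 * est.std() / est.mean()

for name, use_ratio in [("simple", False), ("ratio", True)]:
    mean, cv = monte_carlo_cv(use_ratio)
    print(f"{name:>6}: true total = {TRUE_TOTAL}, mean estimate = {mean:.0f}, CV = {cv:.0f}%")
```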

  8. Medical imaging dose optimisation from ground up: expert opinion of an international summit.

    PubMed

    Samei, Ehsan; Järvinen, Hannu; Kortesniemi, Mika; Simantirakis, George; Goh, Charles; Wallace, Anthony; Vano, Eliseo; Bejan, Adrian; Rehani, Madan; Vassileva, Jenia

    2018-05-17

    As in any medical intervention, there is either a known or an anticipated benefit to the patient from undergoing a medical imaging procedure. This benefit is generally significant, as demonstrated by the manner in which medical imaging has transformed clinical medicine. At the same time, when it comes to imaging that deploys ionising radiation, there is a potential associated risk from radiation. Radiation risk has been recognised as a key liability in the practice of medical imaging, creating a motivation for radiation dose optimisation. The level of radiation dose and risk in imaging varies but is generally low. Thus, from the epidemiological perspective, this makes the estimation of the precise level of associated risk highly uncertain. However, in spite of the low magnitude and high uncertainty of this risk, its possibility cannot easily be refuted. Therefore, given the moral obligation of healthcare providers, 'first, do no harm,' there is an ethical obligation to mitigate this risk. Precisely how to achieve this goal scientifically and practically within a coherent system has been an open question. To address this need, in 2016, the International Atomic Energy Agency (IAEA) organised a summit to clarify the role of Diagnostic Reference Levels to optimise imaging dose, summarised into an initial report (Järvinen et al 2017 Journal of Medical Imaging 4 031214). Through a consensus building exercise, the summit further concluded that the imaging optimisation goal goes beyond dose alone, and should include image quality as a means to include both the benefit and the safety of the exam. The present, second report details the deliberation of the summit on imaging optimisation.

  9. Family history and risk of endometrial cancer: a systematic review and meta-analysis.

    PubMed

    Win, Aung Ko; Reece, Jeanette C; Ryan, Shae

    2015-01-01

    To obtain precise estimates of endometrial cancer risk associated with a family history of endometrial cancer or cancers at other sites. For the systematic review, we used PubMed to search for all relevant studies on family history and endometrial cancer that were published before December 2013. Medical Subject Heading terms "endometrial neoplasm" and "uterine neoplasm" were used in combination with one of the key phrases "family history," "first-degree," "familial risk," "aggregation," or "relatedness." Studies were included if they were case-control or cohort studies that investigated the association between a family history of cancer specified to site and endometrial cancer. Studies were excluded if they were review or editorial articles or not translated into English or did not define family history clearly or used spouses as control participants. We included 16 studies containing 3,871 women as cases and 49,475 women as controls from 10 case-control studies and 33,510 women as cases from six cohort studies. We conducted meta-analyses to estimate the pooled relative risk (95% confidence interval [CI]) of endometrial cancer associated with a first-degree family history of endometrial, colorectal, breast, ovarian, and cervical cancer to be: 1.82 (1.65-1.98), 1.17 (1.03-1.31), 0.96 (0.88-1.04), 1.13 (0.85-1.41), and 1.19 (0.83-1.55), respectively. We estimated cumulative risk of endometrial cancer to age 70 years to be 3.1% (95% CI 2.8-3.4) for women with a first-degree relative with endometrial cancer and the population-attributable risk to be 3.5% (95% CI 2.8-4.2). Women with a first-degree family history of endometrial cancer or colorectal cancer have a higher risk of developing endometrial cancer than those without a family history. This study is likely to be of clinical relevance to inform women of their risk of endometrial cancer.
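
    The pooled relative risks quoted above come from standard meta-analytic pooling; a minimal fixed-effect, inverse-variance version of that calculation on the log scale is sketched below. The three study results in the snippet are invented for illustration and are not the studies included in this review.

```python
# Fixed-effect inverse-variance pooling of relative risks on the log scale.
# The three study results are hypothetical, not the studies in this review.
import math

studies = [            # (RR, lower 95% CI, upper 95% CI) -- made-up values
    (1.9, 1.4, 2.6),
    (1.6, 1.1, 2.3),
    (2.1, 1.5, 2.9),
]

log_rr, weights = [], []
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)    # SE recovered from the CI width
    log_rr.append(math.log(rr))
    weights.append(1.0 / se ** 2)                      # inverse-variance weight

pooled = sum(w * x for w, x in zip(weights, log_rr)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
lo, hi = (math.exp(pooled - 1.96 * pooled_se), math.exp(pooled + 1.96 * pooled_se))
print(f"pooled RR = {math.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```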

  10. Precision and relative effectiveness of a purse seine for sampling age-0 river herring in lakes

    USGS Publications Warehouse

    Devine, Matthew T.; Roy, Allison; Whiteley, Andrew R.; Gahagan, Benjamin I.; Armstrong, Michael P.; Jordaan, Adrian

    2018-01-01

    Stock assessments for anadromous river herring, collectively Alewife Alosa pseudoharengus and Blueback Herring A. aestivalis, lack adequate demographic information, particularly with respect to early life stages. Although sampling adult river herring is increasingly common throughout their range, currently no standardized, field‐based, analytical methods exist for estimating juvenile abundance in freshwater lakes. The objective of this research was to evaluate the relative effectiveness and sampling precision of a purse seine for estimating densities of age‐0 river herring in freshwater lakes. We used a purse seine to sample age‐0 river herring in June–September 2015 and June–July 2016 in 16 coastal freshwater lakes in the northeastern USA. Sampling effort varied from two seine hauls to more than 50 seine hauls per lake. Catch rates were highest in June and July, and sampling precision was maximized in July. Sampling at night (versus day) in open water (versus littoral areas) was most effective for capturing newly hatched larvae and juveniles up to ca. 100 mm TL. Bootstrap simulation results indicated that sampling precision of CPUE estimates increased with sampling effort, and there was a clear threshold beyond which increased effort resulted in negligible increases in precision. The effort required to produce precise CPUE estimates, as determined by the CV, was dependent on lake size; river herring densities could be estimated with up to 10 purse‐seine hauls (one‐two nights) in a small lake (<50 ha) and 15–20 hauls (two‐three nights) in a large lake (>50 ha). Fish collection techniques using a purse seine as described in this paper are likely to be effective for estimating recruit abundance of river herring in freshwater lakes across their range.
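
    The bootstrap logic used above to relate sampling effort to CPUE precision can be sketched in a few lines, as below. The simulated catch-per-haul data and the negative-binomial catch process are assumptions; the point is only the diminishing gain in precision (CV) as the number of hauls grows.

```python
# Bootstrap estimate of how the CV of mean CPUE shrinks with the number of hauls.
# The catch-per-haul data are simulated; all values are assumptions.
import numpy as np

rng = np.random.default_rng(7)
# Overdispersed catches: many small hauls, occasional large schools (assumed process).
observed_hauls = rng.negative_binomial(n=1.2, p=0.02, size=40)

def bootstrap_cv(hauls, n_hauls, reps=2000):
    """CV (%) of mean CPUE if only n_hauls hauls are taken, by resampling with replacement."""
    means = np.array([rng.choice(hauls, size=n_hauls, replace=True).mean()
                      for _ in range(reps)])
    return 100.0 * means.std() / means.mean()

for effort in (2, 5, 10, 15, 20, 30):
    print(f"{effort:>2} hauls -> CV of CPUE ~ {bootstrap_cv(observed_hauls, effort):.0f}%")
```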

  11. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction

    PubMed Central

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-01-01

    This paper describes a disturbance acceleration adaptive estimate and correction approach for an attitude reference system (ARS) so as to improve the attitude estimate precision under vehicle movement conditions. The proposed approach depends on a Kalman filter, where the attitude error, the gyroscope zero offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model in different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and then the attitude estimate precision is improved. The filter was tested in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration mode, respectively) by digital simulation. Moreover, the proposed approach was tested in a kinematic vehicle experiment as well. Using the designed simulations and kinematic vehicle experiments, it has been shown that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with the complementary filter, the experimental results have explicitly demonstrated the proposed approach further improves the attitude estimate precision under vehicle movement conditions. PMID:27754469

  12. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction.

    PubMed

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-10-16

    This paper describes a disturbance acceleration adaptive estimate and correction approach for an attitude reference system (ARS) so as to improve the attitude estimate precision under vehicle movement conditions. The proposed approach depends on a Kalman filter, where the attitude error, the gyroscope zero offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model in different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and then the attitude estimate precision is improved. The filter was tested in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration mode, respectively) by digital simulation. Moreover, the proposed approach was tested in a kinematic vehicle experiment as well. Using the designed simulations and kinematic vehicle experiments, it has been shown that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with the complementary filter, the experimental results have explicitly demonstrated the proposed approach further improves the attitude estimate precision under vehicle movement conditions.
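
    A deliberately simplified, scalar illustration of the switching idea is sketched below: a one-state Kalman filter tracks a disturbance acceleration modelled as a first-order Gauss-Markov process whose correlation time (and hence decay coefficient) is switched by a crude acceleration-mode test. It omits the attitude and gyro-bias states of the authors' ARS filter, and every tuning value is an assumption.

```python
# Scalar Kalman filter tracking a disturbance acceleration modelled as a
# first-order Gauss-Markov process whose decay coefficient is switched by a
# crude acceleration-mode test. Attitude and gyro-bias states are omitted;
# every tuning value is an assumption.
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.01, 3000
t = np.arange(n) * dt

# True disturbance: quiet, then a vibration burst, then sustained acceleration.
d_true = np.zeros(n)
vib = (t >= 10) & (t < 15)
d_true[vib] = 0.5 * np.sin(2 * np.pi * 10 * t[vib])
d_true[t >= 20] = 1.5
meas = d_true + 0.2 * rng.standard_normal(n)     # accelerometer residual (gravity removed)

R = 0.2 ** 2                     # measurement noise variance (assumed)
x, P = 0.0, 1.0                  # disturbance estimate and its variance
est = np.empty(n)
for k in range(n):
    # Mode detection on the raw residual: large magnitude -> accelerated mode.
    tau = 5.0 if abs(meas[k]) > 0.4 else 0.2     # correlation time switch (assumed)
    beta = np.exp(-dt / tau)                     # Gauss-Markov decay coefficient
    Q = (1.0 - beta ** 2) * 1.0                  # keeps steady-state variance ~1
    x, P = beta * x, beta ** 2 * P + Q           # predict
    K = P / (P + R)                              # Kalman gain
    x, P = x + K * (meas[k] - x), (1.0 - K) * P  # update
    est[k] = x

for label, mask in [("quiet", t < 10), ("vibration", vib), ("sustained", t >= 20)]:
    rmse = np.sqrt(np.mean((est[mask] - d_true[mask]) ** 2))
    print(f"{label:>9}: disturbance RMSE = {rmse:.3f} m/s^2")
```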

  13. Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults.

    PubMed

    Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen

    2016-07-01

    This study aims to provide a scientific basis for unifying the reference value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish an optimal weighted combination model. The optimal weighted combination model and the single models are verified and compared. The optimal weighted combination model reduces the prediction risk of any single model and improves prediction precision. A geographical distribution of the reference value of QT dispersion in Chinese adults was then mapped precisely using kriging methods. Once the geographical factors of a particular area are known, the reference value of QT dispersion for Chinese adults in that area can be estimated with the optimal weighted combination model, and the reference value anywhere in China can be read from the geographical distribution map.
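
    One common way to build such an optimal weighted combination is to weight each model inversely to its historical squared error, so that the combined forecast has lower error variance than any single model when errors are roughly independent. The sketch below illustrates this with synthetic data and three stand-in models; it is not the paper's actual combination scheme, and in practice the weights would be estimated on held-out data rather than the evaluation data used here for brevity.

```python
# Inverse-MSE ("optimal") weighting of three stand-in forecasting models;
# the data and per-model error levels are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(11)
n = 200
truth = 50 + 5 * rng.standard_normal(n)           # quantity being predicted (assumed)
preds = np.stack([truth + rng.normal(0, 3.0, n),  # stand-in for the regression model
                  truth + rng.normal(0, 4.0, n),  # stand-in for the principal component model
                  truth + rng.normal(0, 2.5, n)]) # stand-in for the neural network model

mse = ((preds - truth) ** 2).mean(axis=1)         # per-model mean squared error
weights = (1.0 / mse) / (1.0 / mse).sum()         # inverse-MSE weights, summing to 1
combined = weights @ preds                        # weighted combination forecast

for name, p in zip(["model 1", "model 2", "model 3"], preds):
    print(f"{name}: RMSE = {np.sqrt(((p - truth) ** 2).mean()):.2f}")
print(f"weights = {np.round(weights, 2)}")
print(f"combined: RMSE = {np.sqrt(((combined - truth) ** 2).mean()):.2f}")
```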

  14. [Evaluation of the reliability of freight elevator operators].

    PubMed

    Gosk, A; Borodulin-Nadzieja, L; Janocha, A; Salomon, E

    1991-01-01

    The study involved 58 workers employed at winding machines. Their reliability was estimated from the results of psychomotor test precision, the condition of the vegetative nervous system, and the results of psychological tests. The tests were carried out at the laboratory and at the workplaces, with all distracting factors and the functional connections of the work process present. We found that the reliability of the workers may be affected by a variety of factors. Among the winding machine operators, work monotony can lead to "monotony syndrome". Among the signalists, the awareness of great responsibility can lead to unpredictable and inadequate reactions. In both groups, persons displaying lower-than-average precision were identified. All those persons demonstrated a reckless attitude, and the opinion of their superiors about them was poor. Those persons constitute a potential risk to the reliable operation of the discussed team.

  15. Genetic polymorphisms of xeroderma pigmentosum group D gene Asp312Asn and Lys751Gln and susceptibility to prostate cancer: a systematic review and meta-analysis.

    PubMed

    Ma, Qingtong; Qi, Can; Tie, Chong; Guo, Zhanjun

    2013-11-10

    Many studies have examined the association of xeroderma pigmentosum group D (XPD) polymorphisms with prostate cancer risk, but the results remain controversial. To derive a more precise estimation of the relationship, a meta-analysis was performed. Odds ratios (ORs) with 95% confidence intervals (CIs) were estimated to assess the association between XPD Asp312Asn and Lys751Gln polymorphisms and prostate cancer risk. A total of 8 studies including 2620 cases and 3225 controls described Asp312Asn genotypes, and 10 articles involving 3230 cases and 3582 controls described Lys751Gln genotypes; both sets were included in this meta-analysis. When all the eligible studies were pooled into this meta-analysis, a significant association between prostate cancer risk and the XPD Asp312Asn polymorphism was found. For the Asp312Asn polymorphism, in the stratified analysis by ethnicity and source of controls, an association with prostate cancer risk was observed in co-dominant, dominant and recessive models, while no evidence of any associations of the XPD Lys751Gln polymorphism with prostate cancer was found in the overall or subgroup analyses. Based on currently available evidence, our meta-analysis supports a contribution of the XPD Asp312Asn polymorphism to prostate cancer risk. However, a study with a larger sample size is needed to further evaluate gene-environment interaction on XPD Asp312Asn and Lys751Gln polymorphisms and prostate cancer risk. © 2013.

  16. Congenital Toxoplasmosis: A Plea for a Neglected Disease

    PubMed Central

    Wallon, Martine; Peyron, François

    2018-01-01

    Maternal infection by Toxoplasma gondii during pregnancy may have serious consequences for the fetus, ranging from miscarriage, central nervous system involvement, and retinochoroiditis to subclinical infection at birth with a risk of late-onset ocular disease. As infection in pregnant women is usually symptomless, the diagnosis relies only on serological tests. Some countries, such as France and Austria, have organized regular serological testing of pregnant women, whereas others have no prenatal surveillance program. Reasons for these discrepant attitudes are many and debatable. Among them are the efficacy of antenatal treatment and the cost-effectiveness of such a program. A significant body of data demonstrates that rapid onset of treatment after maternal infection reduces the risk and severity of fetal infection. Recent cost-effectiveness studies support regular screening. This lack of consensus puts both pregnant women and care providers in a difficult situation. Another reason why congenital toxoplasmosis is disregarded in some countries is the lack of precise information about its impact on the population. Precise estimates of the burden of the disease can be achieved by systematic screening, which will avoid bias or underreporting of cases and provide a clear view of its outcome. PMID:29473896

  17. Is using multiple imputation better than complete case analysis for estimating a prevalence (risk) difference in randomized controlled trials when binary outcome observations are missing?

    PubMed

    Mukaka, Mavuto; White, Sarah A; Terlouw, Dianne J; Mwapasa, Victor; Kalilani-Phiri, Linda; Faragher, E Brian

    2016-07-22

    Missing outcomes can seriously impair the ability to make correct inferences from randomized controlled trials (RCTs). Complete case (CC) analysis is commonly used, but it reduces sample size and is perceived to lead to reduced statistical efficiency of estimates while increasing the potential for bias. As multiple imputation (MI) methods preserve sample size, they are generally viewed as the preferred analytical approach. We examined this assumption, comparing the performance of CC and MI methods to determine risk difference (RD) estimates in the presence of missing binary outcomes. We conducted simulation studies of 5000 simulated data sets with 50 imputations of RCTs with one primary follow-up endpoint at different underlying levels of RD (3-25 %) and missing outcomes (5-30 %). For missing at random (MAR) or missing completely at random (MCAR) outcomes, CC method estimates generally remained unbiased and achieved precision similar to or better than MI methods, and high statistical coverage. Missing not at random (MNAR) scenarios yielded invalid inferences with both methods. Effect size estimate bias was reduced in MI methods by always including group membership even if this was unrelated to missingness. Surprisingly, under MAR and MCAR conditions in the assessed scenarios, MI offered no statistical advantage over CC methods. While MI must inherently accompany CC methods for intention-to-treat analyses, these findings endorse CC methods for per protocol risk difference analyses in these conditions. These findings provide an argument for the use of the CC approach to always complement MI analyses, with the usual caveat that the validity of the mechanism for missingness be thoroughly discussed. More importantly, researchers should strive to collect as much data as possible.
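
    A toy version of the complete-case arm of this comparison is sketched below: binary outcomes in a two-arm trial are made missing completely at random and the complete-case risk difference is computed over many replicates, illustrating why CC can remain essentially unbiased under MCAR. The sample sizes, event rates, and missingness rate are assumptions, and the multiple-imputation arm of the comparison is omitted for brevity.

```python
# Complete-case risk-difference estimate with binary outcomes missing
# completely at random (MCAR); all trial parameters are assumptions.
import numpy as np

rng = np.random.default_rng(5)
n_per_arm, p_control, true_rd, p_missing, reps = 500, 0.20, 0.10, 0.20, 2000

cc_estimates = []
for _ in range(reps):
    y0 = rng.binomial(1, p_control, n_per_arm)             # control outcomes
    y1 = rng.binomial(1, p_control + true_rd, n_per_arm)   # treatment outcomes
    keep0 = rng.random(n_per_arm) > p_missing              # MCAR observation indicators
    keep1 = rng.random(n_per_arm) > p_missing
    cc_estimates.append(y1[keep1].mean() - y0[keep0].mean())

cc = np.array(cc_estimates)
print(f"true RD = {true_rd:.3f}; complete-case mean = {cc.mean():.3f}, SD = {cc.std():.3f}")
```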

  18. ESTIMATING SAMPLE REQUIREMENTS FOR FIELD EVALUATIONS OF PESTICIDE LEACHING

    EPA Science Inventory

    A method is presented for estimating the number of samples needed to evaluate pesticide leaching threats to ground water at a desired level of precision. Sample size projections are based on desired precision (exhibited as relative tolerable error), level of confidence (90 or 95%...

  19. ANALYTICAL METHOD COMPARISONS BY ESTIMATES OF PRECISION AND LOWER DETECTION LIMIT

    EPA Science Inventory

    The paper describes the use of principal component analysis to estimate the operating precision of several different analytical instruments or methods simultaneously measuring a common sample of a material whose actual value is unknown. This approach is advantageous when none of ...

  20. Evaluation of the procedure 1A component of the 1980 US/Canada wheat and barley exploratory experiment

    NASA Technical Reports Server (NTRS)

    Chapman, G. M. (Principal Investigator); Carnes, J. G.

    1981-01-01

    Several techniques which use clusters generated by a new clustering algorithm, CLASSY, are proposed as alternatives to random sampling to obtain greater precision in crop proportion estimation: (1) Proportional Allocation/relative count estimator (PA/RCE) uses proportional allocation of dots to clusters on the basis of cluster size and a relative count cluster level estimate; (2) Proportional Allocation/Bayes Estimator (PA/BE) uses proportional allocation of dots to clusters and a Bayesian cluster-level estimate; and (3) Bayes Sequential Allocation/Bayesian Estimator (BSA/BE) uses sequential allocation of dots to clusters and a Bayesian cluster level estimate. Clustering is an effective method for making proportion estimates. It is estimated that, to obtain the same precision with random sampling as obtained by the proportional sampling of 50 dots with an unbiased estimator, samples of 85 or 166 would need to be taken if dot sets with AI labels (integrated procedure) or ground truth labels, respectively, were input. Dot reallocation provides dot sets that are unbiased. It is recommended that these proportion estimation techniques be maintained, particularly the PA/BE because it provides the greatest precision.

  1. Weighted linear least squares estimation of diffusion MRI parameters: strengths, limitations, and pitfalls.

    PubMed

    Veraart, Jelle; Sijbers, Jan; Sunaert, Stefan; Leemans, Alexander; Jeurissen, Ben

    2013-11-01

    Linear least squares estimators are widely used in diffusion MRI for the estimation of diffusion parameters. Although adding proper weights is necessary to increase the precision of these linear estimators, there is no consensus on how to practically define them. In this study, the impact of the commonly used weighting strategies on the accuracy and precision of linear diffusion parameter estimators is evaluated and compared with the nonlinear least squares estimation approach. Simulation and real data experiments were done to study the performance of the weighted linear least squares estimators with weights defined by (a) the squares of the respective noisy diffusion-weighted signals; and (b) the squares of the predicted signals, which are reconstructed from a previous estimate of the diffusion model parameters. The negative effect of weighting strategy (a) on the accuracy of the estimator was surprisingly high. Multi-step weighting strategies yield better performance and, in some cases, even outperformed the nonlinear least squares estimator. If proper weighting strategies are applied, the weighted linear least squares approach shows high performance characteristics in terms of accuracy/precision and may even be preferred over nonlinear estimation methods. Copyright © 2013 Elsevier Inc. All rights reserved.
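
    The two weighting strategies can be illustrated with a mono-exponential diffusion decay (a single apparent diffusion coefficient) rather than a full tensor model, as in the sketch below: weights from the squared noisy signals versus weights from the squared predicted signals of a first-pass fit. The b-values, signal level, and noise standard deviation are assumed for illustration.

```python
# Weighted linear least squares for a mono-exponential diffusion decay, with
# weights from (a) squared noisy signals and (b) squared predicted signals.
# Acquisition settings and noise level are assumed.
import numpy as np

rng = np.random.default_rng(2)
b = np.array([0., 0., 200., 500., 800., 1000., 1000., 1000.])   # b-values, s/mm^2 (assumed)
S0, D_TRUE, SIGMA = 1000.0, 1.0e-3, 30.0                        # assumed values
X = np.column_stack([np.ones_like(b), -b])                      # design for ln S = ln S0 - b*D

def wlls(y_log, w):
    """Weighted linear least squares solve for [ln S0, D]."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y_log)

d_noisy_w, d_pred_w = [], []
for _ in range(5000):
    s = S0 * np.exp(-b * D_TRUE) + SIGMA * rng.standard_normal(b.size)
    y = np.log(s)
    d_noisy_w.append(wlls(y, s ** 2)[1])          # (a) weights = squared noisy signals
    beta0 = wlls(y, np.ones_like(s))              # unweighted first-pass fit
    s_pred = np.exp(X @ beta0)
    d_pred_w.append(wlls(y, s_pred ** 2)[1])      # (b) weights = squared predicted signals

for name, d in [("noisy-signal weights", d_noisy_w), ("predicted-signal weights", d_pred_w)]:
    d = np.array(d)
    print(f"{name:>25}: bias = {d.mean() - D_TRUE:+.2e}, SD = {d.std():.2e} mm^2/s")
```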

  2. Effects of lidar pulse density and sample size on a model-assisted approach to estimate forest inventory variables

    Treesearch

    Jacob Strunk; Hailemariam Temesgen; Hans-Erik Andersen; James P. Flewelling; Lisa Madsen

    2012-01-01

    Using lidar in an area-based model-assisted approach to forest inventory has the potential to increase estimation precision for some forest inventory variables. This study documents the bias and precision of a model-assisted (regression estimation) approach to forest inventory with lidar-derived auxiliary variables relative to lidar pulse density and the number of...

  3. Big Data’s Role in Precision Public Health

    PubMed Central

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts. PMID:29594091

  4. Big Data's Role in Precision Public Health.

    PubMed

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.

  5. Discriminatory power of common genetic variants in personalized breast cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Wu, Yirong; Abbey, Craig K.; Liu, Jie; Ong, Irene; Peissig, Peggy; Onitilo, Adedayo A.; Fan, Jun; Yuan, Ming; Burnside, Elizabeth S.

    2016-03-01

    Technology advances in genome-wide association studies (GWAS) have engendered optimism that we have entered a new age of precision medicine, in which the risk of breast cancer can be predicted on the basis of a person's genetic variants. The goal of this study is to evaluate the discriminatory power of common genetic variants in breast cancer risk estimation. We conducted a retrospective case-control study drawing from an existing personalized medicine data repository. We collected variables that predict breast cancer risk: 153 high-frequency/low-penetrance genetic variants, reflecting the state-of-the-art GWAS on breast cancer, mammography descriptors and BI-RADS assessment categories in the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We trained and tested naïve Bayes models by using these predictive variables. We generated ROC curves and used the area under the ROC curve (AUC) to quantify predictive performance. We found that genetic variants achieved comparable predictive performance to BI-RADS assessment categories in terms of AUC (0.650 vs. 0.659, p-value = 0.742), but significantly lower predictive performance than the combination of BI-RADS assessment categories and mammography descriptors (0.650 vs. 0.751, p-value < 0.001). A better understanding of the relative predictive capability of genetic variants and mammography data may help clinicians and patients make appropriate decisions about breast cancer screening, prevention, and treatment in the era of precision medicine.
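
    A minimal version of the modelling step described above, a naive Bayes classifier over binary predictors evaluated by AUC, is sketched below on synthetic variant-like data. The number of variants, their frequencies, and the effect sizes are assumptions; only the generic workflow (train, score, compute AUC) resembles the study.

```python
# Naive Bayes over binary variant-like predictors, evaluated by ROC AUC.
# The variant frequencies and effect sizes are synthetic assumptions.
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n, n_variants = 4000, 50
freq = rng.uniform(0.1, 0.5, n_variants)                     # carrier frequencies (assumed)
X = (rng.random((n, n_variants)) < freq).astype(int)         # binary "variant" matrix
log_odds = -2.0 + X @ rng.normal(0.08, 0.05, n_variants)     # many small effects (assumed)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-log_odds))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = BernoulliNB().fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
print(f"AUC = {roc_auc_score(y_te, scores):.3f}")            # typically modest, ~0.6
```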

  6. Integration of a radiation biomarker into modeling of thyroid carcinogenesis and post-Chernobyl risk assessment.

    PubMed

    Kaiser, Jan Christian; Meckbach, Reinhard; Eidemüller, Markus; Selmansberger, Martin; Unger, Kristian; Shpak, Viktor; Blettner, Maria; Zitzelsberger, Horst; Jacob, Peter

    2016-12-01

    Strong evidence for the statistical association between radiation exposure and disease has been produced for thyroid cancer by epidemiological studies after the Chernobyl accident. However, the limitations of the epidemiological approach for exploring health risks, especially at low doses of radiation, are obvious. Statistical fluctuations due to small case numbers dominate the uncertainty of risk estimates. Molecular radiation markers have been sought extensively to separate radiation-induced cancer cases from sporadic cases. The overexpression of the CLIP2 gene is the most promising of these markers. It was found in the majority of papillary thyroid cancers (PTCs) from young patients included in the Chernobyl tissue bank. Motivated by the CLIP2 findings, we propose a mechanistic model which describes PTC development as a sequence of rate-limiting events in two distinct paths of CLIP2-associated and multistage carcinogenesis. It integrates molecular measurements of the dichotomous CLIP2 marker from 141 patients into the epidemiological risk analysis for about 13 000 subjects from the Ukrainian-American cohort who were exposed before age 19 years and who have been under enhanced medical surveillance since 1998. For the first time, a radiation risk has been estimated solely from marker measurements. Cross checking with epidemiological estimates and model validation suggests that CLIP2 is a marker of high precision. CLIP2 leaves an imprint in the epidemiological incidence data which is typical for a driver gene. With the mechanistic model, we explore the impact of radiation on the molecular landscape of PTC. The model constitutes a unique interface between molecular biology and radiation epidemiology. © The Author 2016. Published by Oxford University Press.

  7. Influence of sectioning location on age estimates from common carp dorsal spines

    USGS Publications Warehouse

    Watkins, Carson J.; Klein, Zachary B.; Terrazas, Marc M.; Quist, Michael C.

    2015-01-01

    Dorsal spines have been shown to provide precise age estimates for Common Carp Cyprinus carpio and are commonly used by management agencies to gain information on Common Carp populations. However, no previous studies have evaluated variation in the precision of age estimates obtained from different sectioning locations along Common Carp dorsal spines. We evaluated the precision, relative readability, and distribution of age estimates obtained from various sectioning locations along Common Carp dorsal spines. Dorsal spines from 192 Common Carp were sectioned at the base (section 1), immediately distal to the basal section (section 2), and at 25% (section 3), 50% (section 4), and 75% (section 5) of the total length of the dorsal spine. The exact agreement and within-1-year agreement among readers were highest and the coefficient of variation was lowest for section 2. In general, age estimates derived from sections 2 and 3 had similar age distributions and displayed the highest concordance in age estimates with section 1. Our results indicate that sections taken at ≤ 25% of the total length of the dorsal spine can be easily interpreted and provide precise estimates of Common Carp age. The greater consistency in age estimates obtained from section 2 indicates that by using a standard sectioning location, fisheries scientists can expect age-based estimates of population metrics to be more comparable and thus more useful for understanding Common Carp population dynamics.

  8. Assessing Interval Estimation Methods for Hill Model ...

    EPA Pesticide Factsheets

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model paramet
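
    Of the interval-estimation approaches mentioned, the bootstrap is the easiest to sketch: fit the Hill model by least squares, resample the concentration-response points, refit, and take percentile limits for the potency parameter. The simulated data, parameter values, and noise model below are assumptions, not ToxCast data.

```python
# Least-squares Hill model fit with a nonparametric bootstrap percentile
# interval for the AC50; the concentration-response data are simulated.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, log_ac50, n_h):
    """Hill concentration-response: response = top / (1 + (AC50/c)^n)."""
    return top / (1.0 + (10.0 ** log_ac50 / c) ** n_h)

rng = np.random.default_rng(9)
conc = np.tile(np.logspace(-3, 2, 6), 3)                   # uM, 3 replicates (assumed)
y = hill(conc, top=100.0, log_ac50=0.0, n_h=1.2) + rng.normal(0, 8.0, conc.size)

p0 = [y.max(), np.log10(np.median(conc)), 1.0]
popt, _ = curve_fit(hill, conc, y, p0=p0, maxfev=10000)

boot_log_ac50 = []
for _ in range(1000):                                      # resample points, refit
    idx = rng.integers(0, conc.size, conc.size)
    try:
        pb, _ = curve_fit(hill, conc[idx], y[idx], p0=popt, maxfev=10000)
        boot_log_ac50.append(pb[1])
    except RuntimeError:
        continue                                           # skip non-converged resamples
lo, hi = np.percentile(boot_log_ac50, [2.5, 97.5])
print(f"AC50 = {10 ** popt[1]:.2f} uM (95% bootstrap CI {10 ** lo:.2f}-{10 ** hi:.2f} uM)")
```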

  9. Glomerular filtration rate estimated using creatinine, cystatin C or both markers and the risk of clinical events in HIV-infected individuals.

    PubMed

    Lucas, G M; Cozzi-Lepri, A; Wyatt, C M; Post, F A; Bormann, A M; Crum-Cianflone, N F; Ross, M J

    2014-02-01

    The accuracy and precision of glomerular filtration rate (GFR) estimating equations based on plasma creatinine (GFR(cr)), cystatin C (GFR(cys)) and the combination of these markers (GFR(cr-cys)) have recently been assessed in HIV-infected individuals. We assessed the associations of GFR, estimated by these three equations, with clinical events in HIV-infected individuals. We compared the associations of baseline GFR(cr), GFR(cys) and GFR(cr-cys) [using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations] with mortality, cardiovascular events (CVEs) and opportunistic diseases (ODs) in the Strategies for the Management of Antiretroviral Therapy (SMART) study. We used Cox proportional hazards models to estimate unadjusted and adjusted hazard ratios per standard deviation (SD) change in GFR. A total of 4614 subjects from the SMART trial with available baseline creatinine and cystatin C data were included in this analysis. Of these, 99 died, 111 had a CVE and 121 had an OD. GFR(cys) was weakly to moderately correlated with HIV RNA, CD4 cell count, high-sensitivity C-reactive protein, interleukin-6, and D-dimer, while GFR(cr) had little or no correlation with these factors. GFR(cys) had the strongest associations with the three clinical outcomes, followed closely by GFR(cr-cys), with GFR(cr) having the weakest associations with clinical outcomes. In a model adjusting for demographics, cardiovascular risk factors, HIV-related factors and inflammation markers, a 1-SD lower GFR(cys) was associated with a 55% [95% confidence interval (CI) 27-90%] increased risk of mortality, a 21% (95% CI 0-47%) increased risk of CVE, and a 22% (95% CI 0-48%) increased risk of OD. Of the three CKD-EPI GFR equations, GFR(cys) had the strongest associations with mortality, CVE and OD. © 2013 British HIV Association.

  10. Effects of female genital cutting on physical health outcomes: a systematic review and meta-analysis

    PubMed Central

    Berg, Rigmor C; Underland, Vigdis; Odgaard-Jensen, Jan; Fretheim, Atle; Vist, Gunn E

    2014-01-01

    Objective Worldwide, an estimated 125 million girls and women live with female genital mutilation/cutting (FGM/C). We aimed to systematically review the evidence for physical health risks associated with FGM/C. Design We searched 15 databases to identify studies (up to January 2012). Selection criteria were empirical studies reporting physical health outcomes from FGM/C, affecting females with any type of FGM/C, irrespective of ethnicity, nationality and age. Two review authors independently screened titles and abstracts, applied eligibility criteria, assessed methodological study quality and extracted full-text data. To derive overall risk estimates, we combined data from included studies using the Mantel-Haenszel method for unadjusted dichotomous data and the generic inverse-variance method for adjusted data. Outcomes that were sufficiently similar across studies and reasonably resistant to biases were aggregated in meta-analyses. We applied the instrument Grading of Recommendations Assessment, Development and Evaluation to assess the extent to which we have confidence in the effect estimates. Results Our search returned 5109 results, of which 185 studies (3.17 million women) satisfied the inclusion criteria. The risks of systematic and random errors were variable and we focused on key outcomes from the 57 studies with the best available evidence. The most common immediate complications were excessive bleeding, urine retention and genital tissue swelling. The most valid and statistically significant associations for the physical health sequelae of FGM/C were seen on urinary tract infections (unadjusted RR=3.01), bacterial vaginosis (adjusted OR (AOR)=1.68), dyspareunia (RR=1.53), prolonged labour (AOR=1.49), caesarean section (AOR=1.60), and difficult delivery (AOR=1.88). Conclusions While the precise estimation of the frequency and risk of immediate, gynaecological, sexual and obstetric complications is not possible, the results weigh against the continuation of FGM/C and support the diagnosis and management of girls and women suffering the physical risks of FGM/C. Trial registration number This study is registered with PROSPERO, number CRD42012003321. PMID:25416059

  11. Field design factors affecting the precision of ryegrass forage yield estimation

    USDA-ARS?s Scientific Manuscript database

    Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision and accuracy of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to ...

  12. Dose-response relationship between cigarette smoking and site-specific cancer risk: protocol for a systematic review with an original design combining umbrella and traditional reviews.

    PubMed

    Lugo, Alessandra; Bosetti, Cristina; Peveri, Giulia; Rota, Matteo; Bagnardi, Vincenzo; Gallus, Silvano

    2017-11-01

    Only a limited number of meta-analyses providing risk curve functions of dose-response relationships between various smoking-related variables and cancer-specific risk are available. To identify all relevant original publications on the issue, we will conduct a series of comprehensive systematic reviews based on three subsequent literature searches: (1) an umbrella review, to identify meta-analyses, pooled analyses and systematic reviews published before 28 April 2017 on the association between cigarette smoking and the risk of 28 (namely all) malignant neoplasms; (2) for each cancer site, an updated review of original publications on the association between cigarette smoking and cancer risk, starting from the last available comprehensive review identified through the umbrella review; and (3) a review of all original articles on the association between cigarette smoking and site-specific cancer risk included in the publications identified through the umbrella review and the updated reviews. The primary outcomes of interest will be (1) the excess incidence/mortality of various cancers for smokers compared with never smokers; and (2) the dose-response curves describing the association between smoking intensity, duration and time since stopping and incidence/mortality for various cancers. For each cancer site, we will perform a meta-analysis by pooling study-specific estimates for smoking status. We will also estimate the dose-response curves for other smoking-related variables through random-effects meta-regression models based on a non-linear dose-response relationship framework. Ethics approval is not required for this study. Main results will be published in peer-reviewed journals and will also be made available on a publicly accessible website. We will therefore provide the most complete and updated estimates of the association between various measures of cigarette smoking and site-specific cancer risk. This will allow us to obtain precise estimates of the cancer burden attributable to cigarette smoking. This protocol was registered in the International Prospective Register of Systematic Reviews (CRD42017063991). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  13. Hypothesis testing for band size detection of high-dimensional banded precision matrices.

    PubMed

    An, Baiguo; Guo, Jianhua; Liu, Yufeng

    2014-06-01

    Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, crossvalidation is commonly used; however, we show that crossvalidation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.

  14. The Challenges of Measuring Glycemic Variability

    PubMed Central

    Rodbard, David

    2012-01-01

    This commentary reviews several of the challenges encountered when attempting to quantify glycemic variability and correlate it with risk of diabetes complications. These challenges include (1) immaturity of the field, including problems of data accuracy, precision, reliability, cost, and availability; (2) larger relative error in the estimates of glycemic variability than in the estimates of the mean glucose; (3) high correlation between glycemic variability and mean glucose level; (4) multiplicity of measures; (5) correlation of the multiple measures; (6) duplication or reinvention of methods; (7) confusion of measures of glycemic variability with measures of quality of glycemic control; (8) the problem of multiple comparisons when assessing relationships among multiple measures of variability and multiple clinical end points; and (9) differing needs for routine clinical practice and clinical research applications. PMID:22768904

  15. Estimation of GFR in South Asians: A Study From the General Population in Pakistan

    PubMed Central

    Jessani, Saleem; Levey, Andrew S.; Bux, Rasool; Inker, Lesley A.; Islam, Muhammad; Chaturvedi, Nish; Mariat, Christophe; Schmid, Christopher H.; Jafar, Tazeen H.

    2015-01-01

    Background South Asians are at high risk for chronic kidney disease. However, unlike those in the United States and United Kingdom, laboratories in South Asian countries do not routinely report estimated glomerular filtration rate (eGFR) when serum creatinine is measured. The objectives of the study were to: (1) evaluate the performance of existing GFR estimating equations in South Asians, and (2) modify the existing equations or develop a new equation for use in this population. Study Design Cross-sectional population-based study. Setting & Participants 581 participants 40 years or older were enrolled from 10 randomly selected communities and renal clinics in Karachi. Predictors eGFR, age, sex, serum creatinine level. Outcomes Bias (the median difference between measured GFR [mGFR] and eGFR), precision (the IQR of the difference), accuracy (P30; percentage of participants with eGFR within 30% of mGFR), and the root mean squared error reported as cross-validated estimates along with bootstrapped 95% CIs based on 1,000 replications. Results The CKD-EPI (Chronic Kidney Disease Epidemiology Collaboration) creatinine equation performed better than the MDRD (Modification of Diet in Renal Disease) Study equation in terms of greater accuracy at P30 (76.1% [95% CI, 72.7%–79.5%] vs 68.0% [95% CI, 64.3%–71.7%]; P <0.001) and improved precision (IQR, 22.6 [95% CI, 19.9–25.3] vs 28.6 [95% CI, 25.8–31.5] mL/min/1.73 m2; P < 0.001). However, both equations overestimated mGFR. Applying modification factors for slope and intercept to the CKD-EPI equation to create a CKD-EPI Pakistan equation (such that eGFR_CKD-EPI(PK) = 0.686 × eGFR_CKD-EPI^1.059) in order to eliminate bias improved accuracy (P30, 81.6% [95% CI, 78.4%–84.8%]; P < 0.001) comparably to new estimating equations developed using creatinine level and additional variables. Limitations Lack of external validation data set and few participants with low GFR. Conclusions The CKD-EPI creatinine equation is more accurate and precise than the MDRD Study equation in estimating GFR in a South Asian population in Karachi. The CKD-EPI Pakistan equation further improves the performance of the CKD-EPI equation in South Asians and could be used for eGFR reporting. PMID:24074822
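
    The recalibration and the performance metrics defined above (median bias, IQR precision, and P30 accuracy) are easy to express in code; a sketch on synthetic measured/estimated GFR pairs follows. The 0.686 and 1.059 constants are those quoted in the abstract, while the synthetic data and the degree of overestimation are assumptions.

```python
# Recalibration factors quoted above applied to synthetic eGFR values, with
# bias (median difference), precision (IQR) and P30 accuracy computed as defined.
import numpy as np

rng = np.random.default_rng(6)
m_gfr = rng.uniform(40, 120, 300)                          # measured GFR (synthetic)
# Synthetic CKD-EPI estimates that systematically overestimate mGFR (assumed).
e_gfr = 1.25 * m_gfr ** 0.97 * np.exp(rng.normal(0, 0.15, m_gfr.size))

def performance(m, e):
    diff = m - e
    bias = np.median(diff)                                 # median difference
    iqr = np.subtract(*np.percentile(diff, [75, 25]))      # precision: IQR of differences
    p30 = 100.0 * np.mean(np.abs(e - m) / m <= 0.30)       # % of eGFR within 30% of mGFR
    return bias, iqr, p30

e_gfr_pk = 0.686 * e_gfr ** 1.059                          # CKD-EPI Pakistan rescaling

for label, e in [("CKD-EPI", e_gfr), ("CKD-EPI (PK)", e_gfr_pk)]:
    bias, iqr, p30 = performance(m_gfr, e)
    print(f"{label:>12}: bias = {bias:6.1f}, IQR = {iqr:5.1f}, P30 = {p30:5.1f}%")
```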

  16. Spatial variability effects on precision and power of forage yield estimation

    USDA-ARS?s Scientific Manuscript database

    Spatial analyses of yield trials are important, as they adjust cultivar means for spatial variation and improve the statistical precision of yield estimation. While the relative efficiency of spatial analysis has been frequently reported in several yield trials, its application on long-term forage y...

  17. Large-scale model-based assessment of deer-vehicle collision risk.

    PubMed

    Hothorn, Torsten; Brandl, Roland; Müller, Jörg

    2012-01-01

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining hunting quota. Open-source software implementing the model can be used to transfer our modelling approach to wildlife-vehicle collisions elsewhere.

  18. Large-Scale Model-Based Assessment of Deer-Vehicle Collision Risk

    PubMed Central

    Hothorn, Torsten; Brandl, Roland; Müller, Jörg

    2012-01-01

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer–vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on 74,000 deer–vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer–vehicle collisions and to investigate the relationship between deer–vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer–vehicle collisions, which allows nonlinear environment–deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new “deer–vehicle collision index” for deer management. We show that the risk of deer–vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer–vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer–vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining hunting quota. Open-source software implementing the model can be used to transfer our modelling approach to wildlife–vehicle collisions elsewhere. PMID:22359535

  19. Accuracy and precision of estimating age of gray wolves by tooth wear

    USGS Publications Warehouse

    Gipson, P.S.; Ballard, W.B.; Nowak, R.M.; Mech, L.D.

    2000-01-01

    We evaluated the accuracy and precision of tooth wear for aging gray wolves (Canis lupus) from Alaska, Minnesota, and Ontario based on 47 known-age or known-minimum-age skulls. Estimates of age using tooth wear and a commercial cementum annuli-aging service were useful for wolves up to 14 years old. The precision of estimates from cementum annuli was greater than estimates from tooth wear, but tooth wear estimates are more applicable in the field. We tended to overestimate age by 1-2 years and occasionally by 3 or 4 years. The commercial service aged young wolves with cementum annuli to within ±1 year of actual age, but underestimated ages of wolves ≥9 years old by 1-3 years. No differences were detected in tooth wear patterns for wild wolves from Alaska, Minnesota, and Ontario, nor between captive and wild wolves. Tooth wear was not appropriate for aging wolves with an underbite that prevented normal wear or severely broken and missing teeth.

  20. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear model was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
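
    A minimal example of a bias-corrected, transformed-linear rating curve is sketched below, using Duan's smearing estimator as one common choice of retransformation bias correction (the study's own correction may differ). The discharge and concentration data are synthetic; the comparison simply shows how the naive back-transform understates the mean load relative to the corrected curve.

```python
# Log-log sediment rating curve with Duan's smearing estimator as the
# retransformation bias correction; all data are synthetic.
import numpy as np

rng = np.random.default_rng(8)
q = 10.0 ** rng.uniform(1, 3, 500)                    # daily discharge, m^3/s (assumed)
c_true = 0.05 * q ** 1.4                              # underlying rating, mg/L (assumed)
c_obs = c_true * np.exp(rng.normal(0, 0.5, q.size))   # lognormal scatter about the rating

# Fit ln(C) = a + b*ln(Q) by least squares on a "sampled" subset of days.
sample = rng.choice(q.size, 60, replace=False)
X = np.column_stack([np.ones(sample.size), np.log(q[sample])])
coef, *_ = np.linalg.lstsq(X, np.log(c_obs[sample]), rcond=None)
resid = np.log(c_obs[sample]) - X @ coef

c_naive = np.exp(coef[0] + coef[1] * np.log(q))       # naive back-transform
c_duan = c_naive * np.mean(np.exp(resid))             # smearing bias correction

load_true = np.mean(c_obs * q)                        # reference mean load (arbitrary units)
print(f"naive back-transform : {np.mean(c_naive * q) / load_true:5.2f} x reference mean load")
print(f"smearing-corrected   : {np.mean(c_duan * q) / load_true:5.2f} x reference mean load")
```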

  1. Experimental Estimation of Entanglement at the Quantum Limit

    NASA Astrophysics Data System (ADS)

    Brida, Giorgio; Degiovanni, Ivo Pietro; Florio, Angela; Genovese, Marco; Giorda, Paolo; Meda, Alice; Paris, Matteo G. A.; Shurupov, Alexander

    2010-03-01

    Entanglement is the central resource of quantum information processing, and the precise characterization of entangled states is a crucial issue for the development of quantum technologies. This leads to the necessity of a precise, experimentally feasible measure of entanglement. Nevertheless, such measurements are limited both by experimental uncertainties and by intrinsic quantum bounds. Here we present an experiment where the amount of entanglement of a family of two-qubit mixed photon states is estimated with the ultimate precision allowed by quantum mechanics.

  2. Improving regression-model-based streamwater constituent load estimates derived from serially correlated data

    USGS Publications Warehouse

    Aulenbach, Brent T.

    2013-01-01

    A regression-model-based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads: the Adjusted Maximum Likelihood Estimator (AMLE) and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the model’s calibration period time scale, precisions were progressively worse at shorter reporting periods, from annually to monthly. Serial correlation in model residuals resulted in the observed AMLE precision being significantly worse than the model-calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of 15 days or shorter when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and when using the shortest model calibration periods, such that the regression models better fit the temporal changes in the concentration–discharge relationship. The models with the largest errors typically had poor high-flow sampling coverage, resulting in unrepresentative models. Increasing sampling frequency and/or targeted high-flow sampling are more efficient approaches to ensure sufficient sampling and to avoid poorly performing models than increasing calibration period length.

  3. Breast cancer risks and risk prediction models.

    PubMed

    Engel, Christoph; Fischer, Christine

    2015-02-01

    BRCA1/2 mutation carriers have a considerably increased risk of developing breast and ovarian cancer. The personalized clinical management of carriers and other at-risk individuals depends on precise knowledge of the cancer risks. In this report, we give an overview of the present literature on empirical cancer risks, and we describe risk prediction models that are currently used for individual risk assessment in clinical practice. Cancer risks show large variability between studies. Breast cancer risks are 40-87% for BRCA1 mutation carriers and 18-88% for BRCA2 mutation carriers. For ovarian cancer, the risk estimates are in the range of 22-65% for BRCA1 and 10-35% for BRCA2. The contralateral breast cancer risk is high (10-year risk after first cancer 27% for BRCA1 and 19% for BRCA2). Risk prediction models have been proposed to provide more individualized risk prediction, using additional knowledge on family history, mode of inheritance of major genes, and other genetic and non-genetic risk factors. User-friendly software tools have been developed that serve as a basis for decision-making in family counseling units. In conclusion, further assessment of cancer risks and model validation is needed, ideally based on prospective cohort studies. To obtain such data, clinical management of carriers and other at-risk individuals should always be accompanied by standardized scientific documentation.

  4. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates

    PubMed Central

    Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke

    2016-01-01

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a ‘control’ and ‘treatment’ survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p > 0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28–9.28 leopards/100 km2) were considerably higher than estimates from spatially-explicit methods (3.40–3.65 leopards/100 km2). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that, at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816

  5. Precision matrix expansion - efficient use of numerical simulations in estimating errors on cosmological parameters

    NASA Astrophysics Data System (ADS)

    Friedrich, Oliver; Eifler, Tim

    2018-01-01

    Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading-order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A + B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >10^5 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case, finding similar performance.
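
    The expansion idea can be illustrated numerically: when C = A + B with A known analytically, a truncated Neumann-type series (A + B)^-1 ≈ A^-1 - A^-1 B A^-1 + A^-1 B A^-1 B A^-1 needs only an estimate of B from simulations. The toy sketch below demonstrates that second-order expansion; the matrix sizes, noise levels and simulation counts are arbitrary, and this is not the authors' estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    p = 20

    # A: analytically known part (e.g. shape noise); B: part estimated from sims.
    A = np.diag(np.linspace(1.0, 2.0, p))
    L = 0.1 * rng.standard_normal((p, p))
    B = L @ L.T                      # true "hard" contribution, positive definite

    C_true = A + B
    prec_true = np.linalg.inv(C_true)

    # Estimate B from a modest number of simulations in which A is switched off.
    n_sim = 200
    sims = rng.multivariate_normal(np.zeros(p), B, size=n_sim)
    B_hat = np.cov(sims, rowvar=False)

    A_inv = np.linalg.inv(A)
    # Second-order precision-matrix expansion around the model covariance A.
    prec_expansion = (A_inv
                      - A_inv @ B_hat @ A_inv
                      + A_inv @ B_hat @ A_inv @ B_hat @ A_inv)

    # Brute-force alternative: invert the noisy sample covariance A + B_hat directly.
    prec_brute = np.linalg.inv(A + B_hat)

    err_exp = np.linalg.norm(prec_expansion - prec_true) / np.linalg.norm(prec_true)
    err_brute = np.linalg.norm(prec_brute - prec_true) / np.linalg.norm(prec_true)
    print(f"relative error, expansion: {err_exp:.3f}; direct inversion: {err_brute:.3f}")
    ```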

  6. Effect of Correlated Precision Errors on Uncertainty of a Subsonic Venturi Calibration

    NASA Technical Reports Server (NTRS)

    Hudson, S. T.; Bordelon, W. J., Jr.; Coleman, H. W.

    1996-01-01

    An uncertainty analysis performed in conjunction with the calibration of a subsonic venturi for use in a turbine test facility produced some unanticipated results that may have a significant impact in a variety of test situations. Precision uncertainty estimates using the preferred propagation techniques in the applicable American National Standards Institute/American Society of Mechanical Engineers standards were an order of magnitude larger than precision uncertainty estimates calculated directly from a sample of results (discharge coefficient) obtained at the same experimental set point. The differences were attributable to the effect of correlated precision errors, which previously have been considered negligible. An analysis explaining this phenomenon is presented. The article is not meant to document the venturi calibration, but rather to give a real example of results where correlated precision terms are important. The significance of the correlated precision terms could apply to many test situations.
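
    The phenomenon described here, where propagating precision uncertainties as if they were independent disagrees with the scatter of the computed result, is easy to reproduce numerically. The sketch below uses a made-up two-variable result r = x/y whose inputs share a common error source; including the covariance term in the propagation recovers the directly observed scatter. It is a generic illustration, not the venturi calibration data.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 10_000

    # Two measurements sharing a common (correlated) error source, e.g. the same
    # transducer read twice, plus small independent noise.
    common = rng.normal(0.0, 0.05, n)
    x = 10.0 + common + rng.normal(0.0, 0.01, n)
    y = 5.0 + common + rng.normal(0.0, 0.01, n)

    r = x / y                        # the derived result (toy "discharge coefficient")

    # Propagation assuming independent precision errors.
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    dr_dx, dr_dy = 1.0 / y.mean(), -x.mean() / y.mean() ** 2
    u_indep = np.sqrt((dr_dx * sx) ** 2 + (dr_dy * sy) ** 2)

    # Propagation including the covariance (correlated precision) term.
    sxy = np.cov(x, y, ddof=1)[0, 1]
    u_corr = np.sqrt((dr_dx * sx) ** 2 + (dr_dy * sy) ** 2 + 2 * dr_dx * dr_dy * sxy)

    print(f"scatter of r directly:       {r.std(ddof=1):.5f}")
    print(f"propagated, ignoring corr.:  {u_indep:.5f}")
    print(f"propagated, with covariance: {u_corr:.5f}")
    ```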

  7. Familial Risks of Tourette Syndrome and Chronic Tic Disorders. A Population-Based Cohort Study.

    PubMed

    Mataix-Cols, David; Isomura, Kayoko; Pérez-Vigil, Ana; Chang, Zheng; Rück, Christian; Larsson, K Johan; Leckman, James F; Serlachius, Eva; Larsson, Henrik; Lichtenstein, Paul

    2015-08-01

    Tic disorders, including Tourette syndrome (TS) and chronic tic disorders (CTDs), are assumed to be strongly familial and heritable. Although gene-searching efforts are well under way, precise estimates of familial risk and heritability are lacking. Previous controlled family studies were small and typically conducted within specialist clinics, resulting in potential ascertainment biases. They were also underpowered to disentangle genetic from environmental factors that contribute to the observed familiality. Twin studies have been either very small or based on parent-reported tics in population-based (nonclinical) twin samples. To provide unbiased estimates of familial risk and heritability of tic disorders at the population level. In this population cohort, multigenerational family study, we used a validated algorithm to identify 4826 individuals diagnosed as having TS or CTDs (76.2% male) in the Swedish National Patient Register from January 1, 1969, through December 31, 2009. We studied risks for TS or CTDs in all biological relatives of probands compared with relatives of unaffected individuals (matched on a 1:10 ratio) from the general population. Structural equation modeling was used to estimate the heritability of tic disorders. The risk for tic disorders among relatives of probands with tic disorders increased proportionally to the degree of genetic relatedness. The risks for first-degree relatives (odds ratio [OR], 18.69; 95% CI, 14.53-24.05) were significantly higher than for second-degree relatives (OR, 4.58; 95% CI, 3.22-6.52) and third-degree relatives (OR, 3.07; 95% CI, 2.08-4.51). First-degree relatives at similar genetic distances (eg, parents, siblings, and offspring) had similar risks for tic disorders despite different degrees of shared environment. The risks for full siblings (50% genetic similarity; OR, 17.68; 95% CI, 12.90-24.23) were significantly higher than those for maternal half siblings (25% genetic similarity; OR, 4.41; 95% CI, 2.24-8.67) despite similar environmental exposures. The heritability of tic disorders was estimated to be 0.77 (95% CI, 0.70-0.85). There were no differences in familial risk or heritability between male and female patients. Tic disorders, including TS and CTDs, cluster in families primarily because of genetic factors and appear to be among the most heritable neuropsychiatric conditions.

  8. HIV and cancer registry linkage identifies a substantial burden of cancers in persons with HIV in India.

    PubMed

    Godbole, Sheela V; Nandy, Karabi; Gauniyal, Mansi; Nalawade, Pallavi; Sane, Suvarna; Koyande, Shravani; Toyama, Joy; Hegde, Asha; Virgo, Phil; Bhatia, Kishor; Paranjape, Ramesh S; Risbud, Arun R; Mbulaiteye, Sam M; Mitsuyasu, Ronald T

    2016-09-01

    We utilized computerized record-linkage methods to link HIV and cancer databases with limited unique identifiers in Pune, India, to determine feasibility of linkage and obtain preliminary estimates of cancer risk in persons living with HIV (PLHIV) as compared with the general population. Records of 32,575 PLHIV were linked to 31,754 Pune Cancer Registry records (1996-2008) using a probabilistic-matching algorithm. Cancer risk was estimated by calculating standardized incidence ratios (SIRs) in the early (4-27 months after HIV registration), late (28-60 months), and overall (4-60 months) incidence periods. Cancers diagnosed prior to or within 3 months of HIV registration were considered prevalent. Of 613 cancers linked to PLHIV, 188 were prevalent, 106 early incident, and 319 late incident. Incident cancers comprised 11.5% AIDS-defining cancers (ADCs), including cervical cancer and non-Hodgkin lymphoma (NHL), but not Kaposi sarcoma (KS), and 88.5% non-AIDS-defining cancers (NADCs). Risk for any incident cancer diagnosis in the early, late, and combined periods was significantly elevated among PLHIV (SIRs: 5.6 [95% CI 4.6-6.8], 17.7 [95% CI 15.8-19.8], and 11.5 [95% CI 10-12.6], respectively). Cervical cancer risk was elevated in both incidence periods (SIRs: 9.6 [95% CI 4.8-17.2] and 22.6 [95% CI 14.3-33.9], respectively), while NHL risk was elevated only in the late incidence period (SIR: 18.0 [95% CI 9.8-30.20]). Risks for NADCs were dramatically elevated (SIR > 100) for eye-orbit cancers; substantially elevated (SIR > 20) for all-mouth, esophagus, breast, unspecified-leukemia, colon-rectum-anus, and other/unspecified cancers; moderately elevated (SIR > 10) for salivary gland, penis, nasopharynx, and brain-nervous system cancers; and mildly elevated (SIR > 5) for stomach cancer. Risks for 6 NADCs (small intestine, testis, lymphocytic leukemia, prostate, ovary, and melanoma) were not elevated, and 5 cancers, including multiple myeloma, were not seen. Our study demonstrates the feasibility of using probabilistic record-linkage to study cancer and other comorbidities among PLHIV in India and provides preliminary population-based estimates of cancer risks in PLHIV in India. Our results, suggesting a potentially substantial burden and slightly different spectrum of cancers among PLHIV in India, support efforts to conduct multicenter linkage studies to obtain precise estimates and to monitor cancer risk in PLHIV in India.
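
    The standardized incidence ratios reported here are observed-to-expected count ratios, typically accompanied by an exact Poisson confidence interval. The snippet below sketches that calculation using the chi-square formulation of the exact Poisson limits, with invented counts rather than the study's data.

    ```python
    from scipy import stats

    def sir_with_ci(observed, expected, alpha=0.05):
        """Standardized incidence ratio O/E with an exact Poisson 95% CI,
        via the chi-square representation of the Poisson confidence limits."""
        lower = stats.chi2.ppf(alpha / 2, 2 * observed) / 2 if observed > 0 else 0.0
        upper = stats.chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
        return observed / expected, lower / expected, upper / expected

    # Hypothetical example: 25 cancers observed where 5.2 were expected from
    # general-population rates applied to the cohort's person-years.
    sir, lo, hi = sir_with_ci(25, 5.2)
    print(f"SIR = {sir:.1f} (95% CI {lo:.1f}-{hi:.1f})")
    ```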

  9. The Age-Specific Quantitative Effects of Metabolic Risk Factors on Cardiovascular Diseases and Diabetes: A Pooled Analysis

    PubMed Central

    Farzadfar, Farshad; Stevens, Gretchen A.; Woodward, Mark; Wormser, David; Kaptoge, Stephen; Whitlock, Gary; Qiao, Qing; Lewington, Sarah; Di Angelantonio, Emanuele; vander Hoorn, Stephen; Lawes, Carlene M. M.; Ali, Mohammed K.; Mozaffarian, Dariush; Ezzati, Majid

    2013-01-01

    Background The effects of systolic blood pressure (SBP), serum total cholesterol (TC), fasting plasma glucose (FPG), and body mass index (BMI) on the risk of cardiovascular diseases (CVD) have been established in epidemiological studies, but consistent estimates of effect sizes by age and sex are not available. Methods We reviewed large cohort pooling projects, evaluating effects of baseline or usual exposure to metabolic risks on ischemic heart disease (IHD), hypertensive heart disease (HHD), stroke, diabetes, and, as relevant, selected other CVDs, after adjusting for important confounders. We pooled all data to estimate relative risks (RRs) for each risk factor and examined effect modification by age or other factors, using random effects models. Results Across all risk factors, an average of 123 cohorts provided data on 1.4 million individuals and 52,000 CVD events. Each metabolic risk factor was robustly related to CVD. At the baseline age of 55–64 years, the RR for 10 mmHg higher SBP was largest for HHD (2.16; 95% CI 2.09–2.24), followed by effects on both stroke subtypes (1.66; 1.39–1.98 for hemorrhagic stroke and 1.63; 1.57–1.69 for ischemic stroke). In the same age group, RRs for 1 mmol/L higher TC were 1.44 (1.29–1.61) for IHD and 1.20 (1.15–1.25) for ischemic stroke. The RRs for 5 kg/m2 higher BMI for ages 55–64 ranged from 2.32 (2.04–2.63) for diabetes to 1.44 (1.40–1.48) for IHD. For 1 mmol/L higher FPG, RRs in this age group were 1.18 (1.08–1.29) for IHD and 1.14 (1.01–1.29) for total stroke. For all risk factors, proportional effects declined with age, were generally consistent by sex, and differed by region in only a few age groups for certain risk factor-disease pairs. Conclusion Our results provide robust, comparable and precise estimates of the effects of major metabolic risk factors on CVD and diabetes by age group. PMID:23935815
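
    Pooling cohort-specific relative risks with a random-effects model is usually done on the log scale with inverse-variance weights plus a between-study variance component. The sketch below implements the DerSimonian-Laird estimator on made-up log relative risks; it is illustrative only and not the pooling code used by these consortia.

    ```python
    import numpy as np

    def dersimonian_laird(log_rr, se):
        """Random-effects pooled estimate of log relative risks (DerSimonian-Laird)."""
        log_rr, se = np.asarray(log_rr), np.asarray(se)
        w = 1.0 / se**2                              # fixed-effect weights
        mu_fe = np.sum(w * log_rr) / np.sum(w)
        q = np.sum(w * (log_rr - mu_fe) ** 2)        # Cochran's Q
        k = len(log_rr)
        tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_re = 1.0 / (se**2 + tau2)                  # random-effects weights
        mu_re = np.sum(w_re * log_rr) / np.sum(w_re)
        se_re = np.sqrt(1.0 / np.sum(w_re))
        return mu_re, se_re, tau2

    # Hypothetical per-cohort RRs for 10 mmHg higher SBP and ischemic heart disease.
    rr = np.array([1.55, 1.70, 1.48, 1.62, 1.75])
    se = np.array([0.05, 0.08, 0.06, 0.07, 0.10])    # standard errors of log(RR)
    mu, se_mu, tau2 = dersimonian_laird(np.log(rr), se)
    lo, hi = np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu)
    print(f"pooled RR = {np.exp(mu):.2f} (95% CI {lo:.2f}-{hi:.2f}), tau^2 = {tau2:.4f}")
    ```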

  10. Multiparameter Estimation in Networked Quantum Sensors

    NASA Astrophysics Data System (ADS)

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    2018-02-01

    We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  11. Quantitative CT: technique dependence of volume estimation on pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan

    2012-03-01

    Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.

  12. Comparison of sampling methodologies for nutrient monitoring in streams: uncertainties, costs and implications for mitigation

    NASA Astrophysics Data System (ADS)

    Audet, J.; Martinsen, L.; Hasler, B.; de Jonge, H.; Karydi, E.; Ovesen, N. B.; Kronvang, B.

    2014-07-01

    Eutrophication of aquatic ecosystems caused by excess concentrations of nitrogen and phosphorus may have harmful consequences for biodiversity and poses a health risk to humans via the water supplies. Reduction of nitrogen and phosphorus losses to aquatic ecosystems involves implementation of costly measures, and reliable monitoring methods are therefore essential to select appropriate mitigation strategies and to evaluate their effects. Here, we compare the performances and costs of three methodologies for the monitoring of nutrients in rivers: grab sampling, time-proportional sampling and passive sampling using flow proportional samplers. Assuming time-proportional sampling to be the best estimate of the "true" nutrient load, our results showed that the risk of obtaining wrong total nutrient load estimates by passive samplers is high despite similar costs as the time-proportional sampling. Our conclusion is that for passive samplers to provide a reliable monitoring alternative, further development is needed. Grab sampling was the cheapest of the three methods and was more precise and accurate than passive sampling. We conclude that although monitoring employing time-proportional sampling is costly, its reliability precludes unnecessarily high implementation expenses.

  13. Comparison of sampling methodologies for nutrient monitoring in streams: uncertainties, costs and implications for mitigation

    NASA Astrophysics Data System (ADS)

    Audet, J.; Martinsen, L.; Hasler, B.; de Jonge, H.; Karydi, E.; Ovesen, N. B.; Kronvang, B.

    2014-11-01

    Eutrophication of aquatic ecosystems caused by excess concentrations of nitrogen and phosphorus may have harmful consequences for biodiversity and poses a health risk to humans via water supplies. Reduction of nitrogen and phosphorus losses to aquatic ecosystems involves implementation of costly measures, and reliable monitoring methods are therefore essential to select appropriate mitigation strategies and to evaluate their effects. Here, we compare the performances and costs of three methodologies for the monitoring of nutrients in rivers: grab sampling; time-proportional sampling; and passive sampling using flow-proportional samplers. Assuming hourly time-proportional sampling to be the best estimate of the "true" nutrient load, our results showed that the risk of obtaining wrong total nutrient load estimates by passive samplers is high despite similar costs as the time-proportional sampling. Our conclusion is that for passive samplers to provide a reliable monitoring alternative, further development is needed. Grab sampling was the cheapest of the three methods and was more precise and accurate than passive sampling. We conclude that although monitoring employing time-proportional sampling is costly, its reliability precludes unnecessarily high implementation expenses.

  14. Detecting declines in the abundance of a bull trout (Salvelinus confluentus) population: Understanding the accuracy, precision, and costs of our efforts

    USGS Publications Warehouse

    Al-Chokhachy, R.; Budy, P.; Conner, M.

    2009-01-01

    Using empirical field data for bull trout (Salvelinus confluentus), we evaluated the trade-off between power and sampling effort-cost using Monte Carlo simulations of commonly collected mark-recapture-resight and count data, and we estimated the power to detect changes in abundance across different time intervals. We also evaluated the effects of monitoring different components of a population and stratification methods on the precision of each method. Our results illustrate substantial variability in the relative precision, cost, and information gained from each approach. While grouping estimates by age or stage class substantially increased the precision of estimates, spatial stratification of sampling units resulted in limited increases in precision. Although mark-resight methods allowed for estimates of abundance versus indices of abundance, our results suggest snorkel surveys may be a more affordable monitoring approach across large spatial scales. Detecting a 25% decline in abundance after 5 years was not possible, regardless of technique (power = 0.80), without high sampling effort (48% of study site). Detecting a 25% decline was possible after 15 years, but still required high sampling efforts. Our results suggest detecting moderate changes in abundance of freshwater salmonids requires considerable resource and temporal commitments and highlight the difficulties of using abundance measures for monitoring bull trout populations.

  15. Lake Erie Yellow perch age estimation based on three structures: Precision, processing times, and management implications

    USGS Publications Warehouse

    Vandergoot, C.S.; Bur, M.T.; Powell, K.A.

    2008-01-01

    Yellow perch Perca flavescens support economically important recreational and commercial fisheries in Lake Erie and are intensively managed. Age estimation represents an integral component in the management of Lake Erie yellow perch stocks, as age-structured population models are used to set safe harvest levels on an annual basis. We compared the precision associated with yellow perch (N = 251) age estimates from scales, sagittal otoliths, and anal spine sections and evaluated the time required to process and estimate age from each structure. Three readers of varying experience estimated ages. The precision (mean coefficient of variation) of estimates among readers was 1% for sagittal otoliths, 5-6% for anal spines, and 11-13% for scales. Agreement rates among readers were 94-95% for otoliths, 71-76% for anal spines, and 45-50% for scales. Systematic age estimation differences were evident among scale and anal spine readers; less-experienced readers tended to underestimate ages of yellow perch older than age 4 relative to estimates made by an experienced reader. Mean scale age tended to underestimate ages of age-6 and older fish relative to otolith ages estimated by an experienced reader. Total annual mortality estimates based on scale ages were 20% higher than those based on otolith ages; mortality estimates based on anal spine ages were 4% higher than those based on otolith ages. Otoliths required more removal and preparation time than scales and anal spines, but age estimation time was substantially lower for otoliths than for the other two structures. We suggest the use of otoliths or anal spines for age estimation in yellow perch (regardless of length) from Lake Erie and other systems where precise age estimates are necessary, because age estimation errors resulting from the use of scales could generate incorrect management decisions. © Copyright by the American Fisheries Society 2008.
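
    Between-reader precision statistics of the kind reported here (mean coefficient of variation and percent exact agreement) can be computed directly from a reader-by-fish matrix of age estimates. The snippet below is a generic sketch with invented ages, not the Lake Erie data.

    ```python
    import numpy as np

    def ageing_precision(ages):
        """ages: array of shape (n_fish, n_readers) of age estimates.
        Returns mean CV (%) across fish and percent exact agreement among readers."""
        ages = np.asarray(ages, dtype=float)
        means = ages.mean(axis=1)
        sds = ages.std(axis=1, ddof=1)
        cv = 100.0 * np.mean(np.divide(sds, means, out=np.zeros_like(sds), where=means > 0))
        exact = np.mean([len(set(row)) == 1 for row in ages])   # all readers identical
        return cv, 100.0 * exact

    # Hypothetical otolith ages from three readers for eight yellow perch.
    otolith_ages = [
        [3, 3, 3], [5, 5, 5], [2, 2, 2], [7, 7, 6],
        [4, 4, 4], [6, 6, 6], [3, 3, 3], [8, 8, 8],
    ]
    cv, agreement = ageing_precision(otolith_ages)
    print(f"mean CV = {cv:.1f}%, exact agreement = {agreement:.0f}%")
    ```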

  16. Laboratory-associated infections and biosafety.

    PubMed Central

    Sewell, D L

    1995-01-01

    An estimated 500,000 laboratory workers in the United States are at risk of exposure to infectious agents that cause disease ranging from inapparent to life-threatening infections, but the precise risk to a given worker is unknown. The emergence of human immunodeficiency virus and hantavirus, the continuing problem of hepatitis B virus, and the reemergence of Mycobacterium tuberculosis have renewed interest in biosafety for the employees of laboratories and health care facilities. This review examines the history, the causes, and the methods for prevention of laboratory-associated infections. The initial step in a biosafety program is the assessment of risk to the employee. Risk assessment guidelines include the pathogenicity of the infectious agent, the method of transmission, worker-related risk factors, the source and route of infection, and the design of the laboratory facility. Strategies for the prevention and management of laboratory-associated infections are based on the containment of the infectious agent by physical separation from the laboratory worker and the environment, employee education about the occupational risks, and availability of an employee health program. Adherence to the biosafety guidelines mandated or proposed by various governmental and accrediting agencies reduces the risk of an occupational exposure to infectious agents handled in the workplace. PMID:7553572

  17. DoD Met Most Requirements of the Improper Payments Elimination and Recovery Act in FY 2014, but Improper Payment Estimates Were Unreliable

    DTIC Science & Technology

    2015-05-12

    Deficiencies That Affect the Reliability of Estimates: Statistical Precision Could Be Improved ... statistical precision of improper payments estimates in seven of the DoD payment programs through the use of stratified sample designs. DoD improper ... payments not subject to sampling, which made the results statistically invalid. We made a recommendation to correct this problem in a previous report.

  18. Sampling system for wheat (Triticum aestivum L) area estimation using digital LANDSAT MSS data and aerial photographs. [Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Moreira, M. A.; Chen, S. C.; Batista, G. T.

    1984-01-01

    A procedure to estimate wheat (Triticum aestivum L) area using a sampling technique based on aerial photographs and digital LANDSAT MSS data is developed. Aerial photographs covering 720 square km are visually analyzed. To estimate wheat area, a regression approach is applied using different sample sizes and various sampling units. As the size of the sampling unit decreased, the percentage of sampled area required to obtain similar estimation performance also decreased. The lowest percentage of the area sampled for wheat estimation with relatively high precision and accuracy through regression estimation is 13.90%, using 10 square km as the sampling unit. Wheat area estimation using only aerial photographs is less precise and accurate than that obtained by regression estimation.
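
    The regression (double-sampling) estimator used in studies like this combines an auxiliary variable available for every sampling unit (for example, a LANDSAT-classified wheat proportion) with a more accurate measurement on a subsample (the photo-interpreted proportion). The sketch below shows the textbook form of that estimator on invented data; the unit size, sample sizes and values are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    n_total, n_sample = 500, 70          # all sampling units vs. photo-interpreted subset
    x_all = rng.uniform(0.0, 0.6, n_total)           # LANDSAT wheat proportion, every unit
    idx = rng.choice(n_total, n_sample, replace=False)
    # More accurate (aerial-photo) proportion on the subsample, correlated with x.
    x_sample = x_all[idx]
    y_sample = np.clip(0.03 + 0.9 * x_sample + rng.normal(0, 0.05, n_sample), 0, 1)

    # Regression estimator: y_bar + b * (X_bar_all - x_bar_sample)
    b = np.polyfit(x_sample, y_sample, 1)[0]
    y_reg = y_sample.mean() + b * (x_all.mean() - x_sample.mean())

    unit_area_km2 = 10.0                 # each sampling unit covers 10 square km
    wheat_area = y_reg * n_total * unit_area_km2
    print(f"estimated mean wheat proportion = {y_reg:.3f}, wheat area = {wheat_area:.0f} km^2")
    ```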

  19. Processes and Procedures for Estimating Score Reliability and Precision

    ERIC Educational Resources Information Center

    Bardhoshi, Gerta; Erford, Bradley T.

    2017-01-01

    Precision is a key facet of test development, with score reliability determined primarily according to the types of error one wants to approximate and demonstrate. This article identifies and discusses several primary forms of reliability estimation: internal consistency (i.e., split-half, KR-20, α), test-retest, alternate forms, interscorer, and…
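
    The internal-consistency estimates listed above can be computed directly from an item-response matrix. The sketch below computes coefficient alpha and an odd-even split-half estimate stepped up with the Spearman-Brown formula; it is a generic illustration on simulated responses, not code from the article.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_persons, n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    def split_half(items):
        """Odd-even split-half correlation, stepped up with Spearman-Brown."""
        items = np.asarray(items, dtype=float)
        odd, even = items[:, 0::2].sum(axis=1), items[:, 1::2].sum(axis=1)
        r = np.corrcoef(odd, even)[0, 1]
        return 2 * r / (1 + r)

    rng = np.random.default_rng(11)
    true_score = rng.normal(size=(300, 1))
    responses = true_score + rng.normal(scale=1.0, size=(300, 8))   # 8 noisy items

    print(f"alpha = {cronbach_alpha(responses):.2f}, split-half = {split_half(responses):.2f}")
    ```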

  20. Association among Dietary Flavonoids, Flavonoid Subclasses and Ovarian Cancer Risk: A Meta-Analysis.

    PubMed

    Hua, Xiaoli; Yu, Lili; You, Ruxu; Yang, Yu; Liao, Jing; Chen, Dongsheng; Yu, Lixiu

    2016-01-01

    Previous studies have indicated that intake of dietary flavonoids or flavonoid subclasses is associated with ovarian cancer risk, but have presented controversial results. Therefore, we conducted a meta-analysis to derive a more precise estimation of these associations. We performed a search in PubMed, Google Scholar and ISI Web of Science from their inception to April 25, 2015 to select studies on the association among dietary flavonoids, flavonoid subclasses and ovarian cancer risk. The information was extracted by two independent authors. We assessed the heterogeneity, sensitivity, publication bias and quality of the articles. A random-effects model was used to calculate the pooled risk estimates. Five cohort studies and seven case-control studies were included in the final meta-analysis. We observed that intake of dietary flavonoids can decrease ovarian cancer risk, which was demonstrated by the pooled RR (RR = 0.82, 95% CI = 0.68-0.98). In a subgroup analysis by flavonoid subtypes, the ovarian cancer risk was also decreased for isoflavones (RR = 0.67, 95% CI = 0.50-0.92) and flavonols (RR = 0.68, 95% CI = 0.58-0.80). In contrast, there was no compelling evidence that consumption of flavones (RR = 0.86, 95% CI = 0.71-1.03) decreased ovarian cancer risk, which explained part of the heterogeneity. The sensitivity analysis indicated stable results, and no publication bias was observed based on the results of funnel plot analysis and Egger's test (p = 0.26). This meta-analysis suggested that consumption of dietary flavonoids and the subtypes isoflavones and flavonols has a protective effect against ovarian cancer, with the exception of flavone consumption. Nevertheless, further investigations on a larger population covering more flavonoid subclasses are warranted.

  1. Association among Dietary Flavonoids, Flavonoid Subclasses and Ovarian Cancer Risk: A Meta-Analysis

    PubMed Central

    You, Ruxu; Yang, Yu; Liao, Jing; Chen, Dongsheng; Yu, Lixiu

    2016-01-01

    Background Previous studies have indicated that intake of dietary flavonoids or flavonoid subclasses is associated with ovarian cancer risk, but have presented controversial results. Therefore, we conducted a meta-analysis to derive a more precise estimation of these associations. Methods We performed a search in PubMed, Google Scholar and ISI Web of Science from their inception to April 25, 2015 to select studies on the association among dietary flavonoids, flavonoid subclasses and ovarian cancer risk. The information was extracted by two independent authors. We assessed the heterogeneity, sensitivity, publication bias and quality of the articles. A random-effects model was used to calculate the pooled risk estimates. Results Five cohort studies and seven case-control studies were included in the final meta-analysis. We observed that intake of dietary flavonoids can decrease ovarian cancer risk, which was demonstrated by the pooled RR (RR = 0.82, 95% CI = 0.68–0.98). In a subgroup analysis by flavonoid subtypes, the ovarian cancer risk was also decreased for isoflavones (RR = 0.67, 95% CI = 0.50–0.92) and flavonols (RR = 0.68, 95% CI = 0.58–0.80). In contrast, there was no compelling evidence that consumption of flavones (RR = 0.86, 95% CI = 0.71–1.03) decreased ovarian cancer risk, which explained part of the heterogeneity. The sensitivity analysis indicated stable results, and no publication bias was observed based on the results of funnel plot analysis and Egger’s test (p = 0.26). Conclusions This meta-analysis suggested that consumption of dietary flavonoids and the subtypes isoflavones and flavonols has a protective effect against ovarian cancer, with the exception of flavone consumption. Nevertheless, further investigations on a larger population covering more flavonoid subclasses are warranted. PMID:26960146

  2. Recovery Efficiency and Limit of Detection of Aerosolized Bacillus anthracis Sterne from Environmental Surface Samples

    PubMed Central

    Estill, Cheryl Fairfield; Baron, Paul A.; Beard, Jeremy K.; Hein, Misty J.; Larsen, Lloyd D.; Rose, Laura; Schaefer, Frank W.; Noble-Wang, Judith; Hodges, Lisa; Lindquist, H. D. Alan; Deye, Gregory J.; Arduino, Matthew J.

    2009-01-01

    After the 2001 anthrax incidents, surface sampling techniques for biological agents were found to be inadequately validated, especially at low surface loadings. We aerosolized Bacillus anthracis Sterne spores within a chamber to achieve very low surface loading (ca. 3, 30, and 200 CFU per 100 cm2). Steel and carpet coupons seeded in the chamber were sampled with swab (103 cm2) or wipe or vacuum (929 cm2) surface sampling methods and analyzed at three laboratories. Agar settle plates (60 cm2) were the reference for determining recovery efficiency (RE). The minimum estimated surface concentrations to achieve a 95% response rate based on probit regression were 190, 15, and 44 CFU/100 cm2 for sampling steel surfaces and 40, 9.2, and 28 CFU/100 cm2 for sampling carpet surfaces with swab, wipe, and vacuum methods, respectively; however, these results should be cautiously interpreted because of high observed variability. Mean REs at the highest surface loading were 5.0%, 18%, and 3.7% on steel and 12%, 23%, and 4.7% on carpet for the swab, wipe, and vacuum methods, respectively. Precision (coefficient of variation) was poor at the lower surface concentrations but improved with increasing surface concentration. The best precision was obtained with wipe samples on carpet, achieving 38% at the highest surface concentration. The wipe sampling method detected B. anthracis at lower estimated surface concentrations and had higher RE and better precision than the other methods. These results may guide investigators to more meaningfully conduct environmental sampling, quantify contamination levels, and conduct risk assessment for humans. PMID:19429546
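
    The minimum surface concentration achieving a 95% response rate reported here comes from a probit dose-response fit: detection probability is modeled as Phi(a + b·log10(concentration)) and the fitted curve is inverted at p = 0.95. The sketch below fits such a model by maximum likelihood on invented detect/non-detect data; it is not the study's dataset or code.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Invented surface concentrations (CFU/100 cm^2) and detect (1) / non-detect (0) results.
    conc = np.array([1, 3, 3, 10, 10, 30, 30, 100, 100, 300, 300, 1000], dtype=float)
    detect = np.array([0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1], dtype=float)
    x = np.log10(conc)

    def neg_log_lik(params):
        a, b = params
        p = norm.cdf(a + b * x)                  # probit response curve
        p = np.clip(p, 1e-9, 1 - 1e-9)           # guard against log(0)
        return -np.sum(detect * np.log(p) + (1 - detect) * np.log(1 - p))

    fit = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
    a_hat, b_hat = fit.x

    # Invert the fitted probit curve at a 95% response rate.
    log10_c95 = (norm.ppf(0.95) - a_hat) / b_hat
    print(f"estimated concentration for 95% detection: {10**log10_c95:.0f} CFU/100 cm^2")
    ```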

  3. Recovery efficiency and limit of detection of aerosolized Bacillus anthracis Sterne from environmental surface samples.

    PubMed

    Estill, Cheryl Fairfield; Baron, Paul A; Beard, Jeremy K; Hein, Misty J; Larsen, Lloyd D; Rose, Laura; Schaefer, Frank W; Noble-Wang, Judith; Hodges, Lisa; Lindquist, H D Alan; Deye, Gregory J; Arduino, Matthew J

    2009-07-01

    After the 2001 anthrax incidents, surface sampling techniques for biological agents were found to be inadequately validated, especially at low surface loadings. We aerosolized Bacillus anthracis Sterne spores within a chamber to achieve very low surface loading (ca. 3, 30, and 200 CFU per 100 cm(2)). Steel and carpet coupons seeded in the chamber were sampled with swab (103 cm(2)) or wipe or vacuum (929 cm(2)) surface sampling methods and analyzed at three laboratories. Agar settle plates (60 cm(2)) were the reference for determining recovery efficiency (RE). The minimum estimated surface concentrations to achieve a 95% response rate based on probit regression were 190, 15, and 44 CFU/100 cm(2) for sampling steel surfaces and 40, 9.2, and 28 CFU/100 cm(2) for sampling carpet surfaces with swab, wipe, and vacuum methods, respectively; however, these results should be cautiously interpreted because of high observed variability. Mean REs at the highest surface loading were 5.0%, 18%, and 3.7% on steel and 12%, 23%, and 4.7% on carpet for the swab, wipe, and vacuum methods, respectively. Precision (coefficient of variation) was poor at the lower surface concentrations but improved with increasing surface concentration. The best precision was obtained with wipe samples on carpet, achieving 38% at the highest surface concentration. The wipe sampling method detected B. anthracis at lower estimated surface concentrations and had higher RE and better precision than the other methods. These results may guide investigators to more meaningfully conduct environmental sampling, quantify contamination levels, and conduct risk assessment for humans.

  4. Why precision medicine is not the best route to a healthier world.

    PubMed

    Rey-López, Juan Pablo; Sá, Thiago Herick de; Rezende, Leandro Fórnias Machado de

    2018-02-05

    Precision medicine has been announced as a new health revolution. The term precision implies more accuracy in healthcare and prevention of diseases, which could yield substantial cost savings. However, scientific debate about precision medicine is needed to avoid wasting economic resources and hype. In this commentary, we express the reasons why precision medicine cannot be a health revolution for population health. Advocates of precision medicine neglect the limitations of individual-centred, high-risk strategies (reduced population health impact) and the current crisis of evidence-based medicine. Overrated "precision medicine" promises may be serving vested interests, by dictating priorities in the research agenda and justifying the exorbitant healthcare expenditure in our finance-based medicine. If societies aspire to address strong risk factors for non-communicable diseases (such as air pollution, smoking, poor diets, or physical inactivity), they need less medicine and more investment in population prevention strategies.

  5. Precision of hard structures used to estimate age of mountain Whitefish (Prosopium williamsoni)

    USGS Publications Warehouse

    Watkins, Carson J.; Ross, Tyler J.; Hardy, Ryan S.; Quist, Michael C.

    2015-01-01

    The mountain whitefish (Prosopium williamsoni) is a widely distributed salmonid in western North America that has decreased in abundance over portions of its distribution due to anthropogenic disturbances. In this investigation, we examined precision of age estimates derived from scales, pectoral fin rays, and sagittal otoliths from 167 mountain whitefish. Otoliths and pectoral fin rays were mounted in epoxy and cross-sectioned before examination. Scales were pressed onto acetate slides and resulting impressions were examined. Between-reader precision (i.e., between 2 readers), between-reader variability, and reader confidence ratings were compared among hard structures. Coefficient of variation (CV) in age estimates was lowest and percentage of exact agreement (PA-0) was highest for scales (CV = 5.9; PA-0 = 70%) compared to pectoral fin rays (CV =11.0; PA-0 = 58%) and otoliths (CV = 12.3; PA-0 = 55%). Median confidence ratings were significantly different (P ≤ 0.05) among all structures, with scales having the highest median confidence. Reader confidence decreased with fish age for scales and pectoral fin rays, but reader confidence increased with fish age for otoliths. In general, age estimates were more precise and reader confidence was higher for scales compared to pectoral fin rays and otoliths. This research will help fisheries biologists in selecting the most appropriate hard structure to use for future age and growth studies on mountain whitefish. In turn, selection of the most precise hard structure will lead to better estimates of dynamic rate functions.

  6. Comparison study on disturbance estimation techniques in precise slow motion control

    NASA Astrophysics Data System (ADS)

    Fan, S.; Nagamune, R.; Altintas, Y.; Fan, D.; Zhang, Z.

    2010-08-01

    Precise low-speed motion control is important for industrial applications such as micro-milling machine tool feed drives and electro-optical tracking servo systems. It calls for precise measurement of position and instantaneous velocity, together with estimation of disturbances such as direct-drive motor force ripple, guideway friction, and cutting force. This paper presents a comparison study of the dynamic response and noise rejection performance of three existing disturbance estimation techniques: time-delayed estimators, state-augmented Kalman filters, and conventional disturbance observers. The design essentials of these three disturbance estimators are introduced. For the time-delayed estimators, it is proposed to substitute a Kalman filter for the Luenberger state observer to improve noise suppression. The results show that the noise rejection performance of the state-augmented Kalman filters and the time-delayed estimators is much better than that of the conventional disturbance observers. These two estimators provide not only an estimate of the disturbance but also low-noise estimates of position and instantaneous velocity. The bandwidth of the state-augmented Kalman filters is wider than that of the time-delayed estimators. In addition, the state-augmented Kalman filters can give unbiased estimates of the slowly varying disturbance and the instantaneous velocity, whereas the time-delayed estimators cannot. Simulation and experimental results from the X axis of a 2.5-axis prototype micro-milling machine are provided.
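
    A state-augmented Kalman filter of the kind compared here treats the unknown disturbance as an extra, slowly varying state and estimates it jointly with position and velocity from noisy position measurements. The snippet below is a minimal single-axis sketch of that idea with a constant-disturbance (random-walk) model; the plant parameters and noise levels are invented.

    ```python
    import numpy as np

    dt, mass = 1e-3, 2.0                     # sample time [s], moving mass [kg]
    # Augmented state: [position, velocity, disturbance force]; the disturbance is
    # modeled as (nearly) constant between samples, i.e. a slow random walk.
    A = np.array([[1, dt, -dt**2 / (2 * mass)],
                  [0, 1,  -dt / mass],
                  [0, 0,   1.0]])
    B = np.array([[dt**2 / (2 * mass)], [dt / mass], [0.0]])
    H = np.array([[1.0, 0.0, 0.0]])          # only position is measured

    Q = np.diag([1e-12, 1e-10, 1e-6])        # process noise; largest on disturbance state
    R = np.array([[1e-10]])                  # position measurement noise variance

    x_hat = np.zeros((3, 1))
    P = np.eye(3) * 1e-3

    rng = np.random.default_rng(5)
    x_true = np.zeros((3, 1))
    x_true[2, 0] = 0.8                        # true disturbance force [N], e.g. friction

    for k in range(2000):
        u = np.array([[1.0]])                 # commanded motor force [N]
        # True plant: the disturbance state opposes the commanded force.
        x_true = A @ x_true + B @ u
        z = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]), (1, 1))

        # Kalman predict / update on the augmented model.
        x_hat = A @ x_hat + B @ u
        P = A @ P @ A.T + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_hat = x_hat + K @ (z - H @ x_hat)
        P = (np.eye(3) - K @ H) @ P

    print(f"estimated disturbance: {x_hat[2, 0]:.3f} N (true 0.8 N)")
    ```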

  7. Air pollution as a risk factor in health impact assessments of a travel mode shift towards cycling

    PubMed Central

    Raza, Wasif; Forsberg, Bertil; Johansson, Christer; Sommar, Johan Nilsson

    2018-01-01

    ABSTRACT Background: Promotion of active commuting provides substantial health and environmental benefits by influencing air pollution, physical activity, accidents, and noise. However, studies evaluating intervention and policies on a mode shift from motorized transport to cycling have estimated health impacts with varying validity and precision. Objective: To review and discuss the estimation of air pollution exposure and its impacts in health impact assessment studies of a shift in transport from cars to bicycles in order to guide future assessments. Methods: A systematic database search of PubMed was done primarily for articles published from January 2000 to May 2016 according to PRISMA guidelines. Results: We identified 18 studies of health impact assessment of change in transport mode. Most studies investigated future hypothetical scenarios of increased cycling. The impact on the general population was estimated using a comparative risk assessment approach in the majority of these studies, whereas some used previously published cost estimates. Air pollution exposure during cycling was estimated based on the ventilation rate, the pollutant concentration, and the trip duration. Most studies employed exposure-response functions from studies comparing background levels of fine particles between cities to estimate the health impacts of local traffic emissions. The effect of air pollution associated with increased cycling contributed small health benefits for the general population, and also only slightly increased risks associated with fine particle exposure among those who shifted to cycling. However, studies calculating health impacts based on exposure-response functions for ozone, black carbon or nitrogen oxides found larger effects attributed to changes in air pollution exposure. Conclusion: A large discrepancy between studies was observed due to different health impact assessment approaches, different assumptions for calculation of inhaled dose and different selection of dose-response functions. This kind of assessments would improve from more holistic approaches using more specific exposure-response functions. PMID:29400262
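
    The inhaled-dose calculation referred to here multiplies a ventilation rate, a pollutant concentration, and the trip duration for each travel mode. The sketch below shows that arithmetic for a hypothetical commute; the ventilation rates and concentrations are illustrative values, not figures from the reviewed studies.

    ```python
    def inhaled_dose_ug(concentration_ug_m3, ventilation_m3_h, duration_h):
        """Inhaled dose [µg] = concentration [µg/m^3] x ventilation rate [m^3/h] x time [h]."""
        return concentration_ug_m3 * ventilation_m3_h * duration_h

    # Hypothetical commute: cycling takes longer and drives higher ventilation, while
    # the on-road PM2.5 concentration may differ between the cycle path and the car cabin.
    cycling = inhaled_dose_ug(concentration_ug_m3=12.0, ventilation_m3_h=2.0, duration_h=0.5)
    driving = inhaled_dose_ug(concentration_ug_m3=10.0, ventilation_m3_h=0.6, duration_h=0.3)

    print(f"PM2.5 dose per trip: cycling {cycling:.1f} µg vs. car {driving:.1f} µg")
    ```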

  8. Multi-dimensional Precision Livestock Farming: a potential toolbox for sustainable rangeland management.

    PubMed

    di Virgilio, Agustina; Morales, Juan M; Lambertucci, Sergio A; Shepard, Emily L C; Wilson, Rory P

    2018-01-01

    Precision Livestock Farming (PLF) is a promising approach to minimize the conflicts between socio-economic activities and landscape conservation. However, its application on extensive systems of livestock production can be challenging. The main difficulties arise because animals graze on large natural pastures where they are exposed to competition with wild herbivores for heterogeneous and scarce resources, predation risk, adverse weather, and complex topography. Considering that 91% of the world's surface devoted to livestock production is composed of extensive systems (i.e., rangelands), our general aim was to develop a PLF methodology that quantifies: (i) detailed behavioural patterns, (ii) feeding rate, and (iii) costs associated with different behaviours and landscape traits. For this, we used Merino sheep in Patagonian rangelands as a case study. We combined data from an animal-attached multi-sensor tag (tri-axial acceleration, tri-axial magnetometry, temperature sensor and Global Positioning System) with landscape layers from a Geographical Information System to acquire data. Then, we used high accuracy decision trees, dead reckoning methods and spatial data processing techniques to show how this combination of tools could be used to assess energy balance, predation risk and competition experienced by livestock through time and space. The combination of methods proposed here is a useful tool to assess livestock behaviour and the different factors that influence extensive livestock production, such as topography, environmental temperature, predation risk and competition for heterogeneous resources. We were able to quantify feeding rate continuously through time and space with high accuracy and show how it could be used to estimate animal production and the intensity of grazing on the landscape. We also assessed the effects of resource heterogeneity (inferred through search times), and the potential costs associated with predation risk, competition, thermoregulation and movement on complex topography. The quantification of feeding rate and behavioural costs provided by our approach could be used to estimate energy balance and to predict individual growth, survival and reproduction. Finally, we discussed how the information provided by this combination of methods can be used to develop wildlife-friendly strategies that also maximize animal welfare, quality and environmental sustainability.

  9. Is digital photography an accurate and precise method for measuring range of motion of the shoulder and elbow?

    PubMed

    Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C

    2018-03-01

    Accurate measurements of shoulder and elbow motion are required for the management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, shoulder flexion/abduction/internal rotation/external rotation and elbow flexion/extension were measured using visual estimation, goniometry, and digital photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard (motion capture analysis), while precision was defined by the proportion of measurements within the authors' definition of clinical significance (10° for all motions except for elbow extension where 5° was used). Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although statistically significant differences were found in measurement accuracy between the three techniques, none of these differences met the authors' definition of clinical significance. Precision of the measurements was significantly higher for both digital photography (shoulder abduction [93% vs. 74%, p < 0.001], shoulder internal rotation [97% vs. 83%, p = 0.001], and elbow flexion [93% vs. 65%, p < 0.001]) and goniometry (shoulder abduction [92% vs. 74%, p < 0.001] and shoulder internal rotation [94% vs. 83%, p = 0.008]) than visual estimation. Digital photography was more precise than goniometry for measurements of elbow flexion only [93% vs. 76%, p < 0.001]. There was no clinically significant difference in measurement accuracy between the three techniques for shoulder and elbow motion. Digital photography showed higher measurement precision compared to visual estimation for shoulder abduction, shoulder internal rotation, and elbow flexion. However, digital photography was only more precise than goniometry for measurements of elbow flexion. Overall digital photography shows equivalent accuracy to visual estimation and goniometry, but with higher precision than visual estimation. Copyright © 2017. Published by Elsevier B.V.

  10. Accuracy and precision of stream reach water surface slopes estimated in the field and from maps

    USGS Publications Warehouse

    Isaak, D.J.; Hubert, W.A.; Krueger, K.L.

    1999-01-01

    The accuracy and precision of five tools used to measure stream water surface slope (WSS) were evaluated. Water surface slopes estimated in the field with a clinometer or from topographic maps used in conjunction with a map wheel or geographic information system (GIS) were significantly higher than WSS estimated in the field with a surveying level (biases of 34, 41, and 53%, respectively). Accuracy of WSS estimates obtained with an Abney level did not differ from surveying level estimates, but conclusions regarding the accuracy of Abney levels and clinometers were weakened by intratool variability. The surveying level estimated WSS most precisely (coefficient of variation [CV] = 0.26%), followed by the GIS (CV = 1.87%), map wheel (CV = 6.18%), Abney level (CV = 13.68%), and clinometer (CV = 21.57%). Estimates of WSS measured in the field with an Abney level and estimated for the same reaches with a GIS used in conjunction with 1:24,000-scale topographic maps were significantly correlated (r = 0.86), but there was a tendency for the GIS to overestimate WSS. Detailed accounts of the methods used to measure WSS and recommendations regarding the measurement of WSS are provided.

  11. Semi-Professional Rugby League Players have Higher Concussion Risk than Professional or Amateur Participants: A Pooled Analysis.

    PubMed

    King, Doug; Hume, Patria; Gissane, Conor; Clark, Trevor

    2017-02-01

    A combined estimate of injuries within a specific sport through pooled analysis provides more precise evidence and meaningful information about the sport, whilst controlling for between-study variation due to individual sub-cohort characteristics. The objective of this analysis was to review all published rugby league studies reporting injuries from match and training participation and report the pooled data estimates for rugby league concussion injury epidemiology. A systematic literature analysis of concussion in rugby league was performed on published studies from January 1990 to October 2015. Data were extracted and pooled from 25 studies that reported the number and incidence of concussions in rugby league match and training activities. Amateur rugby league players had the highest incidence of concussive injuries in match activities (19.1 per 1000 match hours) while semi-professional players had the highest incidence of concussive injuries in training activities (3.1 per 1000 training hours). This pooled analysis showed that, during match participation activities, amateur rugby league participants had a higher reported concussion injury rate than professional and semi-professional participants. Semi-professional participants had nearly a threefold greater concussion injury risk than amateur rugby league participants during match participation. They also had nearly a 600-fold greater concussion injury risk than professional rugby league participants during training participation.

  12. Job demands and job strain as risk factors for employee wellbeing in elderly care: an instrumental-variables analysis.

    PubMed

    Elovainio, Marko; Heponiemi, Tarja; Kuusio, Hannamaria; Jokela, Markus; Aalto, Anna-Mari; Pekkarinen, Laura; Noro, Anja; Finne-Soveri, Harriet; Kivimäki, Mika; Sinervo, Timo

    2015-02-01

    The association between psychosocial work environment and employee wellbeing has repeatedly been shown. However, as environmental evaluations have typically been self-reported, the observed associations may be attributable to reporting bias. Applying instrumental-variable regression, we used staffing level (the ratio of staff to residents) as an unconfounded instrument for self-reported job demands and job strain to predict various indicators of wellbeing (perceived stress, psychological distress and sleeping problems) among 1525 registered nurses, practical nurses and nursing assistants working in elderly care wards. In ordinary regression, higher self-reported job demands and job strain were associated with increased risk of perceived stress, psychological distress and sleeping problems. The effect estimates for the associations of these psychosocial factors with perceived stress and psychological distress were greater, but less precisely estimated, in an instrumental-variables analysis which took into account only the variation in self-reported job demands and job strain that was explained by staffing level. No association between psychosocial factors and sleeping problems was observed with the instrumental-variable analysis. These results support a causal interpretation of high self-reported job demands and job strain being risk factors for employee wellbeing. © The Author 2014. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
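
    An instrumental-variables analysis of this kind can be sketched as two-stage least squares: regress the self-reported exposure on the instrument (staffing level), then regress the outcome on the fitted exposure. The snippet below shows that two-stage structure on simulated data with a reporting-bias confounder; it is a conceptual illustration, not the study's model, which also adjusted for covariates and used appropriate IV standard errors.

    ```python
    import numpy as np

    rng = np.random.default_rng(2024)
    n = 1500

    staffing = rng.normal(0.6, 0.1, n)                    # instrument: staff-to-resident ratio
    true_demands = 5.0 - 4.0 * staffing + rng.normal(0, 0.5, n)
    negative_affect = rng.normal(0, 1, n)                 # confounds reporting AND outcome
    reported_demands = true_demands + 0.8 * negative_affect + rng.normal(0, 0.3, n)
    stress = 1.0 * true_demands + 1.0 * negative_affect + rng.normal(0, 0.5, n)

    def ols_coef(x, y):
        """Intercept and slope of y on x by ordinary least squares."""
        X = np.column_stack([np.ones_like(x), x])
        return np.linalg.lstsq(X, y, rcond=None)[0]

    # Naive OLS of stress on self-reported demands mixes in the reporting bias.
    naive = ols_coef(reported_demands, stress)[1]

    # Stage 1: predict reported demands from the instrument (staffing level).
    a0, a1 = ols_coef(staffing, reported_demands)
    fitted = a0 + a1 * staffing
    # Stage 2: regress the outcome on the instrumented exposure.
    iv = ols_coef(fitted, stress)[1]

    print(f"naive OLS effect: {naive:.2f}, IV (2SLS) effect: {iv:.2f}, true effect: 1.00")
    ```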

  13. The Precision of Mapping Between Number Words and the Approximate Number System Predicts Children’s Formal Math Abilities

    PubMed Central

    Libertus, Melissa E.; Odic, Darko; Feigenson, Lisa; Halberda, Justin

    2016-01-01

    Children can represent number in at least two ways: by using their non-verbal, intuitive Approximate Number System (ANS), and by using words and symbols to count and represent numbers exactly. Further, by the time they are five years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children’s math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation – mapping accuracy and variability – might each relate to math performance. Here, we address these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. PMID:27348475

  14. The precision of mapping between number words and the approximate number system predicts children's formal math abilities.

    PubMed

    Libertus, Melissa E; Odic, Darko; Feigenson, Lisa; Halberda, Justin

    2016-10-01

    Children can represent number in at least two ways: by using their non-verbal, intuitive approximate number system (ANS) and by using words and symbols to count and represent numbers exactly. Furthermore, by the time they are 5 years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children's math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation – mapping accuracy and variability – might each relate to math performance. Here, we addressed these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities, even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Assessing the Risk of Progression From Asymptomatic Left Ventricular Dysfunction to Overt Heart Failure: A Systematic Overview and Meta-Analysis.

    PubMed

    Echouffo-Tcheugui, Justin B; Erqou, Sebhat; Butler, Javed; Yancy, Clyde W; Fonarow, Gregg C

    2016-04-01

    This study sought to provide estimates of the risk of progression to overt heart failure (HF) from systolic or diastolic asymptomatic left ventricular dysfunction through a systematic review and meta-analysis. Precise population-based estimates on the progression from asymptomatic left ventricular dysfunction (or stage B HF) to clinical HF (stage C HF) remain limited, despite its prognostic and clinical implications. Pre-emptive intervention with neurohormonal modulation may attenuate disease progression. MEDLINE and EMBASE were systematically searched (until March 2015). Cohort studies reporting on the progression from asymptomatic left ventricular systolic dysfunction (ALVSD) or asymptomatic left ventricular diastolic dysfunction (ALVDD) to overt HF were included. Effect estimates (prevalence, incidence, and relative risk) were pooled using a random-effects model meta-analysis, separately for systolic and diastolic dysfunction, with heterogeneity assessed with the I² statistic. Thirteen reports based on 11 distinct studies of progression of ALVSD were included in the meta-analysis, assessing a total of 25,369 participants followed for 7.9 years on average. The absolute risks of progression to HF were 8.4 per 100 person-years (95% confidence interval [CI]: 4.0 to 12.8 per 100 person-years) for those with ALVSD, 2.8 per 100 person-years (95% CI: 1.9 to 3.7 per 100 person-years) for those with ALVDD, and 1.04 per 100 person-years (95% CI: 0.0 to 2.2 per 100 person-years) for those without any evident ventricular dysfunction. The combined maximally adjusted relative risk of HF for ALVSD was 4.6 (95% CI: 2.2 to 9.8), and that of ALVDD was 1.7 (95% CI: 1.3 to 2.2). ALVSD and ALVDD are each associated with a substantial risk for incident HF, indicating an imperative to develop effective interventions at these stages. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
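
    As a worked illustration of the random-effects pooling mentioned in this record, the Python sketch below applies a DerSimonian-Laird estimator to a handful of made-up study-level relative risks; the numbers are assumptions for demonstration only and are not the studies analysed in this meta-analysis.

        import numpy as np

        # Hypothetical study-level relative risks and 95% CIs (illustrative only).
        rr = np.array([4.2, 5.1, 3.8, 6.0])
        ci_low = np.array([2.0, 2.8, 1.9, 2.5])
        ci_high = np.array([8.8, 9.3, 7.6, 14.4])

        # Work on the log scale; recover standard errors from the CI width.
        y = np.log(rr)
        se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
        w_fixed = 1.0 / se**2

        # DerSimonian-Laird estimate of the between-study variance tau^2.
        y_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
        Q = np.sum(w_fixed * (y - y_fixed) ** 2)
        df = len(y) - 1
        c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
        tau2 = max(0.0, (Q - df) / c)
        i2 = max(0.0, (Q - df) / Q) * 100  # I^2 heterogeneity statistic

        # Random-effects pooled estimate and 95% CI.
        w_re = 1.0 / (se**2 + tau2)
        y_re = np.sum(w_re * y) / np.sum(w_re)
        se_re = np.sqrt(1.0 / np.sum(w_re))
        print(f"Pooled RR = {np.exp(y_re):.2f} "
              f"(95% CI {np.exp(y_re - 1.96 * se_re):.2f}-{np.exp(y_re + 1.96 * se_re):.2f}), "
              f"I^2 = {i2:.0f}%")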

  16. Precision phase estimation based on weak-value amplification

    NASA Astrophysics Data System (ADS)

    Qiu, Xiaodong; Xie, Linguo; Liu, Xiong; Luo, Lan; Li, Zhaoxue; Zhang, Zhiyou; Du, Jinglei

    2017-02-01

    In this letter, we propose a precise phase-estimation method based on the weak-value amplification (WVA) technique using a monochromatic light source. The anomalous WVA significantly suppresses the technical noise in the intensity-difference signal induced by the phase delay when the post-selection procedure comes into play. The phase-measurement precision of this method is proportional to the weak value of a polarization operator over the experimental range. Our results compare well with wide-spectrum-light weak-measurement phase schemes and outperform the standard homodyne phase-detection technique.

  17. A Review on Automatic Mammographic Density and Parenchymal Segmentation

    PubMed Central

    He, Wenda; Juette, Arne; Denton, Erika R. E.; Oliver, Arnau

    2015-01-01

    Breast cancer is the most frequently diagnosed cancer in women. However, the exact cause(s) of breast cancer still remains unknown. Early detection, precise identification of women at risk, and application of appropriate disease prevention measures are by far the most effective way to tackle breast cancer. There are more than 70 common genetic susceptibility factors included in the current non-image-based risk prediction models (e.g., the Gail and the Tyrer-Cuzick models). Image-based risk factors, such as mammographic densities and parenchymal patterns, have been established as biomarkers but have not been fully incorporated in the risk prediction models used for risk stratification in screening and/or measuring responsiveness to preventive approaches. Within computer aided mammography, automatic mammographic tissue segmentation methods have been developed for estimation of breast tissue composition to facilitate mammographic risk assessment. This paper presents a comprehensive review of automatic mammographic tissue segmentation methodologies developed over the past two decades and the evidence for risk assessment/density classification using segmentation. The aim of this review is to analyse how engineering advances have progressed and the impact automatic mammographic tissue segmentation has in a clinical environment, as well as to understand the current research gaps with respect to the incorporation of image-based risk factors in non-image-based risk prediction models. PMID:26171249

  18. Global data on blindness.

    PubMed Central

    Thylefors, B.; Négrel, A. D.; Pararajasegaram, R.; Dadzie, K. Y.

    1995-01-01

    Globally, it is estimated that there are 38 million persons who are blind. Moreover, a further 110 million people have low vision and are at great risk of becoming blind. The main causes of blindness and low vision are cataract, trachoma, glaucoma, onchocerciasis, and xerophthalmia; however, insufficient data on blindness from causes such as diabetic retinopathy and age-related macular degeneration preclude specific estimations of their global prevalence. The age-specific prevalences of the major causes of blindness that are related to age indicate that the trend will be for an increase in such blindness over the decades to come, unless energetic efforts are made to tackle these problems. More data collected through standardized methodologies, using internationally accepted (ICD-10) definitions, are needed. Data on the incidence of blindness due to common causes would be useful for calculating future trends more precisely. PMID:7704921

  19. Optimal estimation of entanglement in optical qubit systems

    NASA Astrophysics Data System (ADS)

    Brida, Giorgio; Degiovanni, Ivo P.; Florio, Angela; Genovese, Marco; Giorda, Paolo; Meda, Alice; Paris, Matteo G. A.; Shurupov, Alexander P.

    2011-05-01

    We address the experimental determination of entanglement for systems made of a pair of polarization qubits. We exploit quantum estimation theory to derive optimal estimators, which are then implemented to achieve the ultimate bound on precision. In particular, we present a set of experiments aimed at measuring the amount of entanglement for states belonging to different families of pure and mixed two-qubit two-photon states. Our scheme is based on visibility measurements of quantum correlations and achieves the ultimate precision allowed by quantum mechanics in the limit of Poissonian distribution of coincidence counts. Although optimal estimation of entanglement does not require full tomography of the states, we have also performed state reconstruction using two different sets of tomographic projectors and explicitly shown that they provide a less precise determination of entanglement. The use of optimal estimators also allows us to compare and statistically assess the different noise models used to describe decoherence effects occurring in the generation of entanglement.

  20. Demonstration of precise estimation of polar motion parameters with the global positioning system: Initial results

    NASA Technical Reports Server (NTRS)

    Lichten, S. M.

    1991-01-01

    Data from the Global Positioning System (GPS) were used to determine precise polar motion estimates. Conservatively calculated formal errors of the GPS least squares solution are approx. 10 cm. The GPS estimates agree with independently determined polar motion values from very long baseline interferometry (VLBI) at the 5 cm level. The data were obtained from a partial constellation of GPS satellites and from a sparse worldwide distribution of ground stations. The accuracy of the GPS estimates should continue to improve as more satellites and ground receivers become operational, and eventually a near real time GPS capability should be available. Because the GPS data are obtained and processed independently from the large radio antennas at the Deep Space Network (DSN), GPS estimation could provide very precise measurements of Earth orientation for calibration of deep space tracking data and could significantly relieve the ever growing burden on the DSN radio telescopes to provide Earth platform calibrations.

  1. Analysis of present and future potential compound flooding risk along the European coast

    NASA Astrophysics Data System (ADS)

    Bevacqua, Emanuele; Maraun, Douglas; Voukouvalas, Evangelos; Vousdoukas, Michalis I.; Widmann, Martin; Manning, Colin; Vrac, Mathieu

    2017-04-01

    The coastal zone is the natural border between the sea and the mainland, and it is constantly under the influence of marine and land-based natural and human-induced pressure. Compound floods are extreme events occurring in coastal areas where the interaction of high sea level and a large amount of precipitation causes extreme flooding. Typically, the risk of flooding in coastal areas is assessed by analysing either sea-level-driven or precipitation-driven flooding; however, compound floods should be considered to avoid underestimating the risk. In the future, the human pressure at the coastal zone is expected to increase, calling for a comprehensive analysis of the compound flooding risk under different climate change scenarios. In this study we introduce the concept of "potential risk": we investigate how often large amounts of precipitation and high sea level may co-occur, rather than the effective impact due to the interaction of these two hazards. The effective risk of compound flooding in a specific place depends also on the local orography and on the existing protections. The estimation of the potential risk of compound flooding is useful for identifying places where an effective risk of compound flooding may exist, and where further studies would be useful to obtain more precise information on the local risk. We estimate the potential risk of compound flooding along the European coastal zone using the ERA-Interim meteorological reanalysis for the past and present state, and future projections from two RCP scenarios (namely the RCP4.5 and RCP8.5 scenarios) as derived from 8 CMIP5 models of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Sea level data are estimated by forcing the hydrodynamic model Delft3D-Flow with 6-hourly wind and atmospheric pressure fields. Based on sea level (storm surge and astronomical tide) and precipitation joint occurrence analysis, a map of the potential compound flooding risk along the European coast is proposed and critical places with high potential risk are identified. For these critical places, we plan to assess the potential compound flood risk driven by coinciding extreme values of sea level and river discharge. Finally, we analyse the large-scale atmospheric processes that lead to compound floods and their variation under future climate change scenarios.
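
    The joint-occurrence idea behind the potential-risk estimate can be illustrated with a short Python sketch that counts how often daily sea level and precipitation exceed their 95th percentiles together, relative to the expectation under independence. The series below is synthetic and the thresholds are assumptions; the study itself relies on ERA-Interim, CMIP5 projections and Delft3D-Flow sea levels rather than this toy example.

        import numpy as np

        rng = np.random.default_rng(1)
        n_days = 20 * 365  # roughly 20 years of daily data

        # Hypothetical correlated daily sea level (m) and precipitation (mm) at one site.
        z = rng.multivariate_normal([0, 0], [[1.0, 0.4], [0.4, 1.0]], size=n_days)
        sea_level = 0.5 + 0.3 * z[:, 0]
        precip = np.maximum(0.0, 5.0 + 8.0 * z[:, 1])

        # Define "extreme" days by the 95th percentile of each variable.
        sl_thr = np.percentile(sea_level, 95)
        pr_thr = np.percentile(precip, 95)
        joint = np.mean((sea_level > sl_thr) & (precip > pr_thr))

        # Under independence, 0.05 * 0.05 of days would be jointly extreme.
        expected = 0.05 * 0.05
        print(f"Observed joint exceedance:        {joint:.4f} per day")
        print(f"Expected under independence:      {expected:.4f} per day")
        print(f"Potential compound-flood risk ratio: {joint / expected:.1f}")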

  2. Risk assessment of geo-microbial-associated CO2 Geological Storage

    NASA Astrophysics Data System (ADS)

    Tanaka, A.; Sakamoto, Y.; Higashino, H.; Mayumi, D.; Sakata, S.; Kano, Y.; Nishi, Y.; Nakao, S.

    2014-12-01

    If we maintain favourable conditions for methanogenic archaea during geological CCS, we will be able to abate greenhouse gas emissions and produce natural gas as a natural energy resource at the same time. In an assumed Bio-CCS site, CO2 is injected through one well to abate greenhouse gas emissions and cultivate methanogenic geo-microbes, and CH4 is produced from another well. The procedure is similar to the Enhanced Oil/Gas Recovery (EOR/EGR) operation, but in Bio-CCS the target is the generation and production of methane from a depleted oil/gas reservoir during CO2 abatement. Our project aims to evaluate the basic practicability of Bio-CCS, which cultivates methanogenic geo-microbes within depleted oil/gas reservoirs for geological CCS and produces methane gas as a fuel resource in the course of CO2 abatement for GHG control. To evaluate the total feasibility of the Bio-CCS concept, we have to estimate the CH4 generation volume, the environmental impact over the life cycle of the injection well, and the risk-benefit balance of Bio-CCS. We are modifying the model step by step to represent the oil/gas-CO2-geo-microbe interactions within the reservoir more realistically, including changes in geo-microbe generations, so that we will be able to estimate the methane generation rate more precisely. To evaluate the impacts of accidental events around a Bio-CCS reservoir, we estimated CO2 migration in relation to geological properties and the condition of faults and pathways around the well, using the TOUGH2-CO2 simulator. All findings will be integrated into the assessment: cultivation conditions for methanogenic geo-microbes, a method for estimating methane generation quantities, the environmental impacts of various risk scenarios, and a benefit analysis of a schematic Bio-CCS site.

  3. Automated semantic indexing of figure captions to improve radiology image retrieval.

    PubMed

    Kahn, Charles E; Rubin, Daniel L

    2009-01-01

    We explored automated concept-based indexing of unstructured figure captions to improve retrieval of images from radiology journals. The MetaMap Transfer program (MMTx) was used to map the text of 84,846 figure captions from 9,004 peer-reviewed, English-language articles to concepts in three controlled vocabularies from the UMLS Metathesaurus, version 2006AA. Sampling procedures were used to estimate the standard information-retrieval metrics of precision and recall, and to evaluate the degree to which concept-based retrieval improved image retrieval. Precision was estimated based on a sample of 250 concepts. Recall was estimated based on a sample of 40 concepts. We measured the degree to which concept-based retrieval improved upon keyword-based retrieval in a random sample of 10,000 search queries issued by users of a radiology image search engine. Estimated precision was 0.897 (95% confidence interval, 0.857-0.937). Estimated recall was 0.930 (95% confidence interval, 0.838-1.000). In 5,535 of 10,000 search queries (55%), concept-based retrieval found results not identified by simple keyword matching; in 2,086 searches (21%), more than 75% of the results were found by concept-based search alone. Concept-based indexing of radiology journal figure captions achieved very high precision and recall, and significantly improved image retrieval.
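
    A small Python sketch of how precision and recall point estimates with approximate 95% confidence intervals can be derived from sampled relevance judgments, in the spirit of the sampling procedure described above. The counts used here are illustrative assumptions, not the study's raw judgment data.

        import math

        def proportion_ci(successes: int, n: int, z: float = 1.96):
            """Point estimate and Wald-style 95% CI for a proportion."""
            p = successes / n
            half = z * math.sqrt(p * (1 - p) / n)
            return p, max(0.0, p - half), min(1.0, p + half)

        # Illustrative counts: of 250 sampled mapped concepts, 224 judged correct
        # (precision); of 40 sampled relevant concepts, 37 retrieved (recall).
        precision = proportion_ci(successes=224, n=250)
        recall = proportion_ci(successes=37, n=40)
        print("precision = %.3f (95%% CI %.3f-%.3f)" % precision)
        print("recall    = %.3f (95%% CI %.3f-%.3f)" % recall)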

  4. Smoking and the Risk of Hospitalization for Symptomatic Diverticular Disease: A Population-Based Cohort Study from Sweden.

    PubMed

    Humes, David J; Ludvigsson, Jonas F; Jarvholm, Bengt

    2016-02-01

    Current studies reporting on the risk of smoking and development of symptomatic diverticular disease have reported conflicting results. The aim of this study was to investigate the association between smoking and symptomatic diverticular disease. This is a cohort study: information was derived from the Swedish Construction Workers Cohort (1971-1993). Patients were selected from construction workers in Sweden. The primary outcome measured was the development of symptomatic diverticular disease and complicated diverticular disease (abscess and perforation) as identified in the Swedish Hospital Discharge Register. Adjusted relative risks of symptomatic diverticular disease according to smoking status were estimated by using negative binomial regression analysis. In total, the study included 232,685 men and 14,592 women. During follow-up, 3891 men and 318 women were diagnosed with symptomatic diverticular disease. In men, heavy smokers (≥15 cigarettes a day) had a 1.6-fold increased risk of developing symptomatic diverticular disease compared with nonsmokers (adjusted relative risk, 1.56; 95% CI, 1.42-1.72). There was evidence of a dose-response relationship: moderate smokers and ex-smokers had a 1.4- and 1.2-fold increased risk, respectively, compared with nonsmokers (adjusted relative risk, 1.39; 95% CI, 1.27-1.52 and adjusted relative risk, 1.14; 95% CI, 1.04-1.27). These relationships were similar in women, but the risk estimates were less precise owing to smaller numbers. Male ever-smokers had a 2.7-fold increased risk of developing complicated diverticular disease (perforation/abscess) compared with nonsmokers (adjusted relative risk, 2.73; 95% CI, 1.69-4.41). We were unable to account for other confounding variables such as comorbidity, prescription medication, or lifestyle factors. Smoking is associated with symptomatic diverticular disease in both men and women and with an increased risk of developing complicated diverticular disease.
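
    The following Python sketch shows one way to fit a negative binomial regression with person-time as an exposure term, the general approach named in this record, using statsmodels on synthetic cohort data. The smoking categories, rates and dispersion parameter are assumptions for illustration; this is not the Swedish cohort data or the authors' exact model.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 5000

        # Hypothetical cohort: smoking category, follow-up person-years, event counts.
        smoking = rng.choice(["never", "ex", "moderate", "heavy"], size=n,
                             p=[0.4, 0.2, 0.25, 0.15])
        person_years = rng.uniform(5, 25, size=n)
        rr = {"never": 1.0, "ex": 1.15, "moderate": 1.4, "heavy": 1.6}
        mu = 0.002 * np.array([rr[s] for s in smoking]) * person_years
        events = rng.poisson(mu)  # Poisson generation keeps the sketch simple

        df = pd.DataFrame({"events": events, "smoking": smoking, "py": person_years})
        X = pd.get_dummies(df["smoking"])[["ex", "moderate", "heavy"]].astype(float)
        X = sm.add_constant(X)

        # Negative binomial regression with person-years as exposure;
        # exponentiated coefficients approximate adjusted relative risks vs never-smokers.
        model = sm.GLM(df["events"], X,
                       family=sm.families.NegativeBinomial(alpha=0.5),
                       exposure=df["py"])
        print(np.exp(model.fit().params))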

  5. Sub-sampling genetic data to estimate black bear population size: A case study

    USGS Publications Warehouse

    Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.

    2007-01-01

    Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.
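
    As a rough illustration of how sub-sampling can affect a heterogeneity-type abundance estimate, the Python sketch below applies a simple Chao (1987)-style estimator to synthetic hair-snare capture histories before and after discarding half of the samples. This is a simplified stand-in for the Mh[CHAO] and related models used in the study; the population size, capture probabilities and sub-sampling fraction are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        def chao_mh(capture_counts):
            """Chao (1987)-style heterogeneity estimator from per-individual capture counts."""
            counts = capture_counts[capture_counts > 0]
            s = counts.size                       # distinct individuals detected
            f1 = np.sum(counts == 1)              # captured exactly once
            f2 = np.sum(counts == 2)              # captured exactly twice
            if f2 == 0:
                return s + f1 * (f1 - 1) / 2.0    # bias-corrected form when f2 = 0
            return s + f1**2 / (2.0 * f2)

        # Hypothetical population of 300 bears with heterogeneous capture probabilities
        # over 8 sessions; sub-sample retains each collected sample with probability 0.5.
        N, sessions = 300, 8
        p = rng.beta(2, 18, size=N)               # individual capture probabilities
        full = rng.binomial(1, p[:, None], size=(N, sessions))
        subsampled = full * rng.binomial(1, 0.5, size=full.shape)

        print("Full dataset estimate:   %.0f" % chao_mh(full.sum(axis=1)))
        print("50%% sub-sample estimate: %.0f" % chao_mh(subsampled.sum(axis=1)))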

  6. Nonlinear Quantum Metrology of Many-Body Open Systems

    NASA Astrophysics Data System (ADS)

    Beau, M.; del Campo, A.

    2017-07-01

    We introduce general bounds for the parameter estimation error in nonlinear quantum metrology of many-body open systems in the Markovian limit. Given a k-body Hamiltonian and p-body Lindblad operators, the estimation error of a Hamiltonian parameter using a Greenberger-Horne-Zeilinger state as a probe is shown to scale as N^-[k-(p/2)], surpassing the shot-noise limit for 2k > p + 1. Metrology equivalence between initial product states and maximally entangled states is established for p ≥ 1. We further show that one can estimate the system-environment coupling parameter with precision N^-(p/2), while many-body decoherence enhances the precision to N^-k in the noise-amplitude estimation of a fluctuating k-body Hamiltonian. For the long-range Ising model, we show that the precision of this parameter beats the shot-noise limit when the range of interactions is below a threshold value.

  7. Comparison of the precision of age estimates generated from fin rays, scales, and otoliths of Blue Sucker

    USGS Publications Warehouse

    Acre, Matthew R.; Alejandrez, Celeste; East, Jessica; Massure, Wade A.; Miyazono, S.; Pease, Jessica E.; Roesler, Elizabeth L.; Williams, H.M.; Grabowski, Timothy B.

    2017-01-01

    Evaluating the precision of age estimates generated by different readers and different calcified structures is an important part of generating reliable estimations of growth, recruitment, and mortality for fish populations. Understanding the potential loss of precision associated with using structures harvested without sacrificing individuals, such as scales or fin rays, is particularly important when working with imperiled species, such as Cycleptus elongatus (Blue Sucker). We collected otoliths (lapilli), scales, and the first fin rays of the dorsal, anal, pelvic, and pectoral fins of 9 Blue Suckers. We generated age estimates from each structure by both experienced (n = 5) and novice (n = 4) readers. We found that, independent of the structure used to generate the age estimates, the mean coefficient of variation (CV) of experienced readers was approximately 29% lower than that of novice readers. Further, the mean CV of age estimates generated from pectoral-fin rays, pelvic-fin rays, and scales were statistically indistinguishable and less than those of dorsal-fin rays, anal-fin rays, and otoliths. Anal-, dorsal-, and pelvic-fin rays and scales underestimated age compared to otoliths, but age estimates from pectoral-fin rays were comparable to those from otoliths. Skill level, structure, and fish total-length influenced reader precision between subsequent reads of the same aging structure from a particular fish. Using structures that can be harvested non-lethally to estimate the age of Blue Sucker can provide reliable and reproducible results, similar to those that would be expected from using otoliths. Therefore, we recommend the use of pectoral-fin rays as a non-lethal method to obtain age estimates for Blue Suckers.

  8. Overcoming the risk of inaction from emissions uncertainty in smallholder agriculture

    NASA Astrophysics Data System (ADS)

    Berry, N. J.; Ryan, C. M.

    2013-03-01

    The potential for improving productivity and increasing the resilience of smallholder agriculture, while also contributing to climate change mitigation, has recently received considerable political attention (Beddington et al 2012). Financial support for improving smallholder agriculture could come from performance-based funding including sale of carbon credits or certified commodities, payments for ecosystem services, and nationally appropriate mitigation action (NAMA) budgets, as well as more traditional sources of development and environment finance. Monitoring the greenhouse gas fluxes associated with changes to agricultural practice is needed for performance-based mitigation funding, and efforts are underway to develop tools to quantify mitigation achieved and assess trade-offs and synergies between mitigation and other livelihood and environmental priorities (Olander 2012). High levels of small-scale variability in carbon stocks and emissions in smallholder agricultural systems (Ziegler et al 2012) mean that data-intensive approaches are needed for precise and unbiased mitigation monitoring. The cost of implementing such monitoring programmes is likely to be high, and this introduces the risk that projects will not be developed in areas where there is the greatest need for agricultural improvements, which are likely to correspond with areas where existing data or research infrastructure are lacking. When improvements to livelihoods and food security are expected as co-benefits of performance-based mitigation finance, the risk of inaction is borne by the rural poor as well as the global climate. In situ measurements of carbon accumulation in smallholders' soils are not usually feasible because of the costs associated with sampling in a heterogeneous landscape, although technological advances could improve the situation (Milori et al 2012). Alternatives to in situ measurement are to estimate greenhouse gas fluxes by extrapolating information from existing research to other areas with similar land uses and environmental conditions, or to combine information on land use activities with process-based models that describe expected emissions and carbon accumulation under specified conditions. Unfortunately, long-term studies that have measured biomass and soil organic carbon accumulation in smallholder agriculture are scarce, and default values developed for national-level emissions assessments (IPCC 2006) fail to capture local variability and may not scale linearly, so cannot be applied at the project scale without introducing considerable uncertainty and the potential for bias. If there is reliable information on the agricultural activities and environmental conditions at a project site, process-based models can provide accurate estimations of agricultural greenhouse gas fluxes that capture temporal and spatial variability (Olander 2012), but collecting the necessary data to parameterize and drive the models can be costly and time-consuming. Assessing and monitoring greenhouse gas fluxes in smallholder agriculture therefore involves a balance between the resources required to collect information from thousands of smallholders across large areas, and the accuracy and precision of model predictions.
Accuracy, or the absence of bias, is clearly an important consideration in the quantification of mitigation benefits for performance-based finance, since a bias towards over-estimation of mitigation achieved would risk misallocating limited finance to projects that have not achieved mitigation benefits. Such a bias would also lead to a net increase in emissions if credits were used to offset emissions elsewhere. The accuracy of model predictions is related to uncertainty in model input data, which affects the precision of predictions, and errors in the model structure (Olander 2012). To limit the risk that projects receive credit for mitigation benefits that are not real, a precise-or-conservative approach to carbon accounting has emerged that requires projects to report mitigation benefits to a prescribed level of precision, for example with a 90% confidence interval that is less than 20% of the estimated mitigation benefit; if this level of precision is not reached, then use of the lower confidence limit is encouraged (VCS 2012). This helps to ensure that projects that lack precision in their estimates are biased towards an underestimation of mitigation benefits, which helps limit the risk of increasing net greenhouse gas emissions. It can also mean that finance from the sale of emission reduction certificates is insufficient to support smallholder agricultural projects without donor assistance to cover the cost of project establishment (Seebauer et al 2012). Understanding the mitigation benefits of improving agricultural practice is important for many purposes other than developing carbon offsets, however, and with appropriate accounting approaches risks to smallholders can be reduced and scarce resources channelled to improving land use practices. Less precision is tolerable when making payments for a broad range of ecosystem services, or assessing the impacts of donor support, than it is for industrial carbon offsets. Approaches that have greater uncertainty in expected emission reductions or removals may therefore be more appropriate if there is an equal emphasis on the livelihood and environmental benefits of projects as there is on mitigation benefits. One way to balance the risk of inaction against the need for accuracy is to use process-based models in greenhouse gas accounting and decision support tools, which give users control over the precision and cost of their accounting. Such models can be parameterized and driven using readily available information or best estimates for input data, as well as site-specific environmental and activity data. The potential for bias in model predictions can be limited by making use of appropriate models that are validated against regionally specific data. Although process-based models have been adopted for quantifying mitigation benefits in smallholder agriculture systems (for example Seebauer et al 2012), their use is currently limited to those with specialist knowledge or access to detailed site-specific information. Web-based tools that link existing global, regional, and local environmental data with process-based models (such as RothC (Coleman and Jenkinson 1996), CENTURY (Parton et al 1987), DNDC (Li et al 1994) and DAYCENT (Del Grosso et al 2002)) that have been validated for specific areas allow users to generate initial estimates of the carbon sequestration potential of agricultural systems simply by specifying the location and intervention.
This can support assessments of the feasibility of supporting these interventions through various funding sources. The same tools can also generate accurate, site-specific assessments and monitoring to varying levels of detail, when required, given the inclusion of new data collected in situ. When accounting for greenhouse gases in smallholder agriculture systems, users should be free to decide whether it is worthwhile to invest in collecting input data to estimate mitigation benefits with sufficient precision to meet the requirements for carbon offsets, or if greater uncertainty is tolerable. By using tools that do not require specialist support and accepting estimates of mitigation benefits that are less precise, and not necessarily conservative, those providing performance-based finance can help ensure that a greater proportion of limited budgets are spent on the activities that directly benefit smallholders and that are likely to benefit the global climate. The Small-Holder Agriculture Monitoring and Baseline Assessment methodology and prototype tool (SHAMBA 2012), which has been trialled with fifteen agroforestry and conservation agriculture projects in Malawi and is currently under review for validation under the Plan Vivo Standard (Plan Vivo 2012), provides a proof of this concept and a platform on which greater functionality and flexibility can be built. We hope that this, and other similar initiatives, will deliver approaches to greenhouse gas accounting that reduce risks and maximize benefits to smallholder farmers.
References
    Beddington J R et al 2012 What next for agriculture after Durban? Science 335 289-90
    Coleman K and Jenkinson D S 1996 RothC 26.3: a model for the turnover of carbon in soil Evaluation of Soil Organic Matter Models Using Existing, Long-Term Datasets ed D S Powlson, P Smith and J U Smith (Heidelberg: Springer)
    Del Grosso S J, Ojima D S, Parton W J, Mosier A R, Peterson G A and Schimel D S 2002 Simulated effects of dryland cropping intensification on soil organic matter and greenhouse gas exchanges using the DAYCENT ecosystem model Environ. Pollut. 116 S75-83
    IPCC (Intergovernmental Panel on Climate Change) 2006 Guidelines for National Greenhouse Gas Inventories. Prepared by the National Greenhouse Gas Inventories Programme (Hayama: IGES) (www.ipcc-nggip.iges.or.jp/public/2006gl/index.html)
    Li C, Frolking S and Harris R 1994 Modeling carbon biogeochemistry in agricultural soils Glob. Biogeochem. Cycles 8 237-54
    Milori D M B P, Segini A, Da Silva W T L, Posadas A, Mares V, Quiroz R and Ladislau M N 2012 Emerging techniques for soil carbon measurements Climate Change Mitigation and Agriculture ed E Wollenberg, A Nihart, M-L Tapio-Bistrom and M Greig-Gran (Abingdon: Earthscan)
    Olander L P 2012 Using biogeochemical process models to quantify greenhouse gas mitigation from agricultural management Climate Change Mitigation and Agriculture ed E Wollenberg, A Nihart, M-L Tapio-Bistrom and M Greig-Gran (Abingdon: Earthscan)
    Parton W J, Schimel D S, Cole C V and Ojima D S 1987 Analysis of factors controlling soil organic matter levels in Great Plains grasslands Soil Sci. Soc. Am. J. 51 1173-9
    Plan Vivo 2012 The Plan Vivo Standard for Community Payments for Ecosystem Services Programmes Version 2012 (available from: www.planvivo.org/)
    Seebauer M et al 2012 Carbon accounting for smallholder agricultural soil carbon projects Climate Change Mitigation and Agriculture ed E Wollenberg, A Nihart, M-L Tapio-Bistrom and M Greig-Gran (Abingdon: Earthscan)
    SHAMBA (Small-Holder Agriculture Monitoring and Baseline Assessment) 2012 Project webpage: http://tinyurl.com/shambatool
    VCS (Verified Carbon Standard) 2012 Verified Carbon Standard Requirements Document Version 3.2 (http://v-c-s.org/program-documents)
    Ziegler A D et al 2012 Carbon outcomes of major land-cover transitions in SE Asia: great uncertainties and REDD+ policy implications Glob. Change Biol. 18 3087-99

  9. Communicating Geographical Risks in Crisis Management: The Need for Research.

    PubMed

    French, Simon; Argyris, Nikolaos; Haywood, Stephanie M; Hort, Matthew C; Smith, Jim Q

    2017-10-23

    In any crisis, there is a great deal of uncertainty, often geographical uncertainty or, more precisely, spatiotemporal uncertainty. Examples include the spread of contamination from an industrial accident, drifting volcanic ash, and the path of a hurricane. Estimating spatiotemporal probabilities is usually a difficult task, but that is not our primary concern. Rather, we ask how analysts can communicate spatiotemporal uncertainty to those handling the crisis. We comment on the somewhat limited literature on the representation of spatial uncertainty on maps. We note that many cognitive issues arise and that the potential for confusion is high. We note that in the early stages of handling a crisis, the uncertainties involved may be deep, i.e., difficult or impossible to quantify in the time available. In such circumstances, we suggest the idea of presenting multiple scenarios. © 2017 Society for Risk Analysis.

  10. What matters most: quantifying an epidemiology of consequence

    PubMed Central

    Keyes, Katherine; Galea, Sandro

    2015-01-01

    Risk factor epidemiology has contributed to substantial public health success. In this essay, we argue, however, that the focus on risk factor epidemiology has led the discipline to an ever-increasing focus on the estimation of precise causal effects of exposures on an outcome, at the expense of engagement with the broader causal architecture that produces population health. To conduct an epidemiology of consequence, a systematic effort is needed to engage our science in a critical reflection both on how well, and under what conditions or assumptions, we can assess causal effects, and on what will truly matter most for changing population health. Such an approach changes the priorities and values of the discipline and requires reorientation of how we structure the questions we ask and the methods we use, as well as how we teach epidemiology to our emerging scholars. PMID:25749559

  11. Skeletal Structural Consequences of Reduced Gravity Environments

    NASA Technical Reports Server (NTRS)

    Ruff, Christopher B.

    1999-01-01

    The overall goal of this project is to provide structurally meaningful data on bone loss after exposure to reduced gravity environments so that more precise estimates of fracture risk and the effectiveness of countermeasures in reducing fracture risk can be developed. The project has three major components: (1) measure structural changes in the limb bones of rats subjected to complete and partial nonweightbearing, with and without treatment with ibandronate and periodic full weightbearing; (2) measure structural changes in the limb bones of human bedrest subjects, with and without treatment with alendronate and resistive exercise, and Russian cosmonauts flying on the Mir Space Station; and (3) validate and extend the 2-dimensional structural analyses currently possible in the second project component (bedrest and Mir subjects) using 3-dimensional finite element modeling techniques, and determine actual fracture-producing loads on earth and in space.

  12. Multiparameter Estimation in Networked Quantum Sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  13. Multiparameter Estimation in Networked Quantum Sensors

    DOE PAGES

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    2018-02-21

    We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  14. Where in the world are my field plots? Using GPS effectively in environmental field studies

    USGS Publications Warehouse

    Johnson, Chris E.; Barton, Christopher C.

    2004-01-01

    Global positioning system (GPS) technology is rapidly replacing tape, compass, and traditional surveying instruments as the preferred tool for estimating the positions of environmental research sites. One important problem, however, is that it can be difficult to estimate the uncertainty of GPS-derived positions. Sources of error include various satellite- and site-related factors, such as forest canopy and topographic obstructions. In a case study from the Hubbard Brook Experimental Forest in New Hampshire, hand-held, mapping-grade GPS receivers generally estimated positions with 1–5 m precision in open, unobstructed settings, and 20–30 m precision under forest canopy. Surveying-grade receivers achieved precisions of 10 cm or less, even in challenging terrain. Users can maximize the quality of their GPS measurements by “mission planning” to take advantage of high-quality satellite conditions. Repeated measurements and simultaneous data collection at multiple points can be used to assess accuracy and precision.

  15. Precision Timing of PSR J0437-4715: An Accurate Pulsar Distance, a High Pulsar Mass, and a Limit on the Variation of Newton's Gravitational Constant

    NASA Astrophysics Data System (ADS)

    Verbiest, J. P. W.; Bailes, M.; van Straten, W.; Hobbs, G. B.; Edwards, R. T.; Manchester, R. N.; Bhat, N. D. R.; Sarkissian, J. M.; Jacoby, B. A.; Kulkarni, S. R.

    2008-05-01

    Analysis of 10 years of high-precision timing data on the millisecond pulsar PSR J0437-4715 has resulted in a model-independent kinematic distance based on an apparent orbital period derivative, Ṗb, determined at the 1.5% level of precision (Dk = 157.0 ± 2.4 pc), making it one of the most accurate stellar distance estimates published to date. The discrepancy between this measurement and a previously published parallax distance estimate is attributed to errors in the DE200 solar system ephemerides. The precise measurement of Ṗb allows a limit on the variation of Newton's gravitational constant, |Ġ/G| ≤ 23 × 10^-12 yr^-1. We also constrain any anomalous acceleration along the line of sight to the pulsar to |a⊙/c| ≤ 1.5 × 10^-18 s^-1 at 95% confidence, and derive a pulsar mass, mpsr = 1.76 ± 0.20 M⊙, one of the highest estimates so far obtained.

  16. Estimation of the interior parameters from Mars nutations and from Doppler measurements

    NASA Astrophysics Data System (ADS)

    Yseboodt, M.; Rivoldini, A.; Le Maistre, S.; Dehant, V. M. A.

    2017-12-01

    The presence of a liquid core inside Mars changes the nutations: the nutation amplitudes can be resonantly amplified because of a free mode, called the free core nutation (FCN). We quantify how the internal structure, in particular the size of the core, affects the nutation amplifications and the Doppler observable between a Martian lander and the Earth. Present-day core size estimates suggest that the effect is the largest on the prograde semi-annual and retrograde ter-annual nutation. We solve the inverse problem assuming a given precision on the nutation amplifications provided by an extensive set of geodesy measurements, and we estimate the precision on the core properties. Such measurements will be available in the near future thanks to the geodesy experiments RISE (InSight mission) and LaRa (ExoMars mission). We find that the precision on the core properties is very dependent on the proximity of the FCN period to the ter-annual forcing (-229 days) and the assumed a priori precision on the nutations.

  17. Public health implications of environmental exposures.

    PubMed Central

    De Rosa, C T; Pohl, H R; Williams, M; Ademoyero, A A; Chou, C H; Jones, D E

    1998-01-01

    The Agency for Toxic Substances and Disease Registry (ATSDR) is a public health agency with responsibility for assessing the public health implications associated with uncontrolled releases of hazardous substances into the environment. The biological effects of low-level exposures are a primary concern in these assessments. One of the tools used by the agency for this purpose is the risk assessment paradigm originally outlined and described by the National Academy of Sciences in 1983. Because of its design and inherent concepts, risk assessment has been variously employed by a number of environmental and public health agencies and programs as a means to organize information, as a decision support tool, and as a working hypothesis for biologically based inference and extrapolation. Risk assessment has also been the subject of significant critical review. The ATSDR recognizes the utility of both the qualitative and quantitative conclusions provided by traditional risk assessment, but the agency uses such estimates only in the broader context of professional judgment, internal and external peer review, and extensive public review and comment. This multifaceted approach is consistent with the Council on Environmental Quality's description and use of risk analysis as an organizing construct based on sound biomedical and other scientific judgment in concert with risk assessment to define plausible exposure ranges of concern rather than a single numerical estimate that may convey an artificial sense of precision. In this approach, biomedical opinion, host factors, mechanistic interpretation, molecular epidemiology, and actual exposure conditions are all critically important in evaluating the significance of environmental exposure to hazardous substances. As such, the ATSDR risk analysis approach is a multidimensional endeavor encompassing not only the components of risk assessment but also the principles of biomedical judgment, risk management, and risk communication. Within this framework of risk analysis, the ATSDR may rely on one or more of a number of interrelated principles and approaches to screen, organize information, set priorities, make decisions, and define future research needs and directions. PMID:9539032

  18. Statistical inference for the within-device precision of quantitative measurements in assay validation.

    PubMed

    Liu, Jen-Pei; Lu, Li-Tien; Liao, C T

    2009-09-01

    Intermediate precision is one of the most important characteristics for evaluation of precision in assay validation. The current methods for evaluation of within-device precision recommended by the Clinical and Laboratory Standards Institute (CLSI) guideline EP5-A2 are based on the point estimator. On the other hand, in addition to point estimators, confidence intervals can provide a range for the within-device precision with a probability statement. Therefore, we suggest a confidence interval approach for assessment of the within-device precision. Furthermore, under the two-stage nested random-effects model recommended by the approved CLSI guideline EP5-A2, in addition to the current Satterthwaite's approximation and the modified large sample (MLS) methods, we apply the technique of generalized pivotal quantities (GPQ) to derive the confidence interval for the within-device precision. The data from the approved CLSI guideline EP5-A2 illustrate the applications of the confidence interval approach and a comparison of results among the three methods. Results of a simulation study on the coverage probability and expected length of the three methods are reported. The proposed method of the GPQ-based confidence intervals is also extended to consider the between-laboratories variation for precision assessment.
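
    For illustration, the Python sketch below builds a Satterthwaite-style confidence interval for a within-device standard deviation from a balanced one-stage nested design (runs with replicates). This corresponds to one of the comparison methods named in the record, not the GPQ approach the paper proposes, and the data, run structure and variance components are assumptions.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)

        # Hypothetical EP5-style data: k runs, n replicates per run.
        k, n = 20, 2
        sigma_run, sigma_rep = 0.8, 1.0
        data = (10.0 + rng.normal(0, sigma_run, size=(k, 1))
                + rng.normal(0, sigma_rep, size=(k, n)))

        run_means = data.mean(axis=1)
        msb = n * run_means.var(ddof=1)                                  # between-run mean square
        msw = ((data - run_means[:, None]) ** 2).sum() / (k * (n - 1))   # within-run mean square

        # Within-device variance as a linear combination of mean squares.
        c1, c2 = 1.0 / n, (n - 1.0) / n
        var_wd = c1 * msb + c2 * msw

        # Satterthwaite approximate degrees of freedom and chi-square CI.
        df = var_wd**2 / ((c1 * msb) ** 2 / (k - 1) + (c2 * msw) ** 2 / (k * (n - 1)))
        lo = df * var_wd / stats.chi2.ppf(0.975, df)
        hi = df * var_wd / stats.chi2.ppf(0.025, df)
        print(f"Within-device SD = {np.sqrt(var_wd):.3f} "
              f"(95% CI {np.sqrt(lo):.3f}-{np.sqrt(hi):.3f})")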

  19. Statistical approaches to account for missing values in accelerometer data: Applications to modeling physical activity.

    PubMed

    Yue Xu, Selene; Nelson, Sandahl; Kerr, Jacqueline; Godbole, Suneeta; Patterson, Ruth; Merchant, Gina; Abramson, Ian; Staudenmayer, John; Natarajan, Loki

    2018-04-01

    Physical inactivity is a recognized risk factor for many chronic diseases. Accelerometers are increasingly used as an objective means to measure daily physical activity. One challenge in using these devices is missing data due to device nonwear. We used a well-characterized cohort of 333 overweight postmenopausal breast cancer survivors to examine missing data patterns of accelerometer outputs over the day. Based on these observed missingness patterns, we created pseudo-simulated datasets with realistic missing data patterns. We developed statistical methods to design imputation and variance weighting algorithms to account for missing data effects when fitting regression models. Bias and precision of each method were evaluated and compared. Our results indicated that not accounting for missing data in the analysis yielded unstable estimates in the regression analysis. Incorporating variance weights and/or subject-level imputation improved precision by >50%, compared to ignoring missing data. We recommend that these simple, easy-to-implement statistical tools be used to improve analysis of accelerometer data.
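
    A minimal Python sketch of the general idea of subject-level imputation combined with variance weighting in a regression on accelerometer summaries. It is a simplified stand-in for the authors' algorithms: the wear-time pattern, the choice of weights (share of observed days) and the outcome model are all assumptions.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n_subjects, n_days = 300, 7

        # Hypothetical daily activity minutes; some days missing due to non-wear.
        true_mean = rng.normal(30, 8, size=n_subjects)
        daily = true_mean[:, None] + rng.normal(0, 10, size=(n_subjects, n_days))
        missing = rng.random((n_subjects, n_days)) < rng.uniform(0.0, 0.5, size=(n_subjects, 1))
        missing[:, 0] = False                     # keep at least one observed day per subject
        daily[missing] = np.nan

        # Subject-level imputation: each subject's mean over observed days.
        n_obs = np.sum(~np.isnan(daily), axis=1)
        activity = np.nanmean(daily, axis=1)

        # Variance weights: subjects with more observed days get more weight.
        weights = n_obs / n_days

        # Outcome (e.g., BMI) depends on true activity; compare unweighted and weighted fits.
        bmi = 32.0 - 0.08 * true_mean + rng.normal(0, 1.5, size=n_subjects)
        X = sm.add_constant(activity)
        print("OLS slope:", sm.OLS(bmi, X).fit().params[1])
        print("WLS slope:", sm.WLS(bmi, X, weights=weights).fit().params[1])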

  20. Accuracy and precision of two indirect methods for estimating canopy fuels

    Treesearch

    Abran Steele-Feldman; Elizabeth Reinhardt; Russell A. Parsons

    2006-01-01

    We compared the accuracy and precision of digital hemispherical photography and the LI-COR LAI-2000 plant canopy analyzer as predictors of canopy fuels. We collected data on 12 plots in western Montana under a variety of lighting and sky conditions, and used a variety of processing methods to compute estimates. Repeated measurements from each method displayed...

  1. Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation

    NASA Astrophysics Data System (ADS)

    Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.

    2017-05-01

    In the classical photogrammetric processing pipeline, the automatic tie point extraction plays a key role in the quality of achieved results. The image tie points are crucial to pose estimation and have a significant influence on the precision of calculated orientation parameters. Therefore, both relative and absolute orientations of the 3D model can be affected. By improving the precision of image tie point measurement, one can enhance the quality of image orientation. The quality of image tie points is under the influence of several factors such as the multiplicity, the measurement precision and the distribution in 2D images as well as in 3D scenes. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie point extraction is limited when only image information can be exploited. Hence, we propose here a method which improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline. The result of a first iteration is used as a priori information to guide the extraction of new tie points with better quality. Evaluated with multiple case studies, the proposed method shows its validity and its high potential for precision improvement.

  2. Covariate Imbalance and Precision in Measuring Treatment Effects

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2011-01-01

    Covariate adjustment can increase the precision of estimates by removing unexplained variance from the error in randomized experiments, although chance covariate imbalance tends to counteract the improvement in precision. The author develops an easy measure to examine chance covariate imbalance in randomization by standardizing the average…

  3. Revised estimates of the risk of fetal loss following a prenatal diagnosis of trisomy 13 or trisomy 18.

    PubMed

    Cavadino, Alana; Morris, Joan K

    2017-04-01

    Edwards syndrome (trisomy 18) and Patau syndrome (trisomy 13) both have high natural fetal loss rates. The aim of this study was to provide estimates of these fetal loss rates by single gestational week of age using data from the National Down Syndrome Cytogenetic Register. Data from all pregnancies with Edwards or Patau syndrome that were prenatally detected in England and Wales from 2004 to 2014 were analyzed using Kaplan-Meier survival estimates. Pregnancies entered the analysis at the gestational age at diagnosis and were considered "under observation" until the gestational age at outcome. There were 4088 prenatal diagnoses of trisomy 18 and 1471 of trisomy 13 in the analysis. For trisomy 18, 30% (95% CI: 25-34%) of viable fetuses at 12 weeks will result in a live birth and at 39 weeks' gestation 67% (60-73%) will result in a live birth. For trisomy 13 the survival is 50% (41-58%) at 12 weeks and 84% (73-90%) at 39 weeks. There was no significant difference in survival between males and females when diagnosed at 12 weeks for trisomy 18 (P-value = 0.27) or trisomy 13 (P-value = 0.47). This paper provides the most precise gestational age-specific estimates currently available for the risk of fetal loss in trisomy 13 and trisomy 18 pregnancies in a general population. © 2017 Wiley Periodicals, Inc.
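
    The delayed-entry survival idea described here can be sketched with a small hand-rolled Kaplan-Meier estimator in Python, where each pregnancy contributes to the risk set only from its gestational week at diagnosis. The toy entry/exit/event values below are assumptions for illustration, not register data.

        import numpy as np

        def km_delayed_entry(entry, exit_, event):
            """Kaplan-Meier survival with left-truncated (delayed) entry times."""
            entry, exit_, event = map(np.asarray, (entry, exit_, event))
            times = np.unique(exit_[event == 1])
            surv, curve = 1.0, []
            for t in times:
                at_risk = np.sum((entry < t) & (exit_ >= t))   # under observation at week t
                losses = np.sum((exit_ == t) & (event == 1))   # fetal losses at week t
                if at_risk > 0:
                    surv *= 1.0 - losses / at_risk
                curve.append((t, surv))
            return curve

        # Hypothetical pregnancies: gestational week at diagnosis (entry), week at
        # outcome (exit), and event = 1 for fetal loss, 0 for live birth (censored).
        entry = [12, 12, 13, 16, 20, 22, 12, 14]
        exit_ = [18, 39, 25, 39, 31, 39, 15, 39]
        event = [1, 0, 1, 0, 1, 0, 1, 0]

        for week, s in km_delayed_entry(entry, exit_, event):
            print(f"week {week}: P(no fetal loss by week {week}) = {s:.2f}")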

  4. A site-specific farm-scale GIS approach for reducing groundwater contamination by pesticides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mulla, D.J.; Perillo, C.A.; Cogger, C.G.

    1996-05-01

    It has been proposed to vary pesticide applications by patterns in surface organic C to reduce the potential for contamination of groundwater. To evaluate the feasibility of this {open_quotes}precision farming{close_quotes} approach, data for carbofuran concentrations from 57 locations sampled to a depth of 180 cm were fit to the convective-dispersive equation. Fitted values for pore water velocity (v) ranged from 0.17 to 1.92 cm d{sup -1}, with a mean of 0.68 cm d{sup -1}. Values for dispersion (D) ranged from 0.29 to 13.35 cm{sup 2} d{sup -1}, with a mean of 2.57. With this data, risks of pesticide leaching weremore » estimated at each location using the attenuation factor (AF) model, and a dispersion based leached factor (LF) model. Using the AF model gave two locations with a very high pesticide leaching risk, 6 with a low risk, and 2 with no risk. Using the LF model, 6 had a high risk, 13 had a medium risk, 18 had a low risk, and 20 had no risk. Pesticide leaching risks were not correlated with any measured surface soil properties. Much of the variability in leaching risk was because of velocity variations, so it would be incorrect to assume that surface organic C content controls the leaching risk. 30 refs., 1 fig., 3 tabs.« less

  5. Decorrelation of the true and estimated classifier errors in high-dimensional settings.

    PubMed

    Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R

    2007-01-01

    The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity which refers to the precision of error estimation is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three error estimators commonly used (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known-feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend, but differing for different models.
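
    The correlation between true and cross-validation-estimated errors discussed above can be explored with a short Python/scikit-learn sketch: draw many small training samples from a synthetic two-class model, estimate the error by leave-one-out cross-validation, compute a "true" error on a large independent test set, and correlate the two. This is one illustrative setting only (LDA, no feature selection), with made-up dimensions and class separation, not the experimental conditions of the study.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(8)

        def draw(n, dim, delta=0.6):
            """Two Gaussian classes separated by delta along the first axis."""
            y = rng.integers(0, 2, size=n)
            X = rng.normal(0, 1, size=(n, dim))
            X[:, 0] += delta * y
            return X, y

        true_err, est_err = [], []
        X_test, y_test = draw(5000, dim=20)        # large sample for the "true" error
        for _ in range(100):
            X, y = draw(40, dim=20)                # small training sample
            clf = LinearDiscriminantAnalysis()
            # Leave-one-out cross-validation estimate of the error rate.
            est = 1.0 - cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
            clf.fit(X, y)
            true_e = 1.0 - clf.score(X_test, y_test)
            est_err.append(est)
            true_err.append(true_e)

        print("correlation(true, estimated):", np.corrcoef(true_err, est_err)[0, 1])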

  6. Preventing Type 2 Diabetes Mellitus: A Call for Personalized Intervention

    PubMed Central

    Glauber, Harry; Karnieli, Eddy

    2013-01-01

    In parallel with the rising prevalence of obesity worldwide, especially in younger people, there has been a dramatic increase in recent decades in the incidence and prevalence of metabolic consequences of obesity, in particular prediabetes and type 2 diabetes mellitus (DM2). Although approximately one-third of US adults now meet one or more diagnostic criteria for prediabetes, only a minority of those so identified as being at risk for DM2 actually progress to diabetes, and some may regress to normal status. Given the uncertain prognosis of prediabetes, it is not clear who is most likely to benefit from lifestyle change or medication interventions that are known to reduce DM2 risk. We review the many factors known to influence risk of developing DM2 and summarize treatment trials demonstrating the possibility of preventing DM2. Applying the concepts of personalized medicine and the potential of “big data” approaches to analysis of massive amounts of routinely gathered clinical and laboratory data from large populations, we call for the development of tools to more precisely estimate individual risk of DM2. PMID:24355893

  7. Non-convex Statistical Optimization for Sparse Tensor Graphical Model

    PubMed Central

    Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang

    2016-01-01

    We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, a result not established in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459

  8. Resimulation of noise: a precision estimator for least square error curve-fitting tested for axial strain time constant imaging

    NASA Astrophysics Data System (ADS)

    Nair, S. P.; Righetti, R.

    2015-05-01

    Recent elastography techniques focus on imaging information on properties of materials which can be modeled as viscoelastic or poroelastic. These techniques often require the fitting of temporal strain data, acquired from either a creep or stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain versus time relationships for tissues undergoing creep compression have a non-linear relationship. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method, which we call Resimulation of Noise (RoN), to provide a measure of reliability for non-linear LSE parameter estimates. RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While the RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
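
    A rough sketch of the resimulation idea under simple assumptions (a mono-exponential strain model, Gaussian noise, and invented parameter values; not the authors' algorithm): fit the measured curve once, estimate the residual noise level, then repeatedly add simulated noise to the fitted curve and refit to obtain the spread of the time-constant estimate.

        import numpy as np
        from scipy.optimize import curve_fit

        def strain_model(t, a, tau, c):
            return a * np.exp(-t / tau) + c

        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 10.0, 200)
        data = strain_model(t, 1.0, 2.5, 0.1) + rng.normal(0.0, 0.05, t.size)  # one acquisition

        popt, _ = curve_fit(strain_model, t, data, p0=(1.0, 1.0, 0.0))  # fit the measured data
        sigma = np.std(data - strain_model(t, *popt))                   # residual noise level

        taus = []
        for _ in range(500):   # resimulate noise around the fitted curve and refit
            fake = strain_model(t, *popt) + rng.normal(0.0, sigma, t.size)
            taus.append(curve_fit(strain_model, t, fake, p0=popt)[0][1])

        print(f"tau = {popt[1]:.3f}, resimulation-estimated SD = {np.std(taus):.3f}")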

  9. Comparison of Time-to-First Event and Recurrent Event Methods in Randomized Clinical Trials.

    PubMed

    Claggett, Brian; Pocock, Stuart; Wei, L J; Pfeffer, Marc A; McMurray, John J V; Solomon, Scott D

    2018-03-27

    Background: Most Phase-3 trials feature time-to-first event endpoints for their primary and/or secondary analyses. In chronic diseases where a clinical event can occur more than once, recurrent-event methods have been proposed to more fully capture disease burden and have been assumed to improve statistical precision and power compared to conventional "time-to-first" methods. Methods: To better characterize factors that influence statistical properties of recurrent-events and time-to-first methods in the evaluation of randomized therapy, we repeatedly simulated trials with 1:1 randomization of 4000 patients to active vs control therapy, with true patient-level risk reduction of 20% (i.e. RR=0.80). For patients who discontinued active therapy after a first event, we assumed their risk reverted subsequently to their original placebo-level risk. Through simulation, we varied a) the degree of between-patient heterogeneity of risk and b) the extent of treatment discontinuation. Findings were compared with those from actual randomized clinical trials. Results: As the degree of between-patient heterogeneity of risk was increased, both time-to-first and recurrent-events methods lost statistical power to detect a true risk reduction and confidence intervals widened. The recurrent-events analyses continued to estimate the true RR=0.80 as heterogeneity increased, while the Cox model produced estimates that were attenuated. The power of recurrent-events methods declined as the rate of study drug discontinuation post-event increased. Recurrent-events methods provided greater power than time-to-first methods in scenarios where drug discontinuation was ≤30% following a first event, lesser power with drug discontinuation rates of ≥60%, and comparable power otherwise. We confirmed in several actual trials in chronic heart failure that treatment effect estimates were attenuated when estimated via the Cox model and that increased statistical power from recurrent-events methods was most pronounced in trials with lower treatment discontinuation rates. Conclusions: We find that the statistical power of both recurrent-events and time-to-first methods is reduced by increasing heterogeneity of patient risk, a parameter not included in conventional power and sample size formulas. Data from real clinical trials are consistent with simulation studies, confirming that the greatest statistical gains from use of recurrent-events methods occur in the presence of high patient heterogeneity and low rates of study drug discontinuation.
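
    The following toy simulation (our own, heavily simplified stand-in for the Cox and recurrent-event analyses described above) conveys the main mechanism: gamma-distributed patient frailty creates between-patient heterogeneity, active therapy reduces the event rate by 20%, and power is compared for a test on total event counts versus a test on the first-event indicator. All parameter values are illustrative.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n_per_arm, base_rate, rr, theta, n_sim = 2000, 0.5, 0.8, 1.0, 500
        power_recurrent = power_first = 0
        for _ in range(n_sim):
            frail_c = rng.gamma(1 / theta, theta, n_per_arm)   # frailty: mean 1, variance theta
            frail_t = rng.gamma(1 / theta, theta, n_per_arm)
            y_c = rng.poisson(base_rate * frail_c)              # control-arm event counts
            y_t = rng.poisson(base_rate * rr * frail_t)         # active-arm event counts
            # "recurrent events": two-sample test on total event counts
            power_recurrent += stats.ttest_ind(y_t, y_c).pvalue < 0.05
            # "time to first event": two-sample test on the any-event indicator
            power_first += stats.ttest_ind((y_t > 0).astype(float),
                                           (y_c > 0).astype(float)).pvalue < 0.05

        print("recurrent-count power:", power_recurrent / n_sim)
        print("first-event power:   ", power_first / n_sim)

    In practice one would fit a Cox model and a dedicated recurrent-event model (for example a negative binomial or LWYY analysis) rather than the simple tests used here; the sketch only illustrates how heterogeneity and the choice of endpoint interact.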

  10. Perspectives of decision-making and estimation of risk in populations exposed to low levels of ionizing radiations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fabrikant, J.I.

    1979-01-01

    The setting of any permissible radiation level or guide remains essentially an arbitrary procedure. Based on the radiation risk estimates derived, any lack of precision diminishes neither the need for setting public health policies nor the conclusion that such risks are extremely small when compared with those of available alternative options and with those normally accepted by society as the hazards of everyday life. When compared with the benefits that society has established as goals derived from the necessary activities of medical care and energy production, it is apparent that society must establish appropriate standards and seek appropriate controlling procedures which continue to assure that its needs are being met with the lowest possible risks. This implies continuing decision-making processes in which risk-benefit and cost-effectiveness assessments must be taken into account. Much of the practical information necessary for determination of radiation protection standards for public health policy is still lacking. It is now assumed that any exposure to radiation at low levels of dose carries some risk of deleterious effects. However, how low this level may be, and the probability or magnitude of the risk, are still not known. Radiation and public health thus become a societal and political problem and not solely a scientific one. Our best scientific knowledge and our best scientific advice are essential for the protection of the public health, for the effective application of new technologies in medicine, and for guidance in the production of energy in industry. Unless man wishes to dispense with those activities which inevitably involve exposure to low levels of ionizing radiations, he must recognize that some degree of risk, however small, exists. In the evaluation of such risks from radiation, it is necessary to limit the radiation exposure to a level at which the risk is acceptable both to the individual and to society.

  11. Alternative outcome definitions and their effect on the performance of methods for observational outcome studies.

    PubMed

    Reich, Christian G; Ryan, Patrick B; Schuemie, Martijn J

    2013-10-01

    A systematic risk identification system has the potential to test marketed drugs for important Health Outcomes of Interest (HOI). For each HOI, multiple definitions are used in the literature, and some of them are validated for certain databases. However, little is known about the effect of different definitions on the ability of methods to estimate their association with medical products. Alternative definitions of HOI were studied for their effect on the performance of analytical methods in observational outcome studies. A set of alternative definitions for three HOI was defined based on literature review and clinical diagnosis guidelines: acute kidney injury, acute liver injury and acute myocardial infarction. The definitions varied by the choice of diagnostic codes and the inclusion of procedure codes and lab values. They were then used to empirically study an array of analytical methods with various analytical choices in four observational healthcare databases. The methods were executed against predefined drug-HOI pairs to generate an effect estimate and standard error for each pair. These test cases included positive controls (active ingredients with evidence to suspect a positive association with the outcome) and negative controls (active ingredients with no evidence to expect an effect on the outcome). Three performance metrics were used: (i) the area under the receiver operating characteristic (ROC) curve (AUC) as a measure of a method's ability to distinguish between positive and negative test cases, (ii) bias, estimated from the distribution of observed effect estimates for the negative test pairs, for which the true relative risk can be assumed to be 1, and (iii) the minimal detectable relative risk (MDRR) as a measure of whether there is sufficient power to generate effect estimates. In the three outcomes studied, different definitions of outcomes show comparable ability to differentiate true from false control cases (AUC) and a similar bias estimation. However, broader definitions generating larger outcome cohorts allowed more drugs to be studied with sufficient statistical power. Broader definitions are preferred since they allow studying drugs with lower prevalence than the more precise or narrow definitions while showing comparable performance characteristics in differentiation of signal vs. no signal as well as effect size estimation.
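
    A small sketch of how the first two metrics can be computed once effect estimates for positive and negative control pairs are in hand; the estimates below are simulated placeholders, not study results.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(4)
        rr_negative = np.exp(rng.normal(0.05, 0.20, 40))    # illustrative estimates only
        rr_positive = np.exp(rng.normal(0.70, 0.25, 20))

        labels = np.r_[np.zeros(rr_negative.size), np.ones(rr_positive.size)]
        scores = np.log(np.r_[rr_negative, rr_positive])
        print("AUC :", roc_auc_score(labels, scores))        # separation of positives/negatives
        print("bias:", np.log(rr_negative).mean())           # ~0 means unbiased on null pairs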

  12. Incidence of testicular cancer and occupation among Swedish men gainfully employed in 1970.

    PubMed

    Pollán, M; Gustavsson, P; Cano, M I

    2001-11-01

    To estimate occupation-specific risk of seminomas and nonseminoma subtypes of testicular cancer among Swedish men gainfully employed in 1970 over the period 1971-1989. Age-period standardized incidence ratios were computed in a dataset linking cancer diagnoses from the Swedish national cancer register to occupational and demographic data obtained in the census in 1970. Log-linear Poisson models were fitted, allowing for geographical area and town size. Taking occupational sector as a proxy for socioeconomic status, occupational risks were recalculated using intra-sector analyses, where the reference group comprised other occupations in the same sector only. Risk estimators per occupation were also computed for men reporting the same occupation in 1960 and 1970, a more specifically exposed group. Seminomas and nonseminomas showed a substantial geographical variation. The association between germ-cell testicular tumors and high socioeconomic groups was found mainly for nonseminomas. Positive associations with particular occupations were more evident for seminomas, for which railway stationmasters, metal annealers and temperers, precision toolmakers, watchmakers, construction smiths, and typographers and lithographers exhibited a risk excess. Concrete and construction worker was the only occupation consistently associated with nonseminomas. Among the many occupations studied, our results corroborate the previously reported increased risk among metal workers, specifically related to seminomatous tumors in this study. Our results confirm the geographical and socioeconomic differences in the incidence of testicular tumors. These factors should be accounted for in occupational studies. The different pattern of occupations related to seminomas and nonseminomas supports the need to study these tumors separately.

  13. Predicting inpatient complications from cerebral aneurysm clipping: the Nationwide Inpatient Sample 2005-2009.

    PubMed

    Bekelis, Kimon; Missios, Symeon; MacKenzie, Todd A; Desai, Atman; Fischer, Adina; Labropoulos, Nicos; Roberts, David W

    2014-03-01

    Precise delineation of individualized risks of morbidity and mortality is crucial in decision making in cerebrovascular neurosurgery. The authors attempted to create a predictive model of complications in patients undergoing cerebral aneurysm clipping (CAC). The authors performed a retrospective cohort study of patients who had undergone CAC in the period from 2005 to 2009 and were registered in the Nationwide Inpatient Sample (NIS) database. A model for outcome prediction based on preoperative individual patient characteristics was developed. Of the 7651 patients in the NIS who underwent CAC, 3682 (48.1%) had presented with unruptured aneurysms and 3969 (51.9%) with subarachnoid hemorrhage. The respective inpatient postoperative risks for death, unfavorable discharge, stroke, treated hydrocephalus, cardiac complications, deep vein thrombosis, pulmonary embolism, and acute renal failure were 0.7%, 15.3%, 5.3%, 1.5%, 1.3%, 0.6%, 2.0%, and 0.1% for those with unruptured aneurysms and 11.5%, 52.8%, 5.5%, 39.2%, 1.7%, 2.8%, 2.7%, and 0.8% for those with ruptured aneurysms. Multivariate analysis identified risk factors independently associated with the above outcomes. A validated model for outcome prediction based on individual patient characteristics was developed. The accuracy of the model was estimated using the area under the receiver operating characteristic curve, and it was found to have good discrimination. The featured model can provide individualized estimates of the risks of postoperative complications based on preoperative conditions and can potentially be used as an adjunct in decision making in cerebrovascular neurosurgery.

  14. Use of direct versus indirect preparation data for assessing risk associated with airborne exposures at asbestos-contaminated sites.

    PubMed

    Goldade, Mary Patricia; O'Brien, Wendy Pott

    2014-01-01

    At asbestos-contaminated sites, exposure assessment requires measurement of airborne asbestos concentrations; however, the choice of preparation steps employed in the analysis has been debated vigorously among members of the asbestos exposure and risk assessment communities for many years. This study finds that the choice of preparation technique used in estimating airborne amphibole asbestos exposures for risk assessment is generally not a significant source of uncertainty. Conventionally, the indirect preparation method has been less preferred by some because it is purported to result in false elevations in airborne asbestos concentrations, when compared to direct analysis of air filters. However, airborne asbestos sampling in non-occupational settings is challenging because non-asbestos particles can interfere with the asbestos measurements, sometimes necessitating analysis via indirect preparation. To evaluate whether exposure concentrations derived from direct versus indirect preparation techniques differed significantly, paired measurements of airborne Libby-type amphibole, prepared using both techniques, were compared. For the evaluation, 31 paired direct and indirect preparations originating from the same air filters were analyzed for Libby-type amphibole using transmission electron microscopy. On average, the total Libby-type amphibole airborne exposure concentration was 3.3 times higher for indirect preparation analysis than for its paired direct preparation analysis (standard deviation = 4.1), a difference which is not statistically significant (p = 0.12, two-tailed, Wilcoxon signed rank test). The results suggest that the magnitude of the difference may be larger for shorter particles. Overall, neither preparation technique (direct or indirect) preferentially generates more precise and unbiased data for airborne Libby-type amphibole concentration estimates. The indirect preparation method is reasonable for estimating Libby-type amphibole exposure and may be necessary given the challenges of sampling in environmental settings. Relative to the larger context of uncertainties inherent in the risk assessment process, uncertainties associated with the use of airborne Libby-type amphibole exposure measurements derived from indirect preparation analysis are low. Use of exposure measurements generated by either direct or indirect preparation analyses is reasonable to estimate Libby-type Amphibole exposures in a risk assessment.
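
    For reference, the paired comparison described above can be reproduced in outline as follows; the concentrations are invented placeholders, not the study's measurements.

        import numpy as np
        from scipy.stats import wilcoxon

        # paired airborne concentrations from the same filters (illustrative values)
        direct   = np.array([0.02, 0.10, 0.01, 0.31, 0.05, 0.12, 0.08, 0.03])
        indirect = np.array([0.05, 0.41, 0.02, 0.66, 0.03, 0.50, 0.30, 0.04])

        stat, p = wilcoxon(indirect, direct)          # two-sided signed-rank test on the pairs
        print("median paired difference:", np.median(indirect - direct))
        print("Wilcoxon signed-rank p  :", p)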

  15. Air pollution and survival within the Washington University-EPRI veterans cohort: risks based on modeled estimates of ambient levels of hazardous and criteria air pollutants.

    PubMed

    Lipfert, Frederick W; Wyzga, Ronald E; Baty, Jack D; Miller, J Philip

    2009-04-01

    For this paper, we considered relationships between mortality, vehicular traffic density, and ambient levels of 12 hazardous air pollutants, elemental carbon (EC), oxides of nitrogen (NOx), sulfur dioxide (SO2), and sulfate (SO4(2-)). These pollutant species were selected as markers for specific types of emission sources, including vehicular traffic, coal combustion, smelters, and metal-working industries. Pollutant exposures were estimated using emissions inventories and atmospheric dispersion models. We analyzed associations between county ambient levels of these pollutants and survival patterns among approximately 70,000 U.S. male veterans by mortality period (1976-2001 and subsets), type of exposure model, and traffic density level. We found significant associations between all-cause mortality and traffic-related air quality indicators and with traffic density per se, with stronger associations for benzene, formaldehyde, diesel particulate, NOx, and EC. The maximum effect on mortality for all cohort subjects during the 26-yr follow-up period is approximately 10%, but most of the pollution-related deaths in this cohort occurred in the higher-traffic counties, where excess risks approach 20%. However, mortality associations with diesel particulates are similar in high- and low-traffic counties. Sensitivity analyses show risks decreasing slightly over time and minor differences between linear and logarithmic exposure models. Two-pollutant models show stronger risks associated with specific traffic-related pollutants than with traffic density per se, although traffic density retains statistical significance in most cases. We conclude that tailpipe emissions of both gases and particles are among the most significant and robust predictors of mortality in this cohort and that most of those associations have weakened over time. However, we have not evaluated possible contributions from road dust or traffic noise. Stratification by traffic density level suggests the presence of response thresholds, especially for gaseous pollutants. Because of their wider distributions of estimated exposures, risk estimates based on emissions and atmospheric dispersion models tend to be more precise than those based on local ambient measurements.

  16. Automated Semantic Indexing of Figure Captions to Improve Radiology Image Retrieval

    PubMed Central

    Kahn, Charles E.; Rubin, Daniel L.

    2009-01-01

    Objective: We explored automated concept-based indexing of unstructured figure captions to improve retrieval of images from radiology journals. Design: The MetaMap Transfer program (MMTx) was used to map the text of 84,846 figure captions from 9,004 peer-reviewed, English-language articles to concepts in three controlled vocabularies from the UMLS Metathesaurus, version 2006AA. Sampling procedures were used to estimate the standard information-retrieval metrics of precision and recall, and to evaluate the degree to which concept-based retrieval improved image retrieval. Measurements: Precision was estimated based on a sample of 250 concepts. Recall was estimated based on a sample of 40 concepts. The authors measured the impact of concept-based retrieval to improve upon keyword-based retrieval in a random sample of 10,000 search queries issued by users of a radiology image search engine. Results: Estimated precision was 0.897 (95% confidence interval, 0.857–0.937). Estimated recall was 0.930 (95% confidence interval, 0.838–1.000). In 5,535 of 10,000 search queries (55%), concept-based retrieval found results not identified by simple keyword matching; in 2,086 searches (21%), more than 75% of the results were found by concept-based search alone. Conclusion: Concept-based indexing of radiology journal figure captions achieved very high precision and recall, and significantly improved image retrieval. PMID:19261938

  17. Fat fraction bias correction using T1 estimates and flip angle mapping.

    PubMed

    Yang, Issac Y; Cui, Yifan; Wiens, Curtis N; Wade, Trevor P; Friesen-Waldner, Lanette J; McKenzie, Charles A

    2014-01-01

    To develop a new method of reducing T1 bias in proton density fat fraction (PDFF) measured with iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL). PDFF maps reconstructed from high flip angle IDEAL measurements were simulated and acquired from phantoms and volunteer L4 vertebrae. T1 bias was corrected using a priori T1 values for water and fat, both with and without flip angle correction. Signal-to-noise ratio (SNR) maps were used to measure precision of the reconstructed PDFF maps. PDFF measurements acquired using small flip angles were then compared to both sets of corrected large flip angle measurements for accuracy and precision. Simulations show similar results in PDFF error between small flip angle measurements and corrected large flip angle measurements as long as T1 estimates were within one standard deviation from the true value. Compared to low flip angle measurements, phantom and in vivo measurements demonstrate better precision and accuracy in PDFF measurements if images were acquired at a high flip angle, with T1 bias corrected using T1 estimates and flip angle mapping. T1 bias correction of large flip angle acquisitions using estimated T1 values with flip angle mapping yields fat fraction measurements of similar accuracy and superior precision compared to low flip angle acquisitions. Copyright © 2013 Wiley Periodicals, Inc.

  18. LAI-2000 Accuracy, Precision, and Application to Visual Estimation of Leaf Area Index of Loblolly Pine

    Treesearch

    Jason A. Gatch; Timothy B. Harrington; James P. Castleberry

    2002-01-01

    Leaf area index (LAI) is an important parameter of forest stand productivity that has been used to diagnose stand vigor and potential fertilizer response of southern pines. The LAI-2000 was tested for its ability to provide accurate and precise estimates of LAI of loblolly pine (Pinus taeda L.). To test instrument accuracy, regression was used to...

  19. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.

  20. Coherence in quantum estimation

    NASA Astrophysics Data System (ADS)

    Giorda, Paolo; Allegra, Michele

    2018-01-01

    The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads us to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory, i.e., the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space into two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ-states. Finally, we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior proper of a large class of quantum phase transitions.

  1. Accurate reconstruction of viral quasispecies spectra through improved estimation of strain richness

    PubMed Central

    2015-01-01

    Background Estimating the number of different species (richness) in a mixed microbial population has been a main focus in metagenomic research. Existing methods of species richness estimation ride on the assumption that the reads in each assembled contig correspond to only one of the microbial genomes in the population. This assumption and the underlying probabilistic formulations of existing methods are not useful for quasispecies populations where the strains are highly genetically related. The lack of knowledge on the number of different strains in a quasispecies population is observed to hinder the precision of existing Viral Quasispecies Spectrum Reconstruction (QSR) methods due to the uncontrolled reconstruction of a large number of in silico false positives. In this work, we formulated a novel probabilistic method for strain richness estimation specifically targeting viral quasispecies. By using this approach we improved our recently proposed spectrum reconstruction pipeline ViQuaS to achieve higher levels of precision in reconstructed quasispecies spectra without compromising the recall rates. We also discuss how one other existing popular QSR method named ShoRAH can be improved using this new approach. Results On benchmark data sets, our estimation method provided accurate richness estimates (< 0.2 median estimation error) and improved the precision of ViQuaS by 2%-13% and F-score by 1%-9% without compromising the recall rates. We also demonstrate that our estimation method can be used to improve the precision and F-score of ShoRAH by 0%-7% and 0%-5% respectively. Conclusions The proposed probabilistic estimation method can be used to estimate the richness of viral populations with a quasispecies behavior and to improve the accuracy of the quasispecies spectra reconstructed by the existing methods ViQuaS and ShoRAH in the presence of a moderate level of technical sequencing errors. Availability http://sourceforge.net/projects/viquas/ PMID:26678073

  2. On improving the speed and reliability of T2-Relaxation-Under-Spin-Tagging (TRUST) MRI

    PubMed Central

    Xu, Feng; Uh, Jinsoo; Liu, Peiying; Lu, Hanzhang

    2011-01-01

    A T2-Relaxation-Under-Spin-Tagging (TRUST) technique was recently developed to estimate cerebral blood oxygenation, providing potential for non-invasive assessment of the brain's oxygen consumption. A limitation of the current sequence is the need for long TR, as shorter TR causes an over-estimation in blood R2. The present study proposes a post-saturation TRUST scheme obtained by placing a non-selective 90° pulse after the signal acquisition to reset magnetization in the whole brain. This scheme was found to eliminate estimation bias at a slight cost of precision. To improve the precision, the TE of the sequence was optimized and it was found that a modest TE shortening of 3.4 ms can reduce the estimation error by 49%. We recommend the use of the post-saturation TRUST sequence with a TR of 3000 ms and a TE of 3.6 ms, which allows the determination of global venous oxygenation with a scan duration of 1 minute 12 seconds and an estimation precision of ±1% (in units of oxygen saturation percentage). PMID:22127845

  3. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
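
    The two catch-rate estimators compared above are easy to state in code; the trip data below are simulated for illustration only.

        import numpy as np

        rng = np.random.default_rng(5)
        hours = rng.gamma(2.0, 1.5, 500)                  # trip lengths (h)
        catch = rng.poisson(0.4 * hours)                  # per-trip catch at 0.4 fish/h

        rom = catch.sum() / hours.sum()                   # ratio of means: total catch / total effort
        mor = np.mean(catch / hours)                      # mean of ratios: average per-trip rate
        mor_excl = np.mean((catch / hours)[hours > 0.5])  # MOR excluding short (<=0.5 h) trips
        print(f"ROM={rom:.3f}  MOR={mor:.3f}  MOR(>0.5 h)={mor_excl:.3f}")

    The ROM estimator weights each trip by its effort, which is one reason it was the less biased estimator in the study when trips were incomplete or catch rates varied with trip length.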

  4. HIV and cancer registry linkage identifies a substantial burden of cancers in persons with HIV in India

    PubMed Central

    Godbole, Sheela V.; Nandy, Karabi; Gauniyal, Mansi; Nalawade, Pallavi; Sane, Suvarna; Koyande, Shravani; Toyama, Joy; Hegde, Asha; Virgo, Phil; Bhatia, Kishor; Paranjape, Ramesh S.; Risbud, Arun R.; Mbulaiteye, Sam M.; Mitsuyasu, Ronald T.

    2016-01-01

    We utilized computerized record-linkage methods to link HIV and cancer databases with limited unique identifiers in Pune, India, to determine feasibility of linkage and obtain preliminary estimates of cancer risk in persons living with HIV (PLHIV) as compared with the general population. Records of 32,575 PLHIV were linked to 31,754 Pune Cancer Registry records (1996–2008) using a probabilistic-matching algorithm. Cancer risk was estimated by calculating standardized incidence ratios (SIRs) in the early (4–27 months after HIV registration), late (28–60 months), and overall (4–60 months) incidence periods. Cancers diagnosed prior to or within 3 months of HIV registration were considered prevalent. Of 613 cancers linked to PLHIV, 188 were prevalent, 106 early incident, and 319 late incident. Incident cancers comprised 11.5% AIDS-defining cancers (ADCs), including cervical cancer and non-Hodgkin lymphoma (NHL), but not Kaposi sarcoma (KS), and 88.5% non-AIDS-defining cancers (NADCs). Risk for any incident cancer diagnosis in early, late, and combined periods was significantly elevated among PLHIV (SIRs: 5.6 [95% CI 4.6–6.8], 17.7 [95% CI 15.8–19.8], and 11.5 [95% CI 10–12.6], respectively). Cervical cancer risk was elevated in both incidence periods (SIRs: 9.6 [95% CI 4.8–17.2] and 22.6 [95% CI 14.3–33.9], respectively), while NHL risk was elevated only in the late incidence period (SIR: 18.0 [95% CI 9.8–30.20]). Risks for NADCs were dramatically elevated (SIR > 100) for eye-orbit, substantially (SIR > 20) for all-mouth, esophagus, breast, unspecified-leukemia, colon-rectum-anus, and other/unspecified cancers; moderately elevated (SIR > 10) for salivary gland, penis, nasopharynx, and brain-nervous system, and mildly elevated (SIR > 5) for stomach. Risks for 6 NADCs (small intestine, testis, lymphocytic leukemia, prostate, ovary, and melanoma) were not elevated, and 5 cancers, including multiple myeloma, were not seen. Our study demonstrates the feasibility of using probabilistic record-linkage to study cancer/other comorbidities among PLHIV in India and provides preliminary population-based estimates of cancer risks in PLHIV in India. Our results, suggesting a potentially substantial burden and slightly different spectrum of cancers among PLHIV in India, support efforts to conduct multicenter linkage studies to obtain precise estimates and to monitor cancer risk in PLHIV in India. PMID:27631245
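
    For readers unfamiliar with SIRs, a minimal sketch of the calculation with an exact Poisson confidence interval follows; the observed and expected counts are illustrative, not taken from this linkage study.

        from scipy.stats import chi2

        def sir_ci(observed, expected, alpha=0.05):
            """SIR = observed/expected with an exact (chi-square based) confidence interval."""
            lo = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected) if observed > 0 else 0.0
            hi = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
            return observed / expected, lo, hi

        sir, lo, hi = sir_ci(observed=30, expected=5.3)   # illustrative counts only
        print(f"SIR = {sir:.1f} (95% CI {lo:.1f}-{hi:.1f})")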

  5. Application of square-root filtering for spacecraft attitude control

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Schmidt, S. F.; Goka, T.

    1978-01-01

    Suitable digital algorithms are developed and tested for providing on-board precision attitude estimation and pointing control for potential use in the Landsat-D spacecraft. These algorithms provide pointing accuracy of better than 0.01 deg. To obtain necessary precision with efficient software, a six state-variable square-root Kalman filter combines two star tracker measurements to update attitude estimates obtained from processing three gyro outputs. The validity of the estimation and control algorithms is established, and the sensitivity of their performance to various error sources and software parameters is investigated by detailed digital simulation. Spacecraft computer memory, cycle time, and accuracy requirements are estimated.
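
    A greatly simplified single-axis sketch of the gyro-plus-star-tracker idea is shown below, using a conventional (not square-root) Kalman filter; the noise levels, rates, and update schedule are invented for illustration and do not reflect the Landsat-D design.

        import numpy as np

        dt, n = 0.1, 600
        rng = np.random.default_rng(6)
        true_rate, true_bias = 0.01, 0.002                     # rad/s
        angle = np.cumsum(np.full(n, true_rate) * dt)          # true single-axis attitude
        gyro = true_rate + true_bias + rng.normal(0, 1e-3, n)  # biased, noisy gyro readings
        star = angle + rng.normal(0, 5e-4, n)                  # star-tracker angle measurements

        x = np.zeros(2)                         # state: [attitude angle (rad), gyro bias (rad/s)]
        P = np.diag([1e-2, 1e-4])
        F = np.array([[1.0, -dt], [0.0, 1.0]])  # angle propagates with (gyro - bias) * dt
        Q = np.diag([1e-8, 1e-10])
        H = np.array([[1.0, 0.0]])
        R = np.array([[(5e-4) ** 2]])

        for k in range(n):
            # predict: integrate the gyro, subtracting the current bias estimate
            x = np.array([x[0] + (gyro[k] - x[1]) * dt, x[1]])
            P = F @ P @ F.T + Q
            # update with a star-tracker measurement every 10th step
            if k % 10 == 0:
                y = star[k] - H @ x
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)
                x = x + (K @ y).ravel()
                P = (np.eye(2) - K @ H) @ P

        print(f"estimated gyro bias = {x[1]:.4f} rad/s (true {true_bias:.4f})")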

  6. [Estimating emergency hospital admissions to gauge short-term effects of air pollution: evaluation of health data quality].

    PubMed

    Bois de Fer, Béatrice; Host, Sabine; Chardon, Benoît; Chatignoux, Edouard; Beaujouan, Laure; Brun-Ney, Dominique; Grémy, Isabelle

    2009-01-01

    The study of the short-term effects and health impact of air pollution is carried out by the ERPURS regional surveillance program, which uses hospitalization data from the French hospital information system (PMSI) to quantify these links. This system does not distinguish emergency hospital admissions from scheduled ones, and scheduled admissions cannot be related to short-term changes in air pollution levels. This study examines how scheduled admissions affect the quality of the health indicator used to estimate air pollution effects. The indicator is compared with three new emergency-admission indicators reconstructed from data on the public hospitals in Paris, drawn partly from the PMSI and partly from an on-line emergency network that brings together all of the computerized emergency departments. Depending on the pathology, the inclusion of scheduled admissions impairs the ability to detect the weakest risks with any precision.

  7. Methylenetetrahydrofolate reductase gene polymorphisms and acute lymphoblastic leukemia risk: a meta-analysis based on 28 case-control studies.

    PubMed

    Tong, Na; Sheng, Xiaojing; Wang, Meilin; Fang, Yongjun; Shi, Danni; Zhang, Zhizhong; Zhang, Zhengdong

    2011-10-01

    Methylenetetrahydrofolate reductase (MTHFR) is involved in DNA methylation and nucleotide synthesis. Accumulated evidence has demonstrated that C677T and A1298C polymorphisms of the MTHFR gene are associated with acute lymphoblastic leukemia (ALL) risk, but the results have been inconclusive. To obtain a more precise estimate, we performed a meta-analysis of 28 studies with 4240 cases and 9289 controls. We found that the 677TT genotype showed a reduced risk of ALL compared with the 677CC genotype in the overall population (odds ratio [OR] 0.76, 95% confidence interval [CI] 0.61-0.92). The reduced risk was pronounced only among the Caucasian population (OR 0.68, 95% CI 0.51-0.90), but not among the Asian population (OR 0.89, 95% CI 0.75-1.05). For the MTHFR A1298C polymorphism, no significant association with ALL susceptibility was observed in the pooled analyses. However, a significantly increased risk of childhood ALL was found in the comparison of the 1298CA genotype versus the AA genotype. This study provides evidence that MTHFR polymorphisms may play an important role in the development of ALL.

  8. Association between MTHFR polymorphisms and acute myeloid leukemia risk: a meta-analysis.

    PubMed

    Qin, Yu-Tao; Zhang, Yong; Wu, Fang; Su, Yan; Lu, Ge-Ning; Wang, Ren-Sheng

    2014-01-01

    Previous observational studies investigating the association between methylenetetrahydrofolate reductase (MTHFR) polymorphisms and acute myeloid leukemia (AML) risk have yielded inconsistent results. The aim of this study is to derive a more precise estimate of the association between MTHFR (C677T and A1298C) polymorphisms and acute myeloid leukemia risk. PubMed and Embase databases were systematically searched to identify relevant studies from their inception to August 2013. Odds ratios (ORs) with 95% confidence intervals (CIs) were the metric of choice. Thirteen studies were selected for the C677T polymorphism (1838 cases and 5318 controls) and 9 studies (1335 patients and 4295 controls) for the A1298C polymorphism. Overall, pooled results showed that the C677T polymorphism was not significantly associated with AML risk (OR, 0.98-1.04; 95% CI, 0.86-0.92 to 1.09-1.25). Similar results were observed for the A1298C polymorphism and in subgroup analysis. All comparisons revealed no substantial heterogeneity, nor did we detect evidence of publication bias. In summary, this meta-analysis provides evidence that MTHFR polymorphisms were not associated with AML risk. Further investigations are needed to offer better insight into the role of these polymorphisms in AML carcinogenesis.

  9. Abbreviation definition identification based on automatic precision estimates.

    PubMed

    Sohn, Sunghwan; Comeau, Donald C; Kim, Won; Wilbur, W John

    2008-09-25

    The rapid growth of biomedical literature presents challenges for automatic text processing, and one of the challenges is abbreviation identification. The presence of unrecognized abbreviations in text hinders indexing algorithms and adversely affects information retrieval and extraction. Automatic abbreviation definition identification can help resolve these issues. However, abbreviations and their definitions identified by an automatic process are of uncertain validity. Due to the size of databases such as MEDLINE only a small fraction of abbreviation-definition pairs can be examined manually. An automatic way to estimate the accuracy of abbreviation-definition pairs extracted from text is needed. In this paper we propose an abbreviation definition identification algorithm that employs a variety of strategies to identify the most probable abbreviation definition. In addition our algorithm produces an accuracy estimate, pseudo-precision, for each strategy without using a human-judged gold standard. The pseudo-precisions determine the order in which the algorithm applies the strategies in seeking to identify the definition of an abbreviation. On the Medstract corpus our algorithm produced 97% precision and 85% recall which is higher than previously reported results. We also annotated 1250 randomly selected MEDLINE records as a gold standard. On this set we achieved 96.5% precision and 83.2% recall. This compares favourably with the well known Schwartz and Hearst algorithm. We developed an algorithm for abbreviation identification that uses a variety of strategies to identify the most probable definition for an abbreviation and also produces an estimated accuracy of the result. This process is purely automatic.

  10. Predicting invasion risk using measures of introduction effort and environmental niche models.

    PubMed

    Herborg, Leif-Matthias; Jerde, Christopher L; Lodge, David M; Ruiz, Gregory M; MacIsaac, Hugh J

    2007-04-01

    The Chinese mitten crab (Eriocheir sinensis) is native to East Asia, is established throughout Europe, and is introduced but geographically restricted in North America. We developed and compared two separate environmental niche models using genetic algorithm for rule set prediction (GARP) and mitten crab occurrences in Asia and Europe to predict the species' potential distribution in North America. Since mitten crabs must reproduce in water with salinity greater than 15‰, we limited the potential North American range to freshwater habitats within the highest documented dispersal distance (1260 km) and a more restricted dispersal limit (354 km) from the sea. Applying the higher dispersal distance, both models predicted the lower Great Lakes, most of the eastern seaboard, the Gulf of Mexico and southern extent of the Mississippi River watershed, and the Pacific northwest as suitable environment for mitten crabs, but environmental match for southern states (below 35 degrees N) was much lower for the European model. Use of the lower range with both models reduced the expected range, especially in the Great Lakes, Mississippi drainage, and inland areas of the Pacific Northwest. To estimate the risk of introduction of mitten crabs, the amount of reported ballast water discharge into major United States ports from regions in Asia and Europe with established mitten crab populations was used as an index of introduction effort. Relative risk of invasion was estimated based on a combination of environmental match and volume of unexchanged ballast water received (July 1999-December 2003) for major ports. The ports of Norfolk and Baltimore were most vulnerable to invasion and establishment, making Chesapeake Bay the most likely location to be invaded by mitten crabs in the United States. The next highest risk was predicted for Portland, Oregon. Interestingly, the port of Los Angeles/Long Beach, which has a large shipping volume, had a low risk of invasion. Ports such as Jacksonville, Florida, had a medium risk owing to small shipping volume but high environmental match. This study illustrates that the combination of environmental niche- and vector-based models can provide managers with more precise estimates of invasion risk than can either of these approaches alone.

  11. Parameter estimation in plasmonic QED

    NASA Astrophysics Data System (ADS)

    Jahromi, H. Rangani

    2018-03-01

    We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond modelled as a qubit. Our goal is to estimate the β factor measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), which measures the precision of the estimation, and therefore its vanishing is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when initially the NVCs are maximally entangled. The one-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe, at any time, considerably enhances the precision of estimation in comparison with one-qubit estimation.

  12. Estimation of chaotic coupled map lattices using symbolic vector dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya

    2010-01-01

    In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original symbolic-vector-dynamics-based method was proposed for initial-condition estimation in an additive white Gaussian noise environment. The precision of this estimation method is determined by the symbolic errors of the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method. We correct symbolic errors using the backward vector and the values estimated with different symbols, and thus the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of a coupled map lattice exactly in both noisy and noise-free cases. We therefore provide novel analytical techniques for understanding turbulence in coupled map lattices.

  13. Precision of channel catfish catch estimates using hoop nets in larger Oklahoma reservoirs

    USGS Publications Warehouse

    Stewart, David R.; Long, James M.

    2012-01-01

    Hoop nets are rapidly becoming the preferred gear type used to sample channel catfish Ictalurus punctatus, and many managers have reported that hoop nets effectively sample channel catfish in small impoundments (<200 ha). However, the utility and precision of this approach in larger impoundments have not been tested. We sought to determine how the number of tandem hoop net series affected the catch of channel catfish and the time involved in using 16 tandem hoop net series in larger impoundments (>200 ha). Hoop net series were fished once, set for 3 d; then we used Monte Carlo bootstrapping techniques that allowed us to estimate the number of net series required to achieve two levels of precision (relative standard errors [RSEs] of 15 and 25) at two levels of confidence (80% and 95%). Sixteen hoop net series were effective at obtaining an RSE of 25 with 80% and 95% confidence in all but one reservoir. Achieving an RSE of 15 was often less effective and required 18-96 hoop net series given the desired level of confidence. We estimated that an hour was needed, on average, to deploy and retrieve three hoop net series, which meant that 16 hoop net series per reservoir could be "set" and "retrieved" within a day, respectively. The estimated number of net series to achieve an RSE of 25 or 15 was positively associated with the coefficient of variation (CV) of the sample but not with reservoir surface area or relative abundance. Our results suggest that hoop nets are capable of providing reasonably precise estimates of channel catfish relative abundance and that the relationship with the CV of the sample reported herein can be used to determine the sampling effort for a desired level of precision.
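
    A sketch of the bootstrap logic described above, with simulated catches standing in for field data: resample the per-series catches to approximate the standard error of the mean for a given number of net series, and report the implied RSE (100 x SE / mean).

        import numpy as np

        rng = np.random.default_rng(7)
        observed = rng.negative_binomial(2, 0.15, 16)        # catches of 16 net series (simulated)

        def expected_rse(sample, n_series, n_boot=2000):
            """Bootstrap the RSE of the mean catch for a hypothetical number of net series."""
            means = np.array([rng.choice(sample, n_series, replace=True).mean()
                              for _ in range(n_boot)])
            return 100 * means.std(ddof=1) / sample.mean()

        for n in (8, 16, 32, 64):
            print(f"{n:3d} series -> expected RSE ~ {expected_rse(observed, n):.1f}")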

  14. A random-censoring Poisson model for underreported data.

    PubMed

    de Oliveira, Guilherme Lopes; Loschi, Rosangela Helena; Assunção, Renato Martins

    2017-12-30

    A major challenge when monitoring risks in socially deprived areas of underdeveloped countries is that economic, epidemiological, and social data are typically underreported. Thus, statistical models that do not take the data quality into account will produce biased estimates. To deal with this problem, counts in suspected regions are usually approached as censored information. The censored Poisson model can be considered, but all censored regions must be precisely known a priori, which is not a reasonable assumption in most practical situations. We introduce the random-censoring Poisson model (RCPM), which accounts for the uncertainty about both the count and the data reporting processes. Consequently, for each region, we will be able to estimate the relative risk for the event of interest as well as the censoring probability. To facilitate the posterior sampling process, we propose a Markov chain Monte Carlo scheme based on the data augmentation technique. We run a simulation study comparing the proposed RCPM with two competing models. Different scenarios are considered. The RCPM and the censored Poisson model are then applied to account for potential underreporting of early neonatal mortality counts in regions of Minas Gerais State, Brazil, where data quality is known to be poor. Copyright © 2017 John Wiley & Sons, Ltd.
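
    A toy illustration of the underreporting problem the model addresses (this is not the authors' MCMC scheme): true counts are Poisson, only a fraction of events is reported, and the naive rate estimate is biased low by the reporting probability, which the RCPM instead treats as unknown and estimates.

        import numpy as np

        rng = np.random.default_rng(8)
        n_regions, exposure, true_rate, report_prob = 400, 1000.0, 0.003, 0.7

        true_counts = rng.poisson(true_rate * exposure, n_regions)
        reported = rng.binomial(true_counts, report_prob)      # underreported data

        naive = reported.sum() / (exposure * n_regions)
        corrected = naive / report_prob                        # needs report_prob known or estimated
        print(f"true rate {true_rate:.4f}  naive {naive:.4f}  corrected {corrected:.4f}")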

  15. The Dopaminergic Midbrain Encodes the Expected Certainty about Desired Outcomes.

    PubMed

    Schwartenbeck, Philipp; FitzGerald, Thomas H B; Mathys, Christoph; Dolan, Ray; Friston, Karl

    2015-10-01

    Dopamine plays a key role in learning; however, its exact function in decision making and choice remains unclear. Recently, we proposed a generic model based on active (Bayesian) inference wherein dopamine encodes the precision of beliefs about optimal policies. Put simply, dopamine discharges reflect the confidence that a chosen policy will lead to desired outcomes. We designed a novel task to test this hypothesis, where subjects played a "limited offer" game in a functional magnetic resonance imaging experiment. Subjects had to decide how long to wait for a high offer before accepting a low offer, with the risk of losing everything if they waited too long. Bayesian model comparison showed that behavior strongly supported active inference, based on surprise minimization, over classical utility maximization schemes. Furthermore, midbrain activity, encompassing dopamine projection neurons, was accurately predicted by trial-by-trial variations in model-based estimates of precision. Our findings demonstrate that human subjects infer both optimal policies and the precision of those inferences, and thus support the notion that humans perform hierarchical probabilistic Bayesian inference. In other words, subjects have to infer both what they should do as well as how confident they are in their choices, where confidence may be encoded by dopaminergic firing. © The Author 2014. Published by Oxford University Press.

  17. Comparison of two viewing methods for estimating largemouth bass and walleye ages from sectioned otoliths and dorsal spines

    USGS Publications Warehouse

    Wegleitner, Eric J.; Isermann, Daniel A.

    2017-01-01

    Many biologists use digital images for estimating ages of fish, but the use of images could lead to differences in age estimates and precision because image capture can produce changes in light and clarity compared to directly viewing structures through a microscope. We used sectioned sagittal otoliths from 132 Largemouth Bass Micropterus salmoides and sectioned dorsal spines and otoliths from 157 Walleyes Sander vitreus to determine whether age estimates and among‐reader precision were similar when annuli were enumerated directly through a microscope or from digital images. Agreement of ages between viewing methods for three readers was highest for Largemouth Bass otoliths (75–89% among readers), followed by Walleye otoliths (63–70%) and Walleye dorsal spines (47–64%). Most discrepancies (72–96%) were ±1 year, and differences were more prevalent for age‐5 and older fish. With few exceptions, mean ages estimated from digital images were similar to ages estimated via directly viewing the structures through the microscope, and among‐reader precision did not vary between viewing methods for each structure. However, the number of disagreements we observed suggests that biologists should assess potential differences in age structure that could arise if images of calcified structures are used in the age estimation process.

  18. [Kriging estimation and its simulated sampling of Chilo suppressalis population density].

    PubMed

    Yuan, Zheming; Bai, Lianyang; Wang, Kuiwu; Hu, Xiangyue

    2004-07-01

    In order to draw up a rational sampling plan for the larval population of Chilo suppressalis, an original population and its two derivative populations (a random population and a sequence population) were sampled and compared using random sampling, gap-range-random sampling, and a new systematic sampling scheme that integrates Kriging interpolation with a random starting position. For the original population, whose distribution was aggregated and whose dependence range in the line direction was 115 cm (6.9 units), gap-range-random sampling in the line direction was more precise than random sampling. Distinguishing the population pattern correctly is the key to obtaining better precision. Gap-range-random sampling and random sampling are suited to aggregated and random populations, respectively, but both are difficult to apply in practice. Therefore, a new systematic sampling scheme, termed the Kriging sample (n = 441), was developed to estimate the density of a partial sample (partial estimation, n = 441) and of the population (overall estimation, N = 1500). For the original population, the precision of the Kriging sample in estimating both the partial sample and the population was better than that of the investigation sample. As the aggregation intensity of the population increased, the Kriging sample was more effective than the investigation sample in both partial and overall estimation, provided the sampling gap was appropriate to the dependence range.

  19. Photoacoustic-based sO2 estimation through excised bovine prostate tissue with interstitial light delivery.

    PubMed

    Mitcham, Trevor; Taghavi, Houra; Long, James; Wood, Cayla; Fuentes, David; Stefan, Wolfgang; Ward, John; Bouchard, Richard

    2017-09-01

    Photoacoustic (PA) imaging is capable of probing blood oxygen saturation (sO2), which has been shown to correlate with tissue hypoxia, a promising cancer biomarker. However, wavelength-dependent local fluence changes can compromise sO2 estimation accuracy in tissue. This work investigates using PA imaging with interstitial irradiation and local fluence correction to assess precision and accuracy of sO2 estimation of blood samples through ex vivo bovine prostate tissue ranging from 14% to 100% sO2. Study results for bovine blood samples at distances up to 20 mm from the irradiation source show that local fluence correction improved average sO2 estimation error from 16.8% to 3.2% and maintained an average precision of 2.3% when compared to matched CO-oximeter sO2 measurements. This work demonstrates the potential for future clinical translation of using fluence-corrected and interstitially driven PA imaging to accurately and precisely assess sO2 at depth in tissue with high resolution.
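
    As a hedged sketch of the underlying estimation step, two-wavelength linear unmixing with a local fluence correction can be written as follows; the extinction coefficients, fluence values, and PA amplitudes are placeholders for illustration, not the study's calibration.

        import numpy as np

        wavelengths = [750, 850]                       # nm
        eps = np.array([[1405.0,  691.0],              # [eps_Hb, eps_HbO2] at 750 nm (illustrative)
                        [ 691.0, 1058.0]])             # [eps_Hb, eps_HbO2] at 850 nm (illustrative)
        pa_raw  = np.array([0.52, 0.43])               # measured PA amplitudes (arbitrary units)
        fluence = np.array([0.60, 0.45])               # modeled local fluence at each wavelength

        pa_corr = pa_raw / fluence                     # local fluence correction
        c_hb, c_hbo2 = np.linalg.solve(eps, pa_corr)   # relative hemoglobin concentrations
        so2 = c_hbo2 / (c_hb + c_hbo2)
        print(f"estimated sO2 = {100 * so2:.1f}%")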

  20. Using an electronic compass to determine telemetry azimuths

    USGS Publications Warehouse

    Cox, R.R.; Scalf, J.D.; Jamison, B.E.; Lutz, R.S.

    2002-01-01

    Researchers typically collect azimuths from known locations to estimate locations of radiomarked animals. Mobile, vehicle-mounted telemetry receiving systems frequently are used to gather azimuth data. Use of mobile systems typically involves estimating the vehicle's orientation to grid north (vehicle azimuth), recording an azimuth to the transmitter relative to the vehicle azimuth from a fixed rosette around the antenna mast (relative azimuth), and subsequently calculating an azimuth to the transmitter (animal azimuth). We incorporated electronic compasses into standard null-peak antenna systems by mounting the compass sensors atop the antenna masts and evaluated the precision of this configuration. This system increased efficiency by eliminating vehicle orientation and calculations to determine animal azimuths and produced estimates of precision (azimuth SD=2.6 deg., SE=0.16 deg.) similar to systems that required orienting the mobile system to grid north. Using an electronic compass increased efficiency without sacrificing precision and should produce more accurate estimates of locations when marked animals are moving or when vehicle orientation is problematic.
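
    The arithmetic that the electronic compass eliminates is simple and worth making explicit; a minimal sketch follows, with the function name and example bearings purely illustrative.

        def animal_azimuth(vehicle_azimuth_deg, relative_azimuth_deg):
            """Combine vehicle orientation and rosette reading into an animal azimuth.

            Both inputs are in degrees; the result is wrapped to [0, 360).
            """
            return (vehicle_azimuth_deg + relative_azimuth_deg) % 360.0

        # Example: vehicle oriented 312 deg from grid north, transmitter bearing
        # 75 deg relative to the vehicle -> animal azimuth of 27 deg.
        print(animal_azimuth(312.0, 75.0))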

  1. Optimal feedback scheme and universal time scaling for Hamiltonian parameter estimation.

    PubMed

    Yuan, Haidong; Fung, Chi-Hang Fred

    2015-09-11

    Time is a valuable resource and it is expected that a longer time period should lead to better precision in Hamiltonian parameter estimation. However, recent studies in quantum metrology have shown that in certain cases more time may even lead to worse estimations, which puts this intuition into question. In this Letter we show that by including feedback controls this intuition can be restored. By deriving asymptotically optimal feedback controls we quantify the maximal improvement feedback controls can provide in Hamiltonian parameter estimation and show a universal time scaling for the precision limit under the optimal feedback scheme. Our study reveals an intriguing connection between noncommutativity in the dynamics and the gain of feedback controls in Hamiltonian parameter estimation.

  2. Power and Precision in Confirmatory Factor Analytic Tests of Measurement Invariance

    ERIC Educational Resources Information Center

    Meade, Adam W.; Bauer, Daniel J.

    2007-01-01

    This study investigates the effects of sample size, factor overdetermination, and communality on the precision of factor loading estimates and the power of the likelihood ratio test of factorial invariance in multigroup confirmatory factor analysis. Although sample sizes are typically thought to be the primary determinant of precision and power,…

  3. Measurement precision and noise analysis of CCD cameras

    NASA Astrophysics Data System (ADS)

    Wu, ZhenSen; Li, Zhiyang; Zhang, Ping

    1993-09-01

    The limit precision of a CCD camera with 10-bit analogue-to-digital conversion is estimated in this paper. The effect of noise on measurement precision and the noise characteristics are analyzed in detail. Noise-processing methods are also discussed, and a diagram of the noise properties is given.

  4. Precision and Accuracy of Analysis for Boron in ITP Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tovo, L.L.

    Inductively Coupled Plasma Emission Spectroscopy (ICPES) has been used by the Analytical Development Section (ADS) to measure boron in catalytic tetraphenylboron decomposition studies performed by the Waste Processing Technology (WPT) section. Analysis of these samples is complicated due to the presence of high concentrations of sodium and organic compounds. Previously, we found signal suppression in samples analyzed "as received". We suspected that the suppression was due to the high organic concentration (up to 0.01 molar organic decomposition products) in the samples. When the samples were acid digested prior to analysis, the suppression was eliminated. The precision of the reported boron concentration was estimated as 10 percent based on the known precision of the inorganic boron standard used for calibration and quality control check of the ICPES analysis. However, a precision better than 10 percent was needed to evaluate ITP process operating parameters. Therefore, the purpose of this work was (1) to measure, instead of estimating, the precision of the boron measurement on ITP samples and (2) to determine the optimum precision attainable with current instrumentation.

  5. The effectiveness of lifestyle interventions to reduce cardiovascular risk in patients with severe mental disorders: meta-analysis of intervention studies.

    PubMed

    Fernández-San-Martín, Maria Isabel; Martín-López, Luis Miguel; Masa-Font, Roser; Olona-Tabueña, Noemí; Roman, Yuani; Martin-Royo, Jaume; Oller-Canet, Silvia; González-Tejón, Susana; San-Emeterio, Luisa; Barroso-Garcia, Albert; Viñas-Cabrera, Lidia; Flores-Mateo, Gemma

    2014-01-01

    Patients with severe mental illness have higher prevalences of cardiovascular risk factors (CRF). The objective is to determine whether interventions to modify lifestyles in these patients reduce anthropometric and analytical parameters related to CRF in comparison to routine clinical practice. Systematic review of controlled clinical trials with lifestyle interventions in Medline, Cochrane Library, Embase, PsycINFO and CINAHL. Outcomes were changes in body mass index, waist circumference, cholesterol, triglycerides and blood sugar. Meta-analyses were performed using random effects models to estimate the weighted mean difference. Heterogeneity was assessed using the I² statistic and subgroup analyses. Twenty-six studies were selected. Lifestyle interventions decrease anthropometric and analytical parameters at 3 months of follow-up. At 6 and 12 months, the differences between the intervention and control groups were maintained, although with less precision. More studies with larger samples and long-term follow-up are needed.
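
    For readers unfamiliar with the pooling step, a minimal sketch of a random-effects weighted-mean-difference meta-analysis in the DerSimonian-Laird style follows; the per-study effects and variances are invented, and the published analysis may have used different software or estimators.

        import numpy as np

        def dersimonian_laird(effects, variances):
            """Random-effects pooled estimate (DerSimonian-Laird) from per-study
            mean differences and their within-study variances."""
            effects = np.asarray(effects, dtype=float)
            variances = np.asarray(variances, dtype=float)
            w = 1.0 / variances                          # fixed-effect weights
            theta_fe = np.sum(w * effects) / np.sum(w)
            q = np.sum(w * (effects - theta_fe) ** 2)    # Cochran's Q
            df = len(effects) - 1
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (q - df) / c)                # between-study variance
            w_re = 1.0 / (variances + tau2)              # random-effects weights
            theta_re = np.sum(w_re * effects) / np.sum(w_re)
            se_re = np.sqrt(1.0 / np.sum(w_re))
            i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
            return theta_re, se_re, i2

        # Illustrative BMI changes (kg/m^2) and variances from three hypothetical trials.
        print(dersimonian_laird([-0.8, -0.3, -1.1], [0.10, 0.05, 0.20]))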

  6. What matters most: quantifying an epidemiology of consequence.

    PubMed

    Keyes, Katherine; Galea, Sandro

    2015-05-01

    Risk factor epidemiology has contributed to substantial public health success. In this essay, we argue, however, that the focus on risk factor epidemiology has led epidemiology to an ever-increasing focus on the estimation of precise causal effects of exposures on an outcome at the expense of engagement with the broader causal architecture that produces population health. To conduct an epidemiology of consequence, a systematic effort is needed to engage our science in a critical reflection both on how well and under what conditions or assumptions we can assess causal effects, and on what will truly matter most for changing population health. Such an approach changes the priorities and values of the discipline and requires reorientation of how we structure the questions we ask and the methods we use, as well as how we teach epidemiology to our emerging scholars. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. An Inertial Dual-State State Estimator for Precision Planetary Landing with Hazard Detection and Avoidance

    NASA Technical Reports Server (NTRS)

    Bishop, Robert H.; DeMars, Kyle; Trawny, Nikolas; Crain, Tim; Hanak, Chad; Carson, John M.; Christian, John

    2016-01-01

    The navigation filter architecture successfully deployed on the Morpheus flight vehicle is presented. The filter was developed as a key element of the NASA Autonomous Landing and Hazard Avoidance Technology (ALHAT) project and over the course of 15 free flights was integrated into the Morpheus vehicle, operations, and flight control loop. Flight testing was completed by demonstrating autonomous hazard detection and avoidance, integration of an altimeter, surface relative velocity (velocimeter) and hazard relative navigation (HRN) measurements into the onboard dual-state inertial estimator Kalman filter software, and landing within 2 meters of the vertical testbed GPS-based navigation solution at the safe landing site target. Morpheus followed a trajectory that included an ascent phase followed by a partial descent-to-landing, although the proposed filter architecture is applicable to more general planetary precision entry, descent, and landings. The main new contribution is the incorporation of a sophisticated hazard relative navigation sensor, originally intended to locate safe landing sites, into the navigation system and employed as a navigation sensor. The formulation of a dual-state inertial extended Kalman filter was designed to address the precision planetary landing problem when viewed as a rendezvous problem with an intended landing site. For the required precision navigation system that is capable of navigating along a descent-to-landing trajectory to a precise landing, the impact of attitude errors on the translational state estimation is included in a fully integrated navigation structure in which translation state estimation is combined with attitude state estimation. The map tie errors are estimated as part of the process, thereby creating a dual-state filter implementation. Also, the filter is implemented using inertial states rather than states relative to the target. External measurements include altimeter, velocimeter, star camera, terrain relative navigation sensor, and a hazard relative navigation sensor providing information regarding hazards on a map generated on-the-fly.

  8. High precision determination of the melting points of water TIP4P/2005 and water TIP4P/Ice models by the direct coexistence technique

    NASA Astrophysics Data System (ADS)

    Conde, M. M.; Rovere, M.; Gallo, P.

    2017-12-01

    An exhaustive study by molecular dynamics has been performed to analyze the factors that enhance the precision of the technique of direct coexistence for a system of ice and liquid water. The factors analyzed are the stochastic nature of the method, the finite size effects, and the influence of the initial ice configuration used. The results obtained show that the precision of estimates obtained through the technique of direct coexistence is markedly affected by the effects of finite size, requiring systems with a large number of molecules to reduce the error bar of the melting point. This increase in size causes an increase in the simulation time, but the estimate of the melting point with a great accuracy is important, for example, in studies on the ice surface. We also verified that the choice of the initial ice Ih configuration with different proton arrangements does not significantly affect the estimate of the melting point. Importantly this study leads us to estimate the melting point at ambient pressure of two of the most popular models of water, TIP4P/2005 and TIP4P/Ice, with the greatest precision to date.

  9. Inverse probability weighting for covariate adjustment in randomized studies

    PubMed Central

    Li, Xiaochun; Li, Lingling

    2013-01-01

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a “favorable” model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a “favorable” model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. PMID:24038458

  10. Inverse probability weighting for covariate adjustment in randomized studies.

    PubMed

    Shen, Changyu; Li, Xiaochun; Li, Lingling

    2014-02-20

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. Copyright © 2013 John Wiley & Sons, Ltd.
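
    The sketch below illustrates the general idea of inverse-probability-weighted covariate adjustment, with the weight model fit on baseline covariates only so that the adjustment is fixed before outcomes are examined. It is a generic IPW difference-in-means estimator, not the authors' exact two-stage procedure, and all names and data are illustrative.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def ipw_treatment_effect(X, treated, outcome):
            """Inverse-probability-weighted difference in mean outcomes.

            X       : baseline covariates (n x p), used only in the weight model
            treated : 0/1 treatment indicator from the randomization
            outcome : observed outcome (used only after the weight model is fixed)
            """
            # Stage 1: model P(treated | X) from baseline data alone.
            ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
            w = treated / ps + (1 - treated) / (1 - ps)
            # Stage 2: weighted (Hajek) means per arm, then their difference.
            mu1 = np.sum(w * treated * outcome) / np.sum(w * treated)
            mu0 = np.sum(w * (1 - treated) * outcome) / np.sum(w * (1 - treated))
            return mu1 - mu0

        # Simulated trial with a true treatment effect of 1.5.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))
        treated = rng.integers(0, 2, size=200)
        outcome = 1.5 * treated + X[:, 0] + rng.normal(size=200)
        print(ipw_treatment_effect(X, treated, outcome))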

  11. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material, and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence was the performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a measure of the precision by means of a confidence interval for the specific measurement. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
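
    One of the variants named above, the ellipse method, reduces to a single trigonometric relation between the defect's axes and the angle of incidence; a minimal sketch follows, assuming an approximately elliptical defect and illustrative dimensions.

        import math

        def angle_of_incidence_deg(defect_width_mm, defect_length_mm):
            """Ellipse method: the angle between the bullet path and the target
            surface follows from the defect's minor/major axis ratio."""
            return math.degrees(math.asin(defect_width_mm / defect_length_mm))

        # A 9.0 mm wide, 18.0 mm long defect suggests roughly a 30 degree impact angle.
        print(angle_of_incidence_deg(9.0, 18.0))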

  12. Notes From the Field: Secondary Task Precision for Cognitive Load Estimation During Virtual Reality Surgical Simulation Training.

    PubMed

    Rasmussen, Sebastian R; Konge, Lars; Mikkelsen, Peter T; Sørensen, Mads S; Andersen, Steven A W

    2016-03-01

    Cognitive load (CL) theory suggests that working memory can be overloaded in complex learning tasks such as surgical technical skills training, which can impair learning. Valid and feasible methods for estimating the CL in specific learning contexts are necessary before the efficacy of CL-lowering instructional interventions can be established. This study aims to explore secondary task precision for the estimation of CL in virtual reality (VR) surgical simulation and also investigate the effects of CL-modifying factors such as simulator-integrated tutoring and repeated practice. Twenty-four participants were randomized for visual assistance by a simulator-integrated tutor function during the first 5 of 12 repeated mastoidectomy procedures on a VR temporal bone simulator. Secondary task precision was found to be significantly lower during simulation compared with nonsimulation baseline, p < .001. Contrary to expectations, simulator-integrated tutoring and repeated practice did not have an impact on secondary task precision. This finding suggests that even though considerable changes in CL are reflected in secondary task precision, it lacks sensitivity. In contrast, secondary task reaction time could be more sensitive, but requires substantial postprocessing of data. Therefore, future studies on the effect of CL modifying interventions should weigh the pros and cons of the various secondary task measurements. © The Author(s) 2015.

  13. Precision and accuracy of commonly used dental age estimation charts for the New Zealand population.

    PubMed

    Baylis, Stephanie; Bassed, Richard

    2017-08-01

    Little research has been undertaken for the New Zealand population in the field of dental age estimation. This research to date indicates there are differences in dental developmental rates between the New Zealand population and other global population groups, and within the New Zealand population itself. Dental age estimation methods range from dental development charts to complex biometric analysis. Dental development charts are not the most accurate method of dental age estimation, but are time saving in their use. They are an excellent screening tool, particularly for post-mortem identification purposes, and for assessing variation from population norms in living individuals. The aim of this study was to test the precision and accuracy of three dental development charts (Schour and Massler, Blenkin and Taylor, and the London Atlas), used to estimate dental age of a sample of New Zealand juveniles between the ages of 5 and 18 years old (n=875). Percentage 'best fit' to correct age category and to expected chart stage were calculated to determine which chart was the most precise for the sample. Chronological ages were compared to estimated dental ages using a two-tailed paired t-test (P<0.05) for each of the three methods. The mean differences between CA and DA were calculated to determine bias and the absolute mean differences were calculated to indicate accuracy. The results of this study show that while accuracy and precision were low for all charts tested against the New Zealand population sample, the Blenkin and Taylor Australian charts performed best overall. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Fidelity of the ensemble code for visual motion in primate retina.

    PubMed

    Frechette, E S; Sher, A; Grivich, M I; Petrusca, D; Litke, A M; Chichilnisky, E J

    2005-07-01

    Sensory experience typically depends on the ensemble activity of hundreds or thousands of neurons, but little is known about how populations of neurons faithfully encode behaviorally important sensory information. We examined how precisely speed of movement is encoded in the population activity of magnocellular-projecting parasol retinal ganglion cells (RGCs) in macaque monkey retina. Multi-electrode recordings were used to measure the activity of approximately 100 parasol RGCs simultaneously in isolated retinas stimulated with moving bars. To examine how faithfully the retina signals motion, stimulus speed was estimated directly from recorded RGC responses using an optimized algorithm that resembles models of motion sensing in the brain. RGC population activity encoded speed with a precision of approximately 1%. The elementary motion signal was conveyed in approximately 10 ms, comparable to the interspike interval. Temporal structure in spike trains provided more precise speed estimates than time-varying firing rates. Correlated activity between RGCs had little effect on speed estimates. The spatial dispersion of RGC receptive fields along the axis of motion influenced speed estimates more strongly than along the orthogonal direction, as predicted by a simple model based on RGC response time variability and optimal pooling. ON and OFF cells encoded speed with similar and statistically independent variability. Simulation of downstream speed estimation using populations of speed-tuned units showed that peak (winner take all) readout provided more precise speed estimates than centroid (vector average) readout. These findings reveal how faithfully the retinal population code conveys information about stimulus speed and the consequences for motion sensing in the brain.

  15. Probability Elicitation Under Severe Time Pressure: A Rank-Based Method.

    PubMed

    Jaspersen, Johannes G; Montibeller, Gilberto

    2015-07-01

    Probability elicitation protocols are used to assess and incorporate subjective probabilities in risk and decision analysis. While most of these protocols use methods that have focused on the precision of the elicited probabilities, the speed of the elicitation process has often been neglected. However, speed is also important, particularly when experts need to examine a large number of events on a recurrent basis. Furthermore, most existing elicitation methods are numerical in nature, but there are various reasons why an expert would refuse to give such precise ratio-scale estimates, even if highly numerate. This may occur, for instance, when there is lack of sufficient hard evidence, when assessing very uncertain events (such as emergent threats), or when dealing with politicized topics (such as terrorism or disease outbreaks). In this article, we adopt an ordinal ranking approach from multicriteria decision analysis to provide a fast and nonnumerical probability elicitation process. Probabilities are subsequently approximated from the ranking by an algorithm based on the principle of maximum entropy, a rule compatible with the ordinal information provided by the expert. The method can elicit probabilities for a wide range of different event types, including new ways of eliciting probabilities for stochastically independent events and low-probability events. We use a Monte Carlo simulation to test the accuracy of the approximated probabilities and try the method in practice, applying it to a real-world risk analysis recently conducted for DEFRA (the U.K. Department for the Environment, Farming and Rural Affairs): the prioritization of animal health threats. © 2015 Society for Risk Analysis.

  16. Clinical course of untreated cerebral cavernous malformations: a meta-analysis of individual patient data.

    PubMed

    Horne, Margaret A; Flemming, Kelly D; Su, I-Chang; Stapf, Christian; Jeon, Jin Pyeong; Li, Da; Maxwell, Susanne S; White, Philip; Christianson, Teresa J; Agid, Ronit; Cho, Won-Sang; Oh, Chang Wan; Wu, Zhen; Zhang, Jun-Ting; Kim, Jeong Eun; Ter Brugge, Karel; Willinsky, Robert; Brown, Robert D; Murray, Gordon D; Al-Shahi Salman, Rustam

    2016-02-01

    Cerebral cavernous malformations (CCMs) can cause symptomatic intracranial haemorrhage (ICH), but the estimated risks are imprecise and predictors remain uncertain. We aimed to obtain precise estimates and predictors of the risk of ICH during untreated follow-up in an individual patient data meta-analysis. We invited investigators of published cohorts of people aged at least 16 years, identified by a systematic review of Ovid MEDLINE and Embase from inception to April 30, 2015, to provide individual patient data on clinical course from CCM diagnosis until first CCM treatment or last available follow-up. We used survival analysis to estimate the 5-year risk of symptomatic ICH due to CCMs (primary outcome), multivariable Cox regression to identify baseline predictors of outcome, and random-effects models to pool estimates in a meta-analysis. Among 1620 people in seven cohorts from six studies, 204 experienced ICH during 5197 person-years of follow-up (Kaplan-Meier estimated 5-year risk 15·8%, 95% CI 13·7-17·9). The primary outcome of ICH within 5 years of CCM diagnosis was associated with clinical presentation with ICH or new focal neurological deficit (FND) without brain imaging evidence of recent haemorrhage versus other modes of presentation (hazard ratio 5·6, 95% CI 3·2-9·7) and with brainstem CCM location versus other locations (4·4, 2·3-8·6), but age, sex, and CCM multiplicity did not add independent prognostic information. The 5-year estimated risk of ICH during untreated follow-up was 3·8% (95% CI 2·1-5·5) for 718 people with non-brainstem CCM presenting without ICH or FND, 8·0% (0·1-15·9) for 80 people with brainstem CCM presenting without ICH or FND, 18·4% (13·3-23·5) for 327 people with non-brainstem CCM presenting with ICH or FND, and 30·8% (26·3-35·2) for 495 people with brainstem CCM presenting with ICH or FND. Mode of clinical presentation and CCM location are independently associated with ICH within 5 years of CCM diagnosis. These findings can inform decisions about CCM treatment. UK Medical Research Council, Chief Scientist Office of the Scottish Government, and UK Stroke Association. Copyright © 2016 Horne et al. Open Access article distributed under the terms of CC BY. Published by Elsevier Ltd.. All rights reserved.

  17. Quantum preservation of the measurements precision using ultra-short strong pulses in exact analytical solution

    NASA Astrophysics Data System (ADS)

    Berrada, K.; Eleuch, H.

    2017-09-01

    Various schemes have been proposed to improve the parameter-estimation precision. In the present work, we suggest an alternative method to preserve the estimation precision by considering a model that closely describes a realistic experimental scenario. We explore this active way to control and enhance the measurement precision for a two-level quantum system interacting with a classical electromagnetic field using ultra-short strong pulses with an exact analytical solution, i.e. beyond the rotating wave approximation. In particular, we investigate the variation of the precision with a few-cycle pulse and a smooth phase jump over a finite time interval. We show that by acting on the shape of the phase transient and other parameters of the considered system, the amount of information may be increased and decays more slowly at long times. These features make two-level systems driven by ultra-short, off-resonant pulses with a gradually changing phase good candidates for implementing schemes for quantum computation and coherent information processing.

  18. Measurements of experimental precision for trials with cowpea (Vigna unguiculata L. Walp.) genotypes.

    PubMed

    Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G

    2016-05-09

    The aim of this study was to evaluate the suitability of several statistics as measures of the degree of experimental precision for trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated with a randomized block design with four replications. Ten statistics that were estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. Using these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) for evaluating experimental precision in trials with cowpea genotypes.
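
    Several of the statistics named above can be computed directly from the genotype F test of the trial's analysis of variance. A minimal sketch follows using one common set of definitions (selective accuracy as sqrt(1 - 1/F) and entry-mean heritability as 1 - 1/F); the exact formulas used in the study may differ.

        import math

        def selective_accuracy(f_genotype):
            """Selective accuracy from the genotype F statistic, using the common
            definition SA = sqrt(1 - 1/F), taken as 0 when F <= 1."""
            return math.sqrt(max(0.0, 1.0 - 1.0 / f_genotype))

        def entry_mean_heritability(f_genotype):
            """Broad-sense heritability on an entry-mean basis, h2 = 1 - 1/F."""
            return max(0.0, 1.0 - 1.0 / f_genotype)

        # A trial with F = 5.0 for genotypes -> SA ~ 0.89, h2 = 0.80.
        print(selective_accuracy(5.0), entry_mean_heritability(5.0))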

  19. The theory precision analyse of RFM localization of satellite remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Jianqing; Xv, Biao

    2009-11-01

    The traditional method for assessing the precision of the Rational Function Model (RFM) uses a large number of check points and computes the mean square error by comparing calculated coordinates with known coordinates. This approach rests on probability theory: with a sufficiently large number of samples, the statistical estimate of the mean square error can be taken to approach its true value. This paper instead approaches the problem from the standpoint of survey adjustment, taking the law of error propagation as its theoretical basis, and calculates the theoretical precision of RFM localization. SPOT5 three-line-array imagery is then used as experimental data, and the results of the traditional method and the method described in this paper are compared; the comparison confirms that the traditional method is feasible and answers the question of its theoretical precision from the survey-adjustment point of view.

  20. A critical source area phosphorus index with topographic transport factors using high resolution LiDAR digital elevation models

    NASA Astrophysics Data System (ADS)

    Thomas, Ian; Murphy, Paul; Fenton, Owen; Shine, Oliver; Mellander, Per-Erik; Dunlop, Paul; Jordan, Phil

    2015-04-01

    A new phosphorus index (PI) tool is presented which aims to improve the identification of critical source areas (CSAs) of phosphorus (P) losses from agricultural land to surface waters. In a novel approach, the PI incorporates topographic indices rather than watercourse proximity as proxies for runoff risk, to account for the dominant control of topography on runoff-generating areas and P transport pathways. Runoff propensity and hydrological connectivity are modelled using the Topographic Wetness Index (TWI) and Network Index (NI) respectively, utilising high resolution digital elevation models (DEMs) derived from Light Detection and Ranging (LiDAR) to capture the influence of micro-topographic features on runoff pathways. Additionally, the PI attempts to improve risk estimates of particulate P losses by incorporating an erosion factor that accounts for fine-scale topographic variability within fields. Erosion risk is modelled using the Unit Stream Power Erosion Deposition (USPED) model, which integrates DEM-derived upslope contributing area and Universal Soil Loss Equation (USLE) factors. The PI was developed using field, sub-field and sub-catchment scale datasets of P source, mobilisation and transport factors, for four intensive agricultural catchments in Ireland representing different agri-environmental conditions. Datasets included soil test P concentrations, degree of P saturation, soil attributes, land use, artificial subsurface drainage locations, and 2 m resolution LiDAR DEMs resampled from 0.25 m resolution data. All factor datasets were integrated within a Geographical Information System (GIS) and rasterised to 2 m resolution. For each factor, values were categorised and assigned relative risk scores which ranked P loss potential. Total risk scores were calculated for each grid cell using a component formulation, which summed the products of weighted factor risk scores for runoff and erosion pathways. Results showed that the new PI was able to predict in-field risk variability and hence was able to identify CSAs at the sub-field scale. PI risk estimates and component scores were analysed at catchment and subcatchment scales, and validated using measured dissolved, particulate and total P losses at subcatchment snapshot sites and gauging stations at catchment outlets. The new PI provides CSA delineations at higher precision compared to conventional PIs, and more robust P transport risk estimates. The tool can be used to target cost-effective mitigation measures for P management within single farm units and wider catchments.
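
    For reference, the Topographic Wetness Index used above as the runoff-propensity proxy is typically computed per DEM cell as TWI = ln(a / tan(beta)); a minimal sketch follows, with a 2 m cell width matching the resampled LiDAR DEMs but an otherwise illustrative flow-accumulation input (the study's exact flow-routing algorithm is not specified here).

        import numpy as np

        def topographic_wetness_index(upslope_area_m2, slope_rad, cell_width_m=2.0):
            """TWI = ln(a / tan(beta)), with a the specific catchment area
            (upslope contributing area per unit contour width) and beta the local slope."""
            a = upslope_area_m2 / cell_width_m
            tan_beta = np.maximum(np.tan(slope_rad), 1e-6)  # avoid division by zero on flats
            return np.log(a / tan_beta)

        # A 2 m cell draining 800 m^2 on a 3 degree slope.
        print(topographic_wetness_index(800.0, np.deg2rad(3.0)))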

  1. Quantum-enhanced metrology for multiple phase estimation with noise

    PubMed Central

    Yue, Jie-Dong; Zhang, Yu-Ran; Fan, Heng

    2014-01-01

    We present a general quantum metrology framework to study the simultaneous estimation of multiple phases in the presence of noise as a discretized model for phase imaging. This approach can lead to nontrivial bounds of the precision for multiphase estimation. Our results show that simultaneous estimation (SE) of multiple phases is always better than individual estimation (IE) of each phase even in a noisy environment. The utility of the bounds of multiple phase estimation for photon loss channels is exemplified explicitly. When noise is low, those bounds possess the Heisenberg scale, showing quantum-enhanced precision with an O(d) advantage for SE, where d is the number of phases. However, this O(d) advantage of the SE scheme in the variance of the estimation may disappear asymptotically when photon loss becomes significant, and then only a constant advantage over the IE scheme remains. Potential applications of these results are presented. PMID:25090445

  2. Comparison of creatinine and cystatin C based eGFR in the estimation of glomerular filtration rate in Indigenous Australians: The eGFR Study.

    PubMed

    Barr, Elizabeth Lm; Maple-Brown, Louise J; Barzi, Federica; Hughes, Jaquelyne T; Jerums, George; Ekinci, Elif I; Ellis, Andrew G; Jones, Graham Rd; Lawton, Paul D; Sajiv, Cherian; Majoni, Sandawana W; Brown, Alex Dh; Hoy, Wendy E; O'Dea, Kerin; Cass, Alan; MacIsaac, Richard J

    2017-04-01

    The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation that combines creatinine and cystatin C is superior to equations that include either measure alone in estimating glomerular filtration rate (GFR). However, whether cystatin C can provide any additional benefits in estimating GFR for Indigenous Australians, a population at high risk of end-stage kidney disease (ESKD), is unknown. Using a cross-sectional analysis from the eGFR Study of 654 Indigenous Australians at high risk of ESKD, eGFR was calculated using the CKD-EPI equations for serum creatinine (eGFRcr), cystatin C (eGFRcysC) and combined creatinine and cystatin C (eGFRcysC+cr). Reference GFR (mGFR) was determined using a non-isotopic iohexol plasma disappearance technique over 4 h. Performance of each equation against mGFR was assessed by calculating bias, % bias, precision and accuracy for the total population, and according to age, sex, kidney disease, diabetes, obesity and c-reactive protein. Data were available for 542 participants (38% men, mean [sd] age 45 [14] years). Bias was significantly greater for eGFRcysC (15.0 mL/min/1.73 m²; 95% CI 13.3-16.4, p<0.001) and eGFRcysC+cr (10.3; 8.8-11.5, p<0.001) compared to eGFRcr (5.4; 3.0-7.2). Accuracy was lower for eGFRcysC (80.3%; 76.7-83.5, p<0.001) but not for eGFRcysC+cr (91.9; 89.3-94.0, p=0.29) compared to eGFRcr (90.0; 87.2-92.4). Precision was comparable for all equations. The performance of eGFRcysC deteriorated across increasing levels of c-reactive protein. Cystatin C based eGFR equations may not perform well in populations with high levels of chronic inflammation. CKD-EPI eGFR based on serum creatinine remains the preferred equation in Indigenous Australians. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
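
    Bias, precision, and accuracy of an eGFR equation against measured GFR are conventionally summarized as the median difference, the interquartile range of differences, and the percentage of estimates within ±30% of mGFR (P30). The sketch below uses those conventional definitions, which may not match the study's exact choices, and the example values are invented.

        import numpy as np

        def egfr_performance(egfr, mgfr):
            """Conventional agreement metrics for an eGFR equation against measured GFR:
            bias (median difference), precision (IQR of differences), and P30 accuracy
            (share of estimates within +/-30% of mGFR)."""
            egfr, mgfr = np.asarray(egfr, float), np.asarray(mgfr, float)
            diff = egfr - mgfr
            bias = np.median(diff)
            precision = np.percentile(diff, 75) - np.percentile(diff, 25)
            p30 = np.mean(np.abs(diff) <= 0.3 * mgfr) * 100
            return bias, precision, p30

        # Four hypothetical paired eGFR / mGFR values (mL/min/1.73 m^2).
        print(egfr_performance([95, 60, 48, 110], [88, 70, 52, 101]))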

  3. Estimation of the uncertainty of analyte concentration from the measurement uncertainty.

    PubMed

    Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F

    2015-09-01

    Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes significant at the extremes of the concentration range and that this is affected significantly by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.
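
    A minimal sketch of the four-parameter logistic standard curve and its inversion, the step whose uncertainty the paper characterizes, is given below. The curve parameters are illustrative, and the uncertainty of the recovered concentration would follow from propagating the response uncertainty through this inverse function (for example by the delta method or simulation).

        def four_pl(x, a, b, c, d):
            """Four-parameter logistic response at concentration x
            (a = response at zero dose, d = response at infinite dose,
             c = inflection concentration, b = slope factor)."""
            return d + (a - d) / (1.0 + (x / c) ** b)

        def concentration_from_response(y, a, b, c, d):
            """Invert the 4PL standard curve to read an analyte concentration
            back from a measured response."""
            return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

        # Round-trip check with illustrative curve parameters.
        a, b, c, d = 0.05, 1.2, 2.5, 1.8
        x = 3.0
        print(concentration_from_response(four_pl(x, a, b, c, d), a, b, c, d))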

  4. Radio Science from an Optical Communications Signal

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Asmar, Sami; Oudrhiri, Kamal

    2013-01-01

    NASA is currently developing the capability to deploy deep space optical communications links. This creates the opportunity to utilize the optical link to obtain range, Doppler, and signal intensity estimates. These may, in turn, be used to complement or extend the capabilities of current radio science. In this paper we illustrate the achievable precision in estimating range, Doppler, and received signal intensity of a non-coherent optical link (the current state-of-the-art for a deep-space link). We provide a joint estimation algorithm with performance close to the bound. We draw comparisons to estimates based on a coherent radio frequency signal, illustrating that large gains in either precision or observation time are possible with an optical link.

  5. Full Bayes Poisson gamma, Poisson lognormal, and zero inflated random effects models: Comparing the precision of crash frequency estimates.

    PubMed

    Aguero-Valverde, Jonathan

    2013-01-01

    In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero inflated models. This research compares random effects, zero inflated, and zero inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of precision of posterior crash frequency estimates, since the precision of these estimates is vital for ranking of sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset being analyzed, it was found that once the random effects are included in the zero inflated models, the probability of being in the zero state is drastically reduced, and the zero inflated models degenerate to their non zero inflated counterparts. Also, by fixing the random effects over time, the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are very consistent with one another. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Task exposures in an office environment: a comparison of methods.

    PubMed

    Van Eerd, Dwayne; Hogg-Johnson, Sheilah; Mazumder, Anjali; Cole, Donald; Wells, Richard; Moore, Anne

    2009-10-01

    Task-related factors such as frequency and duration are associated with musculoskeletal disorders in office settings. The primary objective was to compare various task recording methods as measures of exposure in an office workplace. A total of 41 workers from different jobs were recruited from a large urban newspaper (71% female, mean age 41 years, SD 9.6). Questionnaire, task diaries, direct observation and video methods were used to record tasks. A common set of task codes was used across methods. Different estimates of task duration, number of tasks and task transitions arose from the different methods. Self-report methods did not consistently result in longer task duration estimates. Methodological issues could explain some of the differences in estimates observed between methods. It was concluded that different task recording methods result in different estimates of exposure, likely due to different exposure constructs. This work addresses issues of exposure measurement in office environments. It is of relevance to ergonomists/researchers interested in how to best assess the risk of injury among office workers. The paper discusses the trade-offs between precision, accuracy and burden in the collection of computer task-based exposure measures and the different underlying constructs captured by each method.

  7. Causal Mediation Analysis for the Cox Proportional Hazards Model with a Smooth Baseline Hazard Estimator.

    PubMed

    Wang, Wei; Albert, Jeffrey M

    2017-08-01

    An important problem within the social, behavioral, and health sciences is how to partition an exposure effect (e.g. treatment or risk factor) among specific pathway effects and to quantify the importance of each pathway. Mediation analysis based on the potential outcomes framework is an important tool to address this problem and we consider the estimation of mediation effects for the proportional hazards model in this paper. We give precise definitions of the total effect, natural indirect effect, and natural direct effect in terms of the survival probability, hazard function, and restricted mean survival time within the standard two-stage mediation framework. To estimate the mediation effects on different scales, we propose a mediation formula approach in which simple parametric models (fractional polynomials or restricted cubic splines) are utilized to approximate the baseline log cumulative hazard function. Simulation study results demonstrate low bias of the mediation effect estimators and close-to-nominal coverage probability of the confidence intervals for a wide range of complex hazard shapes. We apply this method to the Jackson Heart Study data and conduct sensitivity analysis to assess the impact on the mediation effects inference when the no unmeasured mediator-outcome confounding assumption is violated.

  8. Association of glutathione S-transferase pi (GSTP1) Ile105Val polymorphism with the risk of skin cancer: a meta-analysis.

    PubMed

    Zhou, Cheng-Fan; Ma, Tai; Zhou, Deng-Chuan; Shen, Tong; Zhu, Qi-Xing

    2015-08-01

    Numerous epidemiological studies have evaluated the association of the Glutathione S-transferase P1 (GSTP1) Ile105Val polymorphism with the risk of skin cancer. However, the results remain inconclusive. To derive a more precise estimation of the association between the GSTP1 Ile105Val polymorphism and skin cancer risk, a meta-analysis was performed. A comprehensive search was conducted to identify the eligible studies. We used odds ratios (ORs) with 95 % confidence intervals (CIs) to assess the association of the GSTP1 Ile105Val polymorphism with skin cancer risk. Thirteen case-control studies from nine articles were included, comprising a total of 1504 cases and 2243 controls. Overall, we found that the GSTP1 Ile105Val polymorphism was not associated with skin cancer risk. Furthermore, subgroup analysis by histological type showed that the GSTP1 Ile105Val polymorphism was associated with the risk of malignant melanoma under the dominant model (Val/Val + Val/Ile vs. Ile/Ile: OR 1.230, 95 % CI 1.017-1.488, P = 0.033). However, there was no association between the GSTP1 Ile105Val polymorphism and BCC or SCC risk in any genetic model. Our meta-analysis suggested that the GSTP1 Ile105Val polymorphism might be associated with an increased risk of malignant melanoma in the Caucasian population.

  9. Milk, yogurt, and lactose intake and ovarian cancer risk: a meta-analysis.

    PubMed

    Liu, Jing; Tang, Wenru; Sang, Lei; Dai, Xiaoli; Wei, Danping; Luo, Ying; Zhang, Jihong

    2015-01-01

    Inconclusive information on the role of dairy food intake in relation to ovarian cancer risk may relate to adverse effects of lactose, which has been hypothesized to increase gonadotropin levels in animal models and ecological studies. To date, several studies have examined the association between dairy food intake and risk of ovarian cancer, but no definitive finding has been reported. We performed this meta-analysis to derive a more precise estimation of the association between dairy food intake and ovarian cancer risk. Using the data from 19 available publications, we examined dairy foods including low-fat/skim milk, whole milk, yogurt and lactose in relation to risk of ovarian cancer by meta-analysis. Pooled odds ratios (ORs) with 95% confidence intervals (CIs) were used to assess the association. We observed a slightly increased risk of ovarian cancer with high intake of whole milk, but this was not statistically significant (OR = 1.228, 95% CI = 1.031-1.464, P = 0.022). The results of the other milk models did not provide evidence of a positive association with ovarian cancer risk. This meta-analysis suggests that low-fat/skim milk, whole milk, yogurt and lactose intake is not associated with an increased risk of ovarian cancer. Further studies with larger numbers of participants worldwide are needed to validate the association between dairy food intake and ovarian cancer.

  10. Global DNA hypomethylation in peripheral blood leukocytes as a biomarker for cancer risk: a meta-analysis.

    PubMed

    Woo, Hae Dong; Kim, Jeongseon

    2012-01-01

    Good biomarkers for early detection of cancer lead to better prognosis. However, harvesting tumor tissue is invasive and cannot be routinely performed. Global DNA methylation of peripheral blood leukocyte DNA was evaluated as a biomarker for cancer risk. We performed a meta-analysis to estimate overall cancer risk according to global DNA hypomethylation levels among studies with various cancer types and analytical methods used to measure DNA methylation. Studies were systematically searched via PubMed with no language limitation up to July 2011. Summary estimates were calculated using a fixed effects model. Subgroup analyses by the experimental method used to determine DNA methylation level were performed due to heterogeneity within the selected studies (p<0.001, I²: 80%). Heterogeneity was not found in the subgroup of %5-mC studies (p = 0.393, I²: 0%) or LINE-1 studies using the same target sequence (p = 0.097, I²: 49%), whereas considerable variance remained in LINE-1 studies overall (p<0.001, I²: 80%) and bladder cancer studies (p = 0.016, I²: 76%). These results suggest that the experimental methods used to quantify global DNA methylation levels are important factors in association studies between hypomethylation levels and cancer risk. Overall, cancer risks of the group with the lowest DNA methylation levels were significantly higher compared to the group with the highest methylation levels [OR (95% CI): 1.48 (1.28-1.70)]. Global DNA hypomethylation in peripheral blood leukocytes may be a suitable biomarker for cancer risk. However, the association between global DNA methylation and cancer risk may differ based on the experimental method and the region of DNA targeted for measuring global hypomethylation levels, as well as the cancer type. Therefore, it is important to select a precise and accurate surrogate marker for global DNA methylation levels in association studies between global DNA methylation levels in peripheral leukocytes and cancer risk.
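
    The pooling step reported here is a standard inverse-variance fixed-effect combination of log odds ratios; a minimal sketch follows, with per-study ORs and confidence intervals that are invented rather than taken from the included studies.

        import numpy as np

        def fixed_effect_pooled_or(odds_ratios, ci_lowers, ci_uppers):
            """Inverse-variance fixed-effect pooling on the log-OR scale,
            recovering each study's standard error from its 95% CI."""
            log_or = np.log(odds_ratios)
            se = (np.log(ci_uppers) - np.log(ci_lowers)) / (2 * 1.96)
            w = 1.0 / se ** 2
            pooled = np.sum(w * log_or) / np.sum(w)
            pooled_se = np.sqrt(1.0 / np.sum(w))
            ci = (np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se))
            return np.exp(pooled), ci

        # Three hypothetical studies comparing lowest vs. highest methylation groups.
        print(fixed_effect_pooled_or([1.6, 1.3, 1.5], [1.1, 0.9, 1.2], [2.3, 1.9, 1.9]))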

  11. Multilevel Analysis of Multiple-Baseline Data Evaluating Precision Teaching as an Intervention for Improving Fluency in Foundational Reading Skills for at Risk Readers

    ERIC Educational Resources Information Center

    Brosnan, Julie; Moeyaert, Mariola; Brooks Newsome, Kendra; Healy, Olive; Heyvaert, Mieke; Onghena, Patrick; Van den Noortgate, Wim

    2018-01-01

    In this article, multiple-baseline across participants designs were used to evaluate the impact of a precision teaching (PT) program, within a Tier 2 Response to Intervention framework, targeting fluency in foundational reading skills with at risk kindergarten readers. Thirteen multiple-baseline design experiments that included participation from…

  12. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    NASA Astrophysics Data System (ADS)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  13. Inclusion of Exercise Intensities Above the Lactate Threshold in VO2/Running Speed Regression Does not Improve the Precision of Accumulated Oxygen Deficit Estimation in Endurance-Trained Runners

    PubMed Central

    Reis, Victor M.; Silva, António J.; Ascensão, António; Duarte, José A.

    2005-01-01

    The present study intended to verify if the inclusion of intensities above lactate threshold (LT) in the VO2/running speed regression (RSR) affects the estimation error of accumulated oxygen deficit (AOD) during a treadmill running performed by endurance-trained subjects. Fourteen male endurance-trained runners performed a submaximal treadmill running test followed by an exhaustive supramaximal test 48 h later. The total energy demand (TED) and the AOD during the supramaximal test were calculated from the RSR established on first testing. For those purposes two regressions were used: a complete regression (CR) including all available submaximal VO2 measurements and a sub-threshold regression (STR) including solely the VO2 values measured during exercise intensities below LT. TED mean values obtained with CR and STR were not significantly different under the two conditions of analysis (177.71 ± 5.99 and 174.03 ± 6.53 ml·kg-1, respectively). Also the mean values of AOD obtained with CR and STR did not differ under the two conditions (49.75 ± 8.38 and 45.89 ± 9.79 ml·kg-1, respectively). Moreover, the precision of those estimations was also similar under the two procedures. The mean error for TED estimation was 3.27 ± 1.58 and 3.41 ± 1.85 ml·kg-1 (for CR and STR, respectively) and the mean error for AOD estimation was 5.03 ± 0.32 and 5.14 ± 0.35 ml·kg-1 (for CR and STR, respectively). The results indicated that the inclusion of exercise intensities above LT in the RSR does not improve the precision of the AOD estimation in endurance-trained runners. However, the use of STR may induce an underestimation of AOD comparatively to the use of CR. Key Points It has been suggested that the inclusion of exercise intensities above the lactate threshold in the VO2/power regression can significantly affect the estimation of the energy cost and, thus, the estimation of the AOD. However data on the precision of those AOD measurements is rarely provided. We have evaluated the effects of the inclusion of those exercise intensities on the AOD precision. The results have indicated that the inclusion of exercise intensities above the lactate threshold in the VO2/running speed regression does not improve the precision of AOD estimation in endurance-trained runners. However, the use of sub-threshold regressions may induce an underestimation of AOD comparatively to the use of complete regressions. PMID:24501560
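
    A minimal sketch of the AOD calculation described above follows: total energy demand is extrapolated from the linear VO2/running-speed regression to the supramaximal speed, and the oxygen actually consumed is subtracted. The speeds, VO2 values, and test duration are illustrative, not the study's data.

        import numpy as np

        def accumulated_oxygen_deficit(speeds, vo2, supra_speed, measured_vo2, duration_min):
            """AOD from the linear VO2/running-speed regression.

            speeds, vo2  : submaximal running speeds and steady-state VO2 (ml/kg/min)
            supra_speed  : speed of the supramaximal test
            measured_vo2 : mean VO2 actually measured during the supramaximal test
            duration_min : duration of the supramaximal test (min)
            """
            slope, intercept = np.polyfit(speeds, vo2, 1)
            ted_rate = slope * supra_speed + intercept   # demanded VO2, ml/kg/min
            ted = ted_rate * duration_min                # total energy demand, ml/kg
            consumed = measured_vo2 * duration_min       # oxygen actually taken up
            return ted - consumed                        # accumulated O2 deficit, ml/kg

        # Illustrative submaximal stages (km/h) and VO2 values.
        speeds = [10, 12, 14, 16]
        vo2 = [35.0, 41.5, 48.0, 54.5]
        print(accumulated_oxygen_deficit(speeds, vo2, supra_speed=20,
                                         measured_vo2=62.0, duration_min=2.5))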

  14. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    PubMed

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties of atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum variance filter for Mars entry navigation. The filter has been designed to solve this problem by estimating the state and the unknown measurement biases simultaneously, with a derivative-free character, leading to a high-precision algorithm for Mars entry navigation. IMU/radio beacon integrated navigation is introduced in the simulation, and the result shows that, with or without radio blackout, our proposed filter achieves an accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating the capability of a high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Precise Ionosphere Monitoring via a DSFH Satellite TT&C Link

    NASA Astrophysics Data System (ADS)

    Chen, Xiao; Li, Guangxia; Li, Zhiqiang; Yue, Chao

    2014-11-01

    A phase-coherent and frequency-hopped PN ranging system was developed, originally for the purpose of anti-jamming TT&C (tracking, telemetry and telecommand) of military satellites of China, including the Beidou-2 navigation satellites. The key innovation in the synchronization of this system is the unambiguous phase recovery of the direct sequence and frequency hopping (DSFH) spread spectrum signal and the correction of the frequency-dependent phase rotation caused by the ionosphere. With synchronization achieved, a TEC monitoring algorithm based on the maximum likelihood (ML) principle is proposed and its measuring precision is analyzed through ground simulation; onboard confirmation tests will be performed when transionosphere DSFH links are established in 2014. The measuring precision of TEC exceeds that obtained from GPS receiver data because the measurement is derived from unambiguous carrier phase estimates, not pseudorange estimates. The observation results from TT&C stations can provide real time regional ionosphere TEC estimation.
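
    The quantity being monitored rests on the standard first-order relation between ionospheric delay and TEC, I = 40.3 TEC / f^2, so the differential delay between two frequencies yields TEC directly. A minimal sketch follows, using GPS-like frequencies purely as an illustration; the DSFH system's own frequencies and its ML carrier-phase estimator are not reproduced here.

        def tec_from_dual_frequency(range_f1_m, range_f2_m, f1_hz, f2_hz):
            """Slant TEC (electrons/m^2) from the differential delay between two
            frequencies, using the first-order ionospheric term I = 40.3 * TEC / f^2."""
            d_range = range_f2_m - range_f1_m   # extra delay on the lower frequency
            return d_range * f1_hz**2 * f2_hz**2 / (40.3 * (f1_hz**2 - f2_hz**2))

        # GPS-like frequencies; ~5.4 m of extra delay on the lower frequency
        # corresponds to roughly 51 TECU (1 TECU = 1e16 electrons/m^2).
        f1, f2 = 1575.42e6, 1227.60e6
        print(tec_from_dual_frequency(2.0e7, 2.0e7 + 5.4, f1, f2) / 1e16)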

  16. On the precision of automated activation time estimation

    NASA Technical Reports Server (NTRS)

    Kaplan, D. T.; Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.

    1988-01-01

    We examined how the assignment of local activation times in epicardial and endocardial electrograms is affected by sampling rate, ambient signal-to-noise ratio, and sin(x)/x waveform interpolation. Algorithms used for the estimation of fiducial point locations included dV/dtmax and a matched filter detection algorithm. Test signals included epicardial and endocardial electrograms overlying both normal and infarcted regions of dog myocardium. Signal-to-noise levels were adjusted by combining known data sets with white noise "colored" to match the spectral characteristics of experimentally recorded noise. For typical signal-to-noise ratios and sampling rates, the template-matching algorithm provided the greatest precision in reproducibly estimating fiducial point location, and sin(x)/x interpolation allowed for an additional significant improvement. With few restrictions, combining these two techniques may allow for use of digitization rates below the Nyquist rate without significant loss of precision.
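
    A minimal sketch of the two fiducial-point estimators compared above (dV/dtmax and template matching via cross-correlation) is given below; the template shape, sampling rate, and noise level are illustrative, and the sin(x)/x interpolation step is omitted.

        import numpy as np

        def activation_time_matched(electrogram, template, dt_ms):
            """Matched-filter estimate of local activation time: the lag at which
            the cross-correlation with a template deflection is maximal."""
            corr = np.correlate(electrogram - electrogram.mean(),
                                template - template.mean(), mode="valid")
            return np.argmax(corr) * dt_ms

        def activation_time_dvdt(electrogram, dt_ms):
            """Classical dV/dtmax fiducial point: time of steepest negative deflection."""
            return np.argmin(np.diff(electrogram)) * dt_ms

        # Synthetic electrogram: noise plus an idealized negative deflection at 120 ms.
        rng = np.random.default_rng(1)
        n = 200                                                  # samples at 1 kHz
        template = -np.exp(-((np.arange(20) - 10) ** 2) / 8.0)   # assumed template shape
        electrogram = 0.02 * rng.normal(size=n)
        electrogram[120:140] += template
        print(activation_time_matched(electrogram, template, dt_ms=1.0),
              activation_time_dvdt(electrogram, dt_ms=1.0))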

  17. Mendelian randomization with fine-mapped genetic data: Choosing from large numbers of correlated instrumental variables.

    PubMed

    Burgess, Stephen; Zuber, Verena; Valdes-Marquez, Elsa; Sun, Benjamin B; Hopewell, Jemma C

    2017-12-01

    Mendelian randomization uses genetic variants to make causal inferences about the effect of a risk factor on an outcome. With fine-mapped genetic data, there may be hundreds of genetic variants in a single gene region any of which could be used to assess this causal relationship. However, using too many genetic variants in the analysis can lead to spurious estimates and inflated Type 1 error rates. But if only a few genetic variants are used, then the majority of the data is ignored and estimates are highly sensitive to the particular choice of variants. We propose an approach based on summarized data only (genetic association and correlation estimates) that uses principal components analysis to form instruments. This approach has desirable theoretical properties: it takes the totality of data into account and does not suffer from numerical instabilities. It also has good properties in simulation studies: it is not particularly sensitive to varying the genetic variants included in the analysis or the genetic correlation matrix, and it does not have greatly inflated Type 1 error rates. Overall, the method gives estimates that are less precise than those from variable selection approaches (such as using a conditional analysis or pruning approach to select variants), but are more robust to seemingly arbitrary choices in the variable selection step. Methods are illustrated by an example using genetic associations with testosterone for 320 genetic variants to assess the effect of sex hormone related pathways on coronary artery disease risk, in which variable selection approaches give inconsistent inferences. © 2017 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
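
    A rough sketch of the summarized-data idea follows: take the leading principal components of the variant correlation matrix as instruments, project the variant-exposure and variant-outcome associations onto them, and apply generalized inverse-variance weighting to the transformed, full-rank system. The weighting and component-selection details of the published method may differ, and all inputs below are simulated.

        import numpy as np

        def pca_ivw(beta_x, beta_y, se_y, rho, k=4):
            """Mendelian randomization with correlated variants via PCA of summarized data.

            beta_x : variant-exposure associations
            beta_y : variant-outcome associations
            se_y   : standard errors of beta_y
            rho    : variant correlation (LD) matrix
            k      : number of principal components used as instruments
            """
            beta_x, beta_y, se_y = map(np.asarray, (beta_x, beta_y, se_y))
            rho = np.asarray(rho)
            omega = np.outer(se_y, se_y) * rho      # covariance of beta_y across variants
            _, _, vt = np.linalg.svd(rho)
            w = vt[:k].T                            # loadings of the top k components
            bx, by = w.T @ beta_x, w.T @ beta_y     # transformed associations
            cov = w.T @ omega @ w                   # their (full-rank) covariance
            cov_inv = np.linalg.inv(cov)
            theta = (bx @ cov_inv @ by) / (bx @ cov_inv @ bx)   # generalized IVW estimate
            se = np.sqrt(1.0 / (bx @ cov_inv @ bx))
            return theta, se

        # Simulated region of 6 correlated variants with a true causal effect of 0.3.
        rng = np.random.default_rng(2)
        rho = 0.6 * np.ones((6, 6)) + 0.4 * np.eye(6)
        beta_x = rng.normal(0.1, 0.02, 6)
        beta_y = 0.3 * beta_x + rng.normal(0, 0.01, 6)
        print(pca_ivw(beta_x, beta_y, se_y=np.full(6, 0.01), rho=rho, k=3))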

  18. Stroke Prevalence in Children With Sickle Cell Disease in Sub-Saharan Africa: A Systematic Review and Meta-Analysis

    PubMed Central

    Munube, Deogratias; Kasirye, Philip; Mupere, Ezekiel; Jin, Zhezhen; LaRussa, Philip; Idro, Richard; Green, Nancy S.

    2018-01-01

    Objectives. The prevalence of stroke among children with sickle cell disease (SCD) in sub-Saharan Africa was systematically reviewed. Methods. Comprehensive searches of PubMed, Embase, and Web of Science were performed for articles published between 1980 and 2016 (English or French) reporting stroke prevalence. Using preselected inclusion criteria, titles and abstracts were screened and full-text articles were reviewed. Results. Ten full-text articles met selection criteria. Cross-sectional clinic-based data reported 2.9% to 16.9% stroke prevalence among children with SCD. Using available sickle gene frequencies by country, estimated pediatric mortality, and fixed- and random-effects models, the number of affected individuals is projected as 29 800 (95% confidence interval = 25 571-34 027) and 59 732 (37 004-82 460), respectively. Conclusion. Systematic review enabled the estimation of the number of children with SCD stroke in sub-Saharan Africa. High disease mortality, inaccurate diagnosis, and regional variability of risk hamper more precise estimates. Adopting standardized stroke assessments may provide more accurate determination of numbers affected to inform preventive interventions. PMID:29785408
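    For readers unfamiliar with the pooling step, a generic fixed-effect and DerSimonian-Laird random-effects combination of study-level estimates is sketched below; it is illustrative only and is not the exact model or data used in the review.

    ```python
    import numpy as np

    def pool_estimates(est, se):
        """Fixed-effect and DerSimonian-Laird random-effects pooled estimates."""
        est, se = np.asarray(est, float), np.asarray(se, float)
        w = 1.0 / se**2                               # fixed-effect (inverse-variance) weights
        fixed = np.sum(w * est) / np.sum(w)
        q = np.sum(w * (est - fixed) ** 2)            # Cochran's Q
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (q - (len(est) - 1)) / c)     # between-study variance
        w_re = 1.0 / (se**2 + tau2)                   # random-effects weights
        random = np.sum(w_re * est) / np.sum(w_re)
        return (fixed, np.sqrt(1 / np.sum(w))), (random, np.sqrt(1 / np.sum(w_re)))
    ```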

  19. Stroke Prevalence in Children With Sickle Cell Disease in Sub-Saharan Africa: A Systematic Review and Meta-Analysis.

    PubMed

    Marks, Lianna J; Munube, Deogratias; Kasirye, Philip; Mupere, Ezekiel; Jin, Zhezhen; LaRussa, Philip; Idro, Richard; Green, Nancy S

    2018-01-01

    Objectives. The prevalence of stroke among children with sickle cell disease (SCD) in sub-Saharan Africa was systematically reviewed. Methods. Comprehensive searches of PubMed, Embase, and Web of Science were performed for articles published between 1980 and 2016 (English or French) reporting stroke prevalence. Using preselected inclusion criteria, titles and abstracts were screened and full-text articles were reviewed. Results. Ten full-text articles met selection criteria. Cross-sectional clinic-based data reported 2.9% to 16.9% stroke prevalence among children with SCD. Using available sickle gene frequencies by country, estimated pediatric mortality, and fixed- and random-effects models, the number of affected individuals is projected as 29 800 (95% confidence interval = 25 571-34 027) and 59 732 (37 004-82 460), respectively. Conclusion. Systematic review enabled the estimation of the number of children with SCD stroke in sub-Saharan Africa. High disease mortality, inaccurate diagnosis, and regional variability of risk hamper more precise estimates. Adopting standardized stroke assessments may provide more accurate determination of numbers affected to inform preventive interventions.

  20. Area estimation of crops by digital analysis of Landsat data

    NASA Technical Reports Server (NTRS)

    Bauer, M. E.; Hixson, M. M.; Davis, B. J.

    1978-01-01

    The study for which the results are presented had these objectives: (1) to use Landsat data and computer-implemented pattern recognition to classify the major crops from regions encompassing different climates, soils, and crops; (2) to estimate crop areas for counties and states by using crop identification data obtained from the Landsat identifications; and (3) to evaluate the accuracy, precision, and timeliness of crop area estimates obtained from Landsat data. The paper describes the method of developing the training statistics and evaluating the classification accuracy. Landsat MSS data were adequate to accurately identify wheat in Kansas; corn and soybean estimates for Indiana were less accurate. Systematic sampling of entire counties made possible by computer classification methods resulted in very precise area estimates at county, district, and state levels.

  1. Comparison of Vehicle-Broadcasted Fuel Consumption Rates against Precise Fuel Measurements for Medium- and Heavy-Duty Vehicles and Engines

    DOE PAGES

    Pink, Alex; Ragatz, Adam; Wang, Lijuan; ...

    2017-03-28

    Vehicles continuously report real-time fuel consumption estimates over their data bus, known as the controller area network (CAN). However, the accuracy of these fueling estimates is uncertain to researchers who collect these data from any given vehicle. To assess the accuracy of these estimates, CAN-reported fuel consumption data are compared against fuel measurements from precise instrumentation. The data analyzed consisted of eight medium/heavy-duty vehicles and two medium-duty engines. Varying discrepancies between CAN fueling rates and the more accurate measurements emerged, but without a consistent trend across vehicles: for some vehicles the CAN under-reported fuel consumption and for others the CAN over-reported fuel consumption. Furthermore, a qualitative real-time analysis revealed that the operating conditions under which these fueling discrepancies arose varied among vehicles. A drive cycle analysis revealed that while CAN fueling estimate accuracy differs for individual vehicles, CAN estimates capture the relative fuel consumption differences between drive cycles within 4% for all vehicles, and even more accurately for some vehicles. Finally, in situations where only CAN-reported data are available, CAN fueling estimates can provide relative fuel consumption trends but not accurate or precise fuel consumption rates.

  2. Comparison of Vehicle-Broadcasted Fuel Consumption Rates against Precise Fuel Measurements for Medium- and Heavy-Duty Vehicles and Engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pink, Alex; Ragatz, Adam; Wang, Lijuan

    Vehicles continuously report real-time fuel consumption estimates over their data bus, known as the controller area network (CAN). However, the accuracy of these fueling estimates is uncertain to researchers who collect these data from any given vehicle. To assess the accuracy of these estimates, CAN-reported fuel consumption data are compared against fuel measurements from precise instrumentation. The data analyzed consisted of eight medium/heavy-duty vehicles and two medium-duty engines. Varying discrepancies between CAN fueling rates and the more accurate measurements emerged, but without a consistent trend across vehicles: for some vehicles the CAN under-reported fuel consumption and for others the CAN over-reported fuel consumption. Furthermore, a qualitative real-time analysis revealed that the operating conditions under which these fueling discrepancies arose varied among vehicles. A drive cycle analysis revealed that while CAN fueling estimate accuracy differs for individual vehicles, CAN estimates capture the relative fuel consumption differences between drive cycles within 4% for all vehicles, and even more accurately for some vehicles. Finally, in situations where only CAN-reported data are available, CAN fueling estimates can provide relative fuel consumption trends but not accurate or precise fuel consumption rates.

  3. StatSTEM: An efficient approach for accurate and precise model-based quantification of atomic resolution electron microscopy images.

    PubMed

    De Backer, A; van den Bos, K H W; Van den Broek, W; Sijbers, J; Van Aert, S

    2016-12-01

    An efficient model-based estimation algorithm is introduced to quantify the atomic column positions and intensities from atomic resolution (scanning) transmission electron microscopy ((S)TEM) images. This algorithm uses the least squares estimator on image segments containing individual columns while fully accounting for overlap between neighbouring columns, enabling the analysis of a large field of view. For this algorithm, the accuracy and precision with which the atomic column positions and scattering cross-sections can be estimated from annular dark field (ADF) STEM images have been investigated. The highest attainable precision is reached even for low-dose images. Furthermore, the advantages of the model-based approach taking into account overlap between neighbouring columns are highlighted. This is done for the estimation of the distance between two neighbouring columns as a function of their distance, and for the estimation of the scattering cross-section, which is compared to the integrated intensity from a Voronoi cell. To provide end-users with this well-established quantification method, a user-friendly program, StatSTEM, has been developed and is freely available under a GNU public license. Copyright © 2016 Elsevier B.V. All rights reserved.
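    As a minimal illustration of the model-based idea (fitting a parametric peak to an image segment and reading off position and integrated intensity), a single-column 2D Gaussian fit is sketched below; StatSTEM itself is a MATLAB program and fits overlapping columns jointly, which is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def fit_column(segment, x, y, p0):
        """Fit one 2D Gaussian (x0, y0, height, width, background) to an image segment.

        x, y are coordinate grids (e.g. from np.meshgrid) matching segment's shape.
        Returns the column position and the integrated (volume) intensity, which
        plays the role of a scattering cross-section.
        """
        def residuals(p):
            x0, y0, h, s, b = p
            model = b + h * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * s**2))
            return (model - segment).ravel()

        fit = least_squares(residuals, p0)
        x0, y0, h, s, b = fit.x
        return (x0, y0), 2 * np.pi * h * s**2
    ```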

  4. Risks of Lynch Syndrome Cancers for MSH6 Mutation Carriers

    PubMed Central

    Baglietto, Laura; Dowty, James G.; White, Darren M.; Wagner, Anja; Gomez Garcia, Encarna B.; Vriends, Annette H. J. T.; Cartwright, Nicola R.; Barnetson, Rebecca A.; Farrington, Susan M.; Tenesa, Albert; Hampel, Heather; Buchanan, Daniel; Arnold, Sven; Young, Joanne; Walsh, Michael D.; Jass, Jeremy; Macrae, Finlay; Antill, Yoland; Winship, Ingrid M.; Giles, Graham G.; Goldblatt, Jack; Parry, Susan; Suthers, Graeme; Leggett, Barbara; Butz, Malinda; Aronson, Melyssa; Poynter, Jenny N.; Baron, John A.; Le Marchand, Loic; Haile, Robert; Gallinger, Steve; Hopper, John L.; Potter, John; de la Chapelle, Albert; Vasen, Hans F.; Dunlop, Malcolm G.; Thibodeau, Stephen N.; Jenkins, Mark A.

    2010-01-01

    Background Germline mutations in MSH6 account for 10%–20% of Lynch syndrome colorectal cancers caused by hereditary DNA mismatch repair gene mutations. Because there have been only a few studies of mutation carriers, their cancer risks are uncertain. Methods We identified 113 families of MSH6 mutation carriers from five countries that we ascertained through family cancer clinics and population-based cancer registries. Mutation status, sex, age, and histories of cancer, polypectomy, and hysterectomy were sought from 3104 of their relatives. Age-specific cumulative risks for carriers and hazard ratios (HRs) for cancer risks of carriers, compared with those of the general population of the same country, were estimated by use of a modified segregation analysis with appropriate conditioning depending on ascertainment. Results For MSH6 mutation carriers, the estimated cumulative risks to ages 70 and 80 years, respectively, were as follows: for colorectal cancer, 22% (95% confidence interval [CI] = 14% to 32%) and 44% (95% CI = 28% to 62%) for men and 10% (95% CI = 5% to 17%) and 20% (95% CI = 11% to 35%) for women; for endometrial cancer, 26% (95% CI = 18% to 36%) and 44% (95% CI = 30% to 58%); and for any cancer associated with Lynch syndrome, 24% (95% CI = 16% to 37%) and 47% (95% CI = 32% to 66%) for men and 40% (95% CI = 32% to 52%) and 65% (95% CI = 53% to 78%) for women. Compared with incidence for the general population, MSH6 mutation carriers had an eightfold increased incidence of colorectal cancer (HR = 7.6, 95% CI = 5.4 to 10.8), which was independent of sex and age. Women who were MSH6 mutation carriers had a 26-fold increased incidence of endometrial cancer (HR = 25.5, 95% CI = 16.8 to 38.7) and a sixfold increased incidence of other cancers associated with Lynch syndrome (HR = 6.0, 95% CI = 3.4 to 10.7). Conclusion We have obtained precise and accurate estimates of both absolute and relative cancer risks for MSH6 mutation carriers. PMID:20028993

  5. S193 radiometer brightness temperature precision/accuracy for SL2 and SL3

    NASA Technical Reports Server (NTRS)

    Pounds, D. J.; Krishen, K.

    1975-01-01

    The precision and accuracy with which the S193 radiometer measured the brightness temperature of ground scenes is investigated. Estimates were derived from data collected during Skylab missions. Homogeneous ground sites were selected and S193 radiometer brightness temperature data analyzed. The precision was expressed as the standard deviation of the radiometer acquired brightness temperature. Precision was determined to be 2.40 K or better depending on mode and target temperature.

  6. Methods for the accurate estimation of confidence intervals on protein folding ϕ-values

    PubMed Central

    Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.

    2006-01-01

    ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714
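    The covariance point can be made concrete with a delta-method expression for phi = ddG_ts / ddG_eq that keeps the covariance term; this is a simplified sketch, not the paper's exact formulas.

    ```python
    import numpy as np

    def phi_with_error(ddG_ts, ddG_eq, var_ts, var_eq, cov):
        """phi-value and its standard error, propagating the covariance between
        the transition-state and equilibrium free-energy changes."""
        phi = ddG_ts / ddG_eq
        var_phi = phi**2 * (var_ts / ddG_ts**2 + var_eq / ddG_eq**2
                            - 2.0 * cov / (ddG_ts * ddG_eq))
        return phi, np.sqrt(var_phi)
    ```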

  7. MSH3 rs26279 polymorphism increases cancer risk: a meta-analysis

    PubMed Central

    Miao, Hui-Kai; Chen, Li-Ping; Cai, Dong-Ping; Kong, Wei-Ju; Xiao, Li; Lin, Jie

    2015-01-01

    Previous studies have investigated the association of mutS homolog 3 (MSH3) rs26279 G > A polymorphism with the risk of different types of cancers including colorectal cancer, breast cancer, prostate cancer, bladder cancer, thyroid cancer, ovarian cancer and oesophageal cancer. However, the reported associations with cancer remain conflicting. We performed a comprehensive meta-analysis to derive a more precise estimation of the relationship between MSH3 rs26279 G > A polymorphism and cancer susceptibility. Systematically searching the PubMed and EMBASE databases yielded 11 publications with 12 studies of 3282 cases and 6476 controls. The strength of the association was determined by crude odds ratios (OR) and 95% confidence intervals (CI). Overall, pooled risk estimates demonstrated that MSH3 rs26279 G > A was significantly associated with an increased overall cancer risk under all the genetic models (GG vs. AA: OR = 1.27, 95% CI = 1.09-1.48, P = 0.002; AG vs. AA: OR = 1.10, 95% CI = 1.00-1.21, P = 0.045; GG vs. AG + AA: OR = 1.23, 95% CI = 1.06-1.42, P = 0.005; AG + GG vs. AA: OR = 1.13, 95% CI = 1.04-1.24, P = 0.006; G vs. A: OR = 1.13, 95% CI = 1.05-1.20, P = 0.001). The association was more evident for colorectal cancer and breast cancer. Moreover, the significant association was also observed in the following subgroups: Europeans, Asians, population-based studies, hospital-based studies, and studies comprising relatively large sample sizes (≥ 200). Our meta-analysis results demonstrated that the MSH3 rs26279 G > A polymorphism is associated with an increased risk of overall cancer, especially colorectal cancer and breast cancer. PMID:26617824

  8. MSH3 rs26279 polymorphism increases cancer risk: a meta-analysis.

    PubMed

    Miao, Hui-Kai; Chen, Li-Ping; Cai, Dong-Ping; Kong, Wei-Ju; Xiao, Li; Lin, Jie

    2015-01-01

    Previous studies have investigated the association of mutS homolog 3 (MSH3) rs26279 G > A polymorphism with the risk of different types of cancers including colorectal cancer, breast cancer, prostate cancer, bladder cancer, thyroid cancer, ovarian cancer and oesophageal cancer. However, the reported associations with cancer remain conflicting. We performed a comprehensive meta-analysis to derive a more precise estimation of the relationship between MSH3 rs26279 G > A polymorphism and cancer susceptibility. Systematically searching the PubMed and EMBASE databases yielded 11 publications with 12 studies of 3282 cases and 6476 controls. The strength of the association was determined by crude odds ratios (OR) and 95% confidence intervals (CI). Overall, pooled risk estimates demonstrated that MSH3 rs26279 G > A was significantly associated with an increased overall cancer risk under all the genetic models (GG vs. AA: OR = 1.27, 95% CI = 1.09-1.48, P = 0.002; AG vs. AA: OR = 1.10, 95% CI = 1.00-1.21, P = 0.045; GG vs. AG + AA: OR = 1.23, 95% CI = 1.06-1.42, P = 0.005; AG + GG vs. AA: OR = 1.13, 95% CI = 1.04-1.24, P = 0.006; G vs. A: OR = 1.13, 95% CI = 1.05-1.20, P = 0.001). The association was more evident for colorectal cancer and breast cancer. Moreover, the significant association was also observed in the following subgroups: Europeans, Asians, population-based studies, hospital-based studies, and studies comprising relatively large sample sizes (≥ 200). Our meta-analysis results demonstrated that the MSH3 rs26279 G > A polymorphism is associated with an increased risk of overall cancer, especially colorectal cancer and breast cancer.

  9. Cancer risk after resection of polypoid dysplasia in patients with longstanding ulcerative colitis: a meta-analysis.

    PubMed

    Wanders, Linda K; Dekker, Evelien; Pullens, Bo; Bassett, Paul; Travis, Simon P L; East, James E

    2014-05-01

    American and European guidelines propose complete endoscopic resection of polypoid dysplasia (adenomas or adenoma-like masses) in patients with longstanding colitis, with close endoscopic follow-up. The incidence of cancer after detection of flat low-grade dysplasia or dysplasia-associated lesion or mass is estimated at 14 cases/1000 years of patient follow-up. However, the risk for polypoid dysplasia has not been determined with precision. We investigated the risk of cancer after endoscopic resection of polypoid dysplasia in patients with ulcerative colitis. MEDLINE, EMBASE, PubMed, and the Cochrane library were searched for studies of patients with colitis and resected polypoid dysplasia, with reports of colonoscopic follow-up and data on cancers detected. Outcomes from included articles were pooled to provide a single combined estimate of outcomes by using Poisson regression. Of 425 articles retrieved, we analyzed data from 10 studies, comprising 376 patients with colitis and polypoid dysplasia with a combined 1704 years of follow-up. A mean of 2.8 colonoscopies were performed for each patient after the index procedure (range, 0-15 colonoscopies). The pooled incidence of cancer was 5.3 cases (95% confidence interval, 2.7-10.1 cases)/1000 years of patient follow-up. There was no evidence of heterogeneity or publication bias. The pooled rate of any dysplasia was 65 cases (95% confidence interval, 54-78 cases)/1000 patient years. Patients with colitis have a low risk of colorectal cancer after resection of polypoid dysplasia; these findings support the current strategy of resection and surveillance. However, these patients have a 10-fold greater risk of developing any dysplasia than colorectal cancer and should undergo close endoscopic follow-up. Copyright © 2014 AGA Institute. Published by Elsevier Inc. All rights reserved.
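    A crude pooled incidence rate with an exact Poisson confidence interval is sketched below for orientation; the study above used Poisson regression across studies, which this simple pooling does not reproduce.

    ```python
    import numpy as np
    from scipy import stats

    def pooled_rate(events, person_years, per=1000.0, alpha=0.05):
        """Pooled incidence per `per` person-years with a Garwood (exact Poisson) CI."""
        k, t = float(np.sum(events)), float(np.sum(person_years))
        rate = k / t * per
        lo = stats.chi2.ppf(alpha / 2, 2 * k) / (2 * t) * per if k > 0 else 0.0
        hi = stats.chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / (2 * t) * per
        return rate, (lo, hi)
    ```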

  10. General Practice Clinical Data Help Identify Dementia Hotspots: A Novel Geospatial Analysis Approach.

    PubMed

    Bagheri, Nasser; Wangdi, Kinley; Cherbuin, Nicolas; Anstey, Kaarin J

    2018-01-01

    We have a poor understanding of whether dementia clusters geographically, how this occurs, and how dementia may relate to socio-demographic factors. To shed light on these important questions, this study aimed to compute a dementia risk score for individuals to assess spatial variation of dementia risk, identify significant clusters (hotspots), and explore their association with socioeconomic status. We used clinical records from 16 general practices (468 Statistical Area Level 1 units; N = 14,746) from western Adelaide, Australia, for the period 1 January 2012 to 31 December 2014. Dementia risk was estimated using The Australian National University-Alzheimer's Disease Risk Index. Hotspot analyses were applied to examine potential clusters in dementia risk at the small-area level. Significant hotspots were observed in eastern and southern areas, while coldspots were observed in the western area within the study perimeter. Additionally, significant hotspots were observed in low socio-economic communities. We found dementia risk scores increased with age, sex (female), high cholesterol, no physical activity, living alone (widowed, divorced, separated, or never married), and co-morbidities such as diabetes and depression. Conversely, smoking was associated with a lower dementia risk score. The identification of dementia risk clusters may provide insight into possible geographical variations in risk factors for dementia and quantify these risks at the community level. As such, this research may enable policy makers to tailor early prevention strategies to the correct individuals within their precise locations.

  11. Risk-adjusted econometric model to estimate postoperative costs: an additional instrument for monitoring performance after major lung resection.

    PubMed

    Brunelli, Alessandro; Salati, Michele; Refai, Majed; Xiumé, Francesco; Rocco, Gaetano; Sabbatini, Armando

    2007-09-01

    The objectives of this study were to develop a risk-adjusted model to estimate individual postoperative costs after major lung resection and to use it for internal economic audit. Variable and fixed hospital costs were collected for 679 consecutive patients who underwent major lung resection from January 2000 through October 2006 at our unit. Several preoperative variables were used to develop a risk-adjusted econometric model from all patients operated on during the period 2000 through 2003 by a stepwise multiple regression analysis (validated by bootstrap). The model was then used to estimate the postoperative costs in the patients operated on during the 3 subsequent periods (years 2004, 2005, and 2006). Observed and predicted costs were then compared within each period by the Wilcoxon signed rank test. Multiple regression and bootstrap analysis yielded the following model predicting postoperative cost: cost = 11,078 + 1340.3 × (age > 70 years) + 1927.8 × (cardiac comorbidity) - 95 × (ppoFEV1%). No differences between predicted and observed costs were noted in the first 2 periods analyzed (year 2004, $6188.40 vs $6241.40, P = .3; year 2005, $6308.60 vs $6483.60, P = .4), whereas in the most recent period (2006) observed costs were significantly lower than the predicted ones ($3457.30 vs $6162.70, P < .0001). Greater precision in predicting outcome and costs after therapy may assist clinicians in the optimization of clinical pathways and allocation of resources. Our economic model may be used as a methodologic template for economic audit in our specialty and complement more traditional outcome measures in the assessment of performance.
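    The reported regression can be applied directly; the helper below simply evaluates the published coefficients (currency and calibration are specific to the authors' unit and study period).

    ```python
    def predicted_postop_cost(age, cardiac_comorbidity, ppo_fev1_pct):
        """Predicted postoperative cost from the abstract's regression equation."""
        return (11078.0
                + 1340.3 * (1 if age > 70 else 0)
                + 1927.8 * (1 if cardiac_comorbidity else 0)
                - 95.0 * ppo_fev1_pct)

    # Example: a 72-year-old with cardiac comorbidity and ppoFEV1 of 60%
    print(predicted_postop_cost(72, True, 60))   # 8646.1
    ```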

  12. A historical prospective study of European stainless steel, mild steel, and shipyard welders.

    PubMed Central

    Simonato, L; Fletcher, A C; Andersen, A; Anderson, K; Becker, N; Chang-Claude, J; Ferro, G; Gérin, M; Gray, C N; Hansen, K S

    1991-01-01

    A multicentre cohort of 11,092 male welders from 135 companies located in nine European countries has been assembled with the aim of investigating the relation of potential cancer risk, lung cancer in particular, with occupational exposure. The observation period and the criteria for inclusion of welders varied from country to country. Follow up was successful for 96.9% of the cohort and observed numbers of deaths (and for some countries incident cancer cases) were compared with expected numbers calculated from national reference rates. Mortality and cancer incidence ratios were analysed by cause category, time since first exposure, duration of employment, and estimated cumulative dose to total fumes, chromium (Cr), Cr VI, and nickel (Ni). Overall, a statistically significant excess was reported for mortality from lung cancer (116 observed v 86.81 expected deaths, SMR = 134). When analysed by type of welding, an increasing pattern with time since first exposure was present for both mild steel and stainless steel welders, which was more noticeable for the subcohort of predominantly stainless steel welders. No clear relation was apparent between mortality from lung cancer and duration of exposure to or estimated cumulative dose of Ni or Cr. Whereas the patterns of lung cancer mortality in these results suggest that the risk of lung cancer is higher for stainless steel than mild steel welders, the different levels of risk for these two categories of welding exposure cannot be quantified with precision. The report of five deaths from pleural mesothelioma unrelated to the type of welding draws attention to the risk of exposure to asbestos in welding activities. PMID:2015204
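    The headline comparison (116 observed vs 86.81 expected deaths, SMR = 134) is a standardized mortality ratio; a generic computation with an exact Poisson confidence interval is sketched below (the interval is illustrative and not taken from the paper).

    ```python
    from scipy import stats

    def smr(observed, expected, alpha=0.05):
        """SMR (x100) with an exact Poisson confidence interval."""
        ratio = observed / expected * 100.0
        lo = stats.chi2.ppf(alpha / 2, 2 * observed) / (2 * expected) * 100.0
        hi = stats.chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected) * 100.0
        return ratio, (lo, hi)

    print(smr(116, 86.81))   # about 134, matching the reported SMR
    ```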

  13. Analysis of Traffic Crashes Involving Pedestrians Using Big Data: Investigation of Contributing Factors and Identification of Hotspots.

    PubMed

    Xie, Kun; Ozbay, Kaan; Kurkcu, Abdullah; Yang, Hong

    2017-08-01

    This study aims to explore the potential of using big data in advancing the pedestrian risk analysis including the investigation of contributing factors and the hotspot identification. Massive amounts of data of Manhattan from a variety of sources were collected, integrated, and processed, including taxi trips, subway turnstile counts, traffic volumes, road network, land use, sociodemographic, and social media data. The whole study area was uniformly split into grid cells as the basic geographical units of analysis. The cell-structured framework makes it easy to incorporate rich and diversified data into risk analysis. The cost of each crash, weighted by injury severity, was assigned to the cells based on the relative distance to the crash site using a kernel density function. A tobit model was developed to relate grid-cell-specific contributing factors to crash costs that are left-censored at zero. The potential for safety improvement (PSI) that could be obtained by using the actual crash cost minus the cost of "similar" sites estimated by the tobit model was used as a measure to identify and rank pedestrian crash hotspots. The proposed hotspot identification method takes into account two important factors that are generally ignored, i.e., injury severity and effects of exposure indicators. Big data, on the one hand, enable more precise estimation of the effects of risk factors by providing richer data for modeling, and on the other hand, enable large-scale hotspot identification with higher resolution than conventional methods based on census tracts or traffic analysis zones. © 2017 Society for Risk Analysis.
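    Two steps described above, assigning severity-weighted crash cost to grid cells with a distance kernel and ranking cells by the potential for safety improvement, are sketched below; the kernel form and bandwidth are assumptions, not the paper's settings, and the tobit model itself is not shown.

    ```python
    import numpy as np

    def cell_crash_cost(cell_xy, crash_xy, crash_cost, bandwidth=150.0):
        """Severity-weighted crash cost assigned to one cell via a Gaussian distance kernel."""
        d = np.linalg.norm(np.asarray(crash_xy) - np.asarray(cell_xy), axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)
        return float(np.sum(w * np.asarray(crash_cost)))

    def psi(observed_cost, predicted_cost):
        """Potential for safety improvement: observed minus model-predicted cost."""
        return observed_cost - predicted_cost
    ```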

  14. An evaluation of the precision of fin ray, otolith, and scale age determinations for brook trout

    USGS Publications Warehouse

    Stolarski, J.T.; Hartman, K.J.

    2008-01-01

    The ages of brook trout Salvelinus fontinalis are typically estimated using scales despite a lack of research documenting the effectiveness of this technique. The use of scales is often preferred because it is nonlethal and is believed to require less effort than alternative methods. To evaluate the relative effectiveness of different age estimation methodologies for brook trout, we measured the precision and processing times of scale, sagittal otolith, and pectoral fin ray age estimation techniques. Three independent readers, age bias plots, coefficients of variation (CV = 100 x SD/mean), and percent agreement (PA) were used to measure within-reader, among-structure bias and within-structure, among-reader precision. Bias was generally minimal; however, the age estimates derived from scales tended to be lower than those derived from otoliths within older (age > 2) cohorts. Otolith, fin ray, and scale age estimates were within 1 year of each other for 95% of the comparisons. The measures of precision for scales (CV = 6.59; PA = 82.30) and otoliths (CV = 7.45; PA = 81.48) suggest higher agreement between these structures than with fin rays (CV = 11.30; PA = 65.84). The mean per-sample processing times were lower for scale (13.88 min) and otolith techniques (12.23 min) than for fin ray techniques (22.68 min). The comparable processing times of scales and otoliths contradict popular belief and are probably a result of the high proportion of regenerated scales within samples and the ability to infer age from whole (as opposed to sectioned) otoliths. This research suggests that while scales produce age estimates rivaling those of otoliths for younger (age ≤ 3) cohorts, they may be biased within older cohorts and therefore should be used with caution. © Copyright by the American Fisheries Society 2008.
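    The two precision metrics used above are simple to compute; a generic sketch for an ages array of shape (n_fish, n_readers) follows, where agreement is taken as complete agreement across readers (an assumption; pairwise definitions are also common).

    ```python
    import numpy as np

    def reader_precision(ages):
        """Mean CV (100 * SD / mean per fish) and percent complete agreement among readers."""
        ages = np.asarray(ages, float)
        cv = np.mean(100.0 * np.std(ages, axis=1, ddof=1) / np.mean(ages, axis=1))
        agree = np.mean(np.all(ages == ages[:, [0]], axis=1)) * 100.0
        return cv, agree
    ```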

  15. Impact of orbit modeling on DORIS station position and Earth rotation estimates

    NASA Astrophysics Data System (ADS)

    Štěpánek, Petr; Rodriguez-Solano, Carlos Javier; Hugentobler, Urs; Filler, Vratislav

    2014-04-01

    The high precision of estimated station coordinates and Earth rotation parameters (ERP) obtained from satellite geodetic techniques is based on the precise determination of the satellite orbit. This paper focuses on the analysis of the impact of different orbit parameterizations on the accuracy of station coordinates and the ERPs derived from DORIS observations. In a series of experiments, the DORIS data from the complete year 2011 were processed with different orbit model settings. First, the impact of precise modeling of the non-conservative forces on geodetic parameters was compared with results obtained with an empirical-stochastic modeling approach. Second, the temporal spacing of drag scaling parameters was tested. Third, the impact of estimating once-per-revolution harmonic accelerations in the cross-track direction was analyzed. And fourth, two different approaches to solar radiation pressure (SRP) handling were compared, namely adjusting the SRP scaling parameter or fixing it to pre-defined values. Our analyses confirm that the empirical-stochastic orbit modeling approach, which does not require satellite attitude information or macro models, yields accuracy comparable to the dynamical model employing precise non-conservative force modeling for most of the monitored station parameters. However, the dynamical orbit model leads to a reduction of the RMS values for the estimated rotation pole coordinates by 17% for the x-pole and 12% for the y-pole. The experiments show that adjusting atmospheric drag scaling parameters every 30 min is appropriate for DORIS solutions. Moreover, it was shown that the adjustment of a cross-track once-per-revolution empirical parameter increases the RMS of the estimated Earth rotation pole coordinates. With recent data, however, it was not possible to confirm the previously reported high annual variation in the estimated geocenter z-translation series, or its mitigation by fixing the SRP parameters to pre-defined values.

  16. Informed spectral analysis: audio signal parameter estimation using side information

    NASA Astrophysics Data System (ADS)

    Fourer, Dominique; Marchand, Sylvain

    2013-12-01

    Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound shows the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by using the coding approach, which consists of directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by the estimation from the signal, and may require a larger bitrate and entail a loss of compatibility with existing file formats. The purpose of this article is to propose a compromise approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation using a lower bitrate than pure coding approaches, the audio signal being known. Thus, the analysis problem is presented in a coder/decoder configuration where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and is used to assist the analysis process. This study proposes applying this approach to audio spectral analysis using sinusoidal modeling, which is a well-known model with practical applications and for which theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications. It provides a solution for challenging problems like active listening of music, source separation, and realistic sound transformations.

  17. Impact of Multi-GNSS Observations on Precise Orbit Determination and Precise Point Positioning Solutions

    NASA Astrophysics Data System (ADS)

    Amiri, N.; Bertiger, W. I.; Lu, W.; Miller, M. A.; Murphy, D. W.; Ries, P.; Romans, L.; Sibois, A. E.; Sibthorpe, A.; Sakumura, C.

    2017-12-01

    Multiple Global Navigation Satellite Systems (GNSS) are now in various stages of completion. The four current constellations (GPS, GLONASS, BeiDou, Galileo) comprise more than 80 satellites as of July 2017, with 120 satellites expected to be available when all four constellations become fully operational. We investigate the impact of simultaneous observations to these four constellations on global network precise orbit determination (POD) solutions, and compare them to available sets of orbit and clock products submitted to the Multi-GNSS Experiment (MGEX). Using JPL's GipsyX software, we generate orbit and clock products for the four constellations. The resulting solutions are evaluated based on a number of metrics including day-to-day internal and external orbit and/or clock overlaps and estimated constellation biases. Additionally, we examine estimated station positions obtained from precise point positioning (PPP) solutions by comparing results generated from multi-GNSS and GPS-only orbit and clock products.

  18. Precision estimate for Odin-OSIRIS limb scatter retrievals

    NASA Astrophysics Data System (ADS)

    Bourassa, A. E.; McLinden, C. A.; Bathgate, A. F.; Elash, B. J.; Degenstein, D. A.

    2012-02-01

    The limb scatter measurements made by the Optical Spectrograph and Infrared Imaging System (OSIRIS) instrument on the Odin spacecraft are used to routinely produce vertically resolved trace gas and aerosol extinction profiles. Version 5 of the ozone and stratospheric aerosol extinction retrievals, which is available for download, is performed using a multiplicative algebraic reconstruction technique (MART). The MART inversion is a type of relaxation method, and as such the covariance of the retrieved state is estimated numerically, which, if done directly, is a computationally heavy task. Here we provide a methodology for the derivation of a numerical estimate of the covariance matrix for the retrieved state using the MART inversion that is sufficiently efficient to perform for each OSIRIS measurement. The resulting precision is compared with the variability in a large set of pairs of OSIRIS measurements that are close in time and space in the tropical stratosphere, where the natural atmospheric variability is weak. These results are found to be highly consistent and thus provide confidence in the numerical estimate of the precision in the retrieved profiles.

  19. Precision bounds for gradient magnetometry with atomic ensembles

    NASA Astrophysics Data System (ADS)

    Apellaniz, Iagoba; Urizar-Lanz, Iñigo; Zimborás, Zoltán; Hyllus, Philipp; Tóth, Géza

    2018-05-01

    We study gradient magnetometry with an ensemble of atoms with arbitrary spin. We calculate precision bounds for estimating the gradient of the magnetic field based on the quantum Fisher information. For quantum states that are invariant under homogeneous magnetic fields, we need to measure a single observable to estimate the gradient. On the other hand, for states that are sensitive to homogeneous fields, a simultaneous measurement is needed, as the homogeneous field must also be estimated. We prove that for the cases studied in this paper, such a measurement is feasible. We present a method to calculate precision bounds for gradient estimation with a chain of atoms or with two spatially separated atomic ensembles. We also consider a single atomic ensemble with an arbitrary density profile, where the atoms cannot be addressed individually, and which is a very relevant case for experiments. Our model can take into account even correlations between particle positions. While in most of the discussion we consider an ensemble of localized particles that are classical with respect to their spatial degree of freedom, we also discuss the case of gradient metrology with a single Bose-Einstein condensate.

  20. Gravimetric Analysis of Particulate Matter using Air Samplers Housing Internal Filtration Capsules.

    PubMed

    O'Connor, Sean; O'Connor, Paula Fey; Feng, H Amy; Ashley, Kevin

    2014-10-01

    An evaluation was carried out to investigate the suitability of polyvinyl chloride (PVC) internal capsules, housed within air sampling devices, for gravimetric analysis of airborne particles collected in workplaces. Experiments were carried out using blank PVC capsules and PVC capsules spiked with 0.1-4 mg of National Institute of Standards and Technology Standard Reference Material® (NIST SRM) 1648 (Urban Particulate Matter) and Arizona Road Dust (Air Cleaner Test Dust). The capsules were housed within plastic closed-face cassette samplers (CFCs). A method detection limit (MDL) of 0.075 mg per sample was estimated. Precision (Sr) at 0.5-4 mg per sample was 0.031 and the estimated bias was 0.058. Weight stability over 28 days was verified for both blanks and spiked capsules. Independent laboratory testing on blanks and field samples verified long-term weight stability as well as sampling and analysis precision and bias estimates. An overall precision estimate (Ŝrt) of 0.059 was obtained. An accuracy measure of ±15.5% was found for the gravimetric method using PVC internal capsules.
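    For orientation, a sketch of how such summary statistics are commonly derived from replicate weighings is given below; the conventions (e.g. MDL as three times the blank standard deviation) are assumptions and may differ from the evaluation's exact definitions.

    ```python
    import numpy as np

    def gravimetric_stats(blank_mg, spiked_mg, spiked_true_mg):
        """MDL, relative bias, and relative precision from blank and spiked capsule weighings."""
        blank_mg = np.asarray(blank_mg, float)
        spiked_mg = np.asarray(spiked_mg, float)
        spiked_true_mg = np.asarray(spiked_true_mg, float)
        mdl = 3.0 * np.std(blank_mg, ddof=1)          # assumed convention for the detection limit
        rel_err = (spiked_mg - spiked_true_mg) / spiked_true_mg
        return mdl, np.mean(rel_err), np.std(rel_err, ddof=1)
    ```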

  1. Gravimetric Analysis of Particulate Matter using Air Samplers Housing Internal Filtration Capsules

    PubMed Central

    O'Connor, Sean; O'Connor, Paula Fey; Feng, H. Amy

    2015-01-01

    An evaluation was carried out to investigate the suitability of polyvinyl chloride (PVC) internal capsules, housed within air sampling devices, for gravimetric analysis of airborne particles collected in workplaces. Experiments were carried out using blank PVC capsules and PVC capsules spiked with 0.1-4 mg of National Institute of Standards and Technology Standard Reference Material® (NIST SRM) 1648 (Urban Particulate Matter) and Arizona Road Dust (Air Cleaner Test Dust). The capsules were housed within plastic closed-face cassette samplers (CFCs). A method detection limit (MDL) of 0.075 mg per sample was estimated. Precision (Sr) at 0.5-4 mg per sample was 0.031 and the estimated bias was 0.058. Weight stability over 28 days was verified for both blanks and spiked capsules. Independent laboratory testing on blanks and field samples verified long-term weight stability as well as sampling and analysis precision and bias estimates. An overall precision estimate (Ŝrt) of 0.059 was obtained. An accuracy measure of ±15.5% was found for the gravimetric method using PVC internal capsules. PMID:26435581

  2. Choosing a DIVA: a comparison of emerging digital imagery vegetation analysis techniques

    USGS Publications Warehouse

    Jorgensen, Christopher F.; Stutzman, Ryan J.; Anderson, Lars C.; Decker, Suzanne E.; Powell, Larkin A.; Schacht, Walter H.; Fontaine, Joseph J.

    2013-01-01

    Question: What is the precision of five methods of measuring vegetation structure using ground-based digital imagery and processing techniques? Location: Lincoln, Nebraska, USA. Methods: Vertical herbaceous cover was recorded using digital imagery techniques at two distinct locations in a mixed-grass prairie. The precision of five ground-based digital imagery vegetation analysis (DIVA) methods for measuring vegetation structure was tested using a split-split plot analysis of covariance. Variability within each DIVA technique was estimated using the coefficient of variation of mean percentage cover. Results: Vertical herbaceous cover estimates differed among DIVA techniques. Additionally, environmental conditions affected the vertical vegetation obstruction estimates for certain digital imagery methods, while other techniques were more adept at handling various conditions. Overall, percentage vegetation cover values differed among techniques, but the precision of four of the five techniques was consistently high. Conclusions: DIVA procedures are sufficient for measuring various heights and densities of standing herbaceous cover. Moreover, digital imagery techniques can reduce measurement error associated with multiple observers' standing herbaceous cover estimates, allowing greater opportunity to detect patterns associated with vegetation structure.

  3. Food and Feed Safety Assessment: The Importance of Proper Sampling.

    PubMed

    Kuiper, Harry A; Paoletti, Claudia

    2015-03-24

    The general principles for safety and nutritional evaluation of foods and feed and the potential health risks associated with hazardous compounds are described as developed by the Food and Agriculture Organization (FAO) and the World Health Organization (WHO) and further elaborated in the European Union-funded project Safe Foods. We underline the crucial role of sampling in foods/feed safety assessment. High quality sampling should always be applied to ensure the use of adequate and representative samples as test materials for hazard identification, toxicological and nutritional characterization of identified hazards, as well as for estimating quantitative and reliable exposure levels of foods/feed or related compounds of concern for humans and animals. The importance of representative sampling is emphasized through examples of risk analyses in different areas of foods/feed production. The Theory of Sampling (TOS) is recognized as the only framework within which to ensure accuracy and precision of all sampling steps involved in the field-to-fork continuum, which is crucial to monitor foods and feed safety. Therefore, TOS must be integrated in the well-established FAO/WHO risk assessment approach in order to guarantee a transparent and correct frame for the risk assessment and decision making process.

  4. Food and feed safety assessment: the importance of proper sampling.

    PubMed

    Kuiper, Harry A; Paoletti, Claudia

    2015-01-01

    The general principles for safety and nutritional evaluation of foods and feed and the potential health risks associated with hazardous compounds are described as developed by the Food and Agriculture Organization (FAO) and the World Health Organization (WHO) and further elaborated in the European Union-funded project Safe Foods. We underline the crucial role of sampling in foods/feed safety assessment. High quality sampling should always be applied to ensure the use of adequate and representative samples as test materials for hazard identification, toxicological and nutritional characterization of identified hazards, as well as for estimating quantitative and reliable exposure levels of foods/feed or related compounds of concern for humans and animals. The importance of representative sampling is emphasized through examples of risk analyses in different areas of foods/feed production. The Theory of Sampling (TOS) is recognized as the only framework within which to ensure accuracy and precision of all sampling steps involved in the field-to-fork continuum, which is crucial to monitor foods and feed safety. Therefore, TOS must be integrated in the well-established FAO/WHO risk assessment approach in order to guarantee a transparent and correct frame for the risk assessment and decision making process.

  5. Mapped Landmark Algorithm for Precision Landing

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew; Ansar, Adnan; Matthies, Larry

    2007-01-01

    A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel matches. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching, which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
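    The frequency-domain matching step can be illustrated with a plain FFT cross-correlation (a language-agnostic sketch in Python; the flight code is C++ and also normalizes by local image energy, which is omitted here).

    ```python
    import numpy as np

    def fft_match(template, image):
        """Offset of the best (circular) cross-correlation match of template in image."""
        t = template - template.mean()
        i = image - image.mean()
        t = np.pad(t, [(0, image.shape[0] - template.shape[0]),
                       (0, image.shape[1] - template.shape[1])])
        corr = np.real(np.fft.ifft2(np.fft.fft2(i) * np.conj(np.fft.fft2(t))))
        return np.unravel_index(np.argmax(corr), corr.shape)
    ```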

  6. Precision and accuracy of age estimates obtained from anal fin spines, dorsal fin spines, and sagittal otoliths for known-age largemouth bass

    USGS Publications Warehouse

    Klein, Zachary B.; Bonvechio, Timothy F.; Bowen, Bryant R.; Quist, Michael C.

    2017-01-01

    Sagittal otoliths are the preferred aging structure for Micropterus spp. (black basses) in North America because of the accurate and precise results produced. Typically, fisheries managers are hesitant to use lethal aging techniques (e.g., otoliths) to age rare species, trophy-size fish, or when sampling in small impoundments where populations are small. Therefore, we sought to evaluate the precision and accuracy of 2 non-lethal aging structures (i.e., anal fin spines, dorsal fin spines) in comparison to that of sagittal otoliths from known-age Micropterus salmoides (Largemouth Bass; n = 87) collected from the Ocmulgee Public Fishing Area, GA. Sagittal otoliths exhibited the highest concordance with true ages of all structures evaluated (coefficient of variation = 1.2; percent agreement = 91.9). Similarly, the low coefficient of variation (0.0) and high between-reader agreement (100%) indicate that age estimates obtained from sagittal otoliths were the most precise. Relatively high agreement between readers for anal fin spines (84%) and dorsal fin spines (81%) suggested the structures were relatively precise. However, age estimates from anal fin spines and dorsal fin spines exhibited low concordance with true ages. Although use of sagittal otoliths is a lethal technique, this method will likely remain the standard for aging Largemouth Bass and other similar black bass species.

  7. The ACS statistical analyzer

    DOT National Transportation Integrated Search

    2010-03-01

    This document provides guidance for using the ACS Statistical Analyzer. It is an Excel-based template for users of estimates from the American Community Survey (ACS) to assess the precision of individual estimates and to compare pairs of estimates fo...

  8. Non-invasive assessment of bone quantity and quality in human trabeculae using scanning ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Xia, Yi

    Fractures and associated bone fragility induced by osteoporosis and osteopenia are a widespread health threat to modern society. Early detection of fracture risk associated with bone quantity and quality is important for both the prevention and treatment of osteoporosis and consequent complications. Quantitative ultrasound (QUS) is an engineering technology for monitoring the bone quantity and quality of humans on earth and of astronauts subjected to long-duration microgravity. Factors currently limiting the acceptance of QUS technology include precision, accuracy, reliance on a single index, and standardization. The objective of this study was to improve the accuracy and precision of an image-based QUS technique for non-invasive evaluation of trabecular bone quantity and quality by developing new techniques and understanding ultrasound/tissue interaction. Several new techniques have been developed in this dissertation study, including the automatic identification of an irregular region of interest (iROI) in bone, surface topology mapping (STM), and mean scattering spacing (MSS) estimation for evaluating trabecular bone structure. In vitro results have shown that (1) the inter- and intra-observer errors in QUS measurement were reduced two- to five-fold by iROI compared to previous results; (2) the accuracy of QUS parameters, e.g., ultrasound velocity (UV) through bone, was improved by 16% by STM; and (3) the averaged trabecular spacing can be estimated by the MSS technique (r2 = 0.72, p < 0.01). The measurement errors of BUA and UV introduced by the soft tissue and cortical shells in vivo can be quantified by the developed foot model and a simplified cortical-trabecular-cortical sandwich model, which were verified by the experimental results. The mechanisms of the errors induced by the cortical and soft tissues were revealed by the models. With the newly developed techniques and understanding of sound-tissue interaction, an in vivo clinical trial and a bed rest study were performed to evaluate the performance of QUS in clinical applications. It has been demonstrated that QUS has performance similar to the current gold-standard method, i.e., DXA, for in vivo bone density measurement, while QUS provides additional information for predicting fracture risk by monitoring bone quality. The developed QUS imaging technique can be used to assess bone quantity and quality with improved accuracy and precision.

  9. Identifying and quantifying secondhand smoke in multiunit homes with tobacco smoke odor complaints

    NASA Astrophysics Data System (ADS)

    Dacunto, Philip J.; Cheng, Kai-Chung; Acevedo-Bolton, Viviana; Klepeis, Neil E.; Repace, James L.; Ott, Wayne R.; Hildemann, Lynn M.

    2013-06-01

    Accurate identification and quantification of the secondhand tobacco smoke (SHS) that drifts between multiunit homes (MUHs) is essential for assessing resident exposure and health risk. We collected 24 gaseous and particle measurements over 6-9 day monitoring periods in five nonsmoking MUHs with reported SHS intrusion problems. Nicotine tracer sampling showed evidence of SHS intrusion in all five homes during the monitoring period; logistic regression and chemical mass balance (CMB) analysis enabled identification and quantification of some of the precise periods of SHS entry. Logistic regression models identified SHS in eight periods when residents complained of SHS odor, and CMB provided estimates of SHS magnitude in six of these eight periods. Both approaches properly identified or apportioned all six cooking periods used as no-SHS controls. Finally, both approaches enabled identification and/or apportionment of suspected SHS in five additional periods when residents did not report smelling smoke. The time resolution of this methodology goes beyond sampling methods involving single tracers (such as nicotine), enabling the precise identification of the magnitude and duration of SHS intrusion, which is essential for accurate assessment of human exposure.

  10. Association between alcohol dehydrogenase 1C gene *1/*2 polymorphism and pancreatitis risk: a meta-analysis.

    PubMed

    Fang, F; Pan, J; Su, G H; Xu, L X; Li, G; Li, Z H; Zhao, H; Wang, J

    2015-11-30

    Numerous studies have focused on the relationship between alcohol dehydrogenase 1C gene (ADH1C) *1/*2 polymorphism (Ile350Val, rs698, also known as ADH1C *1/*2) and pancreatitis risk, but the results have been inconsistent. Thus, we conducted a meta-analysis to more precisely estimate this association. Relevant publications were searched in several widely used databases and 9 eligible studies were included in the meta-analysis. Pooled odds ratios (ORs) and 95% confidence intervals (CIs) were calculated to evaluate the strength of the association. Significant associations between ADH1C *1/*2 polymorphism and pancreatitis risk were observed in both the overall meta-analysis for 12 vs 22 (OR = 1.53, 95%CI = 1.12-2.10) and 11 + 12 vs 22 (OR = 1.44, 95%CI = 1.07-1.95), and the chronic alcoholic pancreatitis subgroup for 12 vs 22 (OR = 1.64, 95%CI = 1.17-2.29) and 11 + 12 vs 22 (OR = 1.53, 95%CI = 1.11-2.11). Significant pancreatitis risk variation was also detected in Caucasians for 11 + 12 vs 22 (OR = 1.45, 95%CI = 1.07-1.98). In conclusion, the ADH1C *1/*2 polymorphism is likely associated with pancreatitis risk, particularly chronic alcoholic pancreatitis risk, with the *1 allele functioning as a risk factor.

  11. Precision Pointing Control to and Accurate Target Estimation of a Non-Cooperative Vehicle

    NASA Technical Reports Server (NTRS)

    VanEepoel, John; Thienel, Julie; Sanner, Robert M.

    2006-01-01

    In 2004, NASA began investigating a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates in order to achieve capture by the proposed Hubble Robotic Vehicle (HRV), but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST. To generalize the situation, HST is the target vehicle and HRV is the chaser. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a control scheme. Non-cooperative in this context relates to the target vehicle no longer having the ability to maintain attitude control or transmit attitude knowledge.

  12. Estimating bias in causes of death ascertainment in the Finnish Randomized Study of Screening for Prostate Cancer.

    PubMed

    Kilpeläinen, Tuomas P; Mäkinen, Tuukka; Karhunen, Pekka J; Aro, Jussi; Lahtela, Jorma; Taari, Kimmo; Talala, Kirsi; Tammela, Teuvo L J; Auvinen, Anssi

    2016-12-01

    Precise cause of death (CoD) ascertainment is crucial in any cancer screening trial to avoid bias from misclassification due to excessive recording of diagnosed cancer as a CoD in death certificates instead of non-cancer disease that actually caused death. We estimated whether there was bias in CoD determination between screening (SA) and control arms (CA) in a population-based prostate cancer (PCa) screening trial. Our trial is the largest component of the European Randomized Study of Screening for Prostate Cancer with more than 80,000 men. Randomly selected deaths in men with PCa (N=442/2568 cases, 17.2%) were reviewed by an independent CoD committee. Median follow-up was 16.8 years in both arms. Overdiagnosis of PCa was present in the SA as the risk ratio for PCa incidence was 1.19 (95% confidence interval (CI) 1.14-1.24). The hazard ratio (HR) for PCa mortality was 0.94 (95%CI 0.82-1.08) in favor of the SA. Agreement with official CoD registry was 94.6% (κ=0.88) in the SA and 95.4% (κ=0.91) in the CA. Altogether 14 PCa deaths were estimated as false-positive in both arms and exclusion of these resulted in HR 0.92 (95% CI 0.80-1.06). A small differential misclassification bias in ascertainment of CoD was present, most likely due to attribution bias (overdiagnosis in the SA). Maximum precision in CoD ascertainment can only be achieved with independent review of all deaths in the diseased population. However, this is cumbersome and expensive and may provide little benefit compared to random sampling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Is digital photography an accurate and precise method for measuring range of motion of the hip and knee?

    PubMed

    Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C

    2017-09-07

    Accurate measurements of knee and hip motion are required for management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion at the hip and knee. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, hip flexion/abduction/internal rotation/external rotation and knee flexion/extension were measured using visual estimation, goniometry, and photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard, while precision was defined by the proportion of measurements within either 5° or 10°. Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although two statistically significant differences were found in measurement accuracy between the three techniques, neither of these differences met clinical significance (difference of 1.4° for hip abduction and 1.7° for the knee extension). Precision of measurements was significantly higher for digital photography than: (i) visual estimation for hip abduction and knee extension, and (ii) goniometry for knee extension only. There was no clinically significant difference in measurement accuracy between the three techniques for hip and knee motion. Digital photography only showed higher precision for two joint motions (hip abduction and knee extension). Overall digital photography shows equivalent accuracy and near-equivalent precision to visual estimation and goniometry.
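
    The accuracy and precision definitions used in this abstract translate directly into simple arithmetic; the sketch below illustrates them in Python with invented angle measurements (the motion-capture reference values and photography readings are not the study's data).

      # Accuracy = deviation from the motion-capture reference; precision = share of
      # measurements within 5 or 10 degrees of it. All values are illustrative only.
      reference = [120.0, 95.0, 130.0, 45.0]   # motion-capture angles (degrees)
      photo     = [118.5, 97.0, 128.0, 49.0]   # digital-photography angles (degrees)

      errors = [abs(p - r) for p, r in zip(photo, reference)]
      mean_error = sum(errors) / len(errors)
      within_5   = sum(e <= 5 for e in errors) / len(errors)
      within_10  = sum(e <= 10 for e in errors) / len(errors)
      print("mean error %.1f deg, within 5 deg %.0f%%, within 10 deg %.0f%%"
            % (mean_error, within_5 * 100, within_10 * 100))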

  14. Optimal structure of metaplasticity for adaptive learning

    PubMed Central

    2017-01-01

    Learning from reward feedback in a changing environment requires a high degree of adaptability, yet the precise estimation of reward information demands slow updates. In the framework of estimating reward probability, here we investigated how this tradeoff between adaptability and precision can be mitigated via metaplasticity, i.e. synaptic changes that do not always alter synaptic efficacy. Using the mean-field and Monte Carlo simulations we identified ‘superior’ metaplastic models that can substantially overcome the adaptability-precision tradeoff. These models can achieve both adaptability and precision by forming two separate sets of meta-states: reservoirs and buffers. Synapses in reservoir meta-states do not change their efficacy upon reward feedback, whereas those in buffer meta-states can change their efficacy. Rapid changes in efficacy are limited to synapses occupying buffers, creating a bottleneck that reduces noise without significantly decreasing adaptability. In contrast, more-populated reservoirs can generate a strong signal without manifesting any observable plasticity. By comparing the behavior of our model and a few competing models during a dynamic probability estimation task, we found that superior metaplastic models perform close to optimally for a wider range of model parameters. Finally, we found that metaplastic models are robust to changes in model parameters and that metaplastic transitions are crucial for adaptive learning since replacing them with graded plastic transitions (transitions that change synaptic efficacy) reduces the ability to overcome the adaptability-precision tradeoff. Overall, our results suggest that ubiquitous unreliability of synaptic changes evinces metaplasticity that can provide a robust mechanism for mitigating the tradeoff between adaptability and precision and thus adaptive learning. PMID:28658247

  15. [Usefulness of scoring risk for adverse outcomes in older patients with the Identification of Seniors at Risk scale and the Triage Risk Screening Tool: a meta-analysis].

    PubMed

    Rivero-Santana, Amado; Del Pino-Sedeño, Tasmania; Ramallo-Fariña, Yolanda; Vergara, Itziar; Serrano-Aguilar, Pedro

    2017-02-01

    A considerable proportion of the geriatric population experiences unfavorable outcomes of hospital emergency department care. An assessment of risk for adverse outcomes would facilitate making changes in clinical management by adjusting available resources to needs according to an individual patient's risk. Risk assessment tools are available, but their prognostic precision varies. This systematic review sought to quantify the prognostic precision of 2 geriatric screening and risk assessment tools commonly used in emergency settings for patients at high risk of adverse outcomes (revisits, functional deterioration, readmissions, or death): the Identification of Seniors at Risk (ISAR) scale and the Triage Risk Screening Tool (TRST). We searched PubMed, EMBASE, the Cochrane Central Register of Controlled Trials, and SCOPUS, with no date limits, to find relevant studies. Quality was assessed with the QUADAS-2 checklist (for quality assessment of diagnostic accuracy studies). We pooled data for prognostic yield reported for the ISAR and TRST scores for each short- and medium-term outcome using bivariate random-effects modeling. The sensitivity of the ISAR scoring system as a whole ranged between 67% and 99%; specificity fell between 21% and 41%. TRST sensitivity ranged between 52% and 75% and specificity between 39% and 51%. We conclude that the tools currently used to assess risk of adverse outcomes in patients of advanced age attended in hospital emergency departments do not have adequate prognostic precision to be clinically useful.

  16. Association between MTHFR Polymorphisms and Acute Myeloid Leukemia Risk: A Meta-Analysis

    PubMed Central

    Su, Yan; Lu, Ge-Ning; Wang, Ren-Sheng

    2014-01-01

    Previous observational studies investigating the association between methylenetetrahydrofolate reductase (MTHFR) polymorphisms and acute myeloid leukemia (AML) risk have yielded inconsistent results. The aim of this study is to derive a more precise estimation of the association between MTHFR (C677T and A1298C) polymorphisms and acute myeloid leukemia risk. PubMed and Embase databases were systematically searched to identify relevant studies from their inception to August 2013. Odds ratios (ORs) with 95% confidence intervals (CIs) were the metric of choice. Thirteen studies were selected for the C677T polymorphism (1838 cases and 5318 controls) and 9 studies (1335 patients and 4295 controls) for the A1298C polymorphism. Overall, pooled results showed that the C677T polymorphism was not significantly associated with AML risk (OR, 0.98–1.04; 95% CI, 0.86–0.92 to 1.09–1.25). Similar results were observed for the A1298C polymorphism and in subgroup analysis. All comparisons revealed no substantial heterogeneity, nor did we detect evidence of publication bias. In summary, this meta-analysis provides evidence that MTHFR polymorphisms were not associated with AML risk. Further investigations are needed to offer better insight into the role of these polymorphisms in AML carcinogenesis. PMID:24586405

  17. Precise Orbital and Geodetic Parameter Estimation using SLR Observations for ILRS AAC

    NASA Astrophysics Data System (ADS)

    Kim, Young-Rok; Park, Eunseo; Oh, Hyungjik Jay; Park, Sang-Young; Lim, Hyung-Chul; Park, Chandeok

    2013-12-01

    In this study, we present results of precise orbital and geodetic parameter estimation using satellite laser ranging (SLR) observations for the International Laser Ranging Service (ILRS) associate analysis center (AAC). Using normal point observations of LAGEOS-1, LAGEOS-2, ETALON-1, and ETALON-2 in the SLR consolidated laser ranging data format, the NASA/GSFC GEODYN II and SOLVE software programs were utilized for precise orbit determination (POD) and for finding solutions of a terrestrial reference frame (TRF) and Earth orientation parameters (EOPs). For POD, a weekly-based orbit determination strategy was employed to process SLR observations taken over 20 weeks in 2013. For the TRF and EOP solutions, a loosely constrained scheme was used to integrate the POD results of the four geodetic SLR satellites. The coordinates of 11 ILRS core sites were determined, and daily polar motion and polar motion rates were estimated. The root mean square (RMS) of the post-fit residuals was used for orbit quality assessment, and both the stability of the TRF and the precision of the EOPs were analyzed by external comparison to verify our solutions. The post-fit residuals show that the orbit RMS values for LAGEOS-1 and LAGEOS-2 are 1.20 and 1.12 cm, and those of ETALON-1 and ETALON-2 are 1.02 and 1.11 cm, respectively. The stability analysis of the TRF shows that the mean 3D stability of the coordinates of the 11 ILRS core sites is 7.0 mm. An external comparison, with respect to International Earth Rotation and Reference Systems Service (IERS) 08 C04 results, shows that the standard deviations of polar motion XP and YP are 0.754 milliarcseconds (mas) and 0.576 mas, respectively. Our results of precise orbital and geodetic parameter estimation are reasonable and help advance research at the ILRS AAC.

  18. Improving precision of glomerular filtration rate estimating model by ensemble learning.

    PubMed

    Liu, Xun; Li, Ningshan; Lv, Linsheng; Fu, Yongmei; Cheng, Cailian; Wang, Caixia; Ye, Yuqiu; Li, Shaomin; Lou, Tanqi

    2017-11-09

    Accurate assessment of kidney function is clinically important, but estimates of glomerular filtration rate (GFR) by regression are imprecise. We hypothesized that ensemble learning could improve precision. A total of 1419 participants were enrolled, with 1002 in the development dataset and 417 in the external validation dataset. GFR was independently estimated from age, sex and serum creatinine using an artificial neural network (ANN), support vector machine (SVM), regression, and ensemble learning. GFR was measured by 99mTc-DTPA renal dynamic imaging calibrated with dual plasma sample 99mTc-DTPA GFR. Mean measured GFRs were 70.0 ml/min/1.73 m² in the developmental and 53.4 ml/min/1.73 m² in the external validation cohorts. In the external validation cohort, precision was better in the ensemble model of the ANN, SVM and regression equation (IQR = 13.5 ml/min/1.73 m²) than in the new regression model (IQR = 14.0 ml/min/1.73 m², P < 0.001). The precision of ensemble learning was the best of the three models, but the models had similar bias and accuracy. The median difference ranged from 2.3 to 3.7 ml/min/1.73 m², 30% accuracy ranged from 73.1 to 76.0%, and P was > 0.05 for all comparisons of the new regression equation and the other new models. An ensemble learning model including three variables, the average ANN, SVM, and regression equation values, was more precise than the new regression model. A more complex ensemble learning strategy may further improve GFR estimates.
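
    The ensemble described above averages the outputs of three base estimators. The Python sketch below shows that idea only; the three base models are crude stand-ins (loosely creatinine-based formulas invented for illustration), not the ANN, SVM, or regression equation trained in the study.

      # Stand-in GFR predictors (ml/min/1.73 m2); the study's fitted models are not reproduced.
      def ann_gfr(age, sex, scr):  return 175.0 * scr ** -1.10 * age ** -0.20 * (0.80 if sex == "F" else 1.0)
      def svm_gfr(age, sex, scr):  return 170.0 * scr ** -1.00 * age ** -0.18 * (0.82 if sex == "F" else 1.0)
      def reg_gfr(age, sex, scr):  return 186.0 * scr ** -1.15 * age ** -0.20 * (0.74 if sex == "F" else 1.0)

      def ensemble_gfr(age, sex, scr):
          preds = [f(age, sex, scr) for f in (ann_gfr, svm_gfr, reg_gfr)]
          return sum(preds) / len(preds)   # simple average of the three base estimates

      print(round(ensemble_gfr(60, "F", 1.2), 1))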

  19. Establishing the accuracy of asteroseismic mass and radius estimates of giant stars - I. Three eclipsing systems at [Fe/H] ˜ -0.3 and the need for a large high-precision sample

    NASA Astrophysics Data System (ADS)

    Brogaard, K.; Hansen, C. J.; Miglio, A.; Slumstrup, D.; Frandsen, S.; Jessen-Hansen, J.; Lund, M. N.; Bossini, D.; Thygesen, A.; Davies, G. R.; Chaplin, W. J.; Arentoft, T.; Bruntt, H.; Grundahl, F.; Handberg, R.

    2018-05-01

    We aim to establish and improve the accuracy level of asteroseismic estimates of mass, radius, and age of giant stars. This can be achieved by measuring independent, accurate, and precise masses, radii, effective temperatures and metallicities of long period eclipsing binary stars with a red giant component that displays solar-like oscillations. We measured precise properties of the three eclipsing binary systems KIC 7037405, KIC 9540226, and KIC 9970396 and estimated their ages to be 5.3 ± 0.5, 3.1 ± 0.6, and 4.8 ± 0.5 Gyr. The measurements of the giant stars were compared to corresponding measurements of mass, radius, and age using asteroseismic scaling relations and grid modelling. We found that asteroseismic scaling relations without corrections to Δν systematically overestimate the masses of the three red giants by 11.7 per cent, 13.7 per cent, and 18.9 per cent, respectively. However, by applying theoretical correction factors fΔν according to Rodrigues et al. (2017), we reached general agreement between dynamical and asteroseismic mass estimates, with no indication of systematic differences at the precision level of the asteroseismic measurements. The larger sample investigated by Gaulme et al. (2016) showed a much more complicated situation, where some stars show agreement between the dynamical and corrected asteroseismic measures while others suggest significant overestimates of the asteroseismic measures. We found no simple explanation for this, but indications of several potential problems, some theoretical, others observational. Therefore, an extension of the present precision study to a larger sample of eclipsing systems is crucial for establishing and improving the accuracy of asteroseismology of giant stars.
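
    For readers unfamiliar with the correction mentioned above, the scaling relations in their standard form, with the Δν correction factor fΔν applied, can be written as follows (a generic statement of the relations; the paper's grid-modelling implementation may differ in detail):

      % asteroseismic scaling relations with the Delta-nu correction factor
      \frac{M}{M_\odot} \simeq
        \left(\frac{\nu_{\max}}{\nu_{\max,\odot}}\right)^{3}
        \left(\frac{\Delta\nu}{f_{\Delta\nu}\,\Delta\nu_\odot}\right)^{-4}
        \left(\frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot}}\right)^{3/2},
      \qquad
      \frac{R}{R_\odot} \simeq
        \left(\frac{\nu_{\max}}{\nu_{\max,\odot}}\right)
        \left(\frac{\Delta\nu}{f_{\Delta\nu}\,\Delta\nu_\odot}\right)^{-2}
        \left(\frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot}}\right)^{1/2}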

  20. Effects of lek count protocols on greater sage-grouse population trend estimates

    USGS Publications Warehouse

    Monroe, Adrian; Edmunds, David; Aldridge, Cameron L.

    2016-01-01

    Annual counts of males displaying at lek sites are an important tool for monitoring greater sage-grouse populations (Centrocercus urophasianus), but seasonal and diurnal variation in lek attendance may increase variance and bias of trend analyses. Recommendations for protocols to reduce observation error have called for restricting lek counts to within 30 minutes of sunrise, but this may limit the number of lek counts available for analysis, particularly from years before monitoring was widely standardized. Reducing the temporal window for conducting lek counts also may constrain the ability of agencies to monitor leks efficiently. We used lek count data collected across Wyoming during 1995−2014 to investigate the effect of lek counts conducted between 30 minutes before and 30, 60, or 90 minutes after sunrise on population trend estimates. We also evaluated trends across scales relevant to management, including statewide, within Working Group Areas and Core Areas, and for individual leks. To further evaluate accuracy and precision of trend estimates from lek count protocols, we used simulations based on a lek attendance model and compared simulated and estimated values of annual rate of change in population size (λ) from scenarios of varying numbers of leks, lek count timing, and count frequency (counts/lek/year). We found that restricting analyses to counts conducted within 30 minutes of sunrise generally did not improve precision of population trend estimates, although differences among timings increased as the number of leks and count frequency decreased. Lek attendance declined >30 minutes after sunrise, but simulations indicated that including lek counts conducted up to 90 minutes after sunrise can increase the number of leks monitored compared to trend estimates based on counts conducted within 30 minutes of sunrise. This increase in leks monitored resulted in greater precision of estimates without reducing accuracy. Increasing count frequency also improved precision. These results suggest that the current distribution of count timings available in lek count databases such as that of Wyoming (conducted up to 90 minutes after sunrise) can be used to estimate sage-grouse population trends without reducing precision or accuracy relative to trends from counts conducted within 30 minutes of sunrise. However, only 10% of all Wyoming counts in our sample (1995−2014) were conducted 61−90 minutes after sunrise, and further increasing this percentage may still bias trend estimates because of declining lek attendance. 

  1. In the eye of the beholder: the effect of rater variability and different rating scales on QTL mapping.

    PubMed

    Poland, Jesse A; Nelson, Rebecca J

    2011-02-01

    The agronomic importance of developing durably resistant cultivars has led to substantial research in the field of quantitative disease resistance (QDR) and, in particular, mapping quantitative trait loci (QTL) for disease resistance. The assessment of QDR is typically conducted by visual estimation of disease severity, which raises concern over the accuracy and precision of visual estimates. Although previous studies have examined the factors affecting the accuracy and precision of visual disease assessment in relation to the true value of disease severity, the impact of this variability on the identification of disease resistance QTL has not been assessed. In this study, the effects of rater variability and rating scales on mapping QTL for northern leaf blight resistance in maize were evaluated in a recombinant inbred line population grown under field conditions. The population of 191 lines was evaluated by 22 different raters using a direct percentage estimate, a 0-to-9 ordinal rating scale, or both. It was found that more experienced raters had higher precision and that using a direct percentage estimation of diseased leaf area produced higher precision than using an ordinal scale. QTL mapping was then conducted using the disease estimates from each rater using stepwise general linear model selection (GLM) and inclusive composite interval mapping (ICIM). For GLM, the same QTL were largely found across raters, though some QTL were only identified by a subset of raters. The magnitudes of estimated allele effects at identified QTL varied drastically, sometimes by as much as threefold. ICIM produced highly consistent results across raters and for the different rating scales in identifying the location of QTL. We conclude that, despite variability between raters, the identification of QTL was largely consistent among raters, particularly when using ICIM. However, care should be taken in estimating QTL allele effects, because this was highly variable and rater dependent.

  2. Walleye age estimation using otoliths and dorsal spines: Preparation techniques and sampling guidelines based on sex and total length

    USGS Publications Warehouse

    Dembkowski, Daniel J.; Isermann, Daniel A.; Koenigs, Ryan P.

    2017-01-01

    We used dorsal spines and otoliths from 735 Walleye Sander vitreus collected from 35 Wisconsin water bodies to evaluate whether 1) otolith and dorsal spine cross sections provided age estimates similar to simpler methods of preparation (e.g., whole otoliths and dorsal spines, cracked otoliths); and 2) between-reader precision and differences between spine and otolith ages varied in relation to total length (TL), sex, and growth rate. Ages estimated from structures prepared using simpler techniques were generally similar to ages estimated using thin sections of dorsal spines and otoliths, suggesting that, in some instances, much of the additional processing time and specialized equipment associated with thin sectioning could be avoided. Overall, between-reader precision was higher for sectioned otoliths (mean coefficient of variation [CV] = 3.28%; standard error [SE] = 0.33%) than for sectioned dorsal spines (mean CV = 9.20%; SE = 0.56%). When using sectioned otoliths for age assignment, between-reader precision did not vary between sexes or growth categories (i.e., fast, moderate, slow), but between-reader precision was higher for females than males when using sectioned dorsal spines. Dorsal spines were generally effective at replicating otolith ages for male Walleye <450 mm TL and female Walleye <600 mm TL, suggesting that dorsal spines can be used to estimate ages for fish below these length thresholds. If sex is unknown, we suggest dorsal spines be used to estimate ages for Walleye <450 mm TL, but that otoliths be used for fish >450 mm TL. Our results provide useful guidance on structure and preparation technique selection for Walleye age estimation, thereby allowing biologists to develop sampling guidelines that could be implemented using information that is always (TL) or often (sex) available at the time of fish collection.
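
    Between-reader precision here is summarized by the coefficient of variation (CV) across readers for each fish, averaged over fish. The short Python sketch below illustrates that calculation with invented age readings, not the study's data.

      # Mean between-reader CV for paired age estimates (illustrative values only).
      import statistics

      reader1 = [4, 7, 9, 12, 5]
      reader2 = [4, 8, 9, 11, 5]

      cvs = [statistics.stdev([a1, a2]) / statistics.mean([a1, a2]) * 100
             for a1, a2 in zip(reader1, reader2)]
      print("mean between-reader CV = %.2f%%" % (sum(cvs) / len(cvs)))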

  3. Touch Precision Modulates Visual Bias.

    PubMed

    Misceo, Giovanni F; Jones, Maurice D

    2018-01-01

    The sensory precision hypothesis holds that different seen and felt cues about the size of an object resolve themselves in favor of the more reliable modality. To examine this precision hypothesis, 60 college students were asked to look at one size while manually exploring another unseen size either with their bare fingers or, to lessen the reliability of touch, with their fingers sleeved in rigid tubes. Afterwards, the participants estimated either the seen size or the felt size by finding a match from a visual display of various sizes. Results showed that the seen size biased the estimates of the felt size when the reliability of touch decreased. This finding supports the interaction between touch reliability and visual bias predicted by statistically optimal models of sensory integration.

  4. What to use to express the variability of data: Standard deviation or standard error of mean?

    PubMed

    Barde, Mohini P; Barde, Prajakt J

    2012-07-01

    Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. The SEM quantifies uncertainty in the estimate of the mean, whereas the SD indicates the dispersion of the data around the mean. As readers are generally interested in knowing the variability within the sample, descriptive data should be precisely summarized with the SD. Use of the SEM should be limited to computing confidence intervals (CIs), which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
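
    A worked example makes the distinction concrete. In the Python sketch below (arbitrary data), the SD describes the spread of individual observations, the SEM = SD/sqrt(n) describes the uncertainty of the sample mean, and an approximate 95% CI is mean ± 1.96 × SEM (for small samples a t-multiplier would be more appropriate).

      import math, statistics

      x = [4.2, 5.1, 3.8, 4.9, 5.4, 4.4, 4.8, 5.0]
      n = len(x)
      mean = statistics.mean(x)
      sd = statistics.stdev(x)           # dispersion of the individual values
      sem = sd / math.sqrt(n)            # precision of the estimated mean
      ci = (mean - 1.96 * sem, mean + 1.96 * sem)
      print("mean %.2f, SD %.2f, SEM %.2f, 95%% CI %.2f-%.2f" % (mean, sd, sem, *ci))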

  5. Precision Diagnosis Of Melanoma And Other Skin Lesions From Digital Images.

    PubMed

    Bhattacharya, Abhishek; Young, Albert; Wong, Andrew; Stalling, Simone; Wei, Maria; Hadley, Dexter

    2017-01-01

    An estimated 73,000 new cases of melanoma will be diagnosed this year, resulting in 9,000 deaths, yet precise diagnosis remains a serious problem. Without early detection and preventative care, melanoma can quickly spread from a once localized skin lesion (Stage IA 5-year survival rate is 97%) to become fatal (Stage IV 5-year survival rate is 10-20%). There is no biomarker for melanoma in clinical use, and the current diagnostic criteria for skin lesions remain subjective and imprecise. Accurate diagnosis of melanoma relies on a histopathologic gold standard; thus, aggressive excision of melanocytic skin lesions has been the mainstay of treatment. It is estimated that 36 biopsies are performed for every melanoma confirmed by pathology among excised lesions. There is significant morbidity in misdiagnosing melanoma, whether from progression of the disease after a false-negative prediction or from the risks of unnecessary surgery after a false-positive prediction. Every year, poor diagnostic precision adds an estimated $673 million in overall cost to manage the disease. Currently, manual dermatoscopic imaging is the standard of care in selecting atypical skin lesions for biopsy, and at best it achieves 90% sensitivity but only 59% specificity when performed by an expert dermatologist. Many computer vision (CV) algorithms perform better than dermatologists in classifying skin lesions, although not significantly so in clinical practice. Meanwhile, open-source deep learning (DL) techniques in CV have been gaining dominance since 2012 for image classification, and today DL can outperform humans in classifying millions of digital images with less than 5% error rates. Moreover, DL algorithms are readily run on commoditized hardware and have a strong online community of developers supporting their rapid adoption. In this work, we performed a successful pilot study to show proof of concept for applying DL to skin pathology prediction from images. However, DL algorithms must be trained on very large labelled datasets of images to achieve high accuracy. Here, we begin to assemble a large imageset of skin lesions from the UCSF and the San Francisco Veterans Affairs Medical Center (VAMC) dermatology clinics that are well characterized by their underlying pathology, on which to train DL algorithms. If trained on sufficient data, we hypothesize that our approach will significantly outperform general dermatologists in predicting skin lesion pathology. We posit that our work will allow for precision diagnosis of melanoma from widely available digital photography, which may optimize the management of the disease by decreasing unnecessary office visits and the significant morbidity and cost of melanoma misdiagnosis.

  6. Giant congenital melanocytic nevus*

    PubMed Central

    Viana, Ana Carolina Leite; Gontijo, Bernardo; Bittencourt, Flávia Vasques

    2013-01-01

    Giant congenital melanocytic nevus is usually defined as a melanocytic lesion present at birth that will reach a diameter ≥ 20 cm in adulthood. Its incidence is estimated in <1:20,000 newborns. Despite its rarity, this lesion is important because it may associate with severe complications such as malignant melanoma, affect the central nervous system (neurocutaneous melanosis), and have major psychosocial impact on the patient and his family due to its unsightly appearance. Giant congenital melanocytic nevus generally presents as a brown lesion, with flat or mammilated surface, well-demarcated borders and hypertrichosis. Congenital melanocytic nevus is primarily a clinical diagnosis. However, congenital nevi are histologically distinguished from acquired nevi mainly by their larger size, the spread of the nevus cells to the deep layers of the skin and by their more varied architecture and morphology. Although giant congenital melanocytic nevus is recognized as a risk factor for the development of melanoma, the precise magnitude of this risk is still controversial. The estimated lifetime risk of developing melanoma varies from 5 to 10%. On account of these uncertainties and the size of the lesions, the management of giant congenital melanocytic nevus needs individualization. Treatment may include surgical and non-surgical procedures, psychological intervention and/or clinical follow-up, with special attention to changes in color, texture or on the surface of the lesion. The only absolute indication for surgery in giant congenital melanocytic nevus is the development of a malignant neoplasm on the lesion. PMID:24474093

  7. Validity of the alcohol purchase task: a meta-analysis.

    PubMed

    Kiselica, Andrew M; Webber, Troy A; Bornovalova, Marina A

    2016-05-01

    Behavioral economists assess alcohol consumption as a function of unit price. This method allows construction of demand curves and demand indices, which are thought to provide precise numerical estimates of risk for alcohol problems. One of the more commonly used behavioral economic measures is the Alcohol Purchase Task (APT). Although the APT has shown promise as a measure of risk for alcohol problems, the construct validity and incremental utility of the APT remain unclear. This paper presents a meta-analysis of the APT literature. Sixteen studies were included in the meta-analysis. Studies were gathered via searches of the PsycInfo, PubMed, Web of Science and EconLit research databases. Random-effects meta-analyses with inverse variance weighting were used to calculate summary effect sizes for each demand index-drinking outcome relationship. Moderation of these effects by drinking status (regular versus heavy drinkers) was examined. Additionally, tests of the incremental utility of the APT indices in predicting drinking problems above and beyond measuring alcohol consumption were performed. The APT indices were correlated in the expected directions with drinking outcomes, although many effects were small in size. These effects were typically not moderated by the drinking status of the samples. Additionally, the intensity metric demonstrated incremental utility in predicting alcohol use disorder symptoms beyond measuring drinking. The Alcohol Purchase Task appears to have good construct validity, but limited incremental utility in estimating risk for alcohol problems. © 2015 Society for the Study of Addiction.

  8. Precision of measurement and body size in whole-body air-displacement plethysmography.

    PubMed

    Wells, J C; Fuller, N J

    2001-08-01

    To investigate methodological and biological precision for air-displacement plethysmography (ADP) across a wide range of body size. Repeated measurements of body volume (BV) and body weight (WT), and derived estimates of density (BD) and indices of fat mass (FM) and fat-free mass (FFM). Sixteen men, aged 22–48 y; 12 women, aged 24–42 y; 13 boys, aged 5–14 y; 17 girls, aged 5–16 y. BV and WT were measured using the Bodpod ADP system from which estimates of BD, FM and FFM were derived. FM and FFM were further adjusted for height to give fat mass index (FMI) and fat-free mass index (FFMI). ADP is very precise for measuring both BV and BD (between 0.16 and 0.44% of the mean). After removing two outliers from the database, and converting BD to body composition, precision of FMI was <6% in adults and within 8% in children, while precision of FFMI was within 1.5% for both age groups. ADP shows good precision for BV and BD across a wide range of body size, subject to biological artefacts. If aberrant values can be identified and rejected, precision of body composition is also good. Aberrant values can be identified by using pairs of ADP procedures, allowing the rejection of data where successive BD values differed by >0.007 kg/l. Precision of FMI obtained using pairs of procedures improves to <4.5% in adults and <5.5% in children.

  9. Precision medicine and precision therapeutics: hedgehog signaling pathway, basal cell carcinoma and beyond.

    PubMed

    Mohan, Shalini V; Chang, Anne Lynn S

    2014-06-01

    Precision medicine and precision therapeutics is currently in its infancy with tremendous potential to improve patient care by better identifying individuals at risk for skin cancer and predict tumor responses to treatment. This review focuses on the Hedgehog signaling pathway, its critical role in the pathogenesis of basal cell carcinoma, and the emergence of targeted treatments for advanced basal cell carcinoma. Opportunities to utilize precision medicine are outlined, such as molecular profiling to predict basal cell carcinoma response to targeted therapy and to inform therapeutic decisions.

  10. Tree imbalance causes a bias in phylogenetic estimation of evolutionary timescales using heterochronous sequences.

    PubMed

    Duchêne, David; Duchêne, Sebastian; Ho, Simon Y W

    2015-07-01

    Phylogenetic estimation of evolutionary timescales has become routine in biology, forming the basis of a wide range of evolutionary and ecological studies. However, there are various sources of bias that can affect these estimates. We investigated whether tree imbalance, a property that is commonly observed in phylogenetic trees, can lead to reduced accuracy or precision of phylogenetic timescale estimates. We analysed simulated data sets with calibrations at internal nodes and at the tips, taking into consideration different calibration schemes and levels of tree imbalance. We also investigated the effect of tree imbalance on two empirical data sets: mitogenomes from primates and serial samples of the African swine fever virus. In analyses calibrated using dated, heterochronous tips, we found that tree imbalance had a detrimental impact on precision and produced a bias in which the overall timescale was underestimated. A pronounced effect was observed in analyses with shallow calibrations. The greatest decreases in accuracy usually occurred in the age estimates for medium and deep nodes of the tree. In contrast, analyses calibrated at internal nodes did not display a reduction in estimation accuracy or precision due to tree imbalance. Our results suggest that molecular-clock analyses can be improved by increasing taxon sampling, with the specific aims of including deeper calibrations, breaking up long branches and reducing tree imbalance. © 2014 John Wiley & Sons Ltd.

  11. Workplace social capital and all-cause mortality: a prospective cohort study of 28,043 public-sector employees in Finland.

    PubMed

    Oksanen, Tuula; Kivimäki, Mika; Kawachi, Ichiro; Subramanian, S V; Takao, Soshi; Suzuki, Etsuji; Kouvonen, Anne; Pentti, Jaana; Salo, Paula; Virtanen, Marianna; Vahtera, Jussi

    2011-09-01

    We examined the association between workplace social capital and all-cause mortality in a large occupational cohort from Finland. We linked responses of 28 043 participants to surveys in 2000 to 2002 and in 2004 to national mortality registers through 2009. We used repeated measurements of self- and coworker-assessed social capital. We carried out Cox proportional hazard and fixed-effects logistic regressions. During the 5-year follow-up, 196 employees died. A 1-unit increase in the mean of repeat measurements of self-assessed workplace social capital (range 1-5) was associated with a 19% decrease in the risk of all-cause mortality (age- and gender-adjusted hazard ratio [HR] = 0.81; 95% confidence interval [CI] = 0.66, 0.99). The corresponding point estimate for the mean of coworker-assessed social capital was similar, although the association was less precisely estimated (age- and gender-adjusted HR = 0.77; 95% CI = 0.50, 1.20). In fixed-effects analysis, a 1-unit increase in self-assessed social capital across the 2 time points was associated with a lower mortality risk (odds ratio = 0.81; 95% CI = 0.55, 1.19). Workplace social capital appears to be associated with lowered mortality in the working-aged population.

  12. Workplace Social Capital and All-Cause Mortality: A Prospective Cohort Study of 28 043 Public-Sector Employees in Finland

    PubMed Central

    Kivimäki, Mika; Kawachi, Ichiro; Subramanian, S. V.; Takao, Soshi; Suzuki, Etsuji; Kouvonen, Anne; Pentti, Jaana; Salo, Paula; Virtanen, Marianna; Vahtera, Jussi

    2011-01-01

    Objectives. We examined the association between workplace social capital and all-cause mortality in a large occupational cohort from Finland. Methods. We linked responses of 28 043 participants to surveys in 2000 to 2002 and in 2004 to national mortality registers through 2009. We used repeated measurements of self- and coworker-assessed social capital. We carried out Cox proportional hazard and fixed-effects logistic regressions. Results. During the 5-year follow-up, 196 employees died. A 1-unit increase in the mean of repeat measurements of self-assessed workplace social capital (range 1–5) was associated with a 19% decrease in the risk of all-cause mortality (age- and gender-adjusted hazard ratio [HR] = 0.81; 95% confidence interval [CI] = 0.66, 0.99). The corresponding point estimate for the mean of coworker-assessed social capital was similar, although the association was less precisely estimated (age- and gender-adjusted HR = 0.77; 95% CI = 0.50, 1.20). In fixed-effects analysis, a 1-unit increase in self-assessed social capital across the 2 time points was associated with a lower mortality risk (odds ratio = 0.81; 95% CI = 0.55, 1.19). Conclusions. Workplace social capital appears to be associated with lowered mortality in the working-aged population. PMID:21778502

  13. Measurable residual disease testing in acute myeloid leukaemia.

    PubMed

    Hourigan, C S; Gale, R P; Gormley, N J; Ossenkoppele, G J; Walter, R B

    2017-07-01

    There is considerable interest in developing techniques to detect and/or quantify remaining leukaemia cells termed measurable or, less precisely, minimal residual disease (MRD) in persons with acute myeloid leukaemia (AML) in complete remission defined by cytomorphological criteria. An important reason for AML MRD-testing is the possibility of estimating the likelihood (and timing) of leukaemia relapse. A perfect MRD-test would precisely quantify leukaemia cells biologically able and likely to cause leukaemia relapse within a defined interval. AML is genetically diverse and there is currently no uniform approach to detecting such cells. Several technologies focused on immune phenotype or cytogenetic and/or molecular abnormalities have been developed, each with advantages and disadvantages. Many studies report a positive MRD-test at diverse time points during AML therapy identifies persons with a higher risk of leukaemia relapse compared with those with a negative MRD-test even after adjusting for other prognostic and predictive variables. No MRD-test in AML has perfect sensitivity and specificity for relapse prediction at the cohort- or subject levels and there are substantial rates of false-positive and -negative tests. Despite these limitations, correlations between MRD-test results and relapse risk have generated interest in MRD-test result-directed therapy interventions. However, convincing proof that a specific intervention will reduce relapse risk in persons with a positive MRD-test is lacking and needs testing in randomized trials. Routine clinical use of MRD-testing requires further refinements and standardization/harmonization of assay platforms and results reporting. Such data are needed to determine whether results of MRD-testing can be used as a surrogate end point in AML therapy trials. This could make drug-testing more efficient and accelerate regulatory approvals. Although MRD-testing in AML has advanced substantially, much remains to be done.

  14. Seasonal Influenza Vaccine Effectiveness in Preventing Laboratory Confirmed Influenza in 2014-2015 Season in Turkey: A Test-Negative Case Control Study

    PubMed Central

    Hekimoğlu, Can Hüseyin; Emek, Mestan; Avcı, Emine; Topal, Selmur; Demiröz, Mustafa; Ergör, Gül

    2018-01-01

    Background: Influenza has an important public health impact worldwide, causing considerable annual morbidity among persons with or without risk factors and serious complications among persons in high-risk groups. The seasonal influenza vaccine is essential for preventing the burden of influenza in a population. Since the vaccine is reformulated each season according to the virus serotypes in circulation, its effectiveness can vary from season to season. Vaccine effectiveness is defined as the relative risk reduction in vaccinated individuals in observational studies. Aims: To calculate influenza vaccine effectiveness in preventing laboratory-confirmed influenza in the Turkish population for the first time, using national sentinel surveillance data from the 2014-2015 influenza season. Study Design: Test-negative case-control study. Methods: We compared the vaccination odds of influenza-positive cases to those of influenza-negative controls in the national influenza surveillance in Turkey to estimate influenza vaccine effectiveness. Results: The vaccine effectiveness estimates against influenza A (H1N1) (68.4%, 95% CI: -2.9 to 90.3) and B (44.6%, 95% CI: -27.9 to 66.6) were moderate, and the estimate against influenza A (H3N2) (75.0%, 95% CI: -86.1 to 96.7) was relatively high; all had low precision given the low vaccination coverage. Overall, the influenza vaccination coverage rate was 4.2% (95% CI: 3.5 to 5.0), which is not sufficient to control the burden of influenza. Conclusion: In Turkey, national surveillance for influenza should be strengthened and utilised annually for the assessment of influenza vaccine effectiveness with more precision. Annual influenza vaccine effectiveness in Turkey should continue to be monitored as part of the national sentinel influenza surveillance. PMID:28903887
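
    In its simplest, unadjusted form, a test-negative vaccine effectiveness estimate reduces to (1 - odds ratio) × 100, where the odds ratio compares vaccination odds between test-positive cases and test-negative controls. The counts in the Python sketch below are invented for illustration and are not the Turkish surveillance data; the study's adjusted analysis is not reproduced.

      # Illustrative 2x2 table for a test-negative analysis.
      cases_vacc, cases_unvacc = 12, 388    # influenza-positive
      ctrls_vacc, ctrls_unvacc = 45, 555    # influenza-negative

      odds_ratio = (cases_vacc / cases_unvacc) / (ctrls_vacc / ctrls_unvacc)
      ve = (1 - odds_ratio) * 100
      print("OR = %.2f, VE = %.1f%%" % (odds_ratio, ve))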

  15. The Impact of Estimating High-Resolution Tropospheric Gradients on Multi-GNSS Precise Positioning

    PubMed Central

    Zhou, Feng; Li, Xingxing; Li, Weiwei; Chen, Wen; Dong, Danan; Wickert, Jens; Schuh, Harald

    2017-01-01

    Benefiting from the modernized US Global Positioning System (GPS), the revitalized Russian GLObal NAvigation Satellite System (GLONASS), and the newly-developed Chinese BeiDou Navigation Satellite System (BDS) and European Galileo, the multi-constellation Global Navigation Satellite System (GNSS) has emerged as a powerful tool not only in positioning, navigation, and timing (PNT), but also in remote sensing of the atmosphere and ionosphere. Both precise positioning and the derivation of atmospheric parameters can benefit from multi-GNSS observations. In this contribution, extensive evaluations are conducted with multi-GNSS datasets collected from 134 globally-distributed ground stations of the International GNSS Service (IGS) Multi-GNSS Experiment (MGEX) network in July 2016. The datasets are processed in six different constellation combinations, i.e., GPS-, GLONASS-, BDS-only, GPS + GLONASS, GPS + BDS, and GPS + GLONASS + BDS + Galileo precise point positioning (PPP). Tropospheric gradients are estimated with eight different temporal resolutions, from 1 h to 24 h, to investigate the impact of estimating high-resolution gradients on position estimates. The standard deviation (STD) is used as an indicator of positioning repeatability. The results show that estimating tropospheric gradients with high temporal resolution can achieve better positioning performance than the traditional strategy in which tropospheric gradients are estimated on a daily basis. Moreover, the impact of estimating tropospheric gradients with different temporal resolutions at various elevation cutoff angles (from 3° to 20°) is investigated. It can be observed that with increasing elevation cutoff angles, the improvement in positioning repeatability is decreased. PMID:28368346

  16. Reference interval estimation: Methodological comparison using extensive simulations and empirical data.

    PubMed

    Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S

    2017-12-01

    To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed the best for most scenarios. The hierarchy of the performances of the three methods was only impacted by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain the most optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
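
    For orientation, the two most widely used of the compared approaches can be sketched in a few lines: the parametric estimator assumes Gaussian data and uses mean ± 1.96 SD, while the non-parametric estimator takes the 2.5th and 97.5th percentiles. The Python below uses simulated data (not the paper's) and a simple percentile rule; laboratory guidelines typically use interpolated percentiles, and the robust method is omitted here.

      import random, statistics

      random.seed(1)
      values = sorted(random.gauss(100, 10) for _ in range(240))

      mean, sd = statistics.mean(values), statistics.stdev(values)
      parametric = (mean - 1.96 * sd, mean + 1.96 * sd)

      nonparametric = (values[int(0.025 * len(values))],
                       values[int(0.975 * len(values)) - 1])

      print("parametric:     %.1f to %.1f" % parametric)
      print("non-parametric: %.1f to %.1f" % nonparametric)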

  17. Denoising forced-choice detection data.

    PubMed

    García-Pérez, Miguel A

    2010-02-01

    Observers in a two-alternative forced-choice (2AFC) detection task face the need to produce a response at random (a guess) on trials in which neither presentation appeared to display a stimulus. Observers could alternatively be instructed to use a 'guess' key on those trials, a key that would produce a random guess and would also record the resultant correct or wrong response as emanating from a computer-generated guess. A simulation study shows that 'denoising' 2AFC data with information regarding which responses are a result of guesses yields estimates of detection threshold and spread of the psychometric function that are far more precise than those obtained in the absence of this information, and parallel the precision of estimates obtained with yes-no tasks running for the same number of trials. Simulations also show that partial compliance with the instructions to use the 'guess' key reduces the quality of the estimates, which nevertheless continue to be more precise than those obtained from conventional 2AFC data if the observers are still moderately compliant. An empirical study testing the validity of simulation results showed that denoised 2AFC estimates of spread were clearly superior to conventional 2AFC estimates and similar to yes-no estimates, but variations in threshold across observers and across sessions hid the benefits of denoising for threshold estimation. The empirical study also proved the feasibility of using a 'guess' key in addition to the conventional response keys defined in 2AFC tasks.

  18. The Impact of Estimating High-Resolution Tropospheric Gradients on Multi-GNSS Precise Positioning.

    PubMed

    Zhou, Feng; Li, Xingxing; Li, Weiwei; Chen, Wen; Dong, Danan; Wickert, Jens; Schuh, Harald

    2017-04-03

    Benefiting from the modernized US Global Positioning System (GPS), the revitalized Russian GLObal NAvigation Satellite System (GLONASS), and the newly-developed Chinese BeiDou Navigation Satellite System (BDS) and European Galileo, the multi-constellation Global Navigation Satellite System (GNSS) has emerged as a powerful tool not only in positioning, navigation, and timing (PNT), but also in remote sensing of the atmosphere and ionosphere. Both precise positioning and the derivation of atmospheric parameters can benefit from multi-GNSS observations. In this contribution, extensive evaluations are conducted with multi-GNSS datasets collected from 134 globally-distributed ground stations of the International GNSS Service (IGS) Multi-GNSS Experiment (MGEX) network in July 2016. The datasets are processed in six different constellation combinations, i.e., GPS-, GLONASS-, BDS-only, GPS + GLONASS, GPS + BDS, and GPS + GLONASS + BDS + Galileo precise point positioning (PPP). Tropospheric gradients are estimated with eight different temporal resolutions, from 1 h to 24 h, to investigate the impact of estimating high-resolution gradients on position estimates. The standard deviation (STD) is used as an indicator of positioning repeatability. The results show that estimating tropospheric gradients with high temporal resolution can achieve better positioning performance than the traditional strategy in which tropospheric gradients are estimated on a daily basis. Moreover, the impact of estimating tropospheric gradients with different temporal resolutions at various elevation cutoff angles (from 3° to 20°) is investigated. It can be observed that with increasing elevation cutoff angles, the improvement in positioning repeatability is decreased.

  19. Multi-GNSS real-time precise orbit/clock/UPD products and precise positioning service at GFZ

    NASA Astrophysics Data System (ADS)

    Li, Xingxing; Ge, Maorong; Liu, Yang; Fritsche, Mathias; Wickert, Jens; Schuh, Harald

    2016-04-01

    The rapid development of multi-constellation GNSSs (Global Navigation Satellite Systems, e.g., BeiDou, Galileo, GLONASS, GPS) and the IGS (International GNSS Service) Multi-GNSS Experiment (MGEX) brings great opportunities and challenges for real-time precise positioning service. In this contribution, we present a GPS+GLONASS+BeiDou+Galileo four-system model to fully exploit the observations of all these four navigation satellite systems for real-time precise orbit determination, clock estimation and positioning. A rigorous multi-GNSS analysis is performed to achieve the best possible consistency by processing the observations from different GNSS together in one common parameter estimation procedure. Meanwhile, an efficient multi-GNSS real-time precise positioning service system is designed and demonstrated by using the Multi-GNSS Experiment (MGEX) and International GNSS Service (IGS) data streams including stations all over the world. The addition of the BeiDou, Galileo and GLONASS systems to the standard GPS-only processing reduces the convergence time by almost 70%, while the positioning accuracy is improved by about 25%. Some outliers in the GPS-only solutions vanish when multi-GNSS observations are processed simultaneously. The availability and reliability of GPS precise positioning decrease dramatically as the elevation cutoff increases. However, the accuracy of multi-GNSS precise point positioning (PPP) is hardly degraded, and a few centimeters are still achievable in the horizontal components even with a 40° elevation cutoff.

  20. Dichotomisation using a distributional approach when the outcome is skewed.

    PubMed

    Sauzet, Odile; Ofuya, Mercy; Peacock, Janet L

    2015-04-24

    Dichotomisation of continuous outcomes has been rightly criticised by statisticians because of the loss of information incurred. However, to communicate a comparison of risks, dichotomised outcomes may be necessary. Peacock et al. developed a distributional approach to the dichotomisation of normally distributed outcomes, allowing the presentation of a comparison of proportions with a measure of precision which reflects the comparison of means. Many common health outcomes are skewed, so the distributional method for the dichotomisation of continuous outcomes may not apply. We present a methodology to obtain dichotomised outcomes for skewed variables, illustrated with data from several observational studies. We also report the results of a simulation study which tests the robustness of the method to deviations from normality and assesses the validity of the newly developed method. The review showed that the pattern of dichotomisation varied between outcomes. Birthweight, blood pressure and BMI can either be transformed to normality, so that normal distributional estimates for a comparison of proportions can be obtained, or, better, the skew-normal method can be used. For gestational age, no satisfactory transformation is available and only the skew-normal method is reliable. The normal distributional method is also reliable when there are small deviations from normality. The distributional method, with its applicability to common skewed data, allows researchers to provide both continuous and dichotomised estimates without losing information or precision. This will have the effect of providing a practical understanding of the difference in means in terms of proportions.
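
    The core of the distributional approach is that, once the mean and SD (and, for the skew-normal extension, a shape parameter) are estimated, the proportion beyond a clinical cutpoint follows from the assumed distribution rather than from counting dichotomised observations. The Python sketch below shows only the normal case with invented birthweight figures; the paper's skew-normal variant is not implemented here.

      from statistics import NormalDist

      mean_bw, sd_bw = 3350.0, 520.0   # hypothetical birthweight summary (grams)
      cutpoint = 2500.0                # low-birthweight threshold

      p_low = NormalDist(mean_bw, sd_bw).cdf(cutpoint)
      print("estimated proportion below %.0f g: %.1f%%" % (cutpoint, p_low * 100))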

  1. Crop area estimation based on remotely-sensed data with an accurate but costly subsample

    NASA Technical Reports Server (NTRS)

    Gunst, R. F.

    1983-01-01

    Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator which is viewed as especially promising is the errors-in-variables regression estimator. Investigations established the need for caution with this estimator when the ratio of the two error variances is not precisely known.

  2. Lag-One Autocorrelation in Short Series: Estimation and Hypotheses Testing

    ERIC Educational Resources Information Center

    Solanas, Antonio; Manolov, Rumen; Sierra, Vicenta

    2010-01-01

    In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision in the estimation of autocorrelation is…
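
    None of the estimators reviewed in the study is reproduced here, but for reference the conventional lag-one autocorrelation estimator, against which alternatives are usually judged, is shown below in Python with an arbitrary short series.

      def lag1_autocorr(x):
          n = len(x)
          mean = sum(x) / n
          num = sum((x[t] - mean) * (x[t + 1] - mean) for t in range(n - 1))
          den = sum((v - mean) ** 2 for v in x)
          return num / den

      series = [2.1, 2.4, 2.0, 2.6, 2.8, 2.5, 3.0, 2.9]
      print(round(lag1_autocorr(series), 3))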

  3. Maximum likelihood-based analysis of single-molecule photon arrival trajectories

    NASA Astrophysics Data System (ADS)

    Hajdziona, Marta; Molski, Andrzej

    2011-02-01

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well-separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
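
    The two selection criteria compared above have simple standard forms, AIC = 2k - 2 ln L and BIC = k ln n - 2 ln L, with k free parameters, n observed photons, and L the maximized likelihood. The Python sketch below evaluates them for made-up log-likelihoods of two candidate kinetic models; it does not reproduce the paper's Markov modulated Poisson likelihood.

      import math

      def aic(log_lik, k):
          return 2 * k - 2 * log_lik

      def bic(log_lik, k, n):
          return k * math.log(n) - 2 * log_lik

      # hypothetical maximized log-likelihoods for two- and three-state models
      n_photons = 10_000
      print(aic(-5210.0, 4), bic(-5210.0, 4, n_photons))
      print(aic(-5195.0, 9), bic(-5195.0, 9, n_photons))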

  4. First Results of Field Absolute Calibration of the GPS Receiver Antenna at Wuhan University.

    PubMed

    Hu, Zhigang; Zhao, Qile; Chen, Guo; Wang, Guangxing; Dai, Zhiqiang; Li, Tao

    2015-11-13

    GNSS receiver antenna phase center variations (PCVs), which arise from the non-spherical phase response of GNSS signals, have to be well corrected for high-precision GNSS applications. Without a precise antenna phase center correction (PCC) model, the estimated position of a station monument can be biased by up to several centimeters. The Chinese large-scale research project "Crustal Movement Observation Network of China" (CMONOC), which requires high-precision positions across a comprehensive GPS observational network, motivated the establishment of a field absolute calibration facility for GPS receiver antennas at Wuhan University. In this paper the calibration facilities are first introduced, and the multipath elimination and PCV estimation strategies currently used are then elaborated. The estimated PCV values of a test antenna are finally validated by comparison with the International GNSS Service (IGS) type values. Examples of TRM57971.00 NONE antenna calibrations from our calibration facility demonstrate that the derived PCVs and IGS type mean values agree at the 1 mm level.

  5. [Radiance Simulation of BUV Hyperspectral Sensor on Multi Angle Observation, and Improvement to Initial Total Ozone Estimating Model of TOMS V8 Total Ozone Algorithm].

    PubMed

    Lü, Chun-guang; Wang, Wei-he; Yang, Wen-bo; Tian, Qing-iju; Lu, Shan; Chen, Yun

    2015-11-01

    New hyperspectral sensors for total ozone detection are expected to be carried on geostationary platforms in the future, because local tropospheric ozone pollution and the diurnal variation of ozone are receiving increasing attention. Sensors on geostationary satellites frequently acquire images at large observation angles, which places higher demands on total ozone retrieval for these observation geometries. The TOMS V8 algorithm is mature and widely used for low-orbit ozone-detecting sensors, but it still lacks accuracy at large observation geometries, so improving the accuracy of total ozone retrieval remains an urgent problem. Using the moderate-resolution atmospheric transmission model MODTRAN, synthetic UV backscatter radiance in the spectral region from 305 to 360 nm was simulated for clear sky, multiple angles (12 solar zenith angles and view zenith angles) and 26 standard profiles, and the correlations and trends between atmospheric total ozone and the earth's backscattered UV radiation were analyzed from the resulting data. On this basis, a modified initial total ozone estimation model for the TOMS V8 algorithm was constructed to improve the accuracy of the initial total ozone estimate at large observation geometries. The analysis of total ozone against simulated UV backscatter radiance shows that the radiance at 317.5 nm (R₃₁₇.₅) decreases as total ozone rises; for small solar zenith angles (SZA) and fixed total ozone, R₃₁₇.₅ decreases with increasing view zenith angle (VZA), but increases with VZA at large SZA. Comparison of the two fitting models shows that, except when both SZA and VZA are large (> 80°), the exponential and logarithmic fitting models both achieve high fitting precision (R² > 0.90), and the precision of both decreases as SZA and VZA rise. In most cases the precision of the logarithmic fitting model is about 0.9% higher than that of the exponential fitting model. With increasing VZA or SZA the fitting precision gradually falls, with a larger drop at large VZA or SZA; in addition, the fitting precision exhibits a plateau over the small-SZA range. The modified initial total ozone estimating model (ln(I) vs. Ω) was established from the logarithmic fitting model and compared with the traditional estimating model (I vs. ln(Ω)). The RMSE of both models trends downward as total ozone rises; in the low total ozone region (175-275 DU) the RMSE is clearly higher than in the high region (425-525 DU), with an RMSE peak and trough at 225 and 475 DU, respectively. With increasing VZA and SZA the RMSE of both initial estimating models rises overall, and the rise for ln(I) vs. Ω becomes more pronounced as SZA and VZA grow. The estimates from the modified model are better than those from the traditional model over the whole total ozone range (the RMSE is 0.087%-0.537% lower than that of the traditional model), especially in the low total ozone region and at large observation geometries. The traditional estimating model relies on the precision of the exponential fitting model, whereas the modified estimating model relies on the precision of the logarithmic fitting model. The improvement in estimation accuracy achieved by the modified initial total ozone estimating model extends the application range of the TOMS V8 algorithm. For a sensor carried on a geostationary platform, the modified estimating model can help improve inversion accuracy over a wide spatial and temporal range, and can support and inform future updates of the TOMS algorithm.
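
    To make the two initial-estimate forms concrete, the following minimal Python sketch (with fabricated radiance values and coefficients, not those of the paper) contrasts the traditional fit of I against ln(Ω) with the modified fit of ln(I) against Ω, each inverted to give an initial total ozone estimate.

    ```python
    import numpy as np

    # Hypothetical illustration of the two initial-estimate forms compared above:
    # the traditional model fits radiance I against ln(total ozone Omega), the
    # modified model fits ln(I) against Omega. All values are fabricated.
    rng = np.random.default_rng(0)
    omega = np.linspace(175, 525, 15)                  # total ozone (DU)
    radiance = 8.0 * np.exp(-0.004 * omega)            # synthetic R_317.5-like radiance
    radiance *= 1 + 0.01 * rng.standard_normal(omega.size)

    b_t, a_t = np.polyfit(np.log(omega), radiance, 1)  # traditional: I = a + b*ln(Omega)
    b_m, a_m = np.polyfit(omega, np.log(radiance), 1)  # modified:    ln(I) = a + b*Omega

    def omega_traditional(i_obs):
        """Invert I = a + b*ln(Omega) to get an initial total ozone estimate."""
        return np.exp((i_obs - a_t) / b_t)

    def omega_modified(i_obs):
        """Invert ln(I) = a + b*Omega to get an initial total ozone estimate."""
        return (np.log(i_obs) - a_m) / b_m

    i_test = 8.0 * np.exp(-0.004 * 300.0)              # noiseless radiance at 300 DU
    print(omega_traditional(i_test), omega_modified(i_test))
    ```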

  6. A weighted least squares estimation of the polynomial regression model on paddy production in the area of Kedah and Perlis

    NASA Astrophysics Data System (ADS)

    Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd

    2017-08-01

    The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation; in other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its property of minimum variance in the class of linear unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates, as in the sketch below. This study used a polynomial model with weighted least squares estimation to investigate paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that the factors affecting paddy production are the mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and the amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
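
    A minimal sketch of the weighted least squares idea for a polynomial model, assuming heteroscedastic errors and synthetic data (the study's actual predictors and weights are not reproduced here):

    ```python
    import numpy as np

    # Weighted least squares for a quadratic model, with weights taken as the
    # inverse of an assumed error variance. Data are synthetic.
    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 40)                       # e.g. a cultivation characteristic
    sigma = 0.5 + 0.3 * x                            # non-constant error std (heteroscedastic)
    y = 2.0 + 1.5 * x - 0.08 * x**2 + rng.normal(0, sigma)

    X = np.column_stack([np.ones_like(x), x, x**2])  # design matrix for the polynomial
    W = np.diag(1.0 / sigma**2)                      # WLS weights = 1 / variance

    # Solve the weighted normal equations (X' W X) beta = X' W y
    beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # unweighted fit for comparison
    print("WLS:", beta_wls, "OLS:", beta_ols)
    ```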

  7. Skeletal Correlates for Body Mass Estimation in Modern and Fossil Flying Birds

    PubMed Central

    Field, Daniel J.; Lynner, Colton; Brown, Christian; Darroch, Simon A. F.

    2013-01-01

    Scaling relationships between skeletal dimensions and body mass in extant birds are often used to estimate body mass in fossil crown-group birds, as well as in stem-group avialans. However, useful statistical measurements for constraining the precision and accuracy of fossil mass estimates are rarely provided, which prevents the quantification of robust upper and lower bound body mass estimates for fossils. Here, we generate thirteen body mass correlations and associated measures of statistical robustness using a sample of 863 extant flying birds. By providing robust body mass regressions with upper- and lower-bound prediction intervals for individual skeletal elements, we address the longstanding problem of body mass estimation for highly fragmentary fossil birds. We demonstrate that the most precise proxy for estimating body mass in the overall dataset, measured both as the coefficient of determination of ordinary least squares regression and as percent prediction error, is the maximum diameter of the coracoid's humeral articulation facet (the glenoid). We further demonstrate that this result is consistent among the majority of investigated avian orders (10 out of 18). As a result, we suggest that, in the majority of cases, this proxy may provide the most accurate estimates of body mass for volant fossil birds. Additionally, by presenting statistical measurements of body mass prediction error for thirteen different body mass regressions, this study provides a much-needed quantitative framework for the accurate estimation of body mass and associated ecological correlates in fossil birds. The application of these regressions will enhance the precision and robustness of many mass-based inferences in future paleornithological studies. PMID:24312392
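
    As an illustration of the kind of regression underlying such mass estimates, the following sketch fits a log-log OLS model on simulated data and returns a point prediction with an approximate 95% prediction interval; variable names and coefficients are hypothetical, not the published ones.

    ```python
    import numpy as np

    # Simulated log-log regression of body mass on a skeletal measurement,
    # with a prediction interval for a new (e.g. fossil) measurement.
    rng = np.random.default_rng(2)
    n = 200
    log_glenoid = rng.uniform(0.3, 1.5, n)                        # log10 glenoid diameter (mm), invented
    log_mass = 0.5 + 2.2 * log_glenoid + rng.normal(0, 0.08, n)   # log10 body mass (g), invented

    X = np.column_stack([np.ones(n), log_glenoid])
    beta, res, *_ = np.linalg.lstsq(X, log_mass, rcond=None)
    resid = log_mass - X @ beta
    s2 = resid @ resid / (n - 2)                                  # residual variance

    def predict_with_interval(x_new, t=1.96):
        """Point prediction and approximate 95% prediction interval on the log scale."""
        x_vec = np.array([1.0, x_new])
        xtx_inv = np.linalg.inv(X.T @ X)
        se_pred = np.sqrt(s2 * (1 + x_vec @ xtx_inv @ x_vec))
        y_hat = x_vec @ beta
        return y_hat, y_hat - t * se_pred, y_hat + t * se_pred

    print(predict_with_interval(1.0))   # back-transform with 10**value for mass in grams
    ```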

  8. Maximum likelihood estimation of correction for dilution bias in simple linear regression using replicates from subjects with extreme first measurements.

    PubMed

    Berglund, Lars; Garmo, Hans; Lindbäck, Johan; Svärdsudd, Kurt; Zethelius, Björn

    2008-09-30

    The least-squares estimator of the slope in a simple linear regression model is biased towards zero when the predictor is measured with random error. A corrected slope may be estimated by adding data from a reliability study, which comprises a subset of subjects from the main study. The precision of this corrected slope depends on the design of the reliability study and on the choice of estimator. Previous work has assumed that the reliability study constitutes a random sample from the main study. A more efficient design is to use subjects with extreme values on their first measurement. Previously, we published a variance formula for the corrected slope, when the correction factor is the slope in the regression of the second measurement on the first. In this paper we show that both designs are improved by maximum likelihood estimation (MLE). The precision gain is explained by the inclusion of data from all subjects for estimation of the predictor's variance and by the use of the second measurement for estimation of the covariance between response and predictor. The gain from MLE increases with a stronger true relationship between response and predictor and with lower precision in the predictor measurements. We present a real data example on the relationship between fasting insulin, a surrogate marker, and true insulin sensitivity measured by a gold-standard euglycaemic insulin clamp, and simulations, where the behavior of profile-likelihood-based confidence intervals is examined. MLE was shown to be a robust estimator for non-normal distributions and efficient for small sample situations. Copyright (c) 2008 John Wiley & Sons, Ltd.
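
    The following simulation sketch illustrates the classical dilution (attenuation) problem and the simple replicate-based correction described above; it is not the paper's maximum likelihood estimator, and all values are fabricated.

    ```python
    import numpy as np

    # Regression dilution: measurement error in the predictor attenuates the
    # OLS slope; a reliability study with replicates gives a correction factor.
    rng = np.random.default_rng(3)
    n, beta_true = 2000, 0.8
    x_true = rng.normal(0, 1, n)
    y = 1.0 + beta_true * x_true + rng.normal(0, 0.5, n)
    x1 = x_true + rng.normal(0, 0.7, n)          # first error-prone measurement
    x2 = x_true + rng.normal(0, 0.7, n)          # replicate (reliability study)

    naive = np.polyfit(x1, y, 1)[0]              # attenuated slope
    reliability = np.polyfit(x1, x2, 1)[0]       # slope of 2nd measurement on 1st
    corrected = naive / reliability              # classical correction factor
    print(naive, corrected, beta_true)
    ```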

  9. Risk-based damage potential and loss estimation of extreme flooding scenarios in the Austrian Federal Province of Tyrol

    NASA Astrophysics Data System (ADS)

    Huttenlau, M.; Stötter, J.; Stiefelmeyer, H.

    2010-12-01

    Within the last decades, serious flooding events have occurred in many parts of Europe, and in 2005 the Austrian Federal Province of Tyrol was especially seriously affected. These events in general, and particularly the 2005 event, have sensitised decision makers and the public. Besides discussions pertaining to protection goals and lessons learnt, the issue of the potential consequences of extreme and severe flooding events has been raised. In addition to the general public interest, decision makers in the insurance industry, public authorities, and responsible politicians are especially confronted with the question of the possible consequences of extreme events. Answers are necessary for the implementation of appropriate preventive risk management strategies. Property and liability losses represent a large proportion of the direct tangible losses; they are of great interest for the insurance sector and can be understood as main indicators of the severity of potential events. The natural scientific-technical risk analysis concept provides a predefined and structured framework to analyse the quantities of affected elements at risk, their corresponding damage potentials, and the potential losses. Generally, this risk concept follows the process steps of hazard analysis, exposure analysis, and consequence analysis. In addition to the conventional hazard analysis, the potential number of endangered elements and their corresponding damage potentials were analysed and, thereupon, concrete losses were estimated, taking the specific vulnerability of the various individual elements at risk into consideration. The present flood risk analysis firstly estimates the general exposure of the risk indicators in the study area and secondly analyses the specific exposures and consequences of five extreme event scenarios. In order to precisely identify, localize, and characterize the relevant risk indicators of buildings, dwellings and inventory, vehicles, and individuals, a detailed geodatabase of the existing stock of elements and values was established on a single object level. The localized and functionally differentiated stock of elements was then assessed monetarily on the basis of derived representative mean insurance values. Thus, well-known discrepancies between analyses of the stock of elements and values at local and regional scales could be reduced considerably. The spatial join of the results of the hazard analysis with the stock of elements and values enables the identification and quantification of the elements at risk and their corresponding damage potential. Thereupon, Extreme Scenario Losses (ESL) were analysed under different vulnerability approaches describing each element's specific susceptibility, resulting in scenario-specific ranges of ESL rather than single values. The exposure analysis of the general endangerment in Tyrol identifies (i) 105 330 individuals, (ii) 20 272 buildings and 50 157 dwellings with a corresponding damage potential of approx. EUR 20 bn. and (iii) 62 494 vehicles with a corresponding damage potential of EUR 1 bn. Depending on the individual extreme event scenarios, the ESL solely to buildings and inventory vary between EUR 0.9-1.3 bn. for the scenario with the least ESL and EUR 2.2-2.5 bn. for the most serious scenarios.
The correlation of the private property losses to buildings and inventory with further direct tangible loss categories, based on investigations after the 2005 event, results in potential direct tangible ESL of up to EUR 7.6 bn. Apart from the specific study results, a general finding is that, besides the further development of modelling capabilities and scenario concepts, the key to considerably decreasing the uncertainties of integral flood risk analyses is the development and implementation of more precise methods to determine the stock of elements and values and to evaluate, in a more differentiated way, the vulnerability or susceptibility of affected structures to certain flood characteristics.

  10. A survey sampling approach for pesticide monitoring of community water systems using groundwater as a drinking water source.

    PubMed

    Whitmore, Roy W; Chen, Wenlin

    2013-12-04

    The ability to infer human exposure to substances from drinking water using monitoring data helps determine and/or refine potential risks associated with drinking water consumption. We describe a survey sampling approach and its application to an atrazine groundwater monitoring study to adequately characterize upper exposure centiles and associated confidence intervals with predetermined precision. Study design and data analysis included sampling frame definition, sample stratification, sample size determination, allocation to strata, analysis weights, and weighted population estimates (see the sketch below). The sampling frame encompassed 15 840 groundwater community water systems (CWS) in 21 states throughout the U.S. Median and 95th percentile atrazine concentrations were 0.0022 and 0.024 ppb, respectively, for all CWS. Statistical estimates agreed with historical monitoring results, suggesting that the study design was adequate and robust. This methodology makes no assumptions regarding the occurrence distribution (e.g., lognormality); thus, analyses based on the design-induced distribution provide the most robust basis for making inferences from the sample to the target population.
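
    A hedged sketch of a design-weighted percentile estimate, the kind of weighted population estimate such a stratified design yields; the concentrations and analysis weights below are fabricated for illustration.

    ```python
    import numpy as np

    # Design-weighted percentile of the weight-induced (design-based) distribution.
    conc = np.array([0.001, 0.002, 0.004, 0.010, 0.030, 0.002, 0.0005, 0.015])  # ppb, invented
    weights = np.array([120., 80., 60., 40., 10., 150., 200., 25.])             # analysis weights, invented

    def weighted_percentile(values, w, q):
        """Percentile q (0-100) of the design-weighted distribution of values."""
        order = np.argsort(values)
        v, w = values[order], w[order]
        cum = np.cumsum(w) / w.sum()
        return v[np.searchsorted(cum, q / 100.0)]

    print(weighted_percentile(conc, weights, 50), weighted_percentile(conc, weights, 95))
    ```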

  11. Borrowing of strength and study weights in multivariate and network meta-analysis.

    PubMed

    Jackson, Dan; White, Ian R; Price, Malcolm; Copas, John; Riley, Richard D

    2017-12-01

    Multivariate and network meta-analysis have the potential for the estimated mean of one effect to borrow strength from the data on other effects of interest. The extent of this borrowing of strength is usually assessed informally. We present new mathematical definitions of 'borrowing of strength'. Our main proposal is based on a decomposition of the score statistic, which we show can be interpreted as comparing the precision of estimates from the multivariate and univariate models. Our definition of borrowing of strength therefore emulates the usual informal assessment. We also derive a method for calculating study weights, which we embed into the same framework as our borrowing of strength statistics, so that percentage study weights can accompany the results from multivariate and network meta-analyses as they do in conventional univariate meta-analyses. Our proposals are illustrated using three meta-analyses involving correlated effects for multiple outcomes, multiple risk factor associations and multiple treatments (network meta-analysis).

  12. Borrowing of strength and study weights in multivariate and network meta-analysis

    PubMed Central

    Jackson, Dan; White, Ian R; Price, Malcolm; Copas, John; Riley, Richard D

    2016-01-01

    Multivariate and network meta-analysis have the potential for the estimated mean of one effect to borrow strength from the data on other effects of interest. The extent of this borrowing of strength is usually assessed informally. We present new mathematical definitions of ‘borrowing of strength’. Our main proposal is based on a decomposition of the score statistic, which we show can be interpreted as comparing the precision of estimates from the multivariate and univariate models. Our definition of borrowing of strength therefore emulates the usual informal assessment. We also derive a method for calculating study weights, which we embed into the same framework as our borrowing of strength statistics, so that percentage study weights can accompany the results from multivariate and network meta-analyses as they do in conventional univariate meta-analyses. Our proposals are illustrated using three meta-analyses involving correlated effects for multiple outcomes, multiple risk factor associations and multiple treatments (network meta-analysis). PMID:26546254

  13. Risk factors for recurrence of venous thromboembolism associated with the use of oral contraceptives.

    PubMed

    Vaillant-Roussel, Hélène; Ouchchane, Lemlih; Dauphin, Claire; Philippe, Pierre; Ruivard, Marc

    2011-11-01

    Combined oral contraceptives (COC) increase the risk of venous thromboembolism (VTE), but the risk of recurrent VTE is not precisely determined. In this retrospective cohort study, we sought the risk factors for recurrence after a first VTE that occurred in women taking COC. Time-to-event analysis was done with Kaplan-Meier estimates. In total, 172 patients were included (43% with pulmonary embolism): 82% had no other clinical risk factor for VTE. Among the 160 patients who stopped anticoagulation, the cumulative incidence of recurrent VTE was 5.1% after 1 year and 14.2% after 5 years. Significant factors associated with recurrence were renewed use of COC [hazard ratio (HR)=8.2 (2.1-32.2)], antiphospholipid syndrome [HR=4.1 (1.3-12.5)] and protein C deficiency or factor II G20210A [HR=2.7 (1.1-7)]. Pure-progestin contraception [HR=1.3 (0.5-3.0)] or factor V Leiden [HR=1.3 (0.5-3.4)] did not increase recurrence. Postsurgical VTE had a lower risk of recurrence [HR=0.1 (0.0-0.9)]. Further studies are warranted to determine whether testing for antiphospholipid syndrome, protein C deficiency or the factor II G20210A could modify the duration of anticoagulation. This study confirms the safety of pure-progestin contraception. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Comparison of sampling designs for estimating deforestation from landsat TM and MODIS imagery: a case study in Mato Grosso, Brazil.

    PubMed

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.

  15. A Model of Gravity Vector Measurement Noise for Estimating Accelerometer Bias in Gravity Disturbance Compensation.

    PubMed

    Tie, Junbo; Cao, Juliang; Chang, Lubing; Cai, Shaokun; Wu, Meiping; Lian, Junxiang

    2018-03-16

    Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, and estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation, and the situation in which the accelerometer bias should be estimated is established. The accelerometer bias is estimated from the gravity vector measurement, and a model of measurement noise in gravity vector measurement is built. Based on this model, accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimations of the accelerometer bias can be obtained with the proposed method.

  16. A Model of Gravity Vector Measurement Noise for Estimating Accelerometer Bias in Gravity Disturbance Compensation

    PubMed Central

    Cao, Juliang; Cai, Shaokun; Wu, Meiping; Lian, Junxiang

    2018-01-01

    Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, and estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation, and the situation in which the accelerometer bias should be estimated is established. The accelerometer bias is estimated from the gravity vector measurement, and a model of measurement noise in gravity vector measurement is built. Based on this model, accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimations of the accelerometer bias can be obtained with the proposed method. PMID:29547552

  17. Comparison of Sampling Designs for Estimating Deforestation from Landsat TM and MODIS Imagery: A Case Study in Mato Grosso, Brazil

    PubMed Central

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block. PMID:25258742

  18. Health risks of climate change: an assessment of uncertainties and its implications for adaptation policies.

    PubMed

    Wardekker, J Arjan; de Jong, Arie; van Bree, Leendert; Turkenburg, Wim C; van der Sluijs, Jeroen P

    2012-09-19

    Projections of health risks of climate change are surrounded with uncertainties in knowledge. Understanding of these uncertainties will help the selection of appropriate adaptation policies. We made an inventory of conceivable health impacts of climate change, explored the type and level of uncertainty for each impact, and discussed its implications for adaptation policy. A questionnaire-based expert elicitation was performed using an ordinal scoring scale. Experts were asked to indicate the level of precision with which health risks can be estimated, given the present state of knowledge. We assessed the individual scores, the expertise-weighted descriptive statistics, and the argumentation given for each score. Suggestions were made for how dealing with uncertainties could be taken into account in climate change adaptation policy strategies. The results showed that the direction of change could be indicated for most anticipated health effects. For several potential effects, too little knowledge exists to indicate whether any impact will occur, or whether the impact will be positive or negative. For several effects, rough 'order-of-magnitude' estimates were considered possible. Factors limiting health impact quantification include: lack of data, multi-causality, unknown impacts considering a high-quality health system, complex cause-effect relations leading to multi-directional impacts, possible changes of present-day response-relations, and difficulties in predicting local climate impacts. Participants considered heat-related mortality and non-endemic vector-borne diseases particularly relevant for climate change adaptation. For possible climate related health impacts characterised by ignorance, adaptation policies that focus on enhancing the health system's and society's capability of dealing with possible future changes, uncertainties and surprises (e.g. through resilience, flexibility, and adaptive capacity) are most appropriate. For climate related health effects for which rough risk estimates are available, 'robust decision-making' is recommended. For health effects with limited societal and policy relevance, we recommend focusing on no-regret measures. For highly relevant health effects, precautionary measures can be considered. This study indicated that analysing and characterising uncertainty by means of a typology can be a very useful approach for selection and prioritization of preferred adaptation policies to reduce future climate related health risks.

  19. The Estimation of Precisions in the Planning of Uas Photogrammetric Surveys

    NASA Astrophysics Data System (ADS)

    Passoni, D.; Federici, B.; Ferrando, I.; Gagliolo, S.; Sguerso, D.

    2018-05-01

    Unmanned Aerial Systems (UAS) are widely used in photogrammetric surveys both of structures and of small areas. Geomatics focuses attention on the metric quality of the final products of the survey, creating several 3D modelling applications from UAS images. As is widely known, the quality of the results derives from the quality of the image acquisition phase, which requires an a priori estimation of the expected precisions. The planning phase is typically managed using dedicated tools adapted from traditional aerial-photogrammetric flight planning, but UAS flight has features completely different from the traditional case; hence, the use of UAS for photogrammetric applications today requires a growth in planning knowledge. The basic idea of this research is to provide a drone photogrammetric flight planning tool that considers the required metric precisions, given a priori the classical parameters of photogrammetric planning: flight altitude, overlaps and the geometric parameters of the camera. The created "office suite" allows realistic planning of a photogrammetric survey, starting from approximate knowledge of the Digital Surface Model (DSM) and the effective attitude parameters, which change along the route. The planning products are the overlap of the images, the Ground Sample Distance (GSD) and the precision of each pixel, taking the real geometry into account. The different tested procedures, the obtained results and the solution proposed for the a priori estimation of precisions in the particular case of UAS surveys are reported here.
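
    For orientation, a minimal sketch of two basic planning quantities, ground sample distance and image footprint, computed from flight altitude and camera geometry; the parameter values are illustrative and this is not the paper's planning suite.

    ```python
    # Ground sample distance and nadir image footprint from flight altitude and
    # camera geometry (flat-terrain approximation). Example values are invented.
    def gsd_m(altitude_m, focal_mm, pixel_pitch_um):
        """Ground sample distance (m/pixel) for a nadir image over flat terrain."""
        return altitude_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)

    def footprint_m(altitude_m, focal_mm, sensor_width_mm, sensor_height_mm):
        """Ground footprint (width, height) in metres of a single nadir image."""
        scale = altitude_m / (focal_mm * 1e-3)
        return scale * sensor_width_mm * 1e-3, scale * sensor_height_mm * 1e-3

    # Example: 80 m flight altitude, 8.8 mm focal length, 2.4 um pixels, 13.2 x 8.8 mm sensor
    print(gsd_m(80, 8.8, 2.4))              # about 0.022 m/pixel
    print(footprint_m(80, 8.8, 13.2, 8.8))  # about 120 m x 80 m on the ground
    ```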

  20. Determination of the top quark mass circa 2013: methods, subtleties, perspectives

    NASA Astrophysics Data System (ADS)

    Juste, Aurelio; Mantry, Sonny; Mitov, Alexander; Penin, Alexander; Skands, Peter; Varnes, Erich; Vos, Marcel; Wimpenny, Stephen

    2014-10-01

    We present an up-to-date overview of the problem of top quark mass determination. We assess the need for precision in the top mass extraction in the LHC era together with the main theoretical and experimental issues arising in precision top mass determination. We collect and document existing results on top mass determination at hadron colliders and map the prospects for future precision top mass determination at e+e- colliders. We present a collection of estimates for the ultimate precision of various methods for top quark mass extraction at the LHC.

  1. Risk of upper gastrointestinal bleeding with selective serotonin reuptake inhibitors with or without concurrent nonsteroidal anti-inflammatory use: a systematic review and meta-analysis.

    PubMed

    Anglin, Rebecca; Yuan, Yuhong; Moayyedi, Paul; Tse, Frances; Armstrong, David; Leontiadis, Grigorios I

    2014-06-01

    There is emerging concern that selective serotonin reuptake inhibitors (SSRIs) may be associated with an increased risk of upper gastrointestinal (GI) bleeding, and that this risk may be further increased by concurrent use of nonsteroidal anti-inflammatory (NSAID) medications. Previous reviews of a relatively small number of studies have reported a substantial risk of upper GI bleeding with SSRIs; however, more recent studies have produced variable results. The objective of this study was to obtain a more precise estimate of the risk of upper GI bleeding with SSRIs, with or without concurrent NSAID use. MEDLINE, EMBASE, PsycINFO, the Cochrane central register of controlled trials (through April 2013), and US and European conference proceedings were searched. Controlled trials, cohort, case-control, and cross-sectional studies that reported the incidence of upper GI bleeding in adults on SSRIs with or without concurrent NSAID use, compared with placebo or no treatment were included. Data were extracted independently by two authors. Dichotomous data were pooled to obtain odds ratio (OR) of the risk of upper GI bleeding with SSRIs +/- NSAID, with a 95% confidence interval (CI). The main outcome and measure of the study was the risk of upper GI bleeding with SSRIs compared with placebo or no treatment. Fifteen case-control studies (including 393,268 participants) and four cohort studies were included in the analysis. There was an increased risk of upper GI bleeding with SSRI medications in the case-control studies (OR=1.66, 95% CI=1.44,1.92) and cohort studies (OR=1.68, 95% CI=1.13,2.50). The number needed to harm for upper GI bleeding with SSRI treatment in a low-risk population was 3,177, and in a high-risk population it was 881. The risk of upper GI bleeding was further increased with the use of both SSRI and NSAID medications (OR=4.25, 95% CI=2.82,6.42). SSRI medications are associated with a modest increase in the risk of upper GI bleeding, which is lower than has previously been estimated. This risk is significantly elevated when SSRI medications are used in combination with NSAIDs, and physicians prescribing these medications together should exercise caution and discuss this risk with patients.
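
    As a worked illustration of how a number needed to harm can be derived from a pooled odds ratio, the sketch below converts an OR to a risk difference under an assumed baseline risk; the baseline risks shown are assumptions, not the values used in the meta-analysis.

    ```python
    # Number needed to harm (NNH) from a pooled odds ratio and an assumed
    # baseline (unexposed) risk of upper GI bleeding. Baseline risks are invented.
    def nnh_from_or(odds_ratio, baseline_risk):
        """NNH for one additional upper GI bleed, given OR and baseline risk."""
        odds_exposed = odds_ratio * baseline_risk / (1 - baseline_risk)
        risk_exposed = odds_exposed / (1 + odds_exposed)
        return 1.0 / (risk_exposed - baseline_risk)

    print(round(nnh_from_or(1.66, 0.0005)))   # assumed low-risk baseline of 0.05%
    print(round(nnh_from_or(1.66, 0.0018)))   # assumed higher-risk baseline of 0.18%
    ```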

  2. Synergies between geomorphic hazard and risk and sediment cascade research fields: exploiting geomorphic processes' susceptibility analyses to derive potential sediment sources in the Olteț river catchment, southern Romania

    NASA Astrophysics Data System (ADS)

    Jurchescu, Marta-Cristina

    2015-04-01

    Identifying sediment sources and sediment availability represents a major problem and one of the first concerns in the field of sediment cascade research. This paper addresses the on-site effects associated with sediment transfer, investigating the degree to which studies pertaining to the field of geomorphic hazard and risk research could be exploited in sediment budget estimations. More precisely, the paper investigates whether results obtained in assessing susceptibility to various geomorphic processes (landslides, soil erosion, gully erosion) could be transferred to the study of sediment sources within a basin. The study area is a medium-sized catchment (> 2400 km2) in southern Romania encompassing four different geomorphic units (mountains, hills, piedmont and plain). The region is highly affected by a wide range of geomorphic processes which supply sediments to the drainage network. The presence of a reservoir at the river outlet emphasizes the importance of estimating sediment budgets. The susceptibility analyses are conducted separately for each of the considered process types in a top-down framework, i.e. at two different scales, using scale-adapted methods and validation techniques in each case, as widely recognized in the hazard and risk research literature. The analyses start at a regional scale, covering the entire catchment, using readily available data on conditioning factors. In a second step, the susceptibility analyses are carried out at a medium scale for selected hotspot compartments of the catchment. In order to appraise the extent to which susceptibility results are relevant in interpreting sediment sources at catchment scale, scale-induced differences are analysed in the case of each process. Based on the amount of uncertainty revealed by each regional-scale analysis in comparison to the medium-scale ones, decisions are made on whether the former are acceptable for the aim of identifying potential sediment source areas or whether they should be refined using more precise methods and input data. The three final basin-wide susceptibility maps are eventually converted, on a threshold basis, to maps showing the potential areas of sediment production by landslides, soil erosion and gully erosion, respectively. These are then combined into one single map of potential sediment sources. The susceptibility assessments indicate that the basin compartments most prone to landslides and soil erosion correspond to the Subcarpathian hills, while the one most threatened by gully erosion corresponds to the piedmont relief. The final map of potential sediment sources shows that approximately 34% of the study catchment is occupied by areas potentially generating sediment through landslides and gully erosion, extending over most of the high piedmont and Subcarpathian hills. The results prove that there is an important link between the two research fields, i.e. geomorphic hazard and risk and sediment cascade, by allowing the transfer of knowledge from geomorphic processes' susceptibility analyses to the estimation of potential sediment sources within catchments. The synergy between the two fields raises further challenges to be tackled in future (e.g. how to derive sediment transfer rates from quantitative hazard estimates).

  3. Study on the flood simulation techniques for estimation of health risk in Dhaka city, Bangladesh

    NASA Astrophysics Data System (ADS)

    Hashimoto, M.; Suetsugi, T.; Sunada, K.; ICRE

    2011-12-01

    Although some studies have examined the spread of infectious disease with flooding, the relation between flooding and the spread of infection has not yet been clarified. The improvement of the calculation precision of inundation, and its relation to epidemiologically surveyed infectious disease, are therefore investigated in a case study in Dhaka city, Bangladesh. The inundation was computed using a two-dimensional numerical flood simulation model. The "sensitivity to inundation" of hydraulic factors such as the drainage channel, the dike, and the building occupied ratio was examined because of the lack of digital data sets related to flood simulation. Each element was incorporated progressively into the flood simulation model, and the results were checked against the inundation classification from an existing study (Mollah et al., 2007). The results show that the "dyke" and "drainage channel" factors have a marked influence on water levels near each facility, and the inundation level and duration are influenced over wide areas when the "building occupied ratio" is also considered. A correlation between maximum inundation depth and health risk (DALY, mortality, morbidity) was found, but the inundation model has not yet been validated for this case; it needs to be validated against observed inundation depths. Drainage facilities such as the sewer network and the pumping system will also be considered in further research to improve the precision of the inundation model.

  4. Accounting for tagging-to-harvest mortality in a Brownie tag-recovery model by incorporating radio-telemetry data.

    PubMed

    Buderman, Frances E; Diefenbach, Duane R; Casalena, Mary Jo; Rosenberry, Christopher S; Wallingford, Bret D

    2014-04-01

    The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50-100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallapavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, caused by hunter selectivity or changes in a marked animal's behavior.
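
    A minimal numerical sketch (not the paper's joint likelihood) of how a telemetry-based estimate of tagging-to-season survival can correct a Brownie-type harvest rate; all counts are invented.

    ```python
    # Correcting a tag-recovery harvest rate for mortality between tagging and
    # the hunting season, using known-fate radio-telemetry data. Counts are invented.
    n_radio = 25          # animals with radio transmitters (known fate)
    n_radio_alive = 22    # still alive at the start of the hunting season
    n_tagged = 100        # reward-tagged animals
    n_recovered = 18      # tags recovered during the hunting season

    s_tag_to_season = n_radio_alive / n_radio        # survival from tagging to season
    naive_harvest = n_recovered / n_tagged           # biased low if some died before season
    adjusted_harvest = n_recovered / (n_tagged * s_tag_to_season)
    print(naive_harvest, round(adjusted_harvest, 3))
    ```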

  5. Robust double gain unscented Kalman filter for small satellite attitude estimation

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun

    2017-08-01

    Limited by the low precision of small-satellite sensors, high-performance estimation theory remains a popular research topic for attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation with considerable success. However, most existing methods use only the current time-step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and use of the previous time-step's a posteriori measurement residuals. In addition, uncertain model errors are always present in the attitude dynamics, which places higher performance requirements on the classical KF for the attitude estimation problem. Therefore, a novel robust double-gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy these requirements for small-satellite attitude estimation with low-precision sensors. It is assumed that the system state estimation errors are exhibited in the measurement residual; the new method therefore derives a second Kalman gain Kk2 to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonality principle and the unscented transform (UT) strategy are introduced to improve the robustness and performance of the new filter and to reduce the influence of the existing uncertain model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust than the classical unscented Kalman filter (UKF) in dealing with model errors and low-precision sensors for small-satellite attitude estimation.

  6. Evaluation of three paediatric weight estimation methods in Singapore.

    PubMed

    Loo, Pei Ying; Chong, Shu-Ling; Lek, Ngee; Bautista, Dianne; Ng, Kee Chong

    2013-04-01

    Rapid paediatric weight estimation methods in the emergency setting have not been evaluated for South East Asian children. This study aims to assess the accuracy and precision of three such methods in Singapore children: Broselow-Luten (BL) tape, Advanced Paediatric Life Support (APLS) (estimated weight (kg) = 2 (age + 4)) and Luscombe (estimated weight (kg) = 3 (age) + 7) formulae. We recruited 875 patients aged 1-10 years in a Paediatric Emergency Department in Singapore over a 2-month period. For each patient, true weight and height were determined. True height was cross-referenced to the BL tape markings and used to derive estimated weight (virtual BL tape method), while patient's round-down age (in years) was used to derive estimated weights using APLS and Luscombe formulae, respectively. The percentage difference between the true and estimated weights was calculated. For each method, the bias and extent of agreement were quantified using Bland-Altman method (mean percentage difference (MPD) and 95% limits of agreement (LOA)). The proportion of weight estimates within 10% of true weight (p₁₀) was determined. The BL tape method marginally underestimated weights (MPD +0.6%; 95% LOA -26.8% to +28.1%; p₁₀ 58.9%). The APLS formula underestimated weights (MPD +7.6%; 95% LOA -26.5% to +41.7%; p₁₀ 45.7%). The Luscombe formula overestimated weights (MPD -7.4%; 95% LOA -51.0% to +36.2%; p₁₀ 37.7%). Of the three methods we evaluated, the BL tape method provided the most accurate and precise weight estimation for Singapore children. The APLS and Luscombe formulae underestimated and overestimated the children's weights, respectively, and were considerably less precise. © 2013 The Authors. Journal of Paediatrics and Child Health © 2013 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
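
    The two age-based formulas quoted above, together with a Bland-Altman style mean percentage difference, can be sketched as follows; the ages and true weights are invented for illustration.

    ```python
    # APLS and Luscombe age-based weight formulas, plus the mean percentage
    # difference used to compare estimated with true weights. Data are invented.
    def apls_weight(age_years):      # estimated weight (kg) = 2 * (age + 4)
        return 2 * (age_years + 4)

    def luscombe_weight(age_years):  # estimated weight (kg) = 3 * age + 7
        return 3 * age_years + 7

    def mean_percentage_difference(true_w, est_w):
        """Mean of (true - estimated)/true * 100; positive means underestimation."""
        return sum((t - e) / t * 100 for t, e in zip(true_w, est_w)) / len(true_w)

    ages = [2, 4, 6, 8]
    true_weights = [12.5, 17.0, 21.5, 27.0]
    print(mean_percentage_difference(true_weights, [apls_weight(a) for a in ages]))
    print(mean_percentage_difference(true_weights, [luscombe_weight(a) for a in ages]))
    ```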

  7. Accounting for tagging-to-harvest mortality in a Brownie tag-recovery model by incorporating radio-telemetry data

    USGS Publications Warehouse

    Buderman, Frances E.; Diefenbach, Duane R.; Casalena, Mary Jo; Rosenberry, Christopher S.; Wallingford, Bret D.

    2014-01-01

    The Brownie tag-recovery model is useful for estimating harvest rates but assumes all tagged individuals survive to the first hunting season; otherwise, mortality between time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to accurately estimate harvest rate but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters to estimate the probability of surviving the period from capture to the first hunting season, and data from reward-tagged animals in a Brownie tag-recovery model. We evaluated bias and precision of the joint estimator, and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters combined with 50–100 reward tags resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallapavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects the individual's probability of being harvested, caused by hunter selectivity or changes in a marked animal's behavior.

  8. Statistical Indexes for Monitoring Item Behavior under Computer Adaptive Testing Environment.

    ERIC Educational Resources Information Center

    Zhu, Renbang; Yu, Feng; Liu, Su

    A computerized adaptive test (CAT) administration usually requires a large supply of items with accurately estimated psychometric properties, such as item response theory (IRT) parameter estimates, to ensure the precision of examinee ability estimation. However, an estimated IRT model of a given item in any given pool does not always correctly…

  9. Estimating Uncertainty in Annual Forest Inventory Estimates

    Treesearch

    Ronald E. McRoberts; Veronica C. Lessard

    1999-01-01

    The precision of annual forest inventory estimates may be negatively affected by uncertainty from a variety of sources including: (1) sampling error; (2) procedures for updating plots not measured in the current year; and (3) measurement errors. The impact of these sources of uncertainty on final inventory estimates is investigated using Monte Carlo simulation...

  10. Bayesian techniques for surface fuel loading estimation

    Treesearch

    Kathy Gray; Robert Keane; Ryan Karpisz; Alyssa Pedersen; Rick Brown; Taylor Russell

    2016-01-01

    A study by Keane and Gray (2013) compared three sampling techniques for estimating surface fine woody fuels. Known amounts of fine woody fuel were distributed on a parking lot, and researchers estimated the loadings using different sampling techniques. An important result was that precise estimates of biomass required intensive sampling for both the planar intercept...

  11. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach. [Kansas

    NASA Technical Reports Server (NTRS)

    Hixson, M. M.; Bauer, M. E.; Davis, B. J.

    1979-01-01

    The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different sizes of sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.

  12. A mixture model for robust registration in Kinect sensor

    NASA Astrophysics Data System (ADS)

    Peng, Li; Zhou, Huabing; Zhu, Shengguo

    2018-03-01

    The Microsoft Kinect sensor has been widely used in many applications, but it suffers from low registration precision between the color image and the depth image. In this paper, we present a robust method to improve the registration precision using a mixture model that can handle multiple images with a nonparametric model. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). The estimation is performed with the EM algorithm, which obtains good estimates by also estimating the variance of the prior model. We illustrate the proposed method on a publicly available dataset. The experimental results show that our approach outperforms the baseline methods.

  13. Statistical interactions and Bayes estimation of log odds in case-control studies.

    PubMed

    Satagopan, Jaya M; Olson, Sara H; Elston, Robert C

    2017-04-01

    This paper is concerned with the estimation of the logarithm of disease odds (log odds) when evaluating two risk factors, whether or not interactions are present. Statisticians define interaction as a departure from an additive model on a certain scale of measurement of the outcome. Certain interactions, known as removable interactions, may be eliminated by fitting an additive model under an invertible transformation of the outcome. This can potentially provide more precise estimates of log odds than fitting a model with interaction terms. In practice, we may also encounter nonremovable interactions. The model must then include interaction terms, regardless of the choice of the scale of the outcome. However, in practical settings, we do not know at the outset whether an interaction exists, and if so whether it is removable or nonremovable. Rather than trying to decide on significance levels to test for the existence of removable and nonremovable interactions, we develop a Bayes estimator based on a squared error loss function. We demonstrate the favorable bias-variance trade-offs of our approach using simulations, and provide empirical illustrations using data from three published endometrial cancer case-control studies. The methods are implemented in an R program, and available freely at http://www.mskcc.org/biostatistics/~satagopj .

  14. Assessment of coarse sediment mobility in the Black Canyon of the Gunnison River, Colorado.

    PubMed

    Dubinski, Ian M; Wohl, Ellen

    2007-07-01

    The Gunnison River in the Black Canyon of the Gunnison National Park (BCNP) near Montrose, Colorado is a mixed gravel and bedrock river with ephemeral side tributaries. Flow rates are controlled immediately upstream by a diversion tunnel and three reservoirs. The management of the hydraulic control structures has decreased low-frequency, high-stage flows, which are the dominant geomorphic force in bedrock channel systems. We developed a simple model to estimate the extent of sediment mobilization at a given flow in the BCNP and to evaluate changes in the extent and frequency of sediment mobilization for flow regimes before and after flow regulation in 1966. Our methodology provides a screening process for identifying and prioritizing areas in terms of sediment mobility criteria when more precise systematic field data are unavailable. The model uses the ratio between reach-averaged bed shear stress and critical shear stress to estimate when a particular grain size is mobilized for a given reach. We used aerial photography from 1992, digital elevation models, and field surveys to identify individual reaches and estimate reach-averaged hydraulic geometry. Pebble counts of talus and debris fan deposits were used to estimate regional colluvial grain-size distributions. Our results show that the frequency of flows mobilizing river bank sediment along a majority of the Gunnison River in the BCNP has significantly declined since 1966. The model results correspond well to those obtained from more detailed, site-specific field studies carried out by other investigators. Decreases in the frequency of significant sediment-mobilizing flows were more pronounced for regions within the BCNP where the channel gradient is lower. Implications of these results for management include increased risk of encroachment of vegetation on the active channel and long-term channel narrowing by colluvial deposits. It must be recognized that our methodology represents a screening of regional differences in sediment mobility. More precise estimates of hydraulic and sediment parameters would likely be required for dictating quantitative management objectives within the context of sediment mobility and sensitivity to changes in the flow regime.
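
    A hedged sketch of the reach-averaged mobility screening described above, comparing a depth-slope estimate of bed shear stress with a Shields-type critical shear stress; the parameter values are illustrative, not the study's.

    ```python
    # Reach-averaged mobility screening: ratio of bed shear stress to critical
    # shear stress for a given grain size. Inputs are illustrative.
    RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81   # water/sediment density (kg/m^3), gravity (m/s^2)

    def bed_shear_stress(depth_m, slope):
        """Reach-averaged bed shear stress, tau = rho_w * g * h * S (Pa)."""
        return RHO_W * G * depth_m * slope

    def critical_shear_stress(d_m, shields_param=0.045):
        """Critical shear stress for grain diameter d (m), tau_c = theta_c * (rho_s - rho_w) * g * d."""
        return shields_param * (RHO_S - RHO_W) * G * d_m

    tau = bed_shear_stress(depth_m=2.0, slope=0.01)
    tau_c = critical_shear_stress(d_m=0.128)          # 128 mm cobble
    print(tau, tau_c, tau / tau_c > 1.0)              # ratio > 1 implies mobilization
    ```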

  15. Excel-Based Tool for Pharmacokinetically Guided Dose Adjustment of Paclitaxel.

    PubMed

    Kraff, Stefanie; Lindauer, Andreas; Joerger, Markus; Salamone, Salvatore J; Jaehde, Ulrich

    2015-12-01

    Neutropenia is a frequent and severe adverse event in patients receiving paclitaxel chemotherapy. The time above a paclitaxel threshold concentration of 0.05 μmol/L (Tc > 0.05 μmol/L) is a strong predictor for paclitaxel-associated neutropenia and has been proposed as a target pharmacokinetic (PK) parameter for paclitaxel therapeutic drug monitoring and dose adaptation. Up to now, individual Tc > 0.05 μmol/L values are estimated based on a published PK model of paclitaxel by using the software NONMEM. Because many clinicians are not familiar with the use of NONMEM, an Excel-based dosing tool was developed to allow calculation of paclitaxel Tc > 0.05 μmol/L and give clinicians an easy-to-use tool. Population PK parameters of paclitaxel were taken from a published PK model. An Alglib VBA code was implemented in Excel 2007 to compute differential equations for the paclitaxel PK model. Maximum a posteriori Bayesian estimates of the PK parameters were determined with the Excel Solver using individual drug concentrations. Concentrations from 250 patients were simulated receiving 1 cycle of paclitaxel chemotherapy. Predictions of paclitaxel Tc > 0.05 μmol/L as calculated by the Excel tool were compared with NONMEM, whereby maximum a posteriori Bayesian estimates were obtained using the POSTHOC function. There was a good concordance and comparable predictive performance between Excel and NONMEM regarding predicted paclitaxel plasma concentrations and Tc > 0.05 μmol/L values. Tc > 0.05 μmol/L had a maximum bias of 3% and an error on precision of <12%. The median relative deviation of the estimated Tc > 0.05 μmol/L values between both programs was 1%. The Excel-based tool can estimate the time above a paclitaxel threshold concentration of 0.05 μmol/L with acceptable accuracy and precision. The presented Excel tool allows reliable calculation of paclitaxel Tc > 0.05 μmol/L and thus allows target concentration intervention to improve the benefit-risk ratio of the drug. The easy use facilitates therapeutic drug monitoring in clinical routine.
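
    As a highly simplified, hypothetical illustration of computing time above a threshold concentration, the sketch below uses a one-compartment infusion model; the published paclitaxel model is more complex, and all parameters here are invented.

    ```python
    import numpy as np

    # Time above a threshold concentration (Tc > 0.05 umol/L) from a simple
    # one-compartment, constant-rate infusion model. Parameters are invented.
    def concentration(t_h, dose_umol=400.0, v_l=300.0, cl_l_per_h=20.0, t_inf_h=3.0):
        """Plasma concentration (umol/L) during and after the infusion."""
        k = cl_l_per_h / v_l
        rate = dose_umol / t_inf_h
        if t_h <= t_inf_h:
            return rate / cl_l_per_h * (1 - np.exp(-k * t_h))
        c_end = rate / cl_l_per_h * (1 - np.exp(-k * t_inf_h))
        return c_end * np.exp(-k * (t_h - t_inf_h))

    t = np.arange(0.0, 72.0, 0.01)                       # hours, 0.01-h grid
    c = np.array([concentration(ti) for ti in t])
    tc_above_h = (c > 0.05).sum() * 0.01                 # hours with C > 0.05 umol/L
    print(round(tc_above_h, 1))
    ```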

  16. Prototype Earthquake Early Warning System for Areas of Highest Seismic Risk in the Western U.S.

    NASA Astrophysics Data System (ADS)

    Bock, Y.; Geng, J.; Goldberg, D.; Saunders, J. K.; Haase, J. S.; Squibb, M. B.; Melgar, D.; Crowell, B. W.; Clayton, R. W.; Yu, E.; Walls, C. P.; Mann, D.; Mencin, D.; Mattioli, G. S.

    2015-12-01

    We report on a prototype earthquake early warning system for the Western U.S. based on GNSS (GPS+GLONASS) observations, and where available collocated GNSS and accelerometer data (seismogeodesy). We estimate with latency of 2-3 seconds GNSS displacement waveforms from more than 120 stations, focusing on the southern segment of the San Andreas fault, the Hayward and Rodgers Creek faults and Cascadia. The displacements are estimated using precise point positioning with ambiguity resolution (PPP-AR), which provides for efficient processing of hundreds of "clients" within the region of interest with respect to a reference frame well outside the expected zone of deformation. The GNSS displacements are useful for alleviating magnitude saturation concerns, rapid earthquake magnitude estimation using peak ground displacements, CMT solutions and finite fault slip models. However, GNSS alone is insufficient for strict earthquake early warning (i.e., P wave detection). Therefore, we employ a self-contained seismogeodetic technique, where collocations of GNSS and accelerometer instruments are available, to estimate real-time displacement and velocity waveforms using PPP-AR with accelerometers (PPP-ARA). Using the velocity waveforms we can detect the P wave arrival for earthquakes of interest (>M 5.5), estimate a hypocenter, S wave propagation, and earthquake magnitude using Pd scaling relationships within seconds. Currently we are gearing up to receive observatory-grade accelerometer data from the CISN. We have deployed 25 inexpensive MEMS accelerometers at existing GNSS stations. The SIO Geodetic Modules that control the flow of the GNSS and accelerometer data are being upgraded with in situ PPP-ARA and P wave picking. In situ processing allows us to use the data at the highest sampling rate of the GNSS receiver (10 Hz or higher), in combination with the 100 Hz accelerometer data. Adding the GLONASS data allows for increased precision in the vertical, an important factor in P wave detection, and by reducing outliers, increasing the number of visible satellites and significantly reducing the time required for reinitialization of phase ambiguities. We plan to make our displacement and velocity waveforms available to the USGS ShakeAlert system and others in Earthworm format.

  17. Myocardial T1 mapping at 3.0 tesla using an inversion recovery spoiled gradient echo readout and bloch equation simulation with slice profile correction (BLESSPC) T1 estimation algorithm.

    PubMed

    Shao, Jiaxin; Rapacchi, Stanislas; Nguyen, Kim-Lien; Hu, Peng

    2016-02-01

    To develop an accurate and precise myocardial T1 mapping technique using an inversion recovery spoiled gradient echo readout at 3.0 Tesla (T). The modified Look-Locker inversion-recovery (MOLLI) sequence was modified to use fast low angle shot (FLASH) readout, incorporating a BLESSPC (Bloch Equation Simulation with Slice Profile Correction) T1 estimation algorithm, for accurate myocardial T1 mapping. The FLASH-MOLLI with BLESSPC fitting was compared with different approaches and sequences with regards to T1 estimation accuracy, precision and image artifact based on simulation, phantom studies, and in vivo studies of 10 healthy volunteers and three patients at 3.0 Tesla. The FLASH-MOLLI with BLESSPC fitting yields accurate T1 estimation (average error = -5.4 ± 15.1 ms, percentage error = -0.5% ± 1.2%) for T1 from 236-1852 ms and heart rate from 40-100 bpm in phantom studies. The FLASH-MOLLI sequence prevented off-resonance artifacts in all 10 healthy volunteers at 3.0T. In vivo, there was no significant difference between FLASH-MOLLI-derived myocardial T1 values and "ShMOLLI+IE" derived values (1458.9 ± 20.9 ms versus 1464.1 ± 6.8 ms, P = 0.50); However, the average precision by FLASH-MOLLI was significantly better than that generated by "ShMOLLI+IE" (1.84 ± 0.36% variance versus 3.57 ± 0.94%, P < 0.001). The FLASH-MOLLI with BLESSPC fitting yields accurate and precise T1 estimation, and eliminates banding artifacts associated with bSSFP at 3.0T. © 2015 Wiley Periodicals, Inc.

  18. Malaria prevalence metrics in low- and middle-income countries: an assessment of precision in nationally-representative surveys.

    PubMed

    Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M

    2017-11-21

    One pillar of monitoring progress towards the Sustainable Development Goals is investment in high quality data to strengthen the scientific basis for decision-making. At present, nationally-representative surveys are the main source of data for establishing a scientific evidence base, monitoring, and evaluation of health metrics. However, the precision achieved for various population-level health and development indicators in nationally-representative household surveys remains largely unquantified. Here, a retrospective analysis of the precision of prevalence estimates from these surveys was conducted. Using malaria indicators, data were assembled from nine sub-Saharan African countries with at least two nationally-representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed net (ITN) use in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator with associated uncertainty. Results suggest that the sample sizes required of nationally-representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible interval 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would require additional investment, the study highlights the need for improved approaches to cost-effective sampling.
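
    The sample-size logic behind such cluster surveys can be illustrated with the standard design-effect inflation, DEFF = 1 + (m - 1)·ICC, applied to a simple-random-sampling size for a target precision. This is a sketch only; the prevalence, ICC, and cluster size below are hypothetical, not the values estimated in the study.

```python
from math import ceil
from scipy.stats import norm

def cluster_sample_size(p, rel_precision, icc, cluster_size, alpha=0.05):
    """Sample size to estimate a prevalence p to within +/- rel_precision * p,
    inflated by the design effect DEFF = 1 + (m - 1) * ICC for cluster sampling."""
    z = norm.ppf(1 - alpha / 2)
    d = rel_precision * p                    # absolute margin of error
    n_srs = (z ** 2) * p * (1 - p) / d ** 2  # simple-random-sampling size
    deff = 1 + (cluster_size - 1) * icc
    return ceil(n_srs * deff)

# Hypothetical inputs: 8% parasite prevalence, +/-20% relative precision,
# ICC = 0.10, 25 children sampled per cluster
print(cluster_sample_size(0.08, 0.20, 0.10, 25))
```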

  19. Febrile seizures after 2009 influenza A (H1N1) vaccination and infection: a nationwide registry-based study.

    PubMed

    Bakken, Inger Johanne; Aaberg, Kari Modalsli; Ghaderi, Sara; Gunnes, Nina; Trogstad, Lill; Magnus, Per; Håberg, Siri Eldevik

    2015-11-09

    During the 2009 influenza A (H1N1) pandemic, a monovalent pandemic strain vaccine containing the oil-in-water adjuvant AS03 (Pandemrix®) was offered to the Norwegian population. The coverage among children reached 54%. Our aim was to estimate the risk of febrile seizures in children after exposure to pandemic influenza vaccination or infection. The study population comprised 226,889 children born 2006-2009 and resident in Norway as of October 1, 2009. Febrile seizure episodes were defined by emergency hospital admissions or emergency outpatient hospital care with International Classification of Diseases, Version 10, codes R56.0 or R56.8. The self-controlled case series method was applied to estimate incidence rate ratios (IRRs) in pre-defined risk periods compared to the background period. The total observation window was ±180 days from the exposure day. Among 113,068 vaccinated children, 656 (0.6%) had at least one febrile seizure episode. The IRR of febrile seizures 1-3 days after vaccination was 2.00 (95% confidence interval [CI]: 1.15-3.51). In the period 4-7 days after vaccination, no increased risk was observed. Among the 8172 children diagnosed with pandemic influenza, 84 (1.0%) had at least one febrile seizure episode. The IRR of febrile seizures on the same day as a diagnosis of influenza was 116.70 (95% CI: 62.81-216.90). In the period 1-3 days after a diagnosis of influenza, a tenfold increased risk was observed (IRR 10.12, 95% CI: 3.82-26.82). In this large population-based study with precise timing of exposures and outcomes, we found a twofold increased risk of febrile seizures 1-3 days after pandemic influenza vaccination. However, we found that pandemic influenza infection was associated with a much stronger increase in risk of febrile seizures.
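
    The self-controlled case series method compares event rates within the same children across risk and background windows. The sketch below shows only a crude within-cohort incidence rate ratio with a Wald confidence interval, not the conditional SCCS likelihood used in the study; the counts are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def incidence_rate_ratio(events_risk, persondays_risk, events_ref, persondays_ref, alpha=0.05):
    """Crude incidence rate ratio with a Wald CI on the log scale.
    The published analysis used the self-controlled case series likelihood,
    which conditions on each child's total event count; this is a simpler sketch."""
    irr = (events_risk / persondays_risk) / (events_ref / persondays_ref)
    se_log = np.sqrt(1.0 / events_risk + 1.0 / events_ref)
    z = norm.ppf(1 - alpha / 2)
    lo, hi = np.exp(np.log(irr) + np.array([-z, z]) * se_log)
    return irr, lo, hi

# Hypothetical counts: 20 seizures in 339,204 child-days (days 1-3 after vaccination)
# versus 600 seizures in 20 million child-days of background time
print(incidence_rate_ratio(20, 339_204, 600, 20_000_000))
```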

  20. Small Brain Lesions and Incident Stroke and Mortality: A Cohort Study.

    PubMed

    Windham, B Gwen; Deere, Bradley; Griswold, Michael E; Wang, Wanmei; Bezerra, Daniel C; Shibata, Dean; Butler, Kenneth; Knopman, David; Gottesman, Rebecca F; Heiss, Gerardo; Mosley, Thomas H

    2015-07-07

    Although cerebral lesions 3 mm or larger on imaging are associated with incident stroke, lesions smaller than 3 mm are typically ignored. The objective was to examine stroke risks associated with subclinical brain lesions (<3 mm only, ≥3 mm only, and both sizes) and white matter hyperintensities (WMHs). The study drew on a community cohort from the ARIC (Atherosclerosis Risk in Communities) Study, using two ARIC sites with magnetic resonance imaging (MRI) data from 1993 to 1995, and included 1884 adults aged 50 to 73 years with MRI, no prior stroke, and an average follow-up of 14.5 years. Measurements included lesions on MRI (by size), WMH score (scale of 0 to 9), incident stroke, all-cause mortality, and stroke-related mortality. Hazard ratios (HRs) were estimated with proportional hazards models. Compared with no lesions, stroke risk tripled with lesions smaller than 3 mm only (HR, 3.47 [95% CI, 1.86 to 6.49]), doubled with lesions 3 mm or larger only (HR, 1.94 [CI, 1.22 to 3.07]), was 8-fold higher with lesions of both sizes (HR, 8.59 [CI, 4.69 to 15.73]), and doubled with a WMH score of at least 3 (HR, 2.14 [CI, 1.45 to 3.16]). Risk for stroke-related death tripled with lesions smaller than 3 mm only (HR, 3.05 [CI, 1.04 to 8.94]) and was 7 times higher with lesions of both sizes (HR, 6.97 [CI, 2.03 to 23.93]). Limitations include few strokes (especially hemorrhagic) and few participants with lesions smaller than 3 mm only or lesions of both sizes. Very small cerebrovascular lesions may be associated with increased risks for stroke and death; the presence of lesions both smaller than 3 mm and 3 mm or larger may result in a particularly striking risk increase. Larger studies are needed to confirm these findings and provide more precise estimates. Funding was provided by the National Heart, Lung, and Blood Institute.
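
    The hazard ratios above come from proportional hazards models. As an illustrative sketch only (not the ARIC analysis), this is how such a model can be fit with the third-party lifelines package on a toy data set; the column names and values are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter  # assumes the lifelines survival-analysis package

# Hypothetical toy data: one row per participant, follow-up time in years,
# a stroke indicator, and covariates for lesion presence and a high WMH score.
df = pd.DataFrame({
    "years":   [14.5, 13.2, 9.8, 14.5, 6.1, 12.0, 14.5, 10.3],
    "stroke":  [0, 0, 1, 0, 1, 1, 0, 1],
    "lesion":  [0, 1, 1, 0, 1, 0, 0, 1],
    "wmh_ge3": [0, 0, 1, 1, 1, 0, 0, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="stroke")
# Hazard ratios and 95% CIs, analogous in form to those reported in the abstract
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```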

  1. PRECISION MANAGEMENT OF LOCALIZED PROSTATE CANCER

    PubMed Central

    VanderWeele, David J.; Turkbey, Baris; Sowalsky, Adam G.

    2017-01-01

    Introduction: The vast majority of men who are diagnosed with prostate cancer die of other causes, highlighting the importance of determining which patients are at risk of death from prostate cancer. Precision management of prostate cancer patients includes distinguishing which men have potentially lethal disease and employing strategies for determining which treatment modality appropriately balances the desire to achieve a durable response against the need to prevent unnecessary overtreatment. Areas covered: In this review, we highlight precision approaches to risk assessment and a context for the precision-guided application of definitive therapy. We focus on three dilemmas relevant to the diagnosis of localized prostate cancer: screening, the decision to treat, and postoperative management. Expert commentary: In the last five years, numerous precision tools have emerged with potential benefit to the patient. However, to achieve optimal outcomes, the decision to employ one or more of these tests must be considered in the context of prevailing conventional factors. Moreover, the performance and interpretation of a molecular or imaging precision test remain practitioner-dependent. The next five years will see increasing integration of molecular and imaging biomarkers for improved multi-modal diagnosis and discrimination of disease that is aggressive versus truly indolent. PMID:28133630

  2. Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; L'Esperance, A.

    2017-01-01

    A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those that are associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small when compared with the influence of errors that are associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes as the drag prediction characteristics of balances of similar size and capacities can objectively be compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.
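
    The separation of dynamic-pressure and load contributions can be illustrated with first-order error propagation through CD = F_A / (q·S). The numbers below are illustrative only, not the balance or tunnel characteristics analyzed in the paper.

```python
import numpy as np

def drag_coefficient_sigma(axial_force, q, S, sigma_force, sigma_q):
    """First-order propagation of load and dynamic-pressure errors into CD = F_A / (q * S):
    sigma_CD^2 = (dCD/dF)^2 * sigma_F^2 + (dCD/dq)^2 * sigma_q^2."""
    cd = axial_force / (q * S)
    dcd_df = 1.0 / (q * S)
    dcd_dq = -axial_force / (q ** 2 * S)
    return cd, np.hypot(dcd_df * sigma_force, dcd_dq * sigma_q)

# Illustrative values: 180 N axial force, q = 20 kPa, S = 0.05 m^2,
# 0.9 N load-measurement error, 40 Pa dynamic-pressure error
print(drag_coefficient_sigma(180.0, 20_000.0, 0.05, 0.9, 40.0))
```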

  3. Improving precision of forage yield trials: A case study

    USDA-ARS?s Scientific Manuscript database

    Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to several facto...

  4. Capabilities, Calibration, and Impact of the ISS-RAD Fast Neutron Detector

    NASA Technical Reports Server (NTRS)

    Leitgab, Martin

    2015-01-01

    In the current NASA crew radiation health risk assessment framework, estimates for the neutron contributions to crew radiation exposure largely rely on simulated data with sizeable uncertainties due to the lack of experimental measurements inside the ISS. Integrated in the ISS-RAD instrument, the ISS-RAD Fast Neutron Detector (FND) will deploy to the ISS on one of the next cargo supply missions. Together with the ISS-RAD Charged Particle Detector, the FND will perform, for the first time, routine and precise direct neutron measurements inside the ISS between 0.5 and 80 MeV. The measurements will close the NASA Medical Operations Requirement to monitor neutrons inside the ISS and impact crew radiation health risk assessments by reducing uncertainties on the neutron contribution to crew exposure, enabling more efficient mission planning. The presentation will focus on the FND detection mechanism, calibration results and expectations about the FND's interaction with the mixed radiation field inside the ISS.

  5. What does the nature of the MECP2 mutation tell us about parental origin and recurrence risk in Rett syndrome?

    PubMed

    Zhang, J; Bao, X; Cao, G; Jiang, S; Zhu, X; Lu, H; Jia, L; Pan, H; Fehr, S; Davis, M; Leonard, H; Ravine, D; Wu, X

    2012-12-01

    The MECP2 mutations occurring in the severe neurological disorder Rett syndrome are predominantly de novo, with rare familial cases. The aims of this study were to provide a precise estimate of the parental origin of MECP2 mutations using a large Chinese sample and to assess whether parental origin varied by mutation type. The parental origin was paternal in 84/88 (95.5%; 95% confidence interval 88.77-98.75%) of sporadic Chinese cases. However, in a pooled sample including data from the literature, the spectrum of mutations occurring on maternally and paternally derived chromosomes differed significantly. The excess we found of 'single base pair gains or losses' on maternally derived MECP2 gene alleles suggests that this mutational category is associated with an elevated risk of gonadal mosaicism, which has implications for genetic counseling. © 2011 John Wiley & Sons A/S. Published by Blackwell Publishing Ltd.
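
    The interval quoted for 84/88 paternal-origin cases is consistent with an exact binomial (Clopper-Pearson) confidence interval, which can be computed as follows (a sketch, assuming scipy is available):

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion k/n."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

lo, hi = clopper_pearson(84, 88)
print(f"{84/88:.1%} (95% CI {lo:.1%}-{hi:.1%})")  # roughly 95.5% (88.8%-98.8%)
```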

  6. Influence of Running on Pistol Shot Hit Patterns.

    PubMed

    Kerkhoff, Wim; Bolck, Annabel; Mattijssen, Erwin J A T

    2016-01-01

    In shooting scene reconstructions, risk assessment of the situation can be important for the legal system. Shooting accuracy and precision, and thus risk assessment, might be correlated with the shooter's physical movement and experience. The hit patterns of inexperienced and experienced shooters, while shooting stationary (10 shots) and in running motion (10 shots) with a semi-automatic pistol, were compared visually (with confidence ellipses) and statistically. The results show a significant difference in precision (circumference of the hit patterns) between stationary shots and shots fired in motion for both inexperienced and experienced shooters. The decrease in precision for all shooters was significantly larger in the y-direction than in the x-direction. The precision of the experienced shooters is overall better than that of the inexperienced shooters. No significant change in accuracy (shift in the hit pattern center) between stationary shots and shots fired in motion can be seen for all shooters. © 2015 American Academy of Forensic Sciences.

  7. A Comparative Study of the Applied Methods for Estimating Deflection of the Vertical in Terrestrial Geodetic Measurements

    PubMed Central

    Vittuari, Luca; Tini, Maria Alessandra; Sarti, Pierguido; Serantoni, Eugenio; Borghi, Alessandra; Negusini, Monia; Guillaume, Sébastien

    2016-01-01

    This paper compares three different methods capable of estimating the deflection of the vertical (DoV): the first is based on the joint use of high-precision spirit leveling and Global Navigation Satellite Systems (GNSS), the second uses astro-geodetic measurements, and the third uses gravimetric geoid models. The working data sets refer to the geodetic International Terrestrial Reference Frame (ITRF) co-location sites of Medicina (Northern Italy) and Noto (Sicily), both being excellent test beds for our investigations. The measurements were planned and realized to estimate the DoV with a level of precision comparable to the angular accuracy achievable in high-precision networks measured by modern high-end total stations. The three methods are in excellent agreement, with the astro-geodetic method being operationally superior, as it is faster and more precise than the others. The method that combines leveling and GNSS has slightly larger standard deviations, although well within the 1 arcsec level that was assumed as the threshold. Finally, the geoid-model-based method, whose 2.5 arcsec standard deviations exceed this threshold, is also statistically consistent with the others and should be used to determine the DoV components where local ad hoc measurements are lacking. PMID:27104544

  8. Precise determination of anthropometric dimensions by means of image processing methods for estimating human body segment parameter values.

    PubMed

    Baca, A

    1996-04-01

    A method has been developed for the precise determination of anthropometric dimensions from the video images of four different body configurations. High precision is achieved by incorporating techniques for finding the location of object boundaries with sub-pixel accuracy, the implementation of calibration algorithms, and by taking into account the varying distances of the body segments from the recording camera. The system allows automatic segment boundary identification from the video image, if the boundaries are marked on the subject by black ribbons. In connection with the mathematical finite-mass-element segment model of Hatze, body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers etc.) can be computed by using the anthropometric data determined videometrically as input data. Compared to other, recently published video-based systems for the estimation of the inertial properties of body segments, the present algorithms reduce errors originating from optical distortions, inaccurate edge-detection procedures, and user-specified upper and lower segment boundaries or threshold levels for the edge-detection. The video-based estimation of human body segment parameters is especially useful in situations where ease of application and rapid availability of comparatively precise parameter values are of importance.

  9. Estimating chlorophyll with thermal and broadband multispectral high resolution imagery from an unmanned aerial system using relevance vector machines for precision agriculture

    NASA Astrophysics Data System (ADS)

    Elarab, Manal; Ticlavilca, Andres M.; Torres-Rua, Alfonso F.; Maslova, Inga; McKee, Mac

    2015-12-01

    Precision agriculture requires high-resolution information to enable greater precision in the management of inputs to production. Actionable information about crop and field status must be acquired at high spatial resolution and at a temporal frequency appropriate for timely responses. In this study, high spatial resolution imagery was obtained through the use of a small, unmanned aerial system called AggieAir™. Simultaneously with the AggieAir flights, intensive ground sampling for plant chlorophyll was conducted at precisely determined locations. This study reports the application of a relevance vector machine coupled with cross validation and backward elimination to a dataset composed of reflectance from high-resolution multi-spectral imagery (VIS-NIR), thermal infrared imagery, and vegetative indices, in conjunction with in situ SPAD measurements from which chlorophyll concentrations were derived, to estimate chlorophyll concentration from remotely sensed data at 15-cm resolution. The results indicate that a relevance vector machine with a thin plate spline kernel type and kernel width of 5.4, having LAI, NDVI, thermal and red bands as the selected set of inputs, can be used to spatially estimate chlorophyll concentration with a root-mean-square error of 5.31 μg cm-2, efficiency of 0.76, and 9 relevance vectors.

  10. Influence of the optimization methods on neural state estimation quality of the drive system with elasticity.

    PubMed

    Orlowska-Kowalska, Teresa; Kaminski, Marcin

    2014-01-01

    The paper deals with the implementation of optimized neural networks (NNs) for state variable estimation of the drive system with an elastic joint. The signals estimated by NNs are used in the control structure with a state-space controller and additional feedbacks from the shaft torque and the load speed. High estimation quality is very important for the correct operation of a closed-loop system. The precision of state variables estimation depends on the generalization properties of NNs. A short review of optimization methods of the NN is presented. Two techniques typical for regularization and pruning methods are described and tested in detail: the Bayesian regularization and the Optimal Brain Damage methods. Simulation results show good precision of both optimized neural estimators for a wide range of changes of the load speed and the load torque, not only for nominal but also changed parameters of the drive system. The simulation results are verified in a laboratory setup.

  11. Estimating the total energy demand for supra-maximal exercise using the VO2-power regression from an incremental exercise test.

    PubMed

    Aisbett, B; Le Rossignol, P

    2003-09-01

    The VO2-power regression and estimated total energy demand for a 6-minute supra-maximal exercise test were predicted from a continuous incremental exercise test. Sub-maximal VO2-power co-ordinates were established from the last 40 seconds (s) of 150-second exercise stages. The precision of the estimated total energy demand was determined using its 95% confidence interval (95% CI). The linearity of the individual VO2-power regression equations was determined using Pearson's correlation coefficient. The mean 95% CI of the estimated total energy demand was 5.9 +/- 2.5 mL O2 Eq x kg(-1) x min(-1), and the mean correlation coefficient was 0.9942 +/- 0.0042. The current study contends that the sub-maximal VO2-power co-ordinates from a continuous incremental exercise test can be used to estimate supra-maximal energy demand without compromising the precision of the accumulated oxygen deficit (AOD) method.
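
    The extrapolation step can be sketched as an ordinary linear regression of sub-maximal VO2 on power, evaluated at the supra-maximal workload; the stage data and workload below are made up for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical sub-maximal stage data: power (W) and steady-state VO2 (mL O2/kg/min)
power = np.array([100, 130, 160, 190, 220, 250], dtype=float)
vo2   = np.array([18.5, 23.1, 27.8, 32.0, 36.9, 41.2], dtype=float)

fit = stats.linregress(power, vo2)           # VO2 = slope * power + intercept
supra_power = 340.0                          # hypothetical supra-maximal workload (W)
demand = fit.slope * supra_power + fit.intercept

print(f"r = {fit.rvalue:.4f}, estimated demand = {demand:.1f} mL O2/kg/min")
# The accumulated O2 deficit would then be demand * duration minus the measured O2 uptake.
```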

  12. The effect of concurrent hand movement on estimated time to contact in a prediction motion task.

    PubMed

    Zheng, Ran; Maraj, Brian K V

    2018-04-27

    In many activities, we need to predict the arrival of an occluded object. This action is called prediction motion or motion extrapolation. Previous researchers have found that both eye tracking and an internal clocking model are involved in the prediction motion task. Additionally, it has been reported that concurrent hand movement facilitates eye tracking of an externally generated target in a tracking task, even if the target is occluded. The present study examined the effect of concurrent hand movement on the estimated time to contact (TTC) in a prediction motion task. We found that accurate and inaccurate concurrent hand movements had opposite effects on eye tracking accuracy and estimated TTC in the prediction motion task: accurate concurrent hand tracking enhanced eye tracking accuracy and tended to increase the precision of estimated TTC, whereas inaccurate concurrent hand tracking decreased eye tracking accuracy and disrupted estimated TTC. However, eye tracking accuracy does not determine the precision of estimated TTC.

  13. A method for estimating current attendance on sets of campgrounds...a pilot study

    Treesearch

    Richard L. Bury; Ruth Margolies

    1964-01-01

    Statistical models were devised for estimating both daily and seasonal attendance (and corresponding precision of estimates) through correlation-regression and ratio analyses. Total daily attendance for a test set of 23 campgrounds could be estimated from attendance measured in only one of them. The chances were that estimates would be within 10 percent of true...

  14. Achieving Optimal Quantum Acceleration of Frequency Estimation Using Adaptive Coherent Control.

    PubMed

    Naghiloo, M; Jordan, A N; Murch, K W

    2017-11-03

    Precision measurements of frequency are critical to accurate time keeping and are fundamentally limited by quantum measurement uncertainties. While for time-independent quantum Hamiltonians the uncertainty of any parameter scales at best as 1/T, where T is the duration of the experiment, recent theoretical works have predicted that explicitly time-dependent Hamiltonians can yield a 1/T² scaling of the uncertainty for an oscillation frequency. This quantum acceleration in precision requires coherent control, which is generally adaptive. We experimentally realize this quantum improvement in frequency sensitivity with superconducting circuits, using a single transmon qubit. With optimal control pulses, the theoretically ideal frequency precision scaling is reached for times shorter than the decoherence time. This result demonstrates a fundamental quantum advantage for frequency estimation.

  15. Efficiency and optimal allocation in the staggered entry design

    USGS Publications Warehouse

    Link, W.A.

    1993-01-01

    The staggered entry design for survival analysis specifies that r left-truncated samples are to be used in estimation of a population survival function. The ith sample is taken at time Bi, from the subpopulation of individuals having survival time exceeding Bi. This paper investigates the performance of the staggered entry design relative to the usual design in which all samples have a common time origin. The staggered entry design is shown to be an attractive alternative, even when not necessitated by logistical constraints. The staggered entry design allows for increased precision in estimation of the right tail of the survival function, especially when some of the data may be censored. A trade-off between the range of values for which the increased precision occurs and the magnitude of the increased precision is demonstrated.

  16. Sliding mode control of magnetic suspensions for precision pointing and tracking applications

    NASA Technical Reports Server (NTRS)

    Misovec, Kathleen M.; Flynn, Frederick J.; Johnson, Bruce G.; Hedrick, J. Karl

    1991-01-01

    A recently developed nonlinear control method, sliding mode control, is examined as a means of advancing the achievable performance of space-based precision pointing and tracking systems that use nonlinear magnetic actuators. Analytic results indicate that sliding mode control improves performance compared to linear control approaches. In order to realize these performance improvements, precise knowledge of the plant is required. Additionally, the interaction of an estimating scheme and the sliding mode controller has not been fully examined in the literature. Estimation schemes were designed for use with this sliding mode controller that do not seriously degrade system performance. The authors designed and built a laboratory testbed to determine the feasibility of utilizing sliding mode control in these types of applications. Using this testbed, experimental verification of the authors' analyses is ongoing.

  17. Technical Evaluation of the NASA Model for Cancer Risk to Astronauts Due to Space Radiation

    NASA Technical Reports Server (NTRS)

    2012-01-01

    At the request of NASA, the National Research Council's (NRC's) Committee for Evaluation of Space Radiation Cancer Risk Model1 reviewed a number of changes that NASA proposes to make to its model for estimating the risk of radiation-induced cancer in astronauts. The NASA model in current use was last updated in 2005, and the proposed model would incorporate recent research directed at improving the quantification and understanding of the health risks posed by the space radiation environment. NASA's proposed model is defined by the 2011 NASA report Space Radiation Cancer Risk Projections and Uncertainties--2010 . The committee's evaluation is based primarily on this source, which is referred to hereafter as the 2011 NASA report, with mention of specific sections or tables. The overall process for estimating cancer risks due to low linear energy transfer (LET) radiation exposure has been fully described in reports by a number of organizations. The approaches described in the reports from all of these expert groups are quite similar. NASA's proposed space radiation cancer risk assessment model calculates, as its main output, age- and gender-specific risk of exposure-induced death (REID) for use in the estimation of mission and astronaut-specific cancer risk. The model also calculates the associated uncertainties in REID. The general approach for estimating risk and uncertainty in the proposed model is broadly similar to that used for the current (2005) NASA model and is based on recommendations by the National Council on Radiation Protection and Measurements. However, NASA's proposed model has significant changes with respect to the following: the integration of new findings and methods into its components by taking into account newer epidemiological data and analyses, new radiobiological data indicating that quality factors differ for leukemia and solid cancers, an improved method for specifying quality factors in terms of radiation track structure concepts as opposed to the previous approach based on linear energy transfer, the development of a new solar particle event (SPE) model, and the updates to galactic cosmic ray (GCR) and shielding transport models. The newer epidemiological information includes updates to the cancer incidence rates from the life span study (LSS) of the Japanese atomic bomb survivors, transferred to the U.S. population and converted to cancer mortality rates from U.S. population statistics. In addition, the proposed model provides an alternative analysis applicable to lifetime never-smokers (NSs). Details of the uncertainty analysis in the model have also been updated and revised. NASA's proposed model and associated uncertainties are complex in their formulation and as such require a very clear and precise set of descriptions. The committee found the 2011 NASA report challenging to review largely because of the lack of clarity in the model descriptions and derivation of the various parameters used. The committee requested some clarifications from NASA throughout its review and was able to resolve many, but not all, of the ambiguities in the written description.

  18. Incidence of breast implant rupture in a 12-year retrospective cohort: Evidence of quality discrepancy depending on the range.

    PubMed

    Seigle-Murandi, Frédéric; Lefebvre, François; Bruant-Rodier, Catherine; Bodin, Frédéric

    2017-01-01

    The majority of studies assessing the rupture rate of breast implants have been performed by the breast implant manufacturing industry, with questionable independence. After repeated removals of ruptured implants of the same model, our team decided to assess the rupture rate, and the estimated risk thereof, for most of the silicone gel-filled implants we have used since they regained market approval in France in 2001. Our study is a retrospective cohort of 809 patients operated on in our university hospital from 2001 to 2013 for cosmetic or reconstructive purposes. We could track 1561 implants, 90% of them from the same manufacturer, Allergan (Irvine, CA, USA). For each of these, we recorded the exact reference, date of implantation, surgical approach, status, date of the last follow-up visit and, where applicable, the date and cause of removal. Of 225 explanted devices, only 27 were ruptured, all from the Allergan brand. Risks of removal for rupture were estimated at 0.5% at 1000 days, 6% at 2000 days, and 14% at 3000 days. Risks differed significantly between models from this same manufacturer. One range of macro-textured round implants showed a risk of removal for rupture of 33% at 3000 days, compared to 6% for the anatomically shaped range. These results suggest a qualitative discrepancy among the different ranges of breast implants of a single manufacturer within the same timeframe of implantation. To determine the in vivo lifespan of the implants that we use more precisely and sooner, we suggest that each removed implant be analyzed for wear and tear, independently of the industry. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  19. Tumor necrosis factor-alpha gene polymorphisms and susceptibility to ischemic heart disease

    PubMed Central

    Zhang, Peng; Wu, Xiaomei; Li, Guangxiao; He, Qiao; Dai, Huixu; Ai, Cong; Shi, Jingpu

    2017-01-01

    Background: A number of studies have reported an association between tumor necrosis factor-alpha (TNF-α) gene polymorphisms and ischemic heart disease (IHD) risk. However, the results remain controversial. Therefore, we performed a systematic review with multiple meta-analyses to provide more precise estimates of the relationship. Methods: We systematically searched electronic databases (PubMed, the Web of Science, EMBASE, Medline, Chinese National Knowledge Infrastructure, WanFang and ChongQing VIP Database) for relevant studies published up to February 2017. The odds ratios (ORs) and 95% confidence intervals (CIs) were estimated for assessing the association. The meta-analysis was performed using STATA 12.0 software. Results: In total, 45 articles with 17,375 cases and 15,375 controls were included. Pooled ORs revealed a significant association between the TNF-α −308G/A gene polymorphism and IHD (A vs. G: OR = 1.22, 95% CI = 1.10–1.35; (AA + GA) vs. GG: OR = 1.18, 95% CI = 1.03–1.36; AA vs. (GA + GG): OR = 1.37, 95% CI = 1.08–1.75), indicating that the TNF-α −308A allele might be an important risk factor for IHD. No association between other TNF-α gene polymorphisms and susceptibility to IHD was observed. No publication bias was found. Sensitivity analyses indicated that our results were stable. Conclusion: The present study indicated a possible association between the TNF-α −308G/A gene polymorphism and IHD risk. However, evidence was insufficient to confirm a role of the TNF-α −238G/A, −857C/T, −863C/A, −1031T/C and other TNF-α gene polymorphisms in the risk of IHD. PMID:28383437
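
    For illustration of the pooling step, the sketch below performs inverse-variance (fixed-effect) pooling of log odds ratios reconstructed from study-level 95% CIs; the study values are hypothetical, and the published analysis (performed in STATA) may have used random-effects models instead.

```python
import numpy as np
from scipy.stats import norm

def pool_fixed_effect(ors, ci_lows, ci_highs, alpha=0.05):
    """Inverse-variance (fixed-effect) pooling of study odds ratios given their 95% CIs."""
    log_or = np.log(ors)
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * norm.ppf(0.975))
    w = 1.0 / se ** 2
    pooled = np.sum(w * log_or) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    z = norm.ppf(1 - alpha / 2)
    return tuple(np.exp([pooled, pooled - z * pooled_se, pooled + z * pooled_se]))

# Three hypothetical studies of the -308 A allele versus G
print(pool_fixed_effect(np.array([1.35, 1.10, 1.28]),
                        np.array([1.05, 0.90, 1.02]),
                        np.array([1.74, 1.34, 1.61])))
```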

  20. Estimating HIV incidence and detection rates from surveillance data.

    PubMed

    Posner, Stephanie J; Myers, Leann; Hassig, Susan E; Rice, Janet C; Kissinger, Patricia; Farley, Thomas A

    2004-03-01

    Markov models that incorporate HIV test information can increase precision in estimates of new infections and permit the estimation of detection rates. The purpose of this study was to assess the functioning of a Markov model for estimating new HIV infections and HIV detection rates in Louisiana using surveillance data. We expanded a discrete-time Markov model by accounting for the change in AIDS case definition made by the Centers for Disease Control and Prevention in 1993. The model was applied to quarterly HIV/AIDS surveillance data reported in Louisiana from 1981 to 1996 for various exposure and demographic subgroups. When modeling subgroups defined by exposure categories, we adjusted for the high proportion of missing exposure information among recent cases. We ascertained sensitivity to changes in various model assumptions. The model was able to produce results consistent with other sources of information in the state. Estimates of new infections indicated a transition of the HIV epidemic in Louisiana from (1) predominantly white men and men who have sex with men to (2) women, blacks, and high-risk heterosexuals. The model estimated that 61% of all HIV/AIDS cases were detected and reported by 1996, yet half of all HIV/non-AIDS cases were yet to be detected. Sensitivity analyses demonstrated that the model was robust to several uncertainties. In general, the methodology provided a useful and flexible alternative for estimating infection and detection trends using data from a U.S. surveillance program. Its use for estimating current infection will need further exploration to address assumptions related to newer treatments.

  1. Precise orbit determination of the Fengyun-3C satellite using onboard GPS and BDS observations

    NASA Astrophysics Data System (ADS)

    Li, Min; Li, Wenwen; Shi, Chuang; Jiang, Kecai; Guo, Xiang; Dai, Xiaolei; Meng, Xiangguang; Yang, Zhongdong; Yang, Guanglin; Liao, Mi

    2017-11-01

    The GNSS Occultation Sounder (GNOS) instrument onboard the Chinese meteorological satellite Fengyun-3C (FY-3C) tracks both GPS and BDS signals for orbit determination. One month of onboard dual-frequency GPS and BDS data from the FY-3C satellite, collected during March 2015, is analyzed in this study. The onboard BDS and GPS measurement quality is evaluated in terms of data quantity as well as code multipath error. Severe multipath errors in BDS code ranges are observed, especially at high elevations, for BDS medium Earth orbit (MEO) satellites. The code multipath errors are estimated as a piecewise linear model on a 2° × 2° grid and applied in precise orbit determination (POD) calculations. POD of FY-3C is first performed with GPS data, which shows orbit consistency of approximately 2.7 cm in 3D RMS (root mean square) by overlap comparisons; the estimated orbits are then used as reference orbits for evaluating the orbit precision of combined GPS and BDS POD as well as BDS-based POD. It is found that the inclusion of BDS geosynchronous orbit satellites (GEOs) can seriously degrade POD precision. The precisions of orbit estimates from combined POD and BDS-based POD are 3.4 and 30.1 cm in 3D RMS, respectively, when GEOs are involved. However, if BDS GEOs are excluded, the combined POD reaches precision similar to GPS-only POD, with orbit differences of about 0.8 cm, while the orbit precision of BDS-based POD improves to 8.4 cm. These results indicate that POD with onboard BDS data alone can reach precision better than 10 cm with only five BDS inclined geosynchronous orbit (IGSO) satellites and three MEOs. As the GNOS receiver can track at most six BDS satellites for orbit positioning, it can be expected that the performance of POD with onboard BDS data will improve further if more observations are generated without this restriction.

  2. Dissociations of the number and precision of visual short-term memory representations in change detection.

    PubMed

    Xie, Weizhen; Zhang, Weiwei

    2017-11-01

    The present study dissociated the number (i.e., quantity) and precision (i.e., quality) of visual short-term memory (STM) representations in change detection using receiver operating characteristic (ROC) analysis and experimental manipulations. Across three experiments, participants performed both recognition and recall tests of visual STM using the change-detection task and the continuous color-wheel recall task, respectively. Experiment 1 demonstrated that the estimates of the number and precision of visual STM representations based on the ROC model of change-detection performance were robustly correlated with the corresponding estimates based on the mixture model of continuous-recall performance. Experiments 2 and 3 showed that the experimental manipulation of mnemonic precision using white-noise masking and the experimental manipulation of the number of encoded STM representations using consolidation masking produced selective effects on the corresponding measures of mnemonic precision and the number of encoded STM representations, respectively, in both change-detection and continuous-recall tasks. Altogether, using the individual-differences (Experiment 1) and experimental-dissociation (Experiments 2 and 3) approaches, the present study demonstrated the some-or-none nature of visual STM representations across recall and recognition.

  3. Self-cutting and risk of subsequent suicide.

    PubMed

    Carroll, R; Thomas, K H; Bramley, K; Williams, S; Griffin, L; Potokar, J; Gunnell, D

    2016-03-01

    Some studies suggest that people who self-cut have a higher risk of suicide than those who self-poison. Self-cutting ranges from superficial wrist cutting to severe self-injury involving areas such as the chest, abdomen and neck, which can be life threatening. This study aimed to investigate whether the site of self-cutting was associated with risk of subsequent suicide. We followed up 3928 people who presented to hospital following self-harm between September 2010 and December 2013 in a prospective cohort study based on the Bristol Self-harm Surveillance Register. Demographic information from these presentations was linked with coroner's data to identify subsequent suicides. People who presented with self-cutting to areas other than the arm/wrist were at increased risk of suicide compared to those who self-poisoned (HR 4.31, 95% CI 1.27-14.63, p=0.029) and this increased risk remained after controlling for age, sex, history of previous self-harm and psychiatric diagnosis (HR 4.46, 95% CI 1.50-13.25, p<0.001). We observed no such increased risk in people presenting with cutting to the arm/wrist. These data represent the experience of one city in the UK and may not be generalisable outside of this context. Furthermore, as suicide is a rare outcome, the precision of our estimates is limited. Site of self-injury may be an important indicator of subsequent suicide risk. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Measuring atmospheric density using GPS-LEO tracking data

    NASA Astrophysics Data System (ADS)

    Kuang, D.; Desai, S.; Sibthorpe, A.; Pi, X.

    2014-01-01

    We present a method to estimate the total neutral atmospheric density from precise orbit determination of Low Earth Orbit (LEO) satellites. We derive the total atmospheric density by determining the drag force acting on the LEOs through centimeter-level reduced-dynamic precise orbit determination (POD) using onboard Global Positioning System (GPS) tracking data. The precision of the estimated drag accelerations is assessed using various metrics, including differences between estimated along-track accelerations from consecutive 30-h POD solutions that overlap by 6 h, comparison of the resulting accelerations with accelerometer measurements, and comparison against an existing atmospheric density model, DTM-2000. We apply the method to GPS tracking data from the CHAMP, GRACE, SAC-C, Jason-2, TerraSAR-X and COSMIC satellites, spanning 12 years (2001-2012) and covering orbital heights from 400 km to 1300 km. Errors in the estimates, including those introduced by deficiencies in other modeled forces (such as solar radiation pressure and Earth radiation pressure), are evaluated and the signal and noise levels for each satellite are analyzed. The estimated density data from CHAMP, GRACE, SAC-C and TerraSAR-X are identified as having high signal and low noise levels. These data all have high correlations with a nominal atmospheric density model and show common features in relative residuals with respect to the nominal model in related parameter space. On the contrary, the estimated density data from COSMIC and Jason-2 show errors larger than the actual signal at corresponding altitudes, and thus have little practical value for this study. The results demonstrate that this method is applicable to data from a variety of missions and can provide useful total neutral density measurements for atmospheric study up to altitudes as high as 715 km, with precision and resolution between those derived from traditional special orbital perturbation analysis and those obtained from onboard accelerometers.
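
    The core conversion in such work is from an estimated drag acceleration to neutral density via the drag equation a_drag = 0.5·ρ·Cd·(A/m)·v_rel². A minimal sketch with illustrative spacecraft values (real processing also models attitude, winds, and radiation-pressure forces):

```python
def density_from_drag(a_drag, v_rel, cd, area, mass):
    """Invert the drag equation a = 0.5 * rho * Cd * (A/m) * v^2 for neutral density rho.
    All inputs here are illustrative placeholders, not mission-specific values."""
    return 2.0 * mass * a_drag / (cd * area * v_rel ** 2)

# e.g. a 500 kg satellite with 1 m^2 cross-section, Cd ~ 2.3, 7.6 km/s relative velocity,
# and an estimated along-track drag acceleration of 2e-7 m/s^2
print(density_from_drag(2e-7, 7600.0, 2.3, 1.0, 500.0))  # kg/m^3, on the order of 1e-12
```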

  5. Improving inference for aerial surveys of bears: The importance of assumptions and the cost of unnecessary complexity.

    PubMed

    Schmidt, Joshua H; Wilson, Tammy L; Thompson, William L; Reynolds, Joel H

    2017-07-01

    Obtaining useful estimates of wildlife abundance or density requires thoughtful attention to potential sources of bias and precision, and it is widely understood that addressing incomplete detection is critical to appropriate inference. When the underlying assumptions of sampling approaches are violated, both increased bias and reduced precision of the population estimator may result. Bear ( Ursus spp.) populations can be difficult to sample and are often monitored using mark-recapture distance sampling (MRDS) methods, although obtaining adequate sample sizes can be cost prohibitive. With the goal of improving inference, we examined the underlying methodological assumptions and estimator efficiency of three datasets collected under an MRDS protocol designed specifically for bears. We analyzed these data using MRDS, conventional distance sampling (CDS), and open-distance sampling approaches to evaluate the apparent bias-precision tradeoff relative to the assumptions inherent under each approach. We also evaluated the incorporation of informative priors on detection parameters within a Bayesian context. We found that the CDS estimator had low apparent bias and was more efficient than the more complex MRDS estimator. When combined with informative priors on the detection process, precision was increased by >50% compared to the MRDS approach with little apparent bias. In addition, open-distance sampling models revealed a serious violation of the assumption that all bears were available to be sampled. Inference is directly related to the underlying assumptions of the survey design and the analytical tools employed. We show that for aerial surveys of bears, avoidance of unnecessary model complexity, use of prior information, and the application of open population models can be used to greatly improve estimator performance and simplify field protocols. Although we focused on distance sampling-based aerial surveys for bears, the general concepts we addressed apply to a variety of wildlife survey contexts.

  6. Three-dimensional reduction and finite element analysis improves the treatment of pelvic malunion reconstructive surgery

    PubMed Central

    Kurz, Sascha; Pieroh, Philipp; Lenk, Maximilian; Josten, Christoph; Böhme, Jörg

    2017-01-01

    Rationale: Pelvic malunion is a rare complication and is technically challenging to correct owing to the complex three-dimensional (3D) geometry of the pelvic girdle. Hence, precise preoperative planning is required to ensure appropriate correction. Reconstructive surgery is generally a 2- or 3-stage procedure, with transiliac osteotomy serving as an alternative to address limb length discrepancy. Patient concerns: A 38-year-old female patient with a Mears type IV pelvic malunion and previous failed reconstructive surgery was admitted to our department due to progressive immobilization, increasing pain (especially at the posterior pelvic arch), and a leg length discrepancy. The leg length discrepancy was approximately 4 cm, and rotation of the right hip joint was painful. Diagnosis: Radiography and computed tomography (CT) revealed a hypertrophic malunion at the site of the previous posterior osteotomy (Mears type IV) involving the anterior and middle column, according to the 3-column concept, as well as malunion of the left anterior arch (Mears type IV). Interventions: The surgery was planned virtually via 3D reconstruction, using the patient's CT, and subsequently performed via transiliac osteotomy and symphysiotomy. The finite element method (FEM) was used to plan the osteotomy and osteosynthesis, including an estimate of the risk of implant failure. Outcomes: There was no neurological injury or infection, and the remaining leg length discrepancy was ≤ 2 cm. The patient recovered independent, pain-free mobility. Virtual 3D planning provided a more precise measurement of correction parameters than radiograph-based measurements. FEM analysis identified the highest risk of implant failure at the symphyseal plate osteosynthesis and the parasymphyseal screws. No implant failure was observed. Lessons: Transiliac osteotomy, with additional osteotomy or symphysiotomy, was a suitable surgical procedure for the correction of pelvic malunion and provided adequate correction of leg length discrepancy. Virtual 3D planning enabled precise determination of correction parameters, with FEM analysis providing an appropriate method to predict areas of implant failure. PMID:29049196

  7. Population Estimates for Chum Salmon Spawning in the Mainstem Columbia River, 2002 Technical Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rawding, Dan; Hillson, Todd D.

    2003-11-15

    Accurate and precise population estimates of chum salmon (Oncorhynchus keta) spawning in the mainstem Columbia River are needed to provide a basis for informed water allocation decisions, to determine the status of chum salmon listed under the Endangered Species Act, and to evaluate the contribution of the Duncan Creek re-introduction program to mainstem spawners. Currently, mark-recapture experiments using the Jolly-Seber model provide the only framework for this type of estimation. In 2002, a study was initiated to estimate mainstem Columbia River chum salmon populations using seining data collected while capturing broodstock as part of the Duncan Creek re-introduction. The five assumptions of the Jolly-Seber model were examined using hypothesis testing within a statistical framework, including goodness of fit tests and secondary experiments. We used POPAN 6, an integrated computer system for the analysis of capture-recapture data, to obtain maximum likelihood estimates of standard model parameters, derived estimates, and their precision. A more parsimonious final model was selected using Akaike Information Criteria. Final chum salmon escapement estimates and (standard error) from seining data for the Ives Island, Multnomah, and I-205 sites are 3,179 (150), 1,269 (216), and 3,468 (180), respectively. The Ives Island estimate is likely lower than the total escapement because only the largest two of four spawning sites were sampled. The accuracy and precision of these estimates would improve if seining was conducted twice per week instead of weekly, and by incorporating carcass recoveries into the analysis. Population estimates derived from seining mark-recapture data were compared to those obtained using the current mainstem Columbia River salmon escapement methodologies. The Jolly-Seber population estimate from carcass tagging in the Ives Island area was 4,232 adults with a standard error of 79. This population estimate appears reasonable and precise but batch marks and lack of secondary studies made it difficult to test Jolly-Seber assumptions, necessary for unbiased estimates. We recommend that individual tags be applied to carcasses to provide a statistical basis for goodness of fit tests and ultimately model selection. Secondary or double marks should be applied to assess tag loss and male and female chum salmon carcasses should be enumerated separately. Carcass tagging population estimates at the two other sites were biased low due to limited sampling. The Area-Under-the-Curve escapement estimates at all three sites were 36% to 76% of Jolly-Seber estimates. Area-Under-the-Curve estimates are likely biased low because previous assumptions that observer efficiency is 100% and residence time is 10 days proved incorrect. If managers continue to rely on Area-Under-the-Curve to estimate mainstem Columbia River spawners, a methodology is provided to develop annual estimates of observer efficiency and residence time, and to incorporate uncertainty into the Area-Under-the-Curve escapement estimate.
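
    Full Jolly-Seber/POPAN fitting is normally done in dedicated software, as in the study. As a much simpler illustration of the underlying mark-recapture logic only, the sketch below computes the two-sample Lincoln-Petersen abundance estimate with Chapman's correction and its standard error, using hypothetical counts; this is not the multi-occasion model behind the escapement estimates above.

```python
import math

def chapman_estimate(marked_first, caught_second, recaptured):
    """Two-sample Lincoln-Petersen abundance estimate with Chapman's correction.
    A simplification for illustration; the study itself fit multi-occasion Jolly-Seber models."""
    n1, n2, m2 = marked_first, caught_second, recaptured
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / ((m2 + 1) ** 2 * (m2 + 2))
    return n_hat, math.sqrt(var)

# Hypothetical counts: 420 fish tagged on occasion 1, 510 handled on occasion 2, 68 recaptures
print(chapman_estimate(420, 510, 68))
```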

  8. Guidelines to indirectly measure and enhance detection efficiency of stationary PIT tag interrogation systems in streams

    USGS Publications Warehouse

    Connolly, Patrick J.; Wolf, Keith; O'Neal, Jennifer S.

    2010-01-01

    With increasing use of passive integrated transponder (PIT) tags and reliance on stationary PIT tag interrogation systems to monitor fish populations, guidelines are offered to inform users how best to use limited funding and human resources to create functional systems that maximize a desired level of detection and precision. The estimators of detection efficiency and their variability as described by Connolly et al. (2008) are explored over a span of likely performance metrics. These estimators were developed to estimate detection efficiency without relying on a known number of fish passing the system. I present graphical displays of the results derived from these estimators to show the potential efficiency and precision to be gained by adding an array or by increasing the number of PIT-tagged fish expected to move past an interrogation system.
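
    The essential idea behind such detection-efficiency estimators can be illustrated for a two-array system: the efficiency of the upstream array is estimated from fish known to have passed it because they were detected downstream. This simplified sketch (with hypothetical counts) is not the full set of estimators described by Connolly et al. (2008).

```python
import math

def array_efficiency(detected_both, detected_downstream_only):
    """Efficiency of the upstream array, estimated from fish known to have passed it
    (i.e., fish detected on the downstream array), with a binomial standard error."""
    n = detected_both + detected_downstream_only  # fish known to have passed the upstream array
    p = detected_both / n
    se = math.sqrt(p * (1 - p) / n)
    return p, se

# Hypothetical counts: 132 tags seen on both arrays, 18 seen only on the downstream array
print(array_efficiency(132, 18))  # roughly 0.88 efficiency
```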

  9. Guidelines for calculating and enhancing detection efficiency of PIT tag interrogation systems

    USGS Publications Warehouse

    Connolly, Patrick J.

    2010-01-01

    With increasing use of passive integrated transponder (PIT) tags and reliance on stationary PIT tag interrogation systems to monitor fish populations, guidelines are offered to inform users how best to use limited funding and human resources to create functional systems that maximize a desired level of detection and precision. The estimators of detection efficiency and their variability as described by Connolly et al. (2008) are explored over a span of likely performance metrics. These estimators were developed to estimate detection efficiency without relying on a known number of fish passing the system. I present graphical displays of the results derived from these estimators to show the potential efficiency and precision to be gained by adding an array or by increasing the number of PIT-tagged fish expected to move past an interrogation system.

  10. Using simulation to evaluate wildlife survey designs: polar bears and seals in the Chukchi Sea.

    PubMed

    Conn, Paul B; Moreland, Erin E; Regehr, Eric V; Richmond, Erin L; Cameron, Michael F; Boveng, Peter L

    2016-01-01

    Logistically demanding and expensive wildlife surveys should ideally yield defensible estimates. Here, we show how simulation can be used to evaluate alternative survey designs for estimating wildlife abundance. Specifically, we evaluate the potential of instrument-based aerial surveys (combining infrared imagery with high-resolution digital photography to detect and identify species) for estimating abundance of polar bears and seals in the Chukchi Sea. We investigate the consequences of different levels of survey effort, flight track allocation and model configuration on bias and precision of abundance estimators. For bearded seals (0.07 animals km(-2)) and ringed seals (1.29 animals km(-2)), we find that eight flights traversing ≈7840 km are sufficient to achieve target precision levels (coefficient of variation (CV)<20%) for a 2.94×10(5) km(2) study area. For polar bears (provisionally, 0.003 animals km(-2)), 12 flights traversing ≈11 760 km resulted in CVs ranging from 28 to 35%. Estimators were relatively unbiased with similar precision over different flight track allocation strategies and estimation models, although some combinations had superior performance. These findings suggest that instrument-based aerial surveys may provide a viable means for monitoring seal and polar bear populations on the surface of the sea ice over large Arctic regions. More broadly, our simulation-based approach to evaluating survey designs can serve as a template for biologists designing their own surveys.
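
    A stripped-down version of this simulation logic is sketched below: simulate strip-transect counts at a given density and flight effort, form a density estimator, and summarize its coefficient of variation. The strip width and detection probability are placeholders, far simpler than the instrument-based detection model evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_cv(density_per_km2, n_flights, km_per_flight=980.0, strip_km=0.4,
                detection_prob=0.9, n_sims=2000):
    """Monte Carlo CV of a simple strip-transect abundance estimator.
    Placeholder strip width and detection probability; ignores spatial clustering
    and detection-parameter uncertainty, so it understates the CVs reported."""
    area_surveyed = n_flights * km_per_flight * strip_km
    expected = density_per_km2 * area_surveyed * detection_prob
    counts = rng.poisson(expected, size=n_sims)
    density_hat = counts / (area_surveyed * detection_prob)
    return density_hat.std() / density_hat.mean()

for flights in (4, 8, 12):
    print(flights, round(simulate_cv(0.07, flights), 3))  # bearded-seal-like density
```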

  11. Stroke Onset Time Determination Using MRI Relaxation Times without Non-Ischaemic Reference in A Rat Stroke Model

    PubMed Central

    Knight, Michael J.; McGarry, Bryony M.; Jokivarsi, Kimmo T.; Gröhn, Olli H.J.; Kauppinen, Risto A.

    2017-01-01

    Background: Objective timing of stroke onset in emergency departments is expected to improve patient stratification. Magnetic resonance imaging (MRI) relaxation times, T2 and T1ρ, in ischaemic tissue delineated by abnormal diffusion were used as proxies for stroke onset time in a rat model. Methods: Both ‘non-ischaemic reference’-dependent and -independent estimators were generated. Apparent diffusion coefficient (ADC), T2 and T1ρ were sequentially quantified for up to 6 hours after stroke onset in rats (n = 8) at 4.7 T. The ischaemic lesion was identified as a contiguous collection of voxels with low ADC. T2 and T1ρ in the ischaemic lesion and in the contralateral non-ischaemic brain tissue were determined. Differences in mean MRI relaxation times between ischaemic and non-ischaemic volumes were used to create the reference-dependent estimator. For the reference-independent procedure, only the parameters of log-logistic fits to the T2 and T1ρ distributions within the ADC-delineated lesions were used for onset time estimation. Results: The reference-independent estimators from T2 and T1ρ data provided stroke onset time with precisions of ±32 and ±27 minutes, respectively. The reference-dependent estimators yielded respective precisions of ±47 and ±54 minutes. Conclusions: A ‘non-ischaemic anatomical reference’-independent estimator for stroke onset time from relaxometric MRI data is shown to yield greater timing precision than previously obtained through reference-dependent procedures. PMID:28685128
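
    The reference-independent estimator relies on parameters of log-logistic fits to the within-lesion relaxation-time distributions. The sketch below fits a log-logistic (Fisk) distribution to synthetic voxel values with scipy; mapping the fitted parameters to an onset time would require the calibration developed in the paper, which is not reproduced here.

```python
import numpy as np
from scipy.stats import fisk  # log-logistic (Fisk) distribution

rng = np.random.default_rng(0)

# Hypothetical T2 values (ms) from voxels inside the ADC-defined lesion
lesion_t2 = rng.normal(loc=85.0, scale=6.0, size=500).clip(min=1.0)

# Fit a log-logistic distribution with the location fixed at 0, so scale ~ median T2
shape, loc, scale = fisk.fit(lesion_t2, floc=0)
print(f"shape = {shape:.1f}, median T2 ~ {scale:.1f} ms")
# The fitted shape/scale parameters would be the inputs to an onset-time calibration.
```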

  12. Using simulation to evaluate wildlife survey designs: polar bears and seals in the Chukchi Sea

    PubMed Central

    Conn, Paul B.; Moreland, Erin E.; Regehr, Eric V.; Richmond, Erin L.; Cameron, Michael F.; Boveng, Peter L.

    2016-01-01

    Logistically demanding and expensive wildlife surveys should ideally yield defensible estimates. Here, we show how simulation can be used to evaluate alternative survey designs for estimating wildlife abundance. Specifically, we evaluate the potential of instrument-based aerial surveys (combining infrared imagery with high-resolution digital photography to detect and identify species) for estimating abundance of polar bears and seals in the Chukchi Sea. We investigate the consequences of different levels of survey effort, flight track allocation and model configuration on bias and precision of abundance estimators. For bearded seals (0.07 animals km−2) and ringed seals (1.29 animals km−2), we find that eight flights traversing ≈7840 km are sufficient to achieve target precision levels (coefficient of variation (CV)<20%) for a 2.94×105 km2 study area. For polar bears (provisionally, 0.003 animals km−2), 12 flights traversing ≈11 760 km resulted in CVs ranging from 28 to 35%. Estimators were relatively unbiased with similar precision over different flight track allocation strategies and estimation models, although some combinations had superior performance. These findings suggest that instrument-based aerial surveys may provide a viable means for monitoring seal and polar bear populations on the surface of the sea ice over large Arctic regions. More broadly, our simulation-based approach to evaluating survey designs can serve as a template for biologists designing their own surveys. PMID:26909183

  13. Geostatistical estimation of forest biomass in interior Alaska combining Landsat-derived tree cover, sampled airborne lidar and field observations

    NASA Astrophysics Data System (ADS)

    Babcock, Chad; Finley, Andrew O.; Andersen, Hans-Erik; Pattison, Robert; Cook, Bruce D.; Morton, Douglas C.; Alonzo, Michael; Nelson, Ross; Gregoire, Timothy; Ene, Liviu; Gobakken, Terje; Næsset, Erik

    2018-06-01

    The goal of this research was to develop and examine the performance of a geostatistical coregionalization modeling approach for combining field inventory measurements, strip samples of airborne lidar and Landsat-based remote sensing data products to predict aboveground biomass (AGB) in interior Alaska's Tanana Valley. The proposed modeling strategy facilitates pixel-level mapping of AGB density predictions across the entire spatial domain. Additionally, the coregionalization framework allows for statistically sound estimation of total AGB for arbitrary areal units within the study area, a key advance to support diverse management objectives in interior Alaska. This research focuses on appropriate characterization of prediction uncertainty in the form of posterior predictive coverage intervals and standard deviations. Using the framework detailed here, it is possible to quantify estimation uncertainty for any spatial extent, ranging from pixel-level predictions of AGB density to estimates of AGB stocks for the full domain. The lidar-informed coregionalization models consistently outperformed their counterpart lidar-free models in terms of point-level predictive performance and total AGB precision. Additionally, the inclusion of Landsat-derived forest cover as a covariate further improved estimation precision in regions with lower lidar sampling intensity. Our findings also demonstrate that model-based approaches that do not explicitly account for residual spatial dependence can grossly underestimate uncertainty, resulting in falsely precise estimates of AGB. On the other hand, in a geostatistical setting, residual spatial structure can be modeled within a Bayesian hierarchical framework to obtain statistically defensible assessments of uncertainty for AGB estimates.

  14. SALT - a better way of estimating suspended sediment

    Treesearch

    R. B. Thomas

    1984-01-01

    Hardware and software supporting a sediment sampling procedure, Sampling At List Time (SALT), have been perfected. SALT provides estimates of sediment discharge having improved accuracy and estimable precision. Although the greatest benefit of SALT may accrue to those attempting to monitor "flashy" small streams, its superior statistical...

  15. Developing and evaluating an automated appendicitis risk stratification algorithm for pediatric patients in the emergency department.

    PubMed

    Deleger, Louise; Brodzinski, Holly; Zhai, Haijun; Li, Qi; Lingren, Todd; Kirkendall, Eric S; Alessandrini, Evaline; Solti, Imre

    2013-12-01

    To evaluate a proposed natural language processing (NLP) and machine-learning based automated method to risk stratify abdominal pain patients by analyzing the content of the electronic health record (EHR). We analyzed the EHRs of a random sample of 2100 pediatric emergency department (ED) patients with abdominal pain, including all with a final diagnosis of appendicitis. We developed an automated system to extract relevant elements from ED physician notes and lab values and to automatically assign a risk category for acute appendicitis (high, equivocal, or low), based on the Pediatric Appendicitis Score. We evaluated the performance of the system against a manually created gold standard (chart reviews by ED physicians) for recall, specificity, and precision. The system achieved an average F-measure of 0.867 (0.869 recall and 0.863 precision) for risk classification, which was comparable to physician experts. Recall/precision were 0.897/0.952 in the low-risk category, 0.855/0.886 in the high-risk category, and 0.854/0.766 in the equivocal-risk category. The information that the system required as input to achieve high F-measure was available within the first 4 h of the ED visit. Automated appendicitis risk categorization based on EHR content, including information from clinical notes, shows comparable performance to physician chart reviewers as measured by their inter-annotator agreement and represents a promising new approach for computerized decision support to promote application of evidence-based medicine at the point of care.
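
    The evaluation metrics quoted above follow the standard definitions of precision, recall and their harmonic mean (the F-measure). The short Python sketch below restates those definitions; the confusion-matrix counts are invented, chosen only so that the resulting values roughly match the precision/recall reported for the low-risk category (0.952/0.897), and are not taken from the study.

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F-measure (their harmonic mean) from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Invented counts for one risk category, chosen to roughly reproduce the reported
# low-risk precision/recall of 0.952/0.897.
print(precision_recall_f1(tp=120, fp=6, fn=14))
```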

  16. Improving Weather Forecasts Through Reduced Precision Data Assimilation

    NASA Astrophysics Data System (ADS)

    Hatfield, Samuel; Düben, Peter; Palmer, Tim

    2017-04-01

    We present a new approach for improving the efficiency of data assimilation, by trading numerical precision for computational speed. Future supercomputers will allow a greater choice of precision, so that models can use a level of precision that is commensurate with the model uncertainty. Previous studies have already indicated that the quality of climate and weather forecasts is not significantly degraded when using a precision less than double precision [1,2], but so far these studies have not considered data assimilation. Data assimilation is inherently uncertain due to the use of relatively long assimilation windows, noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, we can redistribute computational resources towards, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localisation, lowering precision could actually allow us to improve the accuracy of weather forecasts. We will present results on how lowering numerical precision affects the performance of an ensemble data assimilation system, consisting of the Lorenz '96 toy atmospheric model and the ensemble square root filter. We run the system at half precision (using an emulation tool), and compare the results with simulations at single and double precision. We estimate that half precision assimilation with a larger ensemble can reduce assimilation error by 30%, with respect to double precision assimilation with a smaller ensemble, for no extra computational cost. This results in around half a day extra of skillful weather forecasts, if the error-doubling characteristics of the Lorenz '96 model are mapped to those of the real atmosphere. Additionally, we investigate the sensitivity of these results to observational error and assimilation window length. Half precision hardware will become available very shortly, with the introduction of Nvidia's Pascal GPU architecture and the Intel Knights Mill coprocessor. We hope that the results presented here will encourage the uptake of this hardware. References [1] Peter D. Düben and T. N. Palmer, 2014: Benchmark Tests for Numerical Weather Forecasts on Inexact Hardware, Mon. Weather Rev., 142, 3809-3829 [2] Peter D. Düben, Hugh McNamara and T. N. Palmer, 2014: The use of imprecise processing to improve accuracy in weather & climate prediction, J. Comput. Phys., 271, 2-18
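
    The experimental setup described above pairs the Lorenz '96 toy model with an ensemble filter run at reduced precision. The Python sketch below shows only the simplest ingredient of such a study: integrating Lorenz '96 with all arithmetic rounded to a chosen floating-point type (float16 emulating half precision) and comparing against a double-precision run. The time step, forcing, run length and RK4 integrator are conventional choices made here for illustration, not details taken from the abstract.

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    """Lorenz '96 tendencies: dX_k/dt = (X_{k+1} - X_{k-2}) * X_{k-1} - X_k + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt, dtype):
    """One RK4 step with every intermediate result rounded to `dtype`,
    emulating a model run carried out entirely at that precision."""
    k1 = lorenz96_tendency(x)
    k2 = lorenz96_tendency((x + 0.5 * dt * k1).astype(dtype))
    k3 = lorenz96_tendency((x + 0.5 * dt * k2).astype(dtype))
    k4 = lorenz96_tendency((x + dt * k3).astype(dtype))
    return (x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)).astype(dtype)

def run(x0, dtype, dt=0.05, n_steps=200):
    x = x0.astype(dtype)
    for _ in range(n_steps):
        x = rk4_step(x, dt, dtype)
    return x.astype(np.float64)

x0 = 8.0 + 0.01 * np.random.default_rng(1).standard_normal(40)
diff = run(x0, np.float64) - run(x0, np.float16)
print("RMS double-vs-half difference after 10 model time units:", np.sqrt(np.mean(diff**2)))
```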

  17. Design of a short nonuniform acquisition protocol for quantitative analysis in dynamic cardiac SPECT imaging - a retrospective 123I-MIBG animal study.

    PubMed

    Zan, Yunlong; Long, Yong; Chen, Kewei; Li, Biao; Huang, Qiu; Gullberg, Grant T

    2017-07-01

    Our previous work found that quantitative analysis of 123I-MIBG kinetics in the rat heart with dynamic single-photon emission computed tomography (SPECT) offers the potential to quantify the innervation integrity at an early stage of left ventricular hypertrophy. However, conventional protocols involving a long acquisition time for dynamic imaging reduce the animal survival rate and thus make longitudinal analysis difficult. The goal of this work was to develop a procedure to reduce the total acquisition time by selecting nonuniform acquisition times for projection views while maintaining the accuracy and precision of estimated physiologic parameters. Taking dynamic cardiac imaging with 123I-MIBG in rats as an example, we generated time activity curves (TACs) of regions of interest (ROIs) as ground truths based on a direct four-dimensional reconstruction of experimental data acquired from a rotating SPECT camera, where TACs represented as the coefficients of B-spline basis functions were used to estimate compartmental model parameters. By iteratively adjusting the knots (i.e., control points) of B-spline basis functions, new TACs were created according to two rules: accuracy and precision. The accuracy criterion allocates the knots to achieve low relative entropy between the estimated left ventricular blood pool TAC and its ground truth so that the estimated input function approximates its real value and thus the procedure yields an accurate estimate of model parameters. The precision criterion, via the D-optimal method, forces the estimated parameters to be as precise as possible, with minimum variances. Based on the final knots obtained, a new 30-min protocol with a shorter acquisition time was built that maintained a 5% error in estimating the rate constants of the compartment model. This was evaluated through digital simulations. The simulation results showed that our method was able to reduce the acquisition time from 100 to 30 min for the cardiac study of rats with 123I-MIBG. Compared to a uniform interval dynamic SPECT protocol (1 s acquisition interval, 30 min acquisition time), the newly proposed protocol with nonuniform intervals achieved comparable (K1 and k2, P = 0.5745 for K1 and P = 0.0604 for k2) or better (Distribution Volume, DV, P = 0.0004) performance for parameter estimates with less storage and shorter computational time. In this study, a procedure was devised to shorten the acquisition time while maintaining the accuracy and precision of estimated physiologic parameters in dynamic SPECT imaging. The procedure was designed for 123I-MIBG cardiac imaging in rat studies; however, it has the potential to be extended to other applications, including patient studies involving the acquisition of dynamic SPECT data. © 2017 American Association of Physicists in Medicine.
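
    The rate constants K1 and k2 and the distribution volume DV quoted above come from a compartmental description of the tracer. As background, the Python sketch below generates a tissue time-activity curve from a one-tissue compartment model, Ct(t) = K1 * integral of Cp(u) e^(-k2 (t-u)) du with DV = K1/k2; the input-function shape and the rate-constant values are invented for illustration, and the sketch leaves out the paper's B-spline representation and D-optimal knot selection entirely.

```python
import numpy as np

# One-tissue compartment model: dCt/dt = K1*Cp(t) - k2*Ct(t), so
# Ct(t) = K1 * integral_0^t Cp(u) * exp(-k2*(t-u)) du and DV = K1/k2.
t = np.arange(0.0, 30.0, 0.1)              # minutes, 0.1-min grid
cp = (t / 0.5) * np.exp(1.0 - t / 0.5)     # made-up bolus-shaped input function

def tissue_tac(cp, t, K1, k2):
    """Discrete convolution approximation of the one-tissue compartment solution."""
    dt = t[1] - t[0]
    kernel = np.exp(-k2 * t)
    return K1 * np.convolve(cp, kernel)[: len(t)] * dt

ct = tissue_tac(cp, t, K1=0.6, k2=0.2)     # illustrative rate constants (1/min)
print("peak tissue activity:", ct.max(), " DV =", 0.6 / 0.2)
```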

  18. Nonlinear Statistical Estimation with Numerical Maximum Likelihood

    DTIC Science & Technology

    1974-10-01

    probably most directly attributable to the speed, precision and compactness of the linear programming algorithm exercised; the mutual primal-dual... discriminant analysis is to classify the individual as a member of π1 or π2 according to the relative...

  19. Small area estimation in forests affected by wildfire in the Interior West

    Treesearch

    G. G. Moisen; J. A. Blackard; M. Finco

    2004-01-01

    Recent emphasis has been placed on estimating the amount and characteristics of forests affected by wildfire in the Interior West. Data collected by FIA are intended for estimation over large geographic areas and are too sparse to construct sufficiently precise estimates within burn perimeters. This paper illustrates how recently built MODIS-based maps of forest/nonforest and...

  20. Developing accurate survey methods for estimating population sizes and trends of the critically endangered Nihoa Millerbird and Nihoa Finch.

    USGS Publications Warehouse

    Gorresen, P. Marcos; Camp, Richard J.; Brinck, Kevin W.; Farmer, Chris

    2012-01-01

    Point-transect surveys indicated that millerbirds were more abundant than shown by the strip-transect method, and were estimated at 802 birds in 2010 (95% CI = 652-964) and 704 birds in 2011 (95% CI = 579-837). Point-transect surveys yielded population estimates with improved precision, which will permit trends to be detected in shorter time periods and with greater statistical power than is available from strip-transect survey methods. Mean finch population estimates and associated uncertainty were not markedly different among the three survey methods, but the performance of models used to estimate density and population size is expected to improve as the data from additional surveys are incorporated. Using the point-transect survey, the mean finch population size was estimated at 2,917 birds in 2010 (95% CI = 2,037-3,965) and 2,461 birds in 2011 (95% CI = 1,682-3,348). Preliminary testing of the line-transect method in 2011 showed that it would not generate sufficient detections to effectively model bird density and, consequently, to produce relatively precise population size estimates. Both species were fairly evenly distributed across Nihoa and appear to occur in all or nearly all available habitat. The time expended and area traversed by observers were similar among survey methods; however, point-transect surveys do not require that observers walk a straight transect line, thereby allowing them to avoid culturally or biologically sensitive areas and minimize the adverse effects of recurrent travel to any particular area. In general, point-transect surveys detect more birds than strip-survey methods, thereby improving the precision of the resulting population size and trend estimates. The method is also better suited for the steep and uneven terrain of Nihoa.
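
    A common way to report precision for abundance estimates of this kind is a log-normal confidence interval around the point estimate; the report does not state which interval construction was used, so the Python sketch below is a generic illustration only, with the CV value back-calculated here rather than taken from the survey.

```python
import numpy as np

def lognormal_ci(n_hat, cv, z=1.96):
    """Log-normal confidence interval often used for abundance estimates:
    (n_hat / c, n_hat * c) with c = exp(z * sqrt(ln(1 + cv^2)))."""
    c = np.exp(z * np.sqrt(np.log(1.0 + cv ** 2)))
    return n_hat / c, n_hat * c

# A CV of roughly 10% gives an interval of about the width reported for the 2010
# millerbird estimate (802 birds, 95% CI 652-964); the CV is inferred, not reported.
print(lognormal_ci(802, 0.10))
```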

  1. The influence of taxon sampling on Bayesian divergence time inference under scenarios of rate heterogeneity among lineages.

    PubMed

    Soares, André E R; Schrago, Carlos G

    2015-01-07

    Although taxon sampling is commonly considered an important issue in phylogenetic inference, it is rarely considered in the Bayesian estimation of divergence times. In fact, the studies conducted to date have presented ambiguous results, and the relevance of taxon sampling for molecular dating remains unclear. In this study, we developed a series of simulations that, after six hundred Bayesian molecular dating analyses, allowed us to evaluate the impact of taxon sampling on chronological estimates under three scenarios of among-lineage rate heterogeneity. The first scenario allowed us to examine the influence of the number of terminals on the age estimates based on a strict molecular clock. The second scenario imposed an extreme example of lineage-specific rate variation, and the third scenario permitted extensive rate variation distributed along the branches. We also analyzed empirical data on selected mitochondrial genomes of mammals. Our results showed that in the strict molecular-clock scenario (Case I), taxon sampling had a minor impact on the accuracy of the time estimates, although the precision of the estimates was greater with an increased number of terminals. The effect was similar in the scenario (Case III) based on rate variation distributed among the branches. Only under intensive rate variation among lineages (Case II) did taxon sampling result in biased estimates. The results of an empirical analysis corroborated the simulation findings. We demonstrate that taxonomic sampling affected divergence time inference but that its impact was significant only if the rates deviated from those of a strict molecular clock. Increased taxon sampling improved the precision and accuracy of the divergence time estimates, but the impact on precision is more relevant. On average, biased estimates were obtained only if lineage rate variation was pronounced. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Task based exposure assessment in ergonomic epidemiology: a study of upper arm elevation in the jobs of machinists, car mechanics, and house painters

    PubMed Central

    Svendsen, S; Mathiassen, S; Bonde, J

    2005-01-01

    Aims: To explore the precision of task based estimates of upper arm elevation in three occupational groups, compared to direct measurements of job exposure. Methods: Male machinists (n = 26), car mechanics (n = 23), and house painters (n = 23) were studied. Whole day recordings of upper arm elevation were obtained for four consecutive working days, and associated task information was collected in diaries. For each individual, task based estimates of job exposure were calculated by weighting task exposures from a collective database by task proportions according to the diaries. These estimates were validated against directly measured job exposures using linear regression. The performance of the task based approach was expressed through the gain in precision of occupational group mean exposures that could be obtained by adding subjects with task based estimates to a group of subjects with measured job exposures in a "validation" design. Results: In all three occupations, tasks differed in mean exposure, and task proportions varied between individuals. Task based estimation proved inefficient, with squared correlation coefficients only occasionally exceeding 0.2 for the relation between task based and measured job exposures. Consequently, it was not possible to substantially improve the precision of an estimated group mean by including subjects whose job exposures were based on task information. Conclusions: Task based estimates of mechanical job exposure can be very imprecise, and only marginally better than estimates based on occupation. It is recommended that investigators in ergonomic epidemiology consider the prospects of task based exposure assessment carefully before placing resources at obtaining task information. Strategies disregarding tasks may be preferable in many cases. PMID:15613604

  3. Temporal variations of potential fecundity of southern blue whiting (Micromesistius australis australis) in the Southeast Pacific

    NASA Astrophysics Data System (ADS)

    Flores, Andrés; Wiff, Rodrigo; Díaz, Eduardo; Carvajal, Bernardita

    2017-08-01

    Fecundity is a key aspect of fish reproductive biology because it relates directly to total egg production. Yet, despite such importance, fecundity estimates are lacking or scarce for several fish species. The gravimetric method is the most widely used way to estimate fecundity; it essentially scales the oocyte density in a weighed subsample up to the ovary weight. It is a relatively simple and precise technique, but also time consuming, because it requires counting all oocytes in an ovary subsample. The auto-diametric method, on the other hand, is a relatively new and rapid alternative for estimating fecundity, because it requires only an estimate of mean oocyte density obtained from mean oocyte diameter. Using the extensive database available from commercial fishery and design surveys for southern blue whiting Micromesistius australis australis in the Southeast Pacific, we compared estimates of fecundity from the gravimetric and auto-diametric methods. Temporal variations in potential fecundity from the auto-diametric method were evaluated using generalised linear models with predictors drawn from maternal characteristics such as female size, condition factor, oocyte size, and gonadosomatic index. A global, time-invariant auto-diametric equation was evaluated using a simulation procedure based on a non-parametric bootstrap. Results indicated no significant differences in fecundity estimates between the gravimetric and auto-diametric methods (p > 0.05). The simulation showed that applying a global equation is unbiased and sufficiently precise to estimate time-invariant fecundity for this species. Temporal variation in fecundity was explained by maternal characteristics, revealing signals of fecundity down-regulation. We discuss how oocyte size and nutritional condition (measured as condition factor) are among the important factors determining fecundity. We also highlight the relevance of choosing an appropriate sampling period to conduct maturity studies and ensure precise estimates of fecundity for this species.
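
    The two methods compared above differ only in how the oocyte density is obtained: the gravimetric method counts oocytes in a weighed ovarian subsample, while the auto-diametric method predicts density from mean oocyte diameter using a previously fitted relationship. The Python sketch below expresses that contrast; the power-function form and all coefficient and input values are placeholders chosen for illustration, not the calibration fitted for southern blue whiting.

```python
import numpy as np

def fecundity_gravimetric(oocytes_in_subsample, subsample_weight_g, ovary_weight_g):
    """Scale the oocyte density counted in a weighed subsample up to the whole ovary."""
    density_per_g = oocytes_in_subsample / subsample_weight_g
    return density_per_g * ovary_weight_g

def fecundity_autodiametric(mean_diameter_um, ovary_weight_g, a=2.4e11, b=-2.8):
    """Predict oocyte density from mean oocyte diameter with a power function
    density = a * D**b, then scale by ovary weight; a and b are placeholder
    coefficients standing in for a species-specific calibration."""
    density_per_g = a * mean_diameter_um ** b
    return density_per_g * ovary_weight_g

print(fecundity_gravimetric(350, 0.05, 40.0))   # 350 oocytes counted in a 0.05 g subsample
print(fecundity_autodiametric(550.0, 40.0))     # mean oocyte diameter of 550 um
```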

  4. Monitoring gray wolf populations using multiple survey methods

    USGS Publications Warehouse

    Ausband, David E.; Rich, Lindsey N.; Glenn, Elizabeth M.; Mitchell, Michael S.; Zager, Pete; Miller, David A.W.; Waits, Lisette P.; Ackerman, Bruce B.; Mack, Curt M.

    2013-01-01

    The behavioral patterns and large territories of large carnivores make them challenging to monitor. Occupancy modeling provides a framework for monitoring population dynamics and distribution of territorial carnivores. We combined data from hunter surveys, howling and sign surveys conducted at predicted wolf rendezvous sites, and locations of radiocollared wolves to model occupancy and estimate the number of gray wolf (Canis lupus) packs and individuals in Idaho during 2009 and 2010. We explicitly accounted for potential misidentification of occupied cells (i.e., false positives) using an extension of the multi-state occupancy framework. We found agreement between model predictions and distribution and estimates of number of wolf packs and individual wolves reported by Idaho Department of Fish and Game and Nez Perce Tribe from intensive radiotelemetry-based monitoring. Estimates of individual wolves from occupancy models that excluded data from radiocollared wolves were within an average of 12.0% (SD = 6.0) of existing statewide minimum counts. Models using only hunter survey data generally estimated the lowest abundance, whereas models using all data generally provided the highest estimates of abundance, although only marginally higher. Precision across approaches ranged from 14% to 28% of mean estimates and models that used all data streams generally provided the most precise estimates. We demonstrated that an occupancy model based on different survey methods can yield estimates of the number and distribution of wolf packs and individual wolf abundance with reasonable measures of precision. Assumptions of the approach including that average territory size is known, average pack size is known, and territories do not overlap, must be evaluated periodically using independent field data to ensure occupancy estimates remain reliable. Use of multiple survey methods helps to ensure that occupancy estimates are robust to weaknesses or changes in any 1 survey method. Occupancy modeling may be useful for standardizing estimates across large landscapes, even if survey methods differ across regions, allowing for inferences about broad-scale population dynamics of wolves.

  5. Phobos mass estimations from MEX and Viking 1 data: influence of different noise sources and estimation strategies

    NASA Astrophysics Data System (ADS)

    Kudryashova, M.; Rosenblatt, P.; Marty, J.-C.

    2015-08-01

    The mass of Phobos is an important parameter which, together with the second-order gravity field coefficients and the libration amplitude, constrains the internal structure and nature of the moon; it therefore needs to be known with high precision. Nevertheless, the Phobos mass (more precisely, GM) estimated by different authors from diverse data sets and methods varies by more than the quoted 1-sigma errors. The most complete lists of GM values are presented in the works of R. Jacobson (2010) and M. Paetzold et al. (2014) and range from (5.39 ± 0.03)×10^5 m^3/s^2 (Smith et al., 1995) to (8.5 ± 0.7)×10^5 m^3/s^2 (Williams et al., 1988). Furthermore, even estimates obtained with the same estimation procedure applied to consecutive flybys of the same spacecraft (s/c) show large variations in GM. This behaviour is most pronounced in the GM estimates stemming from the Viking 1 flybys of February 1977 (and, with smaller amplitude, from the MEX flybys), and in this work we attempt to identify its causes. The errors of Phobos GM estimates depend on the precision of the model (e.g. the accuracy of the a priori Phobos ephemeris and its a priori GM value) as well as on the quality of the radio-tracking measurements (noise, coverage, flyby distance). In the present work we test the impact of the above-mentioned error sources by means of simulations. We also consider the effect of uncertainties in the a priori Phobos positions on the GM estimates obtained from real observations. Apparently, the estimation strategy (i.e. how the real observations are split into data arcs, whether they stem from close approaches of Phobos by the spacecraft or from analysis of the s/c orbit evolution around Mars) also has an impact on the Phobos GM estimate.

  6. A sampling system for estimating the cultivation of wheat (Triticum aestivum L) from LANDSAT data. M.S. Thesis - 21 Jul. 1983

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Moreira, M. A.

    1983-01-01

    Using digitally processed MSS/LANDSAT data as an auxiliary variable, a methodology to estimate wheat (Triticum aestivum L) area by means of sampling techniques was developed. To perform this research, aerial photographs covering 720 sq km in the Cruz Alta test site, in the northwest of Rio Grande do Sul State, were visually analyzed. LANDSAT digital data were analyzed using unsupervised and supervised classification algorithms; as post-processing, the classification was subjected to spatial filtering. To estimate wheat area, the regression estimation method was applied, and different sample sizes and various sampling-unit sizes (10, 20, 30, 40 and 60 sq km) were tested. Based on the four decision criteria established for this research, it was concluded that: (1) as the size of the sampling units decreased, the percentage of sampled area required to obtain similar estimation performance also decreased; (2) the lowest percentage of the area sampled that still gave wheat estimates of relatively high precision and accuracy through regression estimation was 90%, using 10 sq km as the sampling unit; and (3) wheat area estimates obtained by direct expansion (using only aerial photographs) were less precise and accurate than those obtained by means of regression estimation.
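
    The regression estimation method referred to above adjusts the mean of the variable of interest, measured only in the sampled units (wheat area from aerial photographs), using an auxiliary variable known for every unit (LANDSAT-classified wheat area). A minimal Python sketch of that estimator follows; the sample values, the population mean of the auxiliary variable and the number of units are invented for illustration.

```python
import numpy as np

# y: wheat area (sq km) from aerial photos in the sampled units
# x: wheat area (sq km) classified from LANDSAT in the same units
# X_bar_pop: LANDSAT-classified mean over all units in the region (known)
y = np.array([3.1, 4.0, 2.2, 5.5, 3.8])
x = np.array([2.8, 4.3, 2.0, 5.9, 3.5])
X_bar_pop = 3.9
N_units = 72          # e.g. 720 sq km split into 10 sq km sampling units

b = np.polyfit(x, y, 1)[0]                            # slope of y on x
y_bar_reg = y.mean() + b * (X_bar_pop - x.mean())     # regression estimate of the unit mean
print("estimated total wheat area:", N_units * y_bar_reg, "sq km")
```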

  7. [Age and time estimation during different types of activity].

    PubMed

    Gareev, E M; Osipova, L G

    1980-01-01

    The study was concerned with the age characteristics of verbal and operative estimation of time intervals filled with different types of mental and physical activity, as well as intervals free of activity. The experiment was conducted on 85 subjects, 7-24 years of age. In all age groups and in both forms of time estimation (except verbal estimation in 10-12-year-old children) there was a significant connection between the interval estimation and the type of activity. In adults and in 7-8-year-old children, the connection was significantly tighter in operative estimations than in verbal ones. Unlike senior school children and adults, 7-12-year-old children showed sharp differences in precision between operative and verbal estimations and a discordance in how the two changed under the influence of activity. Precision and variability were rather similar in all age groups. It is suggested that the obtained data show heterochrony and a different rate of development of the higher nervous activity mechanisms providing for the reflection of time in the form of verbal and voluntary motor reactions to the given interval.

  8. Periodontal profile classes predict periodontal disease progression and tooth loss.

    PubMed

    Morelli, Thiago; Moss, Kevin L; Preisser, John S; Beck, James D; Divaris, Kimon; Wu, Di; Offenbacher, Steven

    2018-02-01

    Current periodontal disease taxonomies have limited utility for predicting disease progression and tooth loss; in fact, tooth loss itself can undermine precise person-level periodontal disease classifications. To overcome this limitation, the current group recently introduced a novel patient stratification system using latent class analyses of clinical parameters, including patterns of missing teeth. This investigation sought to determine the clinical utility of the Periodontal Profile Classes and Tooth Profile Classes (PPC/TPC) taxonomy for risk assessment, specifically for predicting periodontal disease progression and incident tooth loss. The analytic sample comprised 4,682 adult participants of two prospective cohort studies (Dental Atherosclerosis Risk in Communities Study and Piedmont Dental Study) with information on periodontal disease progression and incident tooth loss. The PPC/TPC taxonomy includes seven distinct PPCs (person-level disease pattern and severity) and seven TPCs (tooth-level disease). Logistic regression modeling was used to estimate relative risks (RR) and 95% confidence intervals (CI) for the association of these latent classes with disease progression and incident tooth loss, adjusting for examination center, race, sex, age, diabetes, and smoking. To obtain personalized outcome propensities, risk estimates associated with each participant's PPC and TPC were combined into person-level composite risk scores (Index of Periodontal Risk [IPR]). Individuals in two PPCs (PPC-G: Severe Disease and PPC-D: Tooth Loss) had the highest tooth loss risk (RR = 3.6; 95% CI = 2.6 to 5.0 and RR = 3.8; 95% CI = 2.9 to 5.1, respectively). PPC-G also had the highest risk for periodontitis progression (RR = 5.7; 95% CI = 2.2 to 14.7). Personalized IPR scores were positively associated with both periodontitis progression and tooth loss. These findings, upon additional validation, suggest that the periodontal/tooth profile classes and the derived personalized propensity scores provide clinical periodontal definitions that reflect disease patterns in the population and offer a useful system for patient stratification that is predictive for disease progression and tooth loss. © 2018 American Academy of Periodontology.

  9. The tumor necrosis factor-α-238 polymorphism and digestive system cancer risk: a meta-analysis.

    PubMed

    Hui, Ming; Yan, Xiaojuan; Jiang, Ying

    2016-08-01

    Many studies have reported the association between the tumor necrosis factor-α (TNF-α)-238 polymorphism and digestive system cancer susceptibility, but the results were inconclusive. We performed a meta-analysis to derive a more precise estimation of the relationship between the TNF-α-238 G/A polymorphism and digestive system cancer risk. Pooled analysis for the TNF-α-238 G/A polymorphism contained 26 studies with a total of 4849 cases and 8567 controls. The meta-analysis observed a significant association between the TNF-α-238 G/A polymorphism and digestive system cancer risk in the overall population (GA vs GG: OR 1.19, 95% CI 1.00-1.40, P heterogeneity = 0.016; A vs G: OR 1.19, 95% CI 1.03-1.39, P heterogeneity = 0.015; dominant model: OR 1.20, 95% CI 1.02-1.41, P heterogeneity = 0.012). In the analysis of the ethnic subgroups, however, similar results were observed only in the Asian population, but not in the Caucasian population. Therefore, this meta-analysis suggests that the TNF-α-238 G/A polymorphism is associated with a significantly increased risk of digestive system cancer. Further large and well-designed studies are needed to confirm these findings.

  10. FiGHTS: a preliminary screening tool for adolescent firearms-carrying.

    PubMed

    Hayes, D Neil; Sege, Robert

    2003-12-01

    Adolescent firearms-carrying is a risk factor for serious injury and death. Clinical screening tools for firearms-carrying have not yet been developed. We present the development of a preliminary screening test for adolescent firearms-carrying based on the growing body of knowledge of firearms-related risk factors. A convenience sample of 15,000 high school students from the 1999 National Youth Risk Behavior Survey was analyzed for the purpose of model building. Known risk factors for firearms-carrying were candidates for 2 models predicting recent firearms-carrying. The "brief FiGHTS score" screening tool excluded terms related to sexual behavior, significant substance abuse, or criminal behavior (Fi=fighting, G=gender, H=hurt while fighting, T=threatened, S=smoker). An "extended FiGHTS score," which included 13 items, was developed for more precise estimates. The brief FiGHTS score had a sensitivity of 82%, a specificity of 71%, and an area under the receiver operating characteristic (ROC) curve of 0.84. The extended FiGHTS score had an area under the ROC curve of 0.90. Both models performed well in a validation data set of 55,000 students. The brief and extended FiGHTS scores have high sensitivity and specificity for predicting firearms-carrying and may be appropriate for clinical testing.

  11. Development of a qualitative pathogen risk-assessment methodology for municipal-sludge landfilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1988-04-01

    This report addresses potential risks from microbiological pathogens present in municipal sludge disposal in landfills. Municipal sludges contain a wide variety of bacteria, viruses, protozoa, helminths, and fungi. Survival characteristics of pathogens are critical factors in assessing the risks associated with potential transport of microorganisms from the sludge-soil matrix to the ground-water environment of landfills. Various models are discussed for predicting microbial die-off. The order of persistence in the environment from longest to shortest survival time appears to be helminth eggs > viruses > bacteria > protozoan cysts. Whether or not a pathogen reaches ground-water and is transported to drinking-water wells depends on a number of factors, including initial concentration of the pathogen, survival of the pathogen, number of pathogens that reach the sludge-soil interface, degree of removal through the unsaturated and saturated-soil zones, and the hydraulic gradient. The degree to which each of these factors will influence the probability of pathogens entering ground-water cannot be determined precisely. Information on the fate of pathogens at existing landfills is sorely lacking. Additional laboratory and field studies are needed to determine the degree of pathogen leaching, survival and transport in ground-water in order to estimate potential risks from pathogens at sludge landfills with reasonable validity.

  12. Does the response to alcohol taxes differ across racial/ethnic groups? Some evidence from 1984-2009 Behavioral Risk Factor Surveillance System

    PubMed Central

    An, Ruopeng; Sturm, Roland

    2011-01-01

    Background Excessive alcohol use remains an important lifestyle-related contributor to morbidity and mortality in the U.S. and worldwide. It is well documented that drinking patterns differ across racial/ethnic groups, but not how those different consumption patterns would respond to tax changes. Therefore, policy makers are not informed on whether the effects of tax increases on alcohol abuse are shared equally by the whole population, or policies in addition to taxation should be pursued to reach certain sociodemographic groups. Aims of the Study To estimate differential demand responses to alcohol excise taxes across racial/ethnic groups in the U.S. Methods Individual data from the Behavioral Risk Factor Surveillance System 1984-2009 waves (N= 3,921,943, 39.3% male; 81.3% White, 7.8% African American, 5.8% Hispanic, 1.9% Asian or Pacific Islander, 1.4% Native American, and 1.8% other race/multi-race) are merged with tax data by residential state and interview month. Dependent variables include consumption of any alcohol and number of drinks consumed per month. Demand responses to alcohol taxes are estimated for each race/ethnicity in separate regressions conditional on individual characteristics, state and time fixed effects, and state-specific secular trends. Results The null hypothesis on the identical tax effects among all races/ethnicities is strongly rejected (P < 0.0001), although pairwise comparisons using t-test are often not statistically significant due to a lack of precision. Our point estimates suggest that the tax effect on any alcohol consumption is largest among White and smallest among Hispanic. Among existing drinkers, Native American and other race/multi-race are most responsive to tax effects while Hispanic least. For all races/ethnicities, the estimated tax effects on consumption are large and significant among light drinkers (1-40 drinks per month), but shrink substantially for moderate (41-99) and heavy drinkers (≥ 100). Discussion Extensive research has been conducted on overall demand responses to alcohol excise taxes, but not on heterogeneity across various racial/ethnic groups. Only one similar prior study exists, but used a much smaller dataset. The authors did not identify differential effects. With this much larger dataset, we found some evidence for different responses across races/ethnicities to alcohol taxes, although we lack precision for individual group estimates. Limitations of our study include the absence of intrastate tax variations, no information on what type of alcohol is consumed, lack of controls for subgroup baseline alcohol consumption rates, and measurement error in self-reported alcohol use data. Implications for Health Policies Tax policies aimed to reduce alcohol-related health and social problems should consider whether they target the most harmful drinking behaviors, affect subgroups in unintended ways, or influence some groups disproportionately. This requires information on heterogeneity across subpopulations. Our results are a first step in this direction and suggest that there exists a differential impact across races/ethnicities, which may further increase health disparities. Tax increases also appear to be less effective among the heaviest consumers who are associated with highest risk. Implications for Further Research More research, including replications in different settings, is required to obtain better estimates on differential responses to alcohol tax across races/ethnicities. 
    Population heterogeneity is also more complex than our first cut by race/ethnicity and needs more fine-grained analyses and model structures. PMID:21552394

  13. Does the response to alcohol taxes differ across racial/ethnic groups? Some evidence from 1984-2009 Behavioral Risk Factor Surveillance System.

    PubMed

    An, Ruopeng; Sturm, Roland

    2011-03-01

    Excessive alcohol use remains an important lifestyle-related contributor to morbidity and mortality in the U.S. and worldwide. It is well documented that drinking patterns differ across racial/ethnic groups, but not how those different consumption patterns would respond to tax changes. Therefore, policy makers are not informed on whether the effects of tax increases on alcohol abuse are shared equally by the whole population, or policies in addition to taxation should be pursued to reach certain sociodemographic groups. To estimate differential demand responses to alcohol excise taxes across racial/ethnic groups in the U.S. Individual data from the Behavioral Risk Factor Surveillance System 1984-2009 waves (N= 3,921,943, 39.3% male; 81.3% White, 7.8% African American, 5.8% Hispanic, 1.9% Asian or Pacific Islander, 1.4% Native American, and 1.8% other race/multi-race) are merged with tax data by residential state and interview month. Dependent variables include consumption of any alcohol and number of drinks consumed per month. Demand responses to alcohol taxes are estimated for each race/ethnicity in separate regressions conditional on individual characteristics, state and time fixed effects, and state-specific secular trends. The null hypothesis on the identical tax effects among all races/ethnicities is strongly rejected (P < 0.0001), although pairwise comparisons using t-test are often not statistically significant due to a lack of precision. Our point estimates suggest that the tax effect on any alcohol consumption is largest among White and smallest among Hispanic. Among existing drinkers, Native American and other race/multi-race are most responsive to tax effects while Hispanic least. For all races/ethnicities, the estimated tax effects on consumption are large and significant among light drinkers (1-40 drinks per month), but shrink substantially for moderate (41-99) and heavy drinkers (≥ 100). Extensive research has been conducted on overall demand responses to alcohol excise taxes, but not on heterogeneity across various racial/ethnic groups. Only one similar prior study exists, but used a much smaller dataset. The authors did not identify differential effects. With this much larger dataset, we found some evidence for different responses across races/ethnicities to alcohol taxes, although we lack precision for individual group estimates. Limitations of our study include the absence of intrastate tax variations, no information on what type of alcohol is consumed, lack of controls for subgroup baseline alcohol consumption rates, and measurement error in self-reported alcohol use data. Tax policies aimed to reduce alcohol-related health and social problems should consider whether they target the most harmful drinking behaviors, affect subgroups in unintended ways, or influence some groups disproportionately. This requires information on heterogeneity across subpopulations. Our results are a first step in this direction and suggest that there exists a differential impact across races/ethnicities, which may further increase health disparities. Tax increases also appear to be less effective among the heaviest consumers who are associated with highest risk. More research, including replications in different settings, is required to obtain better estimates on differential responses to alcohol tax across races/ethnicities. Population heterogeneity is also more complex than our first cut by race/ethnicity and needs more fine-grained analyses and model structures.

  14. An Inertial Sensor-Based Method for Estimating the Athlete's Relative Joint Center Positions and Center of Mass Kinematics in Alpine Ski Racing

    PubMed Central

    Fasel, Benedikt; Spörri, Jörg; Schütz, Pascal; Lorenzetti, Silvio; Aminian, Kamiar

    2017-01-01

    For the purpose of gaining a deeper understanding of the relationship between external training load and health in competitive alpine skiing, an accurate and precise estimation of the athlete's kinematics is an essential methodological prerequisite. This study proposes an inertial sensor-based method to estimate the athlete's relative joint center positions and center of mass (CoM) kinematics in alpine skiing. Eleven inertial sensors were fixed to the lower and upper limbs, trunk, and head. The relative positions of the ankle, knee, hip, shoulder, elbow, and wrist joint centers, as well as the athlete's CoM kinematics were validated against a marker-based optoelectronic motion capture system during indoor carpet skiing. For all joints centers analyzed, position accuracy (mean error) was below 110 mm and precision (error standard deviation) was below 30 mm. CoM position accuracy and precision were 25.7 and 6.7 mm, respectively. Both the accuracy and precision of the system to estimate the distance between the ankle of the outside leg and CoM (measure quantifying the skier's overall vertical motion) were found to be below 11 mm. Some poorer accuracy and precision values (below 77 mm) were observed for the athlete's fore-aft position (i.e., the projection of the outer ankle-CoM vector onto the line corresponding to the projection of ski's longitudinal axis on the snow surface). In addition, the system was found to be sensitive enough to distinguish between different types of turns (wide/narrow). Thus, the method proposed in this paper may also provide a useful, pervasive way to monitor and control adverse external loading patterns that occur during regular on-snow training. Moreover, as demonstrated earlier, such an approach might have a certain potential to quantify competition time, movement repetitions and/or the accelerations acting on the different segments of the human body. However, prior to getting feasible for applications in daily training, future studies should primarily focus on a simplification of the sensor setup, as well as a fusion with global navigation satellite systems (i.e., the estimation of the absolute joint and CoM positions). PMID:29163196

  15. An Inertial Sensor-Based Method for Estimating the Athlete's Relative Joint Center Positions and Center of Mass Kinematics in Alpine Ski Racing.

    PubMed

    Fasel, Benedikt; Spörri, Jörg; Schütz, Pascal; Lorenzetti, Silvio; Aminian, Kamiar

    2017-01-01

    For the purpose of gaining a deeper understanding of the relationship between external training load and health in competitive alpine skiing, an accurate and precise estimation of the athlete's kinematics is an essential methodological prerequisite. This study proposes an inertial sensor-based method to estimate the athlete's relative joint center positions and center of mass (CoM) kinematics in alpine skiing. Eleven inertial sensors were fixed to the lower and upper limbs, trunk, and head. The relative positions of the ankle, knee, hip, shoulder, elbow, and wrist joint centers, as well as the athlete's CoM kinematics were validated against a marker-based optoelectronic motion capture system during indoor carpet skiing. For all joints centers analyzed, position accuracy (mean error) was below 110 mm and precision (error standard deviation) was below 30 mm. CoM position accuracy and precision were 25.7 and 6.7 mm, respectively. Both the accuracy and precision of the system to estimate the distance between the ankle of the outside leg and CoM (measure quantifying the skier's overall vertical motion) were found to be below 11 mm. Some poorer accuracy and precision values (below 77 mm) were observed for the athlete's fore-aft position (i.e., the projection of the outer ankle-CoM vector onto the line corresponding to the projection of ski's longitudinal axis on the snow surface). In addition, the system was found to be sensitive enough to distinguish between different types of turns (wide/narrow). Thus, the method proposed in this paper may also provide a useful, pervasive way to monitor and control adverse external loading patterns that occur during regular on-snow training. Moreover, as demonstrated earlier, such an approach might have a certain potential to quantify competition time, movement repetitions and/or the accelerations acting on the different segments of the human body. However, prior to getting feasible for applications in daily training, future studies should primarily focus on a simplification of the sensor setup, as well as a fusion with global navigation satellite systems (i.e., the estimation of the absolute joint and CoM positions).

  16. PRECISION OF ATMOSPHERIC DRY DEPOSITION DATA FROM THE CLEAN AIR STATUS AND TRENDS NETWORK (CASTNET)

    EPA Science Inventory

    A collocated, dry deposition sampling program was begun in January 1987 by the US Environmental Protection Agency to provide ongoing estimates of the overall precision of dry deposition and supporting data entering the Clean Air Status and Trends Network (CASTNet) archives Duplic...

  17. Optimal Design for Two-Level Random Assignment and Regression Discontinuity Studies

    ERIC Educational Resources Information Center

    Rhoads, Christopher H.; Dye, Charles

    2016-01-01

    An important concern when planning research studies is to obtain maximum precision of an estimate of a treatment effect given a budget constraint. When research designs have a "multilevel" or "hierarchical" structure changes in sample size at different levels of the design will impact precision differently. Furthermore, there…

  18. Principles of precision medicine in stroke.

    PubMed

    Hinman, Jason D; Rost, Natalia S; Leung, Thomas W; Montaner, Joan; Muir, Keith W; Brown, Scott; Arenillas, Juan F; Feldmann, Edward; Liebeskind, David S

    2017-01-01

    The era of precision medicine has arrived and conveys tremendous potential, particularly for stroke neurology. The diagnosis of stroke, its underlying aetiology, theranostic strategies, recurrence risk and path to recovery are populated by a series of highly individualised questions. Moreover, the phenotypic complexity of a clinical diagnosis of stroke makes a simple genetic risk assessment only partially informative on an individual basis. The guiding principles of precision medicine in stroke underscore the need to identify, value, organise and analyse the multitude of variables obtained from each individual to generate a precise approach to optimise cerebrovascular health. Existing data may be leveraged with novel technologies, informatics and practical clinical paradigms to apply these principles in stroke and realise the promise of precision medicine. Importantly, precision medicine in stroke will only be realised once efforts to collect, value and synthesise the wealth of data collected in clinical trials and routine care start. Stroke theranostics, the ultimate vision of synchronising tailored therapeutic strategies based on specific diagnostic data, demand cerebrovascular expertise on big data approaches to clinically relevant paradigms. This review considers such challenges and delineates the principles on a roadmap for rational application of precision medicine to stroke and cerebrovascular health. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  19. The detection of cryptic Plasmodium infection among villagers in Attapeu province, Lao PDR.

    PubMed

    Iwagami, Moritoshi; Keomalaphet, Sengdeuane; Khattignavong, Phonepadith; Soundala, Pheovaly; Lorphachan, Lavy; Matsumoto-Takahashi, Emilie; Strobel, Michel; Reinharz, Daniel; Phommasansack, Manisack; Hongvanthong, Bouasy; Brey, Paul T; Kano, Shigeyuki

    2017-12-01

    Although the malaria burden in the Lao PDR has gradually decreased, the elimination of malaria by 2030 presents many challenges. Microscopy and malaria rapid diagnostic tests (RDTs) are used to diagnose malaria in the Lao PDR; however, some studies have reported the prevalence of sub-microscopic Plasmodium infections or asymptomatic Plasmodium carriers in endemic areas. Thus, highly sensitive detection methods are needed to understand the precise malaria situation in these areas. A cross-sectional malaria field survey was conducted in 3 highly endemic malaria districts (Xaysetha, Sanamxay, Phouvong) in Attapeu province, Lao PDR in 2015, to investigate the precise malaria endemicity in the area; 719 volunteers from these villages participated in the survey. Microscopy, RDTs and a real-time nested PCR were used to detect Plasmodium infections and their results were compared. A questionnaire survey of all participants was also conducted to estimate risk factors of Plasmodium infection. Numbers of infections detected by the three methods were microscopy: P. falciparum (n = 1), P. vivax (n = 2); RDTs: P. falciparum (n = 2), P. vivax (n = 3); PCR: Plasmodium (n = 47; P. falciparum [n = 4], P. vivax [n = 41], mixed infection [n = 2]; 6.5%, 47/719). Using PCR as a reference, the sensitivity and specificity of microscopy were 33.3% and 100.0%, respectively, for detecting P. falciparum infection, and 7.0% and 100.0%, for detecting P. vivax infection. Among the 47 participants with parasitemia, only one had a fever (≥37.5°C) and 31 (66.0%) were adult males. Risk factors of Plasmodium infection were males and soldiers, whereas a risk factor of asymptomatic Plasmodium infection was a history of ≥3 malaria episodes. There were many asymptomatic Plasmodium carriers in the study areas of Attapeu province in 2015. Adult males, probably soldiers, were at high risk for malaria infection. P. vivax, the dominant species, accounted for 87.2% of the Plasmodium infections among the participants. To achieve malaria elimination in the Lao PDR, highly sensitive diagnostic tests, including PCR-based diagnostic methods should be used, and plans targeting high-risk populations and elimination of P. vivax should be designed and implemented.

  20. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  1. Precision in the perception of direction of a moving pattern

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.

    1988-01-01

    The precision of the model of pattern motion analysis put forth by Adelson and Movshon (1982), who proposed that humans determine the direction of a moving plaid (the sum of two sinusoidal gratings of different orientations) in two steps, is qualitatively examined. The velocities of the grating components are first estimated, then combined using the intersection of constraints to determine the velocity of the plaid as a whole. Under the additional assumption that the noise sources for the component velocities are independent, an approximate expression can be derived for the precision in plaid direction as a function of the precision in the speed and direction of the components. Monte Carlo simulations verify that the expression is valid to within 5 percent over the natural range of the parameters. The expression is then used to predict human performance based on available estimates of human precision in the judgment of single component speed. Human performance is predicted to deteriorate by a factor of 3 as half the angle between the wavefronts (theta) decreases from 60 to 30 deg, but actual performance does not. The mean direction discrimination for three human observers was 4.3 plus or minus 0.9 deg (SD) for theta = 60 deg and 5.9 plus or minus 1.2 for theta = 30 deg. This discrepancy can be resolved in two ways. If the noise in the internal representations of the component speeds is smaller than the available estimates suggest, or if these noise sources are not independent, then the psychophysical results are consistent with the Adelson-Movshon hypothesis.
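
    The intersection-of-constraints computation referred to above has a compact linear-algebra form: each component grating moving at speed s along its normal direction n constrains the pattern velocity v through v·n = s, and two gratings give a 2×2 system. The Python sketch below solves that system and then propagates independent noise on the component speeds and directions to the recovered plaid direction, in the spirit of the Monte Carlo check described; the noise levels are arbitrary illustrative values, not those used in the study.

```python
import numpy as np

def plaid_velocity(theta1_deg, speed1, theta2_deg, speed2):
    """Intersection of constraints: solve v . n_i = s_i for the 2D pattern velocity,
    where n_i is the unit normal of grating i and s_i its speed along that normal."""
    n = np.array([[np.cos(np.radians(theta1_deg)), np.sin(np.radians(theta1_deg))],
                  [np.cos(np.radians(theta2_deg)), np.sin(np.radians(theta2_deg))]])
    return np.linalg.solve(n, np.array([speed1, speed2]))

# Components 60 deg either side of the pattern direction, pattern speed 1:
# each component speed is cos(60 deg) = 0.5, and the solver recovers (1, 0).
print(plaid_velocity(60.0, 0.5, -60.0, 0.5))

# Propagate independent noise on component speed (5%) and direction (2 deg; arbitrary
# values) to the recovered plaid direction, as a miniature Monte Carlo check.
rng = np.random.default_rng(0)
directions = []
for _ in range(2000):
    v = plaid_velocity(60.0 + rng.normal(0, 2.0), 0.5 * (1 + rng.normal(0, 0.05)),
                       -60.0 + rng.normal(0, 2.0), 0.5 * (1 + rng.normal(0, 0.05)))
    directions.append(np.degrees(np.arctan2(v[1], v[0])))
print("SD of recovered plaid direction (deg):", np.std(directions))
```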

  2. qCSF in Clinical Application: Efficient Characterization and Classification of Contrast Sensitivity Functions in Amblyopia

    PubMed Central

    Hou, Fang; Huang, Chang-Bing; Lesmes, Luis; Feng, Li-Xia; Tao, Liming; Zhou, Yi-Feng; Lu, Zhong-Lin

    2010-01-01

    Purpose. The qCSF method is a novel procedure for rapid measurement of spatial contrast sensitivity functions (CSFs). It combines Bayesian adaptive inference with a trial-to-trial information gain strategy, to directly estimate four parameters defining the observer's CSF. In the present study, the suitability of the qCSF method for clinical application was examined. Methods. The qCSF method was applied to rapidly assess spatial CSFs in 10 normal and 8 amblyopic participants. The qCSF was evaluated for accuracy, precision, test–retest reliability, suitability of CSF model assumptions, and accuracy of amblyopia screening. Results. qCSF estimates obtained with as few as 50 trials matched those obtained with 300 Ψ trials. The precision of qCSF estimates obtained with 120 and 130 trials, in normal subjects and amblyopes, matched the precision of 300 Ψ trials. For both groups and both methods, test–retest sensitivity estimates were well matched (all R > 0.94). The qCSF model assumptions were valid for 8 of 10 normal participants and all amblyopic participants. Measures of the area under log CSF (AULCSF) and the cutoff spatial frequency (cutSF) were lower in the amblyopia group; these differences were captured within 50 qCSF trials. Amblyopia was detected at an approximately 80% correct rate in 50 trials, when a logistic regression model was used with AULCSF and cutSF as predictors. Conclusions. The qCSF method is sufficiently rapid, accurate, and precise in measuring CSFs in normal and amblyopic persons. It has great potential for clinical practice. PMID:20484592

  3. The Quality of Reporting of Measures of Precision in Animal Experiments in Implant Dentistry: A Methodological Study.

    PubMed

    Faggion, Clovis Mariano; Aranda, Luisiana; Diaz, Karla Tatiana; Shih, Ming-Chieh; Tu, Yu-Kang; Alarcón, Marco Antonio

    2016-01-01

    Information on precision of treatment-effect estimates is pivotal for understanding research findings. In animal experiments, which provide important information for supporting clinical trials in implant dentistry, inaccurate information may lead to biased clinical trials. The aim of this methodological study was to determine whether sample size calculation, standard errors, and confidence intervals for treatment-effect estimates are reported accurately in publications describing animal experiments in implant dentistry. MEDLINE (via PubMed), Scopus, and SciELO databases were searched to identify reports involving animal experiments with dental implants published from September 2010 to March 2015. Data from publications were extracted into a standardized form with nine items related to precision of treatment estimates and experiment characteristics. Data selection and extraction were performed independently and in duplicate, with disagreements resolved by discussion-based consensus. The chi-square and Fisher exact tests were used to assess differences in reporting according to study sponsorship type and impact factor of the journal of publication. The sample comprised reports of 161 animal experiments. Sample size calculation was reported in five (2%) publications. P values and confidence intervals were reported in 152 (94%) and 13 (8%) of these publications, respectively. Standard errors were reported in 19 (12%) publications. Confidence intervals were better reported in publications describing industry-supported animal experiments (P = .03) and with a higher impact factor (P = .02). Information on precision of estimates is rarely reported in publications describing animal experiments in implant dentistry. This lack of information makes it difficult to evaluate whether the translation of animal research findings to clinical trials is adequate.

  4. Neighborhood Sociodemographic Predictors of Serious Emotional Disturbance (SED) in Schools: Demonstrating a Small Area Estimation Method in the National Comorbidity Survey (NCS-A) Adolescent Supplement

    PubMed Central

    Alegría, Margarita; Kessler, Ronald C.; McLaughlin, Katie A.; Gruber, Michael J.; Sampson, Nancy A.; Zaslavsky, Alan M.

    2014-01-01

    We evaluate the precision of a model estimating school prevalence of SED using a small area estimation method based on readily-available predictors from area-level census block data and school principal questionnaires. Adolescents at 314 schools participated in the National Comorbidity Supplement, a national survey of DSM-IV disorders among adolescents. A multilevel model indicated that predictors accounted for under half of the variance in school-level SED and even less when considering block-group predictors or principal report alone. While Census measures and principal questionnaires are significant predictors of individual-level SED, associations are too weak to generate precise school-level predictions of SED prevalence. PMID:24740174
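
    The small-area logic evaluated in the abstract can be sketched generically. The snippet below shows a Fay-Herriot-style composite estimate that shrinks a school's direct survey prevalence toward a model-based synthetic prediction; it is not the NCS-A model itself, and all inputs are hypothetical.

```python
# Minimal sketch, assuming hypothetical inputs: composite small-area estimate
# combining a direct survey estimate with a covariate-based synthetic prediction.
direct_est = 0.12   # direct survey SED prevalence for one school
direct_var = 0.002  # sampling variance of the direct estimate (small n per school)
synthetic = 0.08    # prediction from census block-group and principal covariates
model_var = 0.001   # between-school (model) variance component

gamma = model_var / (model_var + direct_var)        # shrinkage weight
composite = gamma * direct_est + (1 - gamma) * synthetic
composite_var = gamma * direct_var                  # approximate; ignores model-fit uncertainty
print(f"composite prevalence = {composite:.3f} (shrinkage weight = {gamma:.2f})")
```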

  5. Entangling measurements for multiparameter estimation with two qubits

    NASA Astrophysics Data System (ADS)

    Roccia, Emanuele; Gianani, Ilaria; Mancino, Luca; Sbroscia, Marco; Somma, Fabrizia; Genoni, Marco G.; Barbieri, Marco

    2018-01-01

    Carefully tailoring the quantum state of probes offers the capability of investigating matter at unprecedented precision. Rarely, however, is the interaction with the sample fully encompassed by a single parameter, and the information contained in the probe must then be partitioned among multiple parameters. There exist, then, practical bounds on the ultimate joint-estimation precision, set by the unavailability of a single optimal measurement for all parameters. Here, we discuss how these considerations are modified for two-level quantum probes (qubits) by the use of two copies and entangling measurements. We find that the joint estimation of phase and phase diffusion benefits from such collective measurements, while for multiple phases no enhancement is observed. We demonstrate this in a proof-of-principle photonic setup.

  6. Periodontal Disease and Incident Lung Cancer Risk: A Meta-Analysis of Cohort Studies.

    PubMed

    Zeng, Xian-Tao; Xia, Ling-Yun; Zhang, Yong-Gang; Li, Sheng; Leng, Wei-Dong; Kwong, Joey S W

    2016-10-01

    Periodontal disease is linked to a number of systemic diseases such as cardiovascular diseases and diabetes mellitus. Recent evidence has suggested periodontal disease might be associated with lung cancer. However, their precise relationship has yet to be explored. Hence, this study aims to investigate the association between periodontal disease and risk of incident lung cancer using a meta-analytic approach. PubMed, Scopus, and ScienceDirect were searched up to June 10, 2015. Cohort and nested case-control studies investigating risk of lung cancer in patients with periodontal disease were included. Pooled hazard ratios (HRs) and their 95% confidence intervals (CIs) were calculated using a fixed-effect inverse-variance model. Statistical heterogeneity was explored using the Q test as well as the I² statistic. Publication bias was assessed by visual inspection of funnel plot symmetry and Egger's test. Five cohort studies, involving 321,420 participants, were included in this meta-analysis. Summary estimates based on adjusted data showed that periodontal disease was associated with a significantly increased risk of lung cancer (HR = 1.24, 95% CI = 1.13 to 1.36; I² = 30%). No publication bias was detected. Subgroup analysis indicated that the association between periodontal disease and lung cancer remained significant in the female population. Evidence from cohort studies suggests that patients with periodontal disease are at increased risk of developing lung cancer.
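
    The pooling described in the abstract follows the standard fixed-effect inverse-variance recipe. The sketch below pools log hazard ratios and reports Cochran's Q and I²; the study-level HRs and CIs are hypothetical placeholders, not the data of the five included cohorts.

```python
# Minimal sketch, assuming hypothetical study results: fixed-effect
# inverse-variance pooling of hazard ratios with Q and I^2.
import numpy as np
from scipy import stats

hr = np.array([1.18, 1.32, 1.10, 1.45, 1.22])
ci_low = np.array([1.02, 1.08, 0.90, 1.05, 1.01])
ci_high = np.array([1.37, 1.61, 1.34, 2.00, 1.47])

log_hr = np.log(hr)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)   # SE recovered from 95% CI width
w = 1 / se**2                                          # inverse-variance weights

pooled = np.sum(w * log_hr) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
q = np.sum(w * (log_hr - pooled) ** 2)
df = len(hr) - 1
i2 = max(0.0, (q - df) / q) * 100

print(f"pooled HR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96*pooled_se):.2f} to {np.exp(pooled + 1.96*pooled_se):.2f})")
print(f"Q = {q:.2f} (p = {1 - stats.chi2.cdf(q, df):.2f}), I^2 = {i2:.0f}%")
```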

  7. Long-Term Effects of Radiation Exposure among Adult Survivors of Childhood Cancer: Results from the Childhood Cancer Survivor Study

    PubMed Central

    Armstrong, Gregory T.; Stovall, Marilyn; Robison, Leslie L.

    2010-01-01

    In the last four decades, advances in therapies for primary cancers have improved overall survival for childhood cancer. Currently, almost 80% of children will survive beyond 5 years from diagnosis of their primary malignancy. These improved outcomes have resulted in a growing population of childhood cancer survivors. Radiation therapy, while an essential component of primary treatment for many childhood malignancies, has been associated with risk of long-term adverse outcomes. The Childhood Cancer Survivor Study (CCSS), a retrospective cohort of over 14,000 survivors of childhood cancer diagnosed between 1970 and 1986, has been an important resource to quantify associations between radiation therapy and risk of long-term adverse health and quality of life outcomes. Radiation therapy has been associated with increased risk for late mortality, development of second neoplasms, obesity, and pulmonary, cardiac and thyroid dysfunction as well as an increased overall risk for chronic health conditions. Importantly, the CCSS has provided more precise estimates for a number of dose–response relationships, including those for radiation therapy and development of subsequent malignant neoplasms of the central nervous system, thyroid and breast. Ongoing study of childhood cancer survivors is needed to establish long-term risks and to evaluate the impact of newer techniques such as conformal radiation therapy or proton-beam therapy. PMID:21128808

  8. Altitude-related hypoxia: risk assessment and management for passengers on commercial aircraft.

    PubMed

    Mortazavi, Amir; Eisenberg, Mark J; Langleben, David; Ernst, Pierre; Schiff, Renee L

    2003-09-01

    Individuals with pulmonary and cardiac disorders are particularly at risk of developing hypoxemia at altitude. Our objective is to describe the normal and maladaptive physiological responses to altitude-related hypoxia, to review existing methods and guidelines for preflight assessment of air travelers, and to provide recommendations for treatment of hypoxia at altitude. Falling partial pressure of oxygen with altitude results in a number of physiologic adaptations including hyperventilation, pulmonary vasoconstriction, altered ventilation/perfusion matching, and increased sympathetic tone. According to three guideline statements, the arterial partial pressure of oxygen (PaO2) should be maintained above 50 to 55 mm Hg at all altitudes. General indicators such as oxygen saturation and sea level blood gases may be useful in predicting altitude hypoxia. More specialized techniques for estimation of altitude PaO2, such as regression equations, hypoxia challenge testing, and hypobaric chamber exposure, have also been examined. A regression equation using sea level PaO2 and spirometric parameters can be used to estimate PaO2 at altitude. Hypoxia challenge testing, performed by exposing subjects to a lower inspired FIO2 at sea level, may be more precise. Hypobaric chamber exposure, the gold standard, mimics lower barometric pressure, but is mainly used in research. Oxygen supplementation during air travel is needed for individuals with an estimated PaO2 at 8000 ft below 50 mm Hg. There are a number of guidelines for the preflight assessment of patients with pulmonary and/or cardiac diseases. However, these data are based on small studies in patients with a limited group of diseases.
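
    A regression-based preflight estimate of this kind can be sketched as follows. Note that the coefficients used here are illustrative placeholders only; they are not a validated clinical equation, and published equations should be used in practice.

```python
# Minimal sketch, assuming a hypothetical linear predictor for PaO2 at cabin
# altitude from sea-level PaO2 and FEV1. Coefficients are illustrative only.
def estimate_altitude_pao2(pao2_sea_level_mmhg, fev1_litres,
                           a=0.45, b=0.50, c=10.0):
    """Hypothetical form: PaO2_alt = a * PaO2_sea_level + b * FEV1 + c."""
    return a * pao2_sea_level_mmhg + b * fev1_litres + c

pao2_alt = estimate_altitude_pao2(75.0, 2.1)
needs_oxygen = pao2_alt < 50.0   # threshold cited in the abstract
print(f"estimated PaO2 at 8000 ft: {pao2_alt:.1f} mm Hg; "
      f"in-flight oxygen indicated: {needs_oxygen}")
```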

  9. Evaluation of optimum room entry times for radiation therapists after high energy whole pelvic photon treatments.

    PubMed

    Ho, Lavine; White, Peter; Chan, Edward; Chan, Kim; Ng, Janet; Tam, Timothy

    2012-01-01

    Linear accelerators operating at or above 10 MV produce neutrons by photonuclear reactions and induce activation in machine components, which are a source of potential exposure for radiation therapists. This study estimated gamma dose contributions to radiation therapists during high-energy, whole-pelvic photon beam treatments and determined the optimum room entry times in terms of radiation therapist safety. Two techniques (anterior-posterior opposing fields and a 3-field technique) were studied. An Elekta Precise treatment system, operating up to 18 MV, was investigated. Measurements with an area monitoring device (a Mini 900R radiation monitor) were performed to calculate gamma dose rates around the radiotherapy facility. Measurements inside the treatment room were performed when the linear accelerator was in use. The doses received by radiation therapists were estimated, and optimum room entry times were determined. The highest gamma dose rates were approximately 7 μSv/h inside the treatment room, while the doses in the control room were close to background (~0 μSv/h) for all techniques. The highest personal dose received by radiation therapists was estimated at 5 mSv/yr. To optimize protection, radiation therapists should wait up to 11 min after beam-off prior to room entry. The potential risks to radiation therapists with standard safety procedures were well below internationally recommended values, but risks could be further decreased by delaying room entry times. Depending on the technique used, optimum entry times ranged from 7 to 11 min. A balance between reasonable treatment times and the reduction in measured equivalent doses should be considered.
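
    One simple way to derive such a delay is to assume the post-beam gamma dose rate decays with a single effective half-life. The sketch below uses that assumption with hypothetical values; it is not the measurement-based procedure used in the study.

```python
# Minimal sketch, assuming single-exponential decay of induced activation with
# a hypothetical effective half-life and a hypothetical entry criterion.
import math

initial_rate_uSv_h = 7.0        # peak dose rate just after beam-off (abstract's value)
target_rate_uSv_h = 1.0         # hypothetical room-entry criterion
effective_half_life_min = 3.0   # hypothetical effective half-life of activation products

lam = math.log(2) / effective_half_life_min
wait_min = math.log(initial_rate_uSv_h / target_rate_uSv_h) / lam
print(f"suggested delay before room entry: {wait_min:.1f} min")
```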

  10. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Based on research in the area of precise ephemerides for GPS satellites, the following observations can be made pertaining to the status of, and future work needed on, orbit accuracy. Several aspects need to be addressed in discussing the determination of precise orbits, such as force models, kinematic models, measurement models, and data reduction/estimation methods. Although each of these aspects was studied in research efforts at CSR, only points pertaining to force modeling are addressed here.

  11. SU-E-T-365: Estimation of Neutron Ambient Dose Equivalents for Radioprotection of Exposed Workers in Radiotherapy Facilities Based on the Characterization Procedure for Patient Risk Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Irazola, L; Terron, J; Sanchez-Doblado, F

    2015-06-15

    Purpose: Previous measurements with Bonner spheres [1] showed that normalized neutron spectra are essentially equal for the majority of existing linacs [2]. This information, in addition to the thermal neutron fluences obtained in the characterization procedure [3], would allow neutron doses accidentally received by exposed workers to be estimated without the need for an extra experimental measurement. Methods: Monte Carlo (MC) simulations demonstrated that the thermal neutron fluence distribution inside the bunker is quite uniform, as a consequence of multiple scatter in the walls [4]. Although the inverse square law is approximately valid for the fast component, a more precise calculation can be obtained with a generic fast-fluence distribution map around the linac, derived from MC simulations [4]. Thus, measurements of thermal neutron fluence performed during the characterization procedure [3], together with a generic unit spectrum [2], allow the total neutron fluence and H*(10) to be estimated at any point [5]. As an example, we compared these estimates with Bonner sphere measurements [1] at two points in five facilities: three Siemens (15–23 MV), one Elekta (15 MV), and one Varian (15 MV). Results: Thermal neutron fluences obtained from characterization are within (0.2–1.6)×10^6 cm^-2·Gy^-1 for the five studied facilities. This implies ambient dose equivalents ranging from 0.27–2.01 mSv/Gy at 50 cm from the isocenter and 0.03–0.26 mSv/Gy at the detector location, with an average deviation of ±12.1% with respect to the Bonner measurements. Conclusion: The good agreement obtained demonstrates that neutron fluence and H*(10) can be estimated based on (a) the characterization procedure established for patient risk estimation in each facility, (b) a generic unit neutron spectrum, and (c) a generic MC map of the fast-component distribution. [1] Radiat. Meas. (2010) 45:1391-1397; [2] Phys. Med. Biol. (2012) 57:6167-6191; [3] Med. Phys. (2015) 42:276-281; [4] IFMBE (2012) 39:1245-1248; [5] ICRU Report 57 (1998).
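
    The estimation chain summarized in the conclusion can be sketched in a few lines, assuming a thermal-to-total scaling taken from a generic unit spectrum and a spectrum-averaged fluence-to-H*(10) conversion coefficient. Every numeric value below is a hypothetical placeholder rather than facility data or published nuclear data.

```python
# Minimal sketch, assuming hypothetical scaling and conversion values:
# thermal fluence per Gy -> total fluence per Gy -> H*(10) per Gy.
thermal_fluence_per_gy = 8.0e5   # cm^-2 Gy^-1, from the characterization procedure (assumed)
total_over_thermal = 2.5         # thermal-to-total scaling from a generic unit spectrum (assumed)
h10_per_fluence = 4.0e-10        # Sv cm^2, assumed spectrum-averaged conversion coefficient

total_fluence_per_gy = thermal_fluence_per_gy * total_over_thermal
h10_per_gy_sv = total_fluence_per_gy * h10_per_fluence
print(f"H*(10) per treatment gray: {h10_per_gy_sv * 1e3:.2f} mSv/Gy")
```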

  12. Meta-analysis: the association of oesophageal adenocarcinoma with symptoms of gastro-oesophageal reflux

    PubMed Central

    Rubenstein, J. H.; Taylor, J. B.

    2012-01-01

    Background: Endoscopic screening has been proposed for patients with symptoms of gastro-oesophageal reflux disease (GERD) in the hope of reducing mortality from oesophageal adenocarcinoma. Assessing the net benefits of such a strategy requires a precise understanding of the cancer risk in the screened population. Aim: To estimate precisely the association between symptoms of GERD and oesophageal adenocarcinoma. Methods: Systematic review and meta-analysis of population-based studies with strict ascertainment of exposure and outcomes. Results: Five eligible studies were identified. At least weekly symptoms of GERD increased the odds of oesophageal adenocarcinoma fivefold (odds ratio = 4.92; 95% confidence interval = 3.90, 6.22), and daily symptoms increased the odds sevenfold (random-effects summary odds ratio = 7.40, 95% confidence interval = 4.94, 11.1), each compared with individuals without symptoms or with less frequent symptoms. Duration of symptoms was also associated with oesophageal adenocarcinoma, but with very heterogeneous results and unclear thresholds. Conclusions: Frequent GERD symptoms are strongly associated with oesophageal adenocarcinoma. These results should be useful in developing epidemiological models of the development of oesophageal adenocarcinoma, and in models of interventions aimed at reducing mortality from this cancer. PMID:20955441
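
    The random-effects summary quoted above is typically obtained with the DerSimonian-Laird moment estimator. The sketch below shows that calculation on hypothetical study odds ratios, not the data of the five included studies.

```python
# Minimal sketch, assuming hypothetical study results: DerSimonian-Laird
# random-effects pooling of odds ratios.
import numpy as np

odds_ratio = np.array([4.5, 5.8, 7.2, 3.9, 6.1])
ci_low = np.array([2.8, 3.5, 4.0, 2.1, 3.3])
ci_high = np.array([7.2, 9.6, 13.0, 7.2, 11.3])

y = np.log(odds_ratio)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
w = 1 / se**2

# Fixed-effect pooled value and Cochran's Q feed the tau^2 moment estimator.
mu_fe = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - mu_fe) ** 2)
df = len(y) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

w_re = 1 / (se**2 + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"random-effects OR = {np.exp(mu_re):.2f} "
      f"(95% CI {np.exp(mu_re - 1.96*se_re):.2f} to {np.exp(mu_re + 1.96*se_re):.2f})")
```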

  13. A generalised random encounter model for estimating animal density with remote sensor data.

    PubMed

    Lucas, Tim C D; Moorcroft, Elizabeth A; Freeman, Robin; Rowcliffe, J Marcus; Jones, Kate E

    2015-05-01

    Wildlife monitoring technology is advancing rapidly and the use of remote sensors such as camera traps and acoustic detectors is becoming common in both the terrestrial and marine environments. Current methods to estimate abundance or density require individual recognition of animals or knowing the distance of the animal from the sensor, which is often difficult. A method without these requirements, the random encounter model (REM), has been successfully applied to estimate animal densities from count data generated by camera traps. However, count data from acoustic detectors do not fit the assumptions of the REM due to the directionality of animal signals. We developed a generalised REM (gREM) to estimate absolute animal density from count data from both camera traps and acoustic detectors. We derived the gREM for different combinations of sensor detection widths and animal signal widths (a measure of directionality). We tested the accuracy and precision of this model using simulations of different combinations of sensor detection widths and animal signal widths, number of captures and models of animal movement. We find that the gREM produces accurate estimates of absolute animal density for all combinations of sensor detection widths and animal signal widths. However, estimates were more precise for larger sensor detection and animal signal widths. While the model is accurate for all capture efforts tested, the precision of the estimate increases with the number of captures. We found no effect of different animal movement models on the accuracy and precision of the gREM. We conclude that the gREM provides an effective method to estimate absolute animal densities from remote sensor count data over a range of sensor and animal signal widths. The gREM is applicable for count data obtained in both marine and terrestrial environments, visually or acoustically (e.g. big cats, sharks, birds, echolocating bats and cetaceans). As sensors such as camera traps and acoustic detectors become more ubiquitous, the gREM will be increasingly useful for monitoring unmarked animal populations across broad spatial, temporal and taxonomic scales.
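
    For context, the standard REM on which the gREM builds recovers density from the encounter rate, the animal's day range, and the sensor's detection geometry; the gREM replaces the geometric term with width-specific expressions that are not reproduced here. The sketch below shows the standard REM with hypothetical inputs.

```python
# Minimal sketch of the standard REM (not the gREM), with hypothetical inputs.
import math

def rem_density(encounters, camera_days, day_range_km, radius_km, angle_rad):
    """Standard REM: D = (y/t) * pi / (v * r * (2 + theta))."""
    trap_rate = encounters / camera_days
    return trap_rate * math.pi / (day_range_km * radius_km * (2 + angle_rad))

d = rem_density(encounters=40, camera_days=1000, day_range_km=4.0,
                radius_km=0.01, angle_rad=math.radians(40))
print(f"estimated density: {d:.1f} animals per km^2")
```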

  14. Modelling heterogeneity variances in multiple treatment comparison meta-analysis--are informative priors the better solution?

    PubMed

    Thorlund, Kristian; Thabane, Lehana; Mills, Edward J

    2013-01-11

    Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance: two novel model structures, and two approaches for the use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform suboptimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability, or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.
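
    The effect of a moderately informative heterogeneity prior can be illustrated outside a full MTC. The sketch below uses a grid approximation for a single random-effects comparison and contrasts a vague prior with an informative log-normal prior on the heterogeneity standard deviation; all effect estimates, variances, and prior settings are hypothetical placeholders, not the paper's models.

```python
# Minimal sketch, assuming hypothetical trial data: grid-approximate posterior
# for a random-effects meta-analysis under a vague vs. an informative tau prior.
import numpy as np

y = np.array([0.10, 0.35, -0.05, 0.25])   # hypothetical trial log odds ratios
v = np.array([0.04, 0.06, 0.05, 0.03])    # their sampling variances

mu_grid = np.linspace(-1.0, 1.0, 201)
tau_grid = np.linspace(0.01, 1.0, 100)

def log_post(informative):
    lp = np.zeros((mu_grid.size, tau_grid.size))
    for j, tau in enumerate(tau_grid):
        var = v + tau**2
        ll = -0.5 * np.sum(np.log(var)[None, :] +
                           (y[None, :] - mu_grid[:, None])**2 / var[None, :], axis=1)
        # Informative prior: log-normal on tau centred near 0.15; otherwise flat.
        prior = (-((np.log(tau) - np.log(0.15))**2) / (2 * 0.5**2)) if informative else 0.0
        lp[:, j] = ll + prior
    return lp

for label, informative in [("vague prior", False), ("informative prior", True)]:
    lp = log_post(informative)
    p = np.exp(lp - lp.max())
    p /= p.sum()
    mu_mean = np.sum(p.sum(axis=1) * mu_grid)
    tau_mean = np.sum(p.sum(axis=0) * tau_grid)
    print(f"{label}: posterior mean mu = {mu_mean:.3f}, tau = {tau_mean:.3f}")
```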

  15. Using spatial mark-recapture for conservation monitoring of grizzly bear populations in Alberta.

    PubMed

    Boulanger, John; Nielsen, Scott E; Stenhouse, Gordon B

    2018-03-26

    One of the challenges in conservation is determining patterns and responses in population density and distribution as they relate to habitat and changes in anthropogenic activities. We applied spatially explicit capture-recapture (SECR) methods, combined with density surface modelling, to data from five grizzly bear (Ursus arctos) management areas (BMAs) in Alberta, Canada, to assess SECR methods and to explore factors influencing bear distribution. Here we used models of grizzly bear habitat and mortality risk to test local density associations using density surface modelling. Results demonstrated that BMA-specific factors influenced density, as did the effects of habitat and topography on detections and movements of bears. Estimates from SECR were similar to those from closed-population models and telemetry data, with similar or higher levels of precision. Habitat was most associated with areas of higher bear density in the north, whereas mortality risk was most associated (negatively) with density of bears in the south. Comparisons of the distribution of mortality risk and habitat revealed differences by BMA that in turn influenced local abundance of bears. Combining SECR methods with density surface modelling increases the resolution of mark-recapture methods by directly inferring the effect of spatial factors on regulating local densities of animals.
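
    SECR models typically describe detection with a function that declines with the distance between a trap and an animal's activity centre; a common choice is the half-normal form sketched below, with hypothetical parameter values rather than estimates from the Alberta BMAs.

```python
# Minimal sketch of a half-normal SECR detection function, with hypothetical
# baseline detection probability (g0) and spatial scale (sigma).
import numpy as np

def halfnormal_detection(distance_km, g0=0.05, sigma_km=6.0):
    """P(detection at a trap) = g0 * exp(-d^2 / (2 * sigma^2))."""
    return g0 * np.exp(-distance_km**2 / (2 * sigma_km**2))

for d in (0.0, 5.0, 10.0, 20.0):
    print(f"d = {d:4.1f} km -> p = {halfnormal_detection(d):.4f}")
```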

  16. Epidemiology of Recurrent Acute and Chronic Pancreatitis: Similarities and Differences.

    PubMed

    Machicado, Jorge D; Yadav, Dhiraj

    2017-07-01

    Emerging data in the past few years suggest that acute, recurrent acute (RAP), and chronic pancreatitis (CP) represent a disease continuum. This review discusses the similarities and differences in the epidemiology of RAP and CP. RAP is a high-risk group, comprised of individuals at varying risk of progression. The premise is that RAP is an intermediary stage in the pathogenesis of CP, and a subset of RAP patients during their natural course transition to CP. Although many clinical factors have been identified, accurately predicting the probability of disease course in individual patients remains difficult. Future studies should focus on providing more precise estimates of the risk of disease transition in a cohort of patients, quantification of clinical events during the natural course of disease, and discovery of biomarkers of the different stages of the disease continuum. Availability of clinically relevant endpoints and linked biomarkers will allow more accurate prediction of the natural course of disease over intermediate- or long-term-based characteristics of an individual patient. These endpoints will also provide objective measures for use in clinical trials of interventions that aim to alter the natural course of disease.

  17. Correction algorithm for online continuous flow δ13C and δ18O carbonate and cellulose stable isotope analyses

    NASA Astrophysics Data System (ADS)

    Evans, M. N.; Selmer, K. J.; Breeden, B. T.; Lopatka, A. S.; Plummer, R. E.

    2016-09-01

    We describe an algorithm to correct for scale compression, runtime drift, and amplitude effects in carbonate and cellulose oxygen and carbon isotopic analyses made on two online continuous flow isotope ratio mass spectrometry (CF-IRMS) systems using gas chromatographic (GC) separation. We validate the algorithm by correcting measurements of samples of known isotopic composition which are not used to estimate the corrections. For carbonate δ13C (δ18O) data, median precision of validation estimates for two reference materials and two calibrated working standards is 0.05‰ (0.07‰); median bias is 0.04‰ (0.02‰) over a range of 49.2‰ (24.3‰). For α-cellulose δ13C (δ18O) data, median precision of validation estimates for one reference material and five working standards is 0.11‰ (0.27‰); median bias is 0.13‰ (-0.10‰) over a range of 16.1‰ (19.1‰). These results are within the 5th-95th percentile range of subsequent routine runtime validation exercises in which one working standard is used to calibrate the other. Analysis of the relative importance of correction steps suggests that drift and scale-compression corrections are most reliable and valuable. If validation precisions are not already small, routine cross-validated precision estimates are improved by up to 50% (80%). The results suggest that correction for systematic error may enable these particular CF-IRMS systems to produce δ13C and δ18O carbonate and cellulose isotopic analyses with higher validated precision, accuracy, and throughput than is typically reported for these systems. The correction scheme may be used in support of replication-intensive research projects in paleoclimatology and other data-intensive applications within the geosciences.
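
    A runtime drift correction of the general kind described can be sketched simply: fit a linear trend to repeated analyses of a working standard across the run sequence and subtract it from sample measurements. The delta values, sequence positions, and calibrated value below are hypothetical, and the authors' full algorithm also handles scale compression and amplitude effects, which are not shown here.

```python
# Minimal sketch, assuming hypothetical standard analyses: linear runtime drift
# correction anchored to a working standard's calibrated value.
import numpy as np

std_positions = np.array([2, 10, 20, 30, 40])                    # injection sequence numbers
std_delta = np.array([-10.02, -9.95, -9.90, -9.81, -9.74])       # measured d13C of standard (permil)
known_value = -9.90                                               # calibrated value of the standard

slope, intercept = np.polyfit(std_positions, std_delta, 1)

def drift_corrected(position, measured_delta):
    """Remove the fitted runtime trend, anchored to the standard's known value."""
    predicted_std = slope * position + intercept
    return measured_delta - (predicted_std - known_value)

print(f"drift: {slope:.4f} permil per injection")
print("corrected sample value:", round(drift_corrected(25, -24.60), 3))
```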

  18. Contact lens overrefraction variability in corneal power estimation after refractive surgery.

    PubMed

    Joslin, Charlotte E; Koster, James; Tu, Elmer Y

    2005-12-01

    To evaluate the accuracy and precision of the contact lens overrefraction (CLO) method in determining corneal refractive power in post-refractive-surgery eyes. Refractive Surgery Service and Contact Lens Service, University of Illinois, Chicago, Illinois, USA. Fourteen eyes of 7 subjects who had undergone a single myopic laser in situ keratomileusis procedure within the previous 12 months and had achieved refractive stability were included in this prospective case series. The CLO method was compared with the historical method of predicting the corneal power using 4 different lens fitting strategies and 3 refractive pupil scan sizes (3 mm, 5 mm, and total pupil). Rigid lenses included three 9.0 mm overall diameter lenses fit flat, steep, and to an average of the two, and a 15.0 mm diameter lens fit steep. Cycloplegic CLO was performed using the autorefractor function of the Nidek OPD-Scan ARK-10000. Results with each strategy were compared with the corneal power estimated with the historical method. The bias (mean of the difference), 95% limits of agreement, and difference versus mean plots for each strategy are presented. In each subject, the CLO-estimated corneal power varied based on lens fit. On average, the bias between CLO and historical methods ranged from -0.38 to +2.42 diopters (D) and was significantly different from 0 in all but 3 strategies. Substantial variability in precision existed between fitting strategies, with the range of the 95% limits of agreement approximating 0.50 D in 2 strategies and 2.59 D in the worst-case scenario. The least precise fitting strategy was use of flat-fitting 9.0 mm diameter lenses. The accuracy and precision of the CLO method of estimating corneal power in post-refractive-surgery eyes was highly variable on the basis of how rigid lenses were fit. One of the most commonly used fitting strategies in clinical practice, flat-fitting a 9.0 mm diameter lens, resulted in the poorest accuracy and precision. Results also suggest that the use of large-diameter lenses may improve outcomes.
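
    The agreement statistics reported (bias and 95% limits of agreement) follow the usual Bland-Altman calculation, sketched below on hypothetical paired corneal powers for a single fitting strategy.

```python
# Minimal sketch, assuming hypothetical paired measurements (dioptres):
# Bland-Altman bias and 95% limits of agreement between two methods.
import numpy as np

clo = np.array([38.2, 39.1, 40.4, 37.8, 41.0, 39.6, 38.9, 40.1])
historical = np.array([37.9, 38.6, 40.1, 37.2, 40.2, 39.0, 38.4, 39.5])

diff = clo - historical
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
print(f"bias = {bias:+.2f} D, 95% limits of agreement = "
      f"({loa[0]:+.2f}, {loa[1]:+.2f}) D")
```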

  19. Maximum likelihood-based analysis of single-molecule photon arrival trajectories.

    PubMed

    Hajdziona, Marta; Molski, Andrzej

    2011-02-07

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime, where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
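
    Model selection by BIC, as described above, penalizes the maximized log-likelihood by the number of free parameters times the log of the number of photons. The sketch below compares hypothetical two-, three-, and four-state Markov modulated Poisson models, taking m^2 free parameters for an m-state model (m intensities plus m(m-1) transition rates); the log-likelihood values are invented.

```python
# Minimal sketch, assuming hypothetical maximized log-likelihoods: BIC-based
# selection among competing kinetic models for a photon arrival trajectory.
import math

n_photons = 10_000
candidates = {
    # model name: (number of free parameters, maximized log-likelihood)
    "2-state": (4, -25431.0),
    "3-state": (9, -25370.0),
    "4-state": (16, -25366.5),
}

def bic(k, loglik, n):
    """BIC = k * ln(n) - 2 * ln(L); smaller is better."""
    return k * math.log(n) - 2 * loglik

scores = {name: bic(k, ll, n_photons) for name, (k, ll) in candidates.items()}
for name, s in scores.items():
    print(f"{name}: BIC = {s:.1f}")
print("selected model:", min(scores, key=scores.get))
```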

  20. A priori evaluation of two-stage cluster sampling for accuracy assessment of large-area land-cover maps

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.

    2004-01-01

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information needed to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol, focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and the costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.
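
    The precision penalty from within-cluster correlation can be gauged a priori with the usual design-effect approximation, deff = 1 + (m - 1) * rho. The sketch below uses hypothetical cluster sizes and an assumed correlation; it is not the NLCD evaluation protocol itself.

```python
# Minimal sketch, assuming hypothetical cluster sizes and an assumed
# within-cluster correlation: design effect and effective sample size.
n_clusters = 30
per_cluster = 20                 # sampled pixels per cluster
icc = 0.15                       # assumed within-cluster correlation of classification error

n_total = n_clusters * per_cluster
deff = 1 + (per_cluster - 1) * icc
effective_n = n_total / deff
print(f"design effect = {deff:.2f}; "
      f"{n_total} clustered samples ~ {effective_n:.0f} simple random samples")
```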
