Sample records for multiple censoring points

  1. Methodological issues underlying multiple decrement life table analysis.

    PubMed

    Mode, C J; Avery, R C; Littman, G S; Potter, R G

    1977-02-01

    In this paper, the actuarial method of multiple decrement life table analysis of censored, longitudinal data is examined. The discussion is organized in terms of the first segment of usage of an intrauterine device. Weaknesses of the actuarial approach are pointed out, and an alternative approach, based on the classical model of competing risks, is proposed. Finally, the actuarial and the alternative method of analyzing censored data are compared, using data from the Taichung Medical Study on Intrauterine Devices.

  2. Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.

    PubMed

    Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A

    2016-01-01

    Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete-case analysis and single imputation or substitution, suffer from inefficiency and bias: they make strong parametric assumptions or consider only limit-of-detection censoring. We employ multiple imputation, in conjunction with semiparametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the nonparametric estimate of the covariate distribution or the semiparametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations, and compare its operating characteristics to those from the complete-case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete-case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete-case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.
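
    The imputation idea in this abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the distribution of a right-censored covariate is estimated with Kaplan-Meier, censored values are drawn from the estimated conditional tail distribution, and logistic regression fits are pooled with Rubin's rules. The data, the use of lifelines/statsmodels, and all names are my own assumptions.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from lifelines import KaplanMeierFitter

    rng = np.random.default_rng(0)
    n = 500
    x_true = rng.exponential(2.0, n)                 # covariate subject to censoring
    c = rng.exponential(3.0, n)                      # random censoring times
    x_obs, observed = np.minimum(x_true, c), x_true <= c
    y = rng.binomial(1, 1 / (1 + np.exp(1.0 - 0.5 * x_true)))  # binary outcome

    kmf = KaplanMeierFitter().fit(x_obs, observed)   # KM estimate of the X distribution
    grid = kmf.survival_function_.index.values
    surv = kmf.survival_function_.values.ravel()

    def draw_beyond(x_c):
        """Draw X from the KM-estimated distribution conditional on X > x_c."""
        s_c = np.interp(x_c, grid, surv)
        u = rng.uniform(0, s_c)                      # uniform on (0, S(x_c))
        idx = min(np.searchsorted(-surv, -u), len(grid) - 1)
        return grid[idx]                             # invert the step survival curve

    M = 20
    est, var = [], []
    for _ in range(M):
        x_imp = x_obs.copy()
        x_imp[~observed] = [draw_beyond(v) for v in x_obs[~observed]]
        fit = sm.Logit(y, sm.add_constant(x_imp)).fit(disp=0)
        est.append(fit.params[1])
        var.append(fit.bse[1] ** 2)

    beta = np.mean(est)                              # Rubin's rules pooling
    total_var = np.mean(var) + (1 + 1 / M) * np.var(est, ddof=1)
    print(f"pooled slope {beta:.3f}, se {total_var ** 0.5:.3f}")
    ```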

  3. Optimal two-stage dynamic treatment regimes from a classification perspective with censored survival data.

    PubMed

    Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie

    2018-05-18

    Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.

  4. Analyzing survival curves at a fixed point in time for paired and clustered right-censored data

    PubMed Central

    Su, Pei-Fang; Chi, Yunchan; Lee, Chun-Yi; Shyr, Yu; Liao, Yi-De

    2018-01-01

    In clinical trials, information about certain time points may be of interest in making decisions about treatment effectiveness. Rather than comparing entire survival curves, researchers can focus on the comparison at fixed time points that may have a clinical utility for patients. For two independent samples of right-censored data, Klein et al. (2007) compared survival probabilities at a fixed time point by studying a number of tests based on some transformations of the Kaplan-Meier estimators of the survival function. However, to compare the survival probabilities at a fixed time point for paired right-censored data or clustered right-censored data, their approach would need to be modified. In this paper, we extend the statistics to accommodate the possible within-paired correlation and within-clustered correlation, respectively. We use simulation studies to present comparative results. Finally, we illustrate the implementation of these methods using two real data sets. PMID:29456280
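
    For the two-independent-sample setting that this paper extends, the fixed-time-point comparison of Klein et al. (2007) can be sketched as follows: compute each group's Kaplan-Meier estimate and Greenwood variance at t0, transform, and form a z-statistic. This is a minimal sketch under a complementary log-log transformation; the paired and clustered extensions of the paper need modified variance terms that are not shown here, and the tie/censoring ordering at identical times is handled only approximately.

    ```python
    import numpy as np
    from scipy.stats import norm

    def km_greenwood(t0, time, event):
        """Kaplan-Meier S(t0) and its Greenwood variance for one sample."""
        order = np.argsort(time)
        time, event = np.asarray(time)[order], np.asarray(event)[order]
        s, gsum = 1.0, 0.0
        for i in range(len(time)):
            if time[i] > t0:
                break
            n_risk = len(time) - i                  # at risk just before time[i]
            if event[i] and n_risk > 1:
                s *= 1 - 1 / n_risk
                gsum += 1 / (n_risk * (n_risk - 1))
        return s, s ** 2 * gsum

    def fixed_time_test(t0, t1, e1, t2, e2):
        """Two-sided z-test of S1(t0) = S2(t0) on the cloglog scale."""
        s1, v1 = km_greenwood(t0, t1, e1)
        s2, v2 = km_greenwood(t0, t2, e2)
        g = lambda s: np.log(-np.log(s))            # cloglog transformation
        dv = lambda s, v: v / (s * np.log(s)) ** 2  # delta-method variance
        z = (g(s1) - g(s2)) / np.sqrt(dv(s1, v1) + dv(s2, v2))
        return z, 2 * norm.sf(abs(z))
    ```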

  5. The Censored Mean-Level Detector for Multiple Target Environments.

    DTIC Science & Technology

    1984-03-01

    This report analyzes a class of constant false alarm rate (CFAR) detectors known as censored mean-level detectors (CMLD). The CMLD is a generalization of the traditional mean-level detector (MLD), or cell-averaged CFAR detector, which it includes as a special case, and is designed for multiple target environments.

  6. A review of statistical issues with progression-free survival as an interval-censored time-to-event endpoint.

    PubMed

    Sun, Xing; Li, Xiaoyun; Chen, Cong; Song, Yang

    2013-01-01

    The frequent occurrence of interval-censored time-to-event data in randomized clinical trials (e.g., progression-free survival [PFS] in oncology) challenges statistical researchers in the pharmaceutical industry in various ways. These challenges exist in both trial design and data analysis. Conventional statistical methods treating intervals as fixed points, as generally practiced by the pharmaceutical industry, sometimes yield inferior or even flawed analysis results for interval-censored data in extreme cases. In this article, we examine the limitations of these standard methods under typical clinical trial settings and further review and compare several existing nonparametric likelihood-based methods for interval-censored data, methods that are more sophisticated but more robust. Trial design issues involved with interval-censored data comprise another topic explored in this article. Unlike right-censored survival data, the expected sample size or power for a trial with interval-censored data relies heavily on the parametric distribution of the baseline survival function as well as the frequency of assessments. There can be substantial power loss in trials with interval-censored data if the assessments are very infrequent. Such an additional dependency controverts many fundamental assumptions and principles in conventional survival trial designs, especially the group sequential design (e.g., the concept of information fraction). In this article, we discuss these fundamental changes and available tools to work around their impacts. Although progression-free survival is often used as a discussion point in the article, the general conclusions are equally applicable to other interval-censored time-to-event endpoints.

  7. Using multiple classifiers for predicting the risk of endovascular aortic aneurysm repair re-intervention through hybrid feature selection.

    PubMed

    Attallah, Omneya; Karthikesalingam, Alan; Holt, Peter Je; Thompson, Matthew M; Sayers, Rob; Bown, Matthew J; Choke, Eddie C; Ma, Xianghong

    2017-11-01

    Feature selection is essential in the medical area; however, the process becomes complicated in the presence of censoring, the unique characteristic of survival analysis. Most survival feature selection methods are based on Cox's proportional hazards model, even though machine learning classifiers are often preferred; the latter are less commonly employed in survival analysis because censoring prevents them from being applied directly to survival data. Among the few works that employed machine learning classifiers, the partial logistic artificial neural network with auto-relevance determination is a well-known method that deals with censoring and performs feature selection for survival data. However, it depends on data replication to handle censoring, which leads to unbalanced and biased prediction results, especially in highly censored data. Other methods cannot deal with high censoring. Therefore, in this article, a new hybrid feature selection method is proposed which presents a solution to high levels of censoring. It combines support vector machine, neural network, and K-nearest neighbor classifiers using simple majority voting and a new weighted majority voting method based on a survival metric to construct a multiple classifier system. The new hybrid feature selection process uses the multiple classifier system as a wrapper method and merges it with an iterated feature ranking filter method to further reduce features. Two endovascular aortic repair datasets containing 91% censored patients, collected from two centers, were used to construct a multicenter study to evaluate the performance of the proposed approach. The results showed the proposed technique outperformed individual classifiers and variable selection methods based on Cox's model, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator, in p values of the log-rank test, sensitivity, and concordance index. This indicates that the proposed classifier is more powerful in correctly predicting the risk of re-intervention, enabling doctors to select patients' future follow-up plans.

  8. Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.

    PubMed

    Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen

    2015-05-01

    Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently-conducted GenIMS study.

  9. Assessing assay agreement estimation for multiple left-censored data: a multiple imputation approach.

    PubMed

    Lapidus, Nathanael; Chevret, Sylvie; Resche-Rigon, Matthieu

    2014-12-30

    Agreement between two assays is usually based on the concordance correlation coefficient (CCC), estimated from the means, standard deviations, and correlation coefficient of these assays. However, such data will often suffer from left-censoring because of lower limits of detection of these assays. To handle such data, we propose to extend a multiple imputation approach by chained equations (MICE) developed in a close setting of one left-censored assay. The performance of this two-step approach is compared with that of a previously published maximum likelihood estimation through a simulation study. Results show close estimates of the CCC by both methods, although the coverage is improved by our MICE proposal. An application to cytomegalovirus quantification data is provided. Copyright © 2014 John Wiley & Sons, Ltd.

  10. Adjusted variable plots for Cox's proportional hazards regression model.

    PubMed

    Hall, C B; Zeger, S L; Bandeen-Roche, K J

    1996-01-01

    Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fit to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.

  11. Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.

    PubMed

    Kim, Yuneung; Lim, Johan; Park, DoHwan

    2015-11-01

    In this paper, we study a nonparametric procedure to test independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To do so, we propose a score-based modification of Kendall's tau statistic for bivariate interval-censored data. Our modification defines Kendall's tau statistic with expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and application to the AIDS study. We compare our method to alternative approaches such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Covariate analysis of bivariate survival data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators, which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach, these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach, censored data points are also replaced by their conditional expected values, where the expected values are determined from a specified parametric distribution. The model estimation is based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey were analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models are compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
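
    The core Buckley-James step mentioned here, replacing a censored observation by its conditional expectation, is easy to illustrate in the univariate case. The following is a sketch of that step only (not the bivariate extension of the abstract), using a Kaplan-Meier curve: the conditional mean beyond a censoring time c is c plus the integrated tail of the survival curve divided by S(c). Data and names are illustrative.

    ```python
    import numpy as np
    from lifelines import KaplanMeierFitter

    def conditional_mean_beyond(c, grid, surv):
        """E[T | T > c] for a step survival curve: c + (1/S(c)) * int_c^tmax S(t) dt."""
        s_c = np.interp(c, grid, surv)
        ts = np.concatenate(([c], grid[grid > c]))
        ss = np.concatenate(([s_c], surv[grid > c]))
        tail_area = np.sum(ss[:-1] * np.diff(ts))   # rectangle rule on the steps
        return c + tail_area / s_c

    # toy data: times with event indicators (0 = censored)
    t = np.array([2.0, 3.5, 4.0, 6.0, 7.5, 9.0])
    e = np.array([1, 0, 1, 1, 0, 1])
    kmf = KaplanMeierFitter().fit(t, e)
    grid = kmf.survival_function_.index.values
    surv = kmf.survival_function_.values.ravel()

    # replace each censored time by its conditional expected failure time
    t_star = np.where(e == 1, t,
                      [conditional_mean_beyond(v, grid, surv) for v in t])
    ```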

  13. Cure rate model with interval censored data.

    PubMed

    Kim, Yang-Jin; Jhun, Myoungshic

    2008-01-15

    In cancer trials, a significant fraction of patients can be cured, that is, the disease is completely eliminated, so that it never recurs. In general, treatments are developed both to increase the patients' chances of being cured and to prolong the survival time of non-cured patients. A cure rate model represents a combination of a cure fraction and a survival model, and can be applied to many clinical studies over several types of cancer. In this article, the cure rate model is considered for interval-censored data composed of two time points that bracket the event time of interest. Interval-censored data commonly occur in studies of diseases that often progress without symptoms, requiring clinical evaluation for detection (Encyclopedia of Biostatistics. Wiley: New York, 1998; 2090-2095). In our study, an approximate likelihood approach suggested by Goetghebeur and Ryan (Biometrics 2000; 56:1139-1144) is used to derive the likelihood for interval-censored data. In addition, a frailty model is introduced to characterize the association between the cure fraction and the survival model. In particular, the positive association between the cure fraction and the survival time is incorporated by imposing a common normal frailty effect. The EM algorithm is used to estimate parameters, and a multiple imputation based on the profile likelihood is adopted for variance estimation. The approach is applied to a smoking cessation study in which the event of interest is a smoking relapse and several covariates, including an intensive care treatment, are evaluated to be effective for both the occurrence of relapse and the non-smoking duration. Copyright (c) 2007 John Wiley & Sons, Ltd.

  14. Multiple imputation for cure rate quantile regression with censored data.

    PubMed

    Wu, Yuanshan; Yin, Guosheng

    2017-03-01

    The main challenge in the context of cure rate analysis is that one never knows whether censored subjects are cured or uncured, or whether they are susceptible or insusceptible to the event of interest. Considering the susceptible indicator as missing data, we propose a multiple imputation approach to cure rate quantile regression for censored data with a survival fraction. We develop an iterative algorithm to estimate the conditionally uncured probability for each subject. By utilizing this estimated probability and Bernoulli sample imputation, we can classify each subject as cured or uncured, and then employ the locally weighted method to estimate the quantile regression coefficients with only the uncured subjects. Repeating the imputation procedure multiple times and taking an average over the resultant estimators, we obtain consistent estimators for the quantile regression coefficients. Our approach relaxes the usual global linearity assumption, so that we can apply quantile regression to any particular quantile of interest. We establish asymptotic properties for the proposed estimators, including both consistency and asymptotic normality. We conduct simulation studies to assess the finite-sample performance of the proposed multiple imputation method and apply it to a lung cancer study as an illustration. © 2016, The International Biometric Society.

  15. Statistical analysis of water-quality data containing multiple detection limits: S-language software for regression on order statistics

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2005-01-01

    Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
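
    As a flavor of the ROS procedure described here, the following sketch handles only the simplest case: a single detection limit below all detects and a lognormal model. The published software handles multiple limits via Hirsch-Stedinger plotting positions, which this sketch does not attempt; data and names are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def simple_ros(values, censored):
        """Impute censored values by regressing log(detects) on normal quantiles."""
        n = len(values)
        order = np.argsort(values)
        z = stats.norm.ppf((np.arange(1, n + 1) - 0.375) / (n + 0.25))  # Blom scores
        det = ~np.asarray(censored)[order]
        slope, intercept, *_ = stats.linregress(z[det], np.log(values[order][det]))
        out = np.asarray(values, float)[order]
        out[~det] = np.exp(intercept + slope * z[~det])   # modeled "less-thans"
        return out                                        # detects + imputed values

    # toy data: "<1" reported as 1.0 with censored=True
    vals = np.array([1.0, 1.0, 1.0, 1.3, 2.2, 2.9, 4.1, 7.4])
    cens = np.array([True, True, True, False, False, False, False, False])
    filled = simple_ros(vals, cens)
    print(filled.mean(), np.median(filled))   # summary statistics of the modeled data
    ```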

  16. Markov chains and semi-Markov models in time-to-event analysis.

    PubMed

    Abner, Erin L; Charnigo, Richard J; Kryscio, Richard J

    2013-10-25

    A variety of statistical methods are available to investigators for analysis of time-to-event data, often referred to as survival analysis. Kaplan-Meier estimation and Cox proportional hazards regression are commonly employed tools but are not appropriate for all studies, particularly in the presence of competing risks and when multiple or recurrent outcomes are of interest. Markov chain models can accommodate censored data, competing risks (informative censoring), multiple outcomes, recurrent outcomes, frailty, and non-constant survival probabilities. Markov chain models, though often overlooked by investigators in time-to-event analysis, have long been used in clinical studies and have widespread application in other fields.
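
    A minimal sketch of the kind of model the authors describe: a discrete-time three-state chain (healthy, ill, dead) in which death acts as a competing risk. The transition probabilities below are made-up illustrative values, not estimates from any study.

    ```python
    import numpy as np

    # rows/cols: healthy, ill, dead (absorbing); probabilities per cycle (illustrative)
    P = np.array([[0.90, 0.07, 0.03],
                  [0.05, 0.80, 0.15],
                  [0.00, 0.00, 1.00]])

    state = np.array([1.0, 0.0, 0.0])       # cohort starts healthy
    occupancy = [state.copy()]
    for _ in range(20):                     # 20 cycles
        state = state @ P
        occupancy.append(state.copy())
    occupancy = np.array(occupancy)

    # column 0: P(still healthy), column 2: cumulative mortality, by cycle;
    # non-constant survival probabilities are handled by letting P vary with cycle
    print(occupancy[[1, 5, 20], :].round(3))
    ```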

  17. An application of a zero-inflated lifetime distribution with multiple and incomplete data sources

    DOE PAGES

    Hamada, M. S.; Margevicius, K. J.

    2016-02-11

    In this study, we analyze data sampled from a population of parts in which an associated anomaly can occur at assembly or after assembly. Using a zero-inflated lifetime distribution to fit left-censored and right-censored data as well as data from a supplementary sample, we make predictions about the proportion of the population with anomalies today and in the future. Goodness-of-fit is also addressed.

  18. The Gumbel hypothesis test for left censored observations using regional earthquake records as an example

    NASA Astrophysics Data System (ADS)

    Thompson, E. M.; Hewlett, J. B.; Baise, L. G.; Vogel, R. M.

    2011-01-01

    Annual maximum (AM) time series are incomplete (i.e., censored) when no events are included above the assumed censoring threshold (i.e., magnitude of completeness). We introduce a distributional hypothesis test for left-censored Gumbel observations based on the probability plot correlation coefficient (PPCC). Critical values of the PPCC hypothesis test statistic are computed from Monte Carlo simulations and are a function of sample size, censoring level, and significance level. When applied to a global catalog of earthquake observations, the left-censored Gumbel PPCC tests are unable to reject the Gumbel hypothesis for 45 of 46 seismic regions. We apply four different field significance tests for combining individual tests into a collective hypothesis test. None of the field significance tests are able to reject the global hypothesis that AM earthquake magnitudes arise from a Gumbel distribution. Because the field significance levels are not conclusive, we also compute the likelihood that these field significance tests are unable to reject the Gumbel model when the samples arise from a more complex distributional alternative. A power study documents that the censored Gumbel PPCC test is unable to reject some important and viable Generalized Extreme Value (GEV) alternatives. Thus, we cannot rule out the possibility that the global AM earthquake time series could arise from a GEV distribution with a finite upper bound, also known as a reverse Weibull distribution. Our power study also indicates that the binomial and uniform field significance tests are substantially more powerful than the more commonly used Bonferroni and false discovery rate multiple comparison procedures.
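
    The test statistic and its Monte Carlo critical values can be sketched directly. This is an illustrative reading of the procedure, not the authors' code: correlate the uncensored order statistics with Gumbel quantiles at their plotting positions (keeping the full sample size in the positions), and calibrate the rejection point by simulation. The Gringorten plotting positions are my choice, not necessarily the paper's.

    ```python
    import numpy as np
    from scipy import stats

    def censored_gumbel_ppcc(x, threshold):
        """PPCC of order statistics above a left-censoring threshold vs. Gumbel
        quantiles; plotting positions use the full sample size n."""
        x = np.asarray(x)
        n = len(x)
        kept = np.sort(x[x >= threshold])
        m = n - len(kept)                                    # censored count
        pp = (np.arange(m + 1, n + 1) - 0.44) / (n + 0.12)   # Gringorten positions
        return np.corrcoef(kept, stats.gumbel_r.ppf(pp))[0, 1]

    def mc_critical_value(n, frac_cens, alpha=0.05, reps=5000, seed=1):
        """Monte Carlo critical value: the alpha-quantile of the PPCC under H0."""
        rng = np.random.default_rng(seed)
        sims = []
        for _ in range(reps):
            x = stats.gumbel_r.rvs(size=n, random_state=rng)
            sims.append(censored_gumbel_ppcc(x, np.quantile(x, frac_cens)))
        return np.quantile(sims, alpha)

    # reject the Gumbel hypothesis if the observed PPCC falls below the critical value
    ```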

  19. Statistical analysis of water-quality data containing multiple detection limits II: S-language software for nonparametric distribution modeling and hypothesis testing

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2007-01-01

    Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis", where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation or interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
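
    The trick that makes K-M applicable to left-censored concentrations is to flip the data so left-censoring becomes right-censoring. A minimal sketch, with lifelines' Kaplan-Meier fitter standing in for the S-language routines described above; data, names, and the choice of flipping constant are illustrative assumptions.

    ```python
    import numpy as np
    from lifelines import KaplanMeierFitter

    def ecdf_left_censored(values, censored):
        """ECDF of left-censored data: flip about a constant M, run Kaplan-Meier
        on the right-censored flipped data, then map back (P(X < x) = S_Y(M - x))."""
        values = np.asarray(values, float)
        M = values.max() + 1.0                      # any constant above the data
        kmf = KaplanMeierFitter().fit(M - values,
                                      event_observed=~np.asarray(censored))
        grid = kmf.survival_function_.index.values
        surv = kmf.survival_function_.values.ravel()
        return (M - grid)[::-1], surv[::-1]         # x ascending, F(x) ascending

    # toy data: three "<1" values and five detects
    vals = np.array([1.0, 1.0, 1.0, 1.3, 2.2, 2.9, 4.1, 7.4])
    cens = np.array([True, True, True, False, False, False, False, False])
    x, F = ecdf_left_censored(vals, cens)
    median = x[np.searchsorted(F, 0.5)]             # nonparametric median estimate
    ```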

  20. Statistical inferences with jointly type-II censored samples from two Pareto distributions

    NASA Astrophysics Data System (ADS)

    Abu-Zinadah, Hanaa H.

    2017-08-01

    In several industries the product comes from more than one production line, which calls for comparative life tests. Sampling from the different production lines gives rise to the joint censoring scheme. In this article we consider the Pareto lifetime distribution under a jointly type-II censoring scheme. The maximum likelihood estimators (MLEs) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals, of the model parameters are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes, and Monte Carlo simulation results are presented to assess the performance of our proposed method.

  21. Estimation of indirect effect when the mediator is a censored variable.

    PubMed

    Wang, Jian; Shete, Sanjay

    2017-01-01

    A mediation model explores the direct and indirect effects of an initial variable (X) on an outcome variable (Y) by including a mediator (M). In many realistic scenarios, investigators observe censored data instead of the complete data. Current research in mediation analysis for censored data focuses mainly on censored outcomes, but not censored mediators. In this study, we proposed a strategy based on the accelerated failure time model and a multiple imputation approach. We adapted a measure of the indirect effect for the mediation model with a censored mediator, which can assess the indirect effect at both the group and individual levels. Based on simulation, we established the bias in the estimations of different paths (i.e. the effects of X on M [a], of M on Y [b], and of X on Y given mediator M [c']) and indirect effects when analyzing the data using the existing approaches, including a naïve approach implemented in software such as Mplus, complete-case analysis, and the Tobit mediation model. We conducted simulation studies to investigate the performance of the proposed strategy compared to that of the existing approaches. The proposed strategy accurately estimates the coefficients of different paths, indirect effects and percentages of the total effects mediated. We applied these mediation approaches to the study of SNPs, age at menopause and fasting glucose levels. Our results indicate that there is no indirect effect of association between SNPs and fasting glucose level that is mediated through the age at menopause.

  22. Dental age assessment of adolescents and emerging adults in United Kingdom Caucasians using censored data for stage H of third molar roots.

    PubMed

    Boonpitaksathit, Teelana; Hunt, Nigel; Roberts, Graham J; Petrie, Aviva; Lucas, Victoria S

    2011-10-01

    The root of the third permanent molar is the only dental structure that continues development after completion of growth of the second permanent molar. It is claimed that the lack of a clearly defined end point for completion of growth of the third permanent molar means that this tooth cannot be used for dental age assessment. The aim of this study was to estimate the mean age of attainment of the four stages (E, F, G, and H) of root development of the third molar. The way in which the end point of completion of stage H can be identified is described. A total of 1223 dental panoramic tomographs (DPTs) available in the archives of the Eastman Dental Hospital, London, were used for this study. The ages of the subjects ranged from 12.6 to 24.9 years with 63 per cent of the sample being female. Demirjian's tooth development stages (TDSs), for the first and second molars, were applied to the third molars by a single examiner. For each of stages E, F, and G and for stage H censored data, the mean ages of the males and females were compared, separately within each tooth morphology type using the two-sample t-test (P < 0.01). The same test was used to compare the mean ages of the upper and lower third molars on each side, separately for each gender. The mean age of attainment and the 99 per cent confidence interval (CI) for each TDS were calculated for each third molar. The final stage H data were appropriately censored to exclude data above the age of completion of root growth. The results showed that, for each gender, the age in years at which individuals attained each of the four TDSs was approximately normally distributed. The mean age for appropriately censored data was always lower than the corresponding mean age of the inappropriately censored data for stage H (male UR8 19.57, UL8 19.53, LL8 19.91, and LR8 20.02 and female UR8 20.08, UL8 20.13, LL8 20.78, and LR8 20.70). This inappropriately censored data overestimated the mean age for stage H. The appropriately censored data for the TDSs of the third molar may be used to estimate the age of adolescents and emerging adults assuming average growth and development and recent attainment of stage H.

  23. Dental Age Estimation (DAE): Data management for tooth development stages including the third molar. Appropriate censoring of Stage H, the final stage of tooth development.

    PubMed

    Roberts, Graham J; McDonald, Fraser; Andiappan, Manoharan; Lucas, Victoria S

    2015-11-01

    The final stage of dental development of third molars is usually helpful to indicate whether or not a subject is aged over 18 years. A complexity is that the final stage of development is unlimited at its upper border. Investigators usually select an inappropriate upper age limit, or censor point, for this tooth development stage. The literature was searched for data sets appropriate for dental age estimation that provided the count (n), the mean (x̄), and the standard deviation (sd) for each of the tooth development stages. Demirjian Stages G and H were used for this study. Upper and lower limits of the Stage G and Stage H data were calculated by limiting the data to plus or minus three standard deviations from the mean. The upper border of Stage H was limited by appropriate censoring at the maximum value for Stage G. The maximum age at attainment from published data, for Stage H, ranged from 22.60 years to 34.50 years. These data were explored to demonstrate how censoring provides an estimate of the correct maximum age for the final stage, Stage H, as 21.64 years for UK Caucasians. This study shows that confining the data array of individual tooth development stages to ±3 sd provides a reliable and logical way of censoring the data for tooth development stages with a Normal distribution of data. For Stage H this is inappropriate, as it is unbounded in its upper limit. The use of a censored data array for Stage H using percentile values is appropriate. This increases the reliability of using third molar Stage H alone to determine whether or not an individual is over 18 years old. For Stage H, individual ancestral groups should be censored using the same technique. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  24. Effects of Cinacalcet on Fracture Events in Patients Receiving Hemodialysis: The EVOLVE Trial.

    PubMed

    Moe, Sharon M; Abdalla, Safa; Chertow, Glenn M; Parfrey, Patrick S; Block, Geoffrey A; Correa-Rotter, Ricardo; Floege, Jürgen; Herzog, Charles A; London, Gerard M; Mahaffey, Kenneth W; Wheeler, David C; Dehmel, Bastian; Goodman, William G; Drüeke, Tilman B

    2015-06-01

    Fractures are frequent in patients receiving hemodialysis. We tested the hypothesis that cinacalcet would reduce the rate of clinical fractures in patients receiving hemodialysis using data from the Evaluation of Cinacalcet HCl Therapy to Lower Cardiovascular Events trial, a placebo-controlled trial that randomized 3883 hemodialysis patients with secondary hyperparathyroidism to receive cinacalcet or placebo for ≤64 months. This study was a prespecified secondary analysis of the trial whose primary end point was all-cause mortality and non-fatal cardiovascular events, and one of the secondary end points was first clinical fracture event. Clinical fractures were observed in 255 of 1935 (13.2%) patients randomized to placebo and 238 of 1948 (12.2%) patients randomized to cinacalcet. In an unadjusted intention-to-treat analysis, the relative hazard for fracture (cinacalcet versus placebo) was 0.89 (95% confidence interval [95% CI], 0.75 to 1.07). After adjustment for baseline characteristics and multiple fractures, the relative hazard was 0.83 (95% CI, 0.72 to 0.98). Using a prespecified lag-censoring analysis (a measure of actual drug exposure), the relative hazard for fracture was 0.72 (95% CI, 0.58 to 0.90). When participants were censored at the time of cointerventions (parathyroidectomy, transplant, or provision of commercial cinacalcet), the relative hazard was 0.71 (95% CI, 0.58 to 0.87). Fracture rates were higher in older compared with younger patients and the effect of cinacalcet appeared more pronounced in older patients. In conclusion, using an unadjusted intention-to-treat analysis, cinacalcet did not reduce the rate of clinical fracture. However, when accounting for differences in baseline characteristics, multiple fractures, and/or events prompting discontinuation of study drug, cinacalcet reduced the rate of clinical fracture by 16%-29%. Copyright © 2015 by the American Society of Nephrology.

  25. Economic values under inappropriate normal distribution assumptions.

    PubMed

    Sadeghi-Sefidmazgi, A; Nejati-Javaremi, A; Moradi-Shahrbabak, M; Miraei-Ashtiani, S R; Amer, P R

    2012-08-01

    The objectives of this study were to quantify the errors in economic values (EVs) for traits affected by cost or price thresholds when skewed or kurtotic distributions of varying degree are assumed to be normal, and when data with a normal distribution are subject to censoring. EVs were estimated for a continuous trait with dichotomous economic implications because of a price premium or penalty arising from a threshold ranging between -4 and 4 standard deviations from the mean. To evaluate the impacts of skewness and of positive and negative excess kurtosis, the standard skew normal, Pearson, and raised cosine distributions were used, respectively. For the various evaluable levels of skewness and kurtosis, the results showed that EVs can be underestimated or overestimated by more than 100% when price-determining thresholds fall within a range from the mean that might be expected in practice. Estimates of EVs were very sensitive to censoring or missing data. In contrast to practical genetic evaluation, economic evaluation is very sensitive to lack of normality and missing data. Although in some special situations the presence of multiple thresholds may attenuate the combined effect of errors at each threshold point, in practical situations there is a tendency for a few key thresholds to dominate the EV, and there are many situations where errors could be compounded across multiple thresholds. In the development of breeding objectives for non-normal continuous traits influenced by value thresholds, it is necessary to select a transformation that will resolve problems of non-normality or to consider alternative methods that are less sensitive to non-normality.

  26. Estimation for coefficient of variation of an extension of the exponential distribution under type-II censoring scheme

    NASA Astrophysics Data System (ADS)

    Bakoban, Rana A.

    2017-08-01

    The coefficient of variation (CV) has several applications in applied statistics. In this paper, we therefore adopt Bayesian and non-Bayesian approaches to the estimation of the CV under type-II censored data from the extension of the exponential distribution (EED). Point and interval estimates of the CV are obtained using both maximum likelihood and parametric bootstrap techniques. A Bayesian approach with the help of an MCMC method is also presented. A real data set is presented and analyzed, and the results are used to assess the theoretical findings.

  27. Substituting values for censored data from Texas, USA, reservoirs inflated and obscured trends in analyses commonly used for water quality target development.

    PubMed

    Grantz, Erin; Haggard, Brian; Scott, J Thad

    2018-06-12

    We calculated four median datasets (chlorophyll a, Chl a; total phosphorus, TP; and transparency) using multiple approaches to handling censored observations, including substituting fractions of the quantification limit (QL; dataset 1 = 1QL, dataset 2 = 0.5QL) and statistical methods for censored datasets (datasets 3-4), for approximately 100 Texas, USA reservoirs. Trend analyses of differences between dataset 1 and 3 medians indicated that percent difference increased linearly above thresholds in percent censored data (%Cen). This relationship was extrapolated to estimate medians for site-parameter combinations with %Cen > 80%, which were combined with dataset 3 as dataset 4. Changepoint analysis of Chl a- and transparency-TP relationships indicated threshold differences up to 50% between datasets. Recursive analysis identified secondary thresholds in dataset 4. Threshold differences show that information introduced via substitution or missing due to limitations of statistical methods biased values, underestimated error, and inflated the strength of TP thresholds identified in datasets 1-3. Analysis of covariance identified differences in linear regression models relating transparency to TP between datasets 1, 2, and the more statistically robust datasets 3-4. Study findings identify high-risk scenarios for biased analytical outcomes when using substitution. These include a high probability of median overestimation when %Cen > 50-60% for a single QL, or when %Cen is as low as 16% for multiple QLs. Changepoint analysis was uniquely vulnerable to substitution effects when using medians from sites with %Cen > 50%. Linear regression analysis was less sensitive to substitution and missing-data effects, but differences in model parameters for transparency cannot be discounted and could be magnified by log-transformation of the variables.

  28. Accounting for undetected compounds in statistical analyses of mass spectrometry 'omic studies.

    PubMed

    Taylor, Sandra L; Leiserowitz, Gary S; Kim, Kyoungmi

    2013-12-01

    Mass spectrometry is an important high-throughput technique for profiling small molecular compounds in biological samples and is widely used to identify potential diagnostic and prognostic compounds associated with disease. Commonly, the data generated by mass spectrometry have many missing values, which result when a compound is absent from a sample or is present but at a concentration below the detection limit. Several strategies are available for statistically analyzing data with missing values. The accelerated failure time (AFT) model assumes all missing values result from censoring below a detection limit. Under a mixture model, missing values can result from a combination of censoring and the absence of a compound. We compare the power and estimation of a mixture model to those of an AFT model. Based on simulated data, we found the AFT model to have greater power to detect differences in means and point mass proportions between groups. However, the AFT model yielded biased estimates, with the bias increasing as the proportion of observations in the point mass increased, while estimates were unbiased with the mixture model except when all missing observations came from censoring. These findings suggest using the AFT model for hypothesis testing and the mixture model for estimation. We demonstrated this approach through application to glycomics data of serum samples from women with ovarian cancer and matched controls.
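
    The two competing likelihoods in this abstract are easy to write down for a single compound. A sketch under a normal model on the log-abundance scale, with dl a common detection limit: the AFT-style (censored-normal) model treats every missing value as censored below dl, while the mixture adds a point-mass probability p that the compound is truly absent. All data and names are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import optimize, stats

    def aft_nll(theta, y_obs, dl, n_missing):
        """Censored-normal (Tobit/AFT-style) likelihood: all missing are < dl."""
        mu, log_sig = theta
        sig = np.exp(log_sig)
        return -(stats.norm.logpdf(y_obs, mu, sig).sum()
                 + n_missing * stats.norm.logcdf(dl, mu, sig))

    def mixture_nll(theta, y_obs, dl, n_missing):
        """Mixture: missing = absent (prob p) or present but below dl."""
        mu, log_sig, logit_p = theta
        sig, p = np.exp(log_sig), 1 / (1 + np.exp(-logit_p))
        return -(stats.norm.logpdf(y_obs, mu, sig).sum()
                 + n_missing * np.log(p + (1 - p) * stats.norm.cdf(dl, mu, sig)))

    # toy data: observed log-abundances and 40 missing values below dl = 0
    y_obs = np.random.default_rng(2).normal(1.0, 1.0, 160)
    dl, n_mis = 0.0, 40
    aft = optimize.minimize(aft_nll, [0.0, 0.0], args=(y_obs, dl, n_mis))
    mix = optimize.minimize(mixture_nll, [0.0, 0.0, 0.0], args=(y_obs, dl, n_mis))
    print(aft.x, mix.x)   # compare fitted location/scale under the two models
    ```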

  29. Correcting for dependent censoring in routine outcome monitoring data by applying the inverse probability censoring weighted estimator.

    PubMed

    Willems, Sjw; Schat, A; van Noorden, M S; Fiocco, M

    2018-02-01

    Censored data make survival analysis more complicated because exact event times are not observed. Statistical methodology developed to account for censored observations assumes that patients' withdrawal from a study is independent of the event of interest. However, in practice, some covariates might be associated with both lifetime and the censoring mechanism, inducing dependent censoring. In this case, standard survival techniques, like the Kaplan-Meier estimator, give biased results. The inverse probability censoring weighted estimator was developed to correct for bias due to dependent censoring. In this article, we explore the use of inverse probability censoring weighting methodology and describe why it is effective in removing the bias. Since implementing this method is highly time consuming and requires programming and mathematical skills, we propose a user-friendly algorithm in R. Applications to a toy example and to a medical data set illustrate how the algorithm works. A simulation study was carried out to investigate the performance of the inverse probability censoring weighted estimators in situations where dependent censoring is present in the data. In the simulation process, different sample sizes, strengths of the censoring model, and percentages of censored individuals were chosen. Results show that in each scenario inverse probability censoring weighting reduces the bias induced in the traditional Kaplan-Meier approach where dependent censoring is ignored.
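
    A sketch of the IPCW idea described here, in Python with lifelines rather than the authors' R algorithm: fit a Cox model to the censoring times, compute each subject's probability of remaining uncensored up to their own event time, and weight observed events by its inverse. The column names and data layout are assumptions, and the frame must contain at least one covariate column.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    def ipcw_survival(df, t_grid):
        """IPCW estimate of P(T > t) under covariate-dependent censoring.
        Expects columns 'time', 'event' (1 = event of interest), plus covariates."""
        cens = df.assign(cens_event=1 - df["event"]).drop(columns="event")
        cph = CoxPHFitter().fit(cens, duration_col="time", event_col="cens_event")
        X = cens.drop(columns=["time", "cens_event"])
        # G_i = P(uncensored up to subject i's own time | covariates)
        G = np.array([
            cph.predict_survival_function(X.iloc[[i]],
                                          times=[df["time"].iloc[i]]).iloc[0, 0]
            for i in range(len(df))
        ])
        w = df["event"].to_numpy() / np.clip(G, 1e-8, None)   # weights for events
        t = df["time"].to_numpy()
        return np.array([1.0 - np.mean(w * (t <= s)) for s in t_grid])

    # usage: ipcw_survival(df, np.linspace(0, 24, 25)) on a suitably formatted frame
    ```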

  30. Bayesian dynamic regression models for interval censored survival data with application to children dental health.

    PubMed

    Wang, Xiaojing; Chen, Ming-Hui; Yan, Jun

    2013-07-01

    Cox models with time-varying coefficients offer great flexibility in capturing the temporal dynamics of covariate effects on event times, which could be hidden from a Cox proportional hazards model. Methodology development for varying coefficient Cox models, however, has been largely limited to right censored data; only limited work on interval censored data has been done. In most existing methods for varying coefficient models, analysts need to specify which covariate coefficients are time-varying and which are not at the time of fitting. We propose a dynamic Cox regression model for interval censored data in a Bayesian framework, where the coefficient curves are piecewise constant but the number of pieces and the jump points are covariate specific and estimated from the data. The model automatically determines the extent to which the temporal dynamics is needed for each covariate, resulting in smoother and more stable curve estimates. The posterior computation is carried out via an efficient reversible jump Markov chain Monte Carlo algorithm. Inference on each coefficient is based on an average of models with different numbers of pieces and jump points. A simulation study with three covariates, each with a coefficient of different degree in temporal dynamics, confirmed that the dynamic model is preferred to the existing time-varying model in terms of model comparison criteria through conditional predictive ordinate. When applied to dental health data from children aged 7 to 12 years, the dynamic model reveals that the relative risk of emergence of permanent tooth 24 between children with and without an infected primary predecessor is the highest at around age 7.5, and that it gradually reduces to one after age 11. These findings were not seen from the existing studies with Cox proportional hazards models.

  31. A Fast EM Algorithm for Fitting Joint Models of a Binary Response and Multiple Longitudinal Covariates Subject to Detection Limits

    PubMed Central

    Bernhardt, Paul W.; Zhang, Daowen; Wang, Huixia Judy

    2014-01-01

    Joint modeling techniques have become a popular strategy for studying the association between a response and one or more longitudinal covariates. Motivated by the GenIMS study, where it is of interest to model the event of survival using censored longitudinal biomarkers, a joint model is proposed for describing the relationship between a binary outcome and multiple longitudinal covariates subject to detection limits. A fast, approximate EM algorithm is developed that reduces the dimension of integration in the E-step of the algorithm to one, regardless of the number of random effects in the joint model. Numerical studies demonstrate that the proposed approximate EM algorithm leads to satisfactory parameter and variance estimates in situations with and without censoring on the longitudinal covariates. The approximate EM algorithm is applied to analyze the GenIMS data set. PMID:25598564

  32. How to Deal with Interval-Censored Data Practically while Assessing the Progression-Free Survival: A Step-by-Step Guide Using SAS and R Software.

    PubMed

    Dugué, Audrey Emmanuelle; Pulido, Marina; Chabaud, Sylvie; Belin, Lisa; Gal, Jocelyn

    2016-12-01

    We describe how to estimate progression-free survival while dealing with interval-censored data in the setting of clinical trials in oncology. Three procedures with SAS and R statistical software are described: one allowing for a nonparametric maximum likelihood estimation of the survival curve using the EM-ICM (Expectation and Maximization-Iterative Convex Minorant) algorithm as described by Wellner and Zhan in 1997; a sensitivity analysis procedure in which the progression time is assigned (i) at the midpoint, (ii) at the upper limit (reflecting the standard analysis when the progression time is assigned at the first radiologic exam showing progressive disease), or (iii) at the lower limit of the censoring interval; and finally, two multiple imputation approaches, based on a uniform distribution or on the nonparametric maximum likelihood estimate (NPMLE). Clin Cancer Res; 22(23); 5629-35. ©2016 American Association for Cancer Research (AACR).
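
    The sensitivity analysis part of this guide translates directly into code. A sketch in Python rather than the paper's SAS/R: assign each progression to the midpoint, lower, or upper end of its censoring interval and compare the resulting Kaplan-Meier fits (the NPMLE/EM-ICM fit itself needs a dedicated routine and is not shown). Data and encoding conventions are illustrative.

    ```python
    import numpy as np
    from lifelines import KaplanMeierFitter

    def pfs_sensitivity(left, right, rule="midpoint"):
        """KM fit after assigning interval-censored progressions to one endpoint.
        right = np.inf encodes right-censoring at the last exam time `left`."""
        left, right = np.asarray(left, float), np.asarray(right, float)
        event = np.isfinite(right)
        assigned = {"midpoint": np.where(event, (left + right) / 2, left),
                    "lower": left,
                    "upper": np.where(event, right, left)}[rule]
        return KaplanMeierFitter(label=rule).fit(assigned, event_observed=event)

    # toy intervals: progression seen between visits, or no progression by last visit
    L = np.array([0.0, 2.0, 4.0, 6.0, 3.0])
    R = np.array([2.0, 4.0, 6.0, np.inf, 5.0])
    for rule in ("lower", "midpoint", "upper"):
        print(rule, pfs_sensitivity(L, R, rule).median_survival_time_)
    ```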

  33. Multivariate longitudinal data analysis with censored and intermittent missing responses.

    PubMed

    Lin, Tsung-I; Lachos, Victor H; Wang, Wan-Lun

    2018-05-08

    The multivariate linear mixed model (MLMM) has emerged as an important analytical tool for longitudinal data with multiple outcomes. However, the analysis of multivariate longitudinal data could be complicated by the presence of censored measurements because of a detection limit of the assay in combination with unavoidable missing values arising when subjects miss some of their scheduled visits intermittently. This paper presents a generalization of the MLMM approach, called the MLMM-CM, for a joint analysis of the multivariate longitudinal data with censored and intermittent missing responses. A computationally feasible expectation maximization-based procedure is developed to carry out maximum likelihood estimation within the MLMM-CM framework. Moreover, the asymptotic standard errors of fixed effects are explicitly obtained via the information-based method. We illustrate our methodology by using simulated data and a case study from an AIDS clinical trial. Experimental results reveal that the proposed method is able to provide more satisfactory performance as compared with the traditional MLMM approach. Copyright © 2018 John Wiley & Sons, Ltd.

  34. Post-processing of multi-model ensemble river discharge forecasts using censored EMOS

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian

    2014-05-01

    When forecasting water levels and river discharge, ensemble weather forecasts are used as meteorological input to hydrologic process models. As hydrologic models are imperfect and the input ensembles tend to be biased and underdispersed, the output ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, statistical post-processing is required in order to achieve calibrated and sharp predictions. Standard post-processing methods such as Ensemble Model Output Statistics (EMOS) that have their origins in meteorological forecasting are now increasingly being used in hydrologic applications. Here we consider two sub-catchments of River Rhine, for which the forecasting system of the Federal Institute of Hydrology (BfG) uses runoff data that are censored below predefined thresholds. To address this methodological challenge, we develop a censored EMOS method that is tailored to such data. The censored EMOS forecast distribution can be understood as a mixture of a point mass at the censoring threshold and a continuous part based on a truncated normal distribution. Parameter estimates of the censored EMOS model are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over the training dataset. Model fitting on Box-Cox transformed data allows us to take account of the positive skewness of river discharge distributions. In order to achieve realistic forecast scenarios over an entire range of lead-times, there is a need for multivariate extensions. To this end, we smooth the marginal parameter estimates over lead-times. In order to obtain realistic scenarios of discharge evolution over time, the marginal distributions have to be linked with each other. To this end, the multivariate dependence structure can either be adopted from the raw ensemble like in Ensemble Copula Coupling (ECC), or be estimated from observations in a training period. The censored EMOS model has been applied to multi-model ensemble forecasts issued on a daily basis over a period of three years. For the two catchments considered, this resulted in well calibrated and sharp forecast distributions over all lead-times from 1 to 114 h. Training observations tended to be better indicators for the dependence structure than the raw ensemble.
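
    The censored EMOS predictive distribution described here, a point mass at the censoring threshold plus a normal upper part, can be written compactly. A sketch of evaluating that forecast CDF only; the CRPS-minimizing parameter fit, Box-Cox transformation, and multivariate coupling of the abstract are not shown, and all numbers are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def censored_emos_cdf(y, mu, sigma, threshold):
        """Forecast CDF: all normal mass below the censoring threshold is collected
        into a point mass at the threshold; above it, the normal CDF applies."""
        y = np.asarray(y, float)
        return np.where(y < threshold, 0.0, stats.norm.cdf(y, mu, sigma))

    # e.g. probability that discharge stays at/below the threshold (the point mass):
    mu, sigma, thr = 1.2, 0.8, 0.5
    point_mass = stats.norm.cdf(thr, mu, sigma)
    print(point_mass, censored_emos_cdf([0.4, 0.5, 2.0], mu, sigma, thr))
    ```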

  35. Bivariate Left-Censored Bayesian Model for Predicting Exposure: Preliminary Analysis of Worker Exposure during the Deepwater Horizon Oil Spill.

    PubMed

    Groth, Caroline; Banerjee, Sudipto; Ramachandran, Gurumurthy; Stenzel, Mark R; Sandler, Dale P; Blair, Aaron; Engel, Lawrence S; Kwok, Richard K; Stewart, Patricia A

    2017-01-01

    In April 2010, the Deepwater Horizon oil rig caught fire and exploded, releasing almost 5 million barrels of oil into the Gulf of Mexico over the ensuing 3 months. Thousands of oil spill workers participated in the spill response and clean-up efforts. The GuLF STUDY being conducted by the National Institute of Environmental Health Sciences is an epidemiological study to investigate potential adverse health effects among these oil spill clean-up workers. Many volatile chemicals were released from the oil into the air, including total hydrocarbons (THC), which is a composite of the volatile components of oil including benzene, toluene, ethylbenzene, xylene, and hexane (BTEXH). Our goal is to estimate exposure levels to these toxic chemicals for groups of oil spill workers in the study (hereafter called exposure groups, EGs) with likely comparable exposure distributions. A large number of air measurements were collected, but many EGs are characterized by datasets with a large percentage of censored measurements (below the analytic methods' limits of detection) and/or a limited number of measurements. We use THC for which there was less censoring to develop predictive linear models for specific BTEXH air exposures with higher degrees of censoring. We present a novel Bayesian hierarchical linear model that allows us to predict, for different EGs simultaneously, exposure levels of a second chemical while accounting for censoring in both THC and the chemical of interest. We illustrate the methodology by estimating exposure levels for several EGs on the Development Driller III, a rig vessel charged with drilling one of the relief wells. The model provided credible estimates in this example for geometric means, arithmetic means, variances, correlations, and regression coefficients for each group. This approach should be considered when estimating exposures in situations when multiple chemicals are correlated and have varying degrees of censoring. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  36. Survival Data and Regression Models

    NASA Astrophysics Data System (ADS)

    Grégoire, G.

    2014-12-01

    We start this chapter by introducing some basic elements for the analysis of censored survival data. Then we focus on right-censored data and develop two types of regression models. The first type concerns the so-called accelerated failure time (AFT) models, which are parametric models where a function of a parameter depends linearly on the covariates. The second is a semiparametric model, where the covariates enter in a multiplicative form in the expression of the hazard rate function. The main statistical tool for analysing these regression models is the maximum likelihood methodology; although we recall some essential results of ML theory, we refer to the chapter "Logistic Regression" for a more detailed presentation.

  18. Accelerated failure time models for semi-competing risks data in the presence of complex censoring.

    PubMed

    Lee, Kyu Ha; Rondeau, Virginie; Haneuse, Sebastien

    2017-12-01

    Statistical analyses that investigate risk factors for Alzheimer's disease (AD) are often subject to a number of challenges. Some of these challenges arise due to practical considerations regarding data collection such that the observation of AD events is subject to complex censoring including left-truncation and either interval or right-censoring. Additional challenges arise due to the fact that study participants under investigation are often subject to competing forces, most notably death, that may not be independent of AD. Towards resolving the latter, researchers may choose to embed the study of AD within the "semi-competing risks" framework for which the recent statistical literature has seen a number of advances including for the so-called illness-death model. To the best of our knowledge, however, the semi-competing risks literature has not fully considered analyses in contexts with complex censoring, as in studies of AD. This is particularly the case when interest lies with the accelerated failure time (AFT) model, an alternative to the traditional multiplicative Cox model that places emphasis away from the hazard function. In this article, we outline a new Bayesian framework for estimation/inference of an AFT illness-death model for semi-competing risks data subject to complex censoring. An efficient computational algorithm that gives researchers the flexibility to adopt either a fully parametric or a semi-parametric model specification is developed and implemented. The proposed methods are motivated by and illustrated with an analysis of data from the Adult Changes in Thought study, an on-going community-based prospective study of incident AD in western Washington State. © 2017, The International Biometric Society.

  19. Assessing the impact of censoring of costs and effects on health-care decision-making: an example using the Atrial Fibrillation Follow-up Investigation of Rhythm Management (AFFIRM) study.

    PubMed

    Fenwick, Elisabeth; Marshall, Deborah A; Blackhouse, Gordon; Vidaillet, Humberto; Slee, April; Shemanski, Lynn; Levy, Adrian R

    2008-01-01

    Losses to follow-up and administrative censoring can cloud the interpretation of trial-based economic evaluations. A number of investigators have examined the impact of different levels of adjustment for censoring, including nonadjustment, adjustment of effects only, and adjustment of both costs and effects. Nevertheless, there is a lack of research on the impact of censoring on decision-making. The objective of this study was to estimate the impact of adjustment for censoring on the interpretation of cost-effectiveness results and the expected value of perfect information (EVPI), using a trial-based analysis that compared rate- and rhythm-control treatments for persons with atrial fibrillation. Three levels of adjustment for censoring were examined: no adjustment, adjustment for censoring of effects only, and adjustment for censoring of both costs and effects. In each case, bootstrapping was used to estimate the uncertainty in costs and effects, and the EVPI was calculated to determine the potential worth of further research. Censoring did not affect the adoption decision. Nevertheless, this was not the case for the decision uncertainty or the EVPI. For a threshold of $50,000 per life-year, the EVPI varied from $626,000 (partial censoring) to $117 million (full censoring) for the eligible US population. The level of adjustment for censoring in trial-based cost-effectiveness analyses can affect both the decision to fund a new technology and the decision to devote resources to further research. Only when censoring is taken into account for both costs and effects are these decisions appropriately addressed.
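
    Given bootstrap replicates of incremental costs and effects, the per-person EVPI reported in such analyses is E[max_d NB_d] − max_d E[NB_d] at a chosen willingness-to-pay threshold. A sketch with hypothetical replicate arrays (two strategies, comparator net benefit normalized to zero):

```python
import numpy as np

def evpi(delta_effect, delta_cost, wtp):
    """Per-person EVPI from bootstrap replicates of incremental effects/costs.
    Net benefit of the new strategy relative to the comparator: wtp*dE - dC."""
    inb = wtp * delta_effect - delta_cost              # one INB per replicate
    nb = np.column_stack([np.zeros_like(inb), inb])    # comparator NB = 0
    perfect = nb.max(axis=1).mean()    # pick the best strategy per replicate
    current = nb.mean(axis=0).max()    # pick the best strategy on average
    return perfect - current

rng = np.random.default_rng(1)
dE = rng.normal(0.10, 0.20, 5000)   # life-years gained, bootstrap replicates
dC = rng.normal(4000, 1500, 5000)   # incremental cost
print(evpi(dE, dC, wtp=50_000))     # times eligible population = population EVPI
```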

  20. Censored Glauber Dynamics for the Mean Field Ising Model

    NASA Astrophysics Data System (ADS)

    Ding, Jian; Lubetzky, Eyal; Peres, Yuval

    2009-11-01

    We study Glauber dynamics for the Ising model on the complete graph on n vertices, known as the Curie-Weiss model. It is well known that at high temperature (β < 1) the mixing time is Θ(n log n), whereas at low temperature (β > 1) it is exp(Θ(n)). Recently, Levin, Luczak and Peres considered a censored version of this dynamics, which is restricted to non-negative magnetization. They proved that for fixed β > 1, the mixing time of this model is Θ(n log n), analogous to the high-temperature regime of the original dynamics. Furthermore, they showed cutoff for the original dynamics for fixed β < 1. The question whether the censored dynamics also exhibits cutoff remained unsettled. In a companion paper, we extended the results of Levin et al. into a complete characterization of the mixing time for the Curie-Weiss model. Namely, we found a scaling window of order 1/√n around the critical temperature β_c = 1, beyond which there is cutoff at high temperature. However, determining the behavior of the censored dynamics outside this critical window seemed significantly more challenging. In this work we answer the above question in the affirmative, and establish the cutoff point and its window for the censored dynamics beyond the critical window, thus completing its analogy to the original dynamics at high temperature. Namely, if β = 1 + δ for some δ > 0 with δ²n → ∞, then the mixing time has order (n/δ) log(δ²n). The cutoff constant is (1/2 + [2(ζ²β/δ − 1)]^(-1)), where ζ is the unique positive root of g(x) = tanh(βx) − x, and the cutoff window has order n/δ.
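
    The censored dynamics is easy to simulate: run standard heat-bath Glauber updates for the Curie-Weiss model and, whenever the magnetization becomes negative, replace the configuration by its global spin flip. A toy sketch (not tuned for the asymptotic regimes studied in the paper):

```python
import numpy as np

def censored_glauber(n, beta, steps, rng):
    """Heat-bath Glauber dynamics for the Curie-Weiss model, censored to
    non-negative magnetization: if an update makes sum(s) negative, the
    whole configuration is replaced by its spin flip."""
    s = rng.choice([-1, 1], size=n)
    total = s.sum()
    if total < 0:
        s, total = -s, -total
    for _ in range(steps):
        i = rng.integers(n)
        m_other = (total - s[i]) / n            # mean field felt by site i
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * m_other))
        new = 1 if rng.random() < p_plus else -1
        total += new - s[i]
        s[i] = new
        if total < 0:                           # censoring: reflect the chain
            s, total = -s, -total
    return total / n                            # final magnetization

rng = np.random.default_rng(7)
print(censored_glauber(n=500, beta=1.2, steps=200_000, rng=rng))
```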

  1. [Nonparametric method of estimating survival functions containing right-censored and interval-censored data].

    PubMed

    Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi

    2014-04-01

    Missing data are a general problem in many scientific fields, especially in medical survival analysis. Interpolation is one of the important approaches for dealing with censored data. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability that the real data fall within the imputed values. To solve this problem, we propose a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to the self-consistent (SC) algorithm. In contrast to the average interpolation and nearest-neighbor interpolation methods, the proposed method replaces right-censored data with interval-censored data, which greatly improves the probability that the real data fall within the imputation interval. It then uses empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrate that the proposed method has higher accuracy and better robustness across different proportions of censored data. This provides a useful method for comparing clinical treatment performance based on estimated patient survival, and offers some help for medical survival data analysis.
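
    For reference, the self-consistent (SC) algorithm used as the benchmark can be written as a short EM fixed-point iteration: place probability mass on candidate support points and repeatedly reallocate each observation's unit mass across the support points its censoring interval contains. The sketch below simplifies by taking the finite interval endpoints as support points (the full Turnbull construction uses innermost intervals); right-censored times are encoded as (t, ∞).

```python
import numpy as np

def self_consistent(intervals, n_iter=500):
    """Self-consistent (Turnbull-type) estimator for interval-censored data.
    Each observation is an interval (L, R]; right-censored times use R = inf.
    Simplification: candidate support points = the finite interval endpoints."""
    pts = np.unique([b for iv in intervals for b in iv if np.isfinite(b)])
    A = np.array([[(L < t <= R) for t in pts] for (L, R) in intervals], float)
    p = np.full(len(pts), 1.0 / len(pts))
    for _ in range(n_iter):
        w = A * p                              # E-step: allocate each obs's mass
        w /= w.sum(axis=1, keepdims=True)
        p = w.mean(axis=0)                     # M-step: update the point masses
    return pts, p

# toy data: interval-censored and right-censored observations
obs = [(0.0, 2.0), (1.0, 3.0), (2.0, 5.0), (3.0, np.inf), (4.0, np.inf)]
pts, p = self_consistent(obs)
print("support:", pts)
print("mass:   ", np.round(p, 3))
print("S(t):   ", np.round(1 - np.cumsum(p), 3))
```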

  2. Rank-based estimation in the ℓ1-regularized partly linear model for censored outcomes with application to integrated analyses of clinical predictors and gene expression data.

    PubMed

    Johnson, Brent A

    2009-10-01

    We consider estimation and variable selection in the partial linear model for censored data. The partial linear model for censored data is a direct extension of the accelerated failure time model, the latter of which is a very important alternative to the proportional hazards model. We extend rank-based lasso-type estimators to a model that may contain nonlinear effects. Variable selection in such a partial linear model has direct application to high-dimensional survival analyses that attempt to adjust for clinical predictors. In the microarray setting, previous methods can adjust for other clinical predictors by assuming that clinical and gene expression data enter the model linearly in the same fashion. Here, we select important variables after adjusting for prognostic clinical variables, but the clinical effects are assumed to be nonlinear. Our estimator is based on stratification and can be extended naturally to account for multiple nonlinear effects. We illustrate the utility of our method through simulation studies and application to the Wisconsin prognostic breast cancer data set.

  3. Automatic Censoring CFAR Detector Based on Ordered Data Difference for Low-Flying Helicopter Safety

    PubMed Central

    Jiang, Wen; Huang, Yulin; Yang, Jianyu

    2016-01-01

    Being equipped with a millimeter-wave radar allows a low-flying helicopter to sense its surroundings in real time, which significantly increases its safety. However, nonhomogeneous clutter environments, such as multiple-target situations and clutter edges, can dramatically degrade radar signal detection performance. In order to improve detection performance in nonhomogeneous clutter environments, this paper proposes a new automatic censored cell-averaging CFAR detector. The proposed CFAR detector does not require any prior information about the background environment; it uses a hypothesis test on the first-order difference (FOD) of the ordered data to reject unwanted samples in the reference window. After censoring the unwanted ranked cells, the remaining samples are combined to form an estimate of the background power level, thereby improving radar signal detection performance. Simulation results show that the FOD-CFAR detector provides low-loss CFAR performance in a homogeneous environment and also performs robustly in nonhomogeneous environments. Furthermore, measured results from a low-flying helicopter validate the basic performance of the proposed method. PMID:27399714
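
    The paper's specific FOD hypothesis test and thresholds are not reproduced here, but the overall censored cell-averaging structure can be sketched: sort the reference cells, look for an abrupt jump in the ordered data via first-order differences, censor the cells above the jump, and average the rest. The jump rule and scale factor below are illustrative placeholders, not the published test statistics.

```python
import numpy as np

def censored_ca_cfar(ref_cells, cut, pfa_scale=4.0, jump_factor=3.0):
    """Simplified automatic-censoring cell-averaging CFAR.
    Sort the reference cells, censor those above the first large jump in the
    ordered data (first-order differences), average the rest, and compare the
    cell under test (cut) against a scaled noise estimate."""
    x = np.sort(ref_cells)
    d = np.diff(x)                           # first-order differences
    typical = np.median(d) + 1e-12
    jumps = np.nonzero(d > jump_factor * typical)[0]
    keep = x[: jumps[0] + 1] if jumps.size else x   # drop cells above the jump
    noise = keep.mean()
    return cut > pfa_scale * noise, noise

rng = np.random.default_rng(3)
ref = rng.exponential(1.0, size=24)
ref[[5, 11]] = 30.0            # interfering targets in the reference window
print(censored_ca_cfar(ref, cut=12.0))
```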

  4. Protecting the Children: Huckleberry Finn, E.T. and the Politics of Censorship.

    ERIC Educational Resources Information Center

    Magistrale, Anthony

    1984-01-01

    Explicates core aspects of two censored narratives: the movie "E.T.: The Extraterrestrial" and the novel "Huckleberry Finn." Points out similarities between the two works and raises the issue of the estrangement of youth from adult society. (RH)

  5. A method for analyzing clustered interval-censored data based on Cox's model.

    PubMed

    Kor, Chew-Teng; Cheng, Kuang-Fu; Chen, Yi-Hau

    2013-02-28

    Methods for analyzing interval-censored data are well established. Unfortunately, these methods are inappropriate for studies with correlated data. In this paper, we focus on developing a method for analyzing clustered interval-censored data. Our method is based on Cox's proportional hazards model with a piecewise-constant baseline hazard function. The correlation structure of the data can be modeled using Clayton's copula or an independence model with proper adjustment in the covariance estimation. We establish estimating equations for the regression parameters and baseline hazards (and a parameter in the copula) simultaneously. Simulation results confirm that the point estimators follow a multivariate normal distribution and that our proposed variance estimates are reliable. In particular, we found that the approach with the independence model worked well even when the true correlation model was derived from Clayton's copula. We applied our method to a family-based cohort study of pandemic H1N1 influenza in Taiwan during 2009-2010, investigating the impact of vaccination and family contacts on the incidence of pH1N1 influenza. Copyright © 2012 John Wiley & Sons, Ltd.

  6. Parametric Model Based On Imputations Techniques for Partly Interval Censored Data

    NASA Astrophysics Data System (ADS)

    Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah

    2017-12-01

    The term ‘survival analysis’ is used in a broad sense to describe a collection of statistical procedures for analyzing data in which the outcome variable of interest is the time until an event occurs; the time to failure of a specific experimental unit may be right-, left-, interval-, or partly interval-censored (PIC). In this paper, the analysis was conducted with a parametric Cox model on PIC data. Several imputation techniques were used: midpoint, left and right point, random, mean, and median. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, namely the Turnbull and Cox models, on clinical trial data (breast cancer data), which showed the validity of the proposed model. The results indicated that the parametric Cox model was superior in terms of estimation of survival functions, likelihood ratio tests, and their p-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median imputations gave better results with respect to the estimation of the survival function.
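
    The imputation step itself is simple to state: each interval-censored observation (L, R] is replaced by a pseudo-exact time, after which any standard right-censored fit applies. A sketch of the midpoint, left-point, right-point, and random variants (mean and median imputation are analogous); names are illustrative.

```python
import numpy as np

def impute_intervals(left, right, how="midpoint", rng=None):
    """Convert interval-censored observations (left, right] to pseudo-exact
    times so that standard right-censored methods apply.
    Rows with right = inf stay right-censored and are not imputed."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    t = left.copy()
    ic = np.isfinite(right) & (right > left)       # interval-censored rows
    if how == "midpoint":
        t[ic] = 0.5 * (left[ic] + right[ic])
    elif how == "left":
        t[ic] = left[ic]
    elif how == "right":
        t[ic] = right[ic]
    elif how == "random":
        rng = rng or np.random.default_rng()
        t[ic] = rng.uniform(left[ic], right[ic])
    return t, ~np.isfinite(right)   # pseudo-times and right-censoring flags

L = [1.0, 2.0, 0.5, 3.0]
R = [2.0, np.inf, 1.5, 4.5]
print(impute_intervals(L, R, "midpoint"))
```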

  7. Non-censored rib fracture data during frontal PMHS sled tests.

    PubMed

    Kemper, Andrew R; Beeman, Stephanie M; Porta, David J; Duma, Stefan M

    2016-09-01

    The purpose of this study was to obtain non-censored rib fracture data due to three-point belt loading during dynamic frontal post-mortem human surrogate (PMHS) sled tests. The PMHS responses were then compared to matched tests performed using the Hybrid-III 50th percentile male ATD. Matched dynamic frontal sled tests were performed on two male PMHSs, which were approximately 50th percentile height and weight, and the Hybrid-III 50th percentile male ATD. The sled pulse was designed to match the vehicle acceleration of a standard sedan during an FMVSS-208 40 kph test. Each subject was restrained with a 4 kN load-limiting, driver-side, three-point seatbelt. A 59-channel chestband, aligned at the nipple line, was used to quantify the chest contour, anterior-posterior sternum deflection, and maximum anterior-posterior chest deflection for all test subjects. The internal sternum deflection of the ATD was quantified with the sternum potentiometer. For the PMHS tests, a total of 23 single-axis strain gages were attached to the bony structures of the thorax, including the ribs, sternum, and clavicle. In order to create a non-censored data set, the time history of each strain gage was analyzed to determine the timing of each rib fracture and the corresponding timing of each AIS level (AIS = 1, 2, 3, etc.) with respect to chest deflection. Peak sternum deflections for PMHS 1 and PMHS 2 were 48.7 mm (19.0%) and 36.7 mm (12.2%), respectively. The peak sternum deflection for the ATD was 20.8 mm when measured by the chest potentiometer and 34.4 mm (12.0%) when measured by the chestband. Although the measured ATD sternum deflections were found to be well below the current thoracic injury criterion (63 mm) specified for the ATD in FMVSS-208, both PMHSs sustained AIS 3+ thoracic injuries. For all subjects, the maximum chest deflection measured by the chestband occurred to the right of the sternum and was found to be 83.0 mm (36.0%) for PMHS 1, 60.6 mm (23.9%) for PMHS 2, and 56.3 mm (20.0%) for the ATD. The non-censored rib fracture data in the current study (n = 2 PMHS), in conjunction with the non-censored rib fracture data from two previous table-top studies (n = 4 PMHS), show that AIS 3+ injury timing occurs prior to peak sternum compression, prior to peak maximum chest compression, and at lower compressions than might be suggested by current PMHS thoracic injury criteria developed using censored rib fracture data. In addition, the maximum chest deflection results showed a more reasonable correlation between deflection, rib fracture timing, and injury severity than sternum deflection. Overall, these data provide compelling empirical evidence that a more conservative thoracic injury criterion could potentially be developed based on non-censored rib fracture data, with additional testing performed over a wider range of subjects and loading conditions.

  8. Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models

    PubMed Central

    Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin

    2017-01-01

    In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazards. Therefore, attention must be paid to developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted considerable interest. ANN models for the survival of patients with gastric cancer have been discussed in some studies without fully accounting for censored data. This study proposes an ANN model for predicting gastric cancer survivability that takes the censored data into account. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of the ANN models in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve. PMID:28469384
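
    The key data-preparation step for single time-point models is label construction: a patient is a valid training case for horizon t only if death by t was observed or follow-up extends beyond t; patients censored before t cannot be labeled and are excluded from that time point's model. A sketch with a scikit-learn multilayer perceptron standing in for the paper's ANN (toy data, illustrative settings):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def single_timepoint_data(time, event, X, t):
    """Keep patients whose status at horizon t is known:
    died by t (label 1) or followed past t (label 0).
    Patients censored before t are excluded from this time point's model."""
    died_by_t = (time <= t) & (event == 1)
    known_alive = time > t
    keep = died_by_t | known_alive
    return X[keep], died_by_t[keep].astype(int)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))        # toy prognostic variables
time = rng.exponential(3.0, 300)     # follow-up time in years
event = rng.integers(0, 2, 300)      # 1 = death observed, 0 = censored

Xt, yt = single_timepoint_data(time, event, X, t=2.0)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(Xt, yt)
print(clf.predict_proba(Xt[:3]))     # probability of death by year 2
```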

  9. Effect of censoring trace-level water-quality data on trend-detection capability

    USGS Publications Warehouse

    Gilliom, R.J.; Hirsch, R.M.; Gilroy, E.J.

    1984-01-01

    Monte Carlo experiments were used to evaluate whether trace-level water-quality data that are routinely censored (not reported) contain valuable information for trend detection. Measurements are commonly censored if they fall below a level associated with some minimum acceptable level of reliability (detection limit). Trace-level organic data were simulated with best- and worst-case estimates of measurement uncertainty, various concentrations and degrees of linear trend, and different censoring rules. The resulting classes of data were subjected to a nonparametric statistical test for trend. For all classes of data evaluated, trends were most effectively detected in uncensored data as compared to censored data, even when the data censored were highly unreliable. Thus, censoring data at any concentration level may eliminate valuable information. Whether or not valuable information for trend analysis is, in fact, eliminated by censoring of actual rather than simulated data depends on whether the analytical process is in statistical control and bias is predictable for a particular type of chemical analysis.
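
    The nonparametric trend test used for such data is typically of the Mann-Kendall type, which is equivalent to testing Kendall's tau between concentration and time; ties introduced by censoring can be handled by collapsing all non-detects to a single value below the detection limit. A sketch on simulated data (one conventional substitution shown; all settings illustrative):

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(5)
n = 60
t = np.arange(n)
# lognormal concentrations with a modest upward trend
conc = np.exp(rng.normal(np.log(0.5) + 0.02 * t, 0.6))

# Mann-Kendall-type trend test = Kendall's tau of concentration against time
print("uncensored:", kendalltau(t, conc))

dl = 1.0                        # detection limit
cens = conc.copy()
cens[conc < dl] = dl / 2        # all non-detects collapse to one tied value
print("censored:  ", kendalltau(t, cens))
```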

  10. Evaluation of statistical treatments of left-censored environmental data using coincident uncensored data sets: I. Summary statistics

    USGS Publications Warehouse

    Antweiler, Ronald C.; Taylor, Howard E.

    2008-01-01

    The main classes of statistical treatment of below-detection limit (left-censored) environmental data for the determination of basic statistics that have been used in the literature are substitution methods, maximum likelihood, regression on order statistics (ROS), and nonparametric techniques. These treatments, along with using all instrument-generated data (even those below detection), were evaluated by examining data sets in which the true values of the censored data were known. It was found that for data sets with less than 70% censored data, the best technique overall for determination of summary statistics was the nonparametric Kaplan-Meier technique. ROS and the two substitution methods of assigning one-half the detection limit value to censored data or assigning a random number between zero and the detection limit to censored data were adequate alternatives. The use of these two substitution methods, however, requires a thorough understanding of how the laboratory censored the data. The technique of employing all instrument-generated data - including numbers below the detection limit - was found to be less adequate than the above techniques. At high degrees of censoring (greater than 70% censored data), no technique provided good estimates of summary statistics. Maximum likelihood techniques were found to be far inferior to all other treatments except substituting zero or the detection limit value to censored data.
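
    Of the treatments compared above, ROS is the least self-explanatory; in its simplest single-detection-limit form it fits a normal Q-Q line to the logs of the detected values and imputes the censored ranks from the lower tail of that line. A simplified sketch (not the full multi-limit ROS of the literature; data and names illustrative):

```python
import numpy as np
from scipy.stats import norm

def simple_ros(values, detected):
    """Simplified regression on order statistics (single detection limit):
    fit a normal Q-Q line to the logs of the detected values and impute
    the non-detects from the lower tail of that line."""
    n = len(values)
    order = np.argsort(values)            # non-detects sort below the DL
    pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)   # Blom plotting positions
    q = norm.ppf(pp)
    logs = np.log(values[order])
    det = detected[order]
    slope, intercept = np.polyfit(q[det], logs[det], 1)
    imputed = logs.copy()
    imputed[~det] = intercept + slope * q[~det]       # fill the censored ranks
    return np.exp(imputed)

rng = np.random.default_rng(2)
x = rng.lognormal(0.0, 1.0, 50)
det = x >= 0.4                      # single detection limit at 0.4
x_obs = np.where(det, x, 0.4)       # lab reports "<0.4" for non-detects
print("ROS mean estimate:", simple_ros(x_obs, det).mean())
```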

  11. Multivariate-t nonlinear mixed models with application to censored multi-outcome AIDS studies.

    PubMed

    Lin, Tsung-I; Wang, Wan-Lun

    2017-10-01

    In multivariate longitudinal HIV/AIDS studies, multi-outcome repeated measures on each patient over time may contain outliers, and the viral loads are often subject to an upper or lower limit of detection depending on the quantification assays. In this article, we consider an extension of the multivariate nonlinear mixed-effects model by adopting a joint multivariate-t distribution for random effects and within-subject errors and taking the censoring information of multiple responses into account. The proposed model is called the multivariate-t nonlinear mixed-effects model with censored responses (MtNLMMC), allowing for analyzing multi-outcome longitudinal data exhibiting nonlinear growth patterns with censorship and fat-tailed behavior. Utilizing the Taylor-series linearization method, a pseudo-data version of the expectation conditional maximization either (ECME) algorithm is developed for iteratively carrying out maximum likelihood estimation. We illustrate our techniques with two data examples from HIV/AIDS studies. Experimental results signify that the MtNLMMC performs favorably compared to its Gaussian analogue and some existing approaches. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Estimating and Testing Mediation Effects with Censored Data

    ERIC Educational Resources Information Center

    Wang, Lijuan; Zhang, Zhiyong

    2011-01-01

    This study investigated influences of censored data on mediation analysis. Mediation effect estimates can be biased and inefficient with censoring on any one of the input, mediation, and output variables. A Bayesian Tobit approach was introduced to estimate and test mediation effects with censored data. Simulation results showed that the Bayesian…

  13. Inverse probability weighted least squares regression in the analysis of time-censored cost data: an evaluation of the approach using SEER-Medicare.

    PubMed

    Griffiths, Robert I; Gleeson, Michelle L; Danese, Mark D; O'Hagan, Anthony

    2012-01-01

    To assess the accuracy and precision of inverse probability weighted (IPW) least squares regression analysis for censored cost data. By using Surveillance, Epidemiology, and End Results-Medicare, we identified 1500 breast cancer patients who died and had complete cost information within the database. Patients were followed for up to 48 months (partitions) after diagnosis, and their actual total cost was calculated in each partition. We then simulated patterns of administrative and dropout censoring and also added censoring to patients receiving chemotherapy to simulate comparing a newer to older intervention. For each censoring simulation, we performed 1000 IPW regression analyses (bootstrap, sampling with replacement), calculated the average value of each coefficient in each partition, and summed the coefficients for each regression parameter to obtain the cumulative values from 1 to 48 months. The cumulative, 48-month, average cost was $67,796 (95% confidence interval [CI] $58,454-$78,291) with no censoring, $66,313 (95% CI $54,975-$80,074) with administrative censoring, and $66,765 (95% CI $54,510-$81,843) with administrative plus dropout censoring. In multivariate analysis, chemotherapy was associated with increased cost of $25,325 (95% CI $17,549-$32,827) compared with $28,937 (95% CI $20,510-$37,088) with administrative censoring and $29,593 ($20,564-$39,399) with administrative plus dropout censoring. Adding censoring to the chemotherapy group resulted in less accurate IPW estimates. This was ameliorated, however, by applying IPW within treatment groups. IPW is a consistent estimator of population mean costs if the weight is correctly specified. If the censoring distribution depends on some covariates, a model that accommodates this dependency must be correctly specified in IPW to obtain accurate estimates. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
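
    The weighting scheme can be sketched compactly: estimate the censoring distribution by Kaplan-Meier (treating censoring as the event), weight each observed death by the inverse probability of remaining uncensored just before its death time, and use those weights in the least squares step. A toy sketch with a hand-rolled censoring KM, shown here for the weighted mean cost; all names and data are illustrative.

```python
import numpy as np

def km_censoring_surv(time, death):
    """Kaplan-Meier estimate of the censoring survival K(t) = P(C > t):
    censoring is the 'event' and deaths count as censored."""
    order = np.argsort(time)
    t, d = time[order], death[order]
    n = len(t)
    at_risk = n - np.arange(n)
    factors = np.where(d == 0, (at_risk - 1) / at_risk, 1.0)
    surv = np.cumprod(factors)
    def K(s):                       # left-continuous lookup K(s-)
        idx = np.searchsorted(t, s) - 1
        return 1.0 if idx < 0 else max(surv[idx], 1e-8)
    return K

rng = np.random.default_rng(9)
n = 500
T = rng.exponential(24, n)                  # survival time (months)
C = rng.uniform(0, 48, n)                   # censoring time
time = np.minimum(T, C)
death = (T <= C).astype(int)
cost = 1000 * T + rng.normal(0, 500, n)     # cumulative cost, known only at death

K = km_censoring_surv(time, death)
w = np.array([d / K(s) for s, d in zip(time, death)])   # weight 0 if censored
print("IPW mean cost:", (w * cost).sum() / n)           # Horvitz-Thompson form
```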

  14. Impact of censoring on learning Bayesian networks in survival modelling.

    PubMed

    Stajduhar, Ivan; Dalbelo-Basić, Bojana; Bogunović, Nikola

    2009-11-01

    Bayesian networks are commonly used for presenting uncertainty and covariate interactions in an easily interpretable way. Because of their efficient inference and ability to represent causal relationships, they are an excellent choice for medical decision support systems in diagnosis, treatment, and prognosis. Although good procedures for learning Bayesian networks from data have been defined, their performance in learning from censored survival data has not been widely studied. In this paper, we explore how to use these procedures to learn about possible interactions between prognostic factors and their influence on the variate of interest. We study how censoring affects the probability of learning correct Bayesian network structures. Additionally, we analyse the potential usefulness of the learnt models for predicting the time-independent probability of an event of interest. We analysed the influence of censoring with a simulation on synthetic data sampled from randomly generated Bayesian networks. We used two well-known methods for learning Bayesian networks from data: a constraint-based method and a score-based method. We compared the performance of each method under different levels of censoring to those of the naive Bayes classifier and the proportional hazards model. We did additional experiments on several datasets from real-world medical domains. The machine-learning methods treated censored cases in the data as event-free. We report and compare results for several commonly used model evaluation metrics. On average, the proportional hazards method outperformed other methods in most censoring setups. As part of the simulation study, we also analysed structural similarities of the learnt networks. Heavy censoring, as opposed to no censoring, produces up to a 5% surplus and up to 10% missing total arcs. It also produces up to 50% missing arcs that should originally be connected to the variate of interest. Presented methods for learning Bayesian networks from data can be used to learn from censored survival data in the presence of light censoring (up to 20%) by treating censored cases as event-free. Given intermediate or heavy censoring, the learnt models become tuned to the majority class and would thus require a different approach.

  15. Semiparametric regression analysis of failure time data with dependent interval censoring.

    PubMed

    Chen, Chyong-Mei; Shen, Pao-Sheng

    2017-09-20

    Interval-censored failure-time data arise when subjects are examined or observed periodically such that the failure time of interest is not examined exactly but only known to be bracketed between two adjacent observation times. The commonly used approaches assume that the examination times and the failure time are independent or conditionally independent given covariates. In many practical applications, patients who are already in poor health or have a weak immune system before treatment usually tend to visit physicians more often after treatment than those with better health or immune system. In this situation, the visiting rate is positively correlated with the risk of failure due to the health status, which results in dependent interval-censored data. While some measurable factors affecting health status such as age, gender, and physical symptom can be included in the covariates, some health-related latent variables cannot be observed or measured. To deal with dependent interval censoring involving unobserved latent variable, we characterize the visiting/examination process as recurrent event process and propose a joint frailty model to account for the association of the failure time and visiting process. A shared gamma frailty is incorporated into the Cox model and proportional intensity model for the failure time and visiting process, respectively, in a multiplicative way. We propose a semiparametric maximum likelihood approach for estimating model parameters and show the asymptotic properties, including consistency and weak convergence. Extensive simulation studies are conducted and a data set of bladder cancer is analyzed for illustrative purposes. Copyright © 2017 John Wiley & Sons, Ltd.

  16. Comparison of Methods for Analyzing Left-Censored Occupational Exposure Data

    PubMed Central

    Huynh, Tran; Ramachandran, Gurumurthy; Banerjee, Sudipto; Monteiro, Joao; Stenzel, Mark; Sandler, Dale P.; Engel, Lawrence S.; Kwok, Richard K.; Blair, Aaron; Stewart, Patricia A.

    2014-01-01

    The National Institute for Environmental Health Sciences (NIEHS) is conducting an epidemiologic study (GuLF STUDY) to investigate the health of the workers and volunteers who participated from April to December of 2010 in the response and cleanup of the oil release after the Deepwater Horizon explosion in the Gulf of Mexico. The exposure assessment component of the study involves analyzing thousands of personal monitoring measurements that were collected during this effort. A substantial portion of these data has values reported by the analytic laboratories to be below the limits of detection (LOD). A simulation study was conducted to evaluate three established methods for analyzing data with censored observations to estimate the arithmetic mean (AM), geometric mean (GM), geometric standard deviation (GSD), and the 95th percentile (X0.95) of the exposure distribution: the maximum likelihood (ML) estimation, the β-substitution, and the Kaplan–Meier (K-M) methods. Each method was challenged with computer-generated exposure datasets drawn from lognormal and mixed lognormal distributions with sample sizes (N) varying from 5 to 100, GSDs ranging from 2 to 5, and censoring levels ranging from 10 to 90%, with single and multiple LODs. Using relative bias and relative root mean squared error (rMSE) as the evaluation metrics, the β-substitution method generally performed as well or better than the ML and K-M methods in most simulated lognormal and mixed lognormal distribution conditions. The ML method was suitable for large sample sizes (N ≥ 30) up to 80% censoring for lognormal distributions with small variability (GSD = 2–3). The K-M method generally provided accurate estimates of the AM when the censoring was <50% for lognormal and mixed distributions. The accuracy and precision of all methods decreased under high variability (GSD = 4 and 5) and small to moderate sample sizes (N < 20) but the β-substitution was still the best of the three methods. When using the ML method, practitioners are cautioned to be aware of different ways of estimating the AM as they could lead to biased interpretation. A limitation of the β-substitution method is the absence of a confidence interval for the estimate. More research is needed to develop methods that could improve the estimation accuracy for small sample sizes and high percent censored data and also provide uncertainty intervals. PMID:25261453
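
    The β-substitution formulas are specific to the published method and are not reproduced here, but the ML estimator in the comparison is straightforward to sketch: under a lognormal exposure model with a single LOD, non-detects contribute Φ((log LOD − μ)/σ) to the likelihood, and the AM follows as exp(μ + σ²/2). Toy data and names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def censored_lognormal_mle(x, detected):
    """ML fit of a lognormal exposure distribution with non-detects.
    x holds the measurement, or the LOD where detected is False;
    non-detects contribute P(X < LOD) = Phi((log LOD - mu) / sigma).
    (The 1/x Jacobian is constant in the parameters and is dropped.)"""
    lx = np.log(x)
    def nll(p):
        mu, log_s = p
        s = np.exp(log_s)
        ll = np.where(detected,
                      norm.logpdf(lx, mu, s),        # detected values
                      norm.logcdf((lx - mu) / s))    # censored below the LOD
        return -ll.sum()
    res = minimize(nll, x0=[lx.mean(), 0.0])
    mu, s = res.x[0], np.exp(res.x[1])
    return {"GM": np.exp(mu), "GSD": np.exp(s), "AM": np.exp(mu + 0.5 * s**2)}

rng = np.random.default_rng(4)
true = rng.lognormal(np.log(0.3), np.log(2.5), size=40)
lod = 0.2
det = true >= lod
x = np.where(det, true, lod)      # lab reports "<LOD" for non-detects
print(censored_lognormal_mle(x, det))
```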

  17. Influence of atmospheric transport on ozone and trace-level toxic air contaminants over the northeastern United States

    NASA Astrophysics Data System (ADS)

    Brankov, Elvira

    This thesis presents a methodology for examining the relationship between synoptic-scale atmospheric transport patterns and observed pollutant concentration levels. It involves calculating a large number of back-trajectories from the observational site and subjecting them to cluster analysis. The pollutant concentration data observed at that site are then segregated according to the back-trajectory clusters. If the pollutant observations extend over several seasons, it is important to filter out seasonal and long-term components from the time series data before cluster-segregation, because only the short-term component of the time series is related to synoptic-scale transport. Multiple comparison procedures are used to test for significant differences in the chemical composition of pollutant data associated with each cluster. This procedure is useful for indicating potential pollutant source regions and isolating meteorological regimes associated with pollutant transport from those regions. If many observational sites are available, the spatial and temporal scales of pollution transport from a given direction can be extracted through time-lagged inter-site correlation analysis of pollutant concentrations. The proposed methodology is applicable to any pollutant at any site if a sufficiently abundant data set is available. This is illustrated through examination of five-year-long time series of ozone concentrations at several sites in the Northeast. The results provide evidence of ozone transport to these sites, revealing the characteristic spatial and temporal scales involved in the transport and identifying source regions for this pollutant. Problems related to statistical analyses of censored data are addressed in the second half of this thesis. Although censoring (reporting concentrations in a non-quantitative way) is typical for trace-level measurements, methods for statistical analysis, inference, and interpretation of such data are complex and still under development. In this study, multiple comparison of censored data sets was required in order to examine the influence of synoptic-scale circulations on concentration levels of several trace-level toxic pollutants observed in the Northeast (e.g., As, Se, Mn, V, etc.). Since traditional multiple comparison procedures are not readily applicable to such data sets, a Monte Carlo simulation study was performed to assess several nonparametric methods for multiple comparison of censored data sets. Application of an appropriate comparison procedure to clusters of toxic trace elements observed in the Northeast led to the identification of potential source regions and atmospheric patterns associated with the long-range transport of these pollutants. A method for comparison of proportions and elemental ratio calculations were used to confirm and clarify these inferences with a greater degree of confidence.

  18. Limitation of Inverse Probability-of-Censoring Weights in Estimating Survival in the Presence of Strong Selection Bias

    PubMed Central

    Howe, Chanelle J.; Cole, Stephen R.; Chmiel, Joan S.; Muñoz, Alvaro

    2011-01-01

    In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984–2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed. PMID:21289029

  19. Evaluation of statistical treatments of left-censored environmental data using coincident uncensored data sets. II. Group comparisons

    USGS Publications Warehouse

    Antweiler, Ronald C.

    2015-01-01

    The main classes of statistical treatments that have been used to determine if two groups of censored environmental data arise from the same distribution are substitution methods, maximum likelihood (MLE) techniques, and nonparametric methods. These treatments along with using all instrument-generated data (IN), even those less than the detection limit, were evaluated by examining 550 data sets in which the true values of the censored data were known, and therefore “true” probabilities could be calculated and used as a yardstick for comparison. It was found that technique “quality” was strongly dependent on the degree of censoring present in the groups. For low degrees of censoring (<25% in each group), the Generalized Wilcoxon (GW) technique and substitution of √2/2 times the detection limit gave overall the best results. For moderate degrees of censoring, MLE worked best, but only if the distribution could be estimated to be normal or log-normal prior to its application; otherwise, GW was a suitable alternative. For higher degrees of censoring (each group >40% censoring), no technique provided reliable estimates of the true probability. Group size did not appear to influence the quality of the result, and no technique appeared to become better or worse than other techniques relative to group size. Finally, IN appeared to do very well relative to the other techniques regardless of censoring or group size.

  20. Small values in big data: The continuing need for appropriate metadata

    USGS Publications Warehouse

    Stow, Craig A.; Webster, Katherine E.; Wagner, Tyler; Lottig, Noah R.; Soranno, Patricia A.; Cha, YoonKyung

    2018-01-01

    Compiling data from disparate sources to address pressing ecological issues is increasingly common. Many ecological datasets contain left-censored data – observations below an analytical detection limit. Studies from single and typically small datasets show that common approaches for handling censored data — e.g., deletion or substituting fixed values — result in systematic biases. However, no studies have explored the degree to which the documentation and presence of censored data influence outcomes from large, multi-sourced datasets. We describe left-censored data in a lake water quality database assembled from 74 sources and illustrate the challenges of dealing with small values in big data, including detection limits that are absent, range widely, and show trends over time. We show that substitutions of censored data can also bias analyses using ‘big data’ datasets, that censored data can be effectively handled with modern quantitative approaches, but that such approaches rely on accurate metadata that describe treatment of censored data from each source.

  1. Simulation of parametric model towards the fixed covariate of right censored lung cancer data

    NASA Astrophysics Data System (ADS)

    Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Ridwan Olaniran, Oyebayo; Enera Amran, Syahila

    2017-09-01

    In this study, a simulation procedure was applied to assess the fixed covariate of right-censored data using a parametric survival model. The scale and shape parameters were varied to differentiate the analyses of the parametric regression survival model. The biases, mean biases, and coverage probabilities were used in this analysis. Different sample sizes of 50, 100, 150, and 200 were employed to distinguish the impact of the parametric regression model on right-censored data. The R statistical software was used to develop the simulation code for right-censored data. In addition, the final simulated right-censored model was compared with right-censored lung cancer data from Malaysia. It was found that different values of the shape and scale parameters with different sample sizes helped to improve the simulation strategy for right-censored data, and that the Weibull regression survival model is a suitable fit for the simulated survival of lung cancer patient data in Malaysia.
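
    A condensed version of such a simulation, under assumed settings: generate Weibull lifetimes, impose independent right censoring, and recover the shape and scale by censored maximum likelihood. Parameter values and names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_cens_nll(p, t, event):
    """Weibull log-likelihood with right censoring:
    f(t) = (k/lam) * (t/lam)**(k-1) * exp(-(t/lam)**k), S(t) = exp(-(t/lam)**k)."""
    log_k, log_lam = p
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = (t / lam) ** k
    ll = event * (np.log(k / lam) + (k - 1) * np.log(t / lam) - z) \
       - (1 - event) * z                     # censored rows contribute log S(t)
    return -ll.sum()

rng = np.random.default_rng(11)
n, k_true, lam_true = 200, 1.5, 10.0
T = lam_true * rng.weibull(k_true, n)        # numpy's weibull has scale 1
C = rng.exponential(15.0, n)                 # independent censoring times
t = np.minimum(T, C)
event = (T <= C).astype(int)
print("censoring rate:", 1 - event.mean())
fit = minimize(weibull_cens_nll, x0=[0.0, np.log(t.mean())], args=(t, event))
print("k, lambda =", np.exp(fit.x))
```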

  2. Choosing a reliability inspection plan for interval censored data

    DOE PAGES

    Lu, Lu; Anderson-Cook, Christine Michaela

    2017-04-19

    Reliability test plans are important for producing precise and accurate assessments of reliability characteristics. This paper explores different strategies for choosing between possible inspection plans for interval-censored data given a fixed testing timeframe and budget. A new general cost structure is proposed to guide precise quantification of the total cost of an inspection test plan. Multiple summaries of reliability are considered and compared as criteria for choosing the best plans using an easily adapted method. Different cost structures and representative true underlying reliability curves demonstrate how to assess different strategies given the logistical constraints and nature of the problem. Results show several general patterns exist across a wide variety of scenarios. Given a fixed total cost, plans that inspect more units with less frequency based on equally spaced time points are favored due to the ease of implementation and consistently good performance across a large number of case study scenarios. Plans with inspection times chosen based on equally spaced probabilities offer improved reliability estimates for the shape of the distribution, mean lifetime, and failure time for a small fraction of the population only for applications with high infant mortality rates. The paper uses a Monte Carlo simulation-based approach in addition to the common evaluation based on the asymptotic variance, and offers comparisons and recommendations for different applications with different objectives. Additionally, the paper outlines a variety of reliability metrics to use as criteria for optimization, presents a general method for evaluating different alternatives, and provides case study results for common scenarios.

  3. Linear Regression with a Randomly Censored Covariate: Application to an Alzheimer's Study.

    PubMed

    Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A

    2017-01-01

    The association between maternal age of onset of dementia and amyloid deposition (measured by in vivo positron emission tomography (PET) imaging) in cognitively normal older offspring is of interest. In a regression model for amyloid, special methods are required due to the random right censoring of the covariate of maternal age of onset of dementia. Prior literature has proposed methods to address the problem of censoring due to assay limit of detection, but not random censoring. We propose imputation methods and a survival regression method that do not require parametric assumptions about the distribution of the censored covariate. Existing imputation methods address missing covariates, but not right censored covariates. In simulation studies, we compare these methods to the simple, but inefficient complete case analysis, and to thresholding approaches. We apply the methods to the Alzheimer's study.

  4. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    PubMed

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

    Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.

  5. Regression analysis of case K interval-censored failure time data in the presence of informative censoring.

    PubMed

    Wang, Peijie; Zhao, Hui; Sun, Jianguo

    2016-12-01

    Interval-censored failure time data occur in many fields, such as demography, economics, medical research, and reliability, and many inference procedures for them have been developed (Sun, 2006; Chen, Sun, and Peace, 2012). However, most existing approaches assume that the mechanism that yields interval censoring is independent of the failure time of interest, and it is clear that this may not be true in practice (Zhang et al., 2007; Ma, Hu, and Sun, 2015). In this article, we consider regression analysis of case K interval-censored failure time data when the censoring mechanism may be related to the failure time of interest. For this problem, an estimated sieve maximum-likelihood approach is proposed for data arising from the proportional hazards frailty model, and a two-step estimation procedure is presented. In addition, the asymptotic properties of the proposed estimators of the regression parameters are established, and an extensive simulation study suggests that the method works well. Finally, we apply the method to a set of real interval-censored data that motivated this study. © 2016, The International Biometric Society.

  6. Inference for the effect of treatment on survival probability in randomized trials with noncompliance and administrative censoring.

    PubMed

    Nie, Hui; Cheng, Jing; Small, Dylan S

    2011-12-01

    In many clinical studies with a survival outcome, administrative censoring occurs when follow-up ends at a prespecified date and many subjects are still alive. An additional complication in some trials is that there is noncompliance with the assigned treatment. For this setting, we study the estimation of the causal effect of treatment on survival probability up to a given time point among those subjects who would comply with the assignment to both treatment and control. We first discuss the standard instrumental variable (IV) method for survival outcomes and parametric maximum likelihood methods, and then develop an efficient plug-in nonparametric empirical maximum likelihood estimation (PNEMLE) approach. The PNEMLE method does not make any assumptions on outcome distributions, and makes use of the mixture structure in the data to gain efficiency over the standard IV method. Theoretical results of the PNEMLE are derived and the method is illustrated by an analysis of data from a breast cancer screening trial. From our limited mortality analysis with administrative censoring times 10 years into the follow-up, we find a significant benefit of screening is present after 4 years (at the 5% level) and this persists at 10 years follow-up. © 2011, The International Biometric Society.

  7. Mixture models for undiagnosed prevalent disease and interval-censored incident disease: applications to a cohort assembled from electronic health records.

    PubMed

    Cheung, Li C; Pan, Qing; Hyun, Noorie; Schiffman, Mark; Fetterman, Barbara; Castle, Philip E; Lorey, Thomas; Katki, Hormuzd A

    2017-09-30

    For cost-effectiveness and efficiency, many large-scale general-purpose cohort studies are being assembled within large health-care providers who use electronic health records. Two key features of such data are that incident disease is interval-censored between irregular visits and there can be pre-existing (prevalent) disease. Because prevalent disease is not always immediately diagnosed, some disease diagnosed at later visits is actually undiagnosed prevalent disease. We consider prevalent disease as a point mass at time zero for clinical applications where there is no interest in time of prevalent disease onset. We demonstrate that the naive Kaplan-Meier cumulative risk estimator underestimates risks at early time points and overestimates later risks. We propose a general family of mixture models for undiagnosed prevalent disease and interval-censored incident disease that we call prevalence-incidence models. Parameters for parametric prevalence-incidence models, such as the logistic regression and Weibull survival (logistic-Weibull) model, are estimated by direct likelihood maximization or by EM algorithm. Non-parametric methods are proposed to calculate cumulative risks for cases without covariates. We compare naive Kaplan-Meier, logistic-Weibull, and non-parametric estimates of cumulative risk in the cervical cancer screening program at Kaiser Permanente Northern California. Kaplan-Meier provided poor estimates while the logistic-Weibull model was a close fit to the non-parametric. Our findings support our use of logistic-Weibull models to develop the risk estimates that underlie current US risk-based cervical cancer screening guidelines. Published 2017. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
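
    The mixture structure is compact enough to state directly: with p(x) the probability of (undiagnosed) prevalent disease at time zero and a Weibull distribution for incident disease, the cumulative risk is risk(t | x) = p(x) + (1 − p(x))(1 − exp(−(t/λ)^k)). A sketch of this logistic-Weibull flavor with illustrative, not fitted, parameters:

```python
import numpy as np

def cumulative_risk(t, x, beta_logit, k, lam):
    """Prevalence-incidence mixture (logistic-Weibull flavor):
    risk(t|x) = p(x) + (1 - p(x)) * (1 - exp(-(t/lam)**k)),
    where p(x) is the probability of undiagnosed prevalent disease at t = 0."""
    p = 1.0 / (1.0 + np.exp(-(beta_logit[0] + beta_logit[1] * x)))
    incident = 1.0 - np.exp(-(np.asarray(t, float) / lam) ** k)
    return p + (1.0 - p) * incident

# risk over 5 years for a covariate value x = 1 (e.g., a positive screening marker)
print(cumulative_risk(t=[1, 3, 5], x=1.0, beta_logit=(-4.0, 2.0), k=1.2, lam=40.0))
```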

  8. Backdating of events in electronic primary health care data: should one censor at the date of last data collection.

    PubMed

    Sammon, Cormac J; Petersen, Irene

    2016-04-01

    Studies using primary care databases often censor follow-up at the date data are last collected from clinical computer systems (last collection date (LCD)). We explored whether this results in the selective exclusion of events entered in the electronic health records after their date of occurrence, that is, backdated events. We used data from The Health Improvement Network (THIN). Using two versions of the database, we identified events that were entered into a later (THIN14) but not an earlier version of the database (THIN13) and investigated how the number of entries changed as a function of time since LCD. Times between events and the dates they were recorded were plotted as a function of time since the LCD in an effort to determine appropriate points at which to censor follow-up. There were 356 million eligible events in THIN14 and 355 million eligible events in THIN13. When comparing the two data sets, the proportion of missing events in THIN13 was highest in the month prior to the LCD (9.6%), decreasing to 5.2% at 6 months and 3.4% at 12 months. The proportion of missing events was largest for events typically diagnosed in secondary care such as neoplasms (28% in the month prior to LCD) and negligible for events typically diagnosed in primary care such as respiratory events (2% in the month prior to LCD). Studies using primary care databases, particularly those investigating events typically diagnosed outside primary care, should censor follow-up prior to the LCD to avoid underestimation of event rates. Copyright © 2016 John Wiley & Sons, Ltd.

  9. Using the U.S. Geological Survey National Water Quality Laboratory LT-MDL to Evaluate and Analyze Data

    USGS Publications Warehouse

    Bonn, Bernadine A.

    2008-01-01

    A long-term method detection level (LT-MDL) and laboratory reporting level (LRL) are used by the U.S. Geological Survey's National Water Quality Laboratory (NWQL) when reporting results from most chemical analyses of water samples. Changing to this method provided data users with additional information about their data and often resulted in more reported values in the low concentration range. Before this method was implemented, many of these values would have been censored. The use of the LT-MDL and LRL presents some challenges for the data user. Interpreting data in the low concentration range increases the need for adequate quality assurance because even small contamination or recovery problems can be relatively large compared to concentrations near the LT-MDL and LRL. In addition, the definition of the LT-MDL, as well as the inclusion of low values, can result in complex data sets with multiple censoring levels and reported values that are less than a censoring level. Improper interpretation or statistical manipulation of low-range results in these data sets can result in bias and incorrect conclusions. This document is designed to help data users use and interpret data reported with the LT-MDL/LRL method. The calculation and application of the LT-MDL and LRL are described. This document shows how to extract statistical information from the LT-MDL and LRL and how to use that information in USGS investigations, such as assessing the quality of field data, interpreting field data, and planning data collection for new projects. A set of 19 detailed examples is included in this document to help data users think about their data and properly interpret low-range data without introducing bias. Although this document is not meant to be a comprehensive resource of statistical methods, several useful methods of analyzing censored data are demonstrated, including Regression on Order Statistics and Kaplan-Meier Estimation. These two statistical methods handle complex censored data sets without resorting to substitution, thereby avoiding a common source of bias and inaccuracy.

  11. Maximum likelihood estimates, from censored data, for mixed-Weibull distributions

    NASA Astrophysics Data System (ADS)

    Jiang, Siyuan; Kececioglu, Dimitri

    1992-06-01

    A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) through the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start at several initial guesses of the parameter set.
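
    The E- and M-steps described here can be sketched in a few dozen lines. The following Python sketch illustrates the EM idea (it is not the authors' algorithm): a Weibull mixture is fit to right-censored data, using the survival function for censored observations in the E-step and a weighted censored-Weibull likelihood in the M-step:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import weibull_min

        def em_mixed_weibull(t, d, n_comp=2, n_iter=50, seed=0):
            """EM for a Weibull mixture with right-censored data.
            t: times; d: 1 = failure, 0 = censored."""
            rng = np.random.default_rng(seed)
            t = np.asarray(t, float)
            d = np.asarray(d, int)
            p = np.full(n_comp, 1.0 / n_comp)                   # mixing weights
            k = rng.uniform(0.5, 2.0, n_comp)                   # shape parameters
            lam = np.quantile(t, (np.arange(n_comp) + 1.0) / (n_comp + 1.0))
            loglik = lambda kk, ll: np.where(                   # per-observation log-lik
                d == 1, weibull_min.logpdf(t, kk, scale=ll),
                weibull_min.logsf(t, kk, scale=ll))
            for _ in range(n_iter):
                # E-step: responsibilities (pdf for failures, sf for censored)
                logw = np.stack([np.log(p[j]) + loglik(k[j], lam[j])
                                 for j in range(n_comp)], axis=1)
                logw -= logw.max(axis=1, keepdims=True)
                r = np.exp(logw)
                r /= r.sum(axis=1, keepdims=True)
                # M-step: mixing weights, then weighted censored-Weibull MLE
                p = r.mean(axis=0)
                for j in range(n_comp):
                    nll = lambda th, rj=r[:, j]: -(rj * loglik(*np.exp(th))).sum()
                    k[j], lam[j] = np.exp(minimize(nll, np.log([k[j], lam[j]]),
                                                   method="Nelder-Mead").x)
            return p, k, lam

    Because the log-likelihood can have multiple local maxima, as the abstract notes, the function should be run from several seeds and the best fit kept.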

  12. Complete hazard ranking to analyze right-censored data: An ALS survival study.

    PubMed

    Huang, Zhengnan; Zhang, Hongjiu; Boss, Jonathan; Goutman, Stephen A; Mukherjee, Bhramar; Dinov, Ivo D; Guan, Yuanfang

    2017-12-01

    Survival analysis represents an important outcome measure in clinical research and clinical trials; further, survival ranking may offer additional advantages in clinical trials. In this study, we developed GuanRank, a non-parametric ranking-based technique to transform patients' survival data into a linear space of hazard ranks. The transformation enables the use of machine learning base-learners, including Gaussian process regression, Lasso, and random forest, on survival data. The method was submitted to the DREAM Amyotrophic Lateral Sclerosis (ALS) Stratification Challenge, where it ranked first, giving more accurate ranking predictions on the PRO-ACT ALS dataset than the Cox proportional hazards model. By utilizing right-censored data in its training process, the method demonstrated its state-of-the-art predictive power in ALS survival ranking. Its feature selection identified multiple important factors, some of which conflict with previous studies.

  13. A random-censoring Poisson model for underreported data.

    PubMed

    de Oliveira, Guilherme Lopes; Loschi, Rosangela Helena; Assunção, Renato Martins

    2017-12-30

    A major challenge when monitoring risks in socially deprived areas of underdeveloped countries is that economic, epidemiological, and social data are typically underreported. Thus, statistical models that do not take the data quality into account will produce biased estimates. To deal with this problem, counts in suspected regions are usually approached as censored information. The censored Poisson model can be considered, but all censored regions must be precisely known a priori, which is not a reasonable assumption in most practical situations. We introduce the random-censoring Poisson model (RCPM), which accounts for the uncertainty about both the count and the data reporting processes. Consequently, for each region, we are able to estimate the relative risk for the event of interest as well as the censoring probability. To facilitate the posterior sampling process, we propose a Markov chain Monte Carlo scheme based on the data augmentation technique. We run a simulation study comparing the proposed RCPM with two competing models under different scenarios. The RCPM and the censored Poisson model are then applied to account for potential underreporting of early neonatal mortality counts in regions of Minas Gerais State, Brazil, where data quality is known to be poor. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Censored quantile regression with recursive partitioning-based weights

    PubMed Central

    Wey, Andrew; Wang, Lan; Rudser, Kyle

    2014-01-01

    Censored quantile regression provides a useful alternative to the Cox proportional hazards model for analyzing survival data. It directly models the conditional quantile of the survival time and hence is easy to interpret. Moreover, it relaxes the proportionality constraint on the hazard function associated with the popular Cox model and is natural for modeling heterogeneity of the data. Recently, Wang and Wang (2009. Locally weighted censored quantile regression. Journal of the American Statistical Association 103, 1117–1128) proposed a locally weighted censored quantile regression approach that allows for covariate-dependent censoring and is less restrictive than other censored quantile regression methods. However, their kernel smoothing-based weighting scheme requires all covariates to be continuous and encounters practical difficulty with even a moderate number of covariates. We propose a new weighting approach that uses recursive partitioning, e.g. survival trees, that offers greater flexibility in handling covariate-dependent censoring in moderately high dimensions and can incorporate both continuous and discrete covariates. We prove that this new weighting scheme leads to consistent estimation of the quantile regression coefficients and demonstrate its effectiveness via Monte Carlo simulations. We also illustrate the new method using a widely recognized data set from a clinical trial on primary biliary cirrhosis. PMID:23975800

  15. Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)

    NASA Astrophysics Data System (ADS)

    Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi

    2017-06-01

    Hurdle negative binomial regression is a method that can be used for a discrete dependent variable with excess zeros and under- or overdispersion. It uses a two-part approach: the first part, the zero hurdle model, models the zero observations of the dependent variable, and the second part, a truncated negative binomial model, models the nonzero (non-negative integer) observations. The discrete dependent variable in such cases is censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimator of hurdle negative binomial regression for a right-censored dependent variable, using maximum likelihood estimation (MLE), together with the corresponding test statistic for the censored hurdle negative binomial model. The model is applied to the number of neonatorum tetanus cases in Indonesia; the data are counts that contain zero values for some observations and varying positive values for others. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.

  16. Some insight on censored cost estimators.

    PubMed

    Zhao, H; Cheng, Y; Bang, H

    2011-08-30

    Censored survival data analysis has been studied for many years. Yet, the analysis of censored mark variables, such as medical cost, quality-adjusted lifetime, and repeated events, faces a unique challenge that makes standard survival analysis techniques invalid. Because of the 'informative' censorship embedded in censored mark variables, the use of the Kaplan-Meier (Journal of the American Statistical Association 1958; 53:457-481) estimator, as an example, will produce biased estimates. Innovative estimators have been developed in the past decade in order to handle this issue. Even though consistent estimators have been proposed, the formulations and interpretations of some estimators are less intuitive to practitioners. On the other hand, more intuitive estimators have been proposed, but their mathematical properties have not been established. In this paper, we prove the analytic identity between some estimators (a statistically motivated estimator and an intuitive estimator) for censored cost data. Efron (1967) made a similar investigation for censored survival data (between the Kaplan-Meier estimator and the redistribute-to-the-right algorithm). Therefore, we view our study as an extension of Efron's work to informatively censored data, so that our findings can be applied to other mark variables. Copyright © 2011 John Wiley & Sons, Ltd.
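
    The redistribute-to-the-right algorithm referenced from Efron (1967) is easy to state in code. A minimal Python sketch for right-censored survival data (ignoring tied-time subtleties): each censored observation passes its probability mass equally to all later observations, and the mass remaining on observed failures reproduces the Kaplan-Meier jumps.

        import numpy as np

        def redistribute_to_the_right(times, events):
            """Efron's redistribute-to-the-right weights. times: observed
            times; events: 1 = failure, 0 = censored. The weights left on
            failures equal the Kaplan-Meier jump sizes."""
            t = np.asarray(times, float)
            d = np.asarray(events, int)
            order = np.lexsort((1 - d, t))      # sort by time; failures first
            t, d = t[order], d[order]           # among tied times
            n = len(t)
            w = np.full(n, 1.0 / n)             # start with uniform mass
            for i in range(n):
                if d[i] == 0 and i < n - 1:     # censored, not last: pass mass on
                    w[i + 1:] += w[i] / (n - 1 - i)
                    w[i] = 0.0
            return t, d, w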

  17. A multivariate cure model for left-censored and right-censored data with application to colorectal cancer screening patterns.

    PubMed

    Hagar, Yolanda C; Harvey, Danielle J; Beckett, Laurel A

    2016-08-30

    We develop a multivariate cure survival model to estimate lifetime patterns of colorectal cancer screening. Screening data cover long periods of time, with sparse observations for each person. Some events may occur before the study begins or after the study ends, so the data are both left-censored and right-censored, and some individuals are never screened (the 'cured' population). We propose a multivariate parametric cure model that can be used with left-censored and right-censored data. Our model allows for the estimation of the time to screening as well as the average number of times individuals will be screened. We calculate likelihood functions based on the observations for each subject using a distribution that accounts for within-subject correlation and estimate parameters using Markov chain Monte Carlo methods. We apply our methods to the estimation of lifetime colorectal cancer screening behavior in the SEER-Medicare data set. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Evaluation of Denoising Strategies to Address Motion-Correlated Artifacts in Resting-State Functional Magnetic Resonance Imaging Data from the Human Connectome Project

    PubMed Central

    Kandala, Sridhar; Nolan, Dan; Laumann, Timothy O.; Power, Jonathan D.; Adeyemo, Babatunde; Harms, Michael P.; Petersen, Steven E.; Barch, Deanna M.

    2016-01-01

    Like all resting-state functional connectivity data, the data from the Human Connectome Project (HCP) are adversely affected by structured noise artifacts arising from head motion and physiological processes. Functional connectivity estimates (Pearson's correlation coefficients) were inflated for high-motion time points and for high-motion participants. This inflation occurred across the brain, suggesting the presence of globally distributed artifacts. The degree of inflation was further increased for connections between nearby regions compared with distant regions, suggesting the presence of distance-dependent spatially specific artifacts. We evaluated several denoising methods: censoring high-motion time points, motion regression, the FMRIB independent component analysis-based X-noiseifier (FIX), and mean grayordinate time series regression (MGTR; as a proxy for global signal regression). The results suggest that FIX denoising reduced both types of artifacts, but left substantial global artifacts behind. MGTR significantly reduced global artifacts, but left substantial spatially specific artifacts behind. Censoring high-motion time points resulted in a small reduction of distance-dependent and global artifacts, eliminating neither type. All denoising strategies left differences between high- and low-motion participants, but only MGTR substantially reduced those differences. Ultimately, functional connectivity estimates from HCP data showed spatially specific and globally distributed artifacts, and the most effective approach to address both types of motion-correlated artifacts was a combination of FIX and MGTR. PMID:27571276
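
    Censoring of high-motion time points is typically driven by framewise displacement (FD). A minimal numpy sketch of an FD-based censoring mask follows; the 0.5 mm threshold and 50 mm head radius are common conventions, not values taken from this study:

        import numpy as np

        def keep_mask(motion_params, fd_thresh=0.5, radius=50.0):
            """Censoring mask from framewise displacement: sum of absolute
            backward differences of the six realignment parameters, with
            rotations (radians) converted to mm on a sphere of given radius.
            motion_params: (T, 6) array, 3 translations (mm) + 3 rotations (rad)."""
            mp = np.asarray(motion_params, float).copy()
            mp[:, 3:] *= radius                      # radians -> mm of arc
            fd = np.abs(np.diff(mp, axis=0)).sum(axis=1)
            fd = np.concatenate([[0.0], fd])         # first frame has no predecessor
            return fd <= fd_thresh                   # True = keep this time point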

  19. A cigarette manufacturer and a managed care company collaborate to censor health information targeted at employees.

    PubMed

    Muggli, Monique E; Hurt, Richard D

    2004-08-01

    A review of internal tobacco company documents showed that the tobacco company Philip Morris and the insurance company CIGNA collaborated to censor accurate information on the harm of smoking and on environmental tobacco smoke exposure from CIGNA health newsletters sent to employees of Philip Morris and its affiliates. From 1996 to 1998, 5 of the 8 CIGNA newsletters discussed in the internal tobacco documents were censored. We recommend that accrediting bodies mandate that health plans not censor employee-directed health information at the request of employers.

  20. SEMIPARAMETRIC EFFICIENT ESTIMATION FOR SHARED-FRAILTY MODELS WITH DOUBLY-CENSORED CLUSTERED DATA

    PubMed Central

    Wang, Jane-Ling

    2018-01-01

    In this paper, we investigate frailty models for clustered survival data that are subject to both left- and right-censoring, termed "doubly-censored data". This model extends the current survival literature by broadening the application of frailty models from right-censoring to the more complicated situation with additional left censoring. Our approach is motivated by a recent Hepatitis B study where the sample consists of families. We adopt a likelihood approach that aims at the nonparametric maximum likelihood estimators (NPMLE). A new algorithm is proposed, which not only works well for clustered data but also improves over existing algorithms for independent, doubly-censored data, a special case in which the frailty variable is a constant equal to one. This special case is well known to be a computational challenge due to the left-censoring feature of the data. The new algorithm not only resolves this challenge but also accommodates the additional frailty variable effectively. Asymptotic properties of the NPMLE are established, along with semiparametric efficiency of the NPMLE for the finite-dimensional parameters. The consistency of bootstrap estimators for the standard errors of the NPMLE is also discussed. We conducted simulations to illustrate the numerical performance and robustness of the proposed algorithm, which is also applied to the Hepatitis B data. PMID:29527068

  1. A consistent NPMLE of the joint distribution function with competing risks data under the dependent masking and right-censoring model.

    PubMed

    Li, Jiahui; Yu, Qiqing

    2016-01-01

    Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model; the CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.

  2. An identifiable model for informative censoring

    USGS Publications Warehouse

    Link, W.A.; Wegman, E.J.; Gantz, D.T.; Miller, J.J.

    1988-01-01

    The usual model for censored survival analysis requires the assumption that censoring of observations arises only due to causes unrelated to the lifetime under consideration. It is easy to envision situations in which this assumption is unwarranted, and in which use of the Kaplan-Meier estimator and associated techniques will lead to unreliable analyses.

  3. Estimation of Recurrence of Colorectal Adenomas with Dependent Censoring Using Weighted Logistic Regression

    PubMed Central

    Hsu, Chiu-Hsieh; Li, Yisheng; Long, Qi; Zhao, Qiuhong; Lance, Peter

    2011-01-01

    In colorectal polyp prevention trials, estimation of the rate of recurrence of adenomas at the end of the trial may be complicated by dependent censoring, that is, time to follow-up colonoscopy and dropout may be dependent on time to recurrence. Assuming that the auxiliary variables capture the dependence between recurrence and censoring times, we propose to fit two working models with the auxiliary variables as covariates to define risk groups and then extend an existing weighted logistic regression method for independent censoring to each risk group to accommodate potential dependent censoring. In a simulation study, we show that the proposed method results in both a gain in efficiency and reduction in bias for estimating the recurrence rate. We illustrate the methodology by analyzing a recurrent adenoma dataset from a colorectal polyp prevention trial. PMID:22065985

  4. Sieve estimation in a Markov illness-death process under dual censoring.

    PubMed

    Boruvka, Audrey; Cook, Richard J

    2016-04-01

    Semiparametric methods are well established for the analysis of a progressive Markov illness-death process observed up to a noninformative right censoring time. However, often the intermediate and terminal events are censored in different ways, leading to a dual censoring scheme. In such settings, unbiased estimation of the cumulative transition intensity functions cannot be achieved without some degree of smoothing. To overcome this problem, we develop a sieve maximum likelihood approach for inference on the hazard ratio. A simulation study shows that the sieve estimator offers improved finite-sample performance over common imputation-based alternatives and is robust to some forms of dependent censoring. The proposed method is illustrated using data from cancer trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. The concordance index C and the Mann-Whitney parameter Pr(X>Y) with randomly censored data.

    PubMed

    Koziol, James A; Jia, Zhenyu

    2009-06-01

    Harrell's c-index or concordance C has been widely used as a measure of separation of two survival distributions. In the absence of censored data, the c-index estimates the Mann-Whitney parameter Pr(X>Y), which has been repeatedly utilized in various statistical contexts. In the presence of randomly censored data, the c-index no longer estimates Pr(X>Y); rather, a parameter that involves the underlying censoring distributions. This is in contrast to Efron's maximum likelihood estimator of the Mann-Whitney parameter, which is recommended in the setting of random censorship.
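
    For reference, Harrell's c is computed only over "usable" pairs, that is, pairs in which the shorter time is an observed event. A minimal O(n^2) Python sketch, ignoring tied event times:

        import numpy as np

        def harrell_c(time, event, risk):
            """Harrell's concordance: among usable pairs, count those where
            higher risk goes with shorter survival; risk ties count 1/2."""
            time = np.asarray(time, float)
            event = np.asarray(event, bool)
            risk = np.asarray(risk, float)
            conc = usable = 0.0
            for i in range(len(time)):
                if not event[i]:
                    continue                    # i must be an observed failure
                for j in range(len(time)):
                    if time[j] > time[i]:       # j outlived i: usable pair
                        usable += 1
                        if risk[i] > risk[j]:
                            conc += 1.0
                        elif risk[i] == risk[j]:
                            conc += 0.5
            return conc / usable

    Under random censoring, the population quantity this estimates involves the censoring distributions, as the abstract notes; it coincides with Pr(X>Y) only in the uncensored case.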

  6. Time-dependent summary receiver operating characteristics for meta-analysis of prognostic studies.

    PubMed

    Hattori, Satoshi; Zhou, Xiao-Hua

    2016-11-20

    Prognostic studies are widely conducted to examine whether biomarkers are associated with patients' prognoses and play important roles in medical decisions. Because findings from one prognostic study may be very limited, meta-analyses may be useful to obtain sound evidence. However, prognostic studies are often analyzed by relying on a study-specific cut-off value, which can lead to difficulty in applying the standard meta-analysis techniques. In this paper, we propose two methods to estimate a time-dependent version of the summary receiver operating characteristics curve for meta-analyses of prognostic studies with a right-censored time-to-event outcome. We introduce a bivariate normal model for the pair of time-dependent sensitivity and specificity and propose a method to form inferences based on summary statistics reported in published papers. This method provides valid inference asymptotically. In addition, we consider a bivariate binomial model. To draw inferences from this bivariate binomial model, we introduce a multiple imputation method. The multiple imputation is found to be approximately proper multiple imputation, and thus the standard Rubin's variance formula is justified from a Bayesian viewpoint. Our simulation study and application to a real dataset revealed that both methods work well with a moderate or large number of studies and that the bivariate binomial model coupled with multiple imputation outperforms the bivariate normal model with a small number of studies. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Threshold regression to accommodate a censored covariate.

    PubMed

    Qian, Jing; Chiou, Sy Han; Maye, Jacqueline E; Atem, Folefac; Johnson, Keith A; Betensky, Rebecca A

    2018-06-22

    In several common study designs, regression modeling is complicated by the presence of censored covariates. Examples of such covariates include maternal age of onset of dementia that may be right censored in an Alzheimer's amyloid imaging study of healthy subjects, metabolite measurements that are subject to limit of detection censoring in a case-control study of cardiovascular disease, and progressive biomarkers whose baseline values are of interest, but are measured post-baseline in longitudinal neuropsychological studies of Alzheimer's disease. We propose threshold regression approaches for linear regression models with a covariate that is subject to random censoring. Threshold regression methods allow for immediate testing of the significance of the effect of a censored covariate. In addition, they provide for unbiased estimation of the regression coefficient of the censored covariate. We derive the asymptotic properties of the resulting estimators under mild regularity conditions. Simulations demonstrate that the proposed estimators have good finite-sample performance, and often offer improved efficiency over existing methods. We also derive a principled method for selection of the threshold. We illustrate the approach in application to an Alzheimer's disease study that investigated brain amyloid levels in older individuals, as measured through positron emission tomography scans, as a function of maternal age of dementia onset, with adjustment for other covariates. We have developed an R package, censCov, for implementation of our method, available at CRAN. © 2018, The International Biometric Society.

  8. Twenty-five years long survival analysis of individual shortleaf pine trees

    Treesearch

    Pradip Saud; Thomas B. Lynch; James M. Guldin

    2016-01-01

    A semiparametric Cox proportional hazards model is preferred when censored data and survival time information are available (Kleinbaum and Klein 1996; Allison 2010). Censored data are observations that have incomplete information related to the survival time or event time of interest. In repeated forest measurements, observations are usually either right censored or...

  9. Multilevel joint competing risk models

    NASA Astrophysics Data System (ADS)

    Karunarathna, G. H. S.; Sooriyarachchi, M. R.

    2017-09-01

    Joint modeling approaches for different outcomes, such as competing-risks time-to-event and count outcomes, are often encountered in biomedical and epidemiological studies in the presence of cluster effects. Hospital length of stay (LOS) is a widely used outcome measure of hospital utilization; it is a benchmark measurement involving multiple terminating events such as discharge, transfer, and death, as well as patients who had not completed the event of interest by the end of follow-up (censored) during hospitalization. Competing risk models provide a method for addressing such multiple destinations, since classical time-to-event models yield biased results when there are multiple events. In this study, the concept of joint modeling has been applied to dengue epidemiology in Sri Lanka, 2006-2008, to assess the relationship between different outcomes of LOS and the platelet count of dengue patients with a district cluster effect. Two key approaches were applied to build the joint scenario. The first approach models each competing risk separately using a binary logistic model, treating all other events as censored under a multilevel discrete time-to-event model, while the platelet counts are assumed to follow a lognormal regression model. The second approach is based on the endogeneity effect in the multilevel competing risks and count model. Model parameters were estimated using maximum likelihood based on the Laplace approximation. Moreover, the study reveals that the joint modeling approach yields more precise results compared to fitting two separate univariate models, in terms of AIC (Akaike Information Criterion).

  10. Evaluation of methods for managing censored results when calculating the geometric mean.

    PubMed

    Mikkonen, Hannah G; Clarke, Bradley O; Dasika, Raghava; Wallis, Christian J; Reichman, Suzie M

    2018-01-01

    Currently, there are conflicting views on the best statistical methods for managing censored environmental data. The method commonly applied by environmental science researchers and professionals is to substitute half the limit of reporting for derivation of summary statistics. This approach has been criticised by some researchers, raising questions around the interpretation of historical scientific data. This study evaluated four complete soil datasets, at three levels of simulated censorship, to test the accuracy of a range of censored data management methods for calculation of the geometric mean. The methods assessed included removal of censored results, substitution of a fixed value (near zero, half the limit of reporting and the limit of reporting), substitution by nearest neighbour imputation, maximum likelihood estimation, regression on order statistics and Kaplan-Meier/survival analysis. This is the first time such a comprehensive range of censored data management methods has been applied to assess the accuracy of calculation of the geometric mean. The results of this study show that, for describing the geometric mean, the simple method of substitution of half the limit of reporting is comparable or more accurate than alternative censored data management methods, including nearest neighbour imputation methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
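
    The substitution method the study found competitive is a one-liner. A minimal Python sketch (function and argument names are illustrative):

        import numpy as np

        def geomean_half_lor(values, censored, lor):
            """Geometric mean with censored results replaced by half the
            limit of reporting (LOR) before log-scale averaging."""
            v = np.where(censored, np.asarray(lor, float) / 2.0, values)
            return float(np.exp(np.mean(np.log(v))))

        # e.g. geomean_half_lor([5.0, 1.0, 1.0], [False, True, True], lor=1.0)
        # treats the two nondetects as 0.5 each.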

  11. Calibrating accelerometer sensor on android phone with Accelerograph TDL 303 QS for earthquake online recorder

    NASA Astrophysics Data System (ADS)

    Riantana, R.; Darsono, D.; Triyono, A.; Azimut, H. B.

    2016-11-01

    Calibration of the Android accelerometer sensor was done by placing the device in a mounting beside the accelerograph TDL 303 QS used as the reference. Both devices were leveled identically, so that their states could be assumed equivalent. Vibrations were then applied to obtain the maximum amplitude values from both sensors, so that the proportionality coefficients of the two could be equated. The Peak Ground Acceleration (PGA) results from both devices were as follows: on the x-axis (EW), the Android sensor gave a PGA of -2.4478145 gal versus -2.5504 gal for the TDL 303 QS; on the y-axis (NS), the Android sensor gave 3.0066964 gal versus 3.2073 gal for the TDL 303 QS; on the z-axis (UD), the Android sensor gave -14.0702377 gal versus -13.2927 gal for the TDL 303 QS. The correction values for the Android accelerometer sensor are ±0.1 gal for the x-axis (EW), ±0.2 gal for the y-axis (NS), and ±0.7 gal for the z-axis (UD).

  12. Effective Measurement of Reliability of Repairable USAF Systems

    DTIC Science & Technology

    2012-09-01

    Hansen presented a course, Concepts and Models for Repairable Systems Reliability, at the 2009 Centro de Investigacion en Matematicas (CIMAT). The ... recurrent event by calculating the mean quantity of recurrent events of the population of systems at risk at that point in time. The number of systems at ... risk is the number of systems that are operating and providing information. [9] Information can be obscured by data censoring and truncation. One

  13. A nonparametric method for assessment of interactions in a median regression model for analyzing right censored data.

    PubMed

    Lee, MinJae; Rahbar, Mohammad H; Talebi, Hooshang

    2018-01-01

    We propose a nonparametric test for interactions when we are concerned with investigation of the simultaneous effects of two or more factors in a median regression model with right censored survival data. Our approach is developed to detect interaction in special situations, when the covariates have a finite number of levels with a limited number of observations in each level, and it allows varying levels of variance and censorship at different levels of the covariates. Through simulation studies, we compare the power of detecting an interaction between the study group variable and a covariate using our proposed procedure with that of the Cox Proportional Hazard (PH) model and censored quantile regression model. We also assess the impact of censoring rate and type on the standard error of the estimators of parameters. Finally, we illustrate application of our proposed method to real life data from Prospective Observational Multicenter Major Trauma Transfusion (PROMMTT) study to test an interaction effect between type of injury and study sites using median time for a trauma patient to receive three units of red blood cells. The results from simulation studies indicate that our procedure performs better than both Cox PH model and censored quantile regression model based on statistical power for detecting the interaction, especially when the number of observations is small. It is also relatively less sensitive to censoring rates or even the presence of conditionally independent censoring that is conditional on the levels of covariates.

  14. Robust inference in discrete hazard models for randomized clinical trials.

    PubMed

    Nguyen, Vinh Q; Gillen, Daniel L

    2012-10-01

    Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.

  15. Two-sample tests and one-way MANOVA for multivariate biomarker data with nondetects.

    PubMed

    Thulin, M

    2016-09-10

    Testing whether the mean vector of a multivariate set of biomarkers differs between several populations is an increasingly common problem in medical research. Biomarker data is often left censored because some measurements fall below the laboratory's detection limit. We investigate how such censoring affects multivariate two-sample and one-way multivariate analysis of variance tests. Type I error rates, power and robustness to increasing censoring are studied, under both normality and non-normality. Parametric tests are found to perform better than non-parametric alternatives, indicating that the current recommendations for analysis of censored multivariate data may have to be revised. Copyright © 2016 John Wiley & Sons, Ltd.

  16. A proportional hazards regression model for the subdistribution with right-censored and left-truncated competing risks data

    PubMed Central

    Zhang, Xu; Zhang, Mei-Jie; Fine, Jason

    2012-01-01

    With competing risks failure time data, one often needs to assess the covariate effects on the cumulative incidence probabilities. Fine and Gray proposed a proportional hazards regression model to directly model the subdistribution of a competing risk. They developed the estimating procedure for right-censored competing risks data, based on the inverse probability of censoring weighting. Right-censored and left-truncated competing risks data sometimes occur in biomedical research. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with right-censored and left-truncated data. We adopt a new weighting technique to estimate the parameters in this model. We have derived the large sample properties of the proposed estimators. To illustrate the application of the new method, we analyze the failure time data for children with acute leukemia. In this example, the failure times for children who had bone marrow transplants were left truncated. PMID:21557288

  17. JPRS Report, Near East and South Asia.

    DTIC Science & Technology

    1991-07-08

    "... and our culture. We are not going to play at being censors," Ramdane adds. ... collaborators, two excellent connoisseurs of Berber language and culture, T... ... its conceived plan, wars will be started between countries for their consumption. There is only one way to avoid this new international imperialist system and that is, just ... propaganda war, America censored war coverage. The Western media which protested censor restrictions imposed during the

  18. Work Status Choice and the Distribution of Family Earnings.

    DTIC Science & Technology

    1984-11-01

    were in the market. Since wages for secondary earners are observable only for market participants, censoring corrections will have to be made to...obtain the true correlation of earners’ earnings. The problem of censoring corrections has been extensively studied in the female labor supply...earners are defined to be male members other than the HH. The censoring framework is fairly similar to the occupation-choice model discussed earlier

  19. Influence assessment in censored mixed-effects models using the multivariate Student’s-t distribution

    PubMed Central

    Matos, Larissa A.; Bandyopadhyay, Dipankar; Castro, Luis M.; Lachos, Victor H.

    2015-01-01

    In biomedical studies on HIV RNA dynamics, viral loads generate repeated measures that are often subject to upper and lower detection limits, and hence these responses are either left- or right-censored. Linear and non-linear mixed-effects censored (LMEC/NLMEC) models are routinely used to analyse these longitudinal data, with normality assumptions for the random effects and residual errors. However, the derived inference may not be robust when these underlying normality assumptions are questionable, especially in the presence of outliers and heavy tails. Motivated by this, Matos et al. (2013b) recently proposed an exact EM-type algorithm for LMEC/NLMEC models using a multivariate Student's-t distribution, with closed-form expressions at the E-step. In this paper, we develop influence diagnostics for LMEC/NLMEC models using the multivariate Student's-t density, based on the conditional expectation of the complete data log-likelihood. This partially eliminates the complexity associated with the approach of Cook (1977, 1986) for censored mixed-effects models. The new methodology is illustrated via an application to a longitudinal HIV dataset. In addition, a simulation study explores the accuracy of the proposed measures in detecting possible influential observations for heavy-tailed censored data under different perturbation and censoring schemes. PMID:26190871

  20. Double inverse-weighted estimation of cumulative treatment effects under nonproportional hazards and dependent censoring.

    PubMed

    Schaubel, Douglas E; Wei, Guanghui

    2011-03-01

    In medical studies of time-to-event data, nonproportional hazards and dependent censoring are very common issues when estimating the treatment effect. A traditional method for dealing with time-dependent treatment effects is to model the time-dependence parametrically. Limitations of this approach include the difficulty to verify the correctness of the specified functional form and the fact that, in the presence of a treatment effect that varies over time, investigators are usually interested in the cumulative as opposed to instantaneous treatment effect. In many applications, censoring time is not independent of event time. Therefore, we propose methods for estimating the cumulative treatment effect in the presence of nonproportional hazards and dependent censoring. Three measures are proposed, including the ratio of cumulative hazards, relative risk, and difference in restricted mean lifetime. For each measure, we propose a double inverse-weighted estimator, constructed by first using inverse probability of treatment weighting (IPTW) to balance the treatment-specific covariate distributions, then using inverse probability of censoring weighting (IPCW) to overcome the dependent censoring. The proposed estimators are shown to be consistent and asymptotically normal. We study their finite-sample properties through simulation. The proposed methods are used to compare kidney wait-list mortality by race. © 2010, The International Biometric Society.
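
    A loose sketch of the double weighting idea in Python (not the authors' estimator): IPTW comes from a propensity model, and IPCW from a Kaplan-Meier fit of the censoring distribution. A marginal KM is used here for brevity; covariate-dependent censoring, as in the paper's setting, would require modeling the censoring hazard with covariates.

        import numpy as np
        from lifelines import KaplanMeierFitter
        from sklearn.linear_model import LogisticRegression

        def iptw_ipcw_weights(X, treat, time, event, horizon):
            """Double inverse weighting sketch: IPTW balances the treatment
            groups; IPCW corrects for censoring up to min(time, horizon)."""
            treat = np.asarray(treat, int)
            time = np.asarray(time, float)
            event = np.asarray(event, int)
            ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
            iptw = np.where(treat == 1, 1.0 / ps, 1.0 / (1.0 - ps))
            kmc = KaplanMeierFitter().fit(time, event_observed=1 - event)  # censoring dist
            tt = np.minimum(time, horizon)
            ipcw = 1.0 / np.clip(kmc.predict(tt).to_numpy(), 1e-8, None)
            return iptw * ipcw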

  1. On the estimation of intracluster correlation for time-to-event outcomes in cluster randomized trials.

    PubMed

    Kalia, Sumeet; Klar, Neil; Donner, Allan

    2016-12-30

    Cluster randomized trials (CRTs) involve the random assignment of intact social units rather than independent subjects to intervention groups. Time-to-event outcomes often are endpoints in CRTs. Analyses of such data need to account for the correlation among cluster members. The intracluster correlation coefficient (ICC) is used to assess the similarity among binary and continuous outcomes that belong to the same cluster. However, estimating the ICC in CRTs with time-to-event outcomes is a challenge because of the presence of censored observations. The literature suggests that the ICC may be estimated using either censoring indicators or observed event times. A simulation study explores the effect of administrative censoring on estimating the ICC. Results show that ICC estimators derived from censoring indicators or observed event times are negatively biased. Analytic work further supports these results. Observed event times are preferred to estimate the ICC under minimum frequency of administrative censoring. To our knowledge, the existing literature provides no practical guidance on the estimation of ICC when substantial amount of administrative censoring is present. The results from this study corroborate the need for further methodological research on estimating the ICC for correlated time-to-event outcomes. Copyright © 2016 John Wiley & Sons, Ltd.

  2. Regulating cinematic stories about reproduction: pregnancy, childbirth, abortion and movie censorship in the US, 1930-1958.

    PubMed

    Kirby, David A

    2017-09-01

    In the mid-twentieth century film studios sent their screenplays to Hollywood's official censorship body, the Production Code Administration (PCA), and to the Catholic Church's Legion of Decency for approval and recommendations for revision. This article examines the negotiations between filmmakers and censorship groups in order to show the stories that censors did, and did not, want told about pregnancy, childbirth and abortion, as well as how studios fought to tell their own stories about human reproduction. I find that censors considered pregnancy to be a state of grace and a holy obligation that was restricted to married women. For censors, human reproduction was not only a private matter, it was also an unpleasant biological process whose entertainment value was questionable. They worried that realistic portrayals of pregnancy and childbirth would scare young women away from pursuing motherhood. In addition, I demonstrate how filmmakers overcame censors' strict prohibitions against abortion by utilizing ambiguity in their storytelling. Ultimately, I argue that censors believed that pregnancy and childbirth should be celebrated but not seen. But if pregnancy and childbirth were required then censors preferred mythic versions of motherhood instead of what they believed to be the sacred but horrific biological reality of human reproduction.

  3. Censoring approach to the detection limits in X-ray fluorescence analysis

    NASA Astrophysics Data System (ADS)

    Pajek, M.; Kubala-Kukuś, A.

    2004-10-01

    We demonstrate that the effect of detection limits in X-ray fluorescence analysis (XRF), which limits the determination of very low concentrations of trace elements and results in the appearance of so-called "nondetects", can be accounted for using the statistical concept of censoring. More precisely, the results of such measurements can be viewed as left randomly censored data, which can further be analyzed using the Kaplan-Meier method, correcting the data for the presence of nondetects. Using this approach, the results of measured, detection-limit-censored concentrations can be interpreted in a nonparametric manner, including the correction for the nondetects, i.e. the measurements in which the concentrations were found to be below the actual detection limits. Moreover, using the Monte Carlo simulation technique we show that with the Kaplan-Meier approach the corrected mean concentrations for a population of samples can be estimated within a few percent uncertainty with respect to the simulated, uncensored data. This practically means that the final uncertainties of the estimated mean values are limited by the number of studied samples and not by the correction procedure itself. The discussed random left-censoring approach was applied to analyze the XRF detection-limit-censored concentration measurements of trace elements in biomedical samples.
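
    The flipping trick that turns left-censored nondetects into right-censored data is compact in code. A minimal Python sketch using lifelines; the flipping constant M is arbitrary so long as it exceeds every observed value:

        import numpy as np
        from lifelines import KaplanMeierFitter
        from lifelines.utils import restricted_mean_survival_time

        def km_mean_left_censored(conc, nondetect):
            """Mean concentration with nondetects treated as left-censored:
            flip x -> M - x so they become right-censored, fit Kaplan-Meier,
            take the restricted mean, and flip back."""
            conc = np.asarray(conc, float)
            nd = np.asarray(nondetect, bool)
            M = conc.max() + 1.0                 # any constant above the data
            flipped = M - conc
            km = KaplanMeierFitter().fit(flipped, event_observed=~nd)
            rmst = restricted_mean_survival_time(km, t=flipped.max())
            return M - rmst                      # E[X] = M - E[M - X]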

  4. The association between cinacalcet use and missed in-center hemodialysis treatment rate.

    PubMed

    Brunelli, Steven M; Sibbel, Scott; Dluzniewski, Paul J; Cooper, Kerry; Bensink, Mark E; Bradbury, Brian D

    2016-11-01

    Missed in-center hemodialysis treatments (MHT) are a general indicator of health status in hemodialysis patients. This analysis was conducted to estimate the association between cinacalcet use and MHT rate. We studied patients receiving hemodialysis and prescription benefits services from a large dialysis organization. Incident cinacalcet users were propensity score matched to controls on 31 demographic, clinical, and laboratory variables. We applied inverse probability (IP) of censoring and crossover weights to account for informative censoring. Weighted negative binomial modeling was used to estimate MHT rates and pooled logistics models were used to estimate the association between cinacalcet use and MHT. Baseline demographic and clinical variables included serum calcium, phosphorus, parathyroid hormone, and vitamin D use, and were balanced between 15,474 new cinacalcet users and 15,474 matched controls. In an analysis based on intention-to-treat principles, 40.8% of cinacalcet users and 46.5% of nonusers were censored. MHT rate was 13% lower among cinacalcet initiators versus controls: IP of censoring weighted incidence rate ratio was 0.87 (95% confidence interval [CI]: 0.84-0.90; p < 0.001). In analyses based on as-treated principles, 72.8% and 61.5% of cinacalcet users and nonusers, respectively, crossed over or were censored. MHT rate was 15% lower among cinacalcet initiators versus controls: IP of censoring/crossover weighted incidence rate ratio was 0.85 (95% CI: 0.82-0.87; p < 0.001). After controlling for indication and differential censoring, cinacalcet treatment was associated with lower MHT rates, which may reflect better health status. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Flexible Modeling of Survival Data with Covariates Subject to Detection Limits via Multiple Imputation.

    PubMed

    Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen

    2014-01-01

    Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally-efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently-conducted GenIMS study.
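
    A stripped-down sketch of the imputation step in Python (illustrative only; the proposed procedure uses a flexible seminonparametric error distribution rather than the lognormal assumed here): fit the covariate distribution by censored maximum likelihood, then draw each nondetect from the fitted distribution truncated at its detection limit; the outcome model is then fit on each completed data set and results pooled by Rubin's rules.

        import numpy as np
        from scipy import stats, optimize

        def impute_below_dl(x, below, dl, m=20, seed=1):
            """Draw m imputed versions of a covariate left-censored at a
            scalar detection limit dl. x: values, with nondetects holding
            any positive placeholder (e.g. dl); below: True for nondetects."""
            x = np.asarray(x, float)
            below = np.asarray(below, bool)
            def nll(theta):                       # censored lognormal likelihood
                mu, logsig = theta
                sig = np.exp(logsig)
                ll = np.where(below,
                              stats.norm.logcdf((np.log(dl) - mu) / sig),
                              stats.norm.logpdf(np.log(x), mu, sig) - np.log(x))
                return -ll.sum()
            res = optimize.minimize(nll, [np.log(x[~below]).mean(), 0.0],
                                    method="Nelder-Mead")
            mu, sig = res.x[0], np.exp(res.x[1])
            rng = np.random.default_rng(seed)
            out = []
            for _ in range(m):                    # truncated draws for nondetects
                u = rng.uniform(0.0, stats.norm.cdf((np.log(dl) - mu) / sig),
                                size=int(below.sum()))
                xi = x.copy()
                xi[below] = np.exp(mu + sig * stats.norm.ppf(u))
                out.append(xi)
            return out  # fit the outcome model on each, pool by Rubin's rules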

  6. Final Report for Dynamic Models for Causal Analysis of Panel Data. Approaches to the Censoring Problem in Analysis of Event Histories. Part III, Chapter 2.

    ERIC Educational Resources Information Center

    Tuma, Nancy Brandon; Hannan, Michael T.

    The document, part of a series of chapters described in SO 011 759, considers the problem of censoring in the analysis of event-histories (data on dated events, including dates of change from one qualitative state to another). Censoring refers to the lack of information on events that occur before or after the period for which data are available.…

  7. Is Support of Censoring Controversial Media Content for the Good of Others? Sexual Strategies and Support of Censoring Pro-Alcohol Advertising.

    PubMed

    Zhang, Jinguang

    2017-01-01

    At least in the United States, there are widespread concerns with advertising that encourages alcohol consumption, and previous research explains those concerns as aiming to protect others from the harm of excessive alcohol use. Drawing on sexual strategies theory, we hypothesized that support of censoring pro-alcohol advertising is ultimately self-benefiting regardless of its altruistic effect at a proximate level. Excessive drinking positively correlates with having casual sex, and casual sex threatens monogamy, one of the major means with which people adopting a long-term sexual strategy increase their inclusive fitness. Then, one way for long-term strategists to protect monogamy, and thus their reproductive interest, is to support censoring pro-alcohol advertising, thereby preventing others from becoming excessive drinkers (and consequently having casual sex) under media influence. Supporting this hypothesis, three studies consistently showed that restricted sociosexuality positively correlated with support of censoring pro-alcohol advertising before and after various value-, ideological-, and moral-foundation variables were controlled for. Also as predicted, Study 3 revealed a significant indirect effect of sociosexuality on censorship support through perceived media influence on others but not through perceived media influence on self. These findings further supported a self-interest analysis of issue opinions, extended third-person-effect research on support of censoring pro-alcohol advertising, and suggested a novel approach to analyzing media censorship support.

  8. A semiparametric Bayesian proportional hazards model for interval censored data with frailty effects.

    PubMed

    Henschel, Volkmar; Engel, Jutta; Hölzel, Dieter; Mansmann, Ulrich

    2009-02-10

    Multivariate analysis of interval-censored event data based on classical likelihood methods is notoriously cumbersome, and likelihood inference for models that additionally include random effects is not available at all. Existing algorithms pose problems for practical users, such as costly matrix inversion, slow convergence, and no assessment of statistical uncertainty. MCMC procedures combined with imputation are used to implement hierarchical models for interval-censored data within a Bayesian framework. Two examples from clinical practice demonstrate the handling of clustered interval-censored event times as well as multilayer random effects for inter-institutional quality assessment. The software developed is called survBayes and is freely available at CRAN. The proposed software supports the solution of complex analyses in many fields of clinical epidemiology as well as health services research.

  9. Model Calibration with Censored Data

    DOE PAGES

    Cao, Fang; Ba, Shan; Brenneman, William A.; ...

    2017-06-28

    Here, the purpose of model calibration is to make the model predictions closer to reality. The classical Kennedy-O'Hagan approach is widely used for model calibration, which can account for the inadequacy of the computer model while simultaneously estimating the unknown calibration parameters. In many applications, the phenomenon of censoring occurs when the exact outcome of the physical experiment is not observed, but is only known to fall within a certain region. In such cases, the Kennedy-O'Hagan approach cannot be used directly, and we propose a method to incorporate the censoring information when performing model calibration. The method is applied to study the compression phenomenon of liquid inside a bottle. The results show significant improvement over the traditional calibration methods, especially when the number of censored observations is large.

  10. Statistical analysis tables for truncated or censored samples

    NASA Technical Reports Server (NTRS)

    Cohen, A. C.; Cooley, C. G.

    1971-01-01

    Compilation describes characteristics of truncated and censored samples, and presents six illustrations of practical use of tables in computing mean and variance estimates for normal distribution using selected samples.

  11. Nonparametric and Semiparametric Regression Estimation for Length-biased Survival Data

    PubMed Central

    Shen, Yu; Ning, Jing; Qin, Jing

    2016-01-01

    For the past several decades, nonparametric and semiparametric modeling for conventional right-censored survival data has been investigated intensively under a noninformative censoring mechanism. However, these methods may not be applicable for analyzing right-censored survival data that arise from prevalent cohorts when the failure times are subject to length-biased sampling. This review article is intended to provide a summary of some newly developed methods as well as established methods for analyzing length-biased data. PMID:27086362

  12. Marginal regression analysis of recurrent events with coarsened censoring times.

    PubMed

    Hu, X Joan; Rosychuk, Rhonda J

    2016-12-01

    Motivated by an ongoing pediatric mental health care (PMHC) study, this article presents weakly structured methods for analyzing doubly censored recurrent event data where only coarsened information on censoring is available. The study extracted administrative records of emergency department visits from provincial health administrative databases. The available information of each individual subject is limited to a subject-specific time window determined up to concealed data. To evaluate time-dependent effect of exposures, we adapt the local linear estimation with right censored survival times under the Cox regression model with time-varying coefficients (cf. Cai and Sun, Scandinavian Journal of Statistics 2003, 30, 93-111). We establish the pointwise consistency and asymptotic normality of the regression parameter estimator, and examine its performance by simulation. The PMHC study illustrates the proposed approach throughout the article. © 2016, The International Biometric Society.

  13. Total-reflection X-ray fluorescence studies of trace elements in biomedical samples

    NASA Astrophysics Data System (ADS)

    Kubala-Kukuś, A.; Braziewicz, J.; Pajek, M.

    2004-08-01

    Application of total-reflection X-ray fluorescence (TXRF) analysis in studies of trace element contents in biomedical samples is discussed in the following aspects: (i) the nature of trace element concentration distributions, (ii) the censoring approach to detection limits, and (iii) the comparison of two sets of censored data. The paper summarizes recent results on these topics, in particular the lognormal, or more generally logstable, nature of the concentration distributions of trace elements; the random left-censoring and Kaplan-Meier approach accounting for detection limits; and, finally, the application of the logrank test to compare the censored concentrations measured for two groups. These new aspects, which are of importance for applications of TXRF in different fields, are discussed here in the context of TXRF studies of trace elements in various samples of medical interest.
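
    The two-group logrank comparison of censored concentration samples can reuse standard survival software after flipping, as in this illustrative Python sketch (a common flipping constant must be used for both groups):

        import numpy as np
        from lifelines.statistics import logrank_test

        def compare_censored_groups(conc_a, nd_a, conc_b, nd_b):
            """Log-rank test for two left-censored samples: flip both groups
            with a common constant so nondetects become right-censored."""
            a = np.asarray(conc_a, float)
            b = np.asarray(conc_b, float)
            M = max(a.max(), b.max()) + 1.0       # common flipping constant
            res = logrank_test(M - a, M - b,
                               event_observed_A=~np.asarray(nd_a, bool),
                               event_observed_B=~np.asarray(nd_b, bool))
            return res.p_value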

  14. Application of a Weighted Regression Model for Reporting Nutrient and Sediment Concentrations, Fluxes, and Trends in Concentration and Flux for the Chesapeake Bay Nontidal Water-Quality Monitoring Network, Results Through Water Year 2012

    USGS Publications Warehouse

    Chanat, Jeffrey G.; Moyer, Douglas L.; Blomquist, Joel D.; Hyer, Kenneth E.; Langland, Michael J.

    2016-01-13

    Inconsistencies related to changing laboratory methods were also examined via two manipulative experiments. In the first experiment, increasing and decreasing “stair-step” patterns of changes in censoring level, overall representing a factor-of-five change in the laboratory reporting limit, were artificially imposed on a 27-year record with no censoring and a period-of-record concentration trend of –68.4 percent. Trends estimated on the basis of the manipulated records were broadly similar to the original trend (–63.6 percent for decreasing censoring levels and –70.3 percent for increasing censoring levels), lending a degree of confidence that the survival regression routines upon which WRTDS is based are generally robust to data censoring. The second experiment considered an abrupt disappearance of low-concentration observations of total phosphorus, associated with a laboratory method change and not reflected through censoring, near the middle of a 28-year record. By process of elimination, an upward shift in the estimated flow-normalized concentration trend line around the same time was identified as a likely artifact resulting from the laboratory method change, although a contemporaneous change in watershed processes cannot be ruled out. Decisions as to how to treat records with potential sampling protocol or laboratory methods-related artifacts should be made on a case-by-case basis, and trend results should be appropriately qualified.

  15. Annotation, submission and screening of repetitive elements in Repbase: RepbaseSubmitter and Censor.

    PubMed

    Kohany, Oleksiy; Gentles, Andrew J; Hankus, Lukasz; Jurka, Jerzy

    2006-10-25

    Repbase is a reference database of eukaryotic repetitive DNA, which includes prototypic sequences of repeats and basic information described in annotations. Updating and maintenance of the database requires specialized tools, which we have created and made available for use with Repbase, and which may be useful as a template for other curated databases. We describe the software tools RepbaseSubmitter and Censor, which are designed to facilitate updating and screening the content of Repbase. RepbaseSubmitter is a java-based interface for formatting and annotating Repbase entries. It eliminates many common formatting errors, and automates actions such as calculation of sequence lengths and composition, thus facilitating curation of Repbase sequences. In addition, it has several features for predicting protein coding regions in sequences; searching and including Pubmed references in Repbase entries; and searching the NCBI taxonomy database for correct inclusion of species information and taxonomic position. Censor is a tool to rapidly identify repetitive elements by comparison to known repeats. It uses WU-BLAST for speed and sensitivity, and can conduct DNA-DNA, DNA-protein, or translated DNA-translated DNA searches of genomic sequence. Defragmented output includes a map of repeats present in the query sequence, with the options to report masked query sequence(s), repeat sequences found in the query, and alignments. Censor and RepbaseSubmitter are available as both web-based services and downloadable versions. They can be found at http://www.girinst.org/repbase/submission.html (RepbaseSubmitter) and http://www.girinst.org/censor/index.php (Censor).

  16. Research participant compensation: A matter of statistical inference as well as ethics.

    PubMed

    Swanson, David M; Betensky, Rebecca A

    2015-11-01

    The ethics of compensation of research subjects for participation in clinical trials has been debated for years. One ethical issue of concern is variation among subjects in the level of compensation for identical treatments. Surprisingly, the impact of variation on the statistical inferences made from trial results has not been examined. We seek to identify how variation in compensation may influence any existing dependent censoring in clinical trials, thereby also influencing inference about the survival curve, hazard ratio, or other measures of treatment efficacy. In simulation studies, we consider a model for how compensation structure may influence the censoring model. Under existing dependent censoring, we estimate survival curves under different compensation structures and observe how these structures induce variability in the estimates. We show through this model that if the compensation structure affects the censoring model and dependent censoring is present, then variation in that structure induces variation in the estimates and affects the accuracy of estimation and inference on treatment efficacy. From the perspectives of both ethics and statistical inference, standardization and transparency in the compensation of participants in clinical trials are warranted. Copyright © 2015 Elsevier Inc. All rights reserved.
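
    A minimal sketch of the mechanism at issue, assuming the lifelines package and an invented generative model rather than the authors' simulation design: a frailty shared by the event and dropout processes creates dependent censoring, and changing the compensation-driven dropout rate then shifts the Kaplan-Meier estimate.

```python
# Sketch: compensation-dependent censoring shifting the Kaplan-Meier estimate.
# Hypothetical generative model; assumes the lifelines package.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(7)
n = 5000
frailty = rng.gamma(2.0, 0.5, n)          # latent health; frailer -> earlier event
T = rng.exponential(1.0 / frailty)        # event times depend on frailty

for comp_rate in (0.5, 2.0):              # low vs high compensation
    # Dropout depends on the same frailty (dependent censoring); higher
    # compensation delays dropout, changing how much that dependence distorts.
    C = rng.exponential(comp_rate / frailty)
    time, event = np.minimum(T, C), T <= C
    kmf = KaplanMeierFitter().fit(time, event_observed=event)
    s1 = kmf.survival_function_at_times(1.0).iloc[0]
    print(f"compensation scale {comp_rate}: estimated S(1) = {s1:.3f}")
```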

  17. Estimation of distributional parameters for censored trace level water quality data: 2. Verification and applications

    USGS Publications Warehouse

    Helsel, Dennis R.; Gilliom, Robert J.

    1986-01-01

    Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.

  18. The Cosmological Evolution of Radio Sources with CENSORS

    NASA Technical Reports Server (NTRS)

    Brookes, Mairi; Best, Philip; Peacock, John; Dunlop, James; Rottgering, Huub

    2006-01-01

    The CENSORS survey, selected from the NVSS, has been followed up using EIS, K-band imaging and spectroscopic observations to produce a radio sample capable of probing the source density in the regime z > 2.5. With a current spectroscopic completeness of 62%, CENSORS has been used in direct modeling of RLF evolution and in V/V_max tests. There is evidence for a shallow decline in the number density of sources in the luminosity range 10^26 - 10^27 W Hz^-1 at 1.4 GHz.

  19. Probabilistic PCA of censored data: accounting for uncertainties in the visualization of high-throughput single-cell qPCR data.

    PubMed

    Buettner, Florian; Moignard, Victoria; Göttgens, Berthold; Theis, Fabian J

    2014-07-01

    High-throughput single-cell quantitative real-time polymerase chain reaction (qPCR) is a promising technique allowing for new insights in complex cellular processes. However, the PCR reaction can be detected only up to a certain detection limit, whereas failed reactions could be due to low or absent expression, and the true expression level is unknown. Because this censoring can occur for high proportions of the data, it is one of the main challenges when dealing with single-cell qPCR data. Principal component analysis (PCA) is an important tool for visualizing the structure of high-dimensional data as well as for identifying subpopulations of cells. However, to date it is not clear how to perform a PCA of censored data. We present a probabilistic approach that accounts for the censoring and evaluate it for two typical datasets containing single-cell qPCR data. We use the Gaussian process latent variable model framework to account for censoring by introducing an appropriate noise model and allowing a different kernel for each dimension. We evaluate this new approach for two typical qPCR datasets (of mouse embryonic stem cells and blood stem/progenitor cells, respectively) by performing linear and non-linear probabilistic PCA. Taking the censoring into account results in a 2D representation of the data, which better reflects its known structure: in both datasets, our new approach results in a better separation of known cell types and is able to reveal subpopulations in one dataset that could not be resolved using standard PCA. The implementation was based on the existing Gaussian process latent variable model toolbox (https://github.com/SheffieldML/GPmat); extensions for noise models and kernels accounting for censoring are available at http://icb.helmholtz-muenchen.de/censgplvm. © The Author 2014. Published by Oxford University Press. All rights reserved.

  20. Probabilistic PCA of censored data: accounting for uncertainties in the visualization of high-throughput single-cell qPCR data

    PubMed Central

    Buettner, Florian; Moignard, Victoria; Göttgens, Berthold; Theis, Fabian J.

    2014-01-01

    Motivation: High-throughput single-cell quantitative real-time polymerase chain reaction (qPCR) is a promising technique allowing for new insights in complex cellular processes. However, the PCR reaction can be detected only up to a certain detection limit, whereas failed reactions could be due to low or absent expression, and the true expression level is unknown. Because this censoring can occur for high proportions of the data, it is one of the main challenges when dealing with single-cell qPCR data. Principal component analysis (PCA) is an important tool for visualizing the structure of high-dimensional data as well as for identifying subpopulations of cells. However, to date it is not clear how to perform a PCA of censored data. We present a probabilistic approach that accounts for the censoring and evaluate it for two typical datasets containing single-cell qPCR data. Results: We use the Gaussian process latent variable model framework to account for censoring by introducing an appropriate noise model and allowing a different kernel for each dimension. We evaluate this new approach for two typical qPCR datasets (of mouse embryonic stem cells and blood stem/progenitor cells, respectively) by performing linear and non-linear probabilistic PCA. Taking the censoring into account results in a 2D representation of the data, which better reflects its known structure: in both datasets, our new approach results in a better separation of known cell types and is able to reveal subpopulations in one dataset that could not be resolved using standard PCA. Availability and implementation: The implementation was based on the existing Gaussian process latent variable model toolbox (https://github.com/SheffieldML/GPmat); extensions for noise models and kernels accounting for censoring are available at http://icb.helmholtz-muenchen.de/censgplvm. Contact: fbuettner.phys@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24618470

  1. Nonparametric autocovariance estimation from censored time series by Gaussian imputation.

    PubMed

    Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K

    2009-02-01

    One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
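
    A deliberately simplified, marginal variant of the idea, not the authors' conditional Gaussian scheme: left-censored points of an AR(1) series are imputed from a truncated normal, and sample autocovariances are averaged over imputations.

```python
# Sketch: autocovariance of a left-censored AR(1) series via repeated
# truncated-normal imputation. Marginal imputation understates the serial
# dependence at censored points; a simplified stand-in for the real method.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)
n, phi = 2000, 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
c = np.quantile(x, 0.25)                   # censor the lowest 25% of values
cens = x < c

def acov(z, lag):
    z = z - z.mean()
    return np.mean(z[:-lag] * z[lag:])

mu, sd = x[~cens].mean(), x[~cens].std()   # crude moments from detected part
b = (c - mu) / sd                          # truncate imputations to (-inf, c]
est = []
for _ in range(50):                        # multiple imputations
    z = np.where(cens, truncnorm.rvs(-np.inf, b, loc=mu, scale=sd,
                                     size=n, random_state=rng), x)
    est.append([acov(z, k) for k in (1, 2, 3)])
print("imputed acov(1..3):", np.round(np.mean(est, axis=0), 3))
print("true    acov(1..3):", [round(phi**k / (1 - phi**2), 4) for k in (1, 2, 3)])
```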

  2. Protecting the Innocence of Youth: Moral Sanctity Values Underlie Censorship From Young Children.

    PubMed

    Anderson, Rajen A; Masicampo, E J

    2017-11-01

    Three studies examined the relationship between people's moral values (drawing on moral foundations theory) and their willingness to censor immoral acts from children. Results revealed that diverse moral values did not predict censorship judgments. It was not the case that participants who valued loyalty and authority, respectively, sought to censor depictions of disloyal and disobedient acts. Rather, censorship intentions were predicted by a single moral value: sanctity. The more people valued sanctity, the more willing they were to censor from children, regardless of the types of violations depicted (impurity, disloyalty, disobedience, etc.). Furthermore, people who valued sanctity objected to indecent exposure only to apparently innocent and pure children: those who were relatively young and who had not been previously exposed to immoral acts. These data suggest that sanctity, purity, and the preservation of innocence underlie intentions to censor from young children.

  3. Repeated Measurements on Distinct Scales With Censoring—A Bayesian Approach Applied to Microarray Analysis of Maize

    PubMed Central

    Love, Tanzy; Carriquiry, Alicia

    2009-01-01

    We analyze data collected in a somatic embryogenesis experiment carried out on Zea mays at Iowa State University. The main objective of the study was to identify the set of genes in maize that actively participate in embryo development. Embryo tissue was sampled and analyzed at various time periods and under different mediums and light conditions. As is the case in many microarray experiments, the operator scanned each slide multiple times to find the slide-specific ‘optimal’ laser and sensor settings. The multiple readings of each slide are repeated measurements on different scales with differing censoring; they cannot be considered to be replicate measurements in the traditional sense. Yet it has been shown that the choice of reading can have an impact on genetic inference. We propose a hierarchical modeling approach to estimating gene expression that combines all available readings on each spot and accounts for censoring in the observed values. We assess the statistical properties of the proposed expression estimates using a simulation experiment. As expected, combining all available scans using an approach with good statistical properties results in expression estimates with noticeably lower bias and root mean squared error relative to other approaches that have been proposed in the literature. Inferences drawn from the somatic embryogenesis experiment, which motivated this work, changed drastically when data were analyzed using the standard approaches or using the methodology we propose. PMID:19960120

  4. Estimation of distributional parameters for censored trace level water quality data: 1. Estimation techniques

    USGS Publications Warehouse

    Gilliom, Robert J.; Helsel, Dennis R.

    1986-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
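
    A bare-bones sketch of the log-probability regression idea, with simplified plotting positions and no claim to match the USGS implementation: regress the logarithms of detected concentrations on their normal scores, then read off estimates for the censored values from the low tail of the fitted line.

```python
# Sketch: log-probability regression for a singly censored sample.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
conc = rng.lognormal(mean=0.0, sigma=1.0, size=50)
dl = 0.5                                   # detection limit
detected = np.sort(conc[conc >= dl])
n, k = conc.size, int((conc < dl).sum())   # k censored observations

# Plotting positions of the detected values within the full sample.
pp = (k + np.arange(1, detected.size + 1) - 0.5) / n
slope, intercept = np.polyfit(norm.ppf(pp), np.log(detected), 1)

# Censored values estimated from the zero-to-censoring-level tail of the fit.
pp_cens = (np.arange(1, k + 1) - 0.5) / n
est_cens = np.exp(intercept + slope * norm.ppf(pp_cens))
full = np.concatenate([est_cens, detected])
print("estimated mean:", round(full.mean(), 3),
      "  true mean:", round(float(np.exp(0.5)), 3))
```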

  5. Estimation of distributional parameters for censored trace level water quality data. 1. Estimation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1986-02-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.

  6. Estimation of distributional parameters for censored trace-level water-quality data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1984-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.

  7. Topics on distance correlation, feature screening and lifetime expectancy with application to Beaver Dam eye study data

    NASA Astrophysics Data System (ADS)

    Kong, Jing

    This thesis comprises four pieces of work. In Chapter 1, we present a method for examining mortality as it is seen to run in families, along with lifestyle factors that are also seen to run in families, in the subpopulation of the Beaver Dam Eye Study that had died by 2011. We find significant distance correlations between death ages, lifestyle factors, and family relationships. Considering only sib pairs compared to unrelated persons, the distance correlation between siblings and mortality is, not surprisingly, stronger than that between more distantly related family members and mortality. Chapter 2 introduces a feature screening procedure based on distance correlation and covariance. We demonstrate a property of distance covariance, which is incorporated in a novel feature screening procedure that uses distance correlation as a stopping criterion. The approach is applied to two real examples, namely the well-known small round blue cell tumors data and the Cancer Genome Atlas ovarian cancer data. Chapter 3 turns to right-censored human longevity data and the estimation of lifetime expectancy. We propose a general framework of backward multiple imputation for estimating the conditional lifetime expectancy function and the variance of the estimator in the right-censoring setting, and we prove the properties of the estimator. In addition, we apply the method to the Beaver Dam Eye Study data to study human longevity, where expected human lifetime is modeled with smoothing spline ANOVA based on covariates including baseline age, gender, lifestyle factors, and disease variables. Chapter 4 compares two imputation methods for right-censored data, namely the well-known Buckley-James estimator and the backward imputation method proposed in Chapter 3, and shows that the backward imputation method is less biased and more robust under heterogeneity.

  8. Methods for a longitudinal quantitative outcome with a multivariate Gaussian distribution multi-dimensionally censored by therapeutic intervention.

    PubMed

    Sun, Wanjie; Larsen, Michael D; Lachin, John M

    2014-04-15

    In longitudinal studies, a quantitative outcome (such as blood pressure) may be altered during follow-up by the administration of a non-randomized, non-trial intervention (such as anti-hypertensive medication) that may seriously bias the study results. Current methods mainly address this issue for cross-sectional studies. For longitudinal data, the current methods are either restricted to a specific longitudinal data structure or are valid only under special circumstances. We propose two new methods for estimation of covariate effects on the underlying (untreated) general longitudinal outcomes: a single imputation method employing a modified expectation-maximization (EM)-type algorithm and a multiple imputation (MI) method utilizing a modified Monte Carlo EM-MI algorithm. Each method can be implemented as one-step, two-step, and full-iteration algorithms. They combine the advantages of the current statistical methods while reducing their restrictive assumptions and generalizing them to realistic scenarios. The proposed methods replace intractable numerical integration of a multi-dimensionally censored MVN posterior distribution with a simplified, sufficiently accurate approximation. It is particularly attractive when outcomes reach a plateau after intervention due to various reasons. Methods are studied via simulation and applied to data from the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications study of treatment for type 1 diabetes. Methods proved to be robust to high dimensions, large amounts of censored data, low within-subject correlation, and when subjects receive non-trial intervention to treat the underlying condition only (with high Y), or for treatment in the majority of subjects (with high Y) in combination with prevention for a small fraction of subjects (with normal Y). Copyright © 2013 John Wiley & Sons, Ltd.

  9. Research outcomes and recommendations for the assessment of progression in cancer clinical trials from a PhRMA working group.

    PubMed

    Stone, A M; Bushnell, W; Denne, J; Sargent, D J; Amit, O; Chen, C; Bailey-Iacona, R; Helterbrand, J; Williams, G

    2011-08-01

    Progression free survival (PFS) is increasingly used as a primary end-point in oncology clinical trials. This paper provides recommendations for optimal trial design, conduct and analysis in situations where PFS has the potential to be an acceptable end-point for regulatory approval. These recommendations are based on research performed by the Pharmaceutical Research and Manufacturers Association (PhRMA) sponsored PFS Working Group, including the re-analysis of 28 randomised Phase III trials from 12 companies/institutions. (1) In the assessment of PFS, there is a critical distinction between measurement error that results from random variation, which by itself tends to attenuate treatment effect, versus bias, which increases the probability of a false negative or false positive finding. Investigator bias can be detected by auditing a random sample of patients by blinded, independent, central review (BICR). (2) ITT analyses generally resulted in smaller treatment effects (HRs closer to 1) than analyses that censor patients for potentially informative events (such as starting other anti-cancer therapy). (3) Interval censored analyses (ICA) are more robust to time-evaluation bias than the log-rank test. A sample-based BICR audit may be employed in open or partially blinded trials and should not be required in true double-blind trials. Patients should be followed until progression, even if they have discontinued treatment, to be consistent with the ITT principle. ICAs should be a standard sensitivity analysis to assess time-evaluation bias. Implementation of these recommendations would standardize and in many cases simplify phase III oncology clinical trials that use a PFS primary end-point. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Analysis of censored data.

    PubMed

    Lucijanic, Marko; Petrovecki, Mladen

    2012-01-01

    Analyzing events over time is often complicated by incomplete, or censored, observations. Special non-parametric statistical methods were developed to overcome difficulties in summarizing and comparing censored data. The life-table (actuarial) method and the Kaplan-Meier method are described, with an explanation of survival curves. For didactic purposes, the authors prepared a workbook based on the most widely used Kaplan-Meier method. It should help the reader understand how the Kaplan-Meier method is conceptualized and how it can be used to obtain the statistics and survival curves needed to completely describe a sample of patients. The log-rank test and hazard ratio are also discussed.
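
    For concreteness, a hand-rolled version of the estimator in the spirit of such a workbook: at each distinct event time, the running survival estimate is multiplied by one minus the ratio of deaths to the number still at risk.

```python
# Minimal Kaplan-Meier estimator; illustrative only.
import numpy as np

def kaplan_meier(time, event):
    """Return (distinct event times, survival estimates)."""
    time, event = np.asarray(time, float), np.asarray(event, bool)
    times, surv, s = [], [], 1.0
    for t in np.unique(time[event]):
        at_risk = np.sum(time >= t)        # still under observation just before t
        deaths = np.sum((time == t) & event)
        s *= 1.0 - deaths / at_risk
        times.append(t)
        surv.append(s)
    return np.array(times), np.array(surv)

# Small worked example: event indicator 1 = death, 0 = censored.
t = [6, 7, 10, 15, 19, 25]                 # follow-up in months
e = [1, 0, 1, 1, 0, 1]
for ti, si in zip(*kaplan_meier(t, e)):
    print(f"S({ti:g}) = {si:.3f}")
```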

  11. A flexible model for multivariate interval-censored survival times with complex correlation structure.

    PubMed

    Falcaro, Milena; Pickles, Andrew

    2007-02-10

    We focus on the analysis of multivariate survival times with highly structured interdependency and subject to interval censoring. Such data are common in developmental genetics and genetic epidemiology. We propose a flexible mixed probit model that deals naturally with complex but uninformative censoring. The recorded ages of onset are treated as possibly censored ordinal outcomes with the interval censoring mechanism seen as arising from a coarsened measurement of a continuous variable observed as falling between subject-specific thresholds. This bypasses the requirement for the failure times to be observed as falling into non-overlapping intervals. The assumption of a normal age-of-onset distribution of the standard probit model is relaxed by embedding within it a multivariate Box-Cox transformation whose parameters are jointly estimated with the other parameters of the model. Complex decompositions of the underlying multivariate normal covariance matrix of the transformed ages of onset become possible. The new methodology is here applied to a multivariate study of the ages of first use of tobacco and first consumption of alcohol without parental permission in twins. The proposed model allows estimation of the genetic and environmental effects that are shared by both of these risk behaviours as well as those that are specific. 2006 John Wiley & Sons, Ltd.

  12. Comparison of dynamic treatment regimes via inverse probability weighting.

    PubMed

    Hernán, Miguel A; Lanoy, Emilie; Costagliola, Dominique; Robins, James M

    2006-03-01

    Appropriate analysis of observational data is our best chance to obtain answers to many questions that involve dynamic treatment regimes. This paper describes a simple method to compare dynamic treatment regimes by artificially censoring subjects and then using inverse probability weighting (IPW) to adjust for any selection bias introduced by the artificial censoring. The basic strategy can be summarized in four steps: 1) define two regimes of interest, 2) artificially censor individuals when they stop following one of the regimes of interest, 3) estimate inverse probability weights to adjust for the potential selection bias introduced by censoring in the previous step, 4) compare the survival of the uncensored individuals under each regime of interest by fitting an inverse probability weighted Cox proportional hazards model with the dichotomous regime indicator and the baseline confounders as covariates. In the absence of model misspecification, the method is valid provided data are available on all time-varying and baseline joint predictors of survival and regime discontinuation. We present an application of the method to compare the AIDS-free survival under two dynamic treatment regimes in a large prospective study of HIV-infected patients. The paper concludes by discussing the relative advantages and disadvantages of censoring/IPW versus g-estimation of nested structural models to compare dynamic regimes.
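
    A toy rendering of the four steps, assuming the lifelines and scikit-learn packages and collapsing the dynamic problem to a single baseline decision ("always treat" versus "never treat"), so that only the mechanics of censoring deviators and reweighting remain; real dynamic regimes require time-varying treatments and weights.

```python
# Sketch: censor/IPW comparison of two (static, baseline-only) regimes.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 4000
L = rng.normal(size=n)                         # baseline confounder
A = rng.binomial(1, 1 / (1 + np.exp(-L)))      # treatment uptake depends on L
T = rng.exponential(1 / np.exp(0.5 * L - 0.7 * A))  # treatment is protective

pmod = LogisticRegression().fit(L.reshape(-1, 1), A)  # step 3: weight model
rows = []
for regime in (0, 1):                          # step 1: regimes of interest
    keep = A == regime                         # step 2: censor deviators
    pa = pmod.predict_proba(L[keep].reshape(-1, 1))[:, regime]
    rows.append(pd.DataFrame({"T": T[keep], "E": 1, "regime": regime,
                              "L": L[keep], "w": 1.0 / pa}))

df = pd.concat(rows, ignore_index=True)        # step 4: weighted Cox model
cph = CoxPHFitter().fit(df, duration_col="T", event_col="E", weights_col="w",
                        robust=True, formula="regime + L")
print(cph.params_)                             # log-HR for 'regime' near -0.7
```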

  13. Censoring distances based on labeled cortical distance maps in cortical morphometry.

    PubMed

    Ceyhan, Elvan; Nishino, Tomoyuki; Alexopolous, Dimitrios; Todd, Richard D; Botteron, Kelly N; Miller, Michael I; Ratnanather, J Tilak

    2013-01-01

    It has been demonstrated that shape differences in cortical structures may be manifested in neuropsychiatric disorders. Such morphometric differences can be measured by labeled cortical distance mapping (LCDM) which characterizes the morphometry of the laminar cortical mantle of cortical structures. LCDM data consist of signed/labeled distances of gray matter (GM) voxels with respect to the GM/white matter (WM) surface. Volumes and other summary measures for each subject and the pooled distances can help determine the morphometric differences between diagnostic groups; however, they do not reveal all the morphometric information contained in LCDM distances. To extract more information from LCDM data, censoring of the pooled distances is introduced for each diagnostic group, where the range of LCDM distances is partitioned at a fixed increment size; at each censoring step, the distances not exceeding the censoring distance are kept. Censored LCDM distances inherit the advantages of the pooled distances but also provide information about the location of morphometric differences which cannot be obtained from the pooled distances. However, at each step, the censored distances aggregate, which might confound the results. The influence of data aggregation is investigated with an extensive Monte Carlo simulation analysis and it is demonstrated that this influence is negligible. As an illustrative example, GM of ventral medial prefrontal cortices (VMPFCs) of subjects with major depressive disorder (MDD), subjects at high risk (HR) of MDD, and healthy control (Ctrl) subjects are used. A significant reduction in laminar thickness of the VMPFC in MDD and HR subjects is observed compared to Ctrl subjects. Moreover, the GM LCDM distances (i.e., locations with respect to the GM/WM surface) for which these differences start to occur are determined. The methodology is also applicable to LCDM-based morphometric measures of other cortical structures affected by disease.
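
    The censoring scheme itself is easy to sketch: sweep a censoring distance across the range of pooled distances at a fixed increment, keep the distances not exceeding it, and test the groups at each step. The data and group labels below are synthetic stand-ins for pooled LCDM distances.

```python
# Sketch: group comparisons of pooled distances at successive censoring steps.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
ctrl = rng.normal(2.0, 1.0, 4000)          # pooled distances, control group
mdd = rng.normal(1.8, 1.0, 4000)           # hypothetical thinner mantle

for d in np.arange(0.5, 4.01, 0.5):        # censoring distances, fixed increment
    a, b = ctrl[ctrl <= d], mdd[mdd <= d]  # keep distances not exceeding d
    stat, p = mannwhitneyu(a, b, alternative="two-sided")
    print(f"censoring distance {d:.1f}: n = ({a.size}, {b.size}), p = {p:.2g}")
```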

  14. Estimating the ratio of multivariate recurrent event rates with application to a blood transfusion study.

    PubMed

    Ning, Jing; Rahbar, Mohammad H; Choi, Sangbum; Piao, Jin; Hong, Chuan; Del Junco, Deborah J; Rahbar, Elaheh; Fox, Erin E; Holcomb, John B; Wang, Mei-Cheng

    2017-08-01

    In comparative effectiveness studies of multicomponent, sequential interventions like blood product transfusion (plasma, platelets, red blood cells) for trauma and critical care patients, the timing and dynamics of treatment relative to the fragility of a patient's condition is often overlooked and underappreciated. While many hospitals have established massive transfusion protocols to ensure that physiologically optimal combinations of blood products are rapidly available, the period of time required to achieve a specified massive transfusion standard (e.g. a 1:1 or 1:2 ratio of plasma or platelets:red blood cells) has been ignored. To account for the time-varying characteristics of transfusions, we use semiparametric rate models for multivariate recurrent events to estimate blood product ratios. We use latent variables to account for multiple sources of informative censoring (early surgical or endovascular hemorrhage control procedures or death). The major advantage is that the distributions of latent variables and the dependence structure between the multivariate recurrent events and informative censoring need not be specified. Thus, our approach is robust to complex model assumptions. We establish asymptotic properties and evaluate finite sample performance through simulations, and apply the method to data from the PRospective Observational Multicenter Major Trauma Transfusion study.

  15. Censorship and Junk Food Journalism.

    ERIC Educational Resources Information Center

    Jensen, Carl

    1984-01-01

    Discusses journalistic phenomenon whereby Americans are inundated with same news with only names, dates, and locations changing. Highlights include news explosion, well-documented news, why "Ten Most Censored Stories" chosen by Project Censored (Sonoma State University, California) are not covered by major news media, federal policies,…

  16. GSimp: A Gibbs sampler based left-censored missing value imputation approach for metabolomics studies

    PubMed Central

    Jia, Erik; Chen, Tianlu

    2018-01-01

    Left-censored missing values commonly exist in targeted metabolomics datasets and can be considered as missing not at random (MNAR). Improper data processing procedures for missing values will cause adverse impacts on subsequent statistical analyses. However, few imputation methods have been developed and applied to the situation of MNAR in the field of metabolomics. Thus, a practical left-censored missing value imputation method is urgently needed. We developed an iterative Gibbs sampler based left-censored missing value imputation approach (GSimp). We compared GSimp with three other imputation methods on two real-world targeted metabolomics datasets and one simulation dataset using our imputation evaluation pipeline. The results show that GSimp outperforms other imputation methods in terms of imputation accuracy, observation distribution, univariate and multivariate analyses, and statistical sensitivity. Additionally, a parallel version of GSimp was developed for dealing with large-scale metabolomics datasets. The R code for GSimp, evaluation pipeline, tutorial, real-world and simulated targeted metabolomics datasets are available at: https://github.com/WandeRum/GSimp. PMID:29385130
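
    A toy, single-variable rendering of the Gibbs idea, not the GSimp algorithm itself: alternate between re-estimating the mean and standard deviation from the completed data and redrawing the below-LOD values from the implied truncated normal.

```python
# Sketch: Gibbs-style imputation of left-censored values below an LOD.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)
x = rng.normal(10.0, 2.0, 500)             # latent log-abundances
lod = 8.0
obs = x[x >= lod]                          # values below the LOD are missing
k = int((x < lod).sum())

fill = np.full(k, lod)                     # initialize imputations at the LOD
for _ in range(200):                       # Gibbs-style sweeps
    z = np.concatenate([obs, fill])
    mu, sd = z.mean(), z.std()
    b = (lod - mu) / sd                    # upper truncation bound
    fill = truncnorm.rvs(-np.inf, b, loc=mu, scale=sd, size=k, random_state=rng)
print(f"estimated mean {np.concatenate([obs, fill]).mean():.2f} (true 10.0)")
```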

  17. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    PubMed

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.

  18. The Automation System Censor Speech for the Indonesian Rude Swear Words Based on Support Vector Machine and Pitch Analysis

    NASA Astrophysics Data System (ADS)

    Endah, S. N.; Nugraheni, D. M. K.; Adhy, S.; Sutikno

    2017-04-01

    According to Law No. 32 of 2002 and the Indonesian Broadcasting Commission Regulation No. 02/P/KPI/12/2009 & No. 03/P/KPI/12/2009, broadcast programs must not scold with harsh words or harass, insult, or demean minorities and marginalized groups. However, no suitable tools exist to censor such words automatically, so research on intelligent software that performs this censoring is needed. To conduct the censoring, the system must first be able to recognize the words in question. This research proposes classifying speech into two classes using a Support Vector Machine (SVM): the first class is a set of rude swear words and the second a set of proper words. The speech pitch values serve as the SVM input in this system for Indonesian rude swear words. The experimental results show that the SVM works well for this system.
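
    A minimal sketch of the classification step, assuming scikit-learn, with synthetic per-utterance pitch summaries standing in for real feature extraction from audio:

```python
# Sketch: SVM separating "rude" from "proper" utterances by pitch features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(6)
# Hypothetical features: mean F0 and F0 range (Hz) per utterance.
rude = np.column_stack([rng.normal(220, 20, 100), rng.normal(90, 15, 100)])
proper = np.column_stack([rng.normal(180, 20, 100), rng.normal(50, 15, 100)])
X = np.vstack([rude, proper])
y = np.array([1] * 100 + [0] * 100)        # 1 = rude swear word

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```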

  19. Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care.

    PubMed

    Kowalski, Amanda

    2016-01-02

    Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member's injury to induce variation in an individual's own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from -0.76 to -1.49, which are an order of magnitude larger than previous estimates.

  20. Linear regression analysis of survival data with missing censoring indicators.

    PubMed

    Wang, Qihua; Dinse, Gregg E

    2011-04-01

    Linear regression analysis has been studied extensively in a random censorship setting, but typically all of the censoring indicators are assumed to be observed. In this paper, we develop synthetic data methods for estimating regression parameters in a linear model when some censoring indicators are missing. We define estimators based on regression calibration, imputation, and inverse probability weighting techniques, and we prove all three estimators are asymptotically normal. The finite-sample performance of each estimator is evaluated via simulation. We illustrate our methods by assessing the effects of sex and age on the time to non-ambulatory progression for patients in a brain cancer clinical trial.

  1. The Sea is Red The Sino-Soviet Rivalry and Its Naval Dimension.

    DTIC Science & Technology

    1984-05-01

    censor President Reagan’s remarks in China critical of the Soviet Union. [10]** Moreover, ties between Washington and...of the Truong Sa archipelago [Spratlys] in the Eastern Sea (82]. * The Philippines and Taiwan also hold islands in the Spratlys. Malaysia has occupied...greater threat to the Soviets in Asia. And as Zhao Ziyang points out, both China and the United States are Pacific nations and are responsible for the peace and stability of the region [117]. -36- 4A FILMED 6-85 DTIC o .1 - .

  2. Quantile Regression with Censored Data

    ERIC Educational Resources Information Center

    Lin, Guixian

    2009-01-01

    The Cox proportional hazards model and the accelerated failure time model are frequently used in survival data analysis. They are powerful, yet have limitations due to their model assumptions. Quantile regression offers a semiparametric approach to model data with possible heterogeneity. It is particularly powerful for censored responses, where the…

  3. Toward improved analysis of concentration data: Embracing nondetects.

    PubMed

    Shoari, Niloofar; Dubé, Jean-Sébastien

    2018-03-01

    Various statistical tests on concentration data serve to support decision-making regarding characterization and monitoring of contaminated media, assessing exposure to a chemical, and quantifying the associated risks. However, the routine statistical protocols cannot be directly applied because of challenges arising from nondetects or left-censored observations, which are concentration measurements below the detection limit of measuring instruments. Despite the existence of techniques based on survival analysis that can adjust for nondetects, these are seldom taken into account properly. A comprehensive review of the literature showed that managing policies regarding analysis of censored data do not always agree and that guidance from regulatory agencies may be outdated. Therefore, researchers and practitioners commonly resort to the most convenient way of tackling the censored data problem by substituting nondetects with arbitrary constants prior to data analysis, although this is generally regarded as a bias-prone approach. Hoping to improve the interpretation of concentration data, the present article aims to familiarize researchers in different disciplines with the significance of left-censored observations and provides theoretical and computational recommendations (under both frequentist and Bayesian frameworks) for adequate analysis of censored data. In particular, the present article synthesizes key findings from previous research with respect to 3 noteworthy aspects of inferential statistics: estimation of descriptive statistics, hypothesis testing, and regression analysis. Environ Toxicol Chem 2018;37:643-656. © 2017 SETAC.
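
    The bias-prone nature of substitution is easy to demonstrate against a censoring-aware fit. The sketch below, under an assumed lognormal model, compares half-detection-limit substitution with a censored maximum likelihood estimate in which each nondetect contributes through the normal CDF.

```python
# Sketch: DL/2 substitution versus censored maximum likelihood (lognormal).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(8)
x = rng.lognormal(0.0, 1.0, 200)
dl = 1.0                                   # detection limit, ~50% nondetects
nd = x < dl
loge = np.log(x[~nd])                      # log concentrations of detects

def negloglik(theta):
    mu, logsig = theta
    sig = np.exp(logsig)
    ll = norm.logpdf(loge, mu, sig).sum()                  # detects
    ll += nd.sum() * norm.logcdf((np.log(dl) - mu) / sig)  # nondetects
    return -ll

mle = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead").x
sub = np.where(nd, dl / 2, x)              # substitute nondetects with DL/2
print("substitution mean log:", round(np.log(sub).mean(), 3))
print("censored MLE mu:      ", round(mle[0], 3), "(true 0.0)")
```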

  4. Box-Cox transformation of left-censored data with application to the analysis of coronary artery calcification and pharmacokinetic data.

    PubMed

    Han, Cong; Kronmal, Richard

    2004-12-15

    Box-Cox transformation is investigated for regression models for left-censored data. Examples are provided using coronary calcification data from the Multi-Ethnic Study of Atherosclerosis and pharmacokinetic data of a nicotine nasal spray. Copyright 2004 John Wiley & Sons, Ltd.

  5. Beware! Here There Be Beasties: Responding to Fundamentalist Censors.

    ERIC Educational Resources Information Center

    Traw, Rick

    1996-01-01

    Describes a heated 1990 censorship controversy in the Sioux Falls, South Dakota, school district, brought by fundamentalist censors against the "Impressions" reading series. Explores specific categories of complaints, such as the supernatural, folktales, and myths. Notes the influence of religion and racism. Includes an addendum of…

  6. LIMITATIONS ON THE USES OF MULTIMEDIA EXPOSURE MEASUREMENTS FOR MULTIPATHWAY EXPOSURE ASSESSMENT - PART I: HANDLING OBSERVATIONS BELOW DETECTION LIMITS

    EPA Science Inventory

    Multimedia data from two probability-based exposure studies were investigated in terms of how censoring of non-detects affected estimation of population parameters and associations. Appropriate methods for handling censored below-detection-limit (BDL) values in this context were...

  7. James Moffett's Mistake: Ignoring the Rational Capacities of the Other

    ERIC Educational Resources Information Center

    Donehower, Kim

    2013-01-01

    Using Alasdair MacIntyre's theory of tradition-bound rationalities, this essay analyses James Moffett's depiction of the censors who opposed his "Interactions" textbook series in the Kanawha County, West Virginia, schools. Many reviewers have found Moffett's analysis of the censors in "Storm in the Mountains" even-handed and…

  8. Anatomy of the First Amendment and a Look at Its Interpretation.

    ERIC Educational Resources Information Center

    Otto, Jean H.

    1990-01-01

    Dissects features of the First Amendment, concentrating on freedom of religion, speech, and press clauses. Highlights the Hazelwood School District v. Kuhlmeier case and its reverberations. Argues that, when school officials censor, students learn that government may censor. Suggests censorship is counterproductive to schools' mission to promote…

  9. Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care

    PubMed Central

    Kowalski, Amanda

    2015-01-01

    Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member’s injury to induce variation in an individual’s own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from −0.76 to −1.49, which are an order of magnitude larger than previous estimates. PMID:26977117

  10. How the Mind of a Censor Works: The Psychology of Censorship.

    ERIC Educational Resources Information Center

    Fine, Sara

    1996-01-01

    Explores censorship and examines it as a human dynamic. Discusses the authoritarian personality, the need to control, traditionalism and the need to belong to a group, the influence of family, denial, and authoritarian women. Describes the importance of listening to "the Censor" in order to encourage dialogue and how to use effective…

  11. Teachers Making Decisions When We Know the Censors Are Watching.

    ERIC Educational Resources Information Center

    Napier, Minta

    Attempts to suppress and even censor various texts used by English teachers often are led by members of fundamentalist Christian groups. These activists charge educators with depreciating Christian moral values and instigating a religion of "secular humanism" in the schools. Various examples of recent legal cases show how prominent the…

  12. "Tropic of Cancer" and the Censors: A Case Study and Bibliographic Guide to the Literature.

    ERIC Educational Resources Information Center

    Kincaid, Larry; Koger, Grove

    1997-01-01

    Traces the history of Henry Miller's novel "Tropic of Cancer"--censored in England and America for being too obscene--from its inception in 1932 to its vindication by the United States judicial system 30 years later. Also includes an annotated bibliography of related literature. (AEF)

  13. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR TREATMENT OF CENSORED DATA (IIT-A-4.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the procedures undertaken to treat censored data which are below detection limits. This SOP uses data that have been properly coded and certified with appropriate QA/QC procedures by the University of Arizona NHEXAS and Battelle Laboratorie...

  14. White Racism/Black Signs: Censorship and Images of Race Relations.

    ERIC Educational Resources Information Center

    Patton, Cindy

    1995-01-01

    Discusses the simultaneous establishment of legal rights to censor film and proscriptions on particular racial representations. Describes several changes in the Hays Code that demonstrate a change in the censor's theory of the image. Suggests that these changes substituted the censorship of race-related images with a new prohibition on racial…

  15. Obscenity, Profanity and the High School Press.

    ERIC Educational Resources Information Center

    Hansen, Kent A.

    1979-01-01

    School officials cannot censor or punish profanity and vulgarity in student publications without a showing that such action is essential for the maintenance of order and discipline or protects the rights of others or that the censored material satisfies the legal tests of obscenity. Available from Willamette University College of Law, Salem, OR…

  16. A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components

    NASA Technical Reports Server (NTRS)

    Abernethy, K.

    1986-01-01

    The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo simulation study to evaluate the usefulness of the Weibull methods, using samples with a very small number of failures and extensive censoring, are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, a random censoring model was used, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs used in the SSME Weibull analysis are described. The previously documented methods were supplemented with computer calculations of approximate confidence intervals (obtained using iterative methods) for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described. The simulation program and the techniques it uses are also described. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
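
    One cell of such a Monte Carlo study is straightforward to sketch: Weibull failure times, uniformly distributed random censoring times, and a censored maximum likelihood fit. A simplified illustration rather than the SSME analysis code; the likelihood below is also the quantity a likelihood ratio confidence interval would profile.

```python
# Sketch: censored Weibull MLE for one Monte Carlo sample with few failures.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
shape, scale, n = 2.0, 100.0, 15           # small sample, heavy censoring
T = scale * rng.weibull(shape, n)          # failure times
C = rng.uniform(0, 150, n)                 # random censoring times (uniform)
t, d = np.minimum(T, C), T <= C            # observed time, failure indicator

def negloglik(theta):
    k, lam = np.exp(theta)                 # log-parameterized to stay positive
    logf = np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k
    logS = -((t / lam) ** k)               # survival contribution of censored
    return -(logf[d].sum() + logS[~d].sum())

fit = minimize(negloglik, x0=[0.0, np.log(t.mean())], method="Nelder-Mead")
k_hat, lam_hat = np.exp(fit.x)
print(f"failures {int(d.sum())}/{n}; shape ~ {k_hat:.2f}, scale ~ {lam_hat:.1f}")
```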

  17. A flexible cure rate model with dependent censoring and a known cure threshold.

    PubMed

    Bernhardt, Paul W

    2016-11-10

    We propose a flexible cure rate model that accommodates different censoring distributions for the cured and uncured groups and also allows for some individuals to be observed as cured when their survival time exceeds a known threshold. We model the survival times for the uncured group using an accelerated failure time model with errors distributed according to the seminonparametric distribution, potentially truncated at a known threshold. We suggest a straightforward extension of the usual expectation-maximization algorithm approach for obtaining estimates in cure rate models to accommodate the cure threshold and dependent censoring. We additionally suggest a likelihood ratio test for testing for the presence of dependent censoring in the proposed cure rate model. We show through numerical studies that our model has desirable properties and leads to approximately unbiased parameter estimates in a variety of scenarios. To demonstrate how our method performs in practice, we analyze data from a bone marrow transplantation study and a liver transplant study. Copyright © 2016 John Wiley & Sons, Ltd.

  18. MODELING LEFT-TRUNCATED AND RIGHT-CENSORED SURVIVAL DATA WITH LONGITUDINAL COVARIATES

    PubMed Central

    Su, Yu-Ru; Wang, Jane-Ling

    2018-01-01

    There is a surge in medical follow-up studies that include longitudinal covariates in the modeling of survival data. So far, the focus has been largely on right-censored survival data. We consider survival data that are subject to both left truncation and right censoring. Left truncation is well known to produce a biased sample. The sampling bias issue has been resolved in the literature for the case involving baseline or time-varying covariates that are observable. The problem remains open, however, for the important case where longitudinal covariates are present in survival models. A joint likelihood approach has been shown in the literature to provide an effective way to overcome those difficulties for right-censored data, but this approach faces substantial additional challenges in the presence of left truncation. We thus propose an alternative likelihood to overcome these difficulties and show that the regression coefficient in the survival component can be estimated unbiasedly and efficiently. Issues about the bias for the longitudinal component are discussed. The new approach is illustrated numerically through simulations and data from a multi-center AIDS cohort study. PMID:29479122

  19. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    PubMed

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  20. Cox model with interval-censored covariate in cohort studies.

    PubMed

    Ahn, Soohyun; Lim, Johan; Paik, Myunghee Cho; Sacco, Ralph L; Elkind, Mitchell S

    2018-05-18

    In cohort studies the outcome is often time to a particular event, and subjects are followed at regular intervals. Periodic visits may also monitor a secondary irreversible event influencing the event of primary interest, and a significant proportion of subjects develop the secondary event over the period of follow-up. The status of the secondary event serves as a time-varying covariate, but is recorded only at the scheduled visits, generating incomplete time-varying covariates. While information on a typical time-varying covariate is missing for the entire follow-up period except at the visit times, the status of the secondary event is unavailable only between the visits at which the status changed, and is thus interval-censored. One may view the interval-censored secondary event status as a missing time-varying covariate, yet the missingness is only partial, since partial information is available throughout the follow-up period. The current practice of using the latest observed status produces biased estimators, and existing missing-covariate techniques cannot accommodate this special feature of missingness due to interval censoring. To handle interval-censored covariates in the Cox proportional hazards model, we propose an available-data estimator and a doubly robust-type estimator, as well as the maximum likelihood estimator via the EM algorithm, and present their asymptotic properties. We also present practical approaches that remain valid. We demonstrate the proposed methods using our motivating example from the Northern Manhattan Study. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Fast genomic predictions via Bayesian G-BLUP and multilocus models of threshold traits including censored Gaussian data.

    PubMed

    Kärkkäinen, Hanni P; Sillanpää, Mikko J

    2013-09-04

    Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly applied Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed.

  2. Fast Genomic Predictions via Bayesian G-BLUP and Multilocus Models of Threshold Traits Including Censored Gaussian Data

    PubMed Central

    Kärkkäinen, Hanni P.; Sillanpää, Mikko J.

    2013-01-01

    Because of the increased availability of genome-wide sets of molecular markers, along with the reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage of such algorithms for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach for binary, ordinal, and censored Gaussian observations in Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that using the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than a Gaussian model applied directly to censored Gaussian data, whereas with binary or ordinal data the superiority of the threshold model could not be confirmed. PMID:23821618
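
    As a minimal illustration of the threshold idea described above (and not the authors' generalized EM for G-BLUP or multilocus shrinkage models), the sketch below runs the classic EM algorithm for a single-marker probit threshold model, in which the latent Gaussian liability plays the role of the missing data; all data and effect sizes are hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy data: intercept plus one marker (hypothetical setup; the paper's
# models handle genome-wide markers with shrinkage priors).
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.3, 0.8])
z = X @ beta_true + rng.normal(size=n)   # latent Gaussian liability
y = (z > 0).astype(float)                # observed binary phenotype

# EM for probit regression: the latent liability is the "missing data".
beta = np.zeros(2)
for _ in range(200):
    eta = X @ beta
    # E-step: E[z | y, beta] for a unit-variance truncated normal.
    lam1 = norm.pdf(eta) / np.clip(norm.cdf(eta), 1e-12, None)     # y = 1
    lam0 = -norm.pdf(eta) / np.clip(norm.cdf(-eta), 1e-12, None)   # y = 0
    ez = eta + np.where(y == 1, lam1, lam0)
    # M-step: ordinary least squares of E[z] on X.
    beta_new = np.linalg.solve(X.T @ X, X.T @ ez)
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        beta = beta_new
        break
    beta = beta_new

print("EM probit estimate:", beta)  # should approach beta_true
```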

  3. Techniques for estimating health care costs with censored data: an overview for the health services researcher

    PubMed Central

    Wijeysundera, Harindra C; Wang, Xuesong; Tomlinson, George; Ko, Dennis T; Krahn, Murray D

    2012-01-01

    Objective The aim of this study was to review statistical techniques for estimating the mean population cost using health care cost data that, because of the inability to achieve complete follow-up until death, are right censored. The target audience is health service researchers without an advanced statistical background. Methods Data were longitudinal heart failure costs from Ontario, Canada, obtained from administrative databases. The dataset consisted of 43,888 patients, with follow-up ranging from 1 to 1538 days (mean 576 days). Mean health care costs over 1080 days of follow-up were calculated using naïve estimators (the full-sample and uncensored-case estimators), reweighted estimators (specifically, the inverse probability weighted estimator), and phase-based costing. Costs were adjusted to 2008 Canadian dollars using the Bank of Canada consumer price index (http://www.bankofcanada.ca/en/cpi.html). Results Over the restricted follow-up of 1080 days, 32% of patients were censored. The full-sample estimator underestimated the mean cost ($30,420) compared with the reweighted estimators ($36,490). The phase-based costing estimate of $37,237 was similar to that of the simple reweighted estimator. Conclusion The authors recommend against the use of full-sample or uncensored-case estimators when censored data are present. In the presence of heavy censoring, phase-based costing is an attractive alternative approach. PMID:22719214
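
    A minimal numpy sketch of the simple inverse-probability-weighted mean cost estimator discussed above (in the spirit of the reweighted estimators reviewed in the paper, not the authors' own code); the survival, censoring, and cost-generating mechanisms below are hypothetical.

```python
import numpy as np

def km_curve(time, event):
    """Kaplan-Meier survival curve; returns a right-continuous evaluator."""
    ts, ss, s = [0.0], [1.0], 1.0
    for u in np.unique(time[event]):
        s *= 1.0 - np.sum((time == u) & event) / np.sum(time >= u)
        ts.append(u); ss.append(s)
    ts, ss = np.array(ts), np.array(ss)
    return lambda x: ss[np.searchsorted(ts, x, side="right") - 1]

rng = np.random.default_rng(7)
n, tau = 4000, 1080.0
T = rng.exponential(900.0, n)               # hypothetical survival times (days)
C = rng.uniform(1.0, 1538.0, n)             # hypothetical censoring times
X = np.minimum(T, tau)                      # cost accrual stops at death or tau
complete = C >= X                           # full cost history observed
cost = 40.0 * X + rng.gamma(2.0, 100.0, n)  # hypothetical cumulative cost

# Kaplan-Meier of the *censoring* distribution: censoring is the "event".
K = km_curve(np.minimum(T, C), C < T)

ipw = np.sum(cost[complete] / K(X[complete])) / n   # weighted complete cases
naive = cost[complete].mean()                       # uncensored-case estimator
print(f"true {cost.mean():,.0f}  IPW {ipw:,.0f}  naive {naive:,.0f}")
```

    The weighted estimator divides each fully observed cost by the estimated probability of remaining uncensored to that point, recovering the full-data mean that the uncensored-case estimator misses.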

  4. Review and evaluation of performance measures for survival prediction models in external validation settings.

    PubMed

    Rahman, M Shafiqur; Ambler, Gareth; Choodari-Oskooei, Babak; Omar, Rumana Z

    2017-04-18

    When developing a prediction model for survival data, it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide guidelines for their use in practice. An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination, and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study describes how the measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell's concordance measure, which tended to increase as censoring increased. We recommend that Uno's concordance measure be used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller's measure could be considered, especially if censoring is very high, but we suggest that the prediction model be re-calibrated first. We also recommend that Royston's D be routinely reported to assess discrimination, since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings, and we recommend reporting it routinely. We further recommend using any of the predictive accuracy measures and providing the corresponding predictive accuracy curves. In addition, we recommend investigating the characteristics of the validation data, such as the level of censoring and the distribution of the prognostic index derived in the validation setting, before choosing the performance measures.
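
    The censoring sensitivity of Harrell's concordance noted above is easy to probe with a direct implementation. Below is a minimal O(n²) numpy version of Harrell's C for right-censored data, with hypothetical data; it is a sketch for illustration, not the evaluation code used in the paper.

```python
import numpy as np

def harrell_c(time, event, risk):
    """Harrell's concordance index for right-censored data.

    A pair (i, j) is comparable when i has an observed event and j's
    observed time exceeds it (so T_j > T_i is known). Concordant pairs
    have higher predicted risk for the earlier death; risk ties count 1/2.
    """
    conc = ties = comparable = 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue                   # i must be an observed event
        for j in range(n):
            if time[j] > time[i]:      # j outlived i: comparable pair
                comparable += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (conc + 0.5 * ties) / comparable

# Hypothetical check: a higher linear predictor raises the hazard.
rng = np.random.default_rng(3)
lp = rng.normal(size=300)
t = rng.exponential(np.exp(-lp))
c = rng.exponential(1.5, size=300)    # raise/lower this to vary censoring
obs, d = np.minimum(t, c), t <= c
print("Harrell's C:", round(harrell_c(obs, d, lp), 3))
```

    Rerunning with a heavier censoring distribution illustrates the upward drift of Harrell's C that motivates the paper's recommendation of Uno's measure.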

  5. Censoring: a new approach for detection limits in total-reflection X-ray fluorescence

    NASA Astrophysics Data System (ADS)

    Pajek, M.; Kubala-Kukuś, A.; Braziewicz, J.

    2004-08-01

    It is shown that the detection limits in total-reflection X-ray fluorescence (TXRF), which restrict the quantification of very low concentrations of trace elements in samples, can be accounted for using the statistical concept of censoring. We demonstrate that incomplete TXRF measurements containing so-called "nondetects", i.e. non-measured concentrations falling below the detection limits and represented by the estimated detection limit values, can be viewed as left random-censored data and analyzed with the Kaplan-Meier (KM) method, which corrects for nondetects. Within this approach, which uses the Kaplan-Meier product-limit estimator to obtain a cumulative distribution function corrected for nondetects, the mean and median of the detection-limit-censored concentrations can be estimated non-parametrically. Monte Carlo simulations show that the Kaplan-Meier approach yields highly accurate estimates of the mean and median concentrations, within a few percent of the values from the simulated, uncensored data. This means that the uncertainties of the KM-estimated mean and median are in fact limited only by the number of samples studied and not by the correction procedure for nondetects itself. On the other hand, when the concentration of a given element is not measured in all samples, simple approaches to estimating a mean concentration from the data yield erroneous, systematically biased results. The random left-censoring approach was applied to analyze TXRF detection-limit-censored concentration measurements of trace elements in biomedical samples. We emphasize that the Kaplan-Meier approach allows one to estimate mean concentrations lying substantially below the mean level of the detection limits. Consequently, this approach provides a new way to lower the effective detection limits of the TXRF method, which is of prime interest for the investigation of metallic impurities on silicon wafers.
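
    A minimal sketch of the left-censoring trick described above: flipping the data about a constant turns left-censored nondetects into right-censored observations, so a standard Kaplan-Meier fit can be flipped back to recover the concentration distribution. The detection limits and concentrations below are hypothetical, and the restricted-mean caveat is noted in a comment.

```python
import numpy as np

def km_curve(time, event):
    """Right-censoring Kaplan-Meier; returns event-time grid and survival."""
    ts, ss, s = [], [], 1.0
    for u in np.unique(time[event]):
        s *= 1.0 - np.sum((time == u) & event) / np.sum(time >= u)
        ts.append(u); ss.append(s)
    return np.array(ts), np.array(ss)

# Hypothetical TXRF-style data: sample-specific detection limits,
# nondetects recorded at their DL (left-censored).
rng = np.random.default_rng(5)
conc = rng.lognormal(0.0, 1.0, 200)
dl = rng.uniform(0.2, 1.0, 200)
detect = conc >= dl
x = np.where(detect, conc, dl)

# Flip about a constant M: left-censoring becomes right-censoring.
M = x.max() + 1.0
ts, ss = km_curve(M - x, detect)

# Flip back to the concentration scale: F(v) = S_flipped(M - v).
v, F = (M - ts)[::-1], ss[::-1]
median_x = v[np.searchsorted(F, 0.5)]

# Restricted mean: E[X] = M - integral of the flipped survival curve
# (slightly biased upward if the flipped curve never reaches zero).
dt = np.diff(np.concatenate([[0.0], ts]))
s_prev = np.concatenate([[1.0], ss[:-1]])
mean_x = M - np.sum(dt * s_prev)

print(f"KM mean {mean_x:.2f} (true 1.65), median {median_x:.2f} (true 1.00)")
```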

  6. Feature selection through validation and un-censoring of endovascular repair survival data for predicting the risk of re-intervention.

    PubMed

    Attallah, Omneya; Karthikesalingam, Alan; Holt, Peter J E; Thompson, Matthew M; Sayers, Rob; Bown, Matthew J; Choke, Eddie C; Ma, Xianghong

    2017-08-03

    The feature selection (FS) process is essential in medicine, as it reduces the effort and time needed for physicians to measure unnecessary features. Choosing useful variables is difficult in the presence of censoring, the defining characteristic of survival data. Most survival FS methods depend on Cox's proportional hazards model; machine learning techniques (MLT) would be preferable but are not commonly used because of censoring. Techniques previously proposed for adapting MLT to perform FS with survival data cannot handle high levels of censoring. The authors' earlier publications proposed a technique to deal with highly censored data and used existing FS techniques to reduce the dataset dimension. In this paper, a new FS technique is proposed and combined with feature transformation and the previously proposed uncensoring approach to select a reduced set of features and produce a stable predictive model. Specifically, an FS technique based on an artificial neural network (ANN) is proposed to deal with highly censored endovascular aortic repair (EVAR) survival data. EVAR datasets were collected between 2004 and 2010 from two vascular centres in the United Kingdom and contain almost 91% censored patients. The proposed approach uses a wrapper FS method with an ANN to select a reduced subset of features that predicts the risk of EVAR re-intervention after 5 years for patients from the two centres, allowing it to be applied potentially to cross-centre prediction. The proposed model is compared with two popular FS techniques, the Akaike and Bayesian information criteria (AIC, BIC), used with Cox's model. The final model outperforms the other methods in distinguishing the high- and low-risk groups: its concordance index and estimated AUC are better than those of Cox models based on the AIC, BIC, Lasso, and SCAD approaches. These models have p-values lower than 0.05, meaning that patients in different risk groups can be separated significantly and that those who would need re-intervention can be correctly predicted. The proposed approach will save the time and effort physicians spend collecting unnecessary variables. The final reduced model was able to predict the long-term risk of aortic complications after EVAR, and this predictive model can help clinicians decide a patient's future observation plan.

  7. Responding Intelligently when Would-Be Censors Charge: "That Book Can Make Them...!"

    ERIC Educational Resources Information Center

    Martinson, David L.

    2007-01-01

    School administrators and teachers need to recognize that most persons--including would-be censors of school-related media communications--simply do not understand the complexities germane to measuring the impact of the mass media and the specific messages transmitted to broader audiences via a variety of media channels. In particular, what most…

  8. Semiparametric methods to contrast gap time survival functions: Application to repeat kidney transplantation.

    PubMed

    Shu, Xu; Schaubel, Douglas E

    2016-06-01

    Times between successive events (i.e., gap times) are of great importance in survival analysis. Although many methods exist for estimating covariate effects on gap times, very few existing methods allow for comparisons between gap times themselves. Motivated by the comparison of primary and repeat transplantation, our interest is specifically in contrasting the gap time survival functions and their integration (restricted mean gap time). Two major challenges in gap time analysis are non-identifiability of the marginal distributions and the existence of dependent censoring (for all but the first gap time). We use Cox regression to estimate the (conditional) survival distributions of each gap time (given the previous gap times). Combining fitted survival functions based on those models, along with multiple imputation applied to censored gap times, we then contrast the first and second gap times with respect to average survival and restricted mean lifetime. Large-sample properties are derived, with simulation studies carried out to evaluate finite-sample performance. We apply the proposed methods to kidney transplant data obtained from a national organ transplant registry. Mean 10-year graft survival of the primary transplant is significantly greater than that of the repeat transplant, by 3.9 months (p=0.023), a result that may lack clinical importance. © 2015, The International Biometric Society.
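
    For intuition, the sketch below computes the restricted mean survival (gap) time as the area under a Kaplan-Meier curve up to tau, for two hypothetical gap-time samples. Note that it deliberately ignores the dependent censoring of later gap times, which is precisely the complication the authors' Cox-plus-multiple-imputation procedure is designed to handle.

```python
import numpy as np

def rmst(time, event, tau):
    """Restricted mean survival time: area under the KM curve on [0, tau]."""
    ts, surv, s = [0.0], [1.0], 1.0
    for u in np.unique(time[event & (time <= tau)]):
        s *= 1.0 - np.sum((time == u) & event) / np.sum(time >= u)
        ts.append(u); surv.append(s)
    ts.append(tau); surv.append(s)            # carry the last value to tau
    ts = np.array(ts)
    return np.sum(np.diff(ts) * np.array(surv[:-1]))

# Hypothetical gap-time data (months): graft failure vs censoring.
rng = np.random.default_rng(11)
t1 = rng.weibull(1.2, 500) * 140.0            # first-transplant gap times
t2 = rng.weibull(1.2, 500) * 120.0            # repeat-transplant gap times
c = rng.uniform(0.0, 200.0, 1000)
obs1, d1 = np.minimum(t1, c[:500]), t1 <= c[:500]
obs2, d2 = np.minimum(t2, c[500:]), t2 <= c[500:]

tau = 120.0  # 10-year horizon
print("RMST primary:", round(rmst(obs1, d1, tau), 1),
      "repeat:", round(rmst(obs2, d2, tau), 1))
```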

  9. Analysis of competition performance in dressage and show jumping of Dutch Warmblood horses.

    PubMed

    Rovere, G; Ducro, B J; van Arendonk, J A M; Norberg, E; Madsen, P

    2016-12-01

    Most Warmblood horse studbooks aim to improve performance in dressage and show jumping. The Dutch Royal Warmblood Studbook (KWPN) uses the highest score a horse achieves in competition to evaluate its genetic ability. However, records collected during competition have several features that might affect the quality of a genetic evaluation based on them: the influence of the rider, censoring, and preselection of the data. The aim of this study was to quantify the impact of rider effect, censoring, and preselection on the genetic analysis of KWPN competition data for dressage and show jumping. Different models including a rider effect were evaluated. To assess the impact of censoring, genetic parameters were estimated in data sets that differed in the degree of censoring. The effect of preselection on variance components was analysed by defining a binary trait (sport status) indicating whether the horse has a competition record. This trait was included in a bivariate model with the competition trait, using all horses registered by KWPN since 1984. Results showed that competition performance in dressage and show jumping is a heritable trait (h² ≈ 0.11-0.13) and that it is important to account for the rider effect in the genetic analysis. Censoring had a small effect on the genetic parameters for the highest performance achieved by the horse. A moderate heritability for sport status indicates that preselection has a genetic basis, but its effect on the genetic parameters was relatively small. © 2016 Blackwell Verlag GmbH.

  10. Analysis of elemental concentration censored distributions in breast malignant and breast benign neoplasm tissues

    NASA Astrophysics Data System (ADS)

    Kubala-Kukuś, A.; Banaś, D.; Braziewicz, J.; Góźdź, S.; Majewska, U.; Pajek, M.

    2007-07-01

    The total-reflection X-ray fluorescence method was applied to study trace element concentrations in human breast malignant and breast benign neoplasm tissues taken from women who were patients of the Holycross Cancer Centre in Kielce (Poland). These investigations were mainly focused on developing new possibilities for cancer diagnosis and therapy monitoring. This systematic comparative study was based on a relatively large population (~100 samples): 26 samples of breast malignant and 68 samples of breast benign neoplasm tissue. Concentrations, ranging from a few ppb to 0.1%, were determined for thirteen elements (from P to Pb). The results were carefully analysed to investigate the concentration distributions of trace elements in the studied samples. The measurement of trace element concentrations by total-reflection X-ray fluorescence is limited, however, by the detection limit of the method. For more than 50% of the elements determined, the concentrations could not be measured in all samples. These incomplete measurements were treated within the statistical concept of left random censoring, and the Kaplan-Meier estimator was used to estimate the mean and median of the censored concentration distributions. For comparison of concentrations in the two populations, the log-rank test was applied, which allows one to compare censored total-reflection X-ray fluorescence data. The statistically significant differences found are discussed in more detail. We note that the described data-analysis procedures should become a standard tool for analysing censored trace element concentrations measured by X-ray fluorescence methods.
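
    A minimal numpy implementation of the two-sample log-rank test used above for censored concentration comparisons. Left-censored nondetects are first flipped about a constant M so the standard right-censoring formulas apply; the tissue concentrations and detection limit below are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def logrank(t1, e1, t2, e2):
    """Two-sample log-rank test for right-censored data."""
    time = np.concatenate([t1, t2])
    event = np.concatenate([e1, e2])
    grp1 = np.concatenate([np.ones(len(t1), bool), np.zeros(len(t2), bool)])
    o_minus_e, var = 0.0, 0.0
    for u in np.unique(time[event]):
        at_risk = time >= u
        n, n1 = at_risk.sum(), (at_risk & grp1).sum()
        d = np.sum(event & (time == u))
        d1 = np.sum(event & (time == u) & grp1)
        o_minus_e += d1 - d * n1 / n
        if n > 1:  # hypergeometric variance at this event time
            var += d * (n1 / n) * (1.0 - n1 / n) * (n - d) / (n - 1)
    stat = o_minus_e ** 2 / var
    return stat, chi2.sf(stat, 1)

# Hypothetical left-censored concentrations for benign vs malignant tissue.
rng = np.random.default_rng(9)
benign, malignant = rng.lognormal(0.0, 1.0, 68), rng.lognormal(0.6, 1.0, 26)
dl = 0.5
xb, eb = np.where(benign >= dl, benign, dl), benign >= dl
xm, em = np.where(malignant >= dl, malignant, dl), malignant >= dl

# Flip about M so that nondetects become right-censored observations.
M = max(xb.max(), xm.max()) + 1.0
stat, p = logrank(M - xb, eb, M - xm, em)
print(f"log-rank chi2 = {stat:.2f}, p = {p:.4f}")
```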

  11. Censoring Data for Resistance Factor Calculations in Load and Resistance Factor Design: A Preliminary Study

    Treesearch

    James W. Evans; David W. Green

    2007-01-01

    Reliability estimates for the resistance distribution of wood product properties may be made from test data where all specimens are broken (full data sets) or by using data sets where information is obtained only from the weaker pieces in the distribution (censored data). Whereas considerable information exists on property estimation from full data sets, much less...

  12. Estimating length of avian incubation and nestling stages in afrotropical forest birds from interval-censored nest records

    USGS Publications Warehouse

    Stanley, T.R.; Newmark, W.D.

    2010-01-01

    In the East Usambara Mountains in northeast Tanzania, research on the effects of forest fragmentation and disturbance on nest survival in understory birds resulted in the accumulation of 1,002 nest records between 2003 and 2008 for 8 poorly studied species. Because information on the length of the incubation and nestling stages in these species is nonexistent or sparse, our objectives in this study were (1) to estimate the length of the incubation and nestling stages and (2) to compute nest survival using these estimates in combination with calculated daily survival probability. Because our data were interval censored, we developed and applied two new statistical methods to estimate stage length. In the 8 species studied, the incubation stage lasted 9.6-21.8 days and the nestling stage 13.9-21.2 days. Combining these results with estimates of daily survival probability, we found that nest survival ranged from 6.0% to 12.5%. We conclude that our methodology for estimating stage lengths from interval-censored nest records is a reasonable and practical approach in the presence of interval-censored data. © 2010 The American Ornithologists' Union.

  13. Model Checking Techniques for Assessing Functional Form Specifications in Censored Linear Regression Models.

    PubMed

    León, Larry F; Cai, Tianxi

    2012-04-01

    In this paper we develop model checking techniques for assessing functional form specifications of covariates in censored linear regression models. These procedures are based on a censored data analog to taking cumulative sums of "robust" residuals over the space of the covariate under investigation. These cumulative sums are formed by integrating certain Kaplan-Meier estimators and may be viewed as "robust" censored data analogs to the processes considered by Lin, Wei & Ying (2002). The null distributions of these stochastic processes can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be generated by computer simulation. Each observed process can then be graphically compared with a few realizations from the Gaussian process. We also develop formal test statistics for numerical comparison. Such comparisons enable one to assess objectively whether an apparent trend seen in a residual plot reflects model misspecification or natural variation. We illustrate the methods with a well-known dataset. In addition, we examine the finite sample performance of the proposed test statistics in simulation experiments. In our simulation experiments, the proposed test statistics have good power to detect misspecification while at the same time controlling the size of the test.

  14. Sino-Japanese Relations: Cooperation, Competition, or Status Quo?

    DTIC Science & Technology

    2008-03-01

    prostitute and censors were concerned the film might reignite anti-Japanese sentiment. Regarding Prime Minister Abe’s potentially nationalistic visit...central government censored the movie “Memoirs of a Geisha” because the lead character, portrayed by a Chinese actress, could be construed as a...Thailand, Malaysia and Indonesia. Realizing the importance of the larger relationship, on September 1-3, 2007, Defense Minister Masahiko Komura met

  15. Multiple imputation for multivariate data with missing and below-threshold measurements: time-series concentrations of pollutants in the Arctic.

    PubMed

    Hopke, P K; Liu, C; Rubin, D B

    2001-03-01

    Many chemical and environmental data sets are complicated by the existence of fully missing values or censored values known to lie below detection thresholds. For example, week-long samples of airborne particulate matter were obtained at Alert, NWT, Canada, between 1980 and 1991, where some of the concentrations of 24 particulate constituents were coarsened in the sense of being either fully missing or below detection limits. To facilitate scientific analysis, it is appealing to create complete data by filling in missing values so that standard complete-data methods can be applied. We briefly review commonly used strategies for handling missing values and focus on the multiple-imputation approach, which generally leads to valid inferences when faced with missing data. Three statistical models are developed for multiply imputing the missing values of airborne particulate matter. We expect that these models are useful for creating multiple imputations in a variety of incomplete multivariate time series data sets.
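
    A univariate sketch of the imputation idea described above, under an assumed lognormal model with a single detection limit (the paper's models are multivariate and account for time-series structure). Nondetects are filled in by draws from the fitted distribution truncated above at the limit; a comment notes where proper multiple imputation would also draw the parameters.

```python
import numpy as np
from scipy.stats import norm, truncnorm
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Hypothetical weekly concentrations with one detection limit.
conc = rng.lognormal(-1.0, 0.7, 150)
lod = 0.35
detect = conc >= lod
y = np.log(conc)

# Censored-likelihood fit of the lognormal parameters (Tobit-type MLE).
def negll(p):
    mu, ls = p
    s = np.exp(ls)
    return -(norm.logpdf(y[detect], mu, s).sum()
             + norm.logcdf(np.log(lod), mu, s) * (~detect).sum())

mu, s = minimize(negll, [0.0, 0.0]).x
s = np.exp(s)

# m imputed datasets: nondetects drawn from the fitted normal (log scale)
# truncated above at log(LOD). Proper MI would also draw (mu, s) from
# their posterior each round; this "improper" version fixes them for brevity.
m = 5
b = (np.log(lod) - mu) / s
imputations = []
for _ in range(m):
    filled = y.copy()
    filled[~detect] = truncnorm.rvs(-np.inf, b, loc=mu, scale=s,
                                    size=(~detect).sum(), random_state=rng)
    imputations.append(np.exp(filled))

# Rubin-style combination (point estimate) for the mean concentration.
means = [imp.mean() for imp in imputations]
print("MI estimate of mean:", round(float(np.mean(means)), 3))
```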

  16. Methods for analysis of the occurrence of abscess in patients with pancreatitis.

    PubMed

    Roca-Antonio, J; Escudero, L E; Gener, J; Oller, B; Rodríguez, N; Muñoz, A

    1997-01-01

    Standard survival analysis methods are useful for data involving censored cases when cures do not generally occur. If the objective is to study, for instance, the development of a complication in the course of an infectious disease, some people may be cured before the complication develops. In this article, we provide methods for the analysis of data when cures do occur. An example is a study of prognostic factors for pancreatic abscess in patients with pancreatitis, some of whom leave the risk set because the pancreatitis clears. We present methods for estimating survival curves and comparing hazard functions for two objectives: (1) the occurrence of an abscess, irrespective of whether the patients are cured, and (2) the occurrence of an abscess among patients who, at that stage, have not been cured. We illustrate the application of the methods using a sample of 50 patients with severe pancreatitis. To study the occurrence of an abscess regardless of whether the patients are cured, we show that the appropriate strategy is to assign the cured patients an infinite time to the appearance of an abscess. If the cured were instead considered censored at the moment the pancreatitis cleared, the hazard of presenting an abscess would be overestimated. On the other hand, if the objective is to compare the occurrence of abscess according to an exposure among patients who have not been cured, one needs to censor the cured patients at the time they are cured. For the analysis of survival data in the context of infectious diseases when cure is possible, it is important to use a censoring strategy pertinent to the specific aims of the study. Considering cures as censored at the time of cure is not always appropriate.

  17. Estimation and model selection of semiparametric multivariate survival functions under general censorship.

    PubMed

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2010-07-01

    We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.

  18. Estimation and model selection of semiparametric multivariate survival functions under general censorship

    PubMed Central

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2013-01-01

    We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided. PMID:24790286

  19. Measuring agreement of multivariate discrete survival times using a modified weighted kappa coefficient.

    PubMed

    Guo, Ying; Manatunga, Amita K

    2009-03-01

    Assessing agreement is often of interest in clinical studies to evaluate the similarity of measurements produced by different raters or methods on the same subjects. We present a modified weighted kappa coefficient to measure agreement between bivariate discrete survival times. The proposed kappa coefficient accommodates censoring by redistributing the mass of censored observations within the grid where the unobserved events may potentially happen. A generalized modified weighted kappa is proposed for multivariate discrete survival times. We estimate the modified kappa coefficients nonparametrically through a multivariate survival function estimator. The asymptotic properties of the kappa estimators are established, and the performance of the estimators is examined through simulation studies of bivariate and trivariate survival times. We illustrate the application of the modified kappa coefficient in the presence of censored observations with data from a prostate cancer study.

  20. An estimator of the survival function based on the semi-Markov model under dependent censorship.

    PubMed

    Lee, Seung-Yeoun; Tsai, Wei-Yann

    2005-06-01

    Lee and Wolfe (Biometrics, vol. 54, pp. 1176-1178, 1998) proposed a two-stage sampling design for testing the assumption of independent censoring, which involves further follow-up of a subset of lost-to-follow-up censored subjects. They also proposed an adjusted estimator of the survivor function for a proportional hazards model under dependent censoring. In this paper, a new estimator of the survivor function is proposed for the semi-Markov model under dependent censorship on the basis of two-stage sampling data. The consistency and the asymptotic distribution of the proposed estimator are derived. The estimation procedure is illustrated with an example from a lung cancer clinical trial, and simulation results for the mean squared errors of the estimators under a proportional hazards model and two different nonproportional hazards models are reported.

  1. Political science. Reverse-engineering censorship in China: randomized experimentation and participant observation.

    PubMed

    King, Gary; Pan, Jennifer; Roberts, Margaret E

    2014-08-22

    Existing research on the extensive Chinese censorship organization uses observational methods with well-known limitations. We conducted the first large-scale experimental study of censorship by creating accounts on numerous social media sites, randomly submitting different texts, and observing from a worldwide network of computers which texts were censored and which were not. We also supplemented interviews with confidential sources by creating our own social media site, contracting with Chinese firms to install the same censoring technologies as existing sites, and--with their software, documentation, and even customer support--reverse-engineering how it all works. Our results offer rigorous support for the recent hypothesis that criticisms of the state, its leaders, and their policies are published, whereas posts about real-world events with collective action potential are censored. Copyright © 2014, American Association for the Advancement of Science.

  2. seawaveQ: an R package providing a model and utilities for analyzing trends in chemical concentrations in streams with a seasonal wave (seawave) and adjustment for streamflow (Q) and other ancillary variables

    USGS Publications Warehouse

    Ryberg, Karen R.; Vecchia, Aldo V.

    2013-01-01

    The seawaveQ R package fits a parametric regression model (seawaveQ) to pesticide concentration data from streamwater samples to assess variability and trends. The model incorporates the strong seasonality and high degree of censoring common in pesticide data, and users can incorporate numerous ancillary variables, such as streamflow anomalies. The model is fitted to pesticide data using maximum likelihood methods for censored data and is robust across pesticides, stream locations, and degrees of censoring of the concentration data. The R package standardizes this methodology for trend analysis, documents the code, and provides help and tutorial information, as well as additional utility functions for plotting pesticide and other chemical concentration data.
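
    The sketch below is a simplified Python analogue of this kind of censored-likelihood trend regression: log concentrations are regressed on sine/cosine seasonal terms, a flow anomaly, and a linear trend, with values below the reporting limit contributing normal-CDF mass to the likelihood. It is not the seawaveQ model itself (which uses a special seasonal-wave function), and all data are simulated.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Hypothetical monthly pesticide samples: seasonal signal + flow anomaly.
t = rng.uniform(0, 6, 240)                      # decimal years
flow = rng.normal(0, 1, 240)                    # streamflow anomaly
X = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                     flow, t])                  # last column: linear trend
beta_true = np.array([-2.0, 0.8, 0.3, 0.4, -0.1])
logc = X @ beta_true + rng.normal(0, 0.5, 240)
lod = np.log(0.05)                              # reporting limit (log scale)
detect = logc >= lod
yobs = np.where(detect, logc, lod)

def negll(p):
    """Censored normal likelihood: pdf for detects, cdf mass below the RL."""
    beta, s = p[:-1], np.exp(p[-1])
    mu = X @ beta
    ll = norm.logpdf(yobs[detect], mu[detect], s).sum()
    ll += norm.logcdf(lod, mu[~detect], s).sum()
    return -ll

fit = minimize(negll, np.zeros(X.shape[1] + 1), method="BFGS")
print("trend per year:", round(fit.x[4], 3), "(true -0.1)")
```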

  3. USSR Report, International Affairs

    DTIC Science & Technology

    1986-05-28

    examined on the material of four countries in Southeast Asia: Indonesia, Malaysia, Thailand and the Philippines. In his study, the author proceeded...television [as published] are guaranteed. There is no censorship." In other words, in the FRG there are no official censors, and in West German..."The Federal Republic does not need an official censor, for self-censorship—above all among the bosses of the mass media, i.e., television, radio

  4. Causal inference in survival analysis using pseudo-observations.

    PubMed

    Andersen, Per K; Syriopoulou, Elisavet; Parner, Erik T

    2017-07-30

    Causal inference for non-censored response variables, such as binary or quantitative outcomes, is often based on either (1) direct standardization ('G-formula') or (2) inverse probability of treatment assignment weights ('propensity score'). To do causal inference in survival analysis, one needs to address right-censoring, and often, special techniques are required for that purpose. We will show how censoring can be dealt with 'once and for all' by means of so-called pseudo-observations when doing causal inference in survival analysis. The pseudo-observations can be used as a replacement for the outcomes without censoring when applying 'standard' causal inference methods, such as (1) or (2) above. We study this idea for estimating the average causal effect of a binary treatment on the survival probability, the restricted mean lifetime, and the cumulative incidence in a competing risks situation. The methods will be illustrated in a small simulation study and via a study of patients with acute myeloid leukemia who received either myeloablative or non-myeloablative conditioning before allogeneic hematopoietic cell transplantation. We will estimate the average causal effect of the conditioning regime on outcomes such as the 3-year overall survival probability and the 3-year risk of chronic graft-versus-host disease. Copyright © 2017 John Wiley & Sons, Ltd.
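
    A minimal sketch of the pseudo-observation construction described above: jackknife pseudo-values n*S(t) - (n-1)*S_{-i}(t) computed from the Kaplan-Meier estimator at a fixed time point, which can then replace the censored outcome in standard regression machinery. The two-arm data below are hypothetical.

```python
import numpy as np

def km_at(time, event, t_star):
    """Kaplan-Meier survival probability evaluated at t_star."""
    s = 1.0
    for u in np.unique(time[event & (time <= t_star)]):
        s *= 1.0 - np.sum((time == u) & event) / np.sum(time >= u)
    return s

def pseudo_obs(time, event, t_star):
    """Jackknife pseudo-observations: n*S(t) - (n-1)*S_{-i}(t)."""
    n = len(time)
    s_full = km_at(time, event, t_star)
    keep = np.ones(n, dtype=bool)
    po = np.empty(n)
    for i in range(n):
        keep[i] = False
        po[i] = n * s_full - (n - 1) * km_at(time[keep], event[keep], t_star)
        keep[i] = True
    return po

# Hypothetical two-arm trial; the average causal effect on 3-year survival
# can then be estimated by regressing the pseudo-observations on treatment.
rng = np.random.default_rng(8)
arm = rng.integers(0, 2, 400)
t = rng.exponential(np.where(arm == 1, 6.0, 4.0))
c = rng.uniform(0, 10, 400)
obs, d = np.minimum(t, c), t <= c

po = pseudo_obs(obs, d, t_star=3.0)
print("mean pseudo-obs by arm:",
      round(float(po[arm == 0].mean()), 3),
      round(float(po[arm == 1].mean()), 3))
```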

  5. Multi-disease analysis of maternal antibody decay using non-linear mixed models accounting for censoring.

    PubMed

    Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel

    2015-09-10

    Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.

  6. The use of statistical methods for censored data to evaluate the activity concentration of Pb-210 in beans (Phaseolus vulgaris L.).

    PubMed

    Mingote, Raquel M; Nogueira, Regina A

    2016-10-01

    A survey of the ²¹⁰Pb activity concentration, one of the major internal natural radiation sources to man, has been carried out for the most common species of beans (Phaseolus vulgaris L.) grown and consumed in Brazil. The representative bean types chosen, carioca and black beans sown in the Brazilian Midwestern and Southern regions, were collected in this study, and ²¹⁰Pb was determined by liquid scintillation spectrometry after separation by chromatographic extraction using Sr-resin. Values available in the Brazilian radioactivity data set (GEORAD) for the ²¹⁰Pb activity concentration in black beans grown in the Southeastern region were added to the results of this study to enlarge the population considered. Because of the multiple detection limits and the high proportion of censored observations, a robust semi-parametric statistical method called regression on order statistics (ROS) was employed to provide a reference value for ²¹⁰Pb in Brazilian beans, which amounted to 41 mBq kg⁻¹ fresh wt. The results suggest that the ²¹⁰Pb activity concentration in carioca beans is lower than in black beans. Also evaluated was the ²¹⁰Pb activity concentration in the vegetable component of a typical diet, which displays lower values than those reported in the literature for food consumed in Europe. Copyright © 2016 Elsevier Ltd. All rights reserved.
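
    As a sketch of the ROS idea used above, the code below implements the simplified single-detection-limit version: detected log concentrations are regressed on the normal scores of their plotting positions, nondetects are imputed from the fitted line, and summary statistics are computed on the combined data. Proper ROS as used in the paper handles multiple detection limits; the data here are hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Hypothetical activity concentrations (mBq/kg) with one detection limit.
x = rng.lognormal(np.log(40.0), 0.6, 80)
dl = 35.0
detect = x >= dl
n, k = len(x), int(detect.sum())

# With a single DL, nondetects occupy the n-k lowest ranks and detects the
# rest. Blom-type plotting positions (i - 0.375)/(n + 0.25).
ranks = np.arange(1, n + 1)
pp = (ranks - 0.375) / (n + 0.25)
q = norm.ppf(pp)

# Fit log(conc) = a + b * normal-score using the detects only.
y_det = np.sort(np.log(x[detect]))
b, a = np.polyfit(q[n - k:], y_det, 1)

# Impute nondetects from the fitted line at the censored plotting positions,
# then combine imputed nondetects with the observed detects.
imputed = np.exp(a + b * q[:n - k])
combined = np.concatenate([imputed, np.exp(y_det)])
print(f"ROS mean {combined.mean():.1f}, median {np.median(combined):.1f}")
print(f"true mean {np.exp(np.log(40) + 0.6**2 / 2):.1f}")
```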

  7. A generalized Grubbs-Beck test statistic for detecting multiple potentially influential low outliers in flood series

    USGS Publications Warehouse

    Cohn, T.A.; England, J.F.; Berenbrock, C.E.; Mason, R.R.; Stedinger, J.R.; Lamontagne, J.R.

    2013-01-01

    The Grubbs-Beck test is recommended by the federal guidelines for detection of low outliers in flood flow frequency computation in the United States. This paper presents a generalization of the Grubbs-Beck test for normal data (similar to the Rosner (1983) test; see also Spencer and McCuen (1996)) that can provide a consistent standard for identifying multiple potentially influential low flows. In cases where low outliers have been identified, they can be represented as "less-than" values, and a frequency distribution can be developed using censored-data statistical techniques, such as the Expected Moments Algorithm. This approach can improve the fit of the right-hand tail of a frequency distribution and provide protection from lack-of-fit due to unimportant but potentially influential low flows (PILFs) in a flood series, thus making the flood frequency analysis procedure more robust.
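
    For orientation, below is a sketch of the classic (single) Grubbs-Beck low-outlier screen that this paper generalizes; the multiple Grubbs-Beck generalization itself is not shown. The critical-value formula is the commonly cited Bulletin 17B approximation for the 10% one-sided test, and the peak-flow data are hypothetical.

```python
import numpy as np

def grubbs_beck_low(flows, kn=None):
    """Classic one-sided Grubbs-Beck low-outlier screen (10% level).

    Flags flows below 10**(mean - K_N * sd) computed on log10 flows.
    K_N defaults to the Bulletin 17B approximation for sample size N
    (an assumption here; published tables give exact values).
    """
    logq = np.log10(flows)
    n = len(flows)
    if kn is None:
        kn = -0.9043 + 3.345 * np.sqrt(np.log10(n)) - 0.4046 * np.log10(n)
    threshold = 10 ** (logq.mean() - kn * logq.std(ddof=1))
    return flows < threshold, threshold

# Hypothetical annual peak flows (cfs) with two suspiciously low years.
rng = np.random.default_rng(10)
peaks = np.concatenate([rng.lognormal(np.log(5000), 0.5, 48), [120.0, 90.0]])
low, thr = grubbs_beck_low(peaks)
print(f"threshold {thr:.0f} cfs; flagged {low.sum()} low outlier(s)")
# Flagged values would then be recoded as "less-than" (censored) values
# before fitting the flood frequency distribution, e.g. with EMA.
```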

  8. Trust, Protocol, Gender, and Power in Interwar British Biomedical Research: Kathleen Chevassut and the “Germ” of Multiple Sclerosis

    PubMed Central

    Casper, Stephen T.

    2011-01-01

    In March 1930, reports of the discovery of an organism causative of multiple sclerosis circulated in the British press. At the same time, news of a therapeutically efficacious vaccine also reached the ears of neurologists and patients afflicted with the debilitating degenerative disease. It was soon shown that no organism had been discovered. The events leading up to this ultimately painful episode reveal many of the central problems created when social conventions and a sense of decorum scripted received understanding of good scientific practice rather than actual regulatory frameworks. In the absence of such frameworks, few means were present to censor inappropriate scientific conduct. This story thus provides a window into an emergent world of state-sponsored biomedical research; a world where recrimination, gossip, misogyny, uncertainty, exaggeration, and dreams and delusions of scientific and therapeutic progress were collapsed together. PMID:20478897

  9. A generalized Grubbs-Beck test statistic for detecting multiple potentially influential low outliers in flood series

    NASA Astrophysics Data System (ADS)

    Cohn, T. A.; England, J. F.; Berenbrock, C. E.; Mason, R. R.; Stedinger, J. R.; Lamontagne, J. R.

    2013-08-01

    The Grubbs-Beck test is recommended by the federal guidelines for detection of low outliers in flood flow frequency computation in the United States. This paper presents a generalization of the Grubbs-Beck test for normal data (similar to the Rosner (1983) test; see also Spencer and McCuen (1996)) that can provide a consistent standard for identifying multiple potentially influential low flows. In cases where low outliers have been identified, they can be represented as "less-than" values, and a frequency distribution can be developed using censored-data statistical techniques, such as the Expected Moments Algorithm. This approach can improve the fit of the right-hand tail of a frequency distribution and provide protection from lack-of-fit due to unimportant but potentially influential low flows (PILFs) in a flood series, thus making the flood frequency analysis procedure more robust.

  10. Another Velvet Revolution Implications of the 1989 Czech Velvet Revolution on Iran

    DTIC Science & Technology

    2011-06-01

    countries; “even censoring news from the Soviet Union, whose own period of glasnost precipitated all these gyrations.” Furthermore, the failure of the... America for having maliciously presented the report. For his action of passing along information to Western journalists on the reports of Smid's...their coverage of the demonstrations was censored. Video coverage of the demonstrations was often televised as a deterrence mechanism, meanwhile news

  11. Counterinsurgency in Brazil: Lessons of the Fighting from 1968 to 1974

    DTIC Science & Technology

    2010-04-12

    system over almost all information disseminated in the press, theaters, movies and music. Government agents worked as censors inside press agencies...articles, lyrics of songs and scenes from movies that were judged as being subversive were suppressed by censors. Under the military instrument of national...the maintenance of its influence in Latin America. Previous to the military coup d'etat on 31 March 1964, U.S. President Lyndon Johnson had already

  12. Repression, Civil Conflict and Leadership Tenure: The Thai Case Study: 2006-2014

    DTIC Science & Technology

    2015-05-30

    peaceful protestors. The Army argues that it intervenes to prevent more violence and instability. The armed forces also censor the Internet, making it...protestors. The Thai public responded negatively to violent repression, as did many of Thailand's allies in Europe, Asia and North America. In the wake...of expression, blocking and shutting down websites and radio stations, and censoring the Internet. In addition, the new government banned gatherings

  13. An Expectation-Maximization Algorithm for Amplitude Estimation of Saturated Optical Transient Signals.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kagie, Matthew J.; Lanterman, Aaron D.

    2017-12-01

    This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
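
    The paper's EM targets a censored Poisson amplitude; as a simpler illustration of the same censored-data EM mechanics, the sketch below estimates the mean of an exponential sample whose large values are right-censored (saturated). The E-step exploits the memoryless property, and the fixed point agrees with the closed-form censored MLE. This is a stand-in example, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(12)

# Hypothetical right-censored exponential data (e.g., saturated readings).
theta_true, cpoint = 3.0, 4.0
t = rng.exponential(theta_true, 500)
censored = t > cpoint
y = np.minimum(t, cpoint)

# EM: E-step imputes E[T | T > c] = c + theta (memoryless property);
# M-step is the complete-data MLE, the sample mean of the filled-in data.
theta = 1.0
for _ in range(100):
    filled = np.where(censored, cpoint + theta, y)
    theta_new = filled.mean()
    if abs(theta_new - theta) < 1e-10:
        theta = theta_new
        break
    theta = theta_new

# The fixed point matches the closed-form censored MLE: total time / events.
print("EM:", round(theta, 3),
      " closed form:", round(y.sum() / (~censored).sum(), 3))
```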

  14. Censored rainfall modelling for estimation of fine-scale extremes

    NASA Astrophysics Data System (ADS)

    Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro

    2018-01-01

    Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism, and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have tended to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.

  15. Maximum likelihood estimation for semiparametric transformation models with interval-censored data

    PubMed Central

    Mao, Lu; Lin, D. Y.

    2016-01-01

    Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656

  16. A robust semi-parametric warping estimator of the survivor function with an application to two-group comparisons

    PubMed Central

    Hutson, Alan D

    2018-01-01

    In this note, we develop a novel semi-parametric estimator of the survival curve that is comparable to the product-limit estimator under very relaxed assumptions. The estimator is based on a beta parametrization that warps the empirical distribution of the observed censored and uncensored data. The parameters are obtained using a pseudo-maximum-likelihood approach that adjusts the survival curve to account for the censored observations. In the univariate setting, the new estimator tends to extend the range of the survival estimate further when there is a high degree of censoring. The key feature of this paper, however, is a new two-group semi-parametric exact permutation test for comparing survival curves that is generally superior to the classic log-rank and Wilcoxon tests and provides the best global power across a variety of alternatives. The new test is readily extended to the k-group setting. PMID:26988931

  17. Regression analysis of informative current status data with the additive hazards model.

    PubMed

    Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo

    2015-04-01

    This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models when the censoring is noninformative, and there also exists a large literature on parametric analysis of informative current status data in the context of tumorigenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented in which a copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved, and the asymptotic consistency and normality of the proposed estimators are established. A simulation study indicates that the proposed approach works well for practical situations. An illustrative example is also provided.

  18. Survival analysis for the missing censoring indicator model using kernel density estimation techniques

    PubMed Central

    Subramanian, Sundarraman

    2008-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423

  19. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.

  20. Strategic Studies Quarterly. Volume 5, Number 1, Spring 2011

    DTIC Science & Technology

    2011-01-01

    2010). 17. The White House, National Space Policy of the United States of America (Washington: White House, 28 June 2010), 3. 18. John Oneal and Bruce... censors and vigilantes) model operating on many levels at once. In this model, China is expressing a long-standing concern for the stability and...Ansfield, “China’s Censors Tackle and Trip Over the Internet,” New York Times, 8 April 2010. 32. Ching Cheong, “Fighting the Digital War with the

  1. Median nitrate concentrations in groundwater in the New Jersey Highlands Region estimated using regression models and land-surface characteristics

    USGS Publications Warehouse

    Baker, Ronald J.; Chepiga, Mary M.; Cauller, Stephen J.

    2015-01-01

    The Kaplan-Meier method of estimating summary statistics from left-censored data was applied in order to include nondetects (left-censored data) in median nitrate-concentration calculations. Median concentrations also were determined using three alternative methods of handling nondetects. Treatment of the 23 percent of samples that were nondetects had little effect on estimated median nitrate concentrations because method detection limits were mostly less than median values.

  2. Outcome-Dependent Sampling with Interval-Censored Failure Time Data

    PubMed Central

    Zhou, Qingning; Cai, Jianwen; Zhou, Haibo

    2017-01-01

    Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computational difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method work well for practical situations and are more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664

  3. A Bayesian model for time-to-event data with informative censoring

    PubMed Central

    Kaciroti, Niko A.; Raghunathan, Trivellore E.; Taylor, Jeremy M. G.; Julius, Stevo

    2012-01-01

    Randomized trials with dropouts or censored data and discrete time-to-event type outcomes are frequently analyzed using the Kaplan–Meier or product limit (PL) estimation method. However, the PL method assumes that the censoring mechanism is noninformative, and when this assumption is violated, the inferences may not be valid. We propose an expanded PL method using a Bayesian framework to incorporate an informative censoring mechanism and perform sensitivity analysis on estimates of the cumulative incidence curves. The expanded method uses a model, which can be viewed as a pattern mixture model, where the odds of having an event during the follow-up interval (t_{k-1}, t_k], conditional on being at risk at t_{k-1}, differ across the patterns of missing data. The sensitivity parameters relate the odds of an event, between subjects from a missing-data pattern and the observed subjects, for each interval. The large number of sensitivity parameters is reduced by treating them as random with a log-normal distribution having a prespecified mean and variance. We then vary the mean and variance to explore the sensitivity of the inferences. The missing at random (MAR) mechanism is a special case of the expanded model, thus allowing exploration of how inferences change as we depart from the MAR assumption. The proposed approach is applied to data from the TRial Of Preventing HYpertension. PMID:22223746

  4. A new semi-supervised learning model combined with Cox and SP-AFT models in cancer survival analysis.

    PubMed

    Chai, Hua; Li, Zi-Na; Meng, De-Yu; Xia, Liang-Yong; Liang, Yong

    2017-10-12

    Gene selection is an attractive and important task in cancer survival analysis. Most existing supervised learning methods can use only the labeled biological data, while the censored data (weakly labeled data), which far outnumber the labeled data, are ignored in model building. To utilize the information in the censored data, a semi-supervised learning framework (Cox-AFT model) combining the Cox proportional hazards (Cox) and accelerated failure time (AFT) models has been used in cancer research, with better performance than the single Cox or AFT model. This method, however, is easily affected by noise. To alleviate this problem, in this paper we combine the Cox-AFT model with the self-paced learning (SPL) method to employ the information in the censored data more effectively in a self-learning way. SPL is a reliable and stable learning mechanism, recently proposed to simulate the human learning process, that helps the AFT model automatically identify and include high-confidence samples in training, minimizing interference from noise. Utilizing the SPL method produces two direct advantages: (1) the utilization of censored data is further promoted; (2) the noise delivered to the model is greatly decreased. The experimental results demonstrate the effectiveness of the proposed model compared to the traditional Cox-AFT model.

  5. A Comparison of the β-Substitution Method and a Bayesian Method for Analyzing Left-Censored Data

    PubMed Central

    Huynh, Tran; Quick, Harrison; Ramachandran, Gurumurthy; Banerjee, Sudipto; Stenzel, Mark; Sandler, Dale P.; Engel, Lawrence S.; Kwok, Richard K.; Blair, Aaron; Stewart, Patricia A.

    2016-01-01

    Classical statistical methods for analyzing exposure data with values below the detection limits are well described in the occupational hygiene literature, but an evaluation of a Bayesian approach for handling such data is currently lacking. Here, we first describe a Bayesian framework for analyzing censored data. We then present the results of a simulation study conducted to compare the β-substitution method with a Bayesian method for exposure datasets drawn from lognormal distributions and mixed lognormal distributions with varying sample sizes, geometric standard deviations (GSDs), and censoring for single and multiple limits of detection. For each set of factors, estimates for the arithmetic mean (AM), geometric mean (GM), GSD, and the 95th percentile (X_0.95) of the exposure distribution were obtained. We evaluated the performance of each method using relative bias, the root mean squared error (rMSE), and coverage (the proportion of the computed 95% uncertainty intervals containing the true value). The Bayesian method using non-informative priors and the β-substitution method were generally comparable in bias and rMSE when estimating the AM and GM. For the GSD and the 95th percentile, the Bayesian method with non-informative priors was more biased and had a higher rMSE than the β-substitution method, but use of more informative priors generally improved the Bayesian method's performance, making both the bias and the rMSE more comparable to the β-substitution method. An advantage of the Bayesian method is that it provided estimates of uncertainty for these parameters of interest and good coverage, whereas the β-substitution method only provided estimates of uncertainty for the AM, and coverage was not as consistent. Selection of one or the other method depends on the needs of the practitioner, the availability of prior information, and the distribution characteristics of the measurement data. We suggest the use of Bayesian methods if the practitioner has the computational resources and prior information, as the method would generally provide accurate estimates and also provides the distributions of all of the parameters, which could be useful for making decisions in some applications. PMID:26209598
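
    A minimal sketch of the kind of Bayesian censored-data analysis described above, assuming a lognormal exposure model with one detection limit and a random-walk Metropolis sampler (the paper's implementation and priors may differ). Nondetects contribute normal-CDF mass to the likelihood, and the posterior for the arithmetic mean is summarized at the end.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical lognormal exposures with a single detection limit.
true_mu, true_sigma, lod = np.log(1.0), 0.8, 0.7
x = rng.lognormal(true_mu, true_sigma, size=60)
detected = x >= lod
y = np.log(np.where(detected, x, lod))  # nondetects recorded at log(LOD)

def log_post(mu, log_sigma):
    """Log posterior: censored-lognormal likelihood + weak normal priors."""
    sigma = np.exp(log_sigma)
    ll = norm.logpdf(y[detected], mu, sigma).sum()     # detects
    ll += norm.logcdf(y[~detected], mu, sigma).sum()   # below-LOD mass
    return ll + norm.logpdf(mu, 0, 10) + norm.logpdf(log_sigma, 0, 2)

# Random-walk Metropolis over (mu, log sigma).
theta = np.array([0.0, 0.0])
lp = log_post(*theta)
draws = []
for it in range(20000):
    prop = theta + rng.normal(0, 0.1, size=2)
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if it >= 5000 and it % 10 == 0:   # burn-in, then thin
        draws.append(theta.copy())

draws = np.array(draws)
mu_d, sigma_d = draws[:, 0], np.exp(draws[:, 1])
am = np.exp(mu_d + sigma_d ** 2 / 2)  # arithmetic mean of a lognormal
print("posterior median AM:", round(float(np.median(am)), 3),
      "95% interval:", np.percentile(am, [2.5, 97.5]).round(3))
```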

  6. Risk of Febrile Neutropenia Associated With Select Myelosuppressive Chemotherapy Regimens in a Large Community-Based Oncology Practice.

    PubMed

    Li, Yanli; Family, Leila; Yang, Su-Jau; Klippel, Zandra; Page, John H; Chao, Chun

    2017-09-01

    Background: NCCN has classified commonly used chemotherapy regimens into high (>20%), intermediate (10%-20%), or low (<10%) febrile neutropenia (FN) risk categories based primarily on clinical trial evidence. Many chemotherapy regimens, however, remain unclassified by NCCN or lack FN incidence data in real-world clinical practice. Patients and Methods: We evaluated incidence proportions of FN and grade 4 and 3/4 neutropenia during the first chemotherapy course among patients from Kaiser Permanente Southern California who received selected chemotherapy regimens without well-established FN risk. Patients given granulocyte colony-stimulating factor (G-CSF) prophylaxis were excluded. Sensitivity analyses were performed to account for FN misclassification and censoring. Results: From 2008 to 2013, 1,312 patients with breast cancer who received docetaxel and cyclophosphamide (TC; n=853) or docetaxel, carboplatin, and trastuzumab (TCH; n=459); 1,321 patients with colorectal cancer who received capecitabine and oxaliplatin (XELOX; n=401) or leucovorin, 5-fluorouracil, and oxaliplatin (FOLFOX6; n=920); 307 patients with non-Hodgkin's lymphoma who received bendamustine with or without rituximab; and 181 patients with multiple myeloma who received lenalidomide with or without dexamethasone were included. Crude FN risk was >20% for both breast cancer regimens (TC and TCH). Crude FN risks for XELOX, FOLFOX6, bendamustine, and lenalidomide were <10%; however, when potential FN misclassification and censoring were considered, FN risks were >10%. Conclusions: Our results support published literature highlighting the real-world, "high" FN risk of the TC and TCH regimens for breast cancer. There is strong suggestive evidence that FN risks for XELOX, FOLFOX6, bendamustine, and lenalidomide are >10%. Calculation of chemotherapy course-level FN incidence without controlling for differential censoring for patients who discontinued regimens early, or possible FN misclassification, might have resulted in bias toward an underestimation of the true FN risk. These findings help define FN risk of the selected regimens in the real-world setting and inform prophylactic G-CSF use. Copyright © 2017 by the National Comprehensive Cancer Network.

  7. The effect of hospital care on early survival after penetrating trauma.

    PubMed

    Clark, David E; Doolittle, Peter C; Winchell, Robert J; Betensky, Rebecca A

    2014-12-01

    The effectiveness of emergency medical interventions can be best evaluated using time-to-event statistical methods with time-varying covariates (TVC), but this approach is complicated by uncertainty about the actual times of death. We therefore sought to evaluate the effect of hospital intervention on mortality after penetrating trauma using a method that allowed for interval censoring of the precise times of death. Data on persons with penetrating trauma due to interpersonal assault were combined from the 2008 to 2010 National Trauma Data Bank (NTDB) and the 2004 to 2010 National Violent Death Reporting System (NVDRS). Cox and Weibull proportional hazards models for survival time (t_SURV) were estimated, with TVC assumed to have constant effects for specified time intervals following hospital arrival. The Weibull model was repeated with t_SURV interval-censored to reflect uncertainty about the precise times of death, using an imputation method to accommodate interval censoring along with TVC. All models showed that mortality was increased by older age, female sex, firearm mechanism, and injuries involving the head/neck or trunk. Uncensored models showed a paradoxical increase in mortality associated with the first hour in a hospital. The interval-censored model showed that mortality was markedly reduced after admission to a hospital, with a hazard ratio (HR) of 0.68 (95% CI 0.63, 0.73) during the first 30 min, declining to an HR of 0.01 after 120 min. Admission to a verified level I trauma center (compared to other hospitals in the NTDB) was associated with a further reduction in mortality, with an HR of 0.93 (95% CI 0.82, 0.97). Time-to-event models with TVC and interval censoring can be used to estimate the effect of hospital care on early mortality after penetrating trauma or other acute medical conditions and could potentially be used for interhospital comparisons.
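    For orientation, the likelihood contribution that makes interval censoring tractable in such models takes a standard form (notation ours, not the paper's): a subject whose death time is known only to lie between two bounds contributes the probability mass between them.

```latex
% Contribution of subject i whose event time T_i is only known to lie
% in the interval (L_i, R_i]; S is the model survival function.
L_i(\theta) = \Pr(L_i < T_i \le R_i \mid \theta)
            = S(L_i \mid \theta) - S(R_i \mid \theta)
```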

  8. Adapting machine learning techniques to censored time-to-event health record data: A general-purpose approach using inverse probability of censoring weighting.

    PubMed

    Vock, David M; Wolfson, Julian; Bandyopadhyay, Sunayan; Adomavicius, Gediminas; Johnson, Paul E; Vazquez-Benitez, Gabriela; O'Connor, Patrick J

    2016-06-01

    Models for predicting the probability of experiencing various health outcomes or adverse events over a certain time frame (e.g., having a heart attack in the next 5 years) based on individual patient characteristics are important tools for managing patient care. Electronic health data (EHD) are appealing sources of training data because they provide access to large amounts of rich individual-level data from present-day patient populations. However, because EHD are derived by extracting information from administrative and clinical databases, some fraction of subjects will not be under observation for the entire time frame over which one wants to make predictions; this loss to follow-up is often due to disenrollment from the health system. For subjects without complete follow-up, whether or not they experienced the adverse event is unknown, and in statistical terms the event time is said to be right-censored. Most machine learning approaches to the problem have been relatively ad hoc; for example, common approaches for handling observations in which the event status is unknown include (1) discarding those observations, (2) treating them as non-events, (3) splitting those observations into two observations: one where the event occurs and one where the event does not. In this paper, we present a general-purpose approach to account for right-censored outcomes using inverse probability of censoring weighting (IPCW). We illustrate how IPCW can easily be incorporated into a number of existing machine learning algorithms used to mine big health care data including Bayesian networks, k-nearest neighbors, decision trees, and generalized additive models. We then show that our approach leads to better calibrated predictions than the three ad hoc approaches when applied to predicting the 5-year risk of experiencing a cardiovascular adverse event, using EHD from a large U.S. Midwestern healthcare system. Copyright © 2016 Elsevier Inc. All rights reserved.
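    As a concrete illustration of the IPCW idea (a sketch, not the paper's code), the snippet below weights subjects whose event status at a fixed horizon is known by the inverse of a Kaplan-Meier estimate of the censoring distribution, then passes those weights to an off-the-shelf classifier. The simulated arrays and the logistic-regression base learner are stand-ins for an EHD extract and any weighted learner.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def censoring_survival(time, event):
    """Kaplan-Meier estimate of the censoring survival G(t); a
    'censoring event' is the complement of the outcome event.
    (A careful implementation would evaluate G(t-) at event times.)"""
    order = np.argsort(time)
    t, c = time[order], 1 - event[order]
    G, surv = 1.0, {}
    for u in np.unique(t[c == 1]):
        at_risk = np.sum(t >= u)
        d = np.sum((t == u) & (c == 1))
        G *= 1 - d / at_risk
        surv[u] = G
    def G_of(s):
        keys = np.array(sorted(surv))
        out = np.ones_like(s, dtype=float)
        for i, si in enumerate(s):
            past = keys[keys <= si]
            out[i] = surv[past[-1]] if len(past) else 1.0
        return out
    return G_of

def ipcw_weights(time, event, horizon):
    """Subjects censored before the horizon without an event get weight 0;
    the rest are inversely weighted by the probability of remaining
    uncensored up to their (truncated) observation time."""
    G = censoring_survival(time, event)
    known = (event == 1) & (time <= horizon) | (time > horizon)
    w = np.zeros(len(time))
    w[known] = 1.0 / G(np.minimum(time, horizon)[known])
    return known, w

# Usage sketch on simulated data (invented, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
true_t = rng.exponential(5, 500) * np.exp(-X[:, 0])
cens_t = rng.exponential(7, 500)
time, event = np.minimum(true_t, cens_t), (true_t <= cens_t).astype(int)

known, w = ipcw_weights(time, event, horizon=5.0)
y = ((time <= 5.0) & (event == 1)).astype(int)
clf = LogisticRegression().fit(X[known], y[known], sample_weight=w[known])
```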

  9. MEDIAN-BASED INCREMENTAL COST-EFFECTIVENESS RATIOS WITH CENSORED DATA

    PubMed Central

    Bang, Heejung; Zhao, Hongwei

    2016-01-01

    Cost-effectiveness is an essential part of treatment evaluation, in addition to effectiveness. In the cost-effectiveness analysis, a measure called the incremental cost-effectiveness ratio (ICER) is widely utilized, and the mean cost and the mean (quality-adjusted) life years have served as norms to summarize cost and effectiveness for a study population. Recently, the median-based ICER was proposed for complementary or sensitivity analysis purposes. In this paper, we extend this method when some data are censored. PMID:26010599
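    Schematically (our notation, not the paper's), the mean-based ICER and its median-based analogue compare incremental cost to incremental effectiveness between treatment (subscript 1) and control (subscript 0); with censored data, each of the four summaries must be replaced by a censoring-adjusted estimator, which is the extension the paper develops.

```latex
\mathrm{ICER}_{\text{mean}} = \frac{E[C_1] - E[C_0]}{E[E_1] - E[E_0]},
\qquad
\mathrm{ICER}_{\text{med}} =
  \frac{\operatorname{med}(C_1) - \operatorname{med}(C_0)}
       {\operatorname{med}(E_1) - \operatorname{med}(E_0)}
```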

  10. Prevalence Incidence Mixture Models

    Cancer.gov

    The R package and web tool fit Prevalence Incidence Mixture models to left-censored and irregularly interval-censored time-to-event data that are commonly found in screening cohorts assembled from electronic health records. Absolute and relative risk can be estimated for simple random sampling and stratified sampling (the two approaches of superpopulation and a finite population are supported for target populations). Non-parametric (absolute risks only), semi-parametric, weakly-parametric (using B-splines), and some fully parametric (such as the logistic-Weibull) models are supported.

  11. A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.

    2012-01-01

    This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659

  12. The treatment of missing data in a large cardiovascular clinical outcomes study.

    PubMed

    Little, Roderick J; Wang, Julia; Sun, Xiang; Tian, Hong; Suh, Eun-Young; Lee, Michael; Sarich, Troy; Oppenheimer, Leonard; Plotnikov, Alexei; Wittes, Janet; Cook-Bruns, Nancy; Burton, Paul; Gibson, C Michael; Mohanty, Surya

    2016-06-01

    The potential impact of missing data on the results of clinical trials has received heightened attention recently. A National Research Council study provides recommendations for limiting missing data in clinical trial design and conduct, and principles for analysis, including the need for sensitivity analyses to assess robustness of findings to alternative assumptions about the missing data. A Food and Drug Administration advisory committee raised missing data as a serious concern in their review of results from the ATLAS ACS 2 TIMI 51 study, a large clinical trial that assessed rivaroxaban for its ability to reduce the risk of cardiovascular death, myocardial infarction or stroke in patients with acute coronary syndrome. This case study describes a variety of measures that were taken to address concerns about the missing data. A range of analyses are described to assess the potential impact of missing data on conclusions. In particular, measures of the amount of missing data are discussed, and the fraction of missing information from multiple imputation is proposed as an alternative measure. The sensitivity analysis in the National Research Council study is modified in the context of survival analysis where some individuals are lost to follow-up. The impact of deviations from ignorable censoring is assessed by differentially increasing the hazard of the primary outcome in the treatment groups and multiply imputing events between dropout and the end of the study. Tipping-point analyses are described, where the deviation from ignorable censoring that results in a reversal of significance of the treatment effect is determined. A study to determine the vital status of participants lost to follow-up was also conducted, and the results of including this additional information are assessed. Sensitivity analyses suggest that findings of the ATLAS ACS 2 TIMI 51 study are robust to missing data; this robustness is reinforced by the follow-up study, since inclusion of data from this study had little impact on the study conclusions. Missing data are a serious problem in clinical trials. The methods presented here, namely, the sensitivity analyses, the follow-up study to determine survival of missing cases, and the proposed measurement of missing data via the fraction of missing information, have potential application in other studies involving survival analysis where missing data are a concern. © The Author(s) 2016.
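    The delta-adjustment and tipping-point ideas can be summarized schematically (our notation; the study's implementation is more elaborate): the hazard after dropout is inflated by a factor delta relative to the ignorable-censoring fit, events between dropout and the end of the study are multiply imputed under the inflated hazard, and the tipping point is the smallest inflation that overturns significance.

```latex
% Hazard used to impute post-dropout events, and the tipping point
% delta* at which the treatment effect loses significance; p(delta)
% is the p-value after imputation under inflation factor delta.
\lambda_{\text{imp}}(t) = \delta \, \hat\lambda(t), \qquad
\delta^{*} = \min\{\delta \ge 1 : p(\delta) > 0.05\}
```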

  13. Inverse probability weighting to control confounding in an illness-death model for interval-censored data.

    PubMed

    Gillaizeau, Florence; Sénage, Thomas; Le Borgne, Florent; Le Tourneau, Thierry; Roussel, Jean-Christian; Leffondré, Karen; Porcher, Raphaël; Giraudeau, Bruno; Dantan, Etienne; Foucher, Yohann

    2018-04-15

    Multistate models with interval-censored data, such as the illness-death model, are still not used to any considerable extent in medical research, despite the significant literature demonstrating their advantages compared with usual survival models. Possible explanations are their uncommon availability in classical statistical software or, when they are available, the limitations related to multivariable modelling to take confounding into consideration. In this paper, we propose a strategy based on propensity scores that allows population causal effects to be estimated: inverse probability weighting in the illness-death semi-Markov model with interval-censored data. Using simulated data, we validated the performance of the proposed approach. We also illustrated the usefulness of the method by an application aiming to evaluate the relationship between the inadequate size of an aortic bioprosthesis and its degeneration and/or patient death. We have updated the R package multistate to facilitate the future use of this method. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Conditional power and predictive power based on right censored data with supplementary auxiliary information.

    PubMed

    Sun, Libo; Wan, Ying

    2018-04-22

    Conditional power and predictive power provide estimates of the probability of success at the end of the trial based on the information from the interim analysis. The observed value of the time-to-event endpoint at the interim analysis could be biased for the true treatment effect due to early censoring, leading to a biased estimate of conditional power and predictive power. In such cases, the estimates and inference for this right-censored primary endpoint are enhanced by incorporating a fully observed auxiliary variable. We assume a bivariate normal distribution of the transformed primary variable and a correlated auxiliary variable. Simulation studies are conducted that not only show enhanced conditional power and predictive power but also provide the framework for a more efficient futility interim analysis, in terms of improved accuracy of the estimator, a smaller inflation of the type II error, and an optimal timing for such an analysis. We also illustrate the new approach with a real clinical trial example. Copyright © 2018 John Wiley & Sons, Ltd.
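    The abstract states no formulas; for background, a textbook Brownian-motion expression for conditional power at information fraction t is shown below, where Z_t is the interim test statistic, theta the assumed drift, and alpha the one-sided level. In the paper's setting the auxiliary variable would enter by sharpening the interim estimate behind Z_t; this is standard background, not the authors' derivation.

```latex
\mathrm{CP}(\theta) =
\Phi\!\left( \frac{Z_t \sqrt{t} - z_{1-\alpha} + \theta (1 - t)}{\sqrt{1 - t}} \right)
```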

  15. Quantifying and estimating the predictive accuracy for censored time-to-event data with competing risks.

    PubMed

    Wu, Cai; Li, Liang

    2018-05-15

    This paper focuses on quantifying and estimating the predictive accuracy of prognostic models for time-to-event outcomes with competing events. We consider the time-dependent discrimination and calibration metrics, including the receiver operating characteristics curve and the Brier score, in the context of competing risks. To address censoring, we propose a unified nonparametric estimation framework for both discrimination and calibration measures, by weighting the censored subjects with the conditional probability of the event of interest given the observed data. The proposed method can be extended to time-dependent predictive accuracy metrics constructed from a general class of loss functions. We apply the methodology to a data set from the African American Study of Kidney Disease and Hypertension to evaluate the predictive accuracy of a prognostic risk score in predicting end-stage renal disease, accounting for the competing risk of pre-end-stage renal disease death, and evaluate its numerical performance in extensive simulation studies. Copyright © 2018 John Wiley & Sons, Ltd.
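    One common IPCW form of the time-dependent Brier score under competing risks is sketched below (our notation; the paper's weighting conditions on the observed data more generally). Here F_1(t|X) is the predicted cause-1 cumulative incidence, epsilon_i the event type, delta_i the uncensored indicator, and G the Kaplan-Meier estimate of the censoring survival.

```latex
\mathrm{BS}(t) = \frac{1}{n} \sum_{i=1}^{n} W_i(t)
  \left( \mathbf{1}\{T_i \le t,\ \epsilon_i = 1\} - \hat F_1(t \mid X_i) \right)^{2},
\qquad
W_i(t) = \frac{\mathbf{1}\{T_i \le t\}\,\delta_i}{\hat G(T_i^{-})}
       + \frac{\mathbf{1}\{T_i > t\}}{\hat G(t)}
```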

  16. Semi-Markov models for interval censored transient cognitive states with back transitions and a competing risk

    PubMed Central

    Wei, Shaoceng; Kryscio, Richard J.

    2015-01-01

    Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia, with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. Each subject's cognition is assessed periodically, resulting in interval censoring for the cognitive states, while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed, except for transitions from the baseline state, which are exponentially distributed, and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. PMID:24821001

  17. Semi-Markov models for interval censored transient cognitive states with back transitions and a competing risk.

    PubMed

    Wei, Shaoceng; Kryscio, Richard J

    2016-12-01

    Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript, we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. © The Author(s) 2014.

  18. Estimating contaminant loads in rivers: An application of adjusted maximum likelihood to type 1 censored data

    USGS Publications Warehouse

    Cohn, Timothy A.

    2005-01-01

    This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored-data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE the AMLE comes close to achieving the theoretical Fréchet-Cramér-Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real-time water quality monitoring.

  19. Semiparametric temporal process regression of survival-out-of-hospital.

    PubMed

    Zhan, Tianyu; Schaubel, Douglas E

    2018-05-23

    The recurrent/terminal event data structure has undergone considerable methodological development in the last 10-15 years. An example of the data structure that has arisen with increasing frequency involves the recurrent event being hospitalization and the terminal event being death. We consider the response Survival-Out-of-Hospital, defined as a temporal process (indicator function) taking the value 1 when the subject is currently alive and not hospitalized, and 0 otherwise. Survival-Out-of-Hospital is a useful alternative strategy for the analysis of hospitalization/survival in the chronic disease setting, with the response variate representing a refinement to survival time through the incorporation of an objective quality-of-life component. The semiparametric model we consider assumes multiplicative covariate effects and leaves unspecified the baseline probability of being alive-and-out-of-hospital. Using zero-mean estimating equations, the proposed regression parameter estimator can be computed without estimating the unspecified baseline probability process, although baseline probabilities can subsequently be estimated for any time point within the support of the censoring distribution. We demonstrate that the regression parameter estimator is asymptotically normal, and that the baseline probability function estimator converges to a Gaussian process. Simulation studies are performed to show that our estimating procedures have satisfactory finite sample performances. The proposed methods are applied to the Dialysis Outcomes and Practice Patterns Study (DOPPS), an international end-stage renal disease study.

  20. Excessive Gestational Weight Gain in the First Trimester among Women with Normal Glucose Tolerance and Resulting Neonatal Adiposity

    PubMed Central

    Josefson, Jami L.; Simons, Hannah; Zeiss, Dinah M.; Metzger, Boyd E.

    2016-01-01

    Objective To assess whether weight gain above or below Institute of Medicine (IOM) recommended amounts in an ethnically diverse obstetric population with normal glucose tolerance is associated with differences in neonatal adiposity. Study Design In this prospective cohort study, healthy women with normal glucose tolerance based on the International Association of Diabetes and Pregnancy Study Groups guidelines were enrolled. Gestational weight at multiple time points was collected. Neonatal adiposity was measured by air displacement plethysmography at 24-72 hours of life. Analyses included Fisher's exact test, ANOVA, and a trajectory analysis using a group-based weight gain trajectory model with a censored normal distribution. Results Overweight and obese women were more likely to exceed IOM weight gain guidelines. Regardless, there was no significant difference in %body fat of neonates born to mothers who either met or exceeded gestational weight gain guidelines. Gestational weight gain timing influenced neonatal anthropometrics: women who gained excessively by the first prenatal visit had neonates with significantly higher birth weight (3.91 kg vs. 3.45 kg, p<0.001) and %body fat (13.7% vs. 10.9%, p=0.0001) compared to women who had steady, moderate gestational weight gain. Conclusion Avoidance of excessive gestational weight gain in the first trimester may prevent high amounts of neonatal adiposity. PMID:27583397

  1. Costs of cervical cancer treatment: population-based estimates from Ontario

    PubMed Central

    Pendrith, C.; Thind, A.; Zaric, G.S.; Sarma, S.

    2016-01-01

    Objectives The objectives of the present study were to estimate the overall and specific medical care costs associated with cervical cancer in the first 5 years after diagnosis in Ontario. Methods Incident cases of invasive cervical cancer during 2007–2010 were identified from the Ontario Cancer Registry and linked to administrative databases held at the Institute for Clinical Evaluative Sciences. Mean costs in 2010 Canadian dollars were estimated using the arithmetic mean and estimators that adjust for censored data. Results Mean age of the patients in the study cohort (779 cases) was 49.3 years. The mean overall medical care cost was $39,187 [standard error (se): $1,327] in the 1st year after diagnosis. Costs in year 1 ranged from $34,648 (se: $1,275) for those who survived at least 1 year to $69,142 (se: $4,818) for those who died from cervical cancer within 1 year. At 5 years after diagnosis, the mean overall unadjusted cost was $63,131 (se: $3,131), and the cost adjusted for censoring was $68,745 (se: $2,963). Inpatient hospitalizations and cancer-related care were the two largest components of cancer treatment costs. Conclusions We found that the estimated mean costs that did not account for censoring were consistently underestimated, highlighting the importance of estimates based on censoring-adjusted costs in cervical cancer. Our results are reliable for estimating the economic burden of cervical cancer and the cost-effectiveness of cervical cancer prevention strategies. PMID:27122978

  2. Community drinking water quality monitoring data: utility for public health research and practice.

    PubMed

    Jones, Rachael M; Graber, Judith M; Anderson, Robert; Rockne, Karl; Turyk, Mary; Stayner, Leslie T

    2014-01-01

    Environmental Public Health Tracking (EPHT) tracks the occurrence and magnitude of environmental hazards and associated adverse health effects over time. The EPHT program has formally expanded its scope to include finished drinking water quality. Our objective was to describe the features, strengths, and limitations of using finished drinking water quality data from community water systems (CWSs) for EPHT applications, focusing on atrazine and nitrogen compounds in 8 Midwestern states. Water quality data were acquired after meeting with state partners and reviewed and merged for analysis. Data and the coding of variables, particularly with respect to censored results (nondetects), were not standardized between states. Monitoring frequency varied between CWSs and between atrazine and nitrates, but this was in line with regulatory requirements. Cumulative distributions of all contaminants were not the same in all states (Peto-Prentice test P < .001). Atrazine results were highly censored in all states (76.0%-99.3%); higher concentrations were associated with increased measurement frequency and surface water as the CWS source water type. Nitrate results showed substantial state-to-state variability in censoring (20.5%-100%) and in associations between concentrations and the CWS source water type. Statistical analyses of these data are challenging due to high rates of censoring and uncertainty about the appropriateness of parametric assumptions for time-series data. Although monitoring frequency was consistent with regulations, the magnitude of time gaps coupled with uncertainty about CWS service areas may limit linkage with health outcome data.

  3. Variable selection in a flexible parametric mixture cure model with interval-censored data.

    PubMed

    Scolas, Sylvie; El Ghouch, Anouar; Legrand, Catherine; Oulhaj, Abderrahim

    2016-03-30

    In standard survival analysis, it is generally assumed that every individual will experience someday the event of interest. However, this is not always the case, as some individuals may not be susceptible to this event. Also, in medical studies, it is frequent that patients come to scheduled interviews and that the time to the event is only known to occur between two visits. That is, the data are interval-censored with a cure fraction. Variable selection in such a setting is of outstanding interest. Covariates impacting the survival are not necessarily the same as those impacting the probability to experience the event. The objective of this paper is to develop a parametric but flexible statistical model to analyze data that are interval-censored and include a fraction of cured individuals when the number of potential covariates may be large. We use the parametric mixture cure model with an accelerated failure time regression model for the survival, along with the extended generalized gamma for the error term. To overcome the issue of non-stable and non-continuous variable selection procedures, we extend the adaptive LASSO to our model. By means of simulation studies, we show good performance of our method and discuss the behavior of estimates with varying cure and censoring proportion. Lastly, our proposed method is illustrated with a real dataset studying the time until conversion to mild cognitive impairment, a possible precursor of Alzheimer's disease. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
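    Schematically, the two model components and the penalty take the following generic forms (our notation): p(z) is the probability of being susceptible (uncured), S_u the survival of the susceptibles (in the paper, an AFT model with extended generalized gamma errors), and the adaptive LASSO downweights the penalty on coefficients with large initial estimates.

```latex
S_{\mathrm{pop}}(t \mid x, z) = 1 - p(z) + p(z)\, S_u(t \mid x),
\qquad
\text{penalty} = \lambda \sum_{j}
  \frac{|\beta_j|}{\bigl|\hat\beta_j^{\,\text{init}}\bigr|^{\gamma}}
```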

  4. Accounting for dropout in xenografted tumour efficacy studies: integrated endpoint analysis, reduced bias and better use of animals.

    PubMed

    Martin, Emma C; Aarons, Leon; Yates, James W T

    2016-07-01

    Xenograft studies are commonly used to assess the efficacy of new compounds and characterise their dose-response relationship. Analysis often involves comparing the final tumour sizes across dose groups. This can cause bias, as often in xenograft studies a tumour burden limit (TBL) is imposed for ethical reasons, leading to the animals with the largest tumours being excluded from the final analysis. This means the average tumour size, particularly in the control group, is underestimated, leading to an underestimate of the treatment effect. Four methods to account for dropout due to the TBL are proposed, which use all the available data instead of only final observations: modelling, pattern mixture models, treating dropouts as censored using the M3 method and joint modelling of tumour growth and dropout. The methods were applied to both a simulated data set and a real example. All four proposed methods led to an improvement in the estimate of treatment effect in the simulated data. The joint modelling method performed most strongly, with the censoring method also providing a good estimate of the treatment effect, but with higher uncertainty. In the real data example, the dose-response estimated using the censoring and joint modelling methods was higher than the very flat curve estimated from average final measurements. Accounting for dropout using the proposed censoring or joint modelling methods allows the treatment effect to be recovered in studies where it may have been obscured due to dropout caused by the TBL.
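    The censoring contribution in the M3-style approach can be sketched as follows, assuming additive normal residual error around the model prediction f(t_i). Beal's M3 method was originally stated for observations below a quantification limit; here the same construction applies to observations right-censored above the tumour burden limit.

```latex
% An animal removed because its tumour would exceed the TBL contributes
% the probability of exceedance rather than a density term.
\Pr(y_i > \mathrm{TBL}) = 1 - \Phi\!\left( \frac{\mathrm{TBL} - f(t_i)}{\sigma} \right)
```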

  5. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models.

    PubMed

    Gelfand, Lois A; MacKinnon, David P; DeRubeis, Robert J; Baraldi, Amanda N

    2016-01-01

    Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome: underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.
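    For a single mediator, the setup being compared can be sketched as follows (schematic, our notation; the log-linear outcome equation is the AFT regression, and the indirect effect is the usual product of coefficients):

```latex
M = i_M + aX + e_M, \qquad
\log T = i_T + c'X + bM + e_T, \qquad
\text{indirect effect} = a \cdot b
```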

  6. The definition of biochemical failure in patients treated with definitive radiotherapy.

    PubMed

    Kattan, M W; Fearn, P A; Leibel, S; Potters, L

    2000-12-01

    The American Society for Therapeutic Radiology and Oncology (ASTRO) published a definition for biochemical failure following treatment of prostate cancer. Others have noted difficulties with interpreting this definition and recommended modifications to accommodate special recurrence patterns. We have compared various modifications to the original ASTRO definition on our series of 1213 patients treated with transperineal permanent prostate brachytherapy. The ASTRO modifications we considered adjusted for (1) early censoring of nonrecurrent patients with rising prostate-specific antigen levels (PSA), (2) cumulative rather than consecutive rises (without a decrease) as evidence of recurrence, (3) both of the above, and (4) waiting 2 years before data analysis. The Kaplan-Meier method was used to compute the effects on recurrence rate for patients treated with and without neoadjuvant hormones. With the original ASTRO definition, freedom from recurrence in our series of men who did not receive neoadjuvant hormones was 83% at 4 years. All of the modifications considered had statistically insignificant effects on freedom from recurrence rates, varying from 80% to 83% at 4 years. Patients treated with neoadjuvant hormones also showed very little sensitivity to the recurrence definition employed. Early censoring of equivocal patients and counting cumulative rather than consecutive rises in PSA (without a decrease) had little empiric effect on the ASTRO recurrence rates. However, we favor the addition of both these modifications to the ASTRO definition on conceptual grounds for evaluating patients following any modality (radiation or surgery), whereby a trend over multiple PSA values is used to judge failure.

  7. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with that of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Exotica and the status of the strong cosmic censor conjecture in four dimensions

    NASA Astrophysics Data System (ADS)

    Etesi, Gábor

    2017-12-01

    An immense class of physical counterexamples to the four dimensional strong cosmic censor conjecture—in its usual broad formulation—is exhibited. More precisely, out of any closed and simply connected 4-manifold an open Ricci-flat Lorentzian 4-manifold is constructed which is not globally hyperbolic, and no perturbation of which, in any sense, can be globally hyperbolic. This very stable non-global-hyperbolicity is the consequence of our open spaces having a ‘creased end’—i.e. an end diffeomorphic to an exotic R^4.

  9. Multiple imputation of missing covariates for the Cox proportional hazards cure model

    PubMed Central

    Beesley, Lauren J; Bartlett, Jonathan W; Wolf, Gregory T; Taylor, Jeremy M G

    2016-01-01

    We explore several approaches for imputing partially observed covariates when the outcome of interest is a censored event time and when there is an underlying subset of the population that will never experience the event of interest. We call these subjects “cured,” and we consider the case where the data are modeled using a Cox proportional hazards (CPH) mixture cure model. We study covariate imputation approaches using fully conditional specification (FCS). We derive the exact conditional distribution and suggest a sampling scheme for imputing partially observed covariates in the CPH cure model setting. We also propose several approximations to the exact distribution that are simpler and more convenient to use for imputation. A simulation study demonstrates that the proposed imputation approaches outperform existing imputation approaches for survival data without a cure fraction in terms of bias in estimating CPH cure model parameters. We apply our multiple imputation techniques to a study of patients with head and neck cancer. PMID:27439726

  10. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models

    PubMed Central

    Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.

    2016-01-01

    Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906

  11. Analysis of survival in breast cancer patients by using different parametric models

    NASA Astrophysics Data System (ADS)

    Enera Amran, Syahila; Asrul Afendi Abdullah, M.; Kek, Sie Long; Afiqah Muhamad Jamil, Siti

    2017-09-01

    In biomedical applications and clinical trials, right censoring often arises when studying time-to-event data: some individuals are still alive at the end of the study or are lost to follow-up at a certain time. Handling censored data properly is an important issue in order to prevent biased information in the analysis. Therefore, this study was carried out to analyze right-censored data with three different parametric models: the exponential, Weibull, and log-logistic models. Data on breast cancer patients from Hospital Sultan Ismail, Johor Bahru, from 30 December 2008 until 15 February 2017 were used in this study to illustrate right censoring. The variables included in this study are the survival time t, the age of each patient X1, and the treatment given to the patient X2. To determine the best parametric model for analyzing the survival of breast cancer patients, the performance of each model was compared using the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the log-likelihood value in the statistical software R. When analyzing the breast cancer data, all three distributions showed consistency with the data, with the line graph of the cumulative hazard function resembling a straight line through the origin. As a result, the log-logistic model was the best-fitting parametric model compared with the exponential and Weibull models, since it has the smallest AIC and BIC values and the largest log-likelihood.
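    For reference, the selection criteria compared in the study are the standard ones (k = number of model parameters, n = sample size, L = maximized likelihood); smaller AIC and BIC, and larger log-likelihood, indicate a better fit.

```latex
\mathrm{AIC} = 2k - 2\ln L, \qquad \mathrm{BIC} = k \ln n - 2\ln L
```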

  12. Improvement of Parameter Estimations in Tumor Growth Inhibition Models on Xenografted Animals: Handling Sacrifice Censoring and Error Caused by Experimental Measurement on Larger Tumor Sizes.

    PubMed

    Pierrillas, Philippe B; Tod, Michel; Amiel, Magali; Chenel, Marylore; Henin, Emilie

    2016-09-01

    The purpose of this study was to explore the impact of censoring due to animal sacrifice on parameter estimates and tumor volume calculated from two diameters in larger tumors during tumor growth experiments in preclinical studies. The type of measurement error that can be expected was also investigated. Different scenarios were challenged using the stochastic simulation and estimation process. One thousand datasets were simulated under the design of a typical tumor growth study in xenografted mice, and then, eight approaches were used for parameter estimation with the simulated datasets. The distribution of estimates and simulation-based diagnostics were computed for comparison. The different approaches were robust regarding the choice of residual error and gave equivalent results. However, by not considering missing data induced by sacrificing the animal, parameter estimates were biased and led to false inferences in terms of compound potency; the threshold concentration for tumor eradication when ignoring censoring was 581 ng/ml, but the true value was 240 ng/ml.

  13. Quantification of variability and uncertainty for air toxic emission inventories with censored emission factor data.

    PubMed

    Frey, H Christopher; Zhao, Yuchao

    2004-11-15

    Probabilistic emission inventories were developed for urban air toxic emissions of benzene, formaldehyde, chromium, and arsenic for the example of Houston. Variability and uncertainty in emission factors were quantified for 71-97% of total emissions, depending upon the pollutant and data availability. Parametric distributions for interunit variability were fit using maximum likelihood estimation (MLE), and uncertainty in mean emission factors was estimated using parametric bootstrap simulation. For data sets containing one or more nondetected values, empirical bootstrap simulation was used to randomly sample detection limits for nondetected values and observations for sample values, and parametric distributions for variability were fit using MLE estimators for censored data. The goodness-of-fit for censored data was evaluated by comparison of cumulative distributions of bootstrap confidence intervals and empirical data. The emission inventory 95% uncertainty ranges are as small as -25% to +42% for chromium to as large as -75% to +224% for arsenic with correlated surrogates. Uncertainty was dominated by only a few source categories. Recommendations are made for future improvements to the analysis.

  14. Modeling absolute differences in life expectancy with a censored skew-normal regression approach

    PubMed Central

    Clough-Gorr, Kerri; Zwahlen, Marcel

    2015-01-01

    Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate in modeling life expectancy, because in many situations time to death has a negative skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use the skew-normal regression so that censored and left-truncated observations are accounted for. With this we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest. PMID:26339544

  15. “Smooth” Semiparametric Regression Analysis for Arbitrarily Censored Time-to-Event Data

    PubMed Central

    Zhang, Min; Davidian, Marie

    2008-01-01

    Summary A general framework for regression analysis of time-to-event data subject to arbitrary patterns of censoring is proposed. The approach is relevant when the analyst is willing to assume that distributions governing model components that are ordinarily left unspecified in popular semiparametric regression models, such as the baseline hazard function in the proportional hazards model, have densities satisfying mild “smoothness” conditions. Densities are approximated by a truncated series expansion that, for fixed degree of truncation, results in a “parametric” representation, which makes likelihood-based inference coupled with adaptive choice of the degree of truncation, and hence flexibility of the model, computationally and conceptually straightforward with data subject to any pattern of censoring. The formulation allows popular models, such as the proportional hazards, proportional odds, and accelerated failure time models, to be placed in a common framework; provides a principled basis for choosing among them; and renders useful extensions of the models straightforward. The utility and performance of the methods are demonstrated via simulations and by application to data from time-to-event studies. PMID:17970813

  16. A Comparison of the β-Substitution Method and a Bayesian Method for Analyzing Left-Censored Data.

    PubMed

    Huynh, Tran; Quick, Harrison; Ramachandran, Gurumurthy; Banerjee, Sudipto; Stenzel, Mark; Sandler, Dale P; Engel, Lawrence S; Kwok, Richard K; Blair, Aaron; Stewart, Patricia A

    2016-01-01

    Classical statistical methods for analyzing exposure data with values below the detection limits are well described in the occupational hygiene literature, but an evaluation of a Bayesian approach for handling such data is currently lacking. Here, we first describe a Bayesian framework for analyzing censored data. We then present the results of a simulation study conducted to compare the β-substitution method with a Bayesian method for exposure datasets drawn from lognormal distributions and mixed lognormal distributions with varying sample sizes, geometric standard deviations (GSDs), and censoring for single and multiple limits of detection. For each set of factors, estimates for the arithmetic mean (AM), geometric mean, GSD, and the 95th percentile (X0.95) of the exposure distribution were obtained. We evaluated the performance of each method using relative bias, the root mean squared error (rMSE), and coverage (the proportion of the computed 95% uncertainty intervals containing the true value). The Bayesian method using non-informative priors and the β-substitution method were generally comparable in bias and rMSE when estimating the AM and GM. For the GSD and the 95th percentile, the Bayesian method with non-informative priors was more biased and had a higher rMSE than the β-substitution method, but use of more informative priors generally improved the Bayesian method's performance, making both the bias and the rMSE more comparable to the β-substitution method. An advantage of the Bayesian method is that it provided estimates of uncertainty for these parameters of interest and good coverage, whereas the β-substitution method only provided estimates of uncertainty for the AM, and coverage was not as consistent. Selection of one or the other method depends on the needs of the practitioner, the availability of prior information, and the distribution characteristics of the measurement data. We suggest the use of Bayesian methods if the practitioner has the computational resources and prior information, as the method would generally provide accurate estimates and also provides the distributions of all of the parameters, which could be useful for making decisions in some applications. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  17. Personalized Risk Prediction in Clinical Oncology Research: Applications and Practical Issues Using Survival Trees and Random Forests.

    PubMed

    Hu, Chen; Steingrimsson, Jon Arni

    2018-01-01

    A crucial component of making individualized treatment decisions is to accurately predict each patient's disease risk. In clinical oncology, disease risks are often measured through time-to-event data, such as overall survival and progression/recurrence-free survival, and are often subject to censoring. Risk prediction models based on recursive partitioning methods are becoming increasingly popular largely due to their ability to handle nonlinear relationships, higher-order interactions, and/or high-dimensional covariates. The most popular recursive partitioning methods are versions of the Classification and Regression Tree (CART) algorithm, which builds a simple interpretable tree structured model. With the aim of increasing prediction accuracy, the random forest algorithm averages multiple CART trees, creating a flexible risk prediction model. Risk prediction models used in clinical oncology commonly use both traditional demographic and tumor pathological factors as well as high-dimensional genetic markers and treatment parameters from multimodality treatments. In this article, we describe the most commonly used extensions of the CART and random forest algorithms to right-censored outcomes. We focus on how they differ from the methods for noncensored outcomes, and how the different splitting rules and methods for cost-complexity pruning impact these algorithms. We demonstrate these algorithms by analyzing a randomized Phase III clinical trial of breast cancer. We also conduct Monte Carlo simulations to compare the prediction accuracy of survival forests with more commonly used regression models under various scenarios. These simulation studies aim to evaluate how sensitive the prediction accuracy is to the underlying model specifications, the choice of tuning parameters, and the degrees of missing covariates.

  18. Vemurafenib in patients with BRAFV600 mutation-positive metastatic melanoma: final overall survival results of the randomized BRIM-3 study.

    PubMed

    Chapman, P B; Robert, C; Larkin, J; Haanen, J B; Ribas, A; Hogg, D; Hamid, O; Ascierto, P A; Testori, A; Lorigan, P C; Dummer, R; Sosman, J A; Flaherty, K T; Chang, I; Coleman, S; Caro, I; Hauschild, A; McArthur, G A

    2017-10-01

    The BRIM-3 trial showed improved progression-free survival (PFS) and overall survival (OS) for vemurafenib compared with dacarbazine in treatment-naive patients with BRAFV600 mutation-positive metastatic melanoma. We present final OS data from BRIM-3. Patients were randomly assigned in a 1 : 1 ratio to receive vemurafenib (960 mg twice daily) or dacarbazine (1000 mg/m2 every 3 weeks). OS and PFS were co-primary end points. OS was assessed in the intention-to-treat population, with and without censoring of data for dacarbazine patients who crossed over to vemurafenib. Between 4 January 2010 and 16 December 2010, a total of 675 patients were randomized to vemurafenib (n = 337) or dacarbazine (n = 338, of whom 84 crossed over to vemurafenib). At the time of database lock (14 August 2015), median OS, censored at crossover, was significantly longer for vemurafenib than for dacarbazine [13.6 months (95% confidence interval (CI) 12.0-15.4) versus 9.7 months (95% CI 7.9-12.8); hazard ratio (HR) 0.81 (95% CI 0.67-0.98); P = 0.03], as was median OS without censoring at crossover [13.6 months (95% CI 12.0-15.4) versus 10.3 months (95% CI 9.1-12.8); HR 0.81 (95% CI 0.68-0.96); P = 0.01]. Kaplan-Meier estimates of OS rates for vemurafenib versus dacarbazine were 56% versus 46%, 30% versus 24%, 21% versus 19% and 17% versus 16% at 1, 2, 3 and 4 years, respectively. Overall, 173 of the 338 patients (51%) in the dacarbazine arm and 175 of the 337 (52%) of those in the vemurafenib arm received subsequent anticancer therapies, most commonly ipilimumab. Safety data were consistent with the primary analysis. Vemurafenib continues to be associated with improved median OS in the BRIM-3 trial after extended follow-up. OS curves converged after ≈3 years, likely as a result of crossover from dacarbazine to vemurafenib and receipt of subsequent anticancer therapies. NCT01006980. © The Author 2017. Published by Oxford University Press on behalf of the European Society for Medical Oncology.

  19. Statistical inference methods for two crossing survival curves: a comparison of methods.

    PubMed

    Li, Huimin; Han, Dong; Hou, Yawen; Chen, Huilin; Chen, Zheng

    2015-01-01

    A common problem that is encountered in medical applications is the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which was an obvious violation of the assumption of proportional hazard rates, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the types of survival differences and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of tests in different situations and for various censoring rates and to recommend an appropriate test that will not fail for a wide range of applications. Simulation studies demonstrated that adaptive Neyman's smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle or late times. Even for proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, Renyi and Cramér-von Mises tests are relatively conservative, whereas the statistics of the Lin-Xu test exhibit apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman's smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates. Therefore, they are applicable to a wider spectrum of alternatives compared with other tests.

  20. Statistical Inference Methods for Two Crossing Survival Curves: A Comparison of Methods

    PubMed Central

    Li, Huimin; Han, Dong; Hou, Yawen; Chen, Huilin; Chen, Zheng

    2015-01-01

    A common problem that is encountered in medical applications is the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which was an obvious violation of the assumption of proportional hazard rates, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the types of survival differences and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of tests in different situations and for various censoring rates and to recommend an appropriate test that will not fail for a wide range of applications. Simulation studies demonstrated that adaptive Neyman’s smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle or late times. Even for proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, Renyi and Cramér-von Mises tests are relatively conservative, whereas the statistics of the Lin-Xu test exhibit apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman’s smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates. Therefore, they are applicable to a wider spectrum of alternatives compared with other tests. PMID:25615624

  1. Estimating the mean and standard deviation of environmental data with below detection limit observations: Considering highly skewed data and model misspecification.

    PubMed

    Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin

    2015-11-01

    In environmental studies, concentration measurements frequently fall below detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistic (rROS), and gamma regression on order statistic (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations that aim at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, results show that for low skewed data, the performance of different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under lognormal and Weibull distributions is questionable; particularly, when the sample size is small or censoring percentage is high. In such conditions, MLE under gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Related to model misspecification, MLE based on lognormal and Weibull distributions provides poor estimates when the true distribution of data is misspecified. However, the methods of rROS, GROS, and MLE under gamma distribution are generally robust to model misspecifications regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using MLE based on gamma distribution, rROS and GROS. Copyright © 2015 Elsevier Ltd. All rights reserved.
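
    A minimal sketch of the censored-likelihood idea behind the MLE methods compared above, assuming a lognormal model: detected values contribute density terms and below-LOD values contribute CDF terms. The concentrations, censored count, and LOD are invented for illustration.

    ```python
    # Maximum likelihood for left-censored (below-LOD) data under a
    # lognormal model; values are illustrative assumptions.
    import numpy as np
    from scipy import stats, optimize

    obs = np.array([0.8, 1.4, 2.9, 5.1, 7.6, 12.3])  # detected concentrations
    n_censored, lod = 4, 0.5                          # 4 readings below LOD

    def neg_loglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)                     # keep sigma positive
        # lognormal log-density: normal log-density of log(x), minus log(x)
        ll_obs = stats.norm.logpdf(np.log(obs), mu, sigma) - np.log(obs)
        # each censored value contributes log P(X < LOD)
        ll_cens = n_censored * stats.norm.logcdf(np.log(lod), mu, sigma)
        return -(ll_obs.sum() + ll_cens)

    fit = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
    mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
    print(np.exp(mu_hat + sigma_hat**2 / 2))   # estimated mean concentration
    ```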

  2. Challenges in risk estimation using routinely collected clinical data: The example of estimating cervical cancer risks from electronic health-records.

    PubMed

    Landy, Rebecca; Cheung, Li C; Schiffman, Mark; Gage, Julia C; Hyun, Noorie; Wentzensen, Nicolas; Kinney, Walter K; Castle, Philip E; Fetterman, Barbara; Poitras, Nancy E; Lorey, Thomas; Sasieni, Peter D; Katki, Hormuzd A

    2018-06-01

    Electronic health-records (EHR) are increasingly used by epidemiologists studying disease following surveillance testing to provide evidence for screening intervals and referral guidelines. Although cost-effective, undiagnosed prevalent disease and interval censoring (in which asymptomatic disease is only observed at the time of testing) raise substantial analytic issues when estimating risk that cannot be addressed using Kaplan-Meier methods. Based on our experience analysing EHR from cervical cancer screening, we previously proposed the logistic-Weibull model to address these issues. Here we demonstrate how the choice of statistical method can impact risk estimates. We use observed data on 41,067 women in the cervical cancer screening program at Kaiser Permanente Northern California, 2003-2013, as well as simulations to evaluate the ability of different methods (Kaplan-Meier, Turnbull, Weibull and logistic-Weibull) to accurately estimate risk within a screening program. Cumulative risk estimates from the statistical methods varied considerably, with the largest differences occurring for prevalent disease risk when baseline disease ascertainment was random but incomplete. Kaplan-Meier underestimated risk at earlier times and overestimated risk at later times in the presence of interval censoring or undiagnosed prevalent disease. Turnbull performed well, though was inefficient and not smooth. The logistic-Weibull model performed well, except when event times didn't follow a Weibull distribution. We have demonstrated that methods for right-censored data, such as Kaplan-Meier, result in biased estimates of disease risks when applied to interval-censored data, such as screening programs using EHR data. The logistic-Weibull model is attractive, but the model fit must be checked against Turnbull non-parametric risk estimates. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
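
    The logistic-Weibull model itself is not a standard library routine, but a plain Weibull fit to interval-censored screening data can be sketched with recent versions of the lifelines package, whose fit_interval_censoring method accepts last-negative/first-positive bounds; the intervals below are illustrative assumptions.

    ```python
    # Sketch of a parametric Weibull fit to interval-censored screening data,
    # assuming the `lifelines` package; visit intervals are illustrative.
    import numpy as np
    import pandas as pd
    from lifelines import WeibullFitter

    # Disease onset known only to lie between the last negative and first
    # positive screen; right-censored rows have upper bound = infinity.
    df = pd.DataFrame({
        "lower": [0.5, 1.0, 2.0, 3.0, 0.5, 2.0],
        "upper": [1.0, 3.0, 4.0, np.inf, 2.0, np.inf],
    })
    wf = WeibullFitter()
    wf.fit_interval_censoring(df["lower"], df["upper"])
    print(wf.lambda_, wf.rho_)                       # fitted Weibull parameters
    print(wf.survival_function_at_times([1, 2, 3]))  # model-based risk curve
    ```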

  3. Modeling stream fish distributions using interval-censored detection times.

    PubMed

    Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro

    2016-08-01

    Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it uses simple data that can be readily collected by field ecologists.

  4. Accounting for Selection Bias in Studies of Acute Cardiac Events.

    PubMed

    Banack, Hailey R; Harper, Sam; Kaufman, Jay S

    2018-06-01

    In cardiovascular research, pre-hospital mortality represents an important potential source of selection bias. Inverse probability of censoring weights are a method to account for this source of bias. The objective of this article is to examine and correct for the influence of selection bias due to pre-hospital mortality on the relationship between cardiovascular risk factors and all-cause mortality after an acute cardiac event. The relationship between the number of cardiovascular disease (CVD) risk factors (0-5; smoking status, diabetes, hypertension, dyslipidemia, and obesity) and all-cause mortality was examined using data from the Atherosclerosis Risk in Communities (ARIC) study. To illustrate the magnitude of selection bias, estimates from an unweighted generalized linear model with a log link and binomial distribution were compared with estimates from an inverse probability of censoring weighted model. In unweighted multivariable analyses the estimated risk ratio for mortality ranged from 1.09 (95% confidence interval [CI], 0.98-1.21) for 1 CVD risk factor to 1.95 (95% CI, 1.41-2.68) for 5 CVD risk factors. In the inverse probability of censoring weighted analyses, the risk ratios ranged from 1.14 (95% CI, 0.94-1.39) to 4.23 (95% CI, 2.69-6.66). Estimates from the inverse probability of censoring weighted model were substantially greater than unweighted, adjusted estimates across all risk factor categories. This shows the magnitude of selection bias due to pre-hospital mortality and its effect on estimates of the effect of CVD risk factors on mortality. Moreover, the results highlight the utility of using this method to address a common form of bias in cardiovascular research. Copyright © 2018 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
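
    A hedged sketch of the inverse probability of censoring weights used above: model the probability of surviving to hospital from covariates, then weight each hospitalized subject by the inverse of that probability. Column names, coefficients, and the logistic form below are assumptions for illustration, not the ARIC analysis.

    ```python
    # Sketch of inverse probability of censoring weights (IPCW) for
    # pre-hospital mortality; the data-generating step is purely illustrative.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({"age": rng.normal(60, 10, n),
                       "n_risk_factors": rng.integers(0, 6, n)})
    # reached_hospital = True if the subject survived to hospital admission
    logit = 3.0 - 0.02 * df["age"] - 0.3 * df["n_risk_factors"]
    df["reached_hospital"] = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

    # Model the probability of NOT being censored (reaching hospital alive).
    m = LogisticRegression().fit(df[["age", "n_risk_factors"]],
                                 df["reached_hospital"])
    p_uncensored = m.predict_proba(df[["age", "n_risk_factors"]])[:, 1]

    # Each hospitalized subject is weighted by 1 / P(reached hospital | X);
    # these weights then enter the weighted outcome (e.g., log-binomial) model.
    hospitalized = df[df["reached_hospital"]].copy()
    hospitalized["ipcw"] = 1 / p_uncensored[df["reached_hospital"].to_numpy()]
    ```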

  5. Photon counting, censor corrections, and lifetime imaging for improved detection in two-photon microscopy

    PubMed Central

    Driscoll, Jonathan D.; Shih, Andy Y.; Iyengar, Satish; Field, Jeffrey J.; White, G. Allen; Squier, Jeffrey A.; Cauwenberghs, Gert

    2011-01-01

    We present a high-speed photon counter for use with two-photon microscopy. Counting pulses of photocurrent, as opposed to analog integration, maximizes the signal-to-noise ratio so long as the uncertainty in the count does not exceed the gain-noise of the photodetector. Our system extends this improvement through an estimate of the count that corrects for the censored period after detection of an emission event. The same system can be rapidly reconfigured in software for fluorescence lifetime imaging, which we illustrate by distinguishing between two spectrally similar fluorophores in an in vivo model of microstroke. PMID:21471395
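
    The censored-period correction described above can be sketched with the classic non-paralyzable dead-time formula; the count, duration, and dead time below are illustrative assumptions, not the authors' hardware parameters.

    ```python
    # Sketch of a censored-period (dead-time) correction for photon counting,
    # assuming a non-paralyzable detector model; numbers are illustrative.
    def corrected_rate(observed_counts, duration_s, tau_s):
        """Estimate the true event rate given counts lost to the censored
        (dead) period of length tau_s after each detected emission event."""
        r_obs = observed_counts / duration_s
        return r_obs / (1.0 - r_obs * tau_s)  # classic dead-time correction

    # e.g. 2e6 counts in 1 s with a 100 ns censored period -> ~25% correction
    print(corrected_rate(2e6, 1.0, 100e-9))
    ```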

  6. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    SUMMARY Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, epidemiological, genetic and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimations and inference methods for traditional survival data are not directly applicable for length-biased right-censored data. We propose new expectation-maximization algorithms for estimations based on full likelihoods involving infinite dimensional parameters under three settings for length-biased data: estimating nonparametric distribution function, estimating nonparametric hazard function under an increasing failure rate constraint, and jointly estimating baseline hazards function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical processes theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840

  7. Thermal Cycling Life Prediction of Sn-3.0Ag-0.5Cu Solder Joint Using Type-I Censored Data

    PubMed Central

    Mi, Jinhua; Yang, Yuan-Jian; Huang, Hong-Zhong

    2014-01-01

    Because solder joint interconnections are the weaknesses of microelectronic packaging, their reliability has great influence on the reliability of the entire packaging structure. Based on an accelerated life test, the reliability assessment and life prediction of lead-free solder joints using the Weibull distribution are investigated. The type-I interval censored lifetime data were collected from a thermal cycling test, which was implemented on microelectronic packaging with lead-free ball grid array (BGA) and fine-pitch ball grid array (FBGA) interconnection structures. The number of cycles to failure of lead-free solder joints is predicted by using a modified Engelmaier fatigue life model and a type-I censored data processing method. Then, the Pan model is employed to calculate the acceleration factor of this test. A comparison of life predictions between the proposed method and the ones calculated directly by Matlab and Minitab is conducted to demonstrate the practicability and effectiveness of the proposed method. Finally, failure analysis and microstructure evolution of lead-free solders are carried out to provide useful guidance for the regular maintenance, replacement of substructure, and subsequent processing of electronic products. PMID:25121138
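
    A simplified sketch of fitting a Weibull model to type-I censored cycles-to-failure data, using SciPy's CensoredData container (available from SciPy 1.11). The paper's data are interval censored at inspection times and use the modified Engelmaier and Pan models, neither of which is reproduced here; the cycle counts and censoring time below are invented.

    ```python
    # Weibull fit to type-I (time-truncated) thermal cycling data; assumes
    # SciPy >= 1.11 for CensoredData support in .fit().
    import numpy as np
    from scipy import stats

    failures = np.array([1450, 1710, 1890, 2030, 2260, 2480])  # cycles to failure
    n_survivors, test_end = 4, 2500   # units still alive when the test stopped

    data = stats.CensoredData(uncensored=failures,
                              right=np.full(n_survivors, test_end))
    shape, loc, scale = stats.weibull_min.fit(data, floc=0)
    print(f"Weibull shape m = {shape:.2f}, characteristic life = {scale:.0f}")
    ```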

  8. The fracture load and failure types of veneered anterior zirconia crowns: an analysis of normal and Weibull distribution of complete and censored data.

    PubMed

    Stawarczyk, Bogna; Ozcan, Mutlu; Hämmerle, Christoph H F; Roos, Malgorzata

    2012-05-01

    The aim of this study was to compare the fracture load of veneered anterior zirconia crowns using normal and Weibull distribution of complete and censored data. Standardized zirconia frameworks for maxillary canines were milled using a CAD/CAM system and randomly divided into 3 groups (N=90, n=30 per group). They were veneered with three veneering ceramics, namely GC Initial ZR, Vita VM9, IPS e.max Ceram using layering technique. The crowns were cemented with glass ionomer cement on metal abutments. The specimens were then loaded to fracture (1 mm/min) in a Universal Testing Machine. The data were analyzed using classical method (normal data distribution (μ, σ); Levene test and one-way ANOVA) and according to the Weibull statistics (s, m). In addition, fracture load results were analyzed depending on complete and censored failure types (only chipping vs. total fracture together with chipping). When computed with complete data, significantly higher mean fracture loads (N) were observed for GC Initial ZR (μ=978, σ=157; s=1043, m=7.2) and VITA VM9 (μ=1074, σ=179; s=1139; m=7.8) than that of IPS e.max Ceram (μ=798, σ=174; s=859, m=5.8) (p<0.05) by classical and Weibull statistics, respectively. When the data were censored for only total fracture, IPS e.max Ceram presented the lowest fracture load for chipping with both classical distribution (μ=790, σ=160) and Weibull statistics (s=836, m=6.5). When total fracture with chipping (classical distribution) was considered as failure, IPS e.max Ceram did not show significant fracture load for total fracture (μ=1054, σ=110) compared to other groups (GC Initial ZR: μ=1039, σ=152, VITA VM9: μ=1170, σ=166). According to Weibull distributed data, VITA VM9 showed significantly higher fracture load (s=1228, m=9.4) than those of other groups. Both classical distribution and Weibull statistics for complete data yielded similar outcomes. Censored data analysis of all ceramic systems based on failure types is essential and brings additional information regarding the susceptibility to chipping or total fracture. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  9. Empirical likelihood based detection procedure for change point in mean residual life functions under random censorship.

    PubMed

    Chen, Ying-Ju; Ning, Wei; Gupta, Arjun K

    2016-05-01

    The mean residual life (MRL) function is one of the basic parameters of interest in survival analysis that describes the expected remaining time of an individual after a certain age. The study of changes in the MRL function is practical and interesting because it may help us to identify some factors such as age and gender that may influence the remaining lifetimes of patients after receiving a certain surgery. In this paper, we propose a detection procedure based on the empirical likelihood for the changes in MRL functions with right censored data. Two real examples, the Veterans' Administration lung cancer study and the Stanford heart transplant data, are given to illustrate the detection procedure. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Multiple imputation for estimating the risk of developing dementia and its impact on survival.

    PubMed

    Yu, Binbing; Saczynski, Jane S; Launer, Lenore

    2010-10-01

    Dementia, Alzheimer's disease in particular, is one of the major causes of disability and decreased quality of life among the elderly and a leading obstacle to successful aging. Given the profound impact on public health, much research has focused on the age-specific risk of developing dementia and the impact on survival. Early work has discussed various methods of estimating age-specific incidence of dementia, among which the illness-death model is popular for modeling disease progression. In this article we use multiple imputation to fit multi-state models for survival data with interval censoring and left truncation. This approach allows semi-Markov models in which survival after dementia depends on onset age. Such models can be used to estimate the cumulative risk of developing dementia in the presence of the competing risk of dementia-free death. Simulations are carried out to examine the performance of the proposed method. Data from the Honolulu Asia Aging Study are analyzed to estimate the age-specific and cumulative risks of dementia and to examine the effect of major risk factors on dementia onset and death.

  11. Application of AFINCH as a tool for evaluating the effects of streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the southeast Lake Michigan hydrologic subregion

    USGS Publications Warehouse

    Koltun, G.F.; Holtschlag, David J.

    2010-01-01

    Bootstrapping techniques employing random subsampling were used with the AFINCH (Analysis of Flows In Networks of CHannels) model to gain insights into the effects of variation in streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the 0405 (Southeast Lake Michigan) hydrologic subregion. AFINCH uses stepwise-regression techniques to estimate monthly water yields from catchments based on geospatial-climate and land-cover data in combination with available streamflow and water-use data. Calculations are performed on a hydrologic-subregion scale for each catchment and stream reach contained in a National Hydrography Dataset Plus (NHDPlus) subregion. Water yields from contributing catchments are multiplied by catchment areas and resulting flow values are accumulated to compute streamflows in stream reaches which are referred to as flow lines. AFINCH imposes constraints on water yields to ensure that observed streamflows are conserved at gaged locations. Data from the 0405 hydrologic subregion (referred to as Southeast Lake Michigan) were used for the analyses. Daily streamflow data were measured in the subregion for 1 or more years at a total of 75 streamflow-gaging stations during the analysis period which spanned water years 1971–2003. The number of streamflow gages in operation each year during the analysis period ranged from 42 to 56 and averaged 47. Six sets (one set for each censoring level), each composed of 30 random subsets of the 75 streamflow gages, were created by censoring (removing) approximately 10, 20, 30, 40, 50, and 75 percent of the streamflow gages (the actual percentage of operating streamflow gages censored for each set varied from year to year, and within the year from subset to subset, but averaged approximately the indicated percentages). Streamflow estimates for six flow lines each were aggregated by censoring level, and results were analyzed to assess (a) how the size and composition of the streamflow-gaging network affected the average apparent errors and variability of the estimated flows and (b) whether results for certain months were more variable than for others. The six flow lines were categorized into one of three types depending upon their network topology and position relative to operating streamflow-gaging stations. Statistical analysis of the model results indicates that (1) less precise (that is, more variable) estimates resulted from smaller streamflow-gaging networks as compared to larger streamflow-gaging networks, (2) precision of AFINCH flow estimates at an ungaged flow line is improved by operation of one or more streamflow gages upstream and (or) downstream in the enclosing basin, (3) no consistent seasonal trend in estimate variability was evident, and (4) flow lines from ungaged basins appeared to exhibit the smallest absolute apparent percent errors (APEs) and smallest changes in average APE as a function of increasing censoring level. The counterintuitive results described in item (4) above likely reflect both the nature of the base-streamflow estimate from which the errors were computed and insensitivity in the average model-derived estimates to changes in the streamflow-gaging-network size and composition. Another analysis demonstrated that errors for flow lines in ungaged basins have the potential to be much larger than indicated by their APEs if measured relative to their true (but unknown) flows.
“Missing gage” analyses, based on examination of censoring subset results where the streamflow gage of interest was omitted from the calibration data set, were done to better understand the true error characteristics for ungaged flow lines as a function of network size. Results examined for 2 water years indicated that the probability of computing a monthly streamflow estimate within 10 percent of the true value with AFINCH decreased from greater than 0.9 at about a 10-percent network-censoring level to less than 0.6 as the censoring level approached 75 percent. In addition, estimates for typically dry months tended to be characterized by larger percent errors than typically wetter months.

  12. Some simple guides to finding useful information in exploration geochemical data

    USGS Publications Warehouse

    Singer, D.A.; Kouda, R.

    2001-01-01

    Most regional geochemistry data reflect processes that can produce superfluous bits of noise and, perhaps, information about the mineralization process of interest. There are two end-member approaches to finding patterns in geochemical data: unsupervised learning and supervised learning. In unsupervised learning, data are processed and the geochemist is given the task of interpreting and identifying possible sources of any patterns. In supervised learning, data from known subgroups such as rock type, mineralized and nonmineralized, and types of mineralization are used to train the system which then is given unknown samples to classify into these subgroups. To locate patterns of interest, it is helpful to transform the data and to remove unwanted masking patterns. With trace elements use of a logarithmic transformation is recommended. In many situations, missing censored data can be estimated using multiple regression of other uncensored variables on the variable with censored values. In unsupervised learning, transformed values can be standardized, or normalized, to a Z-score by subtracting the subset's mean and dividing by its standard deviation. Subsets include any source of differences that might be related to processes unrelated to the target sought such as different laboratories, regional alteration, analytical procedures, or rock types. Normalization removes effects of different means and measurement scales as well as facilitates comparison of spatial patterns of elements. These adjustments remove effects of different subgroups and hopefully leave on the map the simple and uncluttered pattern(s) related to the mineralization only. Supervised learning methods, such as discriminant analysis and neural networks, offer the promise of consistent and, in certain situations, unbiased estimates of where mineralization might exist. These methods critically rely on being trained with data that encompasses all populations fairly and that can possibly fall into only the identified populations. © 2001 International Association for Mathematical Geology.
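
    A small sketch of the recommended transformation pipeline, assuming pandas: log-transform a trace element, then Z-score within subsets so that subgroup means and measurement scales drop out. Column names and values are illustrative assumptions.

    ```python
    # Log-transform then subset-wise Z-score, as described above.
    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"rock_type": ["basalt"] * 4 + ["granite"] * 4,
                       "cu_ppm": [12.0, 30.0, 8.0, 55.0, 3.0, 9.0, 2.5, 15.0]})
    df["log_cu"] = np.log10(df["cu_ppm"])
    # Z-score within each subset removes subgroup means and scales.
    df["z_cu"] = (df.groupby("rock_type")["log_cu"]
                    .transform(lambda x: (x - x.mean()) / x.std()))
    print(df)
    ```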

  13. Detecting small-study effects and funnel plot asymmetry in meta-analysis of survival data: A comparison of new and existing tests.

    PubMed

    Debray, Thomas P A; Moons, Karel G M; Riley, Richard D

    2018-03-01

    Small-study effects are a common threat in systematic reviews and may indicate publication bias. Their existence is often verified by visual inspection of the funnel plot. Formal tests to assess the presence of funnel plot asymmetry typically estimate the association between the reported effect size and their standard error, the total sample size, or the inverse of the total sample size. In this paper, we demonstrate that the application of these tests may be less appropriate in meta-analysis of survival data, where censoring influences statistical significance of the hazard ratio. We subsequently propose 2 new tests that are based on the total number of observed events and adopt a multiplicative variance component. We compare the performance of the various funnel plot asymmetry tests in an extensive simulation study where we varied the true hazard ratio (0.5 to 1), the number of published trials (N=10 to 100), the degree of censoring within trials (0% to 90%), and the mechanism leading to participant dropout (noninformative versus informative). Results demonstrate that previous well-known tests for detecting funnel plot asymmetry suffer from low power or excessive type-I error rates in meta-analysis of survival data, particularly when trials are affected by participant dropout. Because our novel test (adopting estimates of the asymptotic precision as study weights) yields reasonable power and maintains appropriate type-I error rates, we recommend its use to evaluate funnel plot asymmetry in meta-analysis of survival data. The use of funnel plot asymmetry tests should, however, be avoided when there are few trials available for any meta-analysis. © 2017 The Authors. Research Synthesis Methods Published by John Wiley & Sons, Ltd.

  14. Rejoice in the hubris: useful things biologists could do for physicists

    NASA Astrophysics Data System (ADS)

    Austin, Robert H.

    2014-10-01

    Political correctness urges us to state how wonderful it is to work with biologists and how, just as the lion will someday lie down with the lamb, so will interdisciplinary work, where biologists and physicists are mixed together in light, airy buildings designed to force socialization, give rise to wonderful new science. But it has been said that the only drive in human nature stronger than the sex drive is the drive to censor and suppress, and so I claim that it is OK for physicists and biologists to maintain a wary distance from each other, so that neither one censors or suppresses the wild ideas of the other.

  15. Statistical methods for astronomical data with upper limits. I - Univariate distributions

    NASA Technical Reports Server (NTRS)

    Feigelson, E. D.; Nelson, P. I.

    1985-01-01

    The statistical treatment of univariate censored data is discussed. A heuristic derivation of the Kaplan-Meier maximum-likelihood estimator from first principles is presented which results in an expression amenable to analytic error analysis. Methods for comparing two or more censored samples are given along with simple computational examples, stressing the fact that most astronomical problems involve upper limits while the standard mathematical methods require lower limits. The application of univariate survival analysis to six data sets in the recent astrophysical literature is described, and various aspects of the use of survival analysis in astronomy, such as the limitations of various two-sample tests and the role of parametric modelling, are discussed.
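
    A hedged sketch of the standard flipping trick that adapts Kaplan-Meier (which expects right-censoring) to astronomical upper limits, i.e. left-censored data, assuming the lifelines package; the fluxes and detection flags are illustrative.

    ```python
    # Flip left-censored data (upper limits) about a constant so that the
    # Kaplan-Meier estimator, built for right-censoring, can be applied.
    import numpy as np
    from lifelines import KaplanMeierFitter

    values = np.array([2.1, 3.4, 1.2, 4.8, 2.9])      # measured fluxes
    detected = np.array([1, 1, 0, 1, 0], dtype=bool)  # False = upper limit

    M = values.max() + 1.0     # any constant larger than all values
    flipped = M - values       # upper limits become right-censored
    kmf = KaplanMeierFitter().fit(flipped, event_observed=detected)
    # Flip back to read off the cumulative distribution of the original
    # fluxes: F(x) = S_flipped(M - x) from the fitted survival function.
    print(kmf.survival_function_.head())
    ```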

  16. Rejoice in the hubris: useful things biologists could do for physicists.

    PubMed

    Austin, Robert H

    2014-10-08

    Political correctness urges us to state how wonderful it is to work with biologists and how, just as the lion will someday lie down with the lamb, so will interdisciplinary work, where biologists and physicists are mixed together in light, airy buildings designed to force socialization, give rise to wonderful new science. But it has been said that the only drive in human nature stronger than the sex drive is the drive to censor and suppress, and so I claim that it is OK for physicists and biologists to maintain a wary distance from each other, so that neither one censors or suppresses the wild ideas of the other.

  17. Performance of Distributed CFAR Processors in Pearson Distributed Clutter

    NASA Astrophysics Data System (ADS)

    Messali, Zoubeida; Soltani, Faouzi

    2006-12-01

    This paper deals with the distributed constant false alarm rate (CFAR) radar detection of targets embedded in heavy-tailed Pearson distributed clutter. In particular, we extend the results obtained for the cell averaging (CA), order statistics (OS), and censored mean level detector (CMLD) CFAR processors operating on positive alpha-stable (PαS) random variables to more general situations, specifically to the presence of interfering targets and distributed CFAR detectors. The receiver operating characteristics of the greatest of (GO) and the smallest of (SO) CFAR processors are also determined. The performance characteristics of distributed systems are presented and compared in both homogeneous environments and in the presence of interfering targets. We demonstrate, via simulation results, that the distributed systems, when the clutter is modelled as a positive alpha-stable distribution, offer robustness against multiple target situations, especially when using the "OR" fusion rule.
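
    For concreteness, a one-dimensional cell-averaging (CA) CFAR detector is sketched below; window sizes and the threshold multiplier are illustrative assumptions, and the paper's Pearson/PαS clutter modeling and censoring variants are not reproduced.

    ```python
    # Minimal CA-CFAR sketch: compare each cell to a scaled average of
    # training cells on both sides of a guard interval.
    import numpy as np

    def ca_cfar(x, n_train=16, n_guard=2, alpha=5.0):
        """Flag cells whose power exceeds alpha times the local noise
        estimate formed from n_train training cells around the cell."""
        half = n_train // 2 + n_guard
        hits = np.zeros_like(x, dtype=bool)
        for i in range(half, len(x) - half):
            lead = x[i - half : i - n_guard]          # training cells before
            lag = x[i + n_guard + 1 : i + half + 1]   # training cells after
            noise = np.concatenate([lead, lag]).mean()
            hits[i] = x[i] > alpha * noise
        return hits

    rng = np.random.default_rng(3)
    power = rng.exponential(1.0, 300)
    power[150] += 30.0                       # inject a target
    print(np.flatnonzero(ca_cfar(power)))    # should include index 150
    ```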

  18. siMacro: A Fast and Easy Data Processing Tool for Cell-Based Genomewide siRNA Screens.

    PubMed

    Singh, Nitin Kumar; Seo, Bo Yeun; Vidyasagar, Mathukumalli; White, Michael A; Kim, Hyun Seok

    2013-03-01

    Growing numbers of studies employ cell line-based systematic short interfering RNA (siRNA) screens to study gene functions and to identify drug targets. As multiple sources of variations that are unique to siRNA screens exist, there is a growing demand for a computational tool that generates normalized values and standardized scores. However, only a few tools have been available so far with limited usability. Here, we present siMacro, a fast and easy-to-use Microsoft Office Excel-based tool with a graphic user interface, designed to process single-condition or two-condition synthetic screen datasets. siMacro normalizes position and batch effects, censors outlier samples, and calculates Z-scores and robust Z-scores, with a spreadsheet output of >120,000 samples in under 1 minute.
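
    siMacro itself is an Excel macro, but the two scores it reports can be sketched in a few lines; the plate values below are invented, and the MAD scaling constant 1.4826 is the usual factor that makes the MAD consistent with the normal standard deviation.

    ```python
    # Sketch of plate-wise Z-scores and robust Z-scores (median/MAD).
    import numpy as np

    def z_scores(x):
        return (x - x.mean()) / x.std(ddof=1)

    def robust_z_scores(x):
        med = np.median(x)
        mad = 1.4826 * np.median(np.abs(x - med))  # normal-consistent MAD
        return (x - med) / mad

    plate = np.array([0.9, 1.1, 1.0, 1.2, 0.8, 5.0])  # one outlier well
    print(z_scores(plate).round(2))
    print(robust_z_scores(plate).round(2))  # outlier stands out more cleanly
    ```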

  19. siMacro: A Fast and Easy Data Processing Tool for Cell-Based Genomewide siRNA Screens

    PubMed Central

    Singh, Nitin Kumar; Seo, Bo Yeun; Vidyasagar, Mathukumalli; White, Michael A.

    2013-01-01

    Growing numbers of studies employ cell line-based systematic short interfering RNA (siRNA) screens to study gene functions and to identify drug targets. As multiple sources of variations that are unique to siRNA screens exist, there is a growing demand for a computational tool that generates normalized values and standardized scores. However, only a few tools have been available so far with limited usability. Here, we present siMacro, a fast and easy-to-use Microsoft Office Excel-based tool with a graphic user interface, designed to process single-condition or two-condition synthetic screen datasets. siMacro normalizes position and batch effects, censors outlier samples, and calculates Z-scores and robust Z-scores, with a spreadsheet output of >120,000 samples in under 1 minute. PMID:23613684

  20. R package PRIMsrc: Bump Hunting by Patient Rule Induction Method for Survival, Regression and Classification

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    PRIMsrc is a novel implementation of a non-parametric bump hunting procedure, based on the Patient Rule Induction Method (PRIM), offering a unified treatment of outcome variables, including censored time-to-event (Survival), continuous (Regression) and discrete (Classification) responses. To fit the model, it uses a recursive peeling procedure with specific peeling criteria and stopping rules depending on the response. To validate the model, it provides an objective function based on prediction-error or other specific statistic, as well as two alternative cross-validation techniques, adapted to the task of decision-rule making and estimation in the three types of settings. PRIMsrc comes as an open source R package, including at this point: (i) a main function for fitting a Survival Bump Hunting model with various options allowing cross-validated model selection to control model size (#covariates) and model complexity (#peeling steps) and generation of cross-validated end-point estimates; (ii) parallel computing; (iii) various S3-generic and specific plotting functions for data visualization, diagnostic, prediction, summary and display of results. It is available on CRAN and GitHub. PMID:26798326

  1. Short daily-, nocturnal- and conventional-home hemodialysis have similar patient and treatment survival.

    PubMed

    Tennankore, Karthik K; Na, Yingbo; Wald, Ron; Chan, Christopher T; Perl, Jeffrey

    2018-01-01

    Home hemodialysis (HHD) has many benefits, but less is known about relative outcomes when comparing different home-based hemodialysis modalities. Here, we compare patient and treatment survival for patients receiving short daily HHD (2-3 hours/5+ sessions per week), nocturnal HHD (6-8 hours/5+ sessions per week) and conventional HHD (3-6 hours/2-4 sessions per week). A nationally representative cohort of Canadian HHD patients from 1996-2012 was studied. The primary outcome was death or treatment failure (defined as a permanent return to in-center hemodialysis or peritoneal dialysis) using an intention to treat analysis and death-censored treatment failure as a secondary outcome. The cohort consisted of 600, 508 and 202 patients receiving conventional, nocturnal, and short daily HHD, respectively. Conventional-HHD patients were more likely to use dialysis catheter access (43%) versus nocturnal or short daily HHD (32% and 31%, respectively). Although point estimates were in favor of both therapies, after multivariable adjustment for patient and center factors, there was no statistically significant reduction in the relative hazard for the death/treatment failure composite comparing nocturnal to conventional HHD (hazard ratio 0.83 [95% confidence interval 0.66-1.03]) or short daily to conventional HHD (0.84, 0.63-1.12). Among those with information on vascular access, patients receiving nocturnal HHD had a relative improvement in death-censored treatment survival (0.75, 0.57-0.98). Thus, in this national cohort of HHD patients, those receiving short daily and nocturnal HHD had similar patient/treatment survival compared with patients receiving conventional HHD. Copyright © 2017 International Society of Nephrology. Published by Elsevier Inc. All rights reserved.

  2. Hazard Regression Models of Early Mortality in Trauma Centers

    PubMed Central

    Clark, David E; Qian, Jing; Winchell, Robert J; Betensky, Rebecca A

    2013-01-01

    Background Factors affecting early hospital deaths after trauma may be different from factors affecting later hospital deaths, and the distribution of short and long prehospital times may vary among hospitals. Hazard regression (HR) models may therefore be more useful than logistic regression (LR) models for analysis of trauma mortality, especially when treatment effects at different time points are of interest. Study Design We obtained data for trauma center patients from the 2008–9 National Trauma Data Bank (NTDB). Cases were included if they had complete data for prehospital times, hospital times, survival outcome, age, vital signs, and severity scores. Cases were excluded if pulseless on admission, transferred in or out, or ISS<9. Using covariates proposed for the Trauma Quality Improvement Program and an indicator for each hospital, we compared LR models predicting survival at 8 hours after injury to HR models with survival censored at 8 hours. HR models were then modified to allow time-varying hospital effects. Results 85,327 patients in 161 hospitals met inclusion criteria. Crude hazards peaked initially, then steadily declined. When hazard ratios were assumed constant in HR models, they were similar to odds ratios in LR models associating increased mortality with increased age, firearm mechanism, increased severity, more deranged physiology, and estimated hospital-specific effects. However, when hospital effects were allowed to vary by time, HR models demonstrated that hospital outliers were not the same at different times after injury. Conclusions HR models with time-varying hazard ratios reveal inconsistencies in treatment effects, data quality, and/or timing of early death among trauma centers. HR models are generally more flexible than LR models, can be adapted for censored data, and potentially offer a better tool for analysis of factors affecting early death after injury. PMID:23036828

  3. Sample size guidelines for fitting a lognormal probability distribution to censored most probable number data with a Markov chain Monte Carlo method.

    PubMed

    Williams, Michael S; Cao, Yong; Ebel, Eric D

    2013-07-15

    Levels of pathogenic organisms in food and water have steadily declined in many parts of the world. A consequence of this reduction is that the proportion of samples that test positive for the most contaminated product-pathogen pairings has fallen to less than 0.1. While this is unequivocally beneficial to public health, datasets with very few enumerated samples present an analytical challenge because a large proportion of the observations are censored values. One application of particular interest to risk assessors is the fitting of a statistical distribution function to datasets collected at some point in the farm-to-table continuum. The fitted distribution forms an important component of an exposure assessment. A number of studies have compared different fitting methods and proposed lower limits on the proportion of samples where the organisms of interest are identified and enumerated, with the recommended lower limit of enumerated samples being 0.2. This recommendation may not be applicable to food safety risk assessments for a number of reasons, which include the development of new Bayesian fitting methods, the use of highly sensitive screening tests, and the generally larger sample sizes found in surveys of food commodities. This study evaluates the performance of a Markov chain Monte Carlo fitting method when used in conjunction with a screening test and enumeration of positive samples by the Most Probable Number technique. The results suggest that levels of contamination for common product-pathogen pairs, such as Salmonella on poultry carcasses, can be reliably estimated with the proposed fitting method and sample sizes in excess of 500 observations. The results do, however, demonstrate that simple guidelines for this application, such as the proportion of positive samples, cannot be provided. Published by Elsevier B.V.

  4. Likelihoods for fixed rank nomination networks

    PubMed Central

    HOFF, PETER; FOSDICK, BAILEY; VOLFOVSKY, ALEX; STOVEL, KATHERINE

    2014-01-01

    Many studies that gather social network data use survey methods that lead to censored, missing, or otherwise incomplete information. For example, the popular fixed rank nomination (FRN) scheme, often used in studies of schools and businesses, asks study participants to nominate and rank at most a small number of contacts or friends, leaving the existence of other relations uncertain. However, most statistical models are formulated in terms of completely observed binary networks. Statistical analyses of FRN data with such models ignore the censored and ranked nature of the data and could potentially result in misleading statistical inference. To investigate this possibility, we compare Bayesian parameter estimates obtained from a likelihood for complete binary networks with those obtained from likelihoods that are derived from the FRN scheme, and therefore accommodate the ranked and censored nature of the data. We show analytically and via simulation that the binary likelihood can provide misleading inference, particularly for certain model parameters that relate network ties to characteristics of individuals and pairs of individuals. We also compare these different likelihoods in a data analysis of several adolescent social networks. For some of these networks, the parameter estimates from the binary and FRN likelihoods lead to different conclusions, indicating the importance of analyzing FRN data with a method that accounts for the FRN survey design. PMID:25110586

  5. Exploration of the use of Bayesian modeling of gradients for censored spatiotemporal data from the Deepwater Horizon oil spill

    PubMed Central

    Quick, Harrison; Groth, Caroline; Banerjee, Sudipto; Carlin, Bradley P.; Stenzel, Mark R.; Stewart, Patricia A.; Sandler, Dale P.; Engel, Lawrence S.; Kwok, Richard K.

    2014-01-01

    This paper develops a hierarchical framework for identifying spatiotemporal patterns in data with a high degree of censoring using the gradient process. To do this, we impute censored values using a sampling-based inverse CDF method within our Markov chain Monte Carlo algorithm, thereby avoiding burdensome integration and facilitating efficient estimation of other model parameters. We illustrate use of our methodology using a simulated data example, and uncover the danger of simply substituting a space- and time-constant function of the level of detection for all missing values. We then fit our model to area measurement data of volatile organic compounds (VOC) air concentrations collected on vessels supporting the response and clean-up efforts of the Deepwater Horizon oil release that occurred starting April 20, 2010. These data contained a high percentage of observations below the detectable limits of the measuring instrument. Despite this, we were still able to make some interesting discoveries, including elevated levels of VOC near the site of the oil well on June 26th. Using the results from this preliminary analysis, we hope to inform future research on the Deepwater Horizon study, including the use of gradient methods for assigning workers to exposure categories. PMID:25599019
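
    A minimal sketch of the sampling-based inverse-CDF imputation step described above, assuming a lognormal exposure model whose parameters are the current MCMC draws; mu, sigma, and the LOD below are illustrative assumptions.

    ```python
    # Impute below-LOD values by inverse-CDF sampling: draw U uniformly on
    # (0, F(LOD)), then invert the CDF, giving a draw from the model
    # distribution truncated to the censored region.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    mu, sigma, lod = 0.0, 1.0, 0.3   # current parameter draws and the LOD

    u = rng.uniform(0.0, stats.lognorm.cdf(lod, s=sigma, scale=np.exp(mu)),
                    size=5)
    imputed = stats.lognorm.ppf(u, s=sigma, scale=np.exp(mu))
    print(imputed)   # all imputed values fall below the LOD by construction
    ```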

  6. The Statistical Value of Raw Fluorescence Signal in Luminex xMAP Based Multiplex Immunoassays

    PubMed Central

    Breen, Edmond J.; Tan, Woei; Khan, Alamgir

    2016-01-01

    Tissue samples (plasma, saliva, serum or urine) from 169 patients classified as either normal or having one of seven possible diseases are analysed across three 96-well plates for the presence of 37 analytes using cytokine inflammation multiplexed immunoassay panels. Censoring of concentration data caused problems for analysis of the low-abundance analytes. Using fluorescence analysis rather than concentration-based analysis allowed analysis of these low-abundance analytes. Mixed-effects analysis of the resulting fluorescence and concentration responses reveals that the combination of censoring and mapping the fluorescence responses to concentration values, through a 5PL curve, changed the observed analyte concentrations. Simulation verifies this by showing that observed analyte concentration levels depend on the mean fluorescence response and its distribution. Departures from normality in the fluorescence responses can lead to differences in concentration estimates and unreliable probabilities for treatment effects. When fluorescence responses are normally distributed, fluorescence-based t-tests have greater statistical power than the corresponding concentration-based t-tests. We add evidence that the fluorescence response, unlike concentration values, doesn’t require censoring, and we show, with respect to differential analysis on the fluorescence responses, that background correction is not required. PMID:27243383
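
    For reference, a sketch of the 5PL curve mentioned above, under the usual five-parameter logistic parameterization; the parameter values are illustrative assumptions, not calibrated to any xMAP panel.

    ```python
    # Five-parameter logistic (5PL) curve mapping concentration to response.
    import numpy as np

    def five_pl(conc, a, b, c, d, g):
        """a: asymptote at low concentration, d: asymptote at high
        concentration, c: inflection concentration, b: slope,
        g: asymmetry (g = 1 recovers the symmetric 4PL curve)."""
        return d + (a - d) / (1.0 + (conc / c) ** b) ** g

    conc = np.logspace(-2, 3, 6)
    print(five_pl(conc, a=50.0, b=1.2, c=10.0, d=30000.0, g=0.8))
    ```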

  7. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models.

    PubMed

    Wang, Wei; Griswold, Michael E

    2016-11-30

    The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.

  8. Evaluation of High-Throughput Chemical Exposure Models ...

    EPA Pesticide Factsheets

    The U.S. EPA, under its ExpoCast program, is developing high-throughput near-field modeling methods to estimate human chemical exposure and to provide real-world context to high-throughput screening (HTS) hazard data. These novel modeling methods include reverse methods to infer parent chemical exposures from biomonitoring measurements and forward models to predict multi-pathway exposures from chemical use information and/or residential media concentrations. Here, both forward and reverse modeling methods are used to characterize the relationship between matched near-field environmental (air and dust) and biomarker measurements. Indoor air, house dust, and urine samples from a sample of 120 females (aged 60 to 80 years) were analyzed. In the measured data, 78% of the residential media measurements (across 80 chemicals) and 54% of the urine measurements (across 21 chemicals) were censored, i.e. below the limit of quantification (LOQ). Because of the degree of censoring, we applied a Bayesian approach to impute censored values for 69 chemicals having at least 15% of measurements above LOQ. This resulted in 10 chemicals (5 phthalates, 5 pesticides) with matched air, dust, and urine metabolite measurements. The population medians of indoor air and dust concentrations were compared to population median exposures inferred from urine metabolites concentrations using a high-throughput reverse-dosimetry approach. Median air and dust concentrations were found to be correl

  9. Overall Graft Loss Versus Death-Censored Graft Loss: Unmasking the Magnitude of Racial Disparities in Outcomes Among US Kidney Transplant Recipients.

    PubMed

    Taber, David J; Gebregziabher, Mulugeta; Payne, Elizabeth H; Srinivas, Titte; Baliga, Prabhakar K; Egede, Leonard E

    2017-02-01

    Black kidney transplant recipients experience disproportionately high rates of graft loss. This disparity has persisted for 40 years, and improvements may be impeded based on the current public reporting of overall graft loss by US regulatory organizations for transplantation. Longitudinal cohort study of kidney transplant recipients using a data set created by linking Veterans Affairs and US Renal Data System information, including 4918 veterans transplanted between January 2001 and December 2007, with follow-up through December 2010. Multivariable analysis was conducted using 2-stage joint modeling of random and fixed effects of longitudinal data (linear mixed model) with time to event outcomes (Cox regression). Three thousand three hundred six non-Hispanic whites (67%) were compared with 1612 non-Hispanic black (33%) recipients with 6.0 ± 2.2 years of follow-up. In the unadjusted analysis, black recipients were significantly more likely to have overall graft loss (hazard ratio [HR], 1.19; 95% confidence interval [95% CI], 1.07-1.33), death-censored graft loss (HR, 1.67; 95% CI, 1.45-1.92), and lower mortality (HR, 0.83; 95% CI, 0.72-0.96). In fully adjusted models, only death-censored graft loss remained significant (HR, 1.38; 95% CI, 1.12-1.71; overall graft loss [HR, 1.08; 95% CI, 0.91-1.28]; mortality [HR, 0.84; 95% CI, 0.67-1.06]). A composite definition of graft loss reduced the magnitude of disparities in blacks by 22%. Non-Hispanic black kidney transplant recipients experience a substantial disparity in graft loss, but not mortality. This study of US data provides evidence to suggest that researchers should focus on using death-censored graft loss as the primary outcome of interest to facilitate a better understanding of racial disparities in kidney transplantation.

  10. Post-Transplant Hypophosphatemia and the Risk of Death-Censored Graft Failure and Mortality after Kidney Transplantation.

    PubMed

    van Londen, Marco; Aarts, Brigitte M; Deetman, Petronella E; van der Weijden, Jessica; Eisenga, Michele F; Navis, Gerjan; Bakker, Stephan J L; de Borst, Martin H

    2017-08-07

    Hypophosphatemia is common in the first year after kidney transplantation, but its clinical implications are unclear. We investigated the relationship between the severity of post-transplant hypophosphatemia and mortality or death-censored graft failure in a large cohort of renal transplant recipients with long-term follow-up. We performed a longitudinal cohort study in 957 renal transplant recipients who were transplanted between 1993 and 2008 at a single center. We used a large real-life dataset containing 28,178 phosphate measurements (median of 27 [first to third quartiles, 23-34] serial measurements per patient) and selected the lowest intraindividual phosphate level during the first year after transplantation. The primary outcomes were all-cause mortality, cardiovascular mortality, and death-censored graft failure. The median (interquartile range) intraindividual lowest phosphate level was 1.58 (1.30-1.95) mg/dl, and it was reached at 33 (21-51) days post-transplant. eGFR was the main correlate of the lowest serum phosphate level (model R²=0.32). During 9 (5-12) years of follow-up, 181 (19%) patients developed graft failure, and 295 (35%) patients died, of which 94 (32%) deaths were due to cardiovascular disease. In multivariable Cox regression analysis, more severe hypophosphatemia was associated with a lower risk of death-censored graft failure (fully adjusted hazard ratio, 0.61; 95% confidence interval, 0.43 to 0.88 per 1 mg/dl lower serum phosphate) and cardiovascular mortality (fully adjusted hazard ratio, 0.37; 95% confidence interval, 0.22 to 0.62) but not noncardiovascular mortality (fully adjusted hazard ratio, 1.33; 95% confidence interval, 0.9 to 1.96) or all-cause mortality (fully adjusted hazard ratio, 1.15; 95% confidence interval, 0.81 to 1.61). Post-transplant hypophosphatemia develops early after transplantation. These data connect post-transplant hypophosphatemia with favorable long-term graft and patient outcomes. Copyright © 2017 by the American Society of Nephrology.

  11. Propensity score matching and persistence correction to reduce bias in comparative effectiveness: the effect of cinacalcet use on all-cause mortality.

    PubMed

    Gillespie, Iain A; Floege, Jürgen; Gioni, Ioanna; Drüeke, Tilman B; de Francisco, Angel L; Anker, Stefan D; Kubo, Yumi; Wheeler, David C; Froissart, Marc

    2015-07-01

    The generalisability of randomised controlled trials (RCTs) may be limited by restrictive entry criteria or by their experimental nature. Observational research can provide complementary findings but is prone to bias. Employing propensity score matching, to reduce such bias, we compared the real-life effect of cinacalcet use on all-cause mortality (ACM) with findings from the Evaluation of Cinacalcet Therapy to Lower Cardiovascular Events (EVOLVE) RCT in chronic haemodialysis patients. Incident adult haemodialysis patients receiving cinacalcet, recruited in a prospective observational cohort from 2007-2009 (AROii; n = 10,488), were matched to non-exposed patients regardless of future exposure status. The effect of treatment crossover was investigated with inverse probability of censoring weighted and lag-censored analyses. EVOLVE ACM data were analysed largely as described for the primary composite endpoint. AROii patients receiving cinacalcet (n = 532) were matched to 1790 non-exposed patients. The treatment effect of cinacalcet on ACM in the main AROii analysis (hazard ratio 1.03 [95% confidence interval (CI) 0.78-1.35]) was closer to the null than for the Intention to Treat (ITT) analysis of EVOLVE (0.94 [95%CI 0.85-1.04]). Adjusting for non-persistence by 0- and 6-month lag-censoring and by inverse probability of censoring weight, the hazard ratios in AROii (0.76 [95%CI 0.51-1.15], 0.84 [95%CI 0.60-1.18] and 0.79 [95%CI 0.56-1.11], respectively) were comparable with those of EVOLVE (0.82 [95%CI 0.67-1.01], 0.83 [95%CI 0.73-0.96] and 0.87 [95%CI 0.71-1.06], respectively). Correcting for treatment crossover, we observed results in the 'real-life' setting of the AROii observational cohort that closely mirrored the results of the EVOLVE RCT. Persistence-corrected analyses revealed a trend towards reduced ACM in haemodialysis patients receiving cinacalcet therapy. Copyright © 2015 John Wiley & Sons, Ltd.

  12. A Bayesian Approach for Summarizing and Modeling Time-Series Exposure Data with Left Censoring.

    PubMed

    Houseman, E Andres; Virji, M Abbas

    2017-08-01

    Direct reading instruments are valuable tools for measuring exposure as they provide real-time measurements for rapid decision making. However, their use is limited to general survey applications in part due to issues related to their performance. Moreover, statistical analysis of real-time data is complicated by autocorrelation among successive measurements, non-stationary time series, and the presence of left-censoring due to limit-of-detection (LOD). A Bayesian framework is proposed that accounts for non-stationary autocorrelation and LOD issues in exposure time-series data in order to model workplace factors that affect exposure and estimate summary statistics for tasks or other covariates of interest. A spline-based approach is used to model non-stationary autocorrelation with relatively few assumptions about autocorrelation structure. Left-censoring is addressed by integrating over the left tail of the distribution. The model is fit using Markov-Chain Monte Carlo within a Bayesian paradigm. The method can flexibly account for hierarchical relationships, random effects and fixed effects of covariates. The method is implemented using the rjags package in R, and is illustrated by applying it to real-time exposure data. Estimates for task means and covariates from the Bayesian model are compared to those from conventional frequentist models including linear regression, mixed-effects, and time-series models with different autocorrelation structures. Simulation studies are also conducted to evaluate method performance. Simulation studies with percent of measurements below the LOD ranging from 0 to 50% showed lowest root mean squared errors for task means and the least biased standard deviations from the Bayesian model compared to the frequentist models across all levels of LOD. In the application, task means from the Bayesian model were similar to means from the frequentist models, while the standard deviations were different. Parameter estimates for covariates were significant in some frequentist models, but in the Bayesian model their credible intervals contained zero; such discrepancies were observed in multiple datasets. Variance components from the Bayesian model reflected substantial autocorrelation, consistent with the frequentist models, except for the auto-regressive moving average model. Plots of means from the Bayesian model showed good fit to the observed data. The proposed Bayesian model provides an approach for modeling non-stationary autocorrelation in a hierarchical modeling framework to estimate task means, standard deviations, quantiles, and parameter estimates for covariates that are less biased and have better performance characteristics than some of the contemporary methods. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.

  13. Copula based flexible modeling of associations between clustered event times.

    PubMed

    Geerdens, Candida; Claeskens, Gerda; Janssen, Paul

    2016-07-01

    Multivariate survival data are characterized by the presence of correlation between event times within the same cluster. First, we build multi-dimensional copulas with flexible and possibly asymmetric dependence structures for such data. In particular, clustered right-censored survival data are modeled using mixtures of max-infinitely divisible bivariate copulas. Second, these copulas are fitted by a likelihood approach in which the vast number of copula derivatives present in the likelihood is approximated by finite differences. Third, we formulate conditions for clustered right-censored survival data under which an information criterion for model selection is either weakly consistent or consistent. Several of the familiar selection criteria are included. A set of four-dimensional data on time-to-mastitis is used to demonstrate the developed methodology.

  14. The Topp-Leone generalized Rayleigh cure rate model and its application

    NASA Astrophysics Data System (ADS)

    Nanthaprut, Pimwarat; Bodhisuwan, Winai; Patummasut, Mena

    2017-11-01

    The cure rate model is a survival model that accounts for a proportion of subjects who will never experience the event and are therefore censored. In clinical trials, data representing the time to recurrence of an event or the death of patients are used to improve the efficiency of treatments. Each dataset can be separated into two groups: censored and uncensored observations. In this work, a new mixture cure rate model is introduced based on the Topp-Leone generalized Rayleigh distribution. The Bayesian approach is employed to estimate its parameters. In addition, a breast cancer dataset is analyzed for model illustration purposes. According to the deviance information criterion, the Topp-Leone generalized Rayleigh cure rate model fits better than the Weibull and exponential cure rate models.
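
    To make the mixture structure concrete, the sketch below fits a mixture cure model, S_pop(t) = π + (1 − π)S_u(t), by maximum likelihood on simulated data, using a Weibull latency distribution for the uncured group; the record's actual model uses a Topp-Leone generalized Rayleigh latency and Bayesian estimation, neither of which is reproduced here:

        import numpy as np
        from scipy.optimize import minimize

        def mixture_cure_nll(params, t, event):
            """Mixture cure likelihood: S_pop(t) = pi + (1 - pi) * S_u(t)."""
            logit_pi, log_k, log_lam = params
            pi = 1.0 / (1.0 + np.exp(-logit_pi))
            k, lam = np.exp(log_k), np.exp(log_lam)
            s_u = np.exp(-(t / lam) ** k)                   # Weibull survival (uncured)
            f_u = (k / lam) * (t / lam) ** (k - 1.0) * s_u  # Weibull density (uncured)
            ll = np.where(event == 1,
                          np.log((1.0 - pi) * f_u),         # events come from the uncured
                          np.log(pi + (1.0 - pi) * s_u))    # censored: cured or not yet failed
            return -ll.sum()

        rng = np.random.default_rng(1)
        n = 500
        cured = rng.random(n) < 0.3                  # 30% never experience the event
        latent = 2.0 * rng.weibull(1.5, n)
        censor = rng.uniform(0.5, 4.0, n)
        t = np.where(cured, censor, np.minimum(latent, censor))
        event = (~cured & (latent <= censor)).astype(int)
        fit = minimize(mixture_cure_nll, x0=[0.0, 0.0, 0.0], args=(t, event))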

  15. Permutational distribution of the log-rank statistic under random censorship with applications to carcinogenicity assays.

    PubMed

    Heimann, G; Neuhaus, G

    1998-03-01

    In the random censorship model, the log-rank test is often used for comparing a control group with different dose groups. If the number of tumors is small, so-called exact methods are often applied for computing critical values from a permutational distribution. Two of these exact methods are discussed and shown to be incorrect. The correct permutational distribution is derived and studied with respect to its behavior under unequal censoring in the light of recent results proving that the permutational version and the unconditional version of the log-rank test are asymptotically equivalent even under unequal censoring. The log-rank test is studied by simulations of a realistic scenario from a bioassay with small numbers of tumors.
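
    A minimal sketch of the conditional (permutational) approach, assuming the lifelines package: the observed log-rank statistic is compared against statistics recomputed after permuting whole (time, event) pairs across the two groups, so each observation keeps its own censoring status, consistent with the correct permutational distribution discussed above:

        import numpy as np
        from lifelines.statistics import logrank_test

        def permutation_logrank_pvalue(time, event, group, n_perm=1000, seed=0):
            """Monte Carlo p-value from the permutational log-rank distribution."""
            rng = np.random.default_rng(seed)

            def stat(g):
                a, b = g == 0, g == 1
                return logrank_test(time[a], time[b], event[a], event[b]).test_statistic

            observed = stat(group)
            # permute whole (time, event) pairs: censoring travels with the observation
            perms = np.array([stat(rng.permutation(group)) for _ in range(n_perm)])
            return (perms >= observed).mean()

        time = np.array([3.0, 5.0, 7.0, 2.0, 9.0, 4.0, 8.0, 6.0])   # hypothetical data
        event = np.array([1, 0, 1, 1, 0, 1, 1, 0])
        group = np.array([0, 0, 0, 0, 1, 1, 1, 1])                  # control vs dose group
        print(permutation_logrank_pvalue(time, event, group))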

  16. Fast iterative censoring CFAR algorithm for ship detection from SAR images

    NASA Astrophysics Data System (ADS)

    Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng

    2017-11-01

    Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; then, an iterative censoring CFAR algorithm is used to detect ship candidates from each target block adaptively and efficiently. Parallel detection is available, and the statistical parameters of the G0 distribution, which fits local sea clutter well, can be estimated quickly using an integral image operator. Experimental results on TerraSAR-X images demonstrate the effectiveness of the proposed technique.
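
    The sketch below shows only the iterative censoring idea, in a generic one-dimensional cell-averaging CFAR under an assumed exponential clutter model; the detector in this record additionally uses a fast block stage, a G0 clutter model, and integral-image estimation, none of which is reproduced here:

        import numpy as np

        def iterative_censoring_cfar(x, guard=2, train=16, pfa=1e-3, n_iter=3):
            """1-D cell-averaging CFAR with iterative censoring of training cells."""
            n = x.size
            det = np.zeros(n, dtype=bool)
            for i in range(n):
                left = np.arange(max(0, i - guard - train), max(0, i - guard))
                right = np.arange(min(n, i + guard + 1), min(n, i + guard + train + 1))
                cells = x[np.concatenate([left, right])]
                if cells.size == 0:
                    continue
                keep = np.ones(cells.size, dtype=bool)
                thr = np.inf
                for _ in range(n_iter):
                    m = keep.sum()
                    if m == 0:
                        break
                    alpha = m * (pfa ** (-1.0 / m) - 1.0)  # CA-CFAR factor, exponential clutter
                    thr = alpha * cells[keep].mean()
                    keep = cells <= thr                    # censor cells above the threshold
                det[i] = x[i] > thr
            return det

        rng = np.random.default_rng(2)
        clutter = rng.exponential(1.0, 500)
        clutter[[120, 130, 300]] += 40.0                   # injected "ship" returns
        print(np.flatnonzero(iterative_censoring_cfar(clutter)))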

  17. Estimation of descriptive statistics for multiply censored water quality data

    USGS Publications Warehouse

    Helsel, Dennis R.; Cohn, Timothy A.

    1988-01-01

    This paper extends the work of Gilliom and Helsel (1986) on procedures for estimating descriptive statistics of water quality data that contain “less than” observations. Previously, procedures were evaluated when only one detection limit was present. Here we investigate the performance of estimators for data that have multiple detection limits. Probability plotting and maximum likelihood methods perform substantially better than simple substitution procedures now commonly in use. Therefore, simple substitution procedures (e.g., substitution of the detection limit) should be avoided. Probability plotting methods are more robust than maximum likelihood methods to misspecification of the parent distribution and their use should be encouraged in the typical situation where the parent distribution is unknown. When utilized correctly, “less than” values frequently contain nearly as much information for estimating population moments and quantiles as would the same observations had the detection limit been below them.
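
    A minimal sketch of one of the evaluated approaches, maximum likelihood for lognormal data with multiple detection limits: each non-detect contributes the probability mass below its own limit, so no substitution is needed (hypothetical data; the probability-plotting estimators the paper also evaluates are not shown):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def lognormal_mle_multiple_dl(value, nondetect):
            """ML fit of log-concentration mean/sd with multiple detection limits.

            `value` holds the measurement for detects and the detection limit
            itself for non-detects, so each "less than" observation contributes
            the probability mass below its own limit.
            """
            x = np.log(value)

            def nll(p):
                mu, log_s = p
                s = np.exp(log_s)
                return -(norm.logpdf(x[~nondetect], mu, s).sum()
                         + norm.logcdf(x[nondetect], mu, s).sum())

            fit = minimize(nll, x0=[x.mean(), 0.0])
            return fit.x[0], np.exp(fit.x[1])

        value = np.array([0.52, 0.10, 2.30, 0.20, 1.10, 0.05])  # hypothetical data
        nondetect = np.array([0, 1, 0, 1, 0, 1], dtype=bool)    # three distinct DLs
        print(lognormal_mle_multiple_dl(value, nondetect))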

  18. Factors associated with clinically significant increased walking time in multiple sclerosis: results of a survival analysis of short-term follow-up data from a clinical database.

    PubMed

    Miller, Deborah M; Thompson, Nicolas R; Cohen, Jeffrey A; Fox, Robert J; Hartman, Jen; Schwetz, Kathleen; Conway, Devon S; Rudick, Richard A

    2015-04-01

    Because multiple sclerosis (MS) is variable and unpredictable, if symptom worsening could be predicted, patients may feel better prepared to manage changes in function. The objective of this paper is to study the prediction of walking impairment in MS. We retrieved data for all MS patients at our center (2008-2009), including baseline and follow-up timed 25-foot walk (T25FW) times. We assessed the incidence of ≥20% worsening in T25FW by developing two survival models: (1) disease course and (2) Multiple Sclerosis Performance Scales (MSPS) score. The outcome was days until ≥20% worsening in T25FW. Covariates were disease subtype, years since diagnosis, Patient Health Questionnaire-9 (PHQ-9) score, and demographics. Data were interval censored; missing data were handled with multiple imputation. Of 1544 patients, 309 (20%) experienced ≥20% worsening in T25FW. For disease course, time to worsening was significantly shorter for secondary progressive vs. relapsing-remitting disease (p < 0.001). For MSPS, patients with lower baseline MSPS scores progressed more slowly (p = 0.001). In both models, sex, baseline T25FW, and time since diagnosis were significantly associated with worsening. In the disease course model, PHQ-9 score may be related to worsening (p = 0.07). These findings suggest factors associated with worsening in T25FW and a potential approach to establishing indicators associated with clinically significant change. © The Author(s), 2014.

  19. Comparing two correlated C indices with right-censored survival outcome: a one-shot nonparametric approach

    PubMed Central

    Kang, Le; Chen, Weijie; Petrick, Nicholas A.; Gallas, Brandon D.

    2014-01-01

    The area under the receiver operating characteristic (ROC) curve (AUC) is often used as a summary index of the diagnostic ability in evaluating biomarkers when the clinical outcome (truth) is binary. When the clinical outcome is right-censored survival time, the C index, motivated as an extension of AUC, has been proposed by Harrell as a measure of concordance between a predictive biomarker and the right-censored survival outcome. In this work, we investigate methods for statistical comparison of two diagnostic or predictive systems, which could be either two biomarkers or two fixed algorithms, in terms of their C indices. We adopt a U-statistics based C estimator that is asymptotically normal and develop a nonparametric analytical approach to estimate the variance of the C estimator and the covariance of two C estimators. A z-score test is then constructed to compare the two C indices. We validate our one-shot nonparametric method via simulation studies in terms of the type I error rate and power. We also compare our one-shot method with resampling methods including the jackknife and the bootstrap. Simulation results show that the proposed one-shot method provides almost unbiased variance estimations and has satisfactory type I error control and power. Finally, we illustrate the use of the proposed method with an example from the Framingham Heart Study. PMID:25399736
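
    For orientation, a plain sketch of Harrell's C as the U-statistic point estimate on hypothetical data (ties in survival time ignored); the paper's actual contribution, the analytic variance and covariance estimators that enable the z-score comparison of two correlated C indices, is not reproduced here:

        import numpy as np

        def harrell_c(time, event, score):
            """Point estimate of Harrell's C for right-censored data.

            A pair is usable when the earlier time is an observed event; the
            pair is concordant when that subject also has the higher risk score.
            Tied scores count 1/2; tied event times are ignored for simplicity.
            """
            conc = usable = 0.0
            n = len(time)
            for i in range(n):
                for j in range(n):
                    if time[i] < time[j] and event[i] == 1:   # i fails first: usable
                        usable += 1.0
                        if score[i] > score[j]:
                            conc += 1.0
                        elif score[i] == score[j]:
                            conc += 0.5
            return conc / usable

        time = np.array([5.0, 3.0, 9.0, 2.0, 7.0])            # hypothetical data
        event = np.array([1, 1, 0, 1, 1])
        score = np.array([0.60, 0.90, 0.10, 0.95, 0.40])      # higher = higher predicted risk
        print(harrell_c(time, event, score))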

  20. Quantifying and comparing dynamic predictive accuracy of joint models for longitudinal marker and time-to-event in presence of censoring and competing risks.

    PubMed

    Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie; Berr, Claudine; Dartigues, Jean-François; Jacqmin-Gadda, Hélène

    2015-03-01

    Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established and both pointwise confidence intervals and simultaneous confidence bands are derived. Tests are also proposed to compare the dynamic prediction accuracy curves of two prognostic models. The finite sample behavior of the inference procedures is assessed via simulations. We apply the proposed methodology to compare various prediction models using repeated measures of two psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort. © 2014, The International Biometric Society.
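
    A simplified sketch of the inverse probability of censoring weighting idea for the Brier score at a single horizon t, assuming the lifelines package and ignoring competing risks, confidence bands, and the left-limit subtlety at event times:

        import numpy as np
        from lifelines import KaplanMeierFitter

        def ipcw_brier(time, event, pred_risk, t):
            """IPCW Brier score at horizon t; pred_risk[i] = predicted P(event by t)."""
            km = KaplanMeierFitter()
            km.fit(time, 1 - np.asarray(event))      # Kaplan-Meier of the censoring times
            G = lambda u: float(km.predict(u))       # G(u) = P(censoring time > u)
            total = 0.0
            for Ti, Di, pi in zip(time, event, pred_risk):
                if Ti <= t and Di == 1:
                    total += (1.0 - pi) ** 2 / G(Ti)  # observed event before t
                elif Ti > t:
                    total += (0.0 - pi) ** 2 / G(t)   # known event-free at t
                # subjects censored before t contribute nothing directly;
                # the inverse weights above compensate for their removal
            return total / len(time)

        time = np.array([2.0, 4.0, 5.0, 7.0, 9.0])            # hypothetical data
        event = np.array([1, 0, 1, 1, 0])
        pred_risk = np.array([0.7, 0.4, 0.6, 0.5, 0.2])       # predicted P(event by t=6)
        print(ipcw_brier(time, event, pred_risk, t=6.0))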

  1. On prognostic models, artificial intelligence and censored observations.

    PubMed

    Anand, S S; Hamilton, P W; Hughes, J G; Bell, D A

    2001-03-01

    The development of prognostic models for assisting medical practitioners with decision making is not a trivial task. Models need to possess a number of desirable characteristics, and few, if any, current modelling approaches based on statistics or artificial intelligence can produce models that display all of them. The inability of modelling techniques to provide truly useful models has meant that interest in them has remained largely academic, and in turn only a very small percentage of the models developed have been deployed in practice. On the other hand, new modelling paradigms are continuously proposed within the machine learning and statistical communities, and claims, often based on inadequate evaluation, are made about their superiority over traditional modelling methods. We believe that for new modelling approaches to deliver true net benefits over traditional techniques, an evaluation centric approach to their development is essential. In this paper we present such an evaluation centric approach to developing extensions to the basic k-nearest neighbour (k-NN) paradigm. We use standard statistical techniques to enhance the distance metric used and a framework based on evidence theory to obtain a prediction for the target example from the outcomes of the retrieved exemplars. We refer to this new k-NN algorithm as Censored k-NN (Ck-NN), reflecting the enhancements made to k-NN to handle censored observations.

  2. Likelihood inference for COM-Poisson cure rate model with interval-censored data and Weibull lifetimes.

    PubMed

    Pal, Suvra; Balakrishnan, N

    2017-10-01

    In this paper, we consider a competing cause scenario and assume that the number of competing causes follows a Conway-Maxwell Poisson distribution, which can capture both the over- and under-dispersion usually encountered in discrete data. Assuming that the population of interest has a cured component and that the data are interval censored, as opposed to the more commonly considered right-censored data, the main contribution is in developing the steps of the expectation maximization algorithm for the determination of the maximum likelihood estimates of the model parameters of the flexible Conway-Maxwell Poisson cure rate model with Weibull lifetimes. An extensive Monte Carlo simulation study is carried out to demonstrate the performance of the proposed estimation method. Model discrimination within the Conway-Maxwell Poisson family is addressed using the likelihood ratio test and information-based criteria to select a competing cause distribution that provides the best fit to the data. A simulation study is also carried out to demonstrate the loss in efficiency incurred by selecting an improper competing cause distribution, which justifies the use of a flexible family of distributions for the number of competing causes. Finally, the proposed methodology and the flexibility of the Conway-Maxwell Poisson distribution are illustrated with two known data sets from the literature: smoking cessation data and breast cosmesis data.

  3. Rethinking the advantage of zero-HLA mismatches in unrelated living donor kidney transplantation: implications on kidney paired donation.

    PubMed

    Casey, Michael Jin; Wen, Xuerong; Rehman, Shehzad; Santos, Alfonso H; Andreoni, Kenneth A

    2015-04-01

    The OPTN/UNOS Kidney Paired Donation (KPD) Pilot Program allocates priority to zero-HLA mismatches. However, in unrelated living donor kidney transplants (LDKT)-the same donor source in KPD-no study has shown whether zero-HLA mismatches provide any advantage over >0 HLA mismatches. We hypothesize that zero-HLA mismatches among unrelated LDKT do not benefit graft survival. This retrospective SRTR database study analyzed LDKT recipients from 1987 to 2012. Among unrelated LDKT, subjects with zero-HLA mismatches were compared to a 1:1-5 matched (by donor age ±1 year and year of transplantation) control cohort with >0 HLA mismatches. The primary endpoint was death-censored graft survival. Among 32,654 unrelated LDKT recipients, 83 had zero-HLA mismatches and were matched to 407 controls with >0 HLA mismatches. Kaplan-Meier analyses for death-censored graft and patient survival showed no difference between the study and control cohorts. In multivariate marginal Cox models, zero-HLA mismatching showed no benefit in death-censored graft survival (HR = 1.46, 95% CI 0.78-2.73) or patient survival (HR = 1.43, 95% CI 0.68-3.01). Our data suggest that in unrelated LDKT, zero-HLA mismatches may not offer any survival advantage. Therefore, particular study of zero-HLA mismatching is needed to validate its place in the OPTN/UNOS KPD Pilot Program allocation algorithm. © 2014 Steunstichting ESOT.

  4. Estimating the effect of a rare time-dependent treatment on the recurrent event rate.

    PubMed

    Smith, Abigail R; Zhu, Danting; Goodrich, Nathan P; Merion, Robert M; Schaubel, Douglas E

    2018-05-30

    In many observational studies, the objective is to estimate the effect of treatment or state-change on the recurrent event rate. If treatment is assigned after the start of follow-up, traditional methods (eg, adjustment for baseline-only covariates or fully conditional adjustment for time-dependent covariates) may give biased results. We propose a two-stage modeling approach using the method of sequential stratification to accurately estimate the effect of a time-dependent treatment on the recurrent event rate. At the first stage, we estimate the pretreatment recurrent event trajectory using a proportional rates model censored at the time of treatment. Prognostic scores are estimated from the linear predictor of this model and used to match treated patients to as yet untreated controls based on prognostic score at the time of treatment for the index patient. The final model is stratified on matched sets and compares the posttreatment recurrent event rate to the recurrent event rate of the matched controls. We demonstrate through simulation that bias due to dependent censoring is negligible, provided the treatment frequency is low, and we investigate a threshold at which correction for dependent censoring is needed. The method is applied to liver transplant (LT), where we estimate the effect of development of post-LT End Stage Renal Disease (ESRD) on rate of days hospitalized. Copyright © 2018 John Wiley & Sons, Ltd.

  5. Random left censoring: a second look at bone lead concentration measurements

    NASA Astrophysics Data System (ADS)

    Popovic, M.; Nie, H.; Chettle, D. R.; McNeill, F. E.

    2007-09-01

    Bone lead concentrations measured in vivo by x-ray fluorescence (XRF) are subject to left censoring due to the limited precision of the technique at very low concentrations. In the analysis of bone lead measurements, inverse variance weighting (IVW) of measurements is commonly used to estimate the mean of a data set and its standard error. Student's t-test is used to compare the IVW means of two sets, testing the hypothesis that the two sets are from the same population. This analysis was undertaken to assess the adequacy of IVW in the analysis of bone lead measurements or to confirm the results of IVW using an independent approach. The rationale is provided for the use of methods of survival data analysis in the study of XRF bone lead measurements. The procedure is provided for bone lead data analysis using the Kaplan-Meier and Nelson-Aalen estimators. The methodology is also outlined for the rank tests that are used to determine whether two censored sets are from the same population. The methods are applied to six data sets acquired in epidemiological studies. The estimated parameters and test statistics were compared with the results of the IVW approach. It is concluded that the proposed methods of statistical analysis can provide valid inference about bone lead concentrations, but the computed parameters do not differ substantially from those derived by the more widely used method of IVW.
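
    One standard device for applying right-censoring tools such as the Kaplan-Meier estimator to left-censored measurements is to reflect the data about a constant larger than every observation; a sketch with hypothetical values, assuming the lifelines package:

        import numpy as np
        from lifelines import KaplanMeierFitter

        # Reflect left-censored data so standard right-censoring tools apply:
        # with M above every value, "X < DL" becomes "M - X > M - DL", an
        # ordinary right-censored observation on the flipped scale.
        value = np.array([3.1, 5.0, 2.2, 7.4, 4.8, 6.0])   # hypothetical bone lead values
        below = np.array([1, 0, 1, 0, 0, 1], dtype=bool)   # 1 = below precision limit
        M = value.max() + 1.0

        km = KaplanMeierFitter()
        km.fit(M - value, event_observed=~below)           # fit on the flipped scale
        # P(X < x) on the original scale equals the flipped-scale survival at M - x:
        x = 4.0
        print(float(km.predict(M - x)))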

  6. Evaluation of methodology for the analysis of 'time-to-event' data in pharmacogenomic genome-wide association studies.

    PubMed

    Syed, Hamzah; Jorgensen, Andrea L; Morris, Andrew P

    2016-06-01

    To evaluate the power to detect associations between SNPs and time-to-event outcomes across a range of pharmacogenomic study designs while comparing alternative regression approaches. Simulations were conducted to compare Cox proportional hazards modeling accounting for censoring and logistic regression modeling of a dichotomized outcome at the end of the study. The Cox proportional hazards model was demonstrated to be more powerful than the logistic regression analysis. The difference in power between the approaches was highly dependent on the rate of censoring. Initial evaluation of single-nucleotide polymorphism association signals using computationally efficient software with dichotomized outcomes provides an effective screening tool for some design scenarios, and thus has important implications for the development of analytical protocols in pharmacogenomic studies.

  7. Exposure data from multi-application, multi-industry maintenance of surfaces and joints sealed with asbestos-containing gaskets and packing.

    PubMed

    Boelter, Fred; Simmons, Catherine; Hewett, Paul

    2011-04-01

    Fluid sealing devices (gaskets and packing) containing asbestos are manufactured and blended with binders such that the asbestos fibers are locked in a matrix that limits the potential for fiber release. Occasionally, fluid sealing devices fail and need to be replaced or are removed during preventive maintenance activities. This is the first study known to pool over a decade's worth of exposure assessments involving fluid sealing devices used in a variety of applications. Twenty-one assessments of work activities and air monitoring were performed under conditions with no mechanical ventilation and work scenarios described as "worst-case" conditions. Frequently, the work was conducted using aggressive techniques, along with dry removal practices. Personal and area samples were collected and analyzed in accordance with the National Institute for Occupational Safety and Health Methods 7400 and 7402. A total of 782 samples were analyzed by phase contrast microscopy, and 499 samples were analyzed by transmission electron microscopy. The statistical data analysis focused on the overall data sets, which were personal full-shift time-weighted average (TWA) exposures, personal 30-min exposures, and area full-shift TWA values. Each data set contains three estimates of exposure: (1) total fibers; (2) asbestos fibers only but substituting a value of 0.0035 f/cc for censored data; and (3) asbestos fibers only but substituting the limit of quantification value for censored data. Censored data in the various data sets ranged from 7% to just over 95%. Because all the data sets were censored, the geometric mean and geometric standard deviation were estimated using the maximum likelihood estimation method. Nonparametric, Kaplan-Meier, and lognormal statistics were applied and found to be consistent and reinforcing. All three sets of statistics suggest that the mean and median exposures were less than 25% of the 0.1 f/cc 8-hr TWA or 1.0 f/cc 30-min criteria, and that there is at least 95% confidence that the true 95th percentile exposures are less than 0.1 f/cc as an 8-hr TWA.

  8. Methodological comparison of marginal structural model, time-varying Cox regression, and propensity score methods: the example of antidepressant use and the risk of hip fracture.

    PubMed

    Ali, M Sanni; Groenwold, Rolf H H; Belitser, Svetlana V; Souverein, Patrick C; Martín, Elisa; Gatto, Nicolle M; Huerta, Consuelo; Gardarsdottir, Helga; Roes, Kit C B; Hoes, Arno W; de Boer, Antonius; Klungel, Olaf H

    2016-03-01

    Observational studies including time-varying treatments are prone to confounding. We compared time-varying Cox regression analysis, propensity score (PS) methods, and marginal structural models (MSMs) in a study of antidepressant [selective serotonin reuptake inhibitors (SSRIs)] use and the risk of hip fracture. A cohort of patients with a first prescription for antidepressants (SSRI or tricyclic antidepressants) was extracted from the Dutch Mondriaan and Spanish Base de datos para la Investigación Farmacoepidemiológica en Atención Primaria (BIFAP) general practice databases for the period 2001-2009. The net (total) effect of SSRI versus no SSRI on the risk of hip fracture was estimated using time-varying Cox regression, stratification and covariate adjustment using the PS, and MSM. In MSM, censoring was accounted for by inverse probability of censoring weights. The crude hazard ratio (HR) of SSRI use versus no SSRI use on hip fracture was 1.75 (95%CI: 1.12, 2.72) in Mondriaan and 2.09 (1.89, 2.32) in BIFAP. After confounding adjustment using time-varying Cox regression, stratification, and covariate adjustment using the PS, HRs increased in Mondriaan [2.59 (1.63, 4.12), 2.64 (1.63, 4.25), and 2.82 (1.63, 4.25), respectively] and decreased in BIFAP [1.56 (1.40, 1.73), 1.54 (1.39, 1.71), and 1.61 (1.45, 1.78), respectively]. MSMs with stabilized weights yielded HR 2.15 (1.30, 3.55) in Mondriaan and 1.63 (1.28, 2.07) in BIFAP when accounting for censoring and 2.13 (1.32, 3.45) in Mondriaan and 1.66 (1.30, 2.12) in BIFAP without accounting for censoring. In this empirical study, differences between the different methods to control for time-dependent confounding were small. The observed differences in treatment effect estimates between the databases are likely attributable to different confounding information in the datasets, illustrating that adequate information on (time-varying) confounding is crucial to prevent bias. Copyright © 2016 John Wiley & Sons, Ltd.

  9. Solutions to time variant problems of real-time expert systems

    NASA Technical Reports Server (NTRS)

    Yeh, Show-Way; Wu, Chuan-Lin; Hung, Chaw-Kwei

    1988-01-01

    Real-time expert systems for monitoring and control are driven by input data which change with time. One of the subtle problems in this field is the propagation of time-variant problems from rule to rule. This propagation problem is even more complicated in a multiprogramming environment, where the expert system may issue test commands to the system to get data and may access time-consuming devices to retrieve data for concurrent reasoning. Two approaches are used to handle the flood of input data. Snapshots can be taken to freeze the system from time to time; the expert system then treats the system as a stationary one and traces changes by comparing consecutive snapshots. In the other approach, when an input is available, the rules associated with it are evaluated. For both approaches, if the premise condition of a fired rule changes to being false, the downstream rules should be deactivated. If the status change is due to the disappearance of a transient problem, actions taken by fired downstream rules whose premises are no longer true may need to be undone, and a downstream rule that is still being evaluated should not be fired. Three mechanisms for solving this problem are discussed: tracing, backward checking, and censor setting. In the forward tracing mechanism, when the premise conditions of a fired rule become false, the premise conditions of downstream rules which have been fired or are being evaluated due to the firing of that rule are reevaluated; a tree with its root at the rule being deactivated is traversed. In the backward checking mechanism, when a rule is about to be fired, the expert system checks back on the premise conditions of the upstream rules that led to evaluation of the rule to see whether it should be fired; the root of the tree being traversed is the rule being fired. In the censor setting mechanism, when a rule is to be evaluated, a censor is constructed based on the premise conditions of the upstream rules, and the censor is evaluated just before the rule is fired. Unlike the backward checking mechanism, this one does not search the upstream rules. This paper explores the details of implementation of the three mechanisms.

  10. The Paradoxical Role of the Research Administrator.

    ERIC Educational Resources Information Center

    White, Virginia P.

    1991-01-01

    This reprinted 1970 article examines the role of the university research administrator and finds that the role involves paradoxes between controller and entrepreneur, master and slave, censor and publicist, and traditionalist and innovator. (DB)

  11. Proportional hazards model with varying coefficients for length-biased data.

    PubMed

    Zhang, Feipeng; Chen, Xuerong; Zhou, Yong

    2014-01-01

    Length-biased data arise in many important applications including epidemiological cohort studies, cancer prevention trials and studies of labor economics. Such data are also often subject to right censoring due to loss of follow-up or the end of study. In this paper, we consider a proportional hazards model with varying coefficients for right-censored and length-biased data, which is used to study nonlinear interaction effects between covariates and an exposure variable. A local estimating equation method is proposed for the unknown coefficients and the intercept function in the model. The asymptotic properties of the proposed estimators are established using martingale theory and kernel smoothing techniques. Our simulation studies demonstrate that the proposed estimators have excellent finite-sample performance. The Channing House data are analyzed to demonstrate the application of the proposed method.

  12. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions

    PubMed Central

    Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.

    2012-01-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661

  13. Atrazine concentrations in near-surface aquifers: A censored regression approach

    USGS Publications Warehouse

    Liu, S.; Yen, S.T.; Kolpin, D.W.

    1996-01-01

    In 1991, the U.S. Geological Survey (USGS) conducted a study to investigate the occurrence of atrazine (2-chloro-4-ethylamino-6-isopropylamino-s-triazine) and other agricultural chemicals in near-surface aquifers in the midcontinental USA. Because about 83% of the atrazine concentrations from the USGS study were censored, standard statistical estimation procedures could not be used. To determine the factors that affect atrazine concentrations in groundwater while accommodating the high degree of data censoring, Tobit models were used (normal homoscedastic, normal heteroscedastic, lognormal homoscedastic, and lognormal heteroscedastic). Empirical results suggest that the lognormal heteroscedastic Tobit model is the model of choice for this type of study. This model determined the following factors to have the strongest effect on atrazine concentrations in groundwater: percent of pasture within 3.2 km, percent of forest within 3.2 km (2 mi), mean open interval of the well, primary water use of a well, aquifer class (unconsolidated or bedrock), aquifer type (unconfined or confined), existence of a stream within 30 m (100 ft), existence of a stream within 30 m to 0.4 km (0.25 mi), and existence of a stream within 0.4 to 3.2 km. Examining the elasticities of the continuous explanatory factors provides further insight into their effects on atrazine concentrations in groundwater. This study documents a viable statistical method that can be used to accommodate the complicating presence of censored data, a feature that commonly occurs in environmental data.
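
    A minimal sketch of the simplest of the four model variants considered, a homoscedastic Tobit regression fit by maximum likelihood on simulated data (the heteroscedastic and lognormal variants the study prefers are not reproduced):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def tobit_nll(params, y, X, dl):
            """Left-censored (Tobit) regression negative log-likelihood."""
            beta, log_s = params[:-1], params[-1]
            s = np.exp(log_s)
            mu = X @ beta
            cens = y <= dl                                  # non-detects sit at the DL
            ll = norm.logpdf(y[~cens], mu[~cens], s).sum()  # detects: density
            ll += norm.logcdf((dl - mu[cens]) / s).sum()    # non-detects: left-tail mass
            return -ll

        rng = np.random.default_rng(3)
        n = 300
        X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
        y_star = X @ np.array([0.5, -1.0]) + rng.normal(scale=0.8, size=n)
        dl = -0.5
        y = np.maximum(y_star, dl)              # values below the DL recorded at the DL
        fit = minimize(tobit_nll, x0=np.zeros(3), args=(y, X, dl))
        print(fit.x[:2], np.exp(fit.x[2]))      # regression coefficients and sigma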

  14. Course of serological tests in treated subjects with chronic Trypanosoma cruzi infection: a systematic review and meta-analysis of individual participant data.

    PubMed

    Sguassero, Yanina; Roberts, Karen N; Harvey, Guillermina B; Comandé, Daniel; Ciapponi, Agustín; Cuesta, Cristina B; Danesi, Emmaría; Aguiar, Camila; Andrade, Ana L; Castro, Ana Mde; Lana, Marta de; Escribà, Josep M; Fabbro, Diana L; Fernandes, Cloé D; Meira, Wendell Sf; Flores-Chávez, María; Hasslocher-Moreno, Alejandro M; Jackson, Yves; Lacunza, Carlos D; Machado-de-Assis, Girley F; Maldonado, Marisel; Monje-Rumi, María M; Molina, Israel; Martín, Catalina Muñoz-San; Murcia, Laura; Castro, Cleudson Nery de; Silveira, Celeste An; Negrette, Olga Sánchez; Segovia, Manuel; Solari, Aldo; Steindel, Mário; Streiger, Mirtha L; Bilbao, Ninfa Vera de; Zulantay, Inés; Sosa-Estani, Sergio

    2018-06-04

    To determine the course of serological tests in subjects with chronic T. cruzi infection treated with antitrypanosomal drugs. We conducted a systematic review and meta-analysis using individual participant data. Survival analysis and a Cox proportional hazards regression model with a random effect to adjust for covariates were applied. The protocol was registered at www.crd.york.ac.uk/PROSPERO (CRD42012002162). We included 27 studies (1296 subjects) conducted in eight countries. The risk of bias was low for all domains in 17 studies (63.0%). We assessed 913 subjects (149 seroreversion events, 83.7% censored data) for ELISA, 670 subjects (134 events, 80.0% censored) for IIF, and 548 subjects (99 events, 82.0% censored) for IHA. A higher probability of seroreversion was observed in subjects aged 1-19 years compared to adults at a shorter time span. The chance of seroreversion also varied according to the country where the infection might have been acquired. For instance, the pooled adjusted hazard ratio between children/adolescents and adults for the IIF test was 1.54 (95% CI 0.64-3.71) and 9.37 (3.44-25.50) in some countries of South America and Brazil, respectively. The disappearance of anti-T. cruzi antibodies was demonstrated along the follow-up. An interaction between age at treatment and country setting was shown. Copyright © 2018. Published by Elsevier Ltd.

  15. Comparing two correlated C indices with right-censored survival outcome: a one-shot nonparametric approach.

    PubMed

    Kang, Le; Chen, Weijie; Petrick, Nicholas A; Gallas, Brandon D

    2015-02-20

    The area under the receiver operating characteristic curve is often used as a summary index of the diagnostic ability in evaluating biomarkers when the clinical outcome (truth) is binary. When the clinical outcome is right-censored survival time, the C index, motivated as an extension of area under the receiver operating characteristic curve, has been proposed by Harrell as a measure of concordance between a predictive biomarker and the right-censored survival outcome. In this work, we investigate methods for statistical comparison of two diagnostic or predictive systems, which could be either two biomarkers or two fixed algorithms, in terms of their C indices. We adopt a U-statistics-based C estimator that is asymptotically normal and develop a nonparametric analytical approach to estimate the variance of the C estimator and the covariance of two C estimators. A z-score test is then constructed to compare the two C indices. We validate our one-shot nonparametric method via simulation studies in terms of the type I error rate and power. We also compare our one-shot method with resampling methods including the jackknife and the bootstrap. Simulation results show that the proposed one-shot method provides almost unbiased variance estimations and has satisfactory type I error control and power. Finally, we illustrate the use of the proposed method with an example from the Framingham Heart Study. Copyright © 2014 John Wiley & Sons, Ltd.

  16. The Effective Dynamic Ranges for Glaucomatous Visual Field Progression With Standard Automated Perimetry and Stimulus Sizes III and V.

    PubMed

    Wall, Michael; Zamba, Gideon K D; Artes, Paul H

    2018-01-01

    It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher.

  17. Modeling time-to-event (survival) data using classification tree analysis.

    PubMed

    Linden, Ariel; Yarnold, Paul R

    2017-12-01

    Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (ie, easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.

  18. Comparison of mixed effects models of antimicrobial resistance metrics of livestock and poultry Salmonella isolates from a national monitoring system.

    PubMed

    Bjork, K E; Kopral, C A; Wagner, B A; Dargatz, D A

    2015-12-01

    Antimicrobial use in agriculture is considered a pathway for the selection and dissemination of resistance determinants among animal and human populations. From 1997 through 2003 the U.S. National Antimicrobial Resistance Monitoring System (NARMS) tested clinical Salmonella isolates from multiple animal and environmental sources throughout the United States for resistance to panels of 16-19 antimicrobials. In this study we applied two mixed effects models, the generalized linear mixed model (GLMM) and accelerated failure time frailty (AFT-frailty) model, to susceptible/resistant and interval-censored minimum inhibitory concentration (MIC) metrics, respectively, from Salmonella enterica subspecies enterica serovar Typhimurium isolates from livestock and poultry. Objectives were to compare characteristics of the two models and to examine the effects of time, species, and multidrug resistance (MDR) on the resistance of isolates to individual antimicrobials, as revealed by the models. Fixed effects were year of sample collection, isolate source species and MDR indicators; laboratory study site was included as a random effect. MDR indicators were significant for every antimicrobial and were dominant effects in multivariable models. Temporal trends and source species influences varied by antimicrobial. In GLMMs, the intra-class correlation coefficient ranged up to 0.8, indicating that the proportion of variance accounted for by laboratory study site could be high. AFT models tended to be more sensitive, detecting more curvilinear temporal trends and species differences; however, high levels of left- or right-censoring made some models unstable and results uninterpretable. Results from GLMMs may be biased by cutoff criteria used to collapse MIC data into binary categories, and may miss signaling important trends or shifts if the series of antibiotic dilutions tested does not span a resistance threshold. Our findings demonstrate the challenges of measuring the AMR ecosystem and the complexity of interacting factors, and have implications for future monitoring. We include suggestions for future data collection and analyses, including alternative modeling approaches. Published by Elsevier B.V.

  19. From Roswell to Richmond...To Your Town

    ERIC Educational Resources Information Center

    McShean, Gordon

    1970-01-01

    The library profession must stop playing around with the supposed causes of censorship and address itself to the principle of intellectual freedom. Librarians should confront the censors by exposing their tactics before they use them. (Author/JS)

  20. WEIGHTED LIKELIHOOD ESTIMATION UNDER TWO-PHASE SAMPLING

    PubMed Central

    Saegusa, Takumi; Wellner, Jon A.

    2013-01-01

    We develop asymptotic theory for weighted likelihood estimators (WLE) under two-phase stratified sampling without replacement. We also consider several variants of WLEs involving estimated weights and calibration. A set of empirical process tools is developed, including a Glivenko–Cantelli theorem, a theorem for rates of convergence of M-estimators, and a Donsker theorem for the inverse probability weighted empirical processes under two-phase sampling and sampling without replacement at the second phase. Using these general results, we derive asymptotic distributions of the WLE of a finite-dimensional parameter in a general semiparametric model where an estimator of a nuisance parameter is estimable either at regular or nonregular rates. We illustrate these results and methods in the Cox model with right censoring and interval censoring. We compare the methods via their asymptotic variances under both sampling without replacement and the more usual (and easier to analyze) assumption of Bernoulli sampling at the second phase. PMID:24563559

  1. Simulation program for estimating statistical power of Cox's proportional hazards model assuming no specific distribution for the survival time.

    PubMed

    Akazawa, K; Nakamura, T; Moriguchi, S; Shimada, M; Nose, Y

    1991-07-01

    Small sample properties of the maximum partial likelihood estimates for Cox's proportional hazards model depend on the sample size, the true values of regression coefficients, covariate structure, censoring pattern and possibly baseline hazard functions. Therefore, it would be difficult to construct a formula or table to calculate the exact power of a statistical test for the treatment effect in any specific clinical trial. The simulation program, written in SAS/IML, described in this paper uses Monte-Carlo methods to provide estimates of the exact power for Cox's proportional hazards model. For illustrative purposes, the program was applied to real data obtained from a clinical trial performed in Japan. Since the program does not assume any specific function for the baseline hazard, it is, in principle, applicable to any censored survival data as long as they follow Cox's proportional hazards model.
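
    The cited program is written in SAS/IML and avoids assuming a baseline hazard; purely as an illustration of the Monte Carlo idea, the sketch below estimates power for the treatment-effect Wald test in a Cox model, fixing an exponential baseline for simplicity and assuming the lifelines package:

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        def cox_power(n=100, hr=0.6, censor_rate=0.3, alpha=0.05, n_sim=500, seed=0):
            """Monte Carlo power of the Wald test for a binary treatment effect."""
            rng = np.random.default_rng(seed)
            hits = 0
            for _ in range(n_sim):
                z = rng.integers(0, 2, n)                             # 1:1 treatment arms
                t = rng.exponential(1.0 / np.where(z == 1, hr, 1.0))  # exponential baseline
                c = rng.exponential(1.0 / censor_rate, n)             # random censoring
                df = pd.DataFrame({"T": np.minimum(t, c),
                                   "E": (t <= c).astype(int),
                                   "z": z})
                cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
                hits += cph.summary.loc["z", "p"] < alpha
            return hits / n_sim

        print(cox_power())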

  2. Spatio-temporal precipitation climatology over complex terrain using a censored additive regression model.

    PubMed

    Stauffer, Reto; Mayr, Georg J; Messner, Jakob W; Umlauf, Nikolaus; Zeileis, Achim

    2017-06-15

    Flexible spatio-temporal models are widely used to create reliable and accurate estimates for precipitation climatologies. Most models are based on square root transformed monthly or annual means, for which a normal distribution seems to be appropriate. This assumption becomes invalid on a daily time scale as the observations involve large fractions of zero observations and are limited to non-negative values. We develop a novel spatio-temporal model to estimate the full climatological distribution of precipitation on a daily time scale over complex terrain using a left-censored normal distribution. The results demonstrate that the new method is able to account for the non-normal distribution and the large fraction of zero observations. The new climatology provides the full climatological distribution on a very high spatial and temporal resolution, and is competitive with, or even outperforms, existing methods, even for arbitrary locations.

  3. More evidence for trends in the intergenerational transmission of divorce: a completed cohort approach using data from the general social survey.

    PubMed

    Wolfinger, Nicholas H

    2011-05-01

    Many studies have demonstrated that the children of divorce are disproportionately likely to end their own marriages. In previous work, I showed that the transmission of divorce between generations weakened substantially for General Social Survey (GSS) respondents interviewed between 1973 and 1996 (Wolfinger 1999); Li and Wu (2006, 2008) contended that my finding is a methodological artifact of the GSS's lack of marriage duration data. This article presents a completed-cohort approach to studying divorce using the GSS. The results confirm a decline in the probability of divorce transmission that cannot be explained by the right-censoring bias alleged by Li and Wu. This finding contributes to an ongoing debate about trends in the negative consequences of parental divorce, as well as demonstrating a useful approach to right-censored phenomena when event history data are not available.

  4. Analysis of cigarette purchase task instrument data with a left-censored mixed effects model.

    PubMed

    Liao, Wenjie; Luo, Xianghua; Le, Chap T; Chu, Haitao; Epstein, Leonard H; Yu, Jihnhee; Ahluwalia, Jasjit S; Thomas, Janet L

    2013-04-01

    The drug purchase task is a frequently used instrument for measuring the relative reinforcing efficacy (RRE) of a substance, a central concept in psychopharmacological research. Although a purchase task instrument, such as the cigarette purchase task (CPT), provides a comprehensive and inexpensive way to assess various aspects of a drug's RRE, the application of conventional statistical methods to data generated from such an instrument may not be adequate by simply ignoring or replacing the extra zeros or missing values in the data with arbitrary small consumption values, for example, 0.001. We applied the left-censored mixed effects model to CPT data from a smoking cessation study of college students and demonstrated its superiority over the existing methods with simulation studies. Theoretical implications of the findings, limitations of the proposed method, and future directions of research are also discussed.

  5. Program for Weibull Analysis of Fatigue Data

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2005-01-01

    A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull distribution; (2) Data for contour plots of relative likelihood for two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
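
    A small Python analogue of the censoring-aware likelihood at the core of such a program, on hypothetical fatigue lives: failures contribute the Weibull density, and suspended (type-I censored) units contribute the survival function (the profile-likelihood confidence intervals the program also produces are omitted):

        import numpy as np
        from scipy.optimize import minimize

        def weibull_nll(params, t, failed):
            """Two-parameter Weibull likelihood with suspensions (type-I censoring)."""
            log_k, log_lam = params
            k, lam = np.exp(log_k), np.exp(log_lam)
            z = (t / lam) ** k
            ll = np.where(failed,
                          np.log(k / lam) + (k - 1.0) * np.log(t / lam) - z,  # failures
                          -z)                                                 # suspensions
            return -ll.sum()

        t = np.array([97.0, 151.0, 172.0, 200.0, 200.0, 200.0])  # hypothetical lives
        failed = np.array([1, 1, 1, 0, 0, 0], dtype=bool)        # unfailed removed at 200
        fit = minimize(weibull_nll, x0=[0.0, np.log(t.mean())], args=(t, failed))
        shape, scale = np.exp(fit.x)
        print(shape, scale)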

  6. Analysis of Cigarette Purchase Task Instrument Data with a Left-Censored Mixed Effects Model

    PubMed Central

    Liao, Wenjie; Luo, Xianghua; Le, Chap; Chu, Haitao; Epstein, Leonard H.; Yu, Jihnhee; Ahluwalia, Jasjit S.; Thomas, Janet L.

    2015-01-01

    The drug purchase task is a frequently used instrument for measuring the relative reinforcing efficacy (RRE) of a substance, a central concept in psychopharmacological research. While a purchase task instrument, such as the cigarette purchase task (CPT), provides a comprehensive and inexpensive way to assess various aspects of a drug’s RRE, the application of conventional statistical methods to data generated from such an instrument may not be adequate by simply ignoring or replacing the extra zeros or missing values in the data with arbitrary small consumption values, e.g. 0.001. We applied the left-censored mixed effects model to CPT data from a smoking cessation study of college students and demonstrated its superiority over the existing methods with simulation studies. Theoretical implications of the findings, limitations of the proposed method and future directions of research are also discussed. PMID:23356731

  7. A quantile regression model for failure-time data with time-dependent covariates

    PubMed Central

    Gorfine, Malka; Goldberg, Yair; Ritov, Ya’acov

    2017-01-01

    Summary Since survival data occur over time, important covariates that we wish to consider often also change over time. Such covariates are referred to as time-dependent covariates. Quantile regression offers flexible modeling of survival data by allowing covariate effects to vary across quantiles. This article provides a novel quantile regression model accommodating time-dependent covariates, for analyzing survival data subject to right censoring. Our simple estimation technique assumes the existence of instrumental variables. In addition, we present a doubly robust estimator in the sense of Robins and Rotnitzky (1992, Recovery of information and adjustment for dependent censoring using surrogate markers. In: Jewell, N. P., Dietz, K. and Farewell, V. T. (editors), AIDS Epidemiology. Boston: Birkhäuser, pp. 297–331). The asymptotic properties of the estimators are rigorously studied. Finite-sample properties are demonstrated by a simulation study. The utility of the proposed methodology is demonstrated using the Stanford heart transplant dataset. PMID:27485534

  8. Risk-adjusted monitoring of survival times

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sego, Landon H.; Reynolds, Marion R.; Woodall, William H.

    2009-02-26

    We consider the monitoring of clinical outcomes, where each patient has a different risk of death prior to undergoing a health care procedure. We propose a risk-adjusted survival time CUSUM chart (RAST CUSUM) for monitoring clinical outcomes where the primary endpoint is a continuous, time-to-event variable that may be right censored. Risk adjustment is accomplished using accelerated failure time regression models. We compare the average run length performance of the RAST CUSUM chart to the risk-adjusted Bernoulli CUSUM chart, using data from cardiac surgeries to motivate the details of the comparison. The comparisons show that the RAST CUSUM chart is more efficient at detecting a sudden decrease in the odds of death than the risk-adjusted Bernoulli CUSUM chart, especially when the fraction of censored observations is not too high. We also discuss the implementation of a prospective monitoring scheme using the RAST CUSUM chart.

  9. Advantage of the modified Lunn-McNeil technique over Kalbfleisch-Prentice technique in competing risks

    NASA Astrophysics Data System (ADS)

    Lukman, Iing; Ibrahim, Noor A.; Daud, Isa B.; Maarof, Fauziah; Hassan, Mohd N.

    2002-03-01

    Survival analysis algorithms are often applied in the data mining process. Cox regression is one survival analysis tool that has been used in many areas; for example, it can be used to analyze the failure times of crashed aircraft. Another survival analysis tool is competing risks, where more than one cause of failure acts simultaneously. Lunn and McNeil analyzed competing risks in the survival model using Cox regression with censored data. The modified Lunn-McNeil technique is a simplification of the Lunn-McNeil technique. The Kalbfleisch-Prentice technique involves fitting models separately for each type of failure, treating other failure types as censored. To compare the two techniques (the modified Lunn-McNeil and the Kalbfleisch-Prentice), a simulation study was performed. Samples with various sizes and censoring percentages were generated and fitted using both techniques. The study was conducted by comparing the inference of models using root mean square error (RMSE), power tests, and Schoenfeld residual analysis. The power tests in this study were the likelihood ratio test, the Rao score test, and the Wald statistic. The Schoenfeld residual analysis was conducted to check the proportionality of the model through its covariates. The estimated parameters were computed for the cause-specific hazard situation. Results showed that the modified Lunn-McNeil technique was better than the Kalbfleisch-Prentice technique based on the RMSE measurement and the Schoenfeld residual analysis. However, the Kalbfleisch-Prentice technique was better than the modified Lunn-McNeil technique based on the power test measurements.
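
    A sketch of the Lunn-McNeil data augmentation on simulated data, assuming the lifelines package: each subject is duplicated once per failure type, the row for cause k is an event only if the subject failed from cause k, and a single Cox model stratified on cause recovers cause-specific effects:

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        def lunn_mcneil_augment(df, causes=(1, 2)):
            """One row per subject per failure type; event only for the observed type."""
            rows = []
            for k in causes:
                d = df.copy()
                d["cause"] = k
                d["event_k"] = (df["cause_observed"] == k).astype(int)
                rows.append(d)
            return pd.concat(rows, ignore_index=True)

        rng = np.random.default_rng(4)
        n = 200
        x = rng.normal(size=n)
        t1 = rng.exponential(1.0 / np.exp(0.5 * x))     # cause-1 hazard depends on x
        t2 = rng.exponential(1.0, n)                    # cause-2 hazard does not
        c = rng.uniform(0.5, 3.0, n)                    # independent censoring
        T = np.minimum.reduce([t1, t2, c])
        cause = np.select([t1 == T, t2 == T], [1, 2], default=0)   # 0 = censored
        df = pd.DataFrame({"T": T, "cause_observed": cause, "x": x})

        aug = lunn_mcneil_augment(df)
        aug["x_c2"] = aug["x"] * (aug["cause"] == 2)    # cause-2-specific shift in effect
        aug = aug.drop(columns=["cause_observed"])
        cph = CoxPHFitter().fit(aug, duration_col="T", event_col="event_k",
                                strata=["cause"])
        cph.print_summary()  # coef(x): cause-1 effect; coef(x)+coef(x_c2): cause-2 effect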

  10. The Effective Dynamic Ranges for Glaucomatous Visual Field Progression With Standard Automated Perimetry and Stimulus Sizes III and V

    PubMed Central

    Zamba, Gideon K. D.; Artes, Paul H.

    2018-01-01

    Purpose It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). Methods In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on “censored” datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Results Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Conclusions Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher. PMID:29356822

  11. A Joint Model for Longitudinal Measurements and Survival Data in the Presence of Multiple Failure Types

    PubMed Central

    Elashoff, Robert M.; Li, Gang; Li, Ning

    2009-01-01

    Summary In this article we study a joint model for longitudinal measurements and competing risks survival data. Our joint model provides a flexible approach to handle possible nonignorable missing data in the longitudinal measurements due to dropout. It is also an extension of previous joint models with a single failure type, offering a possible way to model informatively censored events as a competing risk. Our model consists of a linear mixed effects submodel for the longitudinal outcome and a proportional cause-specific hazards frailty submodel (Prentice et al., 1978, Biometrics 34, 541-554) for the competing risks survival data, linked together by some latent random effects. We propose to obtain the maximum likelihood estimates of the parameters by an expectation maximization (EM) algorithm and estimate their standard errors using a profile likelihood method. The developed method works well in our simulation studies and is applied to a clinical trial for the scleroderma lung disease. PMID:18162112

  12. Predicting treatment effect from surrogate endpoints and historical trials: an extrapolation involving probabilities of a binary outcome or survival to a specific time

    PubMed Central

    Sargent, Daniel J.; Buyse, Marc; Burzykowski, Tomasz

    2011-01-01

    SUMMARY Using multiple historical trials with surrogate and true endpoints, we consider various models to predict the effect of treatment on a true endpoint in a target trial in which only a surrogate endpoint is observed. This predicted result is computed using (1) a prediction model (mixture, linear, or principal stratification) estimated from historical trials and the surrogate endpoint of the target trial and (2) a random extrapolation error estimated from successively leaving out each trial among the historical trials. The method applies to either binary outcomes or survival to a particular time that is computed from censored survival data. We compute a 95% confidence interval for the predicted result and validate its coverage using simulation. To summarize the additional uncertainty from using a predicted instead of true result for the estimated treatment effect, we compute its multiplier of standard error. Software is available for download. PMID:21838732
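
    The leave-one-trial-out step can be sketched directly: refit the prediction model with each historical trial held out and record the error made on the held-out trial. A plain linear prediction model stands in here for the paper's mixture/linear/principal-stratification models, and all numbers are hypothetical:

    ```python
    import numpy as np

    # Hypothetical treatment effects from historical trials:
    # column 0 = surrogate-endpoint effect, column 1 = true-endpoint effect.
    trials = np.array([[0.10, 0.08], [0.25, 0.20], [0.18, 0.12],
                       [0.30, 0.27], [0.05, 0.06]])

    errors = []
    for i in range(len(trials)):
        train = np.delete(trials, i, axis=0)
        b, a = np.polyfit(train[:, 0], train[:, 1], 1)  # linear prediction model
        pred = a + b * trials[i, 0]                     # predict the held-out trial
        errors.append(trials[i, 1] - pred)              # one extrapolation error

    # The spread of leave-one-out errors feeds the extra uncertainty attached to
    # the target-trial prediction, on top of ordinary sampling error.
    print(np.std(errors, ddof=1))
    ```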

  13. An investigation of the detection of tornadic thunderstorms by observing storm top features using geosynchronous satellite imagery

    NASA Technical Reports Server (NTRS)

    Anderson, Charles E.

    1991-01-01

    The number of tornado outbreak cases studied in detail was increased from the original 8. Detailed ground and aerial studies were carried out for two outbreak cases of considerable importance. It was demonstrated that multiple regression was able to predict the tornadic potential of a given thunderstorm cell from its cirrus anvil plume characteristics. It was also shown that the plume outflow intensity and the deviation of the plume alignment from storm-relative winds at anvil altitude could account for a fraction of the variance in tornadic potential ranging from 0.37 to 0.82 for linear regression, with values near 0.9 for quadratic regression. Several predictors were used in various discriminant analysis models and in censored regression models to obtain forecasts of whether a cell is tornadic and how strong it could potentially be. The experiments were performed with the synoptic-scale vertical shear in the horizontal wind and with synoptic-scale surface vorticity in the proximity of the cell.
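
    As an illustration of the kind of censored regression model mentioned here, a left-censored (Tobit) regression can be fitted by maximum likelihood with scipy; the data below are simulated and all names are hypothetical:

    ```python
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(0)

    # Hypothetical data: intensity y is only recorded above a floor c (left-censoring).
    n, c = 200, 0.0
    x = rng.normal(size=n)
    y_latent = 0.5 + 1.2 * x + rng.normal(scale=1.0, size=n)
    y = np.maximum(y_latent, c)        # values below c are pushed up to c
    cens = y_latent <= c               # in practice detected via y == c

    def negloglik(theta):
        b0, b1, log_s = theta
        s = np.exp(log_s)
        mu = b0 + b1 * x
        ll_obs = stats.norm.logpdf(y[~cens], mu[~cens], s)   # uncensored: density
        ll_cen = stats.norm.logcdf((c - mu[cens]) / s)       # censored: P(latent <= c)
        return -(ll_obs.sum() + ll_cen.sum())

    res = optimize.minimize(negloglik, x0=[0.0, 0.0, 0.0], method="BFGS")
    print(res.x)   # estimates of b0, b1, log sigma
    ```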

  14. The banning of Frederick Wiseman's movie, "Titicut Follies"--censoring the evidence.

    PubMed

    Neuhauser, Duncan

    2003-01-01

    Poor quality of care, in the form of indifference and indignity, can be seen in filmmaker Frederick Wiseman's classic documentary, "Titicut Follies," which was banned from public showing for 25 years.

  15. Development and evaluation of a composite risk score to predict kidney transplant failure.

    PubMed

    Moore, Jason; He, Xiang; Shabir, Shazia; Hanvesakul, Rajesh; Benavente, David; Cockwell, Paul; Little, Mark A; Ball, Simon; Inston, Nicholas; Johnston, Atholl; Borrows, Richard

    2011-05-01

    Although risk factors for kidney transplant failure are well described, prognostic risk scores to estimate risk in prevalent transplant recipients are limited. Development and validation of risk-prediction instruments. The development data set included 2,763 prevalent patients more than 12 months posttransplant enrolled into the LOTESS (Long Term Efficacy and Safety Surveillance) Study. The validation data set included 731 patients who underwent transplant at a single UK center. Estimated glomerular filtration rate (eGFR) and other risk factors were evaluated using Cox regression. Scores for death-censored and overall transplant failure were based on the summed hazard ratios for baseline predictor variables. Predictive performance was assessed using calibration (Hosmer-Lemeshow statistic), discrimination (C statistic), and clinical reclassification (net reclassification improvement) compared with eGFR alone. In the development data set, 196 patients died and another 225 experienced transplant failure. eGFR, recipient age, race, serum urea and albumin levels, declining eGFR, and prior acute rejection predicted death-censored transplant failure. eGFR, recipient age, sex, serum urea and albumin levels, and declining eGFR predicted overall transplant failure. In the validation data set, 44 patients died and another 101 experienced transplant failure. The weighted scores comprising these variables showed adequate discrimination and calibration for death-censored (C statistic, 0.83; 95% CI, 0.75-0.91; Hosmer-Lemeshow χ² P = 0.8) and overall (C statistic, 0.70; 95% CI, 0.64-0.77; Hosmer-Lemeshow χ² P = 0.5) transplant failure. However, the scores failed to reclassify risk compared with eGFR alone (net reclassification improvements of 7.6% [95% CI, -0.2 to 13.4; P = 0.09] and 4.3% [95% CI, -2.7 to 11.8; P = 0.3] for death-censored and overall transplant failure, respectively). Retrospective analysis of predominantly cyclosporine-treated patients; limited study size and categorization of variables may limit power to detect effect. Although the scores performed well regarding discrimination and calibration, clinically relevant risk reclassification over eGFR alone was not evident, emphasizing the stringent requirements for such scores. Further studies are required to develop and refine this process. Copyright © 2011 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
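
    A composite score of this general kind (covariates weighted by their Cox log hazard ratios) and its discrimination can be sketched with lifelines. This is one common construction, not necessarily the authors' exact scoring rule; the weights and columns are hypothetical:

    ```python
    import pandas as pd
    from lifelines.utils import concordance_index

    # Hypothetical per-variable weights derived from Cox log hazard ratios.
    weights = {"egfr": -0.03, "age": 0.02, "albumin": -0.05}

    df = pd.DataFrame({
        "egfr":    [45, 60, 30, 55, 25],
        "age":     [50, 38, 64, 47, 70],
        "albumin": [38, 42, 33, 40, 30],
        "years":   [4.2, 6.1, 1.8, 5.0, 0.9],  # follow-up time
        "failed":  [0,   0,   1,   0,   1],    # death-censored graft failure
    })

    score = sum(w * df[v] for v, w in weights.items())

    # C statistic: concordance_index expects higher scores ~ longer survival,
    # so pass the negated risk score.
    print(concordance_index(df["years"], -score, df["failed"]))
    ```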

  16. On non-parametric maximum likelihood estimation of the bivariate survivor function.

    PubMed

    Prentice, R L

    The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.

  17. Analysis of an experiment aimed at improving the reliability of transmission centre shafts.

    PubMed

    Davis, T P

    1995-01-01

    Smith (1991) presents a paper proposing the use of Weibull regression models to establish the dependence of failure data (usually times) on covariates related to the design of the test specimens and test procedures. In his article Smith made the point that good experimental design was as important in reliability applications as elsewhere, and in view of the current interest in design inspired by Taguchi and others, we pay some attention in this article to that topic. A real case study from the Ford Motor Company is presented. Our main approach is to utilize suggestions in the literature for applying standard least squares techniques of experimental analysis even when nonnormal errors and censoring are likely. This approach lacks theoretical justification, but its appeal is its simplicity and flexibility. For completeness we also include some analysis based on the proportional hazards model and, in an attempt to link back to Smith (1991), look at a Weibull regression model.
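
    A Weibull regression with right censoring of the kind Smith proposed can be fitted in a few lines with lifelines' accelerated failure time fitter; the design factors and times below are hypothetical stand-ins for the centre-shaft experiment, kept tiny purely for illustration:

    ```python
    import pandas as pd
    from lifelines import WeibullAFTFitter

    # Hypothetical reliability experiment: two design factors, right-censored times.
    df = pd.DataFrame({
        "hours":  [310, 450, 890, 620, 1000, 740, 1000, 530],
        "failed": [1,   1,   1,   1,   0,    1,   0,    1],   # 0 = test suspended
        "heat":   [1,   1,   0,   0,   1,    0,   0,    1],   # factor A
        "finish": [0,   1,   0,   1,   1,    0,   1,    0],   # factor B
    })

    aft = WeibullAFTFitter()
    aft.fit(df, duration_col="hours", event_col="failed")
    print(aft.summary[["coef", "p"]])   # covariate effects on the log time scale
    ```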

  18. Sensitivity of goodness-of-fit statistics to rainfall data rounding off

    NASA Astrophysics Data System (ADS)

    Deidda, Roberto; Puliga, Michelangelo

    An analysis based on the L-moments theory suggests adopting the generalized Pareto distribution to interpret daily rainfall depths recorded by the rain-gauge network of the Hydrological Survey of the Sardinia Region. Nevertheless, a substantial problem, not yet completely resolved, arises in the estimation of a left-censoring threshold able to assure a good fit of rainfall data to the generalized Pareto distribution. In order to detect an optimal threshold while keeping the largest possible number of data, we chose to apply a “failure-to-reject” method based on goodness-of-fit tests, as proposed by Choulakian and Stephens [Choulakian, V., Stephens, M.A., 2001. Goodness-of-fit tests for the generalized Pareto distribution. Technometrics 43, 478-484]. Unfortunately, the application of the test, using percentage points provided by Choulakian and Stephens (2001), did not succeed in detecting a useful threshold value in most analyzed time series. A deeper analysis revealed that these failures are mainly due to the presence of large quantities of rounded-off values among sample data, affecting the distribution of goodness-of-fit statistics and leading to significant departures from the percentage points expected for continuous random variables. A procedure based on Monte Carlo simulations is thus proposed to overcome these problems.
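
    The Monte Carlo correction can be sketched as follows: fit the generalized Pareto distribution to the excesses over a candidate threshold, compute a Cramér-von Mises statistic, and rebuild its null distribution by simulation, rounding the simulated values the way the gauges do. A sketch under these assumptions, with all constants hypothetical:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def gof_pvalue(excesses, n_sim=200, round_to=None):
        """Monte Carlo p-value of a Cramer-von Mises test for the GPD."""
        def cvm_stat(x):
            c, loc, scale = stats.genpareto.fit(x, floc=0.0)
            u = np.sort(stats.genpareto.cdf(x, c, loc, scale))
            i = np.arange(1, len(u) + 1)
            return 1.0 / (12 * len(u)) + np.sum((u - (2 * i - 1) / (2 * len(u))) ** 2)

        t_obs = cvm_stat(excesses)
        c, _, scale = stats.genpareto.fit(excesses, floc=0.0)
        t_sim = []
        for _ in range(n_sim):
            x = stats.genpareto.rvs(c, scale=scale, size=len(excesses), random_state=rng)
            if round_to:                        # mimic rain-gauge rounding off
                x = np.round(x / round_to) * round_to
                x = x[x > 0]
            t_sim.append(cvm_stat(x))
        return np.mean(np.array(t_sim) >= t_obs)

    daily = rng.gamma(0.4, 8.0, size=3000)      # hypothetical daily rainfall depths
    threshold = 5.0
    print(gof_pvalue(daily[daily > threshold] - threshold, round_to=0.2))
    ```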

  19. A Bayesian approach to truncated data sets: An application to Malmquist bias in Supernova Cosmology

    NASA Astrophysics Data System (ADS)

    March, Marisa Cristina

    2018-01-01

    A problem commonly encountered in statistical analysis of data is that of truncated data sets. A truncated data set is one in which a number of data points are completely missing from a sample; this is in contrast to a censored sample, in which partial information is missing from some data points. In astrophysics this problem is commonly seen in a magnitude-limited survey, where the survey is incomplete at fainter magnitudes, that is, certain faint objects are simply not observed. The effect of this 'missing data' is manifested as Malmquist bias and can result in biases in parameter inference if it is not accounted for. In frequentist methodologies the Malmquist bias is often corrected for by analysing many simulations and computing the appropriate correction factors. One problem with this methodology is that the corrections are model dependent. In this poster we derive a Bayesian methodology for accounting for truncated data sets in problems of parameter inference and model selection. We first show the methodology for a simple Gaussian linear model and then go on to show the method for accounting for a truncated data set in the case of cosmological parameter inference with a magnitude-limited type Ia supernova survey.
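
    The key ingredient of the Bayesian treatment is that each observed point's likelihood must be renormalized by its probability of entering the sample at all. For a hard magnitude limit with Gaussian scatter this gives a truncated Gaussian likelihood; a minimal sketch, with the limit and parameters hypothetical:

    ```python
    import numpy as np
    from scipy import stats

    def truncated_loglik(mu, sigma, m_obs, m_lim):
        """Log-likelihood of Gaussian data kept only when m < m_lim (truncation).

        Each point's density is renormalized by P(m < m_lim | mu, sigma),
        which is what corrects the Malmquist-type bias."""
        ll = stats.norm.logpdf(m_obs, mu, sigma)
        log_norm = stats.norm.logcdf(m_lim, mu, sigma)   # selection probability
        return np.sum(ll - log_norm)

    rng = np.random.default_rng(2)
    m_true = rng.normal(24.0, 0.8, size=5000)            # hypothetical magnitudes
    m_obs = m_true[m_true < 24.5]                        # survey limit cuts the faint end

    # The naive mean is biased bright; the truncated likelihood recovers mu ~ 24.
    grid = np.linspace(23.5, 24.5, 101)
    best = grid[np.argmax([truncated_loglik(m, 0.8, m_obs, 24.5) for m in grid])]
    print(m_obs.mean(), best)
    ```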

  20. Quantification of long-term erosion rates from root exposure/tree age relationships in an alpine meadow catchment

    NASA Astrophysics Data System (ADS)

    Scuderi, Louis A.

    2017-04-01

    Erosion rates derived using dendrogeomorphology have been used to quantify slope degradation in many localities globally. However, with the exception of the western United States, most of these estimates are derived from short-lived trees whose lifetimes may not adequately reflect the complete range of slope processes which can include erosion, deposition, impacts of extreme events and even long-term hiatuses. Erosion rate estimates at a given site using standard techniques therefore reflect censored local point erosion estimates rather than long-term rates. We applied a modified dendrogeomorphic approach to rapidly estimate erosion rates from dbh/age relationships to assess the difference between short and long-term rates and found that the mean short-term rate was 0.13 cm/yr with high variability, while the uncensored long-term rate was 0.06 cm/yr. The results indicate that rates calculated from short-lived trees, while possibly appropriate for local short-term point estimates of erosion, are highly variable and may overestimate regional long-term rates by > 50%. While these findings do not invalidate the use of dendrogeomorphology to estimate erosion rates they do suggest that care must be taken to select older trees that incorporate a range of slope histories in order to best approximate regional long-term rates.

  1. Censorship in Children's Literature: What Every Educator Should Know.

    ERIC Educational Resources Information Center

    Jalongo, Mary Renck; Creany, Anne Drolett

    1991-01-01

    Defines censorship and differentiates censorship from selection. Reviews the history of censorship and recent research trends. Describes typical censorable content and the consequences of censorship for libraries, books, and authors. Suggests strategies educators can use in dealing with censorship. (BC)

  2. Protecting Holden Caulfield and His Friends from the Censors.

    ERIC Educational Resources Information Center

    Jenkinson, Edward B.

    1985-01-01

    Surveys the textbook censorship picture over the past decade with particular attention to the activities of Tim LaHaye and Norma and Mel Gabler. Suggests 10 steps teachers can take to try and protect controversial texts from censorship. (RBW)

  3. Delineating the role of multiple intraarticular corticosteroid injections in the management of juvenile idiopathic arthritis in the biologic era.

    PubMed

    Papadopoulou, Charalampia; Kostik, Mikhail; Gonzalez-Fernandez, Maria Isabel; Bohm, Marek; Nieto-Gonzalez, Juan Carlos; Pistorio, Angela; Lanni, Stefano; Consolaro, Alessandro; Martini, Alberto; Ravelli, Angelo

    2013-07-01

    To investigate the outcome and predicting factors of multiple intraarticular corticosteroid (IAC) injections in children with juvenile idiopathic arthritis (JIA). The clinical charts of patients who received their first IAC injection in ≥3 joints between January 2002 and December 2011 were reviewed. The corticosteroid used was triamcinolone hexacetonide for large joints and methylprednisolone acetate for small or difficult to access joints. In each patient, the followup period after IAC injection was censored in case of synovitis flare or at the last visit with continued remission. Predictors included sex, age at disease onset, JIA category, antinuclear antibody (ANA) status, age and disease duration, disease course, general anesthesia, number and type of injected joints, acute-phase reactants, and concomitant systemic medications. A total of 220 patients who had 1,096 joints injected were included. Following IAC therapy, 66.4% of patients had synovitis flare after a median of 0.5 years, whereas 33.6% of patients had sustained remission after a median of 0.9 years. The cumulative probability of survival without synovitis flare was 50.0%, 31.5%, and 19.5% at 1, 2, and 3 years, respectively. On Cox regression analysis, positive C-reactive protein value, negative ANA, lack of concomitant methotrexate administration, and a polyarticular (versus an oligoarticular) disease course were the strongest predictors for synovitis flare. Multiple IAC injection therapy induced sustained remission of joint synovitis in a substantial proportion of patients. A controlled trial comparing multiple IAC injection therapy and methotrexate versus methotrexate and a tumor necrosis factor antagonist is worthy of consideration. Copyright © 2013 by the American College of Rheumatology.

  4. Hiroshima as Politics and History.

    ERIC Educational Resources Information Center

    Sherwin, Martin J.

    1995-01-01

    Argues that the objections raised to the Enola Gay exhibit are rooted in Cold War politics. Maintains that this historical myopia exemplifies the need for challenging historical inquiry. Characterizes opposition to the exhibit as largely political and discusses demands made to censor exhibit material. (MJP)

  5. Evaluation of Options for Interpreting Environmental ...

    EPA Pesticide Factsheets

    Secondary data from the BioResponse Operational Testing and Evaluation project were used to study six options for interpreting culture-based microbial count data sets that include left-censored data, that is, measurements that are less than established quantification limits and/or detection limits.

  6. Censorship Revisited.

    ERIC Educational Resources Information Center

    Fernandez, Melanie

    Governments, groups, and individuals have always tried to control information. This paper examines censorship, particularly textbook censorship and its effect upon the curriculum, and opposes the recent trend to censor textbooks in public schools. Since the mission of public schooling involves indoctrination and socialization as much as education,…

  7. Do Kids Need Government Censors?

    ERIC Educational Resources Information Center

    Rabkin, Rhoda

    2002-01-01

    Fashioning public policies restricting children's access to entertainment glamorizing violence, sex, drugs, and vulgarity is a complex task. The recently introduced Media Marketing Accountability Act would empower the federal government to regulate advertising of entertainment products to youth. Suggests that this power is undesirable compared to…

  8. The impact of follow-up type and missed deaths on population-based cancer survival studies for Hispanics and Asians.

    PubMed

    Pinheiro, Paulo S; Morris, Cyllene R; Liu, Lihua; Bungum, Timothy J; Altekruse, Sean F

    2014-11-01

    The accuracy of cancer survival statistics relies on the quality of death linkages and follow-up information collected by population-based cancer registries. Methodological issues with survival data by race-ethnicity in the United States, in particular for Hispanics and Asians, have not been well studied and may undermine our understanding of survival disparities. Based on Surveillance, Epidemiology, and End Results (SEER)-18 data, we analyzed existing biases in survival statistics when comparing the four largest racial-ethnic groups in the United States: whites, blacks, Hispanics, and Asians. We compared the "reported alive" method for calculation of survival, which is appropriate when date of last alive contact is available for all cases, with the "presumed alive" method used when dates of last contact are unavailable. Cox regression was applied to calculate the likelihood of incomplete follow-up (those with less than 5 years of vital status information) according to racial-ethnic group and stage of diagnosis. Finally, potentially missed deaths were estimated based on the numbers of cases with incomplete follow-up for highly fatal cancers. The presumed alive method overestimated survival compared with the reported alive method by as much as 0.9-6.2 percentage points depending on the cancer site among Hispanics and by 0.4-2.7 percentage points among Asians. In SEER data, Hispanics and Asians are more likely to have incomplete follow-up than whites or blacks. The assumption of random censoring across race-ethnicity is not met, as among non-white cases, those who have a worse prognosis are more likely to have incomplete follow-up than those with a better prognosis (P < .05). Moreover, death ascertainment is not equal across racial-ethnic groups. Overall, 3% of cancer deaths were missed among Hispanics and Asians compared with less than 0.5% among blacks and whites. Cancer survival studies involving Hispanics and Asians should be interpreted with caution because the currently available data overtly inflate survival in these populations. Censoring is clearly nonrandom across race-ethnicity, meaning that findings of Hispanic and Asian survival advantages may be biased. Problematic death linkages among Hispanics and Asians contribute to missing deaths and overestimated survival. More complete follow-up with at least 5 years of information on vital status as well as improved death linkages will decisively increase the validity of survival estimates for these growing populations. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Utility of Risk Models in Decision Making After Radical Prostatectomy: Lessons from a Natural History Cohort of Intermediate- and High-Risk Men.

    PubMed

    Ross, Ashley E; Yousefi, Kasra; Davicioni, Elai; Ghadessi, Mercedeh; Johnson, Michael H; Sundi, Debasish; Tosoian, Jeffery J; Han, Misop; Humphreys, Elizabeth B; Partin, Alan W; Walsh, Patrick C; Trock, Bruce J; Schaeffer, Edward M

    2016-03-01

    Current guidelines suggest adjuvant radiation therapy for men with adverse pathologic features (APFs) at radical prostatectomy (RP). We examine at-risk men treated only with RP until the time of metastasis. To evaluate whether clinicopathologic risk models can help guide postoperative therapeutic decision making. Men with National Comprehensive Cancer Network intermediate- or high-risk localized prostate cancer undergoing RP in the prostate-specific antigen (PSA) era were identified (n=3089). Only men with initial undetectable PSA after surgery and who received no therapy prior to metastasis were included. APFs were defined as pT3 disease or positive surgical margins. Area under the receiver operating characteristic curve (AUC) for time to event data was used to measure the discrimination performance of the risk factors. Cumulative incidence curves were constructed using Fine and Gray competing risks analysis to estimate the risk of biochemical recurrence (BCR) or metastasis, taking censoring and death due to other causes into consideration. Overall, 43% of the cohort (n=1327) had APFs at RP. Median follow-up for censored patients was 5 yr. Cumulative incidence of metastasis was 6% at 10 yr after RP for all patients. Cumulative incidence of metastasis among men with APFs was 7.5% at 10 yr after RP. Among men with BCR, the incidence of metastasis was 38% 5 yr after BCR. At 10 yr after RP, time-dependent AUC for predicting metastasis by Cancer of the Prostate Risk Assessment Postsurgical or Eggener risk models was 0.81 (95% confidence interval [CI], 0.72-0.97) and 0.78 (95% CI, 0.67-0.97) in the APF population, respectively. At 5 yr after BCR, these values were lower (0.58 [95% CI, 0.50-0.66] and 0.70 [95% CI, 0.63-0.76]) among those who developed BCR. Use of risk model cut points could substantially reduce overtreatment while minimally increasing undertreatment (ie, use of an Eggener cut point of 2.5% for treatment of men with APFs would spare 46% from treatment while only allowing for metastatic events in 1% at 10 yr after RP). Use of risk models reduces overtreatment and should be a routine part of patient counseling when considering adjuvant therapy. Risk model performance is significantly reduced among men with BCR. Use of current risk models can help guide decision making regarding therapy after surgery and reduce overtreatment. Copyright © 2015 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  10. A population-based study of hospital care costs during five years after TIA and stroke

    PubMed Central

    Luengo-Fernandez, Ramon; Gray, Alastair M.; Rothwell, Peter M.

    2016-01-01

    Background and Purpose Few studies have evaluated long-term costs after stroke onset, with almost no cost data for TIA. We studied hospital costs during the 5 years after TIA or stroke in a population-based study. Methods Patients from a UK population-based cohort study (Oxford Vascular Study) were recruited from 2002 to 2007. Analysis was based on follow-up until 2010. Hospital resource usage was obtained from patients’ hospital records and valued using 2008/09 unit costs. As not all patients had full 5-year follow-up, we used non-parametric censoring techniques. Results Among 485 TIA and 729 stroke patients ascertained and included, mean censor-adjusted 5-year hospital costs after index stroke were $25,741 (95% CI: 23,659-27,914), with costs varying considerably by severity: $21,134 after minor stroke, $33,119 after moderate stroke, and $28,552 after severe stroke. For the 239 surviving stroke patients who had reached final follow-up, mean costs were $24,383 (20,156-28,595), with over half of costs ($12,972) being incurred in the first year after the event. After index TIA, the mean censor-adjusted 5-year costs were $18,091 (15,947-20,258). A multivariate analysis showed that event severity, recurrent stroke and coronary events after the index event were independent predictors of 5-year costs. Differences by stroke subtype were mostly explained by stroke severity and subsequent events. Conclusions Long-term hospital costs after TIA and stroke are considerable, but are mainly incurred over the first year after the index event. Event severity and suffering subsequent stroke and coronary events after the index event accounted for much of the increase in costs. PMID:23160884

  11. Estimating the Distribution of the Incubation Periods of Human Avian Influenza A(H7N9) Virus Infections.

    PubMed

    Virlogeux, Victor; Li, Ming; Tsang, Tim K; Feng, Luzhao; Fang, Vicky J; Jiang, Hui; Wu, Peng; Zheng, Jiandong; Lau, Eric H Y; Cao, Yu; Qin, Ying; Liao, Qiaohong; Yu, Hongjie; Cowling, Benjamin J

    2015-10-15

    A novel avian influenza virus, influenza A(H7N9), emerged in China in early 2013 and caused severe disease in humans, with infections occurring most frequently after recent exposure to live poultry. The distribution of A(H7N9) incubation periods is of interest to epidemiologists and public health officials, but estimation of the distribution is complicated by interval censoring of exposures. Imputation of the midpoint of intervals was used in some early studies, resulting in estimated mean incubation times of approximately 5 days. In this study, we estimated the incubation period distribution of human influenza A(H7N9) infections using exposure data available for 229 patients with laboratory-confirmed A(H7N9) infection from mainland China. A nonparametric model (Turnbull) and several parametric models accounting for the interval censoring in some exposures were fitted to the data. For the best-fitting parametric model (Weibull), the mean incubation period was 3.4 days (95% confidence interval: 3.0, 3.7) and the variance was 2.9 days; results were very similar for the nonparametric Turnbull estimate. Under the Weibull model, the 95th percentile of the incubation period distribution was 6.5 days (95% confidence interval: 5.9, 7.1). The midpoint approximation for interval-censored exposures led to overestimation of the mean incubation period. Public health observation of potentially exposed persons for 7 days after exposure would be appropriate. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
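
    With interval censoring, each incubation time is known only to lie in some interval [L, R], and each observation's Weibull likelihood contribution becomes F(R) - F(L). A minimal sketch of such a fit with scipy; the intervals below are hypothetical, not the study data:

    ```python
    import numpy as np
    from scipy import optimize, stats

    # Hypothetical incubation intervals in days: each case's incubation time is
    # known only to fall in [lo, hi] because the exposure time is interval-censored.
    lo = np.array([0.5, 1.0, 2.0, 0.5, 1.5, 2.5, 1.0, 3.0])
    hi = np.array([2.5, 3.0, 4.5, 3.5, 3.0, 5.0, 2.0, 6.0])

    def negloglik(theta):
        shape, scale = np.exp(theta)                    # keep parameters positive
        p = (stats.weibull_min.cdf(hi, shape, scale=scale)
             - stats.weibull_min.cdf(lo, shape, scale=scale))
        return -np.sum(np.log(np.clip(p, 1e-300, None)))

    res = optimize.minimize(negloglik, x0=[0.0, 1.0], method="Nelder-Mead")
    shape, scale = np.exp(res.x)
    print("median:", scale * np.log(2.0) ** (1.0 / shape))
    print("95th percentile:", scale * (-np.log(0.05)) ** (1.0 / shape))
    ```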

  12. Statistical Techniques to Analyze Pesticide Data Program Food Residue Observations.

    PubMed

    Szarka, Arpad Z; Hayworth, Carol G; Ramanarayanan, Tharacad S; Joseph, Robert S I

    2018-06-26

    The U.S. EPA conducts dietary-risk assessments to ensure that levels of pesticides on food in the U.S. food supply are safe. Often these assessments utilize conservative residue estimates, maximum residue levels (MRLs), and a high-end estimate derived from registrant-generated field-trial data sets. A more realistic estimate of consumers' pesticide exposure from food may be obtained by utilizing residues from food-monitoring programs, such as the Pesticide Data Program (PDP) of the U.S. Department of Agriculture. A substantial portion of food-residue concentrations in PDP monitoring programs are below the limits of detection (left-censored), which makes the comparison of regulatory-field-trial and PDP residue levels difficult. In this paper, we present a novel adaptation of established statistical techniques, the Kaplan-Meier estimator (K-M), robust regression on order statistics (ROS), and the maximum-likelihood estimator (MLE), to quantify pesticide-residue concentrations in the presence of heavily censored data sets. The examined statistical approaches include the most commonly used parametric and nonparametric methods for handling left-censored data that have been used in the fields of medical and environmental sciences. This work presents a case study in which data of thiamethoxam residue on bell pepper generated from registrant field trials were compared with PDP-monitoring residue values. The results from the statistical techniques were evaluated and compared with commonly used simple substitution methods for the determination of summary statistics. It was found that the MLE is the most appropriate statistical method to analyze this residue data set. Using the MLE technique, the data analyses showed that the median and mean PDP bell pepper residue levels were approximately 19 and 7 times lower, respectively, than the corresponding statistics of the field-trial residues.
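
    The MLE approach for left-censored residues is short to write down: detected values contribute a density term and nondetects contribute the probability of falling below their reporting limit. A minimal lognormal sketch with hypothetical values:

    ```python
    import numpy as np
    from scipy import optimize, stats

    # Hypothetical residues (ppm); nondetects carry only their reporting limit.
    detected = np.array([0.012, 0.030, 0.007, 0.055, 0.019])
    limits   = np.array([0.005, 0.005, 0.010, 0.010])   # censored below these

    def negloglik(theta):
        mu, log_s = theta
        s = np.exp(log_s)
        # Detects: normal density of log residues (the 1/x Jacobian term is
        # constant in the parameters, so it can be dropped for the MLE).
        ll = stats.norm.logpdf(np.log(detected), mu, s).sum()
        # Nondetects: probability of falling below the reporting limit.
        ll += stats.norm.logcdf((np.log(limits) - mu) / s).sum()
        return -ll

    res = optimize.minimize(negloglik, x0=[np.log(0.01), 0.0], method="Nelder-Mead")
    mu, s = res.x[0], np.exp(res.x[1])
    print("median:", np.exp(mu), "mean:", np.exp(mu + s**2 / 2))   # lognormal stats
    ```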

  13. Support Vector Hazards Machine: A Counting Process Framework for Learning Risk Scores for Censored Outcomes.

    PubMed

    Wang, Yuanjia; Chen, Tianle; Zeng, Donglin

    2016-01-01

    Learning risk scores to predict dichotomous or continuous outcomes using machine learning approaches has been studied extensively. However, how to learn risk scores for time-to-event outcomes subject to right censoring has received little attention until recently. Existing approaches rely on inverse probability weighting or rank-based regression, which may be inefficient. In this paper, we develop a new support vector hazards machine (SVHM) approach to predict censored outcomes. Our method is based on predicting the counting process associated with the time-to-event outcomes among subjects at risk via a series of support vector machines. Introducing counting processes to represent time-to-event data leads to a connection between support vector machines in supervised learning and hazards regression in standard survival analysis. To account for different at-risk populations at observed event times, a time-varying offset is used in estimating risk scores. The resulting optimization is a convex quadratic programming problem that can easily incorporate non-linearity using the kernel trick. We demonstrate an interesting link from the profiled empirical risk function of SVHM to the Cox partial likelihood. We then formally show that SVHM is optimal in discriminating the covariate-specific hazard function from the population average hazard function, and establish the consistency and learning rate of the predicted risk using the estimated risk scores. Simulation studies show improved prediction accuracy of the event times using SVHM compared to existing machine learning methods and conventional approaches. Finally, we analyze two real-world biomedical studies where we use clinical markers and neuroimaging biomarkers to predict age-at-onset of a disease, and demonstrate the superiority of SVHM in distinguishing high-risk versus low-risk subjects.

  14. Assessing the treatment effect in a randomized controlled trial with extensive non-adherence: the EVOLVE trial.

    PubMed

    Kubo, Yumi; Sterling, Lulu Ren; Parfrey, Patrick S; Gill, Karminder; Mahaffey, Kenneth W; Gioni, Ioanna; Trotman, Marie-Louise; Dehmel, Bastian; Chertow, Glenn M

    2015-01-01

    Intention-to-treat (ITT) analysis is widely used to establish efficacy in randomized clinical trials. However, in a long-term outcomes study where non-adherence to study drug is substantial, the on-treatment effect of the study drug may be underestimated using the ITT analysis. The analyses presented herein are from the EVOLVE trial, a double-blind, placebo-controlled, event-driven cardiovascular outcomes study conducted to assess whether a treatment regimen including cinacalcet compared with placebo in addition to other conventional therapies reduces the risk of mortality and major cardiovascular events in patients receiving hemodialysis with secondary hyperparathyroidism. Pre-specified sensitivity analyses were performed to assess the impact of non-adherence on the estimated effect of cinacalcet. These analyses included lag-censoring, inverse probability of censoring weights (IPCW), rank preserving structural failure time model (RPSFTM) and iterative parameter estimation (IPE). The relative hazard (cinacalcet versus placebo) of mortality and major cardiovascular events was 0.93 (95% confidence interval 0.85, 1.02) using the ITT analysis; 0.85 (0.76, 0.95) using lag-censoring analysis; 0.81 (0.70, 0.92) using IPCW; 0.85 (0.66, 1.04) using RPSFTM and 0.85 (0.75, 0.96) using IPE. These analyses, while not providing definitive evidence, suggest that the intervention may have an effect while subjects are receiving treatment. The ITT method remains the established method to evaluate efficacy of a new treatment; however, additional analyses should be considered to assess the on-treatment effect when substantial non-adherence to study drug is expected or observed. Copyright © 2015 John Wiley & Sons, Ltd.

  15. [Data distribution and transformation in population based sampling survey of viral load in HIV positive men who have sex with men in China].

    PubMed

    Dou, Z; Chen, J; Jiang, Z; Song, W L; Xu, J; Wu, Z Y

    2017-11-10

    Objective: To understand the distribution of population viral load (PVL) data in HIV-infected men who have sex with men (MSM), fit a distribution function, and explore the appropriate estimating parameter of PVL. Methods: The detection limit of viral load (VL) was ≤50 copies/ml. Box-Cox transformation and normal distribution tests were used to describe the general distribution characteristics of the original and transformed PVL data, then the stable distribution function was fitted with a test of goodness of fit. Results: The original PVL data fitted a skewed distribution with a variation coefficient of 622.24%, and had a multimodal distribution after Box-Cox transformation with optimal parameter (λ) of -0.11. The distribution of the PVL data over the detection limit was skewed and heavy-tailed when transformed by Box-Cox with optimal λ = 0. By fitting the distribution function of the transformed data over the detection limit, it matched the stable distribution (SD) function (α = 1.70, β = -1.00, γ = 0.78, δ = 4.03). Conclusions: The original PVL data had some censored data below the detection limit, and the data over the detection limit had an abnormal distribution with a large degree of variation. When the proportion of censored data was large, it was inappropriate to replace the censored values with half the detection limit. The log-transformed data over the detection limit fitted the SD. The median (M) and inter-quartile range (IQR) of the log-transformed data can be used to describe the central tendency and dispersion of the data over the detection limit.
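
    The Box-Cox transformation and the suggested summaries (median and IQR of log-transformed values over the detection limit) are directly available in scipy/numpy; a sketch on simulated viral loads, with all numbers hypothetical:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Hypothetical viral loads (copies/ml); keep only values over the 50 copies/ml
    # detection limit rather than substituting half the limit for censored ones.
    vl = rng.lognormal(mean=8.0, sigma=2.0, size=1000)
    over_limit = vl[vl > 50]

    transformed, lam = stats.boxcox(over_limit)   # lam is the fitted lambda
    log_vl = np.log10(over_limit)

    m = np.median(log_vl)
    q1, q3 = np.percentile(log_vl, [25, 75])
    print(f"lambda={lam:.2f}  median={m:.2f}  IQR=({q1:.2f}, {q3:.2f})")
    ```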

  16. Estimation of the incubation period of influenza A (H1N1-2009) among imported cases: addressing censoring using outbreak data at the origin of importation.

    PubMed

    Nishiura, Hiroshi; Inaba, Hisashi

    2011-03-07

    Empirical estimates of the incubation period of influenza A (H1N1-2009) have been limited. We estimated the incubation period among confirmed imported cases who traveled to Japan from Hawaii during the early phase of the 2009 pandemic (n=72). We addressed censoring and employed an infection-age structured argument to explicitly model the daily frequency of illness onset after departure. We assumed uniform and exponential distributions for the frequency of exposure in Hawaii, and the hazard rate of infection for the latter assumption was retrieved, in Hawaii, from local outbreak data. The maximum likelihood estimates of the median incubation period range from 1.43 to 1.64 days according to different modeling assumptions, consistent with a published estimate based on a New York school outbreak. The likelihood values of the different modeling assumptions do not differ greatly from each other, although models with the exponential assumption yield slightly shorter incubation periods than those with the uniform exposure assumption. Differences between our proposed approach and a published method for doubly interval-censored analysis highlight the importance of accounting for the dependence of the frequency of exposure on the survival function of incubating individuals among imported cases. A truncation of the density function of the incubation period due to an absence of illness onset during the exposure period also needs to be considered. When the data generating process is similar to that among imported cases, and when the incubation period is close to or shorter than the length of exposure, accounting for these aspects is critical for long exposure times. Copyright © 2010 Elsevier Ltd. All rights reserved.

  17. Early statin use is an independent predictor of long-term graft survival.

    PubMed

    Moreso, Francesc; Calvo, Natividad; Pascual, Julio; Anaya, Fernando; Jiménez, Carlos; Del Castillo, Domingo; Sánchez-Plumed, Jaime; Serón, Daniel

    2010-06-01

    Background. Statin use in renal transplantation has been associated with a lower risk of patient death but not with an improvement of graft functional survival. The aim of this study is to evaluate the effect of statin use in graft survival, death-censored graft survival and patient survival using the data recorded on the Spanish Late Allograft Dysfunction Study Group. Patients and methods. Patients receiving a renal allograft in Spain in 1990, 1994, 1998 and 2002 were considered. Since the mean follow-up in the 2002 cohort was 3 years, statin use was analysed considering its introduction during the first year or during the initial 2 years after transplantation. Univariate and multivariate Cox regression analyses with a propensity score for statin use were employed to analyse graft survival, death-censored graft survival and patient survival. Results. In the 4682 evaluated patients, the early statin use after transplantation significantly increased from 1990 to 2002 (12.7%, 27.9%, 47.7% and 53.0%, P < 0.001). Statin use during the first year was not associated with graft or patient survival. Statin use during the initial 2 years was associated with a lower risk of graft failure (relative risk [RR] = 0.741 and 95% confidence interval [CI] = 0.635-0.866, P < 0.001) and patient death (RR = 0.806 and 95% CI = 0.656-0.989, P = 0.039). Death-censored graft survival was not associated with statin use during the initial 2 years. Conclusion. The early introduction of statin treatment after transplantation is associated with a significant decrease in late graft failure due to a risk reduction in patient death.

  18. Early statin use is an independent predictor of long-term graft survival

    PubMed Central

    Moreso, Francesc; Calvo, Natividad; Pascual, Julio; Anaya, Fernando; Jiménez, Carlos; del Castillo, Domingo; Sánchez-Plumed, Jaime; Serón, Daniel

    2010-01-01

    Background. Statin use in renal transplantation has been associated with a lower risk of patient death but not with an improvement of graft functional survival. The aim of this study is to evaluate the effect of statin use in graft survival, death-censored graft survival and patient survival using the data recorded on the Spanish Late Allograft Dysfunction Study Group. Patients and methods. Patients receiving a renal allograft in Spain in 1990, 1994, 1998 and 2002 were considered. Since the mean follow-up in the 2002 cohort was 3 years, statin use was analysed considering its introduction during the first year or during the initial 2 years after transplantation. Univariate and multivariate Cox regression analyses with a propensity score for statin use were employed to analyse graft survival, death-censored graft survival and patient survival. Results. In the 4682 evaluated patients, the early statin use after transplantation significantly increased from 1990 to 2002 (12.7%, 27.9%, 47.7% and 53.0%, P < 0.001). Statin use during the first year was not associated with graft or patient survival. Statin use during the initial 2 years was associated with a lower risk of graft failure (relative risk [RR] = 0.741 and 95% confidence interval [CI] = 0.635–0.866, P < 0.001) and patient death (RR = 0.806 and 95% CI = 0.656–0.989, P = 0.039). Death-censored graft survival was not associated with statin use during the initial 2 years. Conclusion. The early introduction of statin treatment after transplantation is associated with a significant decrease in late graft failure due to a risk reduction in patient death. PMID:20508861

  19. A joint model of persistent human papillomavirus infection and cervical cancer risk: Implications for cervical cancer screening

    PubMed Central

    Katki, Hormuzd A.; Cheung, Li C.; Fetterman, Barbara; Castle, Philip E.; Sundaram, Rajeshwari

    2014-01-01

    Summary New cervical cancer screening guidelines in the US and many European countries recommend that women get tested for human papillomavirus (HPV). To inform decisions about screening intervals, we calculate the increase in precancer/cancer risk per year of continued HPV infection. However, both time to onset of precancer/cancer and time to HPV clearance are interval-censored, and onset of precancer/cancer strongly informatively censors HPV clearance. We analyze this bivariate informatively interval-censored data by developing a novel joint model for time to clearance of HPV and time to precancer/cancer using shared random-effects, where the estimated mean duration of each woman’s HPV infection is a covariate in the submodel for time to precancer/cancer. The model was fit to data on 9,553 HPV-positive/Pap-negative women undergoing cervical cancer screening at Kaiser Permanente Northern California, data that were pivotal to the development of US screening guidelines. We compare the implications for screening intervals of this joint model to those from population-average marginal models of precancer/cancer risk. In particular, after 2 years the marginal population-average precancer/cancer risk was 5%, suggesting a 2-year interval to control population-average risk at 5%. In contrast, the joint model reveals that almost all women exceeding 5% individual risk in 2 years also exceeded 5% in 1 year, suggesting that a 1-year interval is better to control individual risk at 5%. The example suggests that sophisticated risk models capable of predicting individual risk may have different implications than population-average risk models that are currently used for informing medical guideline development. PMID:26556961

  20. A joint model of persistent human papillomavirus infection and cervical cancer risk: Implications for cervical cancer screening.

    PubMed

    Katki, Hormuzd A; Cheung, Li C; Fetterman, Barbara; Castle, Philip E; Sundaram, Rajeshwari

    2015-10-01

    New cervical cancer screening guidelines in the US and many European countries recommend that women get tested for human papillomavirus (HPV). To inform decisions about screening intervals, we calculate the increase in precancer/cancer risk per year of continued HPV infection. However, both time to onset of precancer/cancer and time to HPV clearance are interval-censored, and onset of precancer/cancer strongly informatively censors HPV clearance. We analyze this bivariate informatively interval-censored data by developing a novel joint model for time to clearance of HPV and time to precancer/cancer using shared random-effects, where the estimated mean duration of each woman's HPV infection is a covariate in the submodel for time to precancer/cancer. The model was fit to data on 9,553 HPV-positive/Pap-negative women undergoing cervical cancer screening at Kaiser Permanente Northern California, data that were pivotal to the development of US screening guidelines. We compare the implications for screening intervals of this joint model to those from population-average marginal models of precancer/cancer risk. In particular, after 2 years the marginal population-average precancer/cancer risk was 5%, suggesting a 2-year interval to control population-average risk at 5%. In contrast, the joint model reveals that almost all women exceeding 5% individual risk in 2 years also exceeded 5% in 1 year, suggesting that a 1-year interval is better to control individual risk at 5%. The example suggests that sophisticated risk models capable of predicting individual risk may have different implications than population-average risk models that are currently used for informing medical guideline development.

  1. Effect of Micronutrient Supplementation on Disease Progression in Asymptomatic, Antiretroviral-Naive, HIV-Infected Adults in Botswana A Randomized Clinical Trial

    PubMed Central

    Baum, Marianna K.; Campa, Adriana; Lai, Shenghan; Martinez, Sabrina Sales; Tsalaile, Lesedi; Burns, Patricia; Farahani, Mansour; Li, Yinghui; van Widenfelt, Erik; Page, John Bryan; Bussmann, Hermann; Fawzi, Wafaie W.; Moyo, Sikhulele; Makhema, Joseph; Thior, Ibou; Essex, Myron; Marlink, Richard

    2015-01-01

    IMPORTANCE Micronutrient deficiencies occur early in human immunodeficiency virus (HIV) infection, and supplementation with micronutrients may be beneficial; however, its effectiveness has not been investigated early in HIV disease among adults who are antiretroviral therapy (ART) naive. OBJECTIVE To investigate whether long-term micronutrient supplementation is effective and safe in delaying disease progression when implemented early in adults infected with HIV subtype C who are ART-naive. DESIGN, SETTING, AND PARTICIPANTS Randomized clinical trial of supplementation with either daily multivitamins (B vitamins and vitamins C and E), selenium alone, or multivitamins with selenium vs placebo in a factorial design for 24 months. The study was conducted in 878 patients infected with HIV subtype C with a CD4 cell count greater than 350/μL who were not receiving ART at Princess Marina Hospital in Gaborone, Botswana, between December 2004 and July 2009. INTERVENTIONS Daily oral supplements of B vitamins and vitamins C and E, selenium alone, or multivitamins plus selenium, compared with placebo. MAIN OUTCOMES AND MEASURES Reaching a CD4 cell count less than 200/μL until May 2008; after this date, reaching a CD4 cell count of 250/μL or less, consistent with the standard of care in Botswana for initiation of ART at the time of the study. RESULTS There were 878 participants enrolled and randomized into the study. All participants were ART-naive throughout the study. In intent-to-treat analysis, participants receiving the combined supplement of multivitamins plus selenium had a significantly lower risk vs placebo of reaching CD4 cell count 250/μL or less (adjusted hazard ratio [HR], 0.46; 95% CI, 0.25-0.85; P = .01; absolute event rate [AER], 4.79/100 person-years; censoring rate, 0.92; 17 events; placebo AER, 9.22/100 person-years; censoring rate, 0.85; 32 events). Multivitamins plus selenium in a single supplement, vs placebo, also reduced the risk of secondary events of combined outcomes for disease progression (CD4 cell count ≤250/μL, AIDS-defining conditions, or AIDS-related death, whichever occurred earlier [adjusted HR, 0.56; 95% CI, 0.33-0.95; P = .03; AER, 6.48/100 person-years; censoring rate, 0.90; 23 events]). There was no effect of supplementation on HIV viral load. Multivitamins alone and selenium supplementation alone were not statistically different from placebo for any end point. Reported adverse events were adjudicated as unlikely to be related to the intervention, and there were no notable differences in incidence of HIV-related and health-related events among study groups. CONCLUSIONS AND RELEVANCE In ART-naive HIV-infected adults, 24-month supplementation with a single supplement containing multivitamins and selenium was safe and significantly reduced the risk of immune decline and morbidity. PMID:24281460

  2. Effect of micronutrient supplementation on disease progression in asymptomatic, antiretroviral-naive, HIV-infected adults in Botswana: a randomized clinical trial.

    PubMed

    Baum, Marianna K; Campa, Adriana; Lai, Shenghan; Sales Martinez, Sabrina; Tsalaile, Lesedi; Burns, Patricia; Farahani, Mansour; Li, Yinghui; van Widenfelt, Erik; Page, John Bryan; Bussmann, Hermann; Fawzi, Wafaie W; Moyo, Sikhulele; Makhema, Joseph; Thior, Ibou; Essex, Myron; Marlink, Richard

    2013-11-27

    Micronutrient deficiencies occur early in human immunodeficiency virus (HIV) infection, and supplementation with micronutrients may be beneficial; however, its effectiveness has not been investigated early in HIV disease among adults who are antiretroviral therapy (ART) naive. To investigate whether long-term micronutrient supplementation is effective and safe in delaying disease progression when implemented early in adults infected with HIV subtype C who are ART-naive. Randomized clinical trial of supplementation with either daily multivitamins (B vitamins and vitamins C and E), selenium alone, or multivitamins with selenium vs placebo in a factorial design for 24 months. The study was conducted in 878 patients infected with HIV subtype C with a CD4 cell count greater than 350/μL who were not receiving ART at Princess Marina Hospital in Gaborone, Botswana, between December 2004 and July 2009. Daily oral supplements of B vitamins and vitamins C and E, selenium alone, or multivitamins plus selenium, compared with placebo. Reaching a CD4 cell count less than 200/μL until May 2008; after this date, reaching a CD4 cell count of 250/μL or less, consistent with the standard of care in Botswana for initiation of ART at the time of the study. There were 878 participants enrolled and randomized into the study. All participants were ART-naive throughout the study. In intent-to-treat analysis, participants receiving the combined supplement of multivitamins plus selenium had a significantly lower risk vs placebo of reaching CD4 cell count 250/μL or less (adjusted hazard ratio [HR], 0.46; 95% CI, 0.25-0.85; P = .01; absolute event rate [AER], 4.79/100 person-years; censoring rate, 0.92; 17 events; placebo AER, 9.22/100 person-years; censoring rate, 0.85; 32 events). Multivitamins plus selenium in a single supplement, vs placebo, also reduced the risk of secondary events of combined outcomes for disease progression (CD4 cell count ≤250/μL, AIDS-defining conditions, or AIDS-related death, whichever occurred earlier [adjusted HR, 0.56; 95% CI, 0.33-0.95; P = .03; AER, 6.48/100 person-years; censoring rate, 0.90; 23 events]). There was no effect of supplementation on HIV viral load. Multivitamins alone and selenium supplementation alone were not statistically different from placebo for any end point. Reported adverse events were adjudicated as unlikely to be related to the intervention, and there were no notable differences in incidence of HIV-related and health-related events among study groups. In ART-naive HIV-infected adults, 24-month supplementation with a single supplement containing multivitamins and selenium was safe and significantly reduced the risk of immune decline and morbidity. Micronutrient supplementation may be effective when started in the early stages of HIV disease.

  3. Facets : a Cloudcompare Plugin to Extract Geological Planes from Unstructured 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Dewez, T. J. B.; Girardeau-Montaut, D.; Allanic, C.; Rohmer, J.

    2016-06-01

    Geological planar facets (stratification, fault, joint…) are key features to unravel the tectonic history of a rock outcrop or appreciate the stability of a hazardous rock cliff. Measuring their spatial attitude (dip and strike) is generally performed by hand with a compass/clinometer, which is time consuming, requires some degree of censoring (i.e. refusing to measure some features judged unimportant at the time), is not always possible for fractures higher up on the outcrop, and is somewhat hazardous. 3D virtual geological outcrops hold the potential to alleviate these issues. What was lacking was a way to efficiently segment massive 3D point clouds into individual planar facets inside a convenient software environment. FACETS is a dedicated plugin within CloudCompare v2.6.2 (http://cloudcompare.org/) implemented to perform planar facet extraction, calculate facet dip and dip direction (i.e. azimuth of steepest descent), and report the extracted data in interactive stereograms. Two algorithms perform the segmentation: Kd-Tree and Fast Marching. Both divide the point cloud into sub-cells, then compute elementary planar objects and aggregate them progressively according to a planarity threshold into polygons. The boundaries of the polygons are adjusted around segmented points with a tension parameter, and the facet polygons can be exported as 3D polygon shapefiles towards third-party GIS software or simply as ASCII comma-separated files. One of the great features of FACETS is the capability to explore planar objects, but also 3D points with normals, with the stereogram tool. Poles can be readily displayed, queried and manually segmented interactively. The plugin blends seamlessly into CloudCompare to leverage all its other 3D point cloud manipulation features. A demonstration of the tool illustrates these different features. While designed for geological applications, FACETS could be more widely applied to any planar objects.
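
    The dip/dip-direction computation for a single facet reduces to fitting a plane to the segmented points and converting the plane normal to angles. A minimal numpy sketch of that geometric core, assuming coordinates in a local east-north-up frame (this illustrates the geometry, not the plugin's implementation):

    ```python
    import numpy as np

    def dip_and_dip_direction(points):
        """Fit a plane to an (n, 3) array of x=east, y=north, z=up points and
        return (dip, dip_direction) in degrees."""
        centered = points - points.mean(axis=0)
        _, _, vt = np.linalg.svd(centered)   # least-squares plane fit
        n = vt[-1]                           # normal = smallest singular direction
        if n[2] < 0:                         # orient the normal upward
            n = -n
        dip = np.degrees(np.arccos(n[2]))    # angle of the plane from horizontal
        dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0  # azimuth of steepest descent
        return dip, dip_dir

    # Hypothetical facet dipping ~30 degrees toward the east (azimuth 090).
    pts = np.array([[0, 0, 0.0], [1, 0, -np.tan(np.radians(30))],
                    [0, 1, 0.0], [1, 1, -np.tan(np.radians(30))]])
    print(dip_and_dip_direction(pts))   # -> (30.0, 90.0)
    ```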

  4. METHODS OF DEALING WITH VALUES BELOW THE LIMIT OF DETECTION USING SAS

    EPA Science Inventory

    Due to limitations of chemical analysis procedures, small concentrations cannot be precisely measured. These concentrations are said to be below the limit of detection (LOD). In statistical analyses, these values are often censored and substituted with a constant value, such ...

  5. Celebrating Censored Books.

    ERIC Educational Resources Information Center

    Karolides, Nicholas J., Ed.; Burress, Lee, Ed.

    Intended to provide rationales for the use of a group of controversial books in American public schools, this manual explains the educational values of 33 of the most frequently challenged books. Detailed rationales include such information as historical perspective, literary elements, pertinence, and thematic content. Some of the titles include…

  6. The Censored Curriculum: The Problem with Textbooks Today.

    ERIC Educational Resources Information Center

    Ornstein, Allan C.

    1992-01-01

    All major textbook companies conform to preferences of larger educational markets (California, Illinois, New York, Texas, and Florida) and exercise self-censorship to appease dissenting factions and avoid alienating pressure groups. Recent censorship controversies have involved sanctity of family, criticism of free enterprise system,…

  7. Supreme Court Deals Blow to Student Journalists.

    ERIC Educational Resources Information Center

    Gynn, Ann

    1989-01-01

    Covers the U.S. Supreme Court decision in Hazelwood School District v. Kuhlmeier, which gave principals the right to censor school publications. In "One Student's Pursuit of Journalism," Alexandra Salas relates one student journalist's experience, including internships, from high school through the end of college. (LS)

  8. LOCAL EM ESTIMATION OF THE HAZARD FUNCTION FOR INTERVAL CENSORED DATA. (R824757)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  9. Multivariate Bayesian analysis of Gaussian, right censored Gaussian, ordered categorical and binary traits using Gibbs sampling

    PubMed Central

    Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just

    2003-01-01

    A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
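
    The data-augmentation step for a right-censored Gaussian trait can be shown in a univariate toy version of such a sampler: censored records are imputed from a normal truncated above their censoring point, then location and scale are updated from their conjugate conditionals. Flat priors are assumed here for simplicity; the paper's model is multivariate with unknown covariance matrices:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    # Hypothetical right-censored Gaussian trait: for censored records only the
    # censoring point is recorded, and the true value lies above it.
    y = np.array([3.1, 4.5, 2.8, 5.0, 5.0, 5.0, 3.9, 4.2])
    cens = np.array([0, 0, 0, 1, 1, 1, 0, 0], dtype=bool)   # 1 = censored at y

    n, draws = len(y), []
    mu, sigma = y.mean(), y.std()
    for it in range(2000):
        # 1) Data augmentation: sample censored values from N(mu, sigma^2)
        #    truncated to (c, inf), via scipy's standardized parameterization.
        a = (y[cens] - mu) / sigma
        z = y.copy()
        z[cens] = stats.truncnorm.rvs(a, np.inf, loc=mu, scale=sigma, random_state=rng)
        # 2) Gibbs updates under flat priors: mu | sigma, z then sigma^2 | mu, z.
        mu = rng.normal(z.mean(), sigma / np.sqrt(n))
        sigma = np.sqrt(1.0 / rng.gamma(n / 2.0, 2.0 / np.sum((z - mu) ** 2)))
        draws.append((mu, sigma))

    print(np.mean(draws[500:], axis=0))   # posterior means after burn-in
    ```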

  10. Analysis of longitudinal marginal structural models.

    PubMed

    Bryan, Jenny; Yu, Zhuo; Van Der Laan, Mark J

    2004-07-01

    In this article we construct and study estimators of the causal effect of a time-dependent treatment on survival in longitudinal studies. We employ a particular marginal structural model (MSM), proposed by Robins (2000), and follow a general methodology for constructing estimating functions in censored data models. The inverse probability of treatment weighted (IPTW) estimator of Robins et al. (2000) is used as an initial estimator and forms the basis for an improved, one-step estimator that is consistent and asymptotically linear when the treatment mechanism is consistently estimated. We extend these methods to handle informative censoring. The proposed methodology is employed to estimate the causal effect of exercise on mortality in a longitudinal study of seniors in Sonoma County. A simulation study demonstrates the bias of naive estimators in the presence of time-dependent confounders and also shows the efficiency gain of the IPTW estimator, even in the absence of such confounding. The efficiency gain of the improved, one-step estimator is demonstrated through simulation.
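
    The core of the IPTW construction is easiest to see at a single time point: each subject is weighted by the inverse of the estimated probability of the treatment actually received, creating a pseudo-population in which treatment is independent of the measured confounders. A minimal point-treatment sketch (made-up variables; the time-dependent weights and censoring adjustments of the paper are omitted):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Simulated point-treatment data with a single confounder L.
    n = 5000
    L = rng.normal(size=n)
    A = rng.binomial(1, 1 / (1 + np.exp(-0.5 * L)))   # treatment depends on L
    Y = 1.0 * A + 2.0 * L + rng.normal(size=n)        # true effect of A is 1

    # Estimate the treatment mechanism, then weight by 1 / P(A = a | L).
    ps = LogisticRegression().fit(L.reshape(-1, 1), A).predict_proba(
        L.reshape(-1, 1))[:, 1]
    w = np.where(A == 1, 1 / ps, 1 / (1 - ps))

    # IPTW estimate of the marginal effect: weighted mean difference.
    effect = (np.average(Y[A == 1], weights=w[A == 1])
              - np.average(Y[A == 0], weights=w[A == 0]))
    print(f"IPTW effect estimate: {effect:.2f} (truth: 1.00)")
    ```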

  11. Nonparametric estimation of median survival times with applications to multi-site or multi-center studies.

    PubMed

    Rahbar, Mohammad H; Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C

    2018-01-01

    We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples and hypothesis information to improve efficiency. We compare the efficiency of the proposed shrinkage estimation procedure to the unrestricted and combined estimators through extensive simulation studies. Our results indicate that the performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient. However, it becomes inconsistent when homogeneity fails. The proposed shrinkage estimator, on the other hand, remains efficient: its efficiency decreases as the survival medians deviate from equality, but it is expected to remain at least as good as that of the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate application of these methods to estimating the median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study.
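
    A generic shrinkage construction in this spirit pulls each site's Kaplan-Meier median toward the pooled median. The sketch below uses a fixed illustrative weight lam, whereas the paper derives a data-driven weight from the evidence for homogeneity:

    ```python
    import numpy as np

    def km_median(time, event):
        """Median of the Kaplan-Meier curve from right-censored data."""
        order = np.argsort(time)
        t, d = time[order], event[order]
        surv, s, at_risk = [], 1.0, len(t)
        for i in range(len(t)):
            if d[i]:
                s *= 1 - 1 / at_risk   # step down at each observed event
            surv.append(s)
            at_risk -= 1
        below = np.nonzero(np.array(surv) <= 0.5)[0]
        return t[below[0]] if below.size else np.inf

    rng = np.random.default_rng(3)
    # Three sites with right-censored exponential event times.
    raw = [(rng.exponential(10, 80), rng.exponential(25, 80)) for _ in range(3)]
    data = [(np.minimum(e, c), (e <= c).astype(int)) for e, c in raw]

    site_med = np.array([km_median(t, d) for t, d in data])
    pooled = km_median(np.concatenate([t for t, _ in data]),
                       np.concatenate([d for _, d in data]))

    lam = 0.5   # illustrative; the paper estimates the weight from the data
    shrunk = lam * pooled + (1 - lam) * site_med
    print("site medians:   ", np.round(site_med, 2))
    print("pooled median:  ", round(pooled, 2))
    print("shrunk medians: ", np.round(shrunk, 2))
    ```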

  12. Expectation maximization-based likelihood inference for flexible cure rate models with Weibull lifetimes.

    PubMed

    Balakrishnan, Narayanaswamy; Pal, Suvra

    2016-08-01

    Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored, and the expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and, assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with real data on cancer recurrence. © The Author(s) 2013.
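
    For the mixture cure special case with Weibull latencies (one of the models nested in the Conway-Maxwell-Poisson family), the EM steps take a simple explicit form: the E-step computes each censored subject's posterior probability of being susceptible, and the M-step maximizes the resulting weighted, censored Weibull log-likelihood. A minimal sketch under exactly those assumptions:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def em_mixture_cure(t, d, n_iter=50):
        """EM for a mixture cure model: S(t) = (1 - pi) + pi * exp(-(t/b)**a).

        t: observed times; d: event indicator (1 = event, 0 = right censored).
        Returns (susceptible fraction pi, Weibull shape a, Weibull scale b).
        """
        pi, a, b = 0.5, 1.0, float(np.median(t))
        for _ in range(n_iter):
            # E-step: events are susceptible with probability 1; censored
            # subjects get pi * S_u(t) / (1 - pi + pi * S_u(t)).
            S_u = np.exp(-(t / b) ** a)
            w = np.where(d == 1, 1.0, pi * S_u / (1 - pi + pi * S_u))
            pi = w.mean()   # M-step, part 1: closed form for pi

            # M-step, part 2: weighted censored Weibull likelihood in (a, b).
            def negll(theta):
                a_, b_ = np.exp(theta)   # optimize on the log scale
                logf = (np.log(a_) - np.log(b_)
                        + (a_ - 1) * np.log(t / b_) - (t / b_) ** a_)
                logS = -(t / b_) ** a_
                return -np.sum(w * np.where(d == 1, logf, logS))

            a, b = np.exp(minimize(negll, np.log([a, b]),
                                   method="Nelder-Mead").x)
        return pi, a, b

    rng = np.random.default_rng(7)
    n, pi_true = 400, 0.6
    cured = rng.random(n) > pi_true
    event_t = rng.weibull(1.5, n) * 4.0      # susceptible: Weibull(1.5, 4)
    event_t[cured] = np.inf                  # cured subjects never fail
    cens = rng.exponential(8.0, n)
    t, d = np.minimum(event_t, cens), (event_t <= cens).astype(int)

    print(em_mixture_cure(t, d))             # roughly (0.6, 1.5, 4.0)
    ```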

  13. Missing data and censoring in the analysis of progression-free survival in oncology clinical trials.

    PubMed

    Denne, J S; Stone, A M; Bailey-Iacona, R; Chen, T-T

    2013-01-01

    Progression-free survival (PFS) is increasingly used as a primary endpoint in oncology clinical trials. However, trial conduct is often such that PFS data on some patients may be partially missing either due to incomplete follow-up for progression, or due to data that may be collected but confounded by patients stopping randomized therapy or starting alternative therapy prior to progression. Regulatory guidance on how to handle these patients in the analysis and whether to censor these patients differs between agencies. We present results of a reanalysis of 28 Phase III trials from 12 companies or institutions performed by the Pharmaceutical Research and Manufacturers of America-sponsored PFS Expert Team. We show that analyses not adhering to the intention-to-treat principle tend to give hazard ratio estimates further from unity and describe several factors associated with this shift. We present illustrative simulations to support these findings and provide recommendations for the analysis of PFS.

  14. Varying coefficient subdistribution regression for left-truncated semi-competing risks data.

    PubMed

    Li, Ruosha; Peng, Limin

    2014-10-01

    Semi-competing risks data frequently arise in biomedical studies when time to a disease landmark event is subject to dependent censoring by death, the observation of which, however, is not precluded by the occurrence of the landmark event. In observational studies, the analysis of such data can be further complicated by left truncation. In this work, we study a varying coefficient subdistribution regression model for left-truncated semi-competing risks data. Our method appropriately accounts for the specific truncation and censoring features of the data and, moreover, has the flexibility to accommodate potentially varying covariate effects. The proposed method can be easily implemented and the resulting estimators are shown to have nice asymptotic properties. We also present inference procedures, such as Kolmogorov-Smirnov type and Cramér-von Mises type hypothesis tests, for the covariate effects. Simulation studies and an application to the Denmark diabetes registry demonstrate good finite-sample performance and practical utility of the proposed method.

  15. A Local Agreement Pattern Measure Based on Hazard Functions for Survival Outcomes

    PubMed Central

    Dai, Tian; Guo, Ying; Peng, Limin; Manatunga, Amita K.

    2017-01-01

    Assessing agreement is often of interest in biomedical and clinical research when measurements are obtained on the same subjects by different raters or methods. Most classical agreement methods have focused on global summary statistics, which cannot be used to describe various local agreement patterns. The objective of this work is to study the local agreement pattern between two continuous measurements subject to censoring. In this paper, we propose a new agreement measure based on bivariate hazard functions to characterize the local agreement pattern between two correlated survival outcomes. The proposed measure naturally accommodates censored observations, fully captures the dependence structure between bivariate survival times and provides detailed information on how the strength of agreement evolves over time. We develop a nonparametric estimation method for the proposed local agreement pattern measure and study theoretical properties including strong consistency and asymptotic normality. We then evaluate the performance of the estimator through simulation studies and illustrate the method using a prostate cancer data example. PMID:28724196

  16. A local agreement pattern measure based on hazard functions for survival outcomes.

    PubMed

    Dai, Tian; Guo, Ying; Peng, Limin; Manatunga, Amita K

    2018-03-01

    Assessing agreement is often of interest in biomedical and clinical research when measurements are obtained on the same subjects by different raters or methods. Most classical agreement methods have focused on global summary statistics, which cannot be used to describe various local agreement patterns. The objective of this work is to study the local agreement pattern between two continuous measurements subject to censoring. In this article, we propose a new agreement measure based on bivariate hazard functions to characterize the local agreement pattern between two correlated survival outcomes. The proposed measure naturally accommodates censored observations, fully captures the dependence structure between bivariate survival times and provides detailed information on how the strength of agreement evolves over time. We develop a nonparametric estimation method for the proposed local agreement pattern measure and study theoretical properties including strong consistency and asymptotic normality. We then evaluate the performance of the estimator through simulation studies and illustrate the method using a prostate cancer data example. © 2017, The International Biometric Society.

  17. Bayesian Approach for Flexible Modeling of Semicompeting Risks Data

    PubMed Central

    Han, Baoguang; Yu, Menggang; Dignam, James J.; Rathouz, Paul J.

    2016-01-01

    Semicompeting risks data arise when two types of events, non-terminal and terminal, are observed. When the terminal event occurs first, it censors the non-terminal event, but not vice versa. To account for possible dependent censoring of the non-terminal event by the terminal event and to improve prediction of the terminal event using the non-terminal event information, it is crucial to model their association properly. Motivated by a breast cancer clinical trial data analysis, we extend the well-known illness-death models to allow flexible random effects to capture heterogeneous association structures in the data. Our extension also represents a generalization of the popular shared frailty models that usually assume that the non-terminal event does not affect the hazards of the terminal event beyond a frailty term. We propose a unified Bayesian modeling approach that can utilize existing software packages for both model fitting and individual specific event prediction. The approach is demonstrated via both simulation studies and a breast cancer data set analysis. PMID:25274445

  18. Individual heterogeneity in growth and age at sexual maturity: A gamma process analysis of capture–mark–recapture data

    USGS Publications Warehouse

    Link, William; Hesed, Kyle Miller

    2015-01-01

    Knowledge of organisms’ growth rates and ages at sexual maturity is important for conservation efforts and a wide variety of studies in ecology and evolutionary biology. However, these life history parameters may be difficult to obtain from natural populations: individuals encountered may be of unknown age, information on age at sexual maturity may be uncertain and interval-censored, and growth data may include both individual heterogeneity and measurement errors. We analyzed mark–recapture data for Red-backed Salamanders (Plethodon cinereus) to compare sex-specific growth rates and ages at sexual maturity. Aging of individuals was made possible by the use of a von Bertalanffy model of growth, complemented with models for interval-censored and imperfect observations at sexual maturation. Individual heterogeneity in growth was modeled through the use of Gamma processes. Our analysis indicates that female P. cinereus mature earlier and grow more quickly than males, growing to nearly identical asymptotic size distributions as males.

  19. Censored data treatment using additional information in intelligent medical systems

    NASA Astrophysics Data System (ADS)

    Zenkova, Z. N.

    2015-11-01

    Statistical procedures are a very important and significant part of modern intelligent medical systems. They are used for processing, mining, and analysis of different types of data about patients and their diseases; they help to make various decisions regarding diagnosis, treatment, medication, surgery, etc. In many cases the data can be censored or incomplete. It is a well-known fact that censoring considerably reduces the efficiency of statistical procedures. In this paper the author makes a brief review of the approaches that allow improvement of the procedures using additional information, and describes a modified estimator of an unknown cumulative distribution function involving additional information about a quantile which is known exactly. The additional information is used by applying a projection of a classical estimator to a set of estimators with certain properties. The Kaplan-Meier estimator is considered as an estimator of the unknown cumulative distribution function, and the properties of the modified estimator are investigated for the case of single right censoring by means of simulations.
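
    The base estimator here is the ordinary Kaplan-Meier curve; the modification projects it onto the set of distribution functions that honor the exactly known quantile. The sketch below fits the base estimator with lifelines and then applies a crude piecewise rescaling through a known median, purely as a stand-in for the paper's projection:

    ```python
    import numpy as np
    from lifelines import KaplanMeierFitter

    rng = np.random.default_rng(5)
    event = rng.exponential(10, 150)
    cens = rng.exponential(15, 150)
    T, E = np.minimum(event, cens), event <= cens

    kmf = KaplanMeierFitter().fit(T, event_observed=E)
    F = 1 - kmf.survival_function_["KM_estimate"]   # estimated CDF
    grid, F = F.index.to_numpy(), F.to_numpy()

    # Suppose the median q50 is known exactly from external information.
    q50 = 10 * np.log(2)                 # true median of Exp(scale = 10)
    F_q = np.interp(q50, grid, F)
    # Crude constraint: rescale each side so that F(q50) = 0.5 exactly.
    F_mod = np.where(grid <= q50, F * (0.5 / F_q),
                     1 - (1 - F) * (0.5 / (1 - F_q)))
    print(f"unconstrained F(q50) = {F_q:.3f}; constrained to 0.500")
    ```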

  20. THE DiskMass SURVEY. III. STELLAR KINEMATICS VIA CROSS-CORRELATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westfall, Kyle B.; Bershady, Matthew A.; Verheijen, Marc A. W., E-mail: westfall@astro.rug.nl, E-mail: mab@astro.wisc.edu, E-mail: verheyen@astro.rug.nl

    2011-03-15

    We describe a new cross-correlation (CC) approach used by our survey to derive stellar kinematics from galaxy-continuum spectroscopy. This approach adopts the formal error analysis derived by Statler, but properly handles spectral masks. Thus, we address the primary concerns regarding application of the CC method to censored data, while maintaining its primary advantage by consolidating kinematic and template-mismatch information toward different regions of the CC function. We identify a systematic error in the nominal CC method of approximately 10% in velocity dispersion incurred by a mistreatment of detector-censored data, which is eliminated by our new method. We derive our approach from first principles, and we use Monte Carlo simulations to demonstrate its efficacy. An identical set of Monte Carlo simulations performed using the well-established penalized-pixel-fitting code of Cappellari and Emsellem compares favorably with the results from our newly implemented software. Finally, we provide a practical demonstration of this software by extracting stellar kinematics from SparsePak spectra of UGC 6918.

  1. The problem of natural funnel asymmetries: a simulation analysis of meta-analysis in macroeconomics.

    PubMed

    Callot, Laurent; Paldam, Martin

    2011-06-01

    Effect sizes in macroeconomics are estimated by regressions on data published by statistical agencies. Funnel plots are a representation of the distribution of the resulting regression coefficients. They are normally much wider than predicted by the t-ratio of the coefficients and often asymmetric. The standard method of meta-analysts in economics assumes that the asymmetries are due to publication bias causing censoring, and adjusts the average accordingly. The paper shows that some funnel asymmetries may be 'natural', so that they occur without censoring. We investigate such asymmetries by simulating funnels by pairs of data generating processes (DGPs) and estimating models (EMs), in which the EM has the problem that it disregards a property of the DGP. The problems are data dependency, structural breaks, non-normal residuals, non-linearity, and omitted variables. We show that some of these problems generate funnel asymmetries. When they do, the standard method often fails. Copyright © 2011 John Wiley & Sons, Ltd.

  2. Concordance measure and discriminatory accuracy in transformation cure models.

    PubMed

    Zhang, Yilong; Shao, Yongzhao

    2018-01-01

    Many populations of early-stage cancer patients have non-negligible latent cure fractions that can be modeled using transformation cure models. However, there is a lack of statistical metrics to evaluate the prognostic utility of biomarkers in this context, due to the challenges associated with unknown cure status and heavy censoring. In this article, we develop general concordance measures as evaluation metrics for the discriminatory accuracy of transformation cure models, including the so-called promotion time cure models and mixture cure models. We introduce explicit formulas for the consistent estimates of the concordance measures, and show that their asymptotically normal distributions do not depend on the unknown censoring distribution. The estimates work for both parametric and semiparametric transformation models as well as transformation cure models. Numerical feasibility of the estimates and their robustness to the censoring distributions are illustrated via simulation studies and demonstrated using a melanoma data set. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Nonparametric estimation of median survival times with applications to multi-site or multi-center studies

    PubMed Central

    Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C.

    2018-01-01

    We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples and hypothesis information to improve efficiency. We compare the efficiency of the proposed shrinkage estimation procedure to the unrestricted and combined estimators through extensive simulation studies. Our results indicate that the performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient. However, it becomes inconsistent when homogeneity fails. The proposed shrinkage estimator, on the other hand, remains efficient: its efficiency decreases as the survival medians deviate from equality, but it is expected to remain at least as good as that of the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate application of these methods to estimating the median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study. PMID:29772007

  4. FACTOR ANALYTIC MODELS OF CLUSTERED MULTIVARIATE DATA WITH INFORMATIVE CENSORING

    EPA Science Inventory

    This paper describes a general class of factor analytic models for the analysis of clustered multivariate data in the presence of informative missingness. We assume that there are distinct sets of cluster-level latent variables related to the primary outcomes and to the censorin...

  5. Treatment of Not-Administered Items on Individually Administered Intelligence Tests

    ERIC Educational Resources Information Center

    He, Wei; Wolfe, Edward W.

    2012-01-01

    In administration of individually administered intelligence tests, items are commonly presented in a sequence of increasing difficulty, and test administration is terminated after a predetermined number of incorrect answers. This practice produces stochastically censored data, a form of nonignorable missing data. By manipulating four factors…

  6. When Worldviews Collide

    ERIC Educational Resources Information Center

    Haynes, Charles C.

    2012-01-01

    Escalating conflicts over issues involving sexual orientation in public schools often pit conservative Christians against proponents of gay rights--with school leaders caught in the crossfire. Some beleaguered school administrators attempt to put a lid on the controversy by censoring student expression on one side or the other. But that tactic…

  7. [Censorship and Other Issues].

    ERIC Educational Resources Information Center

    Manchikes, Alice W., Ed.

    1975-01-01

    The five articles in this journal issue discuss attempts to censor certain nonprint media in the schools, the conflict between intellectual freedom and community values, recent efforts by Tennessee educators and librarians to achieve a better measure of intellectual freedom, the need for English teachers to reexamine their teaching methods and…

  8. Case Studies in Censorship: William Faulkner's "Sanctuary."

    ERIC Educational Resources Information Center

    Loe, Mary Hong; Moore, Robert R.

    1995-01-01

    Faulkner's novel "Sanctuary" was controversial, and attempts to censor it came from private citizens and public officials in response to both the print version and its film adaptation. A historical background and chronology of these incidents are described in detail in a 17-item annotated bibliography. (JMV)

  9. Literary Magazines: To Censor or Not?

    ERIC Educational Resources Information Center

    Mossman, Robert C.

    2007-01-01

    High school teacher Robert C. Mossman asserts that literary magazines reflect and elevate school communities' values. As adviser of his school literary magazine, Mossman encourages students to uphold certain standards for decency, while helping students learn to distinguish between original writing that is specific and thoughtful and writing that…

  10. Freedom and Censorship in Dramatic Television Writing.

    ERIC Educational Resources Information Center

    Blum, Richard A.

    The television writer's creative expression is limited by executive forces, conflicting philosophies, and restrictive regulations. In a 1972 poll by the Writer's Guild Committee on Censorship, it was revealed that an overwhelming majority of television writers felt personally censored by the industry. Although the success of character comedies…

  11. Censoring the Mails: What Is Your Opinion?

    ERIC Educational Resources Information Center

    Alexander, Mary, Ed.; Childress, Marilyn, Ed.

    1982-01-01

    Contains a primary source document--a letter written in 1916 protesting U.S. Postal Service censorship of an anarchist publication--and suggests ways to use the document in secondary U.S. history classes. Samples of student surveys on mail censorship and a document analysis worksheet are included. (AM)

  12. Intellectual Freedom: The Censorship War Continues.

    ERIC Educational Resources Information Center

    Spicola, Rose; Stephens, Carolyn

    1989-01-01

    Censorship efforts are taking place throughout the country in spite of greater activism among anticensorship forces. One of the alarming trends is that many of the censors are now attacking the curriculum of the schools, challenging textbooks in reading, literature, history, science, health, and social studies. The International Reading…

  13. ASURV: Astronomical SURVival Statistics

    NASA Astrophysics Data System (ADS)

    Feigelson, E. D.; Nelson, P. I.; Isobe, T.; LaValley, M.

    2014-06-01

    ASURV (Astronomical SURVival Statistics) provides astronomy survival analysis for right- and left-censored data including the maximum-likelihood Kaplan-Meier estimator and several univariate two-sample tests, bivariate correlation measures, and linear regressions. ASURV is written in FORTRAN 77, and is stand-alone and does not call any specialized libraries.

  14. Gene-Based Association Analysis for Censored Traits Via Fixed Effect Functional Regressions.

    PubMed

    Fan, Ruzong; Wang, Yifan; Yan, Qi; Ding, Ying; Weeks, Daniel E; Lu, Zhaohui; Ren, Haobo; Cook, Richard J; Xiong, Momiao; Swaroop, Anand; Chew, Emily Y; Chen, Wei

    2016-02-01

    Genetic studies of survival outcomes have been proposed and conducted recently, but statistical methods for identifying genetic variants that affect disease progression are rarely developed. Motivated by our ongoing real studies, here we develop Cox proportional hazard models using functional regression (FR) to perform gene-based association analysis of survival traits while adjusting for covariates. The proposed Cox models are fixed effect models where the genetic effects of multiple genetic variants are assumed to be fixed. We introduce likelihood ratio test (LRT) statistics to test for associations between the survival traits and multiple genetic variants in a genetic region. Extensive simulation studies demonstrate that the proposed Cox FR LRT statistics have well-controlled type I error rates. To evaluate power, we compare the Cox FR LRT with the previously developed burden test (BT) in a Cox model and the sequence kernel association test (SKAT), which is based on mixed effect Cox models. The Cox FR LRT statistics have higher power than or similar power to Cox SKAT LRT, except when 50%/50% causal variants had negative/positive effects and all causal variants are rare. In addition, the Cox FR LRT statistics have higher power than Cox BT LRT. The models and related test statistics can be useful in whole genome and whole exome association studies. An age-related macular degeneration dataset was analyzed as an example. © 2016 WILEY PERIODICALS, INC.
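
    The likelihood ratio test itself is generic: fit nested Cox models with and without the genetic terms and refer twice the log-likelihood difference to a chi-squared distribution with degrees of freedom equal to the number of terms dropped. A sketch with lifelines (illustrative covariates and variant dosages; the functional-regression basis expansion of the genetic region is not reproduced):

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from scipy.stats import chi2

    rng = np.random.default_rng(2)
    n = 500
    df = pd.DataFrame({
        "age": rng.normal(60, 8, n),
        "snp1": rng.binomial(2, 0.3, n),   # illustrative variant dosages
        "snp2": rng.binomial(2, 0.1, n),
    })
    hazard = np.exp(0.02 * df["age"] + 0.4 * df["snp1"])
    event_t = rng.exponential(1 / hazard)
    cens = rng.exponential(float(np.median(1 / hazard)), n)
    df["time"] = np.minimum(event_t, cens)
    df["event"] = (event_t <= cens).astype(int)

    full = CoxPHFitter().fit(df, "time", "event")
    reduced = CoxPHFitter().fit(df[["age", "time", "event"]], "time", "event")

    lrt = 2 * (full.log_likelihood_ - reduced.log_likelihood_)
    p = chi2.sf(lrt, df=2)   # two genetic terms dropped
    print(f"LRT = {lrt:.2f}, p = {p:.3g}")
    ```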

  15. Evaluation of virus reduction efficiency in wastewater treatment unit processes as a credit value in the multiple-barrier system for wastewater reclamation and reuse.

    PubMed

    Ito, Toshihiro; Kato, Tsuyoshi; Hasegawa, Makoto; Katayama, Hiroyuki; Ishii, Satoshi; Okabe, Satoshi; Sano, Daisuke

    2016-12-01

    The virus reduction efficiency of each unit process is commonly determined from the ratio of virus concentration in the influent to that in the effluent of a unit, but virus concentrations in wastewater often fall below the analytical quantification limit, which does not allow the concentration ratio to be calculated at each sampling event. In this study, left-censored datasets of norovirus (genogroups I and II) and adenovirus were used to calculate the virus reduction efficiency in unit processes of secondary biological treatment and chlorine disinfection. Virus concentrations in the influent, effluent from the secondary treatment, and chlorine-disinfected effluent of four municipal wastewater treatment plants were analyzed by a quantitative polymerase chain reaction (PCR) approach, and the probabilistic distributions of log reduction (LR) were estimated by a Bayesian estimation algorithm. The mean values of LR in the secondary treatment units ranged from 0.9 to 2.2, whereas those in the free chlorine disinfection units ranged from -0.1 to 0.5. The LR value in the secondary treatment was virus type and unit process dependent, which raises the importance of accumulating data on virus LR values applicable to the multiple-barrier system, a global concept of microbial risk management in wastewater reclamation and reuse.

  16. Comparative analysis of natalizumab versus fingolimod as second-line treatment in relapsing-remitting multiple sclerosis.

    PubMed

    Lorscheider, Johannes; Benkert, Pascal; Lienert, Carmen; Hänni, Peter; Derfuss, Tobias; Kuhle, Jens; Kappos, Ludwig; Yaldizli, Özgür

    2018-05-01

    No randomized controlled trials have compared the efficacy of fingolimod or natalizumab as second-line treatment in patients with relapsing-remitting multiple sclerosis (RRMS). To compare clinical outcomes after escalation to fingolimod versus natalizumab in patients with clinically active RRMS. Using the registry of the Swiss Federation for Common Tasks of Health Insurances, we identified patients with RRMS and ≥1 relapse in the year before switching from interferon beta or glatiramer acetate to fingolimod or natalizumab. Propensity score matching was used to select patients with comparable baseline characteristics. Relapse and Expanded Disability Status Scale (EDSS) outcomes were compared in paired, pairwise-censored analyses. Of the 547 included patients, 358 were matched (fingolimod, n = 179; natalizumab, n = 179). Median follow-up time was 1.8 years (interquartile range 0.9-2.9). Patients switching to natalizumab had a lower risk of relapses (incidence rate ratio 0.5, 95% confidence interval (CI) 0.3-0.8, p = 0.001) and were more likely to experience EDSS improvement (hazard ratio (HR) 1.8, 95% CI 1.1-2.7, p = 0.01) compared to fingolimod. We found no differences in the proportion of patients free from EDSS progression (HR 0.9, 95% CI 0.5-1.5, p = 0.62). Natalizumab seems to be more effective in reducing relapse rate and improving disability compared with fingolimod.

  17. Gene-based Association Analysis for Censored Traits Via Fixed Effect Functional Regressions

    PubMed Central

    Fan, Ruzong; Wang, Yifan; Yan, Qi; Ding, Ying; Weeks, Daniel E.; Lu, Zhaohui; Ren, Haobo; Cook, Richard J; Xiong, Momiao; Swaroop, Anand; Chew, Emily Y.; Chen, Wei

    2015-01-01

    Genetic studies of survival outcomes have been proposed and conducted recently, but statistical methods for identifying genetic variants that affect disease progression are rarely developed. Motivated by our ongoing real studies, we develop here Cox proportional hazard models using functional regression (FR) to perform gene-based association analysis of survival traits while adjusting for covariates. The proposed Cox models are fixed effect models where the genetic effects of multiple genetic variants are assumed to be fixed. We introduce likelihood ratio test (LRT) statistics to test for associations between the survival traits and multiple genetic variants in a genetic region. Extensive simulation studies demonstrate that the proposed Cox FR LRT statistics have well-controlled type I error rates. To evaluate power, we compare the Cox FR LRT with the previously developed burden test (BT) in a Cox model and the sequence kernel association test (SKAT), which is based on mixed effect Cox models. The Cox FR LRT statistics have higher power than or similar power to Cox SKAT LRT, except when 50%/50% causal variants had negative/positive effects and all causal variants are rare. In addition, the Cox FR LRT statistics have higher power than Cox BT LRT. The models and related test statistics can be useful in whole genome and whole exome association studies. An age-related macular degeneration dataset was analyzed as an example. PMID:26782979

  18. Comprehensive Sex Education: Research and Results

    ERIC Educational Resources Information Center

    Advocates for Youth, 2009

    2009-01-01

    Since 1997 the federal government has invested more than $1.5 billion dollars in abstinence-only programs--proven ineffective programs which censor or exclude important information that could help young people protect their health. In fact, until recently, programs which met a strict abstinence-only definition were the only type of sex education…

  19. The Front Line: Corruption of Youth or Ignorance in Action?

    ERIC Educational Resources Information Center

    Unks, Gerald

    1981-01-01

    The author decries the tactics of one-issue pressure groups seeking to censor sex education and other educational activities. He sees these groups as an anti-intellectual force attacking the free exchange of ideas, which is the major purpose and function of schools in a democracy. (SJL)

  20. Statistical Aspects of Reliability, Maintainability, and Availability.

    DTIC Science & Technology

    1987-10-01

    A total of 33 research reports were issued, and 35 papers were published in scientific journals or are in press. Research topics included optimal assembly of systems, multistate system theory, testing whether new is better than used, nonparametric survival function estimation, measuring information in censored models, and generalizations of total positivity…

  1. A Citadel under Siege: A Strategy against the Censor.

    ERIC Educational Resources Information Center

    North, William D.

    1986-01-01

    Identifies the tasks that the library administrator must complete if the collection is to successfully withstand censorship. Using an analogy of the construction, arming, and manning of a "citadel," a strategy is presented which is adaptable to the broadest range of attack on collection, acquisition, and dissemination policies of…

  2. The Study of Aggressive Pornography: The Vicissitudes of Relevance.

    ERIC Educational Resources Information Center

    Brannigan, Augustine; Goldenberg, Sheldon

    1987-01-01

    Reviews experimental studies of behavioral consequences of exposure to violent or aggressive pornography and evaluates the validity and relevance as support for censoring pornography in the aftermath of the Meese Commission. Finds research deficient in several areas, such as design, theoretical models, and interpretation, thus offering no…

  3. The Free Mind: Intellectual Freedom's Perils and Prospects

    ERIC Educational Resources Information Center

    Oboler, Eli

    1976-01-01

    It is more important to make available all possible shades of opinion and varieties of creative effort than to have the widest possible range of gadgets and gizmos connected by the fanciest and longest possible networks, but including only a restricted, censored, limited range of topics and their interpretations. (Author)

  4. A Bibliophile's Quest for Copernicus.

    ERIC Educational Resources Information Center

    Broad, William J.

    1982-01-01

    A search is in progress for copies of Copernicus' 1543 publication "On the Revolutions of the Celestial Spheres." The history of each copy (215 found thus far) is being analyzed to offer insights into the past. For example, the futility of the Inquisition's attempt to censor the book was chronicled. (Author/JN)

  5. METHODS OF DEALING WITH VALUES BELOW THE LIMIT OF DETECTION USING SAS

    EPA Science Inventory

    Due to limitations of chemical analysis procedures, small values cannot be precisely measured. These values are said to be below the limit of detection (LOD). In statistical analyses, these values are often censored and substituted with a constant value, such as half the LOD,...
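
    The contrast between constant substitution and a censoring-aware fit is easy to reproduce outside SAS. The Python sketch below (simulated data) compares half-LOD substitution with a maximum likelihood fit that treats below-LOD values as left-censored lognormal observations:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    x = rng.lognormal(mean=0.0, sigma=1.0, size=300)   # true concentrations
    LOD = 0.5
    detected = x >= LOD

    # Naive approach: substitute LOD/2 for every below-LOD value.
    sub = np.where(detected, x, LOD / 2)
    print("substitution mean of log-conc:", round(np.log(sub).mean(), 3))

    # Censored MLE: below-LOD values contribute P(X < LOD) to the likelihood.
    det, n_cens = x[detected], (~detected).sum()

    def negll(theta):
        mu, log_sd = theta
        sd = np.exp(log_sd)
        ll = (norm.logpdf(np.log(det), mu, sd).sum()        # detected values
              + n_cens * norm.logcdf(np.log(LOD), mu, sd))  # censored mass
        return -ll

    mu_hat = minimize(negll, [0.0, 0.0], method="Nelder-Mead").x[0]
    print("censored-MLE mean of log-conc:", round(mu_hat, 3), "(truth: 0.0)")
    ```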

  6. ROBUST ESTIMATION OF MEAN AND VARIANCE USING ENVIRONMENTAL DATA SETS WITH BELOW DETECTION LIMIT OBSERVATIONS

    EPA Science Inventory

    Scientists, especially environmental scientists, often encounter trace-level concentrations that are typically reported as less than a certain limit of detection, L. Type 1, left-censored data arise when certain low values lying below L are ignored or unknown as they cannot be mea...

  7. The Many Faces of Censorship.

    ERIC Educational Resources Information Center

    Peck, Richard

    1999-01-01

    An author of 26 books for young adults writes about different forms of censorship including rewriting history textbooks and parents who censor themselves by not staying in touch with their children. Citing Cormier's "The Chocolate War" and Golding's "Lord of the Flies" and making reference to the Columbine school murders, he illustrates the…

  8. 78 FR 52087 - Commercial Filming and Similar Projects and Still Photography Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-22

    ... too much power to restrict access to certain areas by documentary filmmakers, sound recordists, and photographers. These proposed rules could be used to censor information, or to hide the effects of activities in...-1136). Federal land managers may not arbitrarily exclude filmmakers or still photographers from...

  9. Propensity Score Weighting with Error-Prone Covariates

    ERIC Educational Resources Information Center

    McCaffrey, Daniel F.; Lockwood, J. R.; Setodji, Claude M.

    2011-01-01

    Inverse probability weighting (IPW) estimates are widely used in applications where data are missing due to nonresponse or censoring or in observational studies of causal effects where the counterfactuals cannot be observed. This extensive literature has shown the estimators to be consistent and asymptotically normal under very general conditions,…

  10. Should the Internet Be Censored? Yes! Yes! Yes! Yes!

    ERIC Educational Resources Information Center

    Taylor, Bruce

    1998-01-01

    Supports the use of filtering software by parents, schools, and libraries as a least restrictive alternative to criminal prosecution to restrict the availability of pornographic materials on the Internet. Argues that filtering programs work well and do not make most of the mistakes attributed to them by detractors. (DSK)

  11. Censoring Rock Lyrics: A Historical Analysis of the Debate.

    ERIC Educational Resources Information Center

    McDonald, James R.

    1988-01-01

    Examines the history of censorship and rock and roll, reassesses the present position taken by the Parents' Music Resource Center, reviews the scholarship regarding the influence of popular song lyrics on youth, and assesses what bearing that scholarship has on the present discussion of censorship and rock and roll. (BJV)

  12. Survival analysis, or what to do with upper limits in astronomical surveys

    NASA Technical Reports Server (NTRS)

    Isobe, Takashi; Feigelson, Eric D.

    1986-01-01

    A field of applied statistics called survival analysis has been developed over several decades to deal with censored data, which occur in astronomical surveys when objects are too faint to be detected. How these methods can assist in the statistical interpretation of astronomical data is reviewed.
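
    The key observation is that an astronomical upper limit is a left-censored value, so the survival-analysis toolkit applies after a change of orientation, or directly in libraries that support left censoring. A sketch using lifelines, whose Kaplan-Meier fitter exposes left censoring in recent versions (the flux and limit distributions are made up):

    ```python
    import numpy as np
    from lifelines import KaplanMeierFitter

    rng = np.random.default_rng(6)
    # Fluxes of 100 sources; objects fainter than their survey limit yield
    # only an upper limit, i.e. a left-censored value.
    flux = rng.lognormal(0.0, 0.7, 100)
    limit = rng.lognormal(0.2, 0.3, 100)
    detected = flux >= limit
    value = np.where(detected, flux, limit)   # detections or upper limits

    kmf = KaplanMeierFitter()
    kmf.fit_left_censoring(value, event_observed=detected)
    print(kmf.cumulative_density_.head())     # estimated flux CDF
    ```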

  13. ProUCL version 4.00.04 Documentation Downloads

    EPA Pesticide Factsheets

    ProUCL Version 4.00.04 is an upgrade of ProUCL Version 4.0 (EPA, 2007). ProUCL 4.00.02 contains statistical methods to address various environmental issues for both full data sets without nondetects and for data sets with NDs (also known as left-censored d

  14. Censorship in Young Adult Fiction: What's Out There and What Should Be.

    ERIC Educational Resources Information Center

    Holland, Suzann

    2002-01-01

    Discussion of censorship of young adult books focuses on works of fiction that deal with censorship. Includes 14 annotated bibliographies; discusses stereotyped views of censors; and considers types of materials that have not been discussed in novels regarding censorship, including music and Internet filters. (LRW)

  15. Censorship: Does Anybody Care?

    ERIC Educational Resources Information Center

    Ochoa, Anna S.

    1979-01-01

    Focuses on the nature and extent of censorship of social studies materials in the United States. Specific topics addressed are persons and organizations censoring, status of the rights of teachers, and what educators, schools, and professional organizations such as the National Council for the Social Studies can do to help teachers deal with…

  16. The Censorship of the "Adventures of Huckleberry Finn": An Investigation.

    ERIC Educational Resources Information Center

    Cloonan, Michele V.

    1984-01-01

    Explores reasons why "Adventures of Huckleberry Finn" has been continuously censored from its publication in 1885 to the present. Historical precedents for censorship of library materials in the United States and specific censorship attempts are discussed. Controversial passages are examined in light of both praise and criticism…

  17. Managing the Student Press: Consider Carefully before You Unsheath the Censor's Scissors.

    ERIC Educational Resources Information Center

    Sendor, Benjamin

    1988-01-01

    The United States Supreme Court's decision in "Hazelwood School District v. Kuhlmeier" gives school officials sweeping power over school-sponsored publications and other curricular and extracurricular activities. To avoid charges of squelching student expression, school boards should make sure that policies limit censorship to legitimate…

  18. The Politics and Paradoxes of Censorship: "Miss Lonelyhearts" in Hollywood.

    ERIC Educational Resources Information Center

    Veitch, Jonathan

    1997-01-01

    Examines how Nathanael West's darkly critical text in his novel "Miss Lonelyhearts" was censored, through a series of demurrals and evasions, displacements and substitutions, even repression, to make it acceptable for the screen. Suggests that the resulting script illuminates the momentous cultural struggles underway in 1930s America--in…

  19. Four years of highly time resolved measurements of elemental and organic carbon at a rural background site in Central Europe

    NASA Astrophysics Data System (ADS)

    Mbengue, Saliou; Fusek, Michal; Schwarz, Jaroslav; Vodička, Petr; Šmejkalová, Adéla Holubová; Holoubek, Ivan

    2018-06-01

    Elemental carbon (EC) and organic carbon (OC) in fine atmospheric aerosols (PM2.5: aerodynamic diameter smaller than 2.5 μm) have been measured with a semi-automatic instrument during a 4-year survey at the National Atmospheric Observatory Košetice (NAOK), Czech Republic. Ground-based measurements were performed from March 2013 to December 2016 with a field Semi-Continuous OCEC Aerosol Analyzer (Sunset Laboratory Inc., USA). The variation of EC and OC concentrations and of the OC/EC ratio was characterized for different seasons and days of the week. During our survey, higher concentrations of EC and OC were observed in winter (0.83 ± 0.67 and 3.33 ± 2.28 μg m-3, respectively), and lower concentrations were recorded in summer (0.34 ± 0.18 and 2.30 ± 1.15 μg m-3, respectively). Conversely, the OC/EC ratio, with a mean value (5.1 ± 2.6) characteristic of rural background areas, was higher in summer (7.33 ± 3.23) than in the other seasons. Since the data contain values below the detection and quantification limits of the measuring device (i.e., censored values), statistical methods for censored data were used to compare mean EC and OC concentrations between seasons. A significant difference was found between summer and the other seasons, with the exception of mean OC concentrations at noon. In most cases, there was also a significant difference between winter and the other seasons. Moreover, when dealing with OC concentrations, it is possible to replace censored values by a constant and still obtain reasonable results; for EC concentrations, the method based on censored distributions should be preferred when the sample size is small and the proportion of censored values is high. The diurnal variation of EC and OC is less pronounced in summer. During working days, the EC diurnal pattern displays a morning (between 6:00 and 10:00) and an afternoon/evening (between 18:00 and 22:00) peak, while for OC, only the afternoon/evening peak is observed. These seasonal, diurnal and weekly variations of EC and OC concentrations and of the OC/EC ratio are probably related to variability in emission sources (residential heating, traffic), transport characteristics and meteorological conditions. A weaker correlation between EC and OC in summer (r = 0.56) suggests additional sources and/or transport processes during the other seasons. The elevated OC/EC ratio and the higher correlations of OC with O3, temperature and solar radiation during summer confirmed an increasing contribution to OC from secondary organic carbon (SOC), estimated as at least 59 ± 11% of total carbon in the PM2.5 using the EC tracer method. Backward trajectories of air masses arriving at 100 m AGL calculated in winter and summer show that higher pollution episodes of EC and OC are predominantly associated with continental air masses confined over Central Europe (about 79%), while lower EC and OC levels are mainly associated with episodes of long-range transport of marine air masses. Interestingly, the results reveal that in winter, pollutants emitted during workdays can accumulate above the region and influence rural background air quality during part of the weekend, especially on Saturday.

  20. Effects of hunting on survival of American woodcock in the Northeast

    USGS Publications Warehouse

    McAuley, D.G.; Longcore, J.R.; Clugston, D.A.; Allen, R.B.; Weik, A.; Williamson, S.; Dunn, J.; Palmer, B.; Evans, K.; Staats, W.; Sepik, G.F.; Halteman, W.

    2005-01-01

    Numbers of American woodcock (Scolopax minor) males counted on the annual singing ground survey (SGS) have declined over the last 35 years at an average rate of 2.3% per year in the Eastern Region and 1.8% per year in the Central Region. Although hunting was not thought to be a cause of these declines, mortality caused by hunters can be controlled. Furthermore, there has been no research on effects of hunting mortality on woodcock populations at local and regional levels on the breeding grounds. We used radiotelemetry to determine survival rates and causes of mortality for 913 woodcock captured during fall 1997-2000 on 7 areas in Maine, New Hampshire, Pennsylvania, and Vermont, USA. Three of 7 sites were closed to hunting. For all sites and all years combined, 176 woodcock died, and 130 were censored, of which 39 were censored mortalities. Predation was the major (n = 134, 76%) cause of mortality. Mammals accounted for 56% of the predation, raptors accounted for 25%, and 19% was attributed to unknown predators. On hunted sites, 36% of the total mortality (n = 102) was caused by hunting, 63% by predation, and 1 bird starved. Kaplan-Meier survival curves did not differ between hunted and nonhunted sites among years (P = 0.46). Overall, point estimates of survival did not differ (P = 0.217) between hunted (SR = 0.636, SE = 0.04) and nonhunted sites (SR = 0.661, SE = 0.08). We modeled hazard rates from hunting and natural mortality events using program MARK. Akaike's Information Criterion supported a model with common constant hazards from both hunting and natural causes for groups of sites. Groupings of sites for hazard rates from natural causes were not influenced by whether a site was hunted or not. Models detected no effects of woodcock age and sex (P = 0.52) on survival. Proportional hazards models comparing hunted and nonhunted sites found no effects of age and sex (P = 0.45) or of interactions of age, sex, capture weight, and bill length (P > 0.269). Our data suggest that current hunting regulations are not causing lower survival of woodcock.

  1. Estimating the probability for major gene Alzheimer disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrer, L.A.; Cupples, L.A.

    1994-02-01

    Alzheimer disease (AD) is a neuropsychiatric illness caused by multiple etiologies. Prediction of whether AD is genetically based in a given family is problematic because of censoring bias among unaffected relatives as a consequence of the late onset of the disorder, diagnostic uncertainties, heterogeneity, and limited information in a single family. The authors have developed a method based on Bayesian probability to compute values for a continuous variable that ranks AD families as having a major gene form of AD (MGAD). In addition, they have compared the Bayesian method with a maximum-likelihood approach. These methods incorporate sex- and age-adjusted risk estimates and allow for phenocopies and familial clustering of age at onset. Agreement is high between the two approaches for ranking families as MGAD (Spearman rank [r] = .92). When either method is used, the numerical outcomes are sensitive to assumptions of the gene frequency and cumulative incidence of the disease in the population. Consequently, risk estimates should be used cautiously for counseling purposes; however, there are numerous valid applications of these procedures in genetic and epidemiological studies. 41 refs., 4 figs., 3 tabs.

  2. A unified Bayesian semiparametric approach to assess discrimination ability in survival analysis

    PubMed Central

    Zhao, Lili; Feng, Dai; Chen, Guoan; Taylor, Jeremy M.G.

    2015-01-01

    The discriminatory ability of a marker for censored survival data is routinely assessed by the time-dependent ROC curve and the c-index. The time-dependent ROC curve evaluates the ability of a biomarker to predict whether a patient lives past a particular time t. The c-index measures the global concordance of the marker and the survival time regardless of the time point. We propose a Bayesian semiparametric approach to estimate these two measures. The proposed estimators are based on the conditional distribution of the survival time given the biomarker and the empirical biomarker distribution. The conditional distribution is estimated by a linear dependent Dirichlet process mixture model. The resulting ROC curve is smooth as it is estimated by a mixture of parametric functions. The proposed c-index estimator is shown to be more efficient than the commonly used Harrell's c-index since it uses all pairs of data rather than only informative pairs. The proposed estimators are evaluated through simulations and illustrated using a lung cancer dataset. PMID:26676324
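
    Harrell's c-index, the comparator mentioned in the abstract, uses only "informative" pairs: those in which the shorter time is an observed event. A brute-force sketch makes that restriction explicit:

    ```python
    import numpy as np

    def harrell_c(time, event, marker):
        """Harrell's c-index over informative pairs (ties in time skipped).

        Higher marker values are taken to predict shorter survival.
        """
        conc = ties = usable = 0
        n = len(time)
        for i in range(n):
            for j in range(i + 1, n):
                s, g = (i, j) if time[i] < time[j] else (j, i)
                if not event[s] or time[i] == time[j]:
                    continue    # pair carries no ordering information
                usable += 1
                if marker[s] > marker[g]:
                    conc += 1
                elif marker[s] == marker[g]:
                    ties += 1
        return (conc + 0.5 * ties) / usable

    rng = np.random.default_rng(8)
    n = 300
    marker = rng.normal(size=n)
    event_t = rng.exponential(np.exp(-marker))   # higher marker, shorter time
    cens = rng.exponential(1.5, n)
    t, d = np.minimum(event_t, cens), (event_t <= cens).astype(int)
    print("c-index:", round(harrell_c(t, d, marker), 3))
    ```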

  3. Contemporary labor patterns: the impact of maternal body mass index.

    PubMed

    Kominiarek, Michelle A; Zhang, Jun; Vanveldhuisen, Paul; Troendle, James; Beaver, Julie; Hibbard, Judith U

    2011-09-01

    We sought to compare labor patterns by body mass index (BMI). A total of 118,978 gravidas with a singleton term cephalic gestation were studied. Repeated-measures analysis constructed mean labor curves by parity and BMI categories for those who reached 10 cm. Interval-censored regression analysis determined median traverse times, adjusting for covariates in vaginal deliveries and intrapartum cesareans. In the labor curves, the time difference to reach 10 cm was 1.2 hours from the lowest to highest BMI category for nulliparas. Multiparas entered active phase by 6 cm, but reaching this point took longer for BMI ≥40.0 (3.4 hours) compared to BMI <25.0 (2.4 hours). Progression by centimeter (P < .001 for nulliparas) and from 4-10 cm (P < .001 for nulliparas and multiparas) increased as BMI increased. Second stage length, with and without an epidural, was similar among BMI categories for nulliparas (P > .05) but decreased as BMI increased for multiparas (P < .001). Labor proceeds more slowly as BMI increases, suggesting that labor management be altered to allow longer time for these differences. Copyright © 2011 Mosby, Inc. All rights reserved.

  4. Longitudinal Mediation Analysis with Time-varying Mediators and Exposures, with Application to Survival Outcomes

    PubMed Central

    Zheng, Wenjing; van der Laan, Mark

    2017-01-01

    In this paper, we study the effect of a time-varying exposure mediated by a time-varying intermediate variable. We consider general longitudinal settings, including survival outcomes. At a given time point, the exposure and mediator of interest are influenced by past covariates, mediators and exposures, and affect future covariates, mediators and exposures. Right censoring, if present, occurs in response to past history. To address the challenges in mediation analysis that are unique to these settings, we propose a formulation in terms of random interventions based on conditional distributions for the mediator. This formulation, in particular, allows for well-defined natural direct and indirect effects in the survival setting, and natural decomposition of the standard total effect. Upon establishing identifiability and the corresponding statistical estimands, we derive the efficient influence curves and establish their robustness properties. Applying Targeted Maximum Likelihood Estimation, we use these efficient influence curves to construct multiply robust and efficient estimators. We also present an inverse probability weighted estimator and a nested non-targeted substitution estimator for these parameters. PMID:29387520

  5. Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation

    PubMed Central

    Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu

    2015-01-01

    To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability in the RLG bias compensation model. PMID:26633401

  6. Common pitfalls in statistical analysis: The perils of multiple testing

    PubMed Central

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2016-01-01

    Multiple testing refers to situations where a dataset is subjected to statistical testing multiple times - either at multiple time-points or through multiple subgroups or for multiple end-points. This amplifies the probability of a false-positive finding. In this article, we look at the consequences of multiple testing and explore various methods to deal with this issue. PMID:27141478
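
    The inflation is easy to quantify: with m independent tests at level alpha, the chance of at least one false positive is 1 - (1 - alpha)^m, which already exceeds 40% at m = 10. Standard corrections are a one-liner with statsmodels (any vector of p-values would do):

    ```python
    import numpy as np
    from statsmodels.stats.multitest import multipletests

    alpha, m = 0.05, 10
    print("uncorrected FWER:", round(1 - (1 - alpha) ** m, 3))   # ~0.401

    # Ten null p-values plus one genuine signal.
    rng = np.random.default_rng(9)
    pvals = np.append(rng.uniform(size=m), 0.001)

    for method in ("bonferroni", "holm", "fdr_bh"):
        reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method=method)
        print(f"{method:10s} rejects {reject.sum()} of {len(pvals)} tests")
    ```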

  7. Censorship and Selection: Issues and Answers for Schools.

    ERIC Educational Resources Information Center

    Reichman, Henry

    Censorship is the removal, suppression, or restricted circulation of literary, artistic, or educational material--of images, ideas, and information--on the grounds that these are morally or otherwise objectionable in the light of standards applied by the censor. Few in our society advocate the banning of all but a tiny handful of materials from…

  8. Science and Security Clash on Bird-Flu Papers

    ERIC Educational Resources Information Center

    Fischman, Josh

    2012-01-01

    Censored papers on bird flu, which could help terrorists, have critics wondering if academic scientists can police their own work. The near-publication has brought out general critics of the federal panel, the National Science Advisory Board for Biosecurity, and the voluntary self-policing approach that it embraces instead of regulation. Members…

  9. Homeschooling in America a Viable Option

    ERIC Educational Resources Information Center

    Terry, Bobby K.

    2011-01-01

    Since prayer was banned from the public school systems in 1962 and all religious references have been censored, the American Public School System has become completely secular in nature. Children today face a great deal of peer pressure especially when bad behavior is constantly glamorized in both the media and the press. Because of the lack of…

  10. Censoring Freedom: Community-Based Professional Development and the Politics of Profanity

    ERIC Educational Resources Information Center

    Watson, Vajra M.

    2013-01-01

    The lack of strong literacy skills and practices among students is perhaps the clearest indicator that the education system continues to leave millions of children behind. To advance the reading, writing, and speaking skills of middle and high school students, this study examines a professional development model that brought trained…

  11. Changing Technology = Empowering Students through Media Literacy Education

    ERIC Educational Resources Information Center

    De Abreu, Belinha

    2010-01-01

    Background: As the world is changing quickly due to the technological advances, educators are looking at ways in which to empower their students' learning with digital platforms. Media literacy education is key for how this can happen in the 21st century classroom which seeks to promote learning without censoring the learner. Considering how media…

  12. Vulnerable Medicine

    ERIC Educational Resources Information Center

    Bochner, Arthur P.

    2009-01-01

    In "Narrative Medicine: Honoring the Stories of Illness," Rita Charon paints an original and humane portrait of what it can mean to be a doctor, to live a life immersed in sickness and dedicated to wellness. Charon drops the veil, inviting readers to look at the secret, subjective, emotional face of medicine, a zone of self-censored feelings and…

  13. Trust Your Children: Voices against Censorship in Children's Literature. Second Edition.

    ERIC Educational Resources Information Center

    West, Mark

    This book finds that although censorship of children's literature is currently more prevalent than ever, protest tactics have changed--in the 1990s the censors are more organized and, while sexuality is still a concern, books are now attacked for being "Satanic," "anti-family," and "un-Christian." The book interviews…

  14. Personal Suicidality in Reception and Identification with Suicidal Film Characters

    ERIC Educational Resources Information Center

    Till, Benedikt; Vitouch, Peter; Herberth, Arno; Sonneck, Gernot; Niederkrotenthaler, Thomas

    2013-01-01

    The authors investigated the impact of suicidality on identity work during film exposure. Adults with low suicidality ("n" = 150) watched either "It's My Party" or "The Fire Within," censored versions of these films not depicting the suicide, or the control film that concluded with a non-suicidal death. Baseline…

  15. The Birth of a Baby: Obscenity or Censorship.

    ERIC Educational Resources Information Center

    Sherer, Michael D.

    This paper discusses the issues involved and presents an overall picture of attempts to censor the April 11, 1938 issue of "Life" magazine featuring a four-page spread of 35 pictures from the film "The Birth of a Baby." It examines contemporary news accounts from newspapers published in New York, Boston, Chicago, St. Louis, New…

  16. Filtering the Internet: The Children's Internet Protection Act. Legal Update

    ERIC Educational Resources Information Center

    McCarthy, Martha M.

    2004-01-01

    Accompanying the explosive growth of the Internet have been concerns about protecting children from viewing pornographic and other harmful images through cyberspace. In the past few years, Congress has passed several acts to censor Internet sites available to children, but only the Children's Internet Protection Act (CIPA) has received Supreme…

  17. The Art Teacher as Censor

    ERIC Educational Resources Information Center

    Petit, David A.

    2004-01-01

    A culture can be remembered and studied by the artwork it leaves behind. The artwork gives more than a pictorial or historical record of that culture. It also reveals the cultural practices and beliefs that form the accepted boundaries for the expression of culture. These boundaries are aesthetics. It is human nature to sometimes challenge these…

  18. Employee Rights Versus the First Amendment.

    ERIC Educational Resources Information Center

    Walker, Lois; Teaster, Gale; Kelley, Gloria

    1998-01-01

    Illustrates a case of a student art exhibit in the Winthrop University, South Carolina Library, where freedom of expression interfered with employee rights in the workplace. The Dean of Libraries stated that the library could not act as a censor, and that they were obligated to defend the First Amendment; the controversial statue remained, but…

  19. What Johnny Shouldn't Read: Textbook Censorship in America.

    ERIC Educational Resources Information Center

    DelFattore, Joan

    This book discusses ways in which special-interest groups influence the content of textbooks used in public and private schools throughout the United States. The book focuses on recent federal lawsuits involving attempts to censor or ban reading, literature, science, and social studies textbooks. Recreating the story behind each lawsuit, the book…

  20. The Censor, the Computer, and the Textbook

    ERIC Educational Resources Information Center

    Zoeller, Geoffrey W., Jr.

    2010-01-01

    Education in a free society requires that students are provided with a provocative and thoughtful curriculum and learning materials that will prepare them to function as productive adult citizens in a diverse and changing world. Textbooks and curricular materials that engage the rising generation in the study of social ideas, problems, and issues…

  1. Censors in the Classroom: The Mind Benders.

    ERIC Educational Resources Information Center

    Jenkinson, Edward B.

    Selected incidents of censorship in the schools are examined in this book, and suggestions are made for preserving students' and teachers' rights. The aim is to offer a calm, rational, informed discussion of censorship issues and an awareness of the organization and goals of pressure groups so that censorship efforts can be countered effectively.…

  2. Multicultural Content and Class Participation: Do Students Self-Censor?

    ERIC Educational Resources Information Center

    Hyde, Cheryl A.; Ruth, Betty J.

    2002-01-01

    Through survey and focus group data, examined student discomfort in social work courses, reasons for self-censorship, and solutions to self-censorship. Found that general classroom factors (being too shy or being unprepared), not political correctness, were more likely to be reasons for self-censorship. Solutions focused on the faculty's role in…

  3. Not Your Father's Censorship

    ERIC Educational Resources Information Center

    Lewis, Harry

    2009-01-01

    Documents from the Army during World War II arrived with the censor's approval stamp, certifying that no harm would come to the nation if those depictions of life at the front fell into enemy hands. That was the censorship of another time. Everyone understood why it was important and knew that the government needed to control the communication…

  4. Censoring the Imagination: Challenges to Children's Books.

    ERIC Educational Resources Information Center

    Saltman, Judith

    1998-01-01

    Presents a brief history of the censorship of children's books, and then focuses on recent challenges. The view of literature as a socializing force is discussed, and examples of challenged books in the United States and Canada are provided, illustrating different reasons for censorship on both sides of the political and social spectrum. (AEF)

  5. Case Studies in Censorship: Censoring "The Merchant of Venice."

    ERIC Educational Resources Information Center

    Gray, Richard A.

    1991-01-01

    Reviews censorship of "The Merchant of Venice," which has been based on its portrayal of the Jewish character Shylock. Background information is followed by an annotated bibliography which includes 15 citations dealing with Shylock, 22 citations to articles that address the censorship of the play, and 64 works of literature that have…

  6. REGULARIZATION FOR COX’S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY*

    PubMed Central

    Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a large amount of related censored clinical data, have increased the need for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox’s proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of the dimensionality and correlation restrictions under which an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to a significant relaxation of the “irrepresentable condition” needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed to characterize the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated examples and a gene association study. PMID:23066171

  7. REGULARIZATION FOR COX'S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY.

    PubMed

    Bradic, Jelena; Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a large amount of related censored clinical data, have increased the need for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox's proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of the dimensionality and correlation restrictions under which an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to a significant relaxation of the "irrepresentable condition" needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed to characterize the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated examples and a gene association study.
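
    The folded-concave (SCAD) solver described in these two records is specialized, but the LASSO baseline they compare against can be sketched with the lifelines library. A minimal sketch on simulated data, with lifelines' generic elastic-net penalty standing in for the paper's coordinate-wise algorithm; column names and the penalty weight are illustrative:

        # Sketch of an L1-penalized (LASSO) Cox fit, the baseline the paper
        # improves upon. lifelines' CoxPHFitter stands in for the authors'
        # coordinate-wise solver for folded-concave penalties.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(0)
        n, p = 200, 50                                   # p can approach or exceed n
        X = rng.normal(size=(n, p))
        hazard = np.exp(0.8 * X[:, 0] - 0.8 * X[:, 1])   # two true signals
        t_event = rng.exponential(1.0 / hazard)
        t_cens = rng.exponential(2.0, size=n)
        df = pd.DataFrame(X, columns=[f"g{j}" for j in range(p)])
        df["time"] = np.minimum(t_event, t_cens)
        df["event"] = (t_event <= t_cens).astype(int)

        cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)   # pure L1 (LASSO) penalty
        cph.fit(df, duration_col="time", event_col="event")
        print(cph.params_[cph.params_.abs() > 1e-4])     # selected coefficients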

  8. Dual job holding general practitioners: the effect of patient shortage.

    PubMed

    Godager, Geir; Lurås, Hilde

    2009-10-01

    In 2001, a list-patient system with capitation payment was introduced in Norwegian general practice. After an allocation process in which each inhabitant was listed with a general practitioner (GP), a considerable share of the GPs ended up with fewer persons listed than they would have preferred. We examine whether GPs who experience a shortage of patients seek, to a larger extent than other GPs, to hold a second job in the community health service, even though the wage rate there is low compared with the wage rate in general practice. Assuming utility maximization, we model the effect of patient shortage on a GP's decision to contract for a second job in the community health service. The model predicts a positive relationship between patient shortage and participation in the community health service. This prediction is tested by means of censored regression analyses, treating labour supply as a censored variable. We find a significant effect of patient shortage on the number of hours GPs supply to the community health service; the estimated marginal effect is 1.72 hours per week.
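
    The censored regression described here is, in essence, a Tobit model for hours of labour supplied, left-censored at zero. A minimal sketch of a type-I Tobit log-likelihood on simulated data, with an invented "patient shortage" covariate rather than the study's actual specification:

        # Type-I Tobit log-likelihood for weekly hours in a second job,
        # left-censored at zero. Data and covariate are illustrative.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def tobit_negloglik(params, X, y, lower=0.0):
            *beta, log_sigma = params
            sigma = np.exp(log_sigma)
            xb = X @ np.asarray(beta)
            at_bound = y <= lower
            ll = np.where(at_bound,
                          norm.logcdf((lower - xb) / sigma),             # P(y* <= 0)
                          norm.logpdf((y - xb) / sigma) - np.log(sigma)) # observed y
            return -ll.sum()

        rng = np.random.default_rng(1)
        n = 500
        shortage = rng.normal(size=n)                    # hypothetical covariate
        X = np.column_stack([np.ones(n), shortage])
        y = np.maximum(X @ np.array([-0.5, 1.5]) + rng.normal(size=n), 0.0)

        res = minimize(tobit_negloglik, x0=[0.0, 0.0, 0.0], args=(X, y))
        print(res.x[:2], np.exp(res.x[2]))               # beta and sigma estimates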

  9. Four decades of the kidney transplantation program at the Institute Nacional de Ciencias Médicas y Nutrición Salvador Zubirán in Mexico City.

    PubMed

    Morales-Buenrostro, Luis E; Marino-Vázquez, Lluvia A; Alberú, Josefina

    2009-01-01

    This is a retrospective study covering four decades of the kidney transplant program at our Institute, with a total of 923 kidney transplants in 872 recipients. In this report, the effects of recipient, donor, and transplant variables on long-term graft survival were analyzed using the Kaplan-Meier method, with the log-rank test for survival comparisons. Graft survival at our center, censored for death with functioning graft, was 93%, 83% and 74% at 1, 5 and 10 years, respectively, with a median survival of 24.5 years. When analyzed for all-cause graft loss, 1-, 5- and 10-year survival was 90%, 76% and 61%, with a 12.8-year median survival. Variables associated with lower death-censored graft survival were transplantation in an earlier decade, lower histocompatibility, younger kidney transplant recipients, no induction therapy, and double-drug initial immunosuppression. After multivariate Cox regression analysis, the risk factors that remained associated with worse survival were younger recipient age, earlier transplant decade, and deceased donor.

  10. Monte Carlo Simulation of Sudden Death Bearing Testing

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2003-01-01

    Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life, and the cumulative test time and calendar time, relative to sequential and censored sequential testing. A total of 30,960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed, independent of the test method used, and not of the total number of bearings tested. Variations in L10 life as a function of the number of bearings failed were similar to variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions of up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or the L50 life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared with censored sequential testing.
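
    As a rough illustration of the sudden-death scheme this record simulates, the sketch below generates Weibull bearing lives, runs each subgroup until its first failure, and censors the remaining bearings at that time. The Weibull parameters and subgroup sizes are invented, not the study's values:

        # Monte Carlo sketch of one sudden-death test: subgroups of bearings
        # run until their first failure; the others are censored at that time.
        import numpy as np

        rng = np.random.default_rng(2)
        L10, slope = 100.0, 1.5                       # illustrative L10 life, Weibull slope
        eta = L10 / (-np.log(0.9)) ** (1.0 / slope)   # characteristic life from L10

        def sudden_death(n_subgroups=6, size=6):
            lives = rng.weibull(slope, (n_subgroups, size)) * eta
            first_failure = lives.min(axis=1)         # one failure per subgroup
            # all `size` bearings in a subgroup accumulate time to that failure
            total_test_time = (size * first_failure).sum()
            return first_failure, total_test_time

        failures, test_time = sudden_death()
        print(len(failures), "failures; total bearing-hours on test:", round(test_time))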

  11. Modeling the Antioxidant Capacity of Red Wine from Different Production Years and Sources under Censoring

    PubMed Central

    Jäntschi, Lorentz; Sestraş, Radu E.; Bolboacă, Sorana D.

    2013-01-01

    The health benefit of drinking wine, expressed as the capacity to defend the human organism against the action of free radicals and thus reduce oxidative stress, has already been demonstrated, and the results have been published in the scientific literature. The aim of our study was to develop and assess a model able to estimate the antioxidant capacity (AC) of several samples of Romanian wines and to evaluate the dependency of AC on vintage (defined as the year in which the wine was produced) and grape variety in the presence of censored data. A contingency of two grape varieties from two different vineyards in Romania and five production years, with some missing experimental data, was used to conduct the analysis. The analysis showed that the antioxidant capacity of the investigated wines is linearly dependent on vintage. Furthermore, an iterative algorithm was developed and applied to obtain the coefficients of the model and to estimate the missing experimental values. The contribution of wine source to the antioxidant capacity proved equal to 11%. PMID:24260039

  12. A Naive Bayes machine learning approach to risk prediction using censored, time-to-event data.

    PubMed

    Wolfson, Julian; Bandyopadhyay, Sunayan; Elidrisi, Mohamed; Vazquez-Benitez, Gabriela; Vock, David M; Musgrove, Donald; Adomavicius, Gediminas; Johnson, Paul E; O'Connor, Patrick J

    2015-09-20

    Predicting an individual's risk of experiencing a future clinical outcome is a statistical task with important consequences for both practicing clinicians and public health experts. Modern observational databases such as electronic health records provide an alternative to the longitudinal cohort studies traditionally used to construct risk models, bringing with them both opportunities and challenges. Large sample sizes and detailed covariate histories enable the use of sophisticated machine learning techniques to uncover complex associations and interactions, but observational databases are often 'messy', with high levels of missing data and incomplete patient follow-up. In this paper, we propose an adaptation of the well-known Naive Bayes machine learning approach to time-to-event outcomes subject to censoring. We compare the predictive performance of our method with the Cox proportional hazards model which is commonly used for risk prediction in healthcare populations, and illustrate its application to prediction of cardiovascular risk using an electronic health record dataset from a large Midwest integrated healthcare system. Copyright © 2015 John Wiley & Sons, Ltd.

  13. Methods to detect, characterize, and remove motion artifact in resting state fMRI

    PubMed Central

    Power, Jonathan D; Mitra, Anish; Laumann, Timothy O; Snyder, Abraham Z; Schlaggar, Bradley L; Petersen, Steven E

    2013-01-01

    Head motion systematically alters correlations in resting state functional connectivity fMRI (RSFC). In this report we examine impact of motion on signal intensity and RSFC correlations. We find that motion-induced signal changes (1) are often complex and variable waveforms, (2) are often shared across nearly all brain voxels, and (3) often persist more than 10 seconds after motion ceases. These signal changes, both during and after motion, increase observed RSFC correlations in a distance-dependent manner. Motion-related signal changes are not removed by a variety of motion-based regressors, but are effectively reduced by global signal regression. We link several measures of data quality to motion, changes in signal intensity, and changes in RSFC correlations. We demonstrate that improvements in data quality measures during processing may represent cosmetic improvements rather than true correction of the data. We demonstrate a within-subject, censoring-based artifact removal strategy based on volume censoring that reduces group differences due to motion to chance levels. We note conditions under which group-level regressions do and do not correct motion-related effects. PMID:23994314
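
    A minimal sketch of the volume-censoring idea: compute framewise displacement (FD) from the six motion parameters and mask out frames above a threshold. The 0.2 mm threshold and the 50 mm head-radius conversion for rotations are common conventions assumed here, not necessarily this paper's exact pipeline:

        # Framewise displacement (FD) censoring mask: sum of absolute
        # frame-to-frame changes in the six motion parameters, with
        # rotations converted to arc length on an assumed 50 mm sphere.
        import numpy as np

        def framewise_displacement(motion, radius=50.0):
            # motion: (T, 6) array, 3 translations (mm) + 3 rotations (rad)
            d = np.diff(motion, axis=0)
            d[:, 3:] *= radius                       # radians -> mm of arc
            return np.concatenate([[0.0], np.abs(d).sum(axis=1)])

        def censor_mask(motion, thresh=0.2):         # keep frames with FD <= 0.2 mm
            return framewise_displacement(motion) <= thresh

        rng = np.random.default_rng(3)
        motion = np.cumsum(rng.normal(0, 0.05, (200, 6)), axis=0)
        keep = censor_mask(motion)
        print("keeping", int(keep.sum()), "of", keep.size, "volumes")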

  14. An alternative index of satellite telemetry location error

    USGS Publications Warehouse

    Keating, Kim A.

    1994-01-01

    Existing indices of satellite telemetry error offer objective standards for censoring poor locations, but have drawbacks. Examining distances and relative directions between consecutive satellite telemetry locations, I developed an alternative error index, ξ, and compared its performance with that of the location quality index, NQ (Serv. Argos 1988). In controlled tests, ξ was more effective (P ≤ 0.005) for improving precision than was a threshold of NQ > 1. The ξ index also conferred greater control over the trade-off between sample size and precision, making ξ more cost-effective than NQ. Performances of ξ and NQ were otherwise comparable. In field tests with bighorn sheep (Ovis canadensis), rejecting locations where ξ ≥ 1.5 km reduced location error (P < 0.05) relative to the NQ > 1 criterion, and 63% fewer data were censored, so that the extent of animals' movements was better indicated by using ξ rather than NQ. Because use of ξ may lead to underestimating the number of long-range, short-term forays (especially when the frequency of forays is high relative to sampling frequency), potential bias should be considered before using ξ. Nonetheless, ξ should be a useful alternative to NQ in many animal-tracking studies.

  15. A quality-of-life-oriented endpoint for comparing therapies.

    PubMed

    Gelber, R D; Gelman, R S; Goldhirsch, A

    1989-09-01

    An endpoint, time without symptoms of disease and toxicity of treatment (TWiST), is defined to provide a single measure of the length and quality of survival. Time with subjective side effects of treatment and time with unpleasant symptoms of disease are subtracted from overall survival time to calculate TWiST for each patient. The purpose of this paper is to describe the construction of this endpoint and to elaborate on its interpretation for patient-care decision-making. Estimating the distribution of TWiST using actuarial methods is shown by simulation studies to be biased, as a result of the induced dependency between TWiST and its censoring distribution. Considering the distribution of TWiST accumulated within a specified time L from the start of therapy allows one to reduce this bias by substituting estimated TWiST for censored values, and provides a method to evaluate the "payback" period for early toxic effects. Quantile distance plots provide graphical representations for treatment comparisons. The analysis of Ludwig Trial III, evaluating toxic adjuvant therapies versus a no-treatment control group for postmenopausal women with node-positive breast cancer, illustrates the methodology.
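
    The endpoint itself is simple arithmetic: subtract time with treatment toxicity and time with relapse symptoms from overall survival. A one-function sketch with invented numbers (months):

        # TWiST for one patient: overall survival minus time with treatment
        # toxicity and time with relapse symptoms. Numbers are illustrative.
        def twist(overall_survival, time_with_toxicity, time_with_symptoms):
            return overall_survival - time_with_toxicity - time_with_symptoms

        print(twist(60.0, 6.0, 9.0))   # 45.0 symptom- and toxicity-free months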

  16. Bayesian WLS/GLS regression for regional skewness analysis for regions with large crest stage gage networks

    USGS Publications Warehouse

    Veilleux, Andrea G.; Stedinger, Jery R.; Eash, David A.

    2012-01-01

    This paper summarizes methodological advances in regional log-space skewness analyses that support flood-frequency analysis with the log Pearson Type III (LP3) distribution. A Bayesian Weighted Least Squares/Generalized Least Squares (B-WLS/B-GLS) methodology that relates observed skewness coefficient estimators to basin characteristics in conjunction with diagnostic statistics represents an extension of the previously developed B-GLS methodology. B-WLS/B-GLS has been shown to be effective in two California studies. B-WLS/B-GLS uses B-WLS to generate stable estimators of model parameters and B-GLS to estimate the precision of those B-WLS regression parameters, as well as the precision of the model. The study described here employs this methodology to develop a regional skewness model for the State of Iowa. To provide cost-effective peak-flow data for smaller drainage basins in Iowa, the U.S. Geological Survey operates a large network of crest stage gages (CSGs) that only record flow values above an identified recording threshold (thus producing a censored data record). CSGs are different from continuous-record gages, which record almost all flow values and have been used in previous B-GLS and B-WLS/B-GLS regional skewness studies. The complexity of analyzing a large CSG network is addressed by using the B-WLS/B-GLS framework along with the Expected Moments Algorithm (EMA). Because EMA allows for the censoring of low outliers, as well as the use of estimated interval discharges for missing, censored, and historic data, it complicates the calculations of effective record length (and effective concurrent record length) used to describe the precision of sample estimators, because the peak discharges are no longer solely represented by single values. Thus new record length calculations were developed. The regional skewness analysis for the State of Iowa illustrates the value of the new B-WLS/B-GLS methodology with these new extensions.

  17. Practical application of cure mixture model for long-term censored survivor data from a withdrawal clinical trial of patients with major depressive disorder.

    PubMed

    Arano, Ichiro; Sugimoto, Tomoyuki; Hamasaki, Toshimitsu; Ohno, Yuko

    2010-04-23

    Survival analysis methods such as the Kaplan-Meier method, the log-rank test, and Cox proportional hazards regression (Cox regression) are commonly used to analyze data from randomized withdrawal studies in patients with major depressive disorder. Unfortunately, such methods may be inappropriate when long-term censored relapse-free times appear in the data, because the methods assume that, if complete follow-up were possible for all individuals, each would eventually experience the event of interest. In this paper, to analyze data including such long-term censored relapse-free times, we discuss a semi-parametric cure regression (Cox cure regression), which combines a logistic formulation for the probability of occurrence of an event with a Cox proportional hazards specification for the time of occurrence of the event. In specifying the treatment's effect on disease-free survival, we consider both the fraction of long-term survivors and the risks associated with a relapse of the disease. In addition, we develop a tree-based method for time-to-event data to identify groups of patients with differing prognoses (cure survival CART). Although recursive partitioning procedures typically adapt the log-rank statistic, the method applied here uses a likelihood ratio (LR) test statistic from a fit of cure survival regression assuming exponential and Weibull distributions for the latency time to relapse. The method is illustrated using data from a sertraline randomized withdrawal study in patients with major depressive disorder. We conclude that Cox cure regression reveals who may be cured and how the treatment and other factors affect the cure incidence and the relapse time of uncured patients, and that cure survival CART output provides easily understandable and interpretable information, useful both for identifying groups of patients with differing prognoses and for building Cox cure regression models that lead to meaningful interpretations.
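
    A minimal sketch of the mixture cure likelihood underlying Cox cure regression, with a logistic model for the cure probability and, for simplicity, an exponential latency distribution in place of the Cox specification; the data and parameter values are simulated for illustration:

        # Mixture cure likelihood: S_pop(t|x) = pi(x) + (1 - pi(x)) * S_u(t),
        # with logistic cure probability pi(x) and exponential latency S_u.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        def cure_negloglik(params, x, t, event):
            b0, b1, log_lam = params
            pi_cured = expit(b0 + b1 * x)             # P(long-term survivor | x)
            lam = np.exp(log_lam)
            surv = np.exp(-lam * t)                   # S_u(t) for the uncured
            dens = lam * surv                         # f_u(t)
            ll = np.where(event == 1,
                          np.log((1 - pi_cured) * dens + 1e-12),
                          np.log(pi_cured + (1 - pi_cured) * surv + 1e-12))
            return -ll.sum()

        rng = np.random.default_rng(4)
        n = 400
        x = rng.integers(0, 2, n)                     # e.g. treatment arm
        cured = rng.random(n) < expit(-1.0 + 1.5 * x)
        t_rel, t_cens = rng.exponential(2.0, n), rng.uniform(0, 10, n)
        t = np.where(cured, t_cens, np.minimum(t_rel, t_cens))
        event = np.where(cured, 0, (t_rel <= t_cens).astype(int))

        res = minimize(cure_negloglik, x0=[0.0, 0.0, 0.0], args=(x, t, event))
        print(res.x)   # cure-logit intercept/slope, log relapse rate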

  18. Risk-Stratified Imputation in Survival Analysis

    PubMed Central

    Kennedy, Richard E.; Adragni, Kofi P.; Tiwari, Hemant K.; Voeks, Jenifer H.; Brott, Thomas G.; Howard, George

    2013-01-01

    Background: Censoring that is dependent on covariates associated with survival can arise in randomized trials due to changes in recruitment and eligibility criteria to minimize withdrawals, potentially leading to biased treatment effect estimates. Imputation approaches have been proposed to address censoring in survival analysis; while these approaches may provide unbiased estimates of treatment effects, imputation of a large number of outcomes may over- or underestimate the associated variance, depending on the imputation pool selected. Purpose: We propose an improved method, risk-stratified imputation, as an alternative to address withdrawal related to the risk of events in the context of time-to-event analyses. Methods: Our algorithm performs imputation from a pool of replacement subjects with similar values of both treatment and covariate(s) of interest, that is, from a risk-stratified sample. This stratification prior to imputation addresses the requirement of time-to-event analysis that censored observations be representative of all other observations in the risk group with similar exposure variables. We compared our risk-stratified imputation to case deletion and bootstrap imputation in a simulated dataset in which the covariate of interest (study withdrawal) was related to treatment. A motivating example from a recent clinical trial is also presented to demonstrate the utility of our method. Results: In our simulations, risk-stratified imputation gives estimates of treatment effect comparable to bootstrap and auxiliary variable imputation while avoiding the inaccuracies of the latter two in estimating the associated variance. Similar results were obtained in analysis of clinical trial data. Limitations: Risk-stratified imputation has little advantage over other imputation methods when covariates of interest are not related to treatment, although its performance is superior when covariates are related to treatment. Risk-stratified imputation is intended for categorical covariates, and may be sensitive to the width of the matching window if continuous covariates are used. Conclusions: The use of risk-stratified imputation should facilitate the analysis of many clinical trials in which one group has a higher withdrawal rate that is related to treatment. PMID:23818434
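
    A schematic of the core imputation step, assuming a single categorical stratum variable: each censored subject draws a replacement outcome from same-stratum subjects still at risk beyond the censoring time. This is a sketch of the idea, not the authors' exact algorithm:

        # Each censored subject draws a replacement (time, event) pair from
        # same-stratum subjects with follow-up beyond the censoring time.
        import numpy as np

        def impute_censored(time, event, stratum, rng):
            time, event = time.copy(), event.copy()
            for i in np.where(event == 0)[0]:
                pool = np.where((stratum == stratum[i]) & (time > time[i]))[0]
                if pool.size:                        # otherwise leave censored
                    j = rng.choice(pool)
                    time[i], event[i] = time[j], event[j]
            return time, event

        rng = np.random.default_rng(5)
        time = np.array([2., 5., 3., 7., 4., 6.])
        event = np.array([1, 0, 1, 1, 0, 1])
        stratum = np.array([0, 0, 0, 1, 1, 1])       # e.g. treatment arm
        print(impute_censored(time, event, stratum, rng))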

  19. Dutch Tariff for the Five-Level Version of EQ-5D.

    PubMed

    M Versteegh, Matthijs; M Vermeulen, Karin; M A A Evers, Silvia; de Wit, G Ardine; Prenger, Rilana; A Stolk, Elly

    2016-06-01

    In 2009, a new version of the EuroQol five-dimensional questionnaire (EQ-5D) was introduced with five rather than three answer levels per dimension. This instrument is known as the EQ-5D-5L. To make the EQ-5D-5L suitable for use in economic evaluations, societal values need to be attached to all 3125 health states. Our objective was to derive a Dutch tariff for the EQ-5D-5L. Health state values were elicited during face-to-face interviews in a general population sample stratified for age, sex, and education, using composite time trade-off (cTTO) and a discrete choice experiment (DCE). Data were modeled using ordinary least squares and tobit regression (for cTTO) and a multinomial conditional logit model (for DCE). Model performance was evaluated on the basis of internal consistency, parsimony, goodness of fit, handling of left-censored values, and theoretical considerations. A representative sample (N = 1003) of the Dutch population participated in the valuation study. Data from 979 and 992 respondents were included in the analysis of the cTTO and the DCE, respectively. The cTTO data were left-censored at -1. The tobit model was considered the preferred model for the tariff on the basis of its handling of the censored nature of the data, which was confirmed through comparison with the DCE data. The predicted values for the EQ-5D-5L ranged from -0.446 to 1. This study established a Dutch tariff for the EQ-5D-5L on the basis of cTTO. The values represent the preferences of the Dutch population. The tariff can be used to estimate the impact of health care interventions on quality of life, for example in the context of economic evaluations. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  20. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    PubMed

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.

  1. Comparison of the long-term outcomes of kidney transplantation: USA versus Spain

    PubMed Central

    Ojo, Akinlolu O.; Morales, José María; González-Molina, Miguel; Steffick, Diane E.; Luan, Fu L.; Merion, Robert M.; Ojo, Tammy; Moreso, Francesc; Arias, Manuel; Campistol, Josep María; Hernandez, Domingo; Serón, Daniel

    2013-01-01

    Background: The long-term outcomes of kidney transplantation are suboptimal because many patients lose their allografts or experience premature death. Cross-country comparisons of long-term outcomes of kidney transplantation may provide insight into factors contributing to premature graft failure and death. We evaluated the rates of late graft failure and death among US and Spanish kidney recipients. Methods: This is a cohort study of US (n = 9609) and Spanish (n = 3808) patients who received a deceased donor kidney transplant in 1990, 1994, 1998 or 2002 and had a functioning allograft 1 year after transplantation with follow-up through September 2006. Ten-year overall and death-censored graft survival and 10-year overall recipient survival and death with graft function (DWGF) were estimated with multivariate Cox models. Results: Among recipients alive with graft function 1 year after transplant, the 10-year graft survival was 71.3% for Spanish and 53.4% for US recipients (P < 0.001). The 10-year, death-censored graft survival was 75.6 and 76.0% for Spanish and US recipients, respectively (P = 0.73). The 10-year recipient survival was 86.2% for Spanish and 67.4% for US recipients (P < 0.001). In recipients with diabetes as the cause of ESRD, the adjusted DWGF rates at 10 years were 23.9 and 53.8 per 1000 person-years for Spanish and US recipients, respectively (P < 0.001). Among recipients whose cause of ESRD was not diabetes mellitus, the adjusted 10-year DWGF rates were 11.0 and 25.4 per 1000 person-years for Spanish and US recipients, respectively. Conclusions: US kidney transplant recipients had more than twice the long-term hazard of DWGF compared with Spanish kidney transplant recipients and similar levels of death-censored graft function. Pre-transplant medical care, comorbidities, such as cardiovascular disease, and their management in each country's health system are possible explanations for the differences between the two countries. PMID:22759384

  2. Comparison of the long-term outcomes of kidney transplantation: USA versus Spain.

    PubMed

    Ojo, Akinlolu O; Morales, José María; González-Molina, Miguel; Steffick, Diane E; Luan, Fu L; Merion, Robert M; Ojo, Tammy; Moreso, Francesc; Arias, Manuel; Campistol, Josep María; Hernandez, Domingo; Serón, Daniel

    2013-01-01

    The long-term outcomes of kidney transplantation are suboptimal because many patients lose their allografts or experience premature death. Cross-country comparisons of long-term outcomes of kidney transplantation may provide insight into factors contributing to premature graft failure and death. We evaluated the rates of late graft failure and death among US and Spanish kidney recipients. This is a cohort study of US (n = 9609) and Spanish (n = 3808) patients who received a deceased donor kidney transplant in 1990, 1994, 1998 or 2002 and had a functioning allograft 1 year after transplantation with follow-up through September 2006. Ten-year overall and death-censored graft survival and 10-year overall recipient survival and death with graft function (DWGF) were estimated with multivariate Cox models. Among recipients alive with graft function 1 year after transplant, the 10-year graft survival was 71.3% for Spanish and 53.4% for US recipients (P < 0.001). The 10-year, death-censored graft survival was 75.6 and 76.0% for Spanish and US recipients, respectively (P = 0.73). The 10-year recipient survival was 86.2% for Spanish and 67.4% for US recipients (P < 0.001). In recipients with diabetes as the cause of ESRD, the adjusted DWGF rates at 10 years were 23.9 and 53.8 per 1000 person-years for Spanish and US recipients, respectively (P < 0.001). Among recipients whose cause of ESRD was not diabetes mellitus, the adjusted 10-year DWGF rates were 11.0 and 25.4 per 1000 person-years for Spanish and US recipients, respectively. US kidney transplant recipients had more than twice the long-term hazard of DWGF compared with Spanish kidney transplant recipients and similar levels of death-censored graft function. Pre-transplant medical care, comorbidities, such as cardiovascular disease, and their management in each country's health system are possible explanations for the differences between the two countries.

  3. The Application of Censored Regression Models in Low Streamflow Analyses

    NASA Astrophysics Data System (ADS)

    Kroll, C.; Luz, J.

    2003-12-01

    Estimation of low streamflow statistics at gauged and ungauged river sites is often a daunting task. This process is further confounded by the presence of intermittent streamflows, where streamflow is sometimes reported as zero, within a region. Streamflows recorded as zero may truly be zero, or may be less than the measurement detection limit; such data are often referred to as censored data. Numerous methods have been developed to characterize intermittent streamflow series. Logit regression has been proposed to develop regional models of the probability that annual lowflow series (such as 7-day lowflows) are zero. In addition, Tobit regression, a regression method that allows for censored dependent variables, has been proposed for regional lowflow regression models in regions where the lowflow statistic of interest is estimated as zero at some sites. While these methods have been proposed, their use in practice has been limited. Here a delete-one jackknife simulation is presented to examine the performance of Logit and Tobit models of 7-day annual minimum flows in 6 USGS water resource regions of the United States. For the Logit model, an assessment is made of whether sites are correctly classified as having at least 10% of 7-day annual lowflows equal to zero; in such a situation, the 7-day, 10-year lowflow (Q710), a commonly employed low streamflow statistic, would be reported as zero. For the Tobit model, a comparison is made between results from the Tobit model and from performing either ordinary least squares (OLS) or principal component regression (PCR) after the zero sites are dropped from the analysis. Initial results indicate the Logit model has a high probability of correctly classifying sites into groups with zero and nonzero Q710s. Initial results also indicate that the Tobit model produces better results than PCR and OLS when more than 5% of the sites in the region have Q710 values calculated as zero.
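
    A minimal sketch of the Logit step: regress an indicator that Q710 is zero on basin characteristics and predict the probability of a zero lowflow at new sites. The two basin descriptors and the coefficients below are invented for illustration:

        # Logit model for P(Q710 = 0) from hypothetical basin characteristics.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(6)
        n = 300
        log_area = np.log(rng.lognormal(4, 1, n))     # hypothetical drainage area
        baseflow = rng.uniform(0, 1, n)               # hypothetical baseflow index
        X = np.column_stack([log_area, baseflow])
        p_zero = 1 / (1 + np.exp(-(2.0 - 0.5 * log_area - 3.0 * baseflow)))
        q710_zero = rng.random(n) < p_zero

        logit = LogisticRegression().fit(X, q710_zero)
        print(logit.predict_proba(X[:5])[:, 1])       # P(Q710 = 0) at first sites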

  4. ["That flesh, pink and perishable": analysis of disease-free survival analysis in breast cancer in Gipuzkoa (Spain) in the presence of competing risks].

    PubMed

    Martínez-Camblor, Pablo; Larrañaga, Nerea; Sarasqueta, Cristina; Mitxelena, María José; Basterretxea, Mikel

    2009-01-01

    To analyze the time of disease-free survival and the relative survival of women diagnosed with breast cancer in the province of Gipuzkoa within a competing-risks framework, assessing the differences between the direct use of the Kaplan-Meier estimator and the multiple decrement method on the one hand, and relative survival on the other. All registered breast cancer cases in Gipuzkoa in 1995 and 1996 with stages other than stage IV were included. An 8-year follow-up for recurrence and a 10-year follow-up for survival were performed. Time of disease-free survival was studied with the multiple decrement model. Observed survival and survival corrected for the expected mortality of the population (relative survival) were also studied. The estimated probability of recurrence at 8 years with the multiple decrement method was 8.8% lower than that obtained with the Kaplan-Meier method. The difference between the observed and relative survival rates at 10 years was 10.8%. Both results show how, in this case, the Kaplan-Meier estimator overestimates both the probability of recurrence and that of mortality from the disease. Two issues are often overlooked when performing survival analyses: first, when survival time and censoring time are not independent, the results obtained with the Kaplan-Meier estimator are not interpretable; second, it is an incontrovertible fact that, one way or another, everyone eventually experiences the failure event. Under this approach, survival analyses must take into account the probability of failure in the general reference population. The results obtained in this study show that naive use of the Kaplan-Meier estimator overestimates both the probability of recurrence and that of mortality caused by the disease.
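
    The contrast the authors draw can be reproduced in a few lines: a multiple-decrement (cumulative incidence) estimate for one cause against the naive 1 - Kaplan-Meier estimate that treats the competing cause as censoring. A sketch on simulated data, where the naive estimate is biased upward:

        # Cumulative incidence (multiple decrement) for cause 1 versus the
        # naive 1 - KM that treats the competing cause as censoring.
        import numpy as np

        def cuminc(time, cause, k, grid):             # cause: 0 censored, 1..K causes
            order = np.argsort(time)
            time, cause = time[order], cause[order]
            n = len(time)
            at_risk = n - np.arange(n)
            surv, cif, out, j = 1.0, 0.0, [], 0
            for g in grid:
                while j < n and time[j] <= g:
                    if cause[j] == k:
                        cif += surv / at_risk[j]      # S(t-) times hazard increment
                    if cause[j] != 0:
                        surv *= 1.0 - 1.0 / at_risk[j]
                    j += 1
                out.append(cif)
            return np.array(out)

        def one_minus_km(time, is_event, grid):
            order = np.argsort(time)
            time, is_event = time[order], is_event[order]
            n = len(time)
            at_risk = n - np.arange(n)
            s, out, j = 1.0, [], 0
            for g in grid:
                while j < n and time[j] <= g:
                    if is_event[j]:
                        s *= 1.0 - 1.0 / at_risk[j]
                    j += 1
                out.append(1.0 - s)
            return np.array(out)

        rng = np.random.default_rng(7)
        n = 2000
        t1, t2 = rng.exponential(5.0, n), rng.exponential(8.0, n)  # recurrence, death
        c = rng.uniform(0, 10, n)
        t = np.minimum.reduce([t1, t2, c])
        cause = np.where(c <= np.minimum(t1, t2), 0, np.where(t1 <= t2, 1, 2))
        grid = [2.0, 4.0, 6.0, 8.0]
        print(cuminc(t, cause, 1, grid))              # multiple-decrement estimate
        print(one_minus_km(t, cause == 1, grid))      # naive 1 - KM, biased upward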

  5. Pointright: a system to redirect mouse and keyboard control among multiple machines

    DOEpatents

    Johanson, Bradley E [Palo Alto, CA; Winograd, Terry A [Stanford, CA; Hutchins, Gregory M [Mountain View, CA

    2008-09-30

    The present invention provides a software system, PointRight, that allows for smooth and effortless control of pointing and input devices among multiple displays. With PointRight, a single free-floating mouse and keyboard can be used to control multiple screens. When the cursor reaches the edge of a screen it seamlessly moves to the adjacent screen and keyboard control is simultaneously redirected to the appropriate machine. Laptops may also redirect their keyboard and pointing device, and multiple pointers are supported simultaneously. The system automatically reconfigures itself as displays go on, go off, or change the machine they display.

  6. Estimating vehicle height using homographic projections

    DOEpatents

    Cunningham, Mark F; Fabris, Lorenzo; Gee, Timothy F; Ghebretati, Jr., Frezghi H; Goddard, James S; Karnowski, Thomas P; Ziock, Klaus-peter

    2013-07-16

    Multiple homography transformations corresponding to different heights are generated in the field of view. A group of salient points within a common estimated height range is identified in a time series of video images of a moving object. Inter-salient point distances are measured for the group of salient points under the multiple homography transformations corresponding to the different heights. Variations in the inter-salient point distances under the multiple homography transformations are compared. The height of the group of salient points is estimated to be the height corresponding to the homography transformation that minimizes the variations.

  7. Cox regression analysis with missing covariates via nonparametric multiple imputation.

    PubMed

    Hsu, Chiu-Hsieh; Yu, Mandi

    2018-01-01

    We consider the problem of estimating a Cox regression in which some covariates are subject to missingness, and in which there exists additional information (including observed event time, censoring indicator, and fully observed covariates) that may be predictive of the missing covariates. We propose to use two working regression models: one for predicting the missing covariates and the other for predicting the missing probabilities. For each missing covariate observation, these two working models are used to define a nearest-neighbor imputing set. This set is then used to non-parametrically impute covariate values for the missing observation. Upon completion of imputation, Cox regression is performed on the multiply imputed datasets to estimate the regression coefficients. In a simulation study, we compare the nonparametric multiple imputation approach with the augmented inverse probability weighted (AIPW) method, which directly incorporates the two working models into estimation of the Cox regression, and with the predictive mean matching (PMM) imputation method. We show that all approaches can reduce bias due to a non-ignorable missing mechanism. The proposed nonparametric imputation method is robust to mis-specification of either one of the two working models and to mis-specification of their link functions. In contrast, the PMM method is sensitive to mis-specification of the covariates included in imputation, and the AIPW method is sensitive to the selection probability. We apply the approaches to a breast cancer dataset from the Surveillance, Epidemiology and End Results (SEER) Program.
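
    A schematic of the nearest-neighbor imputing set, simplified to one missing covariate: two working scores (a prediction of the covariate and a predicted missingness probability) define the distance, and each missing value is drawn from its nearest observed neighbors. The working-model choices below are illustrative stand-ins, not the authors' exact models:

        # Two working scores define distance; each missing value is drawn
        # from its nearest observed neighbors (the imputing set).
        import numpy as np
        from sklearn.linear_model import LinearRegression, LogisticRegression

        def nn_impute(z, obs, W, n_neighbors=5, rng=None):
            # z: covariate with missing entries; obs: boolean observed mask;
            # W: fully observed predictors (e.g. log time, event indicator)
            rng = rng or np.random.default_rng()
            score1 = LinearRegression().fit(W[obs], z[obs]).predict(W)
            score2 = LogisticRegression().fit(W, obs.astype(int)).predict_proba(W)[:, 1]
            s = np.column_stack([score1, score2])
            s = (s - s.mean(0)) / s.std(0)           # put both scores on one scale
            z_imp = z.copy()
            for i in np.where(~obs)[0]:
                d = np.linalg.norm(s[obs] - s[i], axis=1)
                nbrs = np.argsort(d)[:n_neighbors]   # nearest-neighbor imputing set
                z_imp[i] = rng.choice(z[obs][nbrs])
            return z_imp

        rng = np.random.default_rng(8)
        n = 300
        W = rng.normal(size=(n, 2))
        z = W @ np.array([1.0, -0.5]) + rng.normal(size=n)
        obs = rng.random(n) > 0.3                    # ~30% missing
        print(nn_impute(z, obs, W, rng=rng)[~obs][:5])

    Repeating the draw several times and combining the Cox fits from each completed dataset by Rubin's rules would complete the multiple imputation.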

  8. What Determines Basic School Attainment in Developing Countries? Evidence from Rural China

    ERIC Educational Resources Information Center

    Zhao, Meng; Glewwe, Paul

    2010-01-01

    This paper analyzes recent household survey data from Gansu, a less developed province in Northwest China, to examine school attainment in a poor rural area of China. Censored ordered probit regressions are used to estimate the determinants of years of schooling. Child nutritional status, as measured by height-for-age Z-scores, and household…

  9. The Fear of the Word: Censorship and Sex.

    ERIC Educational Resources Information Center

    Oboler, Eli M.

    This book discusses censorship and sex through the ages. The 15 chapters focus on the following topics: (1) the bases of censorship; (2) the concept of taboo; (3) the role of words in the control of people's thought; (4) Hellenism, stoicism, and censorship; (5) the Judeo-Christian influence; (6) the puritan and the censor; (7) religion and…

  10. The Relationship between Censorship and the Emotional and Critical Tone of Television News Coverage of the Persian Gulf War.

    ERIC Educational Resources Information Center

    Newhagen, John E.

    1994-01-01

    Analyzes television news stories broadcast during the Persian Gulf War for censorship disclaimers, the censoring source, and the producing network. Discusses results in terms of both production- and viewer-based differences. Considers the question of whether censorship "works" in terms of unanticipated results related to story…

  11. The Contribution of Jayaprakash Narayan in Preserving Free Expression in India.

    ERIC Educational Resources Information Center

    Merriam, Allen H.

    India's policy of free speech suffered a severe if temporary setback in the 1970s. Prime Minister Indira Gandhi declared a 19-month state of emergency, during which 150,000 people were arrested, newspapers were censored, and dissent was essentially eliminated. A central figure in the confrontation with the Indian government over political…

  12. A comparison of techniques for assessing central tendency in left-censored data using PCB and p,p'DDE contaminant concentrations from Michigan's bald eagle biosentinel program

    Treesearch

    Katherine F. Leith; William W. Bowerman; Michael R. Wierda; Dave A. Best; Teryl G. Grubb; James G. Sikarske

    2010-01-01

    Monitoring of contaminants in the environment is an important part of understanding the fate of ecosystems after a chemical insult. Frequently, such monitoring efforts result in datasets with observations below the detection limit (DL) that are reported as 'non-detect' or '

  13. The Lost Caravan: The Rise and Fall of Al Qaeda in Iraq, 2003-2007

    DTIC Science & Technology

    2010-06-01

    …places of worship, and monitored and censored the publication of religious literature. The ministry served as a source of intelligence for the regime… In 2007, President Bush noted that, “It is clear that we need to change our strategy in Iraq… So America will change our strategy to help the Iraqis…

  14. Identifying and Combating Sexism in EFL Textbooks--With a Case Study into China

    ERIC Educational Resources Information Center

    Tao, Baiqiang

    2008-01-01

    This paper explores methodologies for identifying and combating sexism in EFL (English as a Foreign Language) textbooks. The writer finds that sexism, or gender inequality, exists in Chinese high school EFL textbooks. The writer hopes that future EFL curriculum designers, EFL textbook writers, and textbook censors in textbook…

  15. Volatile Knowing: Parents, Teachers, and the Censored Story of Accountability in America's Public Schools

    ERIC Educational Resources Information Center

    Tollefson, Kaia

    2008-01-01

    "Volatile Knowing" refers to the potential for positive change that can result when parents and teachers talk with each other about the politics and policies of externally defined accountability mandates in education. This text tells the story of twelve teachers and parents who breached the unofficial, but deeply inscribed home/school divide to…

  16. Human Capital Background and the Educational Attainment of Second-Generation Immigrants in France

    ERIC Educational Resources Information Center

    Dos Santos, Manon Domingues; Wolff, Francois-Charles

    2011-01-01

    In this paper, we study the impact of parental human capital background on ethnic educational gaps among second-generation immigrants, using a large data set collected in France in 2003. Estimates from censored random-effects ordered probit regressions show that the skills of immigrants explain, for the most part, the ethnic educational gap between…

  17. Conditional optimal spacing in exponential distribution.

    PubMed

    Park, Sangun

    2006-12-01

    In this paper, we propose the conditional optimal spacing, defined as the optimal spacing after specifying a predetermined order statistic. If we specify a censoring time, then the optimal inspection times for grouped inspection can be determined from this conditional optimal spacing. We take the exponential distribution as an example and provide a simple method of finding the conditional optimal spacing.

  18. Fatal Attraction: The Selling of Addiction. Special Double Issue.

    ERIC Educational Resources Information Center

    Silver, Rosalind, Ed.; Thoman, Elizabeth, Ed.

    1991-01-01

    This issue of "Media & Values" provides essays and teaching ideas for addressing the influence of the media in society and the growing incidence of addiction. Articles in this issue include: (1) "Culture of Addiction" (Rosalind Silver); (2) "Crack and the Box" (Pete Hamill); (3) "When It Comes to Drugs, Beware the Censor's Fix" (David Musto); (4)…

  19. Appeasing the Censors. A Special Report on Campus Free Speech Abuses.

    ERIC Educational Resources Information Center

    Csorba, Les, III

    The issue of censorship of conservative speakers at major universities by students on the political left is discussed. It is argued that many campus liberals are being duped by committed radicals into supporting radical causes, and that in many cases, campus liberals betray the tradition of intellectual tolerance. The following concerns are posed:…

  20. Liberating Literacy under Threat: Re-Reading James Moffett's "Storm in the Mountains"

    ERIC Educational Resources Information Center

    Mayher, John

    2010-01-01

    James Moffett's "Storm in the Mountains: A Case Study of Censorship, Conflict, and Consciousness" remains as relevant today as it was when it was published in 1988 for those who want to understand the nature and sources of contemporary conflicts in American language and literacy education. Censors continue to try to restrict student…

  1. Satanic Portals and Sex-Saturated Books: Parent Complaints about English Texts in NSW Protestant Schools

    ERIC Educational Resources Information Center

    Hastie, David

    2014-01-01

    Exploring Australian school text censorship has a long pedigree in "English in Australia." This article aims to contribute by observing attempts by parents to censor English texts in religious schools. A brief summary of "EIA"'s previous approaches is followed by an explanation of the method of my research with NSW Protestant…

  2. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications

    PubMed Central

    Saeedi, Ehsan; Kong, Yinan

    2017-01-01

    In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary fields recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports both Koblitz and random curves for key sizes of 233 and 163 bits. For the group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison, taking around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance (1/(Area × Time) = 1/AT) and the Area × Time × Energy (ATE) product of the proposed design are far better than those of the most significant studies found in the literature. PMID:28459831

  3. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications.

    PubMed

    Hossain, Md Selim; Saeedi, Ehsan; Kong, Yinan

    2017-01-01

    In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary fields recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports both Koblitz and random curves for key sizes of 233 and 163 bits. For the group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison, taking around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance ([Formula: see text]) and the Area × Time × Energy (ATE) product of the proposed design are far better than those of the most significant studies found in the literature.
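
    For readers unfamiliar with the operation being accelerated, a toy double-and-add point multiplication over a small prime field is sketched below. These papers work over NIST binary fields in Jacobian projective coordinates with a combined PDPA unit; this affine, prime-field version is only a conceptual illustration:

        # Toy affine double-and-add on y^2 = x^3 + 2x + 3 over GF(97);
        # purely conceptual, unlike the papers' binary-field Jacobian design.
        P_MOD, A = 97, 2

        def ec_add(P, Q):
            if P is None: return Q                   # None is the point at infinity
            if Q is None: return P
            (x1, y1), (x2, y2) = P, Q
            if x1 == x2 and (y1 + y2) % P_MOD == 0:
                return None
            if P == Q:                               # point doubling
                lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
            else:                                    # point addition
                lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
            x3 = (lam * lam - x1 - x2) % P_MOD
            return (x3, (lam * (x1 - x3) - y1) % P_MOD)

        def ec_mul(k, P):                            # left-to-right double-and-add
            R = None
            for bit in bin(k)[2:]:
                R = ec_add(R, R)                     # double every bit
                if bit == "1":
                    R = ec_add(R, P)                 # add on set bits
            return R

        print(ec_mul(5, (3, 6)))                     # (3, 6) lies on the toy curve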

  4. Understanding survival analysis: Kaplan-Meier estimate.

    PubMed

    Goel, Manish Kumar; Khanna, Pardeep; Kishore, Jugal

    2010-10-01

    The Kaplan-Meier estimate is one of the best options for measuring the fraction of subjects living for a certain amount of time after treatment. In clinical trials or community trials, the effect of an intervention is assessed by measuring the number of subjects who survived or were saved by that intervention over a period of time. The time starting from a defined point to the occurrence of a given event, for example death, is called the survival time, and the analysis of such grouped data is called survival analysis. This analysis can be affected by subjects who are uncooperative and refuse to remain in the study, by subjects who do not experience the event before the end of the study although they would have experienced it had observation continued, or by subjects with whom we lose touch midway through the study. We label these situations censored observations. The Kaplan-Meier estimate is the simplest way of computing survival over time in spite of all these difficulties associated with subjects or situations. The survival curve can be created assuming various situations. It involves computing the probability of occurrence of the event at a certain point in time and multiplying these successive probabilities by any earlier computed probabilities to obtain the final estimate. This can be calculated for two groups of subjects, and the statistical difference between their survival curves can be tested. The method can be used in Ayurveda research when comparing two drugs and looking at subjects' survival.
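
    A from-scratch sketch of the estimator as described: at each event time, multiply the running survival probability by the conditional probability of surviving that time, with censored subjects leaving the risk set without contributing events. The data values are invented:

        # Kaplan-Meier from scratch: successive conditional survival
        # probabilities at each event time.
        import numpy as np

        def kaplan_meier(time, event):
            order = np.argsort(time)
            time, event = time[order], event[order]
            n = len(time)
            at_risk = n - np.arange(n)               # subjects with time >= time[i]
            times, surv, s = [], [], 1.0
            for t in np.unique(time[event == 1]):
                d = np.sum((time == t) & (event == 1))       # events at t
                r = at_risk[np.searchsorted(time, t)]        # risk set size at t
                s *= 1.0 - d / r
                times.append(t)
                surv.append(s)
            return np.array(times), np.array(surv)

        time = np.array([5., 8., 8., 12., 15., 20., 22.])
        event = np.array([1, 1, 0, 1, 0, 1, 0])      # 0 marks censored subjects
        print(kaplan_meier(time, event))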

  5. Predicting the time of conversion to MCI in the elderly: role of verbal expression and learning.

    PubMed

    Oulhaj, Abderrahim; Wilcock, Gordon K; Smith, A David; de Jager, Celeste A

    2009-11-03

    Increasing awareness that minimal or mild cognitive impairment (MCI) in the elderly may be a precursor of dementia has led to an increase in the number of people attending memory clinics. We aimed to develop a way of predicting the period of time before cognitive impairment occurs in community-dwelling elderly people. The method is illustrated by the use of simple tests of different cognitive domains. A cohort of 241 normal elderly volunteers was followed for up to 20 years, with regular assessments of cognitive abilities using the Cambridge Cognitive Examination (CAMCOG); 91 participants developed MCI. We used interval-censored survival analysis to model which baseline cognitive tests best predicted the time to conversion to MCI. Of several baseline variables, only age and the CAMCOG subscores for expression and learning/memory were predictors of the time to conversion. The time to conversion was 14% shorter for each 5 years of age, 17% shorter for each point lower on the expression score, and 15% shorter for each point lower on the learning score. We present in tabular form the probability of converting to MCI over intervals of between 2 and 10 years for different combinations of expression and learning scores. In apparently normal elderly people, subtle measurable cognitive deficits that fall within the normal range on standard testing protocols reliably predict the time to clinically relevant cognitive impairment long before clinical symptoms are reported.
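
    A minimal sketch of an interval-censored likelihood of the kind used here: conversion is only known to lie between successive assessments (L, R], so each subject contributes S(L) - S(R), with R infinite for those never observed to convert. An exponential model with one invented covariate stands in for the authors' method:

        # Interval-censored likelihood: each subject contributes S(L) - S(R)
        # for the assessment interval (L, R] containing the conversion,
        # with R = inf if conversion was never observed.
        import numpy as np
        from scipy.optimize import minimize

        def ic_negloglik(params, L, R, x):
            b0, b1 = params
            lam = np.exp(b0 + b1 * x)                # subject-specific rate
            S = lambda u: np.exp(-lam * u)           # S(inf) evaluates to 0
            return -np.log(S(L) - S(R) + 1e-12).sum()

        rng = np.random.default_rng(9)
        n = 300
        x = rng.normal(size=n)                       # e.g. baseline learning score
        t = rng.exponential(1 / np.exp(-2.0 + 0.3 * x))
        visits = np.arange(0, 21, 2.0)               # biennial assessments
        idx = np.searchsorted(visits, t)             # first visit at or after t
        L = visits[np.minimum(idx, len(visits)) - 1]
        R = np.where(idx == len(visits), np.inf,
                     visits[np.minimum(idx, len(visits) - 1)])
        print(minimize(ic_negloglik, [0.0, 0.0], args=(L, R, x)).x)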

  6. Predicting risk of substantial weight gain in German adults-a multi-center cohort approach.

    PubMed

    Bachlechner, Ursula; Boeing, Heiner; Haftenberger, Marjolein; Schienkiewitz, Anja; Scheidt-Nave, Christa; Vogt, Susanne; Thorand, Barbara; Peters, Annette; Schipf, Sabine; Ittermann, Till; Völzke, Henry; Nöthlings, Ute; Neamat-Allah, Jasmine; Greiser, Karin-Halina; Kaaks, Rudolf; Steffen, Annika

    2017-08-01

    A risk-targeted prevention strategy may efficiently utilize limited resources available for prevention of overweight and obesity. Likewise, more efficient intervention trials could be designed if selection of subjects was based on risk. The aim of the study was to develop a risk score predicting substantial weight gain among German adults. We developed the risk score using information on 15 socio-demographic, dietary and lifestyle factors from 32 204 participants of five population-based German cohort studies. Substantial weight gain was defined as gaining ≥10% of weight between baseline and follow-up (>6 years apart). The cases were censored according to the theoretical point in time when the threshold of 10% baseline-based weight gain was crossed assuming linearity of weight gain. Beta coefficients derived from proportional hazards regression were used as weights to compute the risk score as a linear combination of the predictors. Cross-validation was used to evaluate the score's discriminatory accuracy. The cross-validated c index (95% CI) was 0.71 (0.67-0.75). A cutoff value of ≥475 score points yielded a sensitivity of 71% and a specificity of 63%. The corresponding positive and negative predictive values were 10.4% and 97.6%, respectively. The proposed risk score may support healthcare providers in decision making and referral and facilitate an efficient selection of subjects into intervention trials. © The Author 2016. Published by Oxford University Press on behalf of the European Public Health Association.
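
    A minimal sketch of the censoring rule just described, under the stated linearity assumption; the function, variable names, and numbers are illustrative, not from the study.

      def threshold_crossing_time(weight0, weight1, years, threshold=0.10):
          """Return (time, is_event): the time at which a straight line from
          baseline to follow-up weight crosses +`threshold` of baseline weight,
          or the full follow-up time as censored if it never does."""
          relative_gain = (weight1 - weight0) / weight0
          if relative_gain >= threshold:
              return years * threshold / relative_gain, True   # linear interpolation
          return years, False                                  # censored at follow-up

      print(threshold_crossing_time(80.0, 92.0, 7.0))  # 15% gain over 7 y: event at ~4.67 y
      print(threshold_crossing_time(80.0, 84.0, 7.0))  # 5% gain: censored at 7 y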

  7. Predicting risk of substantial weight gain in German adults—a multi-center cohort approach

    PubMed Central

    Bachlechner, Ursula; Boeing, Heiner; Haftenberger, Marjolein; Schienkiewitz, Anja; Scheidt-Nave, Christa; Vogt, Susanne; Thorand, Barbara; Peters, Annette; Schipf, Sabine; Ittermann, Till; Völzke, Henry; Nöthlings, Ute; Neamat-Allah, Jasmine; Greiser, Karin-Halina; Kaaks, Rudolf

    2017-01-01

    Background: A risk-targeted prevention strategy may efficiently utilize limited resources available for prevention of overweight and obesity. Likewise, more efficient intervention trials could be designed if selection of subjects was based on risk. The aim of the study was to develop a risk score predicting substantial weight gain among German adults. Methods: We developed the risk score using information on 15 socio-demographic, dietary and lifestyle factors from 32 204 participants of five population-based German cohort studies. Substantial weight gain was defined as gaining ≥10% of weight between baseline and follow-up (>6 years apart). The cases were censored according to the theoretical point in time when the threshold of 10% baseline-based weight gain was crossed assuming linearity of weight gain. Beta coefficients derived from proportional hazards regression were used as weights to compute the risk score as a linear combination of the predictors. Cross-validation was used to evaluate the score’s discriminatory accuracy. Results: The cross-validated c index (95% CI) was 0.71 (0.67–0.75). A cutoff value of ≥475 score points yielded a sensitivity of 71% and a specificity of 63%. The corresponding positive and negative predictive values were 10.4% and 97.6%, respectively. Conclusions: The proposed risk score may support healthcare providers in decision making and referral and facilitate an efficient selection of subjects into intervention trials. PMID:28013243

  8. Electronic Imaging

    DTIC Science & Technology

    1991-11-01

    Proceedings fragment (extraction-damaged); recoverable contents include the papers "…Tilted Rough Disc" by Donald J. Schertler and Nicholas George, and "Image Deblurring for Multiple-Point Impulse Responses" by Bryan J. Stossel and Nicholas George (Institute of Optics), the latter concerning the number of impulses present in the degradation.

  9. Determinants of Participation and Expenditure Patterns of Private Tuition Received by Primary School Students in Penang, Malaysia: An Exploratory Study

    ERIC Educational Resources Information Center

    Jelani, Juliana; Tan, Andrew K. G.

    2012-01-01

    In this exploratory study, the censored Tobit model is applied to primary data collected amongst parents of primary school students in Penang, Malaysia to examine the determinants of participation and expenditures on private tuition (PT). Results of the marginal effects indicate that socio-demographic characteristics--ethnicity, household income,…
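
    For readers unfamiliar with the technique, a minimal sketch of a left-censored Tobit likelihood follows, assuming censoring at zero expenditure; the variable names and simulated data are illustrative, not from the study.

      import numpy as np
      from scipy import optimize, stats

      def tobit_negloglik(params, X, y):
          beta, log_sigma = params[:-1], params[-1]
          sigma = np.exp(log_sigma)
          xb = X @ beta
          censored = y <= 0                              # zero reported expenditure
          ll = np.where(
              censored,
              stats.norm.logcdf(-xb / sigma),            # P(latent expenditure <= 0)
              stats.norm.logpdf((y - xb) / sigma) - np.log(sigma),
          )
          return -ll.sum()

      rng = np.random.default_rng(0)
      X = np.column_stack([np.ones(500), rng.normal(size=500)])  # intercept + income proxy
      latent = X @ np.array([-0.5, 1.0]) + rng.normal(size=500)
      y = np.maximum(latent, 0.0)                                # observed: censored at 0
      fit = optimize.minimize(tobit_negloglik, x0=np.zeros(3), args=(X, y))
      print(fit.x[:-1], np.exp(fit.x[-1]))                       # beta estimates, sigma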

  10. Ignorance Only: HIV/AIDS, Human Rights, and Federally Funded Abstinence-Only Programs in the United States. Texas: A Case Study.

    ERIC Educational Resources Information Center

    Schleifer, Rebecca

    2002-01-01

    This report contends that programs teaching teenagers to "just say no" to sex before marriage are threatening adolescent health by censoring basic information about how to prevent HIV/AIDS. The report focuses on federally funded "abstinence-only-until-marriage" programs in Texas, where advertising campaigns convey the message…

  11. Freedom of the Press For Whom? The Right of Access to Mass Media.

    ERIC Educational Resources Information Center

    Barron, Jerome A.

    Freedom of the press has always meant freedom from government control and censorship, but now the communications industry itself is the major censor, with control of access to the media resting in a very few hands. The new effort throughout the country to open up the media by applying citizen group pressure, by bringing court action, and by…

  12. Censoring History: Citizenship and Memory in Japan, Germany, and the United States. Asia and the Pacific.

    ERIC Educational Resources Information Center

    Hein, Laura, Ed.; Selden, Mark, Ed.

    This collection of essays focuses on textbook treatments of World War II in Japan, Germany, and the United States and gives readers a new perspective on the creation of national identities and international misunderstandings. Essays in the collection are: (1) "The Lessons of War, Global Power, and Social Change" (Laura Hein; Mark…

  13. Measuring the Determinants of School Completion in Pakistan: Analysis of Censoring and Selection Bias. Center Discussion Paper.

    ERIC Educational Resources Information Center

    Holmes, Jessica

    This paper explores the demand for child schooling in Pakistan, using the "Pakistan Integrated Household Survey" (1991). There have been few such studies for Pakistan, a country with relatively low enrollment rates and education levels, high illiteracy, and a large disparity between male and female education. This study focuses on two…

  14. Ideology, Utopia and Islam on Campus: How to Free Speech a Little from Its Own Terrors

    ERIC Educational Resources Information Center

    Scott-Baumann, Alison

    2017-01-01

    A dominant narrative on many British campuses is "Prevent," which is part of the government's counter-terror policy, an ideology based on fear. Muslims, in particular, are considered to be at risk of radicalisation on campus, and being under suspicion makes them self-censor. Additionally, the no-platforming student lobby creates a…

  15. Culture Shock: A Teacher's Guide To Accompany the Four-Part PBS Series Premiering January 2000.

    ERIC Educational Resources Information Center

    Jaffee, Cyrisse, Ed.; Sharma, Amina, Ed.

    This teacher's guide accompanies the four videos ("Born to Trouble: Adventures of Huckleberry Finn"; "The Shock of the Nude: Manet's Olympia"; "Hollywood Censored: Movies, Morality and the Production Code"; and "The Devil's Music: 1920s Jazz") of the PBS "Culture Shock" series. The guide suggests that the videos could be used in the…

  16. Internet censorship.

    PubMed

    1996-12-27

    The U.S. Supreme Court agreed to review a lower court ruling that found the Communications Decency Act to be an unconstitutional infringement on free speech. The judges from the U.S. District Court in Philadelphia said that parents should monitor material that children are exposed to on the Internet. AIDS groups that publish information on safer sex, HIV prevention and AIDS treatments are not responsible for censoring content.

  17. Censoring Textbooks: Is West Virginia the Tip of the Iceberg? A Transcript of "Options on Education," December 11, 1974.

    ERIC Educational Resources Information Center

    George Washington Univ., Washington, DC. Inst. for Educational Leadership.

    Interviews with several individuals representing a variety of viewpoints about the recent controversy regarding textbooks and philosophy in the Kanawha County, West Virginia, public schools are presented in this transcript of a National Public Radio program broadcast in December 1974. Beginning with a discussion of the issue of textbook selection…

  18. Nobody Says No: Student Self-Censorship in a Collaborative Knowledge Building Activity

    ERIC Educational Resources Information Center

    Roberts, Alan; Nason, Rod

    2011-01-01

    This paper explores student self-censorship within an online learning environment. Self-censorship in group activity can be seen as a two-edged sword. While it can be advantageous that a student censor personal frustration and angst when working with others, if the self-censorship impacts on the cognitive contribution a student makes then this may…

  19. Some New Perspectives on the First Amendment in the Schools.

    ERIC Educational Resources Information Center

    O'Neil, Robert M.

    The first textbook censorship challenge reached the courts in 1949, when a group of Jewish parents objected to Jewish stereotypes in literature used in the classrooms. This New York case is a reminder that (1) would-be censors do not always come from the far right; (2) while most censorship cases go to court for the purpose of having controversial…

  20. A threshold-free summary index of prediction accuracy for censored time to event data.

    PubMed

    Yuan, Yan; Zhou, Qian M; Li, Bingying; Cai, Hengrui; Chow, Eric J; Armstrong, Gregory T

    2018-05-10

    Prediction performance of a risk scoring system needs to be carefully assessed before its adoption in clinical practice. Clinical preventive care often uses risk scores to screen asymptomatic populations. The primary clinical interest is to predict the risk of having an event by a prespecified future time t0. Accuracy measures such as positive predictive values have been recommended for evaluating predictive performance. However, for commonly used continuous or ordinal risk score systems, these measures require a subjective cutoff threshold value that dichotomizes the risk scores. The need for a cutoff value has created barriers for practitioners and researchers. In this paper, we propose a threshold-free summary index of positive predictive values that accommodates time-dependent event status and competing risks. We develop a nonparametric estimator and provide an inference procedure for comparing this summary measure between 2 risk scores for censored time-to-event data. We conduct a simulation study to examine the finite-sample performance of the proposed estimation and inference procedures. Lastly, we illustrate the use of this measure on a real data example, comparing 2 risk score systems for predicting heart failure in childhood cancer survivors. Copyright © 2018 John Wiley & Sons, Ltd.
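
    The paper defines its own summary index; as a simplified illustration of the threshold-free idea, the sketch below merely averages the positive predictive value over every observed cutoff. It assumes fully observed binary outcomes at t0, whereas the proposed estimator additionally handles censoring and competing risks.

      import numpy as np

      def average_ppv(scores, outcomes):
          scores, outcomes = np.asarray(scores, float), np.asarray(outcomes, int)
          ppvs = []
          for cut in np.unique(scores):
              flagged = scores >= cut                 # screen-positive at this cutoff
              ppvs.append(outcomes[flagged].mean())   # PPV at this cutoff
          return float(np.mean(ppvs))                 # summary over all cutoffs

      scores = [0.1, 0.3, 0.4, 0.6, 0.8, 0.9]
      events = [0, 0, 1, 0, 1, 1]
      print(average_ppv(scores, events))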

  1. Missing Value Imputation Approach for Mass Spectrometry-based Metabolomics Data.

    PubMed

    Wei, Runmin; Wang, Jingye; Su, Mingming; Jia, Erik; Chen, Shaoqiu; Chen, Tianlu; Ni, Yan

    2018-01-12

    Missing values exist widely in mass-spectrometry (MS) based metabolomics data. Various methods have been applied for handling missing values, but the choice of method can significantly affect downstream data analyses. Typically, there are three types of missing values: missing not at random (MNAR), missing at random (MAR), and missing completely at random (MCAR). Our study comprehensively compared eight imputation methods (zero, half minimum (HM), mean, median, random forest (RF), singular value decomposition (SVD), k-nearest neighbors (kNN), and quantile regression imputation of left-censored data (QRILC)) for different types of missing values using four metabolomics datasets. Normalized root mean squared error (NRMSE) and NRMSE-based sum of ranks (SOR) were applied to evaluate imputation accuracy. Principal component analysis (PCA)/partial least squares (PLS)-Procrustes analysis was used to evaluate the overall sample distribution. Student's t-test followed by correlation analysis was conducted to evaluate the effects on univariate statistics. Our findings demonstrated that RF performed best for MCAR/MAR and QRILC was favored for left-censored MNAR. Finally, we proposed a comprehensive strategy and developed a publicly accessible web tool for missing value imputation in metabolomics ( https://metabolomics.cc.hawaii.edu/software/MetImp/ ).
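
    A small sketch of three of the compared strategies, zero, half-minimum (HM) and kNN imputation, on a toy matrix with values missing completely at random, scored by NRMSE as in the study; the data and parameters are illustrative.

      import numpy as np
      from sklearn.impute import KNNImputer

      rng = np.random.default_rng(1)
      truth = rng.lognormal(size=(20, 5))             # toy metabolite intensities
      data = truth.copy()
      mask = rng.random(data.shape) < 0.2             # 20% missing completely at random
      data[mask] = np.nan

      def nrmse(imputed):
          err = imputed[mask] - truth[mask]
          return np.sqrt(np.mean(err ** 2)) / np.std(truth[mask])

      zero_imp = np.nan_to_num(data, nan=0.0)
      hm = np.nanmin(data, axis=0) / 2                # half-minimum, per metabolite
      hm_imp = np.where(np.isnan(data), hm, data)
      knn_imp = KNNImputer(n_neighbors=5).fit_transform(data)

      for name, imp in [("zero", zero_imp), ("half-min", hm_imp), ("kNN", knn_imp)]:
          print(name, round(nrmse(imp), 3))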

  2. Joint modelling of longitudinal outcome and interval-censored competing risk dropout in a schizophrenia clinical trial

    PubMed Central

    Gueorguieva, Ralitza; Rosenheck, Robert; Lin, Haiqun

    2011-01-01

    The ‘Clinical Antipsychotic Trials of Intervention Effectiveness’ study was designed to evaluate whether there were significant differences between several antipsychotic medications in the effectiveness, tolerability, cost and quality of life of subjects with schizophrenia. Overall, 74% of patients discontinued the study medication for various reasons before the end of 18 months in phase I of the study. When such a large percentage of study participants fail to complete the study schedule, it is not clear whether the apparent profile in effectiveness reflects genuine changes over time or is influenced by selection bias, with participants with worse (or better) outcome values being more likely to drop out or to discontinue. To assess the effect of dropouts for different reasons on inferences, we construct a joint model for the longitudinal outcome and cause-specific dropouts that allows for interval-censored dropout times. Incorporating the information regarding the cause of dropout improves inferences and provides a better understanding of the association between cause-specific dropout and the outcome process. We use simulations to demonstrate the advantages of the joint modelling approach in terms of bias and efficiency. PMID:22468033

  3. Blasphemy or art: what art should be censored and who wants to censor it?

    PubMed

    Dunkel, Curtis S; Hillard, Erin E

    2014-01-01

    Current events have marked the increasing tension between freedom of artistic expression and religious tolerance and sensitivity. While there have been several controversies in the West concerning art critical of Christianity, a more complex dynamic has arisen as some Western artists have created art critical of Islam. Research was undertaken to examine which aspects of artwork elicit the most aversive reactions and the strongest desire to ban art, and to examine individual differences in responses to controversial art. Of particular interest was the response to artwork critical of Christianity in comparison with artwork critical of Islam. Studies 1 and 2 suggest that artwork that mixes the sacred and the profane (whether critical of Christianity or Islam) is particularly likely to elicit a negative emotional response and is more likely to be a target for censorship. Also consistent across Studies 1 and 2, individuals whose moral foundations rested on purity and who held Christian religious beliefs were more likely to endorse banning such artwork. In Study 3 an even more complex picture emerged, in which non-Christians were more likely to endorse banning art critical of Islam than art critical of Christianity.

  4. Semiparametric regression analysis of interval-censored competing risks data.

    PubMed

    Mao, Lu; Lin, Dan-Yu; Zeng, Donglin

    2017-09-01

    Interval-censored competing risks data arise when each study subject may experience an event or failure from one of several causes and the failure time is not observed directly but rather is known to lie in an interval between two examinations. We formulate the effects of possibly time-varying (external) covariates on the cumulative incidence or sub-distribution function of competing risks (i.e., the marginal probability of failure from a specific cause) through a broad class of semiparametric regression models that captures both proportional and non-proportional hazards structures for the sub-distribution. We allow each subject to have an arbitrary number of examinations and accommodate missing information on the cause of failure. We consider nonparametric maximum likelihood estimation and devise a fast and stable EM-type algorithm for its computation. We then establish the consistency, asymptotic normality, and semiparametric efficiency of the resulting estimators for the regression parameters by appealing to modern empirical process theory. In addition, we show through extensive simulation studies that the proposed methods perform well in realistic situations. Finally, we provide an application to a study on HIV-1 infection with different viral subtypes. © 2017, The International Biometric Society.

  5. Estimating unbiased magnitudes for the announced DPRK nuclear tests, 2006-2016

    NASA Astrophysics Data System (ADS)

    Peacock, Sheila; Bowers, David

    2017-04-01

    The seismic disturbances generated by the five (2006-2016) announced nuclear test explosions by the Democratic People's Republic of Korea (DPRK) are of moderate magnitude (body-wave magnitude mb 4-5) by global earthquake standards. An upward bias of the network mean mb of low- to moderate-magnitude events is long established, and is caused by the censoring of readings from stations where the signal was below noise level at the time of the predicted arrival. This sampling bias can be overcome by maximum-likelihood methods using station thresholds at detecting (and non-detecting) stations. Bias in the mean mb can also be introduced by differences in the network of stations recording each explosion - this bias can be reduced by using station corrections. We apply a maximum-likelihood (JML) inversion that jointly estimates station corrections and unbiased network mb for the five DPRK explosions recorded by the CTBTO International Monitoring System (IMS) network of seismic stations. The thresholds can either be measured directly from the noise preceding the observed signal, or determined by statistical analysis of bulletin amplitudes. The network mb of the first and smallest explosion is reduced significantly relative to the mean mb (to < 4.0 mb) by removal of the censoring bias.
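
    A minimal sketch of the censoring-aware likelihood idea, with illustrative numbers: detecting stations contribute a Gaussian density term, while non-detecting stations contribute the probability that the station magnitude fell below their noise threshold, which pulls the estimate below the naive mean.

      import numpy as np
      from scipy import optimize, stats

      def negloglik(mb, detected_mags, nondetect_thresholds, sigma=0.3):
          ll = stats.norm.logpdf(detected_mags, loc=mb, scale=sigma).sum()
          ll += stats.norm.logcdf(nondetect_thresholds, loc=mb, scale=sigma).sum()
          return -ll

      detected = np.array([4.3, 4.5, 4.4, 4.6])     # station mb at detecting stations
      thresholds = np.array([4.5, 4.6, 4.7])        # thresholds at silent stations
      naive = detected.mean()                        # censoring-biased network mean
      fit = optimize.minimize_scalar(negloglik, bounds=(3.0, 6.0), method="bounded",
                                     args=(detected, thresholds))
      print(f"naive mean mb = {naive:.2f}, ML mb = {fit.x:.2f}")  # ML estimate is lower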

  6. Postal censorship of Bosnian public health institutions during the Second World War: The Independent State of Croatia versus Dr. Stanko Sielski.

    PubMed

    Papalas, John A; Tahirović, Husref

    2016-11-01

    This study aims to present evidence of censorship during World War II by the Independent State of Croatia of one of its public health officials, Dr. Stanko Sielski, a physician trained in epidemiology and public health. During World War II, he directed the Institute for Combating Endemic Syphilis in the Bosnian town of Banja Luka. The staff under his direction consisted solely of Jewish physicians. We analyzed two groups of envelopes either sent by or to Dr. Stanko Sielski during the War and found evidence of censorship only in communications with a Jewish physician dated towards the end of the War. Dr. Stanko Sielski would be posthumously recognized for his efforts to shield his Jewish colleagues. The newly available but still limited data that we present indicate efforts to censor Dr. Stanko Sielski's postal communications towards the War's end. The censors specifically targeted Dr. Stanko Sielski's correspondence with the Jewish physicians he was protecting. This material highlights the many challenges his public health service experienced during a time of armed conflict. Copyright © 2016 by Academy of Sciences and Arts of Bosnia and Herzegovina.

  7. A multiple pointing-mount control strategy for space platforms

    NASA Technical Reports Server (NTRS)

    Johnson, C. D.

    1992-01-01

    A new disturbance-adaptive control strategy for multiple pointing-mount space platforms is proposed and illustrated by consideration of a simplified 3-link dynamic model of a multiple pointing-mount space platform. Simulation results demonstrate the effectiveness of the new platform control strategy. The simulation results also reveal a system 'destabilization phenomenon' that can occur if the set of individual platform-mounted experiment controllers is 'too responsive.'

  8. Effectiveness of an audience response system in teaching pharmacology to baccalaureate nursing students.

    PubMed

    Vana, Kimberly D; Silva, Graciela E; Muzyka, Diann; Hirani, Lorraine M

    2011-06-01

    It has been proposed that students' use of an audience response system, commonly called clickers, may promote comprehension and retention of didactic material. Whether this method actually improves students' grades, however, has not been established. The purpose of this study was to evaluate whether a lecture format utilizing multiple-choice PowerPoint slides and an audience response system was more effective than a lecture format using only multiple-choice PowerPoint slides for the comprehension and retention of pharmacological knowledge in baccalaureate nursing students. The study also assessed whether the additional use of clickers positively affected students' satisfaction with their learning. Results from 78 students who attended lecture classes with multiple-choice PowerPoint slides plus clickers were compared with those of 55 students who utilized multiple-choice PowerPoint slides only. Test scores between these two groups were not significantly different. A satisfaction questionnaire showed that 72.2% of the control students did not desire the opportunity to use clickers. Of the group utilizing the clickers, 92.3% recommended the use of this system in future courses. The use of multiple-choice PowerPoint slides and an audience response system did not appear to improve the students' comprehension or retention of pharmacological knowledge compared with the use of multiple-choice PowerPoint slides alone.

  9. Bayesian quantile regression-based partially linear mixed-effects joint models for longitudinal data with multiple features.

    PubMed

    Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara

    2017-01-01

    In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates in the presence of outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying the benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various features of the repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish a Bayesian joint model that accounts for all of these data features simultaneously within the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
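
    A small sketch of the core ingredient, regression at several quantiles of the response rather than its mean, using statsmodels' QuantReg on toy data; the paper's full model additionally brings in the partially linear mixed effects, left-censoring, measurement error and skewed errors within a Bayesian framework.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      cd4 = rng.normal(size=300)                    # toy covariate (CD4 proxy)
      viral_load = 2.0 - 0.5 * cd4 + rng.standard_t(df=3, size=300)  # heavy tails
      X = sm.add_constant(cd4)

      for tau in (0.25, 0.5, 0.75):                 # fit the CD4 effect per quantile
          res = sm.QuantReg(viral_load, X).fit(q=tau)
          print(tau, res.params.round(2))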

  10. Assisting People with Developmental Disabilities Improve Their Collaborative Pointing Efficiency with a Multiple Cursor Automatic Pointing Assistive Program

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Cheng, Hsiao-Fen; Li, Chia-Chun; Shih, Ching-Tien; Chiang, Ming-Shan

    2010-01-01

    This study evaluated whether four persons (two groups) with developmental disabilities would be able to improve their collaborative pointing performance through a Multiple Cursor Automatic Pointing Assistive Program (MCAPAP) with a newly developed mouse driver (i.e., a new mouse driver that replaces the standard mouse driver and is able to…

  11. An Adaptive Dynamic Pointing Assistance Program to Help People with Multiple Disabilities Improve Their Computer Pointing Efficiency with Hand Swing through a Standard Mouse

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Shih, Ching-Tien; Wu, Hsiao-Ling

    2010-01-01

    The latest research adopted software technology to redesign the mouse driver, turning a mouse into a useful pointing assistive device for people with multiple disabilities who cannot easily, if at all, use a standard mouse, to improve their pointing performance through a new operation method, the Extended Dynamic Pointing Assistive Program (EDPAP),…

  12. Coexistence and local μ-stability of multiple equilibrium points for memristive neural networks with nonmonotonic piecewise linear activation functions and unbounded time-varying delays.

    PubMed

    Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde

    2016-12-01

    In this paper, the coexistence and dynamical behaviors of multiple equilibrium points are discussed for a class of memristive neural networks (MNNs) with unbounded time-varying delays and nonmonotonic piecewise linear activation functions. By means of the fixed point theorem, nonsmooth analysis theory and rigorous mathematical analysis, it is proven that under some conditions, such n-neuron MNNs can have 5^n equilibrium points located in ℝ^n, and 3^n of them are locally μ-stable. As a direct application, some criteria are also obtained on the multiple exponential stability, multiple power stability, multiple log-stability and multiple log-log-stability. All these results reveal that the addressed neural networks with activation functions introduced in this paper can generate greater storage capacity than the ones with Mexican-hat-type activation function. Numerical simulations are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
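
    As a one-neuron illustration of how a nonmonotonic piecewise-linear activation produces multiple equilibria, the sketch below counts sign changes of g(x) = -x + w f(x) on a fine grid; the five-segment activation and the weight w = 2 are illustrative choices, not taken from the paper, and yield five equilibria, matching 5^n for n = 1.

      import numpy as np

      def f(x):
          # illustrative nonmonotonic piecewise-linear activation:
          # saturates, rises, falls, rises, saturates (five segments)
          return np.piecewise(
              x,
              [x < -3, (x >= -3) & (x < -1), (x >= -1) & (x < 1),
               (x >= 1) & (x < 3), x >= 3],
              [-2.0, lambda t: 1.5 * t + 2.5, lambda t: -t,
               lambda t: 1.5 * t - 2.5, 2.0])

      xs = np.linspace(-10.0, 10.0, 2**18)       # grid spacing avoids exact roots
      g = -xs + 2.0 * f(xs)                      # equilibria of dx/dt = -x + w f(x)
      print("equilibria found:", int(np.sum(g[:-1] * g[1:] < 0)))   # prints 5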

  13. Accelerometer profiles of physical activity and inactivity in normal weight, overweight, and obese U.S. men and women.

    PubMed

    Tudor-Locke, Catrine; Brashear, Meghan M; Johnson, William D; Katzmarzyk, Peter T

    2010-08-03

    The 2005-2006 National Health and Nutrition Examination Survey (NHANES) is used to describe an accelerometer-derived physical activity/inactivity profile in normal weight (BMI <25 kg/m2), overweight (25 ≤ BMI <30 kg/m2), and obese (BMI ≥30 kg/m2) U.S. adults. We computed physical activity volume indicators (activity counts/day, uncensored and censored steps/day), rate indicators (e.g., steps/minute), time indicators (employing NHANES activity counts/minute cut points to infer time in non-wear, sedentary, low, light, moderate, and vigorous intensities), the number of breaks in sedentary time (occasions when activity counts rose from <100 activity counts in one minute to ≥100 activity counts in the subsequent minute), achievement of public health guidelines, and classification by step-defined physical activity levels. Data were examined for evidence of consistent and significant gradients across BMI-defined categories. In 2005-2006, U.S. adults averaged 6,564 ± 107 (SE) censored steps/day, and after accounting for non-wear time, they spent approximately 56.8% of the rest of the waking day in sedentary time, 23.7% in low intensity, 16.7% in light intensity, 2.6% in moderate intensity, and 0.2% in vigorous intensity. Overall, approximately 3.2% of U.S. adults achieved public health guidelines. The normal weight category took 7,190 ± 157 steps/day, and spent 25.7 ± 0.9 minutes/day in moderate intensity and 7.3 ± 0.4 minutes/day in vigorous intensity physical activity. The corresponding numbers for the overweight category were 6,879 ± 140 steps/day, 25.3 ± 0.9 minutes/day, and 5.3 ± 0.5 minutes/day, and for the obese category 5,784 ± 124 steps/day, 17.3 ± 0.7 minutes/day, and 3.2 ± 0.4 minutes/day. Across BMI categories, increasing gradients and significant trends were apparent in males for sedentary time, and decreasing gradients and significant trends were evident in time spent in light, moderate, and vigorous intensity. For females, consistent gradients and significant trends were apparent only for decreasing amounts of time spent in moderate and vigorous intensity. Simple indicators of physical activity volume (i.e., steps/day) and time in light, moderate, or vigorous intensity physical activity differ across BMI categories for both sexes, suggesting that these should continue to be targets for surveillance.

  14. Misspecification of Cox regression models with composite endpoints

    PubMed Central

    Wu, Longyang; Cook, Richard J

    2012-01-01

    Researchers routinely adopt composite endpoints in multicenter randomized trials designed to evaluate the effect of experimental interventions in cardiovascular disease, diabetes, and cancer. Despite their widespread use, relatively little attention has been paid to the statistical properties of estimators of treatment effect based on composite endpoints. We consider this here in the context of multivariate models for time to event data in which copula functions link marginal distributions with a proportional hazards structure. We then examine the asymptotic and empirical properties of the estimator of treatment effect arising from a Cox regression model for the time to the first event. We point out that even when the treatment effect is the same for the component events, the limiting value of the estimator based on the composite endpoint is usually inconsistent for this common value. We find that in this context the limiting value is determined by the degree of association between the events, the stochastic ordering of events, and the censoring distribution. Within the framework adopted, marginal methods for the analysis of multivariate failure time data yield consistent estimators of treatment effect and are therefore preferred. We illustrate the methods by application to a recent asthma study. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22736519
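
    A simulation sketch of this inconsistency under assumptions supplied here, not taken from the paper: the two component events share a gamma frailty, which induces dependence, and a common conditional treatment hazard ratio of 0.5; a Cox model fit to the time to the first event then recovers an attenuated effect.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(3)
      n, beta = 20000, np.log(0.5)
      trt = rng.integers(0, 2, size=n)
      frailty = rng.gamma(shape=2.0, scale=0.5, size=n)   # mean-1 shared frailty
      rate = frailty * np.exp(beta * trt)
      t1 = rng.exponential(1.0 / (0.3 * rate))            # component event 1
      t2 = rng.exponential(1.0 / (0.2 * rate))            # component event 2
      t_first = np.minimum(t1, t2)                        # composite endpoint
      censor = rng.uniform(1.0, 6.0, size=n)              # independent censoring

      df = pd.DataFrame({
          "T": np.minimum(t_first, censor),
          "E": (t_first <= censor).astype(int),
          "trt": trt,
      })
      fit = CoxPHFitter().fit(df, duration_col="T", event_col="E")
      print(np.exp(fit.params_["trt"]))   # attenuated toward 1 vs. conditional HR 0.5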

  15. Serum cystatin C concentration measured routinely is a prognostic marker for renal disease in dogs.

    PubMed

    Iwasa, Naoki; Takashima, Satoshi; Iwasa, Tatsuo; Iwasa, Kazuko; Suzuki, Tomomi; Kobatake, Yui; Kitagawa, Hitoshi; Nishii, Naohito

    2018-06-14

    This study examined the predictive value of serum cystatin C (Cys-C) concentration, measured during routine periodic health examinations, for the renal prognosis of dogs. A cohort of 140 dogs weighing <15 kg whose serum Cys-C concentrations were measured during periodic health examinations from December 2013 to March 2016 was prospectively studied, with renal disease-related death as the predicted end point. Of the 140 dogs, nine died from renal diseases during the follow-up period (539 ± 249 days). Serum Cys-C concentrations were higher in the dogs that subsequently died of renal disease than in the censored group (0.8 ± 0.25 vs. 0.3 ± 0.1 mg/dl, respectively; P < .01). Dogs with high serum Cys-C concentrations (>0.55 mg/dl) had a shorter (P < .01) renal disease-specific survival period than those with low serum Cys-C concentrations (≤0.55 mg/dl). In conclusion, high serum Cys-C concentrations at periodic health examinations in dogs weighing <15 kg predicted a poorer prognosis for renal function. Copyright © 2018. Published by Elsevier Ltd.

  16. Contemporary Labor Patterns: The Impact of Maternal Body Mass Index

    PubMed Central

    KOMINIAREK, Michelle A.; ZHANG, Jun; VANVELDHUISEN, Paul; TROENDLE, James; BEAVER, Julie; HIBBARD, Judith U.

    2011-01-01

    Objective: To compare labor patterns by body mass index (BMI). Study Design: 118,978 gravidas with a singleton term cephalic gestation were studied. Repeated-measures analysis constructed average labor curves by parity and BMI categories for those who reached 10 cm. Interval-censored regression analysis determined median traverse times, adjusting for covariates, in vaginal deliveries and intrapartum cesareans. Results: For nulliparas, the time difference to reach 10 cm was 1.2 hours from the lowest to the highest BMI category. Multiparas entered the active phase by 6 cm, but reaching this point took longer for BMI ≥40.0 (3.4 hours) compared with BMI <25.0 (2.4 hours). The time to progress by each centimeter (P<0.001), except from 7–9 cm in multiparas (P>0.05), and the time from 4–10 cm (P<0.001) increased as BMI increased for both nulliparas and multiparas. Second-stage length with and without an epidural was similar among BMI categories for nulliparas (P>0.05), but decreased as BMI increased for multiparas (P<0.001). Conclusion: Labor proceeds more slowly as BMI increases, suggesting that labor management be tailored to allow for these differences. PMID:21798510

  17. Proceedings of the Annual Meeting of the Association for Education in Journalism and Mass Communication (81st, Baltimore, Maryland, August 5-8, 1998). Mass Communication and Society.

    ERIC Educational Resources Information Center

    Association for Education in Journalism and Mass Communication.

    The Mass Communication and Society section of the Proceedings contains the following 19 papers: "Talk Radio as Forum and Companion: Listener Attitudes and Uses and Gratifications in Austin, Texas" (John Beatty); "'Willingness to Censor': Developing a Quantitative Measurement across Speech Categories and Types of Media"…

  18. Effects of Assuming Independent Component Failure Times, If They Are Actually Dependent, in a Series System.

    DTIC Science & Technology

    1985-11-26

    Fragment (extraction-damaged): "…Major decisions involving reliability studies, based on competing risk methodology, have been made in the past and will continue to be made… censoring mechanism. In such instances, the methodology for estimating relevant reliability probabilities has received considerable attention (cf. David…) … a discussion of the general methodology."

  19. "Cunt": On the Perception and Handling of Verbal Dynamite by L1 and Lx Users of English

    ERIC Educational Resources Information Center

    Dewaele, Jean-Marc

    2018-01-01

    "Cunt" is currently one of the most offensive words in the English language and is usually censored in the English press and media. The present study looks firstly at differences between 1159 first (L1) and 1165 foreign (LX) users of English in their perceived understanding of the word, its perceived offensiveness and their self-reported…

  20. Proceedings of the Annual Meeting of the Association for Education in Journalism and Mass Communication (85th, Miami, Florida, August 5-8, 2002). Visual Communication Division.

    ERIC Educational Resources Information Center

    2002

    The Visual Communication Division of the proceedings contains the following 7 papers: "Photography Editors as Gatekeepers: Choosing Between Publishing or Self-Censoring Disturbing Images of 9-11" (Renee Martin Kratzer and Brian Kratzer); "Jane Campion's 'The Piano': The Female Gaze, the Speculum and the Chora within the…
