Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.
Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram
2017-02-01
In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of these are "true zeros," indicating that the drug-adverse event pair cannot occur; they are distinguished from the modeled zero counts, which simply indicate that the drug-adverse event pair has not yet occurred or has not yet been reported. In this paper, a zero-inflated Poisson (ZIP) model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the ZIP model based likelihood ratio test are obtained using the expectation-maximization (EM) algorithm. The ZIP model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed ZIP model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the ZIP model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the ZIP model based likelihood ratio test and the likelihood ratio test methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
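For orientation, the EM updates for a zero-inflated Poisson with a single mean parameter are compact enough to sketch. The snippet below is a generic illustration of ZIP fitting by EM and a likelihood ratio comparison against a plain Poisson fit, not the authors' ZIP-LRT signal detection procedure; the cell counts are invented.

```python
import numpy as np
from scipy.stats import poisson

def fit_zip_em(y, n_iter=200, tol=1e-8):
    """Fit a zero-inflated Poisson (pi = P(structural zero), lam = Poisson mean) by EM."""
    y = np.asarray(y, dtype=float)
    pi, lam = 0.5, max(y.mean(), 1e-6)
    for _ in range(n_iter):
        # E-step: responsibility that each observed zero is a structural zero
        z = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
        # M-step: update mixing proportion and Poisson mean
        pi_new = z.mean()
        lam_new = ((1 - z) * y).sum() / (1 - z).sum()
        if abs(pi_new - pi) + abs(lam_new - lam) < tol:
            pi, lam = pi_new, lam_new
            break
        pi, lam = pi_new, lam_new
    return pi, lam

def zip_loglik(y, pi, lam):
    p0 = pi + (1 - pi) * np.exp(-lam)
    return np.where(y == 0, np.log(p0), np.log(1 - pi) + poisson.logpmf(y, lam)).sum()

# Hypothetical column of adverse-event counts for one drug (many zero cells)
y = np.array([0, 0, 0, 0, 0, 0, 1, 0, 2, 0, 0, 5, 0, 0, 3])
pi_hat, lam_hat = fit_zip_em(y)
ll_zip = zip_loglik(y, pi_hat, lam_hat)
ll_pois = poisson.logpmf(y, y.mean()).sum()   # plain Poisson at its MLE
lr_stat = 2 * (ll_zip - ll_pois)              # LR statistic for zero inflation
print(pi_hat, lam_hat, lr_stat)
```

Note that the null hypothesis of no zero inflation places the mixing proportion on the boundary of the parameter space, so the usual chi-squared reference distribution for such a statistic needs adjustment.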
Likelihood Ratio Tests for Special Rasch Models
ERIC Educational Resources Information Center
Hessen, David J.
2010-01-01
In this article, a general class of special Rasch models for dichotomous item scores is considered. Although Andersen's likelihood ratio test can be used to test whether a Rasch model fits the data, the test does not differentiate between special Rasch models. Therefore, in this article, new likelihood ratio tests are proposed for testing…
Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin
2016-01-01
In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios, as measured by the log-likelihood-ratio cost (Cllr), in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
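The modelling choice at issue, a kernel density function versus a Gaussian mixture for the between-source distribution, can be illustrated in one dimension. The sketch below fits both to simulated between-source means and compares held-out log-likelihoods; it covers only the between-source component, not the full two-level likelihood ratio model, and all data and settings are invented.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated between-source means: two clusters of sources (e.g. two ink formulations)
train = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(3, 1.0, 200)])
test = np.concatenate([rng.normal(-2, 0.5, 50), rng.normal(3, 1.0, 50)])

kde = gaussian_kde(train)                                    # kernel density function
gmm = GaussianMixture(n_components=2, random_state=0).fit(train.reshape(-1, 1))

ll_kde = np.log(kde(test)).sum()                             # held-out log-likelihood, KDF
ll_gmm = gmm.score_samples(test.reshape(-1, 1)).sum()        # held-out log-likelihood, GMM
print(f"held-out log-likelihood  KDE: {ll_kde:.1f}   GMM: {ll_gmm:.1f}")
```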
Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les
2008-01-01
To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
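To illustrate why adjustment for dependent tests matters, the sketch below combines two correlated binary tests either by naive independence Bayes (multiplying individual likelihood ratios) or through a fitted logistic regression, and compares both with the empirical post-test probability. The simulated data, prevalence and test characteristics are invented, and the code is not a reproduction of Albert's offset method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
disease = rng.random(n) < 0.3
# Two correlated binary tests: test2 mostly copies test1, so their evidence overlaps
test1 = np.where(disease, rng.random(n) < 0.8, rng.random(n) < 0.2)
test2 = np.where(rng.random(n) < 0.7, test1,
                 np.where(disease, rng.random(n) < 0.8, rng.random(n) < 0.2))

def lr_positive(test, disease):
    sens = test[disease].mean()
    spec = 1 - test[~disease].mean()
    return sens / (1 - spec)

# Independence Bayes: post-test odds = pre-test odds * LR1 * LR2
pre_odds = disease.mean() / (1 - disease.mean())
odds_naive = pre_odds * lr_positive(test1, disease) * lr_positive(test2, disease)
p_naive = odds_naive / (1 + odds_naive)

# Logistic regression allows for the dependence between the tests
X = np.column_stack([test1, test2]).astype(float)
model = LogisticRegression().fit(X, disease)
p_logit = model.predict_proba([[1.0, 1.0]])[0, 1]

p_empirical = disease[(test1 == 1) & (test2 == 1)].mean()
print(f"both tests positive: naive {p_naive:.2f}, logistic {p_logit:.2f}, empirical {p_empirical:.2f}")
```

With overlapping tests, the naive product of likelihood ratios overstates the post-test probability, while the regression-based estimate tracks the empirical value.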
ERIC Educational Resources Information Center
Levy, Roy
2010-01-01
SEMModComp, a software package for conducting likelihood ratio tests for mean and covariance structure modeling, is described. The package is written in R and freely available for download or on request.
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards.
Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes
ERIC Educational Resources Information Center
Leite, Walter L.; Stapleton, Laura M.
2011-01-01
In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…
NASA Technical Reports Server (NTRS)
Cash, W.
1979-01-01
Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
Likelihood ratio decisions in memory: three implied regularities.
Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T
2009-06-01
We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
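A small worked example of the first regularity: under an equal-variance signal detection model with the criterion placed at likelihood ratio 1, a strong condition produces both more hits and fewer false alarms than a weak condition, which is the mirror effect. The d' values below are illustrative, not taken from the paper.

```python
from scipy.stats import norm

def rates_at_lr1(d_prime):
    """Equal-variance model: new ~ N(0,1), old ~ N(d',1); LR = 1 at criterion d'/2."""
    c = d_prime / 2.0
    hit = 1 - norm.cdf(c, loc=d_prime)   # P(old item exceeds criterion)
    fa = 1 - norm.cdf(c, loc=0.0)        # P(new item exceeds criterion)
    return hit, fa

for label, d in [("weak", 1.0), ("strong", 2.0)]:
    hit, fa = rates_at_lr1(d)
    print(f"{label}: hits={hit:.2f}, false alarms={fa:.2f}")
# strong condition: more hits AND fewer false alarms -> the mirror effect
```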
Exclusion probabilities and likelihood ratios with applications to mixtures.
Slooten, Klaas-Jan; Egeland, Thore
2016-01-01
The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.
Harrell-Williams, Leigh; Wolfe, Edward W
2014-01-01
Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveals that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the likelihood ratio difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.
Equivalence of binormal likelihood-ratio and bi-chi-squared ROC curve models
Hillis, Stephen L.
2015-01-01
A basic assumption for a meaningful diagnostic decision variable is that there is a monotone relationship between it and its likelihood ratio. This relationship, however, generally does not hold for a decision variable that results in a binormal ROC curve. As a result, receiver operating characteristic (ROC) curve estimation based on the assumption of a binormal ROC-curve model produces improper ROC curves that have “hooks,” are not concave over the entire domain, and cross the chance line. Although in practice this “improperness” is usually not noticeable, sometimes it is evident and problematic. To avoid this problem, Metz and Pan proposed basing ROC-curve estimation on the assumption of a binormal likelihood-ratio (binormal-LR) model, which states that the decision variable is an increasing transformation of the likelihood-ratio function of a random variable having normal conditional diseased and nondiseased distributions. However, their development is not easy to follow. I show that the binormal-LR model is equivalent to a bi-chi-squared model in the sense that the families of corresponding ROC curves are the same. The bi-chi-squared formulation provides an easier-to-follow development of the binormal-LR ROC curve and its properties in terms of well-known distributions. PMID:26608405
Validation of DNA-based identification software by computation of pedigree likelihood ratios.
Slooten, K
2011-08-01
Disaster victim identification (DVI) can be aided by DNA evidence, by comparing the DNA profiles of unidentified individuals with those of surviving relatives. The DNA evidence is used optimally when such a comparison is done by calculating the appropriate likelihood ratios. Though conceptually simple, the calculations can be quite involved, especially with large pedigrees, precise mutation models, etc. In this article we describe a series of test cases designed to check whether software that calculates such likelihood ratios computes them correctly. The cases include both simple and more complicated pedigrees, including inbred ones. We show how to calculate the likelihood ratio numerically and algebraically, including a general mutation model and the possibility of allelic dropout. In Appendix A we show how to derive such algebraic expressions mathematically. We have set up these cases to validate new software, called Bonaparte, which performs pedigree likelihood ratio calculations in a DVI context. Bonaparte has been developed by SNN Nijmegen (The Netherlands) for the Netherlands Forensic Institute (NFI). It is available free of charge for non-commercial purposes (see www.dnadvi.nl for details). Commercial licenses can also be obtained. The software uses Bayesian networks and the junction tree algorithm to perform its calculations. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
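For orientation, the simplest pedigree likelihood ratio of this kind is the single-locus paternity index for a mother-child-alleged-father trio. The snippet below is a textbook calculation under the usual simplifying assumptions (Hardy-Weinberg proportions, no mutation, no dropout), not part of Bonaparte, and the allele frequency is made up.

```python
def paternity_index(p_paternal_allele):
    """Mother A/A, child A/B, alleged father B/B.
    H1: alleged father is the father -> he passes B with probability 1.
    H2: a random man is the father   -> he passes B with probability p_B.
    LR = 1 / p_B under no mutation and no dropout."""
    return 1.0 / p_paternal_allele

p_B = 0.05                         # hypothetical population frequency of allele B
lr = paternity_index(p_B)
print(f"single-locus LR = {lr:.0f}")   # 20: the trio is 20x more likely if he is the father
# Independent loci multiply, so a full profile LR is a product of such per-locus ratios.
```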
Likelihood-Ratio DIF Testing: Effects of Nonnormality
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
Differential item functioning (DIF) occurs when an item has different measurement properties for members of one group versus another. Likelihood-ratio (LR) tests for DIF based on item response theory (IRT) involve statistically comparing IRT models that vary with respect to their constraints. A simulation study evaluated how violation of the…
A likelihood ratio anomaly detector for identifying within-perimeter computer network attacks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grana, Justin; Wolpert, David; Neil, Joshua
2016-03-11
The rapid detection of attackers within firewalls of enterprise computer networks is of paramount importance. Anomaly detectors address this problem by quantifying deviations from baseline statistical models of normal network behavior and signaling an intrusion when the observed data deviate significantly from the baseline model. However, many anomaly detectors do not take into account plausible attacker behavior. As a result, anomaly detectors are prone to a large number of false positives due to unusual but benign activity. Our paper first introduces a stochastic model of attacker behavior which is motivated by real-world attacker traversal. Then, we develop a likelihood ratio detector that compares the probability of observed network behavior under normal conditions against the case when an attacker has possibly compromised a subset of hosts within the network. Since the likelihood ratio detector requires integrating over the time each host becomes compromised, we illustrate how to use Monte Carlo methods to compute the requisite integral. We then present receiver operating characteristic (ROC) curves for various network parameterizations that show, for any rate of true positives, the rate of false positives for the likelihood ratio detector is no higher than that of a simple anomaly detector and is often lower. Finally, we demonstrate the superiority of the proposed likelihood ratio detector when the network topologies and parameterizations are extracted from real-world networks.
On the Likelihood Ratio Test for the Number of Factors in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.; Yuan, Ke-Hai
2007-01-01
In the exploratory factor analysis, when the number of factors exceeds the true number of factors, the likelihood ratio test statistic no longer follows the chi-square distribution due to a problem of rank deficiency and nonidentifiability of model parameters. As a result, decisions regarding the number of factors may be incorrect. Several…
Change-in-ratio estimators for populations with more than two subclasses
Udevitz, Mark S.; Pollock, Kenneth H.
1991-01-01
Change-in-ratio methods have been developed to estimate the size of populations with two or three population subclasses. Most of these methods require the often unreasonable assumption of equal sampling probabilities for individuals in all subclasses. This paper presents new models based on the weaker assumption that ratios of sampling probabilities are constant over time for populations with three or more subclasses. Estimation under these models requires that a value be assumed for one of these ratios when there are two samples. Explicit expressions are given for the maximum likelihood estimators under models for two samples with three or more subclasses and for three samples with two subclasses. A numerical method using readily available statistical software is described for obtaining the estimators and their standard errors under all of the models. Likelihood ratio tests that can be used in model selection are discussed. Emphasis is on the two-sample, three-subclass models for which Monte-Carlo simulation results and an illustrative example are presented.
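For context, the classical two-sample, two-subclass change-in-ratio estimator that the paper generalizes has a simple closed form. The toy numbers below are invented and assume equal sampling probabilities for both subclasses, which is exactly the assumption the new models relax.

```python
def change_in_ratio_N1(p1, p2, removed_x, removed_total):
    """Initial population size from the change in the subclass-x proportion.
    p1, p2: proportion of subclass x before and after a known removal."""
    return (removed_x - p2 * removed_total) / (p1 - p2)

# Hypothetical deer herd: 60% does before harvest, 50% after removing 200 does of 250 animals
N1 = change_in_ratio_N1(p1=0.60, p2=0.50, removed_x=200, removed_total=250)
print(N1)   # 750 animals before the harvest (450 does; 250 of 500 remaining are does)
```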
2013-01-01
Background Falls among the elderly are a major public health concern. Therefore, a modeling technique that could better estimate fall probability is both timely and needed. Using biomedical, pharmacological and demographic variables as predictors, latent class analysis (LCA) is demonstrated as a tool for the prediction of falls among community-dwelling elderly. Methods Using a retrospective dataset, a two-step LCA modeling approach was employed. First, we looked for the optimal number of latent classes for the seven medical indicators, along with the patients' prescription medication and three covariates (age, gender, and number of medications). Second, the appropriate latent class structure, with the covariates, was modeled on the distal outcome (fall/no fall). The default estimator was maximum likelihood with robust standard errors. The Pearson chi-square, likelihood ratio chi-square, BIC, Lo-Mendell-Rubin adjusted likelihood ratio test and the bootstrap likelihood ratio test were used for model comparisons. Results A review of the model fit indices with covariates shows that a six-class solution was preferred. The predictive probability for latent classes ranged from 84% to 97%. Entropy, a measure of classification accuracy, was good at 90%. Specific prescription medications were found to strongly influence group membership. Conclusions The LCA method was effective at finding relevant subgroups within a heterogeneous population at risk of falling. This study demonstrated that LCA offers researchers a valuable tool to model medical data. PMID:23705639
Bivariate categorical data analysis using normal linear conditional multinomial probability model.
Sun, Bingrui; Sutradhar, Brajendra
2015-02-10
Bivariate multinomial data such as the left and right eyes retinopathy status data are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities, which are complicated functions of marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this latter odds ratio-based model does not provide any easy interpretation of the correlations between two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.
On the occurrence of false positives in tests of migration under an isolation with migration model
Hey, Jody; Chung, Yujin; Sethuraman, Arun
2015-01-01
The population genetic study of divergence is often done using a Bayesian genealogy sampler, like those implemented in IMa2 and related programs, and these analyses frequently include a likelihood-ratio test of the null hypothesis of no migration between populations. Cruickshank and Hahn (2014, Molecular Ecology, 23, 3133–3157) recently reported a high rate of false positive test results with IMa2 for data simulated with small numbers of loci under models with no migration and recent splitting times. We confirm these findings and discover that they are caused by a failure of the assumptions underlying likelihood ratio tests that arises when using marginal likelihoods for a subset of model parameters. We also show that for small data sets, with little divergence between samples from two populations, an excellent fit can often be found by a model with a low migration rate and recent splitting time and a model with a high migration rate and a deep splitting time. PMID:26456794
A New Monte Carlo Method for Estimating Marginal Likelihoods.
Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O
2018-06-01
Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
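A compact illustration of the single-sample setting: the basic harmonic mean estimator of the marginal likelihood for a conjugate normal model, where the exact answer is available for comparison. This is the estimator the authors generalize, not their partition-weighted kernel estimator; the prior, data and sample size are invented.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(42)
# Model: y_i ~ N(theta, 1), prior theta ~ N(0, tau^2)
tau, n = 2.0, 20
y = rng.normal(1.5, 1.0, n)

# Conjugate posterior, so we can draw an exact "MCMC" sample directly
post_var = 1.0 / (n + 1.0 / tau**2)
post_mean = post_var * y.sum()
theta = rng.normal(post_mean, np.sqrt(post_var), 50_000)

loglik = norm.logpdf(y[:, None], loc=theta, scale=1.0).sum(axis=0)   # log L(theta_j)
# Harmonic mean estimator: m(y) ~ 1 / mean(1 / L(theta_j)), computed on the log scale
log_m_hm = -(np.logaddexp.reduce(-loglik) - np.log(len(theta)))

# Exact marginal likelihood: y ~ N(0, I + tau^2 * ones(n, n)), evaluated for comparison
cov = np.eye(n) + tau**2 * np.ones((n, n))
log_m_exact = multivariate_normal.logpdf(y, mean=np.zeros(n), cov=cov)
print(f"harmonic mean: {log_m_hm:.2f}   exact: {log_m_exact:.2f}")
# The harmonic mean estimator is notoriously high-variance, which is what motivates
# better-behaved single-sample estimators such as the one proposed in the paper.
```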
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1985-01-01
The application of the Generalized Likelihood Ratio technique to the detection and identification of aircraft control element failures has been evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 aircraft. Simulation results show that the technique has potential but that the effects of wind turbulence and Kalman filter model errors are problems which must be overcome.
Li, Zhanzhan; Zhou, Qin; Li, Yanyan; Yan, Shipeng; Fu, Jun; Huang, Xinqiong; Shen, Liangfang
2017-02-28
We conducted a meta-analysis to evaluate the diagnostic value of mean cerebral blood volume for distinguishing recurrence from radiation injury in glioma patients. We performed systematic electronic searches for eligible studies up to August 8, 2016. Bivariate mixed effects models were used to estimate the combined sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio and their 95% confidence intervals (CIs). Fifteen studies with a total of 576 participants were enrolled. The pooled sensitivity and specificity were 0.88 (95%CI: 0.82-0.92) and 0.85 (95%CI: 0.68-0.93). The pooled positive likelihood ratio was 5.73 (95%CI: 2.56-12.81), the negative likelihood ratio was 0.15 (95%CI: 0.10-0.22), and the diagnostic odds ratio was 39.34 (95%CI: 13.96-110.84). The area under the summary receiver operating characteristic curve was 0.91 (95%CI: 0.88-0.93). However, the Deeks' plot suggested publication bias may exist (t=2.30, P=0.039). Mean cerebral blood volume measurement seems to be very sensitive and highly specific for differentiating recurrence from radiation injury in glioma patients. The results should be interpreted with caution because of the potential bias.
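As a sanity check on how these summary statistics relate, the positive and negative likelihood ratios follow directly from sensitivity and specificity. With the pooled values above, the simple formulas land close to, though not exactly on, the reported estimates, which come from the bivariate model.

```python
sens, spec = 0.88, 0.85          # pooled values reported above
lr_pos = sens / (1 - spec)       # ~5.9  (reported: 5.73, from the bivariate model)
lr_neg = (1 - sens) / spec       # ~0.14 (reported: 0.15)
dor = lr_pos / lr_neg            # diagnostic odds ratio, ~42 (reported: 39.34)
print(lr_pos, lr_neg, dor)
```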
Predicting Rotator Cuff Tears Using Data Mining and Bayesian Likelihood Ratios
Lu, Hsueh-Yi; Huang, Chen-Yuan; Su, Chwen-Tzeng; Lin, Chen-Chiang
2014-01-01
Objectives Rotator cuff tear is a common cause of shoulder diseases. Correct diagnosis of rotator cuff tears can save patients from further invasive, costly and painful tests. This study used predictive data mining and Bayesian theory to improve the accuracy of diagnosing rotator cuff tears by clinical examination alone. Methods In this retrospective study, 169 patients who had a preliminary diagnosis of rotator cuff tear on the basis of clinical evaluation followed by confirmatory MRI between 2007 and 2011 were identified. MRI was used as a reference standard to classify rotator cuff tears. The predictor variable was the clinical assessment results, which consisted of 16 attributes. This study employed 2 data mining methods (ANN and the decision tree) and a statistical method (logistic regression) to classify the rotator cuff diagnosis into "tear" and "no tear" groups. Likelihood ratios and Bayesian theory were applied to estimate the probability of rotator cuff tears based on the results of the prediction models. Results Our proposed data mining procedures outperformed the classic statistical method. The correct classification rate, sensitivity, specificity and area under the ROC curve for predicting a rotator cuff tear were statistically better in the ANN and decision tree models compared to logistic regression. Based on likelihood ratios derived from our prediction models, Fagan's nomogram could be constructed to assess the probability that a patient has a rotator cuff tear using a pretest probability and a prediction result (tear or no tear). Conclusions Our predictive data mining models, combined with likelihood ratios and Bayesian theory, appear to be good tools to classify rotator cuff tears as well as to determine the probability of the presence of the disease, enhancing diagnostic decision making for rotator cuff tears. PMID:24733553
Likelihood ratios for glaucoma diagnosis using spectral-domain optical coherence tomography.
Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M; Weinreb, Robert N; Medeiros, Felipe A
2013-11-01
To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral-domain optical coherence tomography (spectral-domain OCT). Observational cohort study. A total of 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the receiver operating characteristic (ROC) curve. Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86 μm were associated with positive likelihood ratios (ie, likelihood ratios greater than 1), whereas RNFL thickness values higher than 86 μm were associated with negative likelihood ratios (ie, likelihood ratios smaller than 1). A modified Fagan nomogram was provided to assist calculation of posttest probability of disease from the calculated likelihood ratios and pretest probability of disease. The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision making. Copyright © 2013. Published by Elsevier Inc.
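The arithmetic behind the modified Fagan nomogram is Bayes' theorem on the odds scale. A minimal sketch follows; the pretest probabilities and likelihood ratio value are made up for illustration.

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    """Bayes on the odds scale: post-test odds = pre-test odds * LR."""
    pre_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Hypothetical example: 20% pretest probability of glaucoma, thin RNFL with LR = 8
print(posttest_probability(0.20, 8.0))    # -> 0.67
# The same RNFL value argues much less strongly in a low-prevalence screening setting:
print(posttest_probability(0.02, 8.0))    # -> 0.14
```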
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.
Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc
2016-03-14
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
The Relation among the Likelihood Ratio-, Wald-, and Lagrange Multiplier Tests and Their Applicability to Small Samples
1982-04-01
A Likelihood Ratio Test Regarding Two Nested But Oblique Order Restricted Hypotheses.
1982-11-01
AMS 1979 subject classifications: Primary 62F03; Secondary 62E15. A likelihood ratio test for these two order restrictions is studied. The investigation was stimulated partly by a problem encountered in psychiatric research, in which Winokur et al. (1971) studied data on psychiatric illnesses.
Posada, David
2006-01-01
ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
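The hierarchical likelihood ratio tests offered by the server compare nested substitution models through the usual chi-squared approximation. The generic sketch below uses hypothetical log-likelihood scores, not actual ModelTest output, with the conventional degree-of-freedom count for a JC69 versus HKY85 comparison.

```python
from scipy.stats import chi2

def lrt(loglik_null, loglik_alt, df_diff):
    """Likelihood ratio test for nested models: 2*(llA - ll0) ~ chi2(df_diff) under the null."""
    stat = 2.0 * (loglik_alt - loglik_null)
    return stat, chi2.sf(stat, df_diff)

# Hypothetical scores: JC69 versus HKY85 (adds a transition/transversion ratio and 3 base frequencies)
stat, p = lrt(loglik_null=-2741.3, loglik_alt=-2718.9, df_diff=4)
print(f"LRT statistic = {stat:.1f}, p = {p:.2e}")
```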
Using the β-binomial distribution to characterize forest health
S.J. Zarnoch; R.L. Anderson; R.M. Sheffield
1995-01-01
The β-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused...
Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning
ERIC Educational Resources Information Center
Li, Zhushan
2014-01-01
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
Bayesian Hierarchical Random Effects Models in Forensic Science.
Aitken, Colin G G
2018-01-01
Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.
Ran, Li; Zhao, Wenli; Zhao, Ye; Bu, Huaien
2017-07-01
Contrast-enhanced ultrasound (CEUS) is considered a novel method for diagnosing pancreatic cancer, but currently there is no conclusive evidence of its accuracy. We aimed to evaluate the diagnostic accuracy of CEUS in discriminating pancreatic carcinoma from other pancreatic lesions. Relevant studies were selected from the PubMed, Cochrane Library, Elsevier, CNKI, VIP, and WANFANG databases dating from January 2006 to May 2017. The following terms were used as keywords: "pancreatic cancer" OR "pancreatic carcinoma," "contrast-enhanced ultrasonography" OR "contrast-enhanced ultrasound" OR "CEUS," and "diagnosis." The selection criteria were as follows: pancreatic carcinomas were diagnosed by CEUS, with surgical pathology or biopsy as the main reference standard (where a clinical diagnosis was involved, the particular criteria were emphasized); SonoVue or Levovist was the contrast agent; true positive, false positive, false negative, and true negative rates were obtained or calculated to construct the 2 × 2 contingency table; English or Chinese articles; at least 20 patients were enrolled in each group. The Quality Assessment for Studies of Diagnostic Accuracy was employed to evaluate the quality of articles. Pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, summary receiver-operating characteristic curves, and the area under the curve were evaluated to estimate the overall diagnostic efficiency. Pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with 95% confidence intervals (CIs) were calculated with fixed-effect models. Eight of 184 records were eligible for the meta-analysis after independent scrutiny by 2 reviewers. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.86 (95% CI 0.81-0.90), 0.75 (95% CI 0.68-0.82), 3.56 (95% CI 2.64-4.78), 0.19 (95% CI 0.13-0.27), and 22.260 (95% CI 8.980-55.177), respectively. The area under the SROC curve was 0.9088. CEUS has satisfying pooled sensitivity and specificity for discriminating pancreatic cancer from other pancreatic lesions.
Carey, David L; Blanch, Peter; Ong, Kok-Leong; Crossley, Kay M; Crow, Justin; Morris, Meg E
2017-08-01
(1) To investigate whether a daily acute:chronic workload ratio informs injury risk in Australian football players; (2) to identify which combination of workload variable, acute and chronic time window best explains injury likelihood. Workload and injury data were collected from 53 athletes over 2 seasons in a professional Australian football club. Acute:chronic workload ratios were calculated daily for each athlete, and modelled against non-contact injury likelihood using a quadratic relationship. 6 workload variables, 8 acute time windows (2-9 days) and 7 chronic time windows (14-35 days) were considered (336 combinations). Each parameter combination was compared for injury likelihood fit (using R²). The ratio of moderate speed running workload (18-24 km/h) in the previous 3 days (acute time window) compared with the previous 21 days (chronic time window) best explained the injury likelihood in matches (R²=0.79) and in the immediate 2 or 5 days following matches (R²=0.76-0.82). The 3:21 acute:chronic workload ratio discriminated between high-risk and low-risk athletes (relative risk=1.98-2.43). Using the previous 6 days to calculate the acute workload time window yielded similar results. The choice of acute time window significantly influenced model performance and appeared to reflect the competition and training schedule. Daily workload ratios can inform injury risk in Australian football. Clinicians and conditioning coaches should consider the sport-specific schedule of competition and training when choosing acute and chronic time windows. For Australian football, the ratio of moderate speed running in a 3-day or 6-day acute time window and a 21-day chronic time window best explained injury risk. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
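A minimal sketch of the acute:chronic workload ratio calculation with the best-performing 3-day and 21-day windows, using rolling averages on an invented single-athlete load series; the quadratic injury-likelihood model itself is not reproduced, and the 0.8-1.5 flagging band is illustrative only.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
days = pd.date_range("2016-07-01", periods=120, freq="D")
# Hypothetical daily moderate-speed running distance (m) for one athlete
load = pd.Series(rng.gamma(shape=2.0, scale=400.0, size=len(days)), index=days)

acute = load.rolling(3, min_periods=3).mean()      # 3-day acute window
chronic = load.rolling(21, min_periods=21).mean()  # 21-day chronic window
acwr = acute / chronic                             # acute:chronic workload ratio

# Flag days whose ratio sits outside an (illustrative) moderate band
flagged = acwr[(acwr < 0.8) | (acwr > 1.5)].dropna()
print(acwr.describe())
print(f"{len(flagged)} of {acwr.notna().sum()} days outside the 0.8-1.5 band")
```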
The Sequential Probability Ratio Test and Binary Item Response Models
ERIC Educational Resources Information Center
Nydick, Steven W.
2014-01-01
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
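A brief sketch of the SPRT stopping rule for a binary-response adaptive classification test: after each item, the log-likelihood ratio of an ability just above versus just below the cut score (here under a 2PL model) is compared with Wald's bounds. The item parameters, cut score, indifference region and error rates below are all invented.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def sprt_classify(responses, a, b, theta_cut=0.0, delta=0.5, alpha=0.05, beta=0.05):
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for x, ai, bi in zip(responses, a, b):
        p_hi = p_correct(theta_cut + delta, ai, bi)   # H1: examinee above the cut
        p_lo = p_correct(theta_cut - delta, ai, bi)   # H0: examinee below the cut
        llr += np.log(p_hi / p_lo) if x == 1 else np.log((1 - p_hi) / (1 - p_lo))
        if llr >= upper:
            return "above cut", llr
        if llr <= lower:
            return "below cut", llr
    return "continue testing", llr

a = np.full(30, 1.2)                       # hypothetical item discriminations
b = np.linspace(-1.5, 1.5, 30)             # hypothetical item difficulties
responses = (np.random.default_rng(3).random(30) < 0.75).astype(int)
print(sprt_classify(responses, a, b))
```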
Display size effects in visual search: analyses of reaction time distributions as mixtures.
Reynolds, Ann; Miller, Jeff
2009-05-01
In a reanalysis of data from Cousineau and Shiffrin (2004) and two new visual search experiments, we used a likelihood ratio test to examine the full distributions of reaction time (RT) for evidence that the display size effect is a mixture-type effect that occurs on only a proportion of trials, leaving RT in the remaining trials unaffected, as is predicted by serial self-terminating search models. Experiment 1 was a reanalysis of Cousineau and Shiffrin's data, for which a mixture effect had previously been established by a bimodal distribution of RTs, and the results confirmed that the likelihood ratio test could also detect this mixture. Experiment 2 applied the likelihood ratio test within a more standard visual search task with a relatively easy target/distractor discrimination, and Experiment 3 applied it within a target identification search task within the same types of stimuli. Neither of these experiments provided any evidence for the mixture-type display size effect predicted by serial self-terminating search models. Overall, these results suggest that serial self-terminating search models may generally be applicable only with relatively difficult target/distractor discriminations, and then only for some participants. In addition, they further illustrate the utility of analysing full RT distributions in addition to mean RT.
Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema
2016-08-10
Many chemometric tools are invaluable and have proven effective in data mining and substantial dimensionality reduction of highly multivariate data. This becomes vital for interpreting various physicochemical data due to the rapid development of advanced analytical techniques, which deliver much information in a single measurement run. This especially concerns spectra, which are frequently the subject of comparative analysis in, for example, the forensic sciences. In the presented study, microtraces collected in hit-and-run accident scenarios were analysed. Plastic containers and automotive plastics (e.g. bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry and car paints were analysed using Raman spectroscopy. In the forensic context, analytical results must be interpreted and reported according to the standards of the interpretation schemes acknowledged in forensic sciences using the likelihood ratio approach. However, for proper construction of LR models for highly multivariate data, such as spectra, chemometric tools must be employed for substantial data compression. Conversion from classical feature representation to distance representation was proposed for revealing hidden data peculiarities, and linear discriminant analysis was further applied for minimising the within-sample variability while maximising the between-sample variability. Both techniques enabled substantial reduction of data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach is capable of solving the comparison problem of highly multivariate and correlated data after proper extraction of the most relevant features and variance information hidden in the data structure. Copyright © 2016 Elsevier B.V. All rights reserved.
A maximum likelihood convolutional decoder model vs experimental data comparison
NASA Technical Reports Server (NTRS)
Chen, R. Y.
1979-01-01
This article describes the comparison of a maximum likelihood convolutional decoder (MCD) prediction model and the actual performance of the MCD at the Madrid Deep Space Station. The MCD prediction model is used to develop a subroutine that has been utilized by the Telemetry Analysis Program (TAP) to compute the MCD bit error rate for a given signal-to-noise ratio. The results indicate that the TAP can predict quite well compared to the experimental measurements. An optimal modulation index can also be found through TAP.
Xu, Maoqi; Chen, Liang
2018-01-01
The individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models to identify differentially expressed genes between disease and control groups often overlook the substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice the analysis power, although they are distribution free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about read-count distribution. It also incorporates a constraint for the read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood-ratio test when handling heterogeneous groups. It will significantly advance the transcriptomic studies of cancers and other complex diseases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Technical Reports Server (NTRS)
Bueno, R. A.
1977-01-01
Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft applications are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found satisfactory, but problems in identifying correctly the mode of a failure may arise. These issues are closely examined, as well as the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.
A data fusion approach to indications and warnings of terrorist attacks
NASA Astrophysics Data System (ADS)
McDaniel, David; Schaefer, Gregory
2014-05-01
Indications and Warning (I&W) of terrorist attacks, particularly IED attacks, require detection of networks of agents and patterns of behavior. Social Network Analysis tries to detect a network; activity analysis tries to detect anomalous activities. This work builds on both to detect elements of an activity model of terrorist attack activity - the agents, resources, networks, and behaviors. The activity model is expressed as RDF triple statements where the tuple positions are elements or subsets of a formal ontology for activity models. The advantage of a model is that elements are interdependent and evidence for or against one will influence others, so that there is a multiplier effect. The advantage of the formality is that detection could occur hierarchically, that is, at different levels of abstraction. The model matching is expressed as a likelihood ratio between input text and the model triples. The likelihood ratio is designed to be analogous to track correlation likelihood ratios common in JDL fusion level 1. This required development of a semantic distance metric for positive and null hypotheses as well as for complex objects. The metric uses the Web 1T (one-terabyte) database of one- to five-gram frequencies for priors. This size requires the use of big data technologies, so a Hadoop cluster is used in conjunction with OpenNLP natural language processing and Mahout clustering software. Distributed data fusion MapReduce jobs distribute parts of the data fusion problem to the Hadoop nodes. For the purposes of this initial testing, open source models and text inputs of similar complexity to terrorist events were used as surrogates for the intended counter-terrorist application.
A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits
Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling
2013-01-01
Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to the misspecification of time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762
Dai, Cong; Jiang, Min; Sun, Ming-Jun; Cao, Qin
2018-05-01
Fecal immunochemical test (FIT) is a promising marker for assessment of inflammatory bowel disease activity. However, the utility of FIT for predicting mucosal healing (MH) of ulcerative colitis (UC) patients has yet to be clearly demonstrated. The objective of our study was to perform a diagnostic test accuracy meta-analysis evaluating the accuracy of FIT in predicting MH of UC patients. We systematically searched databases from inception to November 2017 for studies that evaluated MH in UC. The methodological quality of each study was assessed according to the Quality Assessment of Diagnostic Accuracy Studies checklist. The extracted data were pooled using a summary receiver operating characteristic curve model. A random-effects model was used to summarize the diagnostic odds ratio, sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio. Six studies comprising 625 UC patients were included in the meta-analysis. The pooled sensitivity and specificity values for predicting MH in UC were 0.77 (95% confidence interval [CI], 0.72-0.81) and 0.81 (95% CI, 0.76-0.85), respectively. The FIT level had a high rule-in value (positive likelihood ratio, 3.79; 95% CI, 2.85-5.03) and a moderate rule-out value (negative likelihood ratio, 0.26; 95% CI, 0.16-0.43) for predicting MH in UC. The results of the receiver operating characteristic curve analysis (area under the curve, 0.88; standard error of the mean, 0.02) and diagnostic odds ratio (18.08; 95% CI, 9.57-34.13) also revealed improved discrimination for identifying MH in UC with FIT concentration. Our meta-analysis has found that FIT is a simple, reliable non-invasive marker for predicting MH in UC patients. © 2018 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
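As a hedged illustration (not the authors' code), the reported rule-in and rule-out values follow the standard definitions of the positive and negative likelihood ratios; the pooled LR+/LR- in the abstract come from a bivariate random-effects model, so they differ slightly from the direct calculation sketched below.

```python
# Illustrative sketch only: likelihood ratios implied by pooled sensitivity
# and specificity.  The paper's pooled LR+/LR- were estimated jointly in a
# random-effects model, so they differ slightly from this naive calculation.

def diagnostic_likelihood_ratios(sensitivity, specificity):
    """Return (LR+, LR-) for a binary diagnostic test."""
    lr_pos = sensitivity / (1.0 - specificity)   # rule-in strength
    lr_neg = (1.0 - sensitivity) / specificity   # rule-out strength
    return lr_pos, lr_neg

lr_pos, lr_neg = diagnostic_likelihood_ratios(0.77, 0.81)
print(f"LR+ ~ {lr_pos:.2f}, LR- ~ {lr_neg:.2f}, DOR ~ {lr_pos / lr_neg:.1f}")
```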
Hey, Jody; Nielsen, Rasmus
2007-01-01
In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
Exclusion probabilities and likelihood ratios with applications to kinship problems.
Slooten, Klaas-Jan; Egeland, Thore
2014-05-01
In forensic genetics, DNA profiles are compared in order to make inferences, paternity cases being a standard example. The statistical evidence can be summarized and reported in several ways. For example, in a paternity case, the likelihood ratio (LR) and the probability of not excluding a random man as father (RMNE) are two common summary statistics. There has been a long debate on the merits of the two statistics, also in the context of DNA mixture interpretation, and no general consensus has been reached. In this paper, we show that the RMNE is a certain weighted average of inverse likelihood ratios. This is true in any forensic context. We show that the likelihood ratio in favor of the correct hypothesis is, in expectation, bigger than the reciprocal of the RMNE probability. However, with the exception of pathological cases, it is also possible to obtain smaller likelihood ratios. We illustrate this result for paternity cases. Moreover, some theoretical properties of the likelihood ratio for a large class of general pairwise kinship cases, including expected value and variance, are derived. The practical implications of the findings are discussed and exemplified.
Stochastic Ordering Using the Latent Trait and the Sum Score in Polytomous IRT Models.
ERIC Educational Resources Information Center
Hemker, Bas T.; Sijtsma, Klaas; Molenaar, Ivo W.; Junker, Brian W.
1997-01-01
Stochastic ordering properties are investigated for a broad class of item response theory (IRT) models for which the monotone likelihood ratio does not hold. A taxonomy is given for nonparametric and parametric polytomous models, based on the hierarchical relationships between the models. (SLD)
Tree-Based Global Model Tests for Polytomous Rasch Models
ERIC Educational Resources Information Center
Komboz, Basil; Strobl, Carolin; Zeileis, Achim
2018-01-01
Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…
Likelihood Ratios for the Emergency Physician.
Peng, Paul; Coyle, Andrew
2018-04-26
The concept of likelihood ratios was introduced more than 40 years ago, yet this powerful metric has still not seen wider application or discussion in the medical decision-making process. There is concern that clinicians-in-training are still being taught an over-simplified approach to diagnostic test performance and have limited exposure to likelihood ratios. Even clinicians familiar with likelihood ratios might perceive them as mathematically cumbersome to apply, if not difficult to determine for a particular disease process. This article is protected by copyright. All rights reserved.
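A minimal sketch of the arithmetic the article advocates (Bayes' theorem in odds form); the numbers and the function name below are hypothetical, not taken from the article.

```python
# Minimal sketch of likelihood-ratio reasoning at the bedside: convert a
# pretest probability into a post-test probability via odds.  Numbers are
# hypothetical.

def post_test_probability(pretest_probability, likelihood_ratio):
    pretest_odds = pretest_probability / (1.0 - pretest_probability)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# A 20% pretest probability combined with a test whose LR+ is 8
print(post_test_probability(0.20, 8.0))   # ~0.67
```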
Wang, Lina; Li, Hao; Yang, Zhongyuan; Guo, Zhuming; Zhang, Quan
2015-07-01
This study was designed to assess the efficiency of the serum thyrotropin to thyroglobulin ratio for thyroid nodule evaluation in euthyroid patients. Cross-sectional study. Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China. Retrospective analysis was performed for 400 previously untreated cases presenting with thyroid nodules. Thyroid function was tested with commercially available radioimmunoassays. The receiver operating characteristic curves were constructed to determine cutoff values. The efficacy of the thyrotropin:thyroglobulin ratio and thyroid-stimulating hormone for thyroid nodule evaluation was evaluated in terms of sensitivity, specificity, positive predictive value, positive likelihood ratio, negative likelihood ratio, and odds ratio. In receiver operating characteristic curve analysis, the area under the curve was 0.746 for the thyrotropin:thyroglobulin ratio and 0.659 for thyroid-stimulating hormone. With a cutoff point value of 24.97 IU/g for the thyrotropin:thyroglobulin ratio, the sensitivity, specificity, positive predictive value, positive likelihood ratio, and negative likelihood ratio were 78.9%, 60.8%, 75.5%, 2.01, and 0.35, respectively. The odds ratio for the thyrotropin:thyroglobulin ratio indicating malignancy was 5.80. With a cutoff point value of 1.525 µIU/mL for thyroid-stimulating hormone, the sensitivity, specificity, positive predictive value, positive likelihood ratio, and negative likelihood ratio were 74.0%, 53.2%, 70.8%, 1.58, and 0.49, respectively. The odds ratio indicating malignancy for thyroid-stimulating hormone was 3.23. Increasing preoperative serum thyrotropin:thyroglobulin ratio is a risk factor for thyroid carcinoma, and the correlation of the thyrotropin:thyroglobulin ratio to malignancy is higher than that for serum thyroid-stimulating hormone. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.
2015-01-01
We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class-structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class-structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class-structured parameters can be formally compared regardless of missing values. We apply our class-structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response, are strongly favored by log-likelihood. The estimated population sex ratio is strongly influenced by sex structure in the model parameters, illustrating the importance of rigorous modeling of sex differences in capture-recapture models.
Newman, Phil; Adams, Roger; Waddington, Gordon
2012-09-01
To examine the relationship between two clinical test results and future diagnosis of Medial Tibial Stress Syndrome (MTSS) in personnel at a military trainee establishment. Data from a preparticipation musculoskeletal screening test performed on 384 Australian Defence Force Academy Officer Cadets were compared against 693 injuries reported by 326 of the Officer Cadets in the following 16 months. Data were held in an Injury Surveillance database and analysed using χ² and Fisher's Exact tests, and Receiver Operating Characteristic Curve analysis. Diagnosis of MTSS was confirmed by an independent blinded health practitioner. The palpation and oedema clinical tests were each found to be significant predictors of later onset of MTSS. Specifically: Shin palpation test OR 4.63, 95% CI 2.5 to 8.5, Positive Likelihood Ratio 3.38, Negative Likelihood Ratio 0.732, Pearson χ² p<0.001; Shin oedema test OR 76.1, 95% CI 9.6 to 602.7, Positive Likelihood Ratio 7.26, Negative Likelihood Ratio 0.095, Fisher's Exact p<0.001; Combined Shin Palpation Test and Shin Oedema Test Positive Likelihood Ratio 7.94, Negative Likelihood Ratio <0.001, Fisher's Exact p<0.001. Female gender was found to be an independent risk factor (OR 2.97, 95% CI 1.66 to 5.31, Positive Likelihood Ratio 2.09, Negative Likelihood Ratio 0.703, Pearson χ² p<0.001) for developing MTSS. The tests for MTSS employed here are components of a normal clinical examination used to diagnose MTSS. This paper confirms that these tests and female gender can also be confidently applied in predicting those in an asymptomatic population who are at greater risk of developing MTSS symptoms with activity at some point in the future.
Thakur, Jyoti; Pahuja, Sharvan Kumar; Pahuja, Roop
2017-01-01
In 2005, an international pediatric sepsis consensus conference defined systemic inflammatory response syndrome (SIRS) for children <18 years of age, but excluded premature infants. In 2012, Hofer et al. investigated the predictive power of SIRS for term neonates. In this paper, we examined the accuracy of SIRS in predicting sepsis in neonates, irrespective of their gestational age (i.e., pre-term, term, and post-term). We also created two prediction models, named Model A and Model B, using binary logistic regression. Both models performed better than SIRS. We also developed an android application so that physicians can easily use Model A and Model B in real-world scenarios. The sensitivity, specificity, positive likelihood ratio (PLR) and negative likelihood ratio (NLR) in cases of SIRS were 16.15%, 95.53%, 3.61, and 0.88, respectively, whereas they were 29.17%, 97.82%, 13.36, and 0.72, respectively, in the case of Model A, and 31.25%, 97.30%, 11.56, and 0.71, respectively, in the case of Model B. All models were significant with p < 0.001. PMID:29257099
The likelihood ratio as a random variable for linked markers in kinship analysis.
Egeland, Thore; Slooten, Klaas
2016-11-01
The likelihood ratio is the fundamental quantity that summarizes the evidence in forensic cases. Therefore, it is important to understand the theoretical properties of this statistic. This paper is the last in a series of three, and the first to study linked markers. We show that for all non-inbred pairwise kinship comparisons, the expected likelihood ratio in favor of a type of relatedness depends on the allele frequencies only via the number of alleles, also for linked markers, and also if the true relationship is another one than is tested for by the likelihood ratio. Exact expressions for the expectation and variance are derived for all these cases. Furthermore, we show that the expected likelihood ratio is a non-increasing function of the recombination rate as it increases between 0 and 0.5 when the actual relationship is the one investigated by the LR. Besides being of theoretical interest, exact expressions such as those obtained here can be used for software validation, as they allow one to verify correctness up to arbitrary precision. The paper also presents results and advice of practical importance. For example, we argue that the logarithm of the likelihood ratio behaves in a fundamentally different way than the likelihood ratio itself in terms of expectation and variance, in agreement with its interpretation as weight of evidence. Equipped with the results presented and freely available software, one may check calculations and software and also do power calculations.
Ou, Lu; Chow, Sy-Miin; Ji, Linying; Molenaar, Peter C M
2017-01-01
The autoregressive latent trajectory (ALT) model synthesizes the autoregressive model and the latent growth curve model. The ALT model is flexible enough to produce a variety of discrepant model-implied change trajectories. While some researchers consider this a virtue, others have cautioned that this may confound interpretations of the model's parameters. In this article, we show that some, but not all, of these interpretational difficulties may be clarified mathematically and tested explicitly via likelihood ratio tests (LRTs) imposed on the initial conditions of the model. We show analytically the nested relations among three variants of the ALT model and the constraints needed to establish equivalences. A Monte Carlo simulation study indicated that LRTs, particularly when used in combination with information criterion measures, can allow researchers to test targeted hypotheses about the functional forms of the change process under study. We further demonstrate when and how such tests may justifiably be used to facilitate our understanding of the underlying process of change using a subsample (N = 3,995) of longitudinal family income data from the National Longitudinal Survey of Youth.
Liou, Kevin; Negishi, Kazuaki; Ho, Suyen; Russell, Elizabeth A; Cranney, Greg; Ooi, Sze-Yuan
2016-08-01
Global longitudinal strain (GLS) is well validated and has important applications in contemporary clinical practice. The aim of this analysis was to evaluate the accuracy of resting peak GLS in the diagnosis of obstructive coronary artery disease (CAD). A systematic literature search was performed through July 2015 using four databases. Data were extracted independently by two authors and correlated before analyses. Using a random-effect model, the pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, and summary area under the curve for GLS were estimated with their respective 95% CIs. Screening of 1,669 articles yielded 10 studies with 1,385 patients appropriate for inclusion in the analysis. The mean age and left ventricular ejection fraction were 59.9 years and 61.1%. On the whole, 54.9% and 20.9% of the patients had hypertension and diabetes, respectively. Overall, abnormal GLS detected moderate to severe CAD with a pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio of 74.4%, 72.1%, 2.9, and 0.35 respectively. The area under the curve and diagnostic odds ratio were 0.81 and 8.5. The mean values of GLS for those with and without CAD were -16.5% (95% CI, -15.8% to -17.3%) and -19.7% (95% CI, -18.8% to -20.7%), respectively. Subgroup analyses for patients with severe CAD and normal left ventricular ejection fractions yielded similar results. Current evidence supports the use of GLS in the detection of moderate to severe obstructive CAD in symptomatic patients. GLS may complement existing diagnostic algorithms and act as an early adjunctive marker of cardiac ischemia. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
Sensitivity of Fit Indices to Misspecification in Growth Curve Models
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.
2010-01-01
This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…
IRT Model Selection Methods for Dichotomous Items
ERIC Educational Resources Information Center
Kang, Taehoon; Cohen, Allan S.
2007-01-01
Fit of the model to the data is important if the benefits of item response theory (IRT) are to be obtained. In this study, the authors compared model selection results using the likelihood ratio test, two information-based criteria, and two Bayesian methods. An example illustrated the potential for inconsistency in model selection depending on…
Comparison of two weighted integration models for the cueing task: linear and likelihood
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2003-01-01
In a task in which the observer must detect a signal at two locations, presenting a precue that predicts the location of a signal leads to improved performance with a valid cue (signal location matches the cue), compared to an invalid cue (signal location does not match the cue). The cue validity effect has often been explained with a limited-capacity attentional mechanism improving the perceptual quality at the cued location. Alternatively, the cueing effect can also be explained by unlimited-capacity models that assume a weighted combination of noisy responses across the two locations. We compare two weighted integration models: a linear model and a sum of weighted likelihoods model based on a Bayesian observer. While qualitatively these models are similar, quantitatively they predict different cue validity effects as the signal-to-noise ratio (SNR) increases. To test these models, three observers performed a cued discrimination task of Gaussian targets with an 80% valid precue across a broad range of SNRs. A limited-capacity attentional switching model was also analyzed and rejected. The sum of weighted likelihoods model best described the psychophysical results, suggesting that human observers approximate a weighted combination of likelihoods, and not a weighted linear combination.
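A hedged simulation sketch of the structural difference between the two decision rules described above; the d', cue validity weighting, and trial counts are arbitrary assumptions, not the authors' parameterization.

```python
# Hedged sketch: linear weighted combination vs. sum of weighted likelihoods
# for yes/no detection with a cued and an uncued location (assumed d' and
# cue validity; unit-variance Gaussian noise).
import numpy as np

rng = np.random.default_rng(0)
d_prime, cue_validity, n_trials = 1.5, 0.8, 2000

def simulate(signal_present):
    x = rng.normal(size=(n_trials, 2))            # columns: (cued, uncued) responses
    if signal_present:
        at_uncued = rng.random(n_trials) > cue_validity
        x[np.arange(n_trials), at_uncued.astype(int)] += d_prime
    return x

weights = np.array([cue_validity, 1.0 - cue_validity])

def linear_dv(x):
    return x @ weights                            # weighted sum of raw responses

def likelihood_dv(x):
    lr = np.exp(d_prime * x - d_prime**2 / 2)     # per-location likelihood ratios
    return lr @ weights                           # weighted sum of likelihoods

for name, dv in (("linear", linear_dv), ("weighted likelihood", likelihood_dv)):
    s, n = dv(simulate(True)), dv(simulate(False))
    auc = (s[:, None] > n[None, :]).mean()        # detection performance (area under ROC)
    print(name, round(auc, 3))
```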
Ab initio solution of macromolecular crystal structures without direct methods.
McCoy, Airlie J; Oeffner, Robert D; Wrobel, Antoni G; Ojala, Juha R M; Tryggvason, Karl; Lohkamp, Bernhard; Read, Randy J
2017-04-04
The majority of macromolecular crystal structures are determined using the method of molecular replacement, in which known related structures are rotated and translated to provide an initial atomic model for the new structure. A theoretical understanding of the signal-to-noise ratio in likelihood-based molecular replacement searches has been developed to account for the influence of model quality and completeness, as well as the resolution of the diffraction data. Here we show that, contrary to current belief, molecular replacement need not be restricted to the use of models comprising a substantial fraction of the unknown structure. Instead, likelihood-based methods allow a continuum of applications depending predictably on the quality of the model and the resolution of the data. Unexpectedly, our understanding of the signal-to-noise ratio in molecular replacement leads to the finding that, with data to sufficiently high resolution, fragments as small as single atoms of elements usually found in proteins can yield ab initio solutions of macromolecular structures, including some that elude traditional direct methods.
Wang, Liang; Xia, Yu; Jiang, Yu-Xin; Dai, Qing; Li, Xiao-Yi
2012-11-01
To assess the efficacy of sonography for discriminating nodular Hashimoto thyroiditis from papillary thyroid carcinoma in patients with sonographically evident diffuse Hashimoto thyroiditis. This study included 20 patients with 24 surgically confirmed Hashimoto thyroiditis nodules and 40 patients with 40 papillary thyroid carcinoma nodules; all had sonographically evident diffuse Hashimoto thyroiditis. A retrospective review of the sonograms was performed, and significant benign and malignant sonographic features were selected by univariate and multivariate analyses. The combined likelihood ratio was calculated as the product of each feature's likelihood ratio for papillary thyroid carcinoma. We compared the abilities of the original sonographic features and combined likelihood ratios in diagnosing nodular Hashimoto thyroiditis and papillary thyroid carcinoma by their sensitivity, specificity, and Youden index. The diagnostic capabilities of the sonographic features varied greatly, with Youden indices ranging from 0.175 to 0.700. Compared with single features, combinations of features were unable to improve the Youden indices effectively because the sensitivity and specificity usually changed in opposite directions. For combined likelihood ratios, however, the sensitivity improved greatly without an obvious reduction in specificity, which resulted in the maximum Youden index (0.825). With a combined likelihood ratio greater than 7.00 as the diagnostic criterion for papillary thyroid carcinoma, sensitivity reached 82.5%, whereas specificity remained at 100.0%. With a combined likelihood ratio less than 1.00 for nodular Hashimoto thyroiditis, sensitivity and specificity were 90.0% and 92.5%, respectively. Several sonographic features of nodular Hashimoto thyroiditis and papillary thyroid carcinoma in a background of diffuse Hashimoto thyroiditis were significantly different. The combined likelihood ratio may be superior to original sonographic features for discrimination of nodular Hashimoto thyroiditis from papillary thyroid carcinoma; therefore, it is a promising risk index for thyroid nodules and warrants further investigation.
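The combination rule itself is simple; as a hedged sketch (the feature names and per-feature likelihood ratio values below are hypothetical, and multiplying per-feature ratios assumes the features are conditionally independent), it can be written as:

```python
# Hypothetical illustration of a combined likelihood ratio: multiply each
# observed feature's LR for papillary carcinoma vs. nodular Hashimoto
# thyroiditis, then compare with the study's cut-offs (>7 carcinoma, <1 HT).
# Feature names and LR values are made up; multiplication assumes the
# features are conditionally independent.

FEATURE_LR = {
    "microcalcifications": 4.0,
    "taller_than_wide": 2.5,
    "marked_hypoechogenicity": 1.8,
    "regular_margin": 0.4,
}

def combined_likelihood_ratio(observed_features):
    lr = 1.0
    for feature in observed_features:
        lr *= FEATURE_LR.get(feature, 1.0)   # absent/unlisted features contribute nothing
    return lr

lr = combined_likelihood_ratio(["microcalcifications", "taller_than_wide"])
print(lr, "suggests carcinoma" if lr > 7 else "indeterminate or favours HT")
```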
Transfer Entropy as a Log-Likelihood Ratio
NASA Astrophysics Data System (ADS)
Barnett, Lionel; Bossomaier, Terry
2012-09-01
Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
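A hedged numerical sketch of the Gaussian special case mentioned above, where the transfer-entropy estimate reduces to half the log ratio of residual variances of nested autoregressions and 2*N times the estimate plays the role of the log-likelihood-ratio statistic; the simulated system and lag order are my own choices, not the paper's.

```python
# Hedged sketch of the Gaussian case: transfer entropy Y -> X estimated as
# half the log ratio of residual variances of the restricted (X past only)
# and full (X and Y past) autoregressions; 2*N*TE is then the
# log-likelihood-ratio statistic (asymptotically chi-squared under the null).
import numpy as np

rng = np.random.default_rng(1)
N = 5000
y = rng.normal(size=N)
x = np.zeros(N)
for t in range(1, N):                 # X is driven by past Y
    x[t] = 0.4 * x[t - 1] + 0.5 * y[t - 1] + rng.normal()

def residual_variance(target, regressors):
    beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    return np.mean((target - regressors @ beta) ** 2)

target, x_lag, y_lag = x[1:], x[:-1, None], y[:-1, None]
ones = np.ones_like(x_lag)
var_restricted = residual_variance(target, np.hstack([ones, x_lag]))
var_full = residual_variance(target, np.hstack([ones, x_lag, y_lag]))

te_hat = 0.5 * np.log(var_restricted / var_full)   # transfer entropy estimate (nats)
lr_stat = 2 * len(target) * te_hat                 # log-likelihood-ratio statistic
print(te_hat, lr_stat)
```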
Accounting for informatively missing data in logistic regression by means of reassessment sampling.
Lin, Ji; Lyles, Robert H
2015-05-20
We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
A Computer-Aided Diagnosis System for Breast Cancer Combining Mammography and Proteomics
2007-05-01
findings in both Data sets C and M. The likelihood ratio is the probability of the features under the malignant case divided by the probability of...likelihood ratio value as a classification decision variable, the probabilities of detection and false alarm are calculated as follows: Pdfusion...lowered the fused classifier's performance to near chance levels. A genetic algorithm searched over the likelihood-ratio threshold values for each
Measuring coherence of computer-assisted likelihood ratio methods.
Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H
2015-04-01
Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Suh, Youngsuk; Talley, Anna E.
2015-01-01
This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…
Model-Free CUSUM Methods for Person Fit
ERIC Educational Resources Information Center
Armstrong, Ronald D.; Shi, Min
2009-01-01
This article demonstrates the use of a new class of model-free cumulative sum (CUSUM) statistics to detect person fit given the responses to a linear test. The fundamental statistic being accumulated is the likelihood ratio of two probabilities. The detection performance of this CUSUM scheme is compared to other model-free person-fit statistics…
A parimutuel gambling perspective to compare probabilistic seismicity forecasts
NASA Astrophysics Data System (ADS)
Zechar, J. Douglas; Zhuang, Jiancang
2014-10-01
Using analogies to gaming, we consider the problem of comparing multiple probabilistic seismicity forecasts. To measure relative model performance, we suggest a parimutuel gambling perspective which addresses shortcomings of other methods such as likelihood ratio, information gain and Molchan diagrams. We describe two variants of the parimutuel approach for a set of forecasts: head-to-head, in which forecasts are compared in pairs, and round table, in which all forecasts are compared simultaneously. For illustration, we compare the 5-yr forecasts of the Regional Earthquake Likelihood Models experiment for M4.95+ seismicity in California.
Wang, Jiun-Hao; Chang, Hung-Hao
2010-10-26
In contrast to the considerable body of literature concerning disabilities in the general population, little information exists pertaining to disabilities in the farm population. Focusing on the disability issue for the insurants in the Farmers' Health Insurance (FHI) program in Taiwan, this paper examines the associations among socio-demographic characteristics, insured factors, and the introduction of the national health insurance program, as well as the types and payments of disabilities among the insurants. A unique dataset containing 1,594,439 insurants in 2008 was used in this research. A logistic regression model was estimated for the likelihood of receiving disability payments. Focusing on the recipients, a disability payment equation and a disability type equation were estimated using the ordinary least squares method and a multinomial logistic model, respectively, to investigate the effects of the exogenous factors on received payments and the likelihood of having different types of disabilities. Age and different job categories are significantly associated with the likelihood of receiving disability payments. Compared to those under age 45, the likelihood is higher among recipients aged 85 and above (odds ratio 8.04). Compared to hired workers, the odds ratios for the self-employed and for spouses of farm operators who were not members of farmers' associations are 0.97 and 0.85, respectively. In addition, older insurants are more likely to have eye problems; few differences in disability types are related to insured job categories. Results indicate that older farmers are more likely to receive disability payments, but the likelihood is not much different among insurants of various job categories. Among all of the selected types of disability, the highest likelihood is found for eye disability. In addition, the introduction of the national health insurance program decreased the likelihood of receiving disability payments. The experience in Taiwan can be valuable for other countries that are in the initial stages of implementing a universal health insurance program.
Yang, Ji; Gu, Hongya; Yang, Ziheng
2004-01-01
Chalcone synthase (CHS) is a key enzyme in the biosynthesis of flavonoids, which are important for the pigmentation of flowers and act as attractants to pollinators. Genes encoding CHS constitute a multigene family in which the copy number varies among plant species and functional divergence appears to have occurred repeatedly. In morning glories (Ipomoea), five functional CHS genes (A-E) have been described. Phylogenetic analysis of the Ipomoea CHS gene family revealed that CHS A, B, and C experienced accelerated rates of amino acid substitution relative to CHS D and E. To examine whether the CHS genes of the morning glories underwent adaptive evolution, maximum-likelihood models of codon substitution were used to analyze the functional sequences in the Ipomoea CHS gene family. These models used the nonsynonymous/synonymous rate ratio (ω = dN/dS) as an indicator of selective pressure and allowed the ratio to vary among lineages or sites. Likelihood ratio tests suggested significant variation in selection pressure among amino acid sites, with a small proportion of them detected to be under positive selection along the branches ancestral to CHS A, B, and C. Positive Darwinian selection appears to have promoted the divergence of subfamily ABC and subfamily DE and is at least partially responsible for a rate increase following gene duplication.
On the Nature of SEM Estimates of ARMA Parameters.
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2002-01-01
Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…
Interpreting DNA mixtures with the presence of relatives.
Hu, Yue-Qing; Fung, Wing K
2003-02-01
The assessment of DNA mixtures with the presence of relatives is discussed in this paper. The kinship coefficients are incorporated into the evaluation of the likelihood ratio and we first derive a unified expression of joint genotypic probabilities. A general formula and seven types of detailed expressions for calculating likelihood ratios are then developed for the case that a relative of the tested suspect is an unknown contributor to the mixed stain. These results can also be applied to the case of a non-tested suspect with one tested relative. Moreover, the formula for calculating the likelihood ratio when there are two related unknown contributors is given. Data for a real situation are given for illustration, and the effect of kinship on the likelihood ratio is shown therein. Some interesting findings are obtained.
Framework for adaptive multiscale analysis of nonhomogeneous point processes.
Helgason, Hannes; Bartroff, Jay; Abry, Patrice
2011-01-01
We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process's non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
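As a hedged, simplified sketch of this kind of statistic (not the authors' algorithm or template family), the generalized likelihood ratio for a constant-rate null against a single-change-point alternative can be maximized over a grid of candidate change points:

```python
# Simplified sketch: GLR statistic for a homogeneous-Poisson null against a
# two-piece (single change point) rate template, maximized over a grid of
# candidate change points.  Illustration only; the paper's templates and
# dynamic-programming search are more general.
import numpy as np

def segment_loglik(n_events, exposure):
    """Maximized log-likelihood of one homogeneous Poisson segment (rate = n/T)."""
    return 0.0 if n_events == 0 else n_events * np.log(n_events / exposure) - n_events

def glr_changepoint(event_times, total_time, candidate_taus):
    ll_null = segment_loglik(len(event_times), total_time)
    best = -np.inf
    for tau in candidate_taus:
        n1 = np.sum(event_times <= tau)
        ll_alt = (segment_loglik(n1, tau)
                  + segment_loglik(len(event_times) - n1, total_time - tau))
        best = max(best, 2.0 * (ll_alt - ll_null))
    return best

rng = np.random.default_rng(2)
# synthetic data: rate 1 on [0, 50), rate 3 on [50, 100)
events = np.concatenate([np.sort(rng.uniform(0, 50, rng.poisson(50))),
                         np.sort(rng.uniform(50, 100, rng.poisson(150)))])
print(glr_changepoint(events, 100.0, np.linspace(5, 95, 19)))
```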
Models and analysis for multivariate failure time data
NASA Astrophysics Data System (ADS)
Shih, Joanna Huang
The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local versus global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
Statistical inference methods for sparse biological time series data.
Ndukum, Juliet; Fonseca, Luís L; Santos, Helena; Voit, Eberhard O; Datta, Susmita
2011-04-25
Comparing metabolic profiles under different biological perturbations has become a powerful approach to investigating the functioning of cells. The profiles can be taken as single snapshots of a system, but more information is gained if they are measured longitudinally over time. The results are short time series consisting of relatively sparse data that cannot be analyzed effectively with standard time series techniques, such as autocorrelation and frequency domain methods. In this work, we study longitudinal time series profiles of glucose consumption in the yeast Saccharomyces cerevisiae under different temperatures and preconditioning regimens, which we obtained with methods of in vivo nuclear magnetic resonance (NMR) spectroscopy. For the statistical analysis we first fit several nonlinear mixed effect regression models to the longitudinal profiles and then used an ANOVA likelihood ratio method in order to test for significant differences between the profiles. The proposed methods are capable of distinguishing metabolic time trends resulting from different treatments and associate significance levels to these differences. Among several nonlinear mixed-effects regression models tested, a three-parameter logistic function represents the data with highest accuracy. ANOVA and likelihood ratio tests suggest that there are significant differences between the glucose consumption rate profiles for cells that had been--or had not been--preconditioned by heat during growth. Furthermore, pair-wise t-tests reveal significant differences in the longitudinal profiles for glucose consumption rates between optimal conditions and heat stress, optimal and recovery conditions, and heat stress and recovery conditions (p-values <0.0001). We have developed a nonlinear mixed effects model that is appropriate for the analysis of sparse metabolic and physiological time profiles. The model permits sound statistical inference procedures, based on ANOVA likelihood ratio tests, for testing the significance of differences between short time course data under different biological perturbations.
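A minimal sketch of the ANOVA likelihood ratio comparison itself (a generic nested-model test); the log-likelihood values and degrees-of-freedom difference below are placeholders, not the study's fitted values.

```python
# Generic nested-model likelihood ratio test as used for the ANOVA-type
# comparisons above.  Log-likelihoods and the df difference are placeholders.
from scipy.stats import chi2

def likelihood_ratio_test(loglik_reduced, loglik_full, df_diff):
    statistic = 2.0 * (loglik_full - loglik_reduced)
    return statistic, chi2.sf(statistic, df_diff)

# e.g. treatment-specific logistic parameters (full) vs. shared parameters
# (reduced), differing by three parameters
stat, p_value = likelihood_ratio_test(loglik_reduced=-210.4, loglik_full=-198.7, df_diff=3)
print(stat, p_value)
```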
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
Mapping Quantitative Traits in Unselected Families: Algorithms and Examples
Dupuis, Josée; Shi, Jianxin; Manning, Alisa K.; Benjamin, Emelia J.; Meigs, James B.; Cupples, L. Adrienne; Siegmund, David
2009-01-01
Linkage analysis has been widely used to identify from family data genetic variants influencing quantitative traits. Common approaches have both strengths and limitations. Likelihood ratio tests typically computed in variance component analysis can accommodate large families but are highly sensitive to departure from normality assumptions. Regression-based approaches are more robust but their use has primarily been restricted to nuclear families. In this paper, we develop methods for mapping quantitative traits in moderately large pedigrees. Our methods are based on the score statistic which in contrast to the likelihood ratio statistic, can use nonparametric estimators of variability to achieve robustness of the false positive rate against departures from the hypothesized phenotypic model. Because the score statistic is easier to calculate than the likelihood ratio statistic, our basic mapping methods utilize relatively simple computer code that performs statistical analysis on output from any program that computes estimates of identity-by-descent. This simplicity also permits development and evaluation of methods to deal with multivariate and ordinal phenotypes, and with gene-gene and gene-environment interaction. We demonstrate our methods on simulated data and on fasting insulin, a quantitative trait measured in the Framingham Heart Study. PMID:19278016
NASA Technical Reports Server (NTRS)
Hall, Steven R.; Walker, Bruce K.
1990-01-01
A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.
A LANDSAT study of ephemeral and perennial rangeland vegetation and soils
NASA Technical Reports Server (NTRS)
Bentley, R. G., Jr. (Principal Investigator); Salmon-Drexler, B. C.; Bonner, W. J.; Vincent, R. K.
1976-01-01
The author has identified the following significant results. Several methods of computer processing were applied to LANDSAT data for mapping vegetation characteristics of perennial rangeland in Montana and ephemeral rangeland in Arizona. The choice of optimal processing technique was dependent on prescribed mapping and site condition. Single channel level slicing and ratioing of channels were used for simple enhancement. Predictive models for mapping percent vegetation cover based on data from field spectra and LANDSAT data were generated by multiple linear regression of six unique LANDSAT spectral ratios. Ratio gating logic and maximum likelihood classification were applied successfully to recognize plant communities in Montana. Maximum likelihood classification did little to improve recognition of terrain features when compared to a single channel density slice in sparsely vegetated Arizona. LANDSAT was found to be more sensitive to differences between plant communities based on percentages of vigorous vegetation than to actual physical or spectral differences among plant species.
Likelihood Ratios for Glaucoma Diagnosis Using Spectral Domain Optical Coherence Tomography
Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M.; Weinreb, Robert N.; Medeiros, Felipe A.
2014-01-01
Purpose: To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral domain optical coherence tomography (spectral-domain OCT). Design: Observational cohort study. Methods: 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the Receiver Operating Characteristic (ROC) curve. Results: Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86μm were associated with positive LRs, i.e., LRs greater than 1, whereas RNFL thickness values higher than 86μm were associated with negative LRs, i.e., LRs smaller than 1. A modified Fagan nomogram was provided to assist calculation of post-test probability of disease from the calculated likelihood ratios and pretest probability of disease. Conclusion: The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision-making. PMID:23972303
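A hedged sketch of the underlying idea: the likelihood ratio at a given RNFL thickness is the ratio of its probability densities in glaucomatous versus healthy eyes (equivalently, the ROC tangent slope at the corresponding threshold). The Gaussian parameters below are invented for illustration and are not the study's estimates.

```python
# Hedged sketch: likelihood ratio for a specific RNFL thickness as a ratio of
# densities in glaucoma vs. healthy eyes.  Gaussian parameters are invented
# for illustration only.
from scipy.stats import norm

glaucoma = norm(loc=70.0, scale=13.0)   # hypothetical RNFL thickness (um), glaucoma
healthy = norm(loc=99.0, scale=9.0)     # hypothetical RNFL thickness (um), controls

def likelihood_ratio(thickness_um):
    return glaucoma.pdf(thickness_um) / healthy.pdf(thickness_um)

for t in (80, 86, 95):
    print(t, round(likelihood_ratio(t), 2))   # LR > 1 favours glaucoma, < 1 favours health
```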
Posada, David; Buckley, Thomas R
2004-10-01
Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
Normal versus Noncentral Chi-Square Asymptotics of Misspecified Models
ERIC Educational Resources Information Center
Chun, So Yeon; Shapiro, Alexander
2009-01-01
The noncentral chi-square approximation of the distribution of the likelihood ratio (LR) test statistic is a critical part of the methodology in structural equation modeling. Recently, it was argued by some authors that in certain situations normal distributions may give a better approximation of the distribution of the LR test statistic. The main…
ERIC Educational Resources Information Center
Moses, Tim; Holland, Paul W.
2010-01-01
In this study, eight statistical strategies were evaluated for selecting the parameterizations of loglinear models for smoothing the bivariate test score distributions used in nonequivalent groups with anchor test (NEAT) equating. Four of the strategies were based on significance tests of chi-square statistics (Likelihood Ratio, Pearson,…
Maximum likelihood estimation of signal-to-noise ratio and combiner weight
NASA Technical Reports Server (NTRS)
Kalson, S.; Dolinar, S. J.
1986-01-01
An algorithm for estimating the signal-to-noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase-modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
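As a hedged sketch of the estimation idea (assuming a data-aided variant with known biphase symbols; the paper's algorithm works from the same sufficient statistics but its exact form may differ):

```python
# Hedged sketch: joint ML estimation of signal amplitude and noise power from
# matched-filter outputs y_k = a*d_k + n_k with known symbols d_k = +/-1 and
# white Gaussian noise; the SNR estimate is the ratio of the two.
import numpy as np

rng = np.random.default_rng(3)
true_amplitude, true_sigma, n = 1.0, 0.7, 10000
d = rng.choice([-1.0, 1.0], size=n)                      # biphase symbols
y = true_amplitude * d + rng.normal(scale=true_sigma, size=n)

amplitude_hat = np.mean(d * y)                           # ML signal amplitude
noise_power_hat = np.mean((y - amplitude_hat * d) ** 2)  # ML noise power
snr_hat = amplitude_hat**2 / noise_power_hat
print(snr_hat, "true:", true_amplitude**2 / true_sigma**2)
```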
Identical twins in forensic genetics - Epidemiology and risk based estimation of weight of evidence.
Tvedebrink, Torben; Morling, Niels
2015-12-01
The increase in the number of forensic genetic loci used for identification purposes results in infinitesimal random match probabilities. These probabilities are computed under assumptions made for rather simple population genetic models. Often, the forensic expert reports likelihood ratios, where the alternative hypothesis is assumed not to encompass close relatives. However, this approach implies that important factors present in real human populations are discarded, and it may be very unfavourable to the defendant. In this paper, we discuss some important aspects concerning the closest familial relationship, i.e., identical (monozygotic) twins, when reporting the weight of evidence. This can be done even when the suspect has no knowledge of an identical twin or when official records hold no twin information about the suspect. The derived expressions are not original, as several authors have previously published results accounting for close familial relationships. However, we revisit the discussion to increase the awareness among forensic genetic practitioners and include new information on medical and societal factors to assess the risk of not considering a monozygotic twin as the true perpetrator. Accounting for a monozygotic twin in the weight of evidence implies that the likelihood ratio is truncated at a maximal value depending on the prevalence of monozygotic twins and the societal efficiency of recognising a monozygotic twin. If a monozygotic twin is considered as an alternative proposition, then data relevant for Danish society suggest that the threshold of likelihood ratios should be approximately between 150,000 and 2,000,000 in order to take the risk of an unrecognised identical, monozygotic twin into consideration. In other societies, the threshold of the likelihood ratio in crime cases may reach other, often lower, values depending on the recognition of monozygotic twins and the age of the suspect. In general, more strictly kept registries will imply larger thresholds on the likelihood ratio as the monozygotic twin explanation becomes less probable. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
On meeting capital requirements with a chance-constrained optimization model.
Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan
2016-01-01
This paper deals with a capital to risk asset ratio chance-constrained optimization model in the presence of loans, treasury bill, fixed assets and non-interest-earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital to risk asset ratio chance constraint. We analyze our model under the worst-case scenario, i.e., loan default. The theoretical model is analyzed by applying numerical procedures in order to derive valuable insights from a financial perspective. Our results suggest that our capital to risk asset ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in the future market value of assets.
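As a generic, hedged illustration of how a chance constraint becomes deterministic under a normality assumption (the paper's convex counterpart is derived from a modified CreditMetrics model, so its details differ):

```python
# Toy illustration only: if the end-of-period capital-to-risk-asset ratio R
# is normal with mean mu and standard deviation sigma, then
#   P(R >= r_min) >= 0.95   <=>   mu - z_0.95 * sigma >= r_min.
from scipy.stats import norm

def chance_constraint_holds(mu, sigma, r_min, confidence=0.95):
    return mu - norm.ppf(confidence) * sigma >= r_min

print(chance_constraint_holds(mu=0.12, sigma=0.02, r_min=0.08))   # True
```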
Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors
ERIC Educational Resources Information Center
Guerra-Peña, Kiero; Steinley, Douglas
2016-01-01
Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…
Testing for Two-Way Interactions in the Multigroup Common Factor Model
ERIC Educational Resources Information Center
van Smeden, Maarten; Hessen, David J.
2013-01-01
In this article, a 2-way multigroup common factor model (MG-CFM) is presented. The MG-CFM can be used to estimate interaction effects between 2 grouping variables on 1 or more hypothesized latent variables. For testing the significance of such interactions, a likelihood ratio test is presented. In a simulation study, the robustness of the…
Detection of abrupt changes in dynamic systems
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1984-01-01
Some of the basic ideas associated with the detection of abrupt changes in dynamic systems are presented. Multiple filter-based techniques, residual-based methods, and the multiple model and generalized likelihood ratio methods are considered. Issues such as the effect of unknown onset time on algorithm complexity and structure and robustness to model uncertainty are discussed.
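As a minimal illustration of the generalized likelihood ratio idea for abrupt-change detection (not the filter-bank algorithms surveyed above), the sketch below scans candidate change points in i.i.d. Gaussian data with known variance and reports the maximised 2·log likelihood ratio for a mean shift; the data are simulated.

```python
import numpy as np

def glr_mean_change(x, sigma=1.0):
    """GLR statistic for a single mean change in i.i.d. Gaussian data with known sigma.
    Returns (max statistic, most likely change point); under H0 (no change) the
    statistic is compared against a chi-square(1) threshold."""
    n = len(x)
    best_stat, best_k = -np.inf, None
    for k in range(1, n):                     # candidate change points
        m1, m2 = x[:k].mean(), x[k:].mean()
        stat = (k * (n - k) / n) * (m1 - m2) ** 2 / sigma ** 2   # 2 log likelihood ratio
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_stat, best_k

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(1.5, 1, 50)])
print(glr_mean_change(x))
```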
Ma, Chunming; Liu, Yue; Lu, Qiang; Lu, Na; Liu, Xiaoli; Tian, Yiming; Wang, Rui; Yin, Fuzai
2016-02-01
The blood pressure-to-height ratio (BPHR) has been shown to be an accurate index for screening hypertension in children and adolescents. The aim of the present study was to perform a meta-analysis assessing the performance of the BPHR for identifying hypertension. Electronic and manual searches were performed to identify studies of the BPHR. After methodological quality assessment and data extraction, pooled estimates of the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, area under the receiver operating characteristic curve and summary receiver operating characteristics were assessed systematically. The extent of heterogeneity was also assessed. Six studies were identified for analysis. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio and diagnostic odds ratio values of the BPHR, for assessment of hypertension, were 96% [95% confidence interval (CI)=0.95-0.97], 90% (95% CI=0.90-0.91), 10.68 (95% CI=8.03-14.21), 0.04 (95% CI=0.03-0.07) and 247.82 (95% CI=114.50-536.34), respectively. The area under the receiver operating characteristic curve was 0.9472. The BPHR had higher diagnostic accuracies for identifying hypertension in children and adolescents.
Markov modulated Poisson process models incorporating covariates for rainfall intensity.
Thayakaran, R; Ramesh, N I
2013-01-01
Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
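The model-comparison step described above boils down to the standard likelihood ratio test for nested models. A generic sketch, with hypothetical fitted log-likelihoods standing in for MMPP fits with and without a covariate:

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_null, loglik_full, df_diff):
    """Compare nested models: D = 2*(ll_full - ll_null) ~ chi-square(df_diff) under H0."""
    D = 2.0 * (loglik_full - loglik_null)
    p_value = chi2.sf(D, df_diff)
    return D, p_value

# Hypothetical fitted log-likelihoods: model without vs. with a humidity covariate
D, p = likelihood_ratio_test(loglik_null=-1523.4, loglik_full=-1518.9, df_diff=1)
print(f"LR statistic = {D:.2f}, p = {p:.4f}")
```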
Mohammadi, Seyed-Farzad; Sabbaghi, Mostafa; Z-Mehrjardi, Hadi; Hashemi, Hassan; Alizadeh, Somayeh; Majdi, Mercede; Taee, Farough
2012-03-01
To apply artificial intelligence models to predict the occurrence of posterior capsule opacification (PCO) after phacoemulsification. Farabi Eye Hospital, Tehran, Iran. Clinical-based cross-sectional study. The posterior capsule status of eyes operated on for age-related cataract and the need for laser capsulotomy were determined. After a literature review, data polishing, and expert consultation, 10 input variables were selected. The QUEST algorithm was used to develop a decision tree. Three back-propagation artificial neural networks were constructed with 4, 20, and 40 neurons in 2 hidden layers and trained with the same transfer functions (log-sigmoid and linear transfer) and training protocol with randomly selected eyes. They were then tested on the remaining eyes and the networks compared for their performance. Performance indices were used to compare resultant models with the results of logistic regression analysis. The models were trained using 282 randomly selected eyes and then tested using 70 eyes. Laser capsulotomy for clinically significant PCO was indicated or had been performed 2 years postoperatively in 40 eyes. A sample decision tree was produced with accuracy of 50% (likelihood ratio 0.8). The best artificial neural network, which showed 87% accuracy and a positive likelihood ratio of 8, was achieved with 40 neurons. The area under the receiver-operating-characteristic curve was 0.71. In comparison, logistic regression reached accuracy of 80%; however, the likelihood ratio was not measurable because the sensitivity was zero. A prototype artificial neural network was developed that predicted posterior capsule status (requiring capsulotomy) with reasonable accuracy. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Validation of software for calculating the likelihood ratio for parentage and kinship.
Drábek, J
2009-03-01
Although the likelihood ratio is a well-known statistical technique, commercial off-the-shelf (COTS) software products for its calculation are not sufficiently validated to meet the general requirements for the competence of testing and calibration laboratories (EN/ISO/IEC 17025:2005 norm). The software in question can be considered critical, as it directly weighs the forensic evidence allowing judges to decide on guilt or innocence, or to identify a person or kin (e.g., in mass fatalities). For these reasons, accredited laboratories shall validate likelihood ratio software in accordance with the above norm. To validate software for calculating the likelihood ratio in parentage/kinship scenarios, I assessed available vendors, chose two programs (Paternity Index and familias) for testing, and finally validated them using tests derived from elaboration of the available guidelines for the fields of forensics, biomedicine, and software engineering. MS Excel calculations using known likelihood ratio formulas or peer-reviewed results of difficult paternity cases were used as a reference. Using seven testing cases, it was found that both programs satisfied the requirements for basic paternity cases. However, only a combination of the two software programs fulfills the criteria needed for our purpose across the whole spectrum of functions under validation, with the exception of providing algebraic formulas in cases of mutation and/or silent alleles.
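For orientation, per-locus paternity indices (likelihood ratios) combine multiplicatively across independent loci, and the combined index converts to a posterior probability under an assumed prior. The sketch below is a bare-bones illustration of that arithmetic, not a substitute for the validated programs discussed above; the per-locus PI values are hypothetical.

```python
import math

def combined_paternity_index(per_locus_pi):
    """Combined PI is the product of per-locus likelihood ratios (assuming independent loci)."""
    return math.prod(per_locus_pi)

def posterior_probability(cpi, prior=0.5):
    """Posterior probability of paternity from the combined PI and an assumed prior."""
    prior_odds = prior / (1 - prior)
    post_odds = cpi * prior_odds
    return post_odds / (1 + post_odds)

pi_values = [2.1, 5.4, 1.8, 11.3, 3.0]   # hypothetical per-locus paternity indices
cpi = combined_paternity_index(pi_values)
print(cpi, posterior_probability(cpi))
```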
NASA Astrophysics Data System (ADS)
Handley, John C.; Babcock, Jason S.; Pelz, Jeff B.
2003-12-01
Image evaluation tasks are often conducted using paired comparisons or ranking. To elicit interval scales, both methods rely on Thurstone's Law of Comparative Judgment in which objects closer in psychological space are more often confused in preference comparisons by a putative discriminal random process. It is often debated whether paired comparisons and ranking yield the same interval scales. An experiment was conducted to assess scale production using paired comparisons and ranking. For this experiment a Pioneer Plasma Display and Apple Cinema Display were used for stimulus presentation. Observers performed rank order and paired comparison tasks on both displays. For each of five scenes, six images were created by manipulating attributes such as lightness, chroma, and hue using six different settings. The intention was to simulate the variability from a set of digital cameras or scanners. Nineteen subjects (5 females, 14 males), ranging from 19 to 51 years of age, participated in this experiment. Using a paired comparison model and a ranking model, scales were estimated for each display and image combination yielding ten scale pairs, ostensibly measuring the same psychological scale. The Bradley-Terry model was used for the paired comparisons data and the Bradley-Terry-Mallows model was used for the ranking data. Each model was fit using maximum likelihood estimation and assessed using likelihood ratio tests. Approximate 95% confidence intervals were also constructed using likelihood ratios. Model fits for paired comparisons were satisfactory for all scales except those from two image/display pairs; the ranking model fit uniformly well on all data sets. Arguing from overlapping confidence intervals, we conclude that paired comparisons and ranking produce no conflicting decisions regarding ultimate ordering of treatment preferences, but paired comparisons yield greater precision at the expense of lack-of-fit.
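As a hedged sketch of how Bradley-Terry worth parameters can be fitted from paired-comparison counts (the authors' own estimation code is not shown in the abstract), the snippet below uses the standard minorization-maximization update; the win matrix is hypothetical.

```python
import numpy as np

def bradley_terry(wins, n_iter=200):
    """wins[i, j] = number of times item i was preferred over item j.
    Returns normalized worth parameters p via the MM update
    p_i <- W_i / sum_{j != i} n_ij / (p_i + p_j)."""
    n = wins.shape[0]
    n_ij = wins + wins.T                  # comparisons between i and j
    W = wins.sum(axis=1)                  # total wins of item i
    p = np.ones(n)
    for _ in range(n_iter):
        denom = np.array([
            sum(n_ij[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            for i in range(n)
        ])
        p = W / denom
        p /= p.sum()                      # fix the arbitrary scale
    return p

# Hypothetical paired-comparison counts for four images
wins = np.array([[0, 8, 6, 9],
                 [2, 0, 5, 7],
                 [4, 5, 0, 6],
                 [1, 3, 4, 0]])
print(bradley_terry(wins))
```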
Is it possible to predict office hysteroscopy failure?
Cobellis, Luigi; Castaldi, Maria Antonietta; Giordano, Valentino; De Franciscis, Pasquale; Signoriello, Giuseppe; Colacurci, Nicola
2014-10-01
The purpose of this study was to develop a clinical tool, the HFI (Hysteroscopy Failure Index), which gives criteria to predict hysteroscopic examination failure. This was a retrospective diagnostic test study, aimed at validating the HFI, set at the Department of Gynaecology, Obstetric and Reproductive Science of the Second University of Naples, Italy. The HFI was applied to our database of 995 consecutive women who underwent office-based hysteroscopy to assess abnormal uterine bleeding (AUB), infertility, cervical polyps, and abnormal sonographic patterns (postmenopausal endometrial thickness of more than 5 mm, endometrial hyperechogenic spots, irregular endometrial line, suspected uterine septa). Demographic characteristics, previous surgery, recurrent infections, sonographic data, estro-progestins, IUD and menopausal status were collected. Receiver operating characteristic (ROC) curve analysis was used to assess the ability of the model to identify failed hysteroscopies, expressed as the number of correctly identified failures (true positives) divided by the total number of failed hysteroscopies (true positives + false negatives). Positive and negative likelihood ratios with 95% CI were calculated. The HFI score was able to predict office hysteroscopy failure in 76% of cases. Moreover, the positive likelihood ratio was 11.37 (95% CI: 8.49-15.21), and the negative likelihood ratio was 0.33 (95% CI: 0.27-0.41). The Hysteroscopy Failure Index was able to retrospectively predict office hysteroscopy failure. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Avoiding overstating the strength of forensic evidence: Shrunk likelihood ratios/Bayes factors.
Morrison, Geoffrey Stewart; Poh, Norman
2018-05-01
When strength of forensic evidence is quantified using sample data and statistical models, a concern may be raised as to whether the output of a model overestimates the strength of evidence. This is particularly the case when the amount of sample data is small, and hence sampling variability is high. This concern is related to concern about precision. This paper describes, explores, and tests three procedures which shrink the value of the likelihood ratio or Bayes factor toward the neutral value of one. The procedures are: (1) a Bayesian procedure with uninformative priors, (2) use of empirical lower and upper bounds (ELUB), and (3) a novel form of regularized logistic regression. As a benchmark, they are compared with linear discriminant analysis, and in some instances with non-regularized logistic regression. The behaviours of the procedures are explored using Monte Carlo simulated data, and tested on real data from comparisons of voice recordings, face images, and glass fragments. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Micheyl, Christophe; Dai, Huanping
2010-01-01
The equal-variance Gaussian signal-detection-theory (SDT) decision model for the dual-pair change-detection (or “4IAX”) paradigm has been described in earlier publications. In this note, we consider the equal-variance Gaussian SDT model for the related dual-pair AB vs BA identification paradigm. The likelihood ratios, optimal decision rules, receiver operating characteristics (ROCs), and relationships between d' and proportion-correct (PC) are analyzed for two special cases: that of statistically independent observations, which is likely to apply in constant-stimuli experiments, and that of highly correlated observations, which is likely to apply in experiments where stimuli are roved widely across trials or pairs. A surprising outcome of this analysis is that although these two situations lead to different optimal decision rules, the predicted ROCs and proportions of correct responses (PCs) for these two cases are not substantially different, and are either identical or similar to those observed in the basic Yes-No paradigm. PMID:19633356
Chen, Helen; Bautista, Dianne; Ch'ng, Ying Chia; Li, Wenyun; Chan, Edwin; Rush, A John
2013-06-01
The Edinburgh Postnatal Depression Scale (EPDS) may not be a uniformly valid postnatal depression (PND) screen across populations. We evaluated the performance of a Chinese translation of the 10-item (HK-EPDS) and six-item (HK-EPDS-6) versions in post-partum women in Singapore. Chinese-speaking post-partum obstetric clinic patients were recruited for this study. They completed the HK-EPDS, from which we derived the six-item HK-EPDS-6. All women were clinically assessed for PND based on Diagnostic and Statistical Manual, Fourth Edition-Text Revision criteria. Receiver-operator curve (ROC) analyses and likelihood ratio computations informed scale cutoff choices. Clinical fitness was judged by thresholds for internal consistency [α ≥ 0.70] and for diagnostic performance by true-positive rate (>85%), false-positive rate (≤10%), positive likelihood ratio (>1), negative likelihood ratio (<0.2), area under the ROC curve (AUC, ≥90%) and effect size (≥0.80). Based on clinical interview, prevalence of PND was 6.2% in 487 post-partum women. HK-EPDS internal consistency was 0.84. At a cutoff of 13 or more, the true-positive rate was 86.7%, false-positive rate 3.3%, positive likelihood ratio 26.4, negative likelihood ratio 0.14, AUC 94.4% and effect size 0.81. For the HK-EPDS-6, internal consistency was 0.76. At a cutoff of 8 or more, we found a true-positive rate of 86.7%, false-positive rate 6.6%, positive likelihood ratio 13.2, negative likelihood ratio 0.14, AUC 92.9% and effect size 0.98. The HK-EPDS (cutoff ≥13) and HK-EPDS-6 (cutoff ≥8) are fit for PND screening of general population post-partum women. The brief six-item version appears to be clinically suitable for quick screening in Chinese-speaking women. Copyright © 2013 Wiley Publishing Asia Pty Ltd.
Masch, William R; Cohan, Richard H; Ellis, James H; Dillman, Jonathan R; Rubin, Jonathan M; Davenport, Matthew S
2016-02-01
The purpose of this study was to determine the clinical effectiveness of prospectively reported sonographic twinkling artifact for the diagnosis of renal calculus in patients without known urolithiasis. All ultrasound reports finalized in one health system from June 15, 2011, to June 14, 2014, that contained the words "twinkle" or "twinkling" in reference to suspected renal calculus were identified. Patients with known urolithiasis or lack of a suitable reference standard (unenhanced abdominal CT with ≤ 2.5-mm slice thickness performed ≤ 30 days after ultrasound) were excluded. The sensitivity, specificity, and positive likelihood ratio of sonographic twinkling artifact for the diagnosis of renal calculus were calculated by renal unit and stratified by two additional diagnostic features for calcification (echogenic focus, posterior acoustic shadowing). Eighty-five patients formed the study population. Isolated sonographic twinkling artifact had sensitivity of 0.78 (82/105), specificity of 0.40 (26/65), and a positive likelihood ratio of 1.30 for the diagnosis of renal calculus. Specificity and positive likelihood ratio improved and sensitivity declined when the following additional diagnostic features were present: sonographic twinkling artifact and echogenic focus (sensitivity, 0.61 [64/105]; specificity, 0.65 [42/65]; positive likelihood ratio, 1.72); sonographic twinkling artifact and posterior acoustic shadowing (sensitivity, 0.31 [33/105]; specificity, 0.95 [62/65]; positive likelihood ratio, 6.81); all three features (sensitivity, 0.31 [33/105]; specificity, 0.95 [62/65]; positive likelihood ratio, 6.81). Isolated sonographic twinkling artifact has a high false-positive rate (60%) for the diagnosis of renal calculus in patients without known urolithiasis.
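The reported isolated-artifact figures follow directly from the stated counts using the usual definitions (LR+ = sensitivity / (1 − specificity)); a short check:

```python
def lr_positive(tp, fn, tn, fp):
    """Sensitivity, specificity and positive likelihood ratio from 2x2 counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity, sensitivity / (1 - specificity)

# Isolated twinkling artifact: 82/105 calculi detected, 26/65 non-calculi correctly negative
sens, spec, lr_pos = lr_positive(tp=82, fn=105 - 82, tn=26, fp=65 - 26)
print(round(sens, 2), round(spec, 2), round(lr_pos, 2))   # 0.78 0.4 1.3
```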
Van Hoeyveld, Erna; Nickmans, Silvie; Ceuppens, Jan L; Bossuyt, Xavier
2015-10-23
Cut-off values and predictive values are used for the clinical interpretation of specific IgE antibody results. However, cut-off levels are not well defined, and predictive values are dependent on the prevalence of disease. The objective of this study was to document clinically relevant diagnostic accuracy of specific IgE for inhalant allergens (grass pollen and birch pollen) based on test result interval-specific likelihood ratios. Likelihood ratios are independent of the prevalence and allow diagnostic accuracy information to be provided for test result intervals. In a prospective study we included consecutive adult patients presenting at an allergy clinic with complaints of rhinitis or rhinoconjunctivitis. The standard for diagnosis was a suggestive clinical history of grass or birch pollen allergy and a positive skin test. Specific IgE was determined with the ImmunoCAP Fluorescence Enzyme Immuno-Assay. We established test result interval-specific likelihood ratios of specific IgE for clinical allergy to inhalant allergens (grass pollen, rPhl p 1,5, birch pollen, rBet v 1). The likelihood ratios for allergy increased with increasing specific IgE antibody levels. The likelihood ratio was <0.03 for specific IgE <0.1 kU/L, between 0.1 and 1.4 for specific IgE between 0.1 kU/L and 0.35 kU/L, between 1.4 and 4.2 for specific IgE between 0.35 kU/L and 3.5 kU/L, >6.3 for specific IgE >0.7 kU/L, and very high (∞) for specific IgE >3.5 kU/L. Test result interval-specific likelihood ratios provide a useful tool for the interpretation of specific IgE test results for inhalant allergens. Copyright © 2015 Elsevier B.V. All rights reserved.
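Interval-specific likelihood ratios of this kind are estimated as the proportion of allergic patients whose result falls in an interval divided by the proportion of non-allergic patients in the same interval. A minimal sketch with hypothetical IgE values (not the study's data):

```python
import numpy as np

def interval_likelihood_ratios(values_diseased, values_nondiseased, cutpoints):
    """LR per test-result interval = P(result in interval | disease) /
    P(result in interval | no disease), estimated from observed proportions."""
    d_bins = np.digitize(values_diseased, cutpoints)     # interval index per observation
    n_bins = np.digitize(values_nondiseased, cutpoints)
    k = len(cutpoints) + 1
    p_d = np.bincount(d_bins, minlength=k) / len(values_diseased)
    p_n = np.bincount(n_bins, minlength=k) / len(values_nondiseased)
    with np.errstate(divide="ignore"):
        return p_d / p_n   # may be inf when no non-diseased result falls in an interval

# Hypothetical specific-IgE values (kU/L) for allergic and non-allergic patients
allergic = np.array([0.2, 0.5, 1.2, 4.0, 7.5, 12.0, 0.9, 3.8])
non_allergic = np.array([0.05, 0.08, 0.2, 0.3, 0.4, 0.1, 0.6, 0.05])
print(interval_likelihood_ratios(allergic, non_allergic, cutpoints=[0.1, 0.35, 3.5]))
```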
el Galta, Rachid; Uitte de Willige, Shirley; de Visser, Marieke C H; Helmer, Quinta; Hsu, Li; Houwing-Duistermaat, Jeanine J
2007-09-24
In this paper, we propose a one degree of freedom test for association between a candidate gene and a binary trait. This method is a generalization of Terwilliger's likelihood ratio statistic and is especially powerful for the situation of one associated haplotype. As an alternative to the likelihood ratio statistic, we derive a score statistic, which has a tractable expression. For haplotype analysis, we assume that phase is known. By means of a simulation study, we compare the performance of the score statistic to Pearson's chi-square statistic and the likelihood ratio statistic proposed by Terwilliger. We illustrate the method on three candidate genes studied in the Leiden Thrombophilia Study. We conclude that the statistic follows a chi square distribution under the null hypothesis and that the score statistic is more powerful than Terwilliger's likelihood ratio statistic when the associated haplotype has frequency between 0.1 and 0.4 and has a small impact on the studied disorder. With regard to Pearson's chi-square statistic, the score statistic has more power when the associated haplotype has frequency above 0.2 and the number of variants is above five.
A Bayesian estimation of the helioseismic solar age
NASA Astrophysics Data System (ADS)
Bonanno, A.; Fröhlich, H.-E.
2015-08-01
Context. The helioseismic determination of the solar age has been a subject of several studies because it provides us with an independent estimation of the age of the solar system. Aims: We present the Bayesian estimates of the helioseismic age of the Sun, which are determined by means of calibrated solar models that employ different equations of state and nuclear reaction rates. Methods: We use 17 frequency separation ratios r02(n) = (ν_{n,l=0} − ν_{n−1,l=2})/(ν_{n,l=1} − ν_{n−1,l=1}) from 8640 days of low-ℓ BiSON frequencies and consider three likelihood functions that depend on the handling of the errors of these r02(n) ratios. Moreover, we employ the 2010 CODATA recommended values for Newton's constant, solar mass, and radius to calibrate a large grid of solar models spanning a conceivable range of solar ages. Results: It is shown that the most constrained posterior distribution of the solar age for models employing the Irwin EOS with NACRE reaction rates leads to t⊙ = 4.587 ± 0.007 Gyr, while models employing the Irwin EOS and Adelberger et al. (2011, Rev. Mod. Phys., 83, 195) reaction rates have t⊙ = 4.569 ± 0.006 Gyr. Implementing the OPAL EOS in the solar models results in reduced evidence ratios (Bayes factors) and leads to an age that is not consistent with the meteoritic dating of the solar system. Conclusions: An estimate of the solar age that relies on a helioseismic age indicator such as r02(n) turns out to be essentially independent of the type of likelihood function. However, with respect to model selection, abandoning any information concerning the errors of the r02(n) ratios leads to inconclusive results, and this stresses the importance of evaluating the trustworthiness of error estimates.
Reliable and More Powerful Methods for Power Analysis in Structural Equation Modeling
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Zhang, Zhiyong; Zhao, Yanyun
2017-01-01
The normal-distribution-based likelihood ratio statistic T_ml = nF_ml is widely used for power analysis in structural equation modeling (SEM). In such an analysis, power and sample size are computed by assuming that T_ml follows a central chi-square distribution under H_0 and a noncentral chi-square…
ERIC Educational Resources Information Center
Hansen, Mark; Cai, Li; Monroe, Scott; Li, Zhen
2014-01-01
It is a well-known problem in testing the fit of models to multinomial data that the full underlying contingency table will inevitably be sparse for tests of reasonable length and for realistic sample sizes. Under such conditions, full-information test statistics such as Pearson's X^2 and the likelihood ratio statistic…
A case study for the integration of predictive mineral potential maps
NASA Astrophysics Data System (ADS)
Lee, Saro; Oh, Hyun-Joo; Heo, Chul-Ho; Park, Inhye
2014-09-01
This study aims to elaborate mineral potential maps using various models and to verify their accuracy for the epithermal gold (Au)-silver (Ag) deposits in a Geographic Information System (GIS) environment, assuming that all deposits shared a common genesis. The maps of potential Au and Ag deposits were produced from geological data in the Taebaeksan mineralized area, Korea. The methodological framework consists of three main steps: 1) identification of spatial relationships, 2) quantification of such relationships, and 3) combination of multiple quantified relationships. A spatial database containing 46 Au-Ag deposits was constructed using GIS. The spatial associations between training deposits and 26 related factors were identified and quantified by probabilistic and statistical modelling. The mineral potential maps were generated by integrating all factors using the overlay method and recombined afterwards using the likelihood ratio model. They were verified by comparison with test mineral deposit locations. The verification revealed that the combined mineral potential map had the greatest accuracy (83.97%), whereas it was 72.24%, 65.85%, 72.23% and 71.02% for the likelihood ratio, weight of evidence, logistic regression and artificial neural network models, respectively. The mineral potential map can provide useful information for mineral resource development.
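A frequency-ratio-style weighting in the spirit of the likelihood ratio model used here assigns each factor class the ratio of deposit density within the class to the overall density. A hedged sketch with a hypothetical raster and deposit mask (the study's actual factors and weighting details are not reproduced):

```python
import numpy as np

def class_likelihood_ratios(factor_raster, deposit_mask):
    """For each class value in factor_raster, compute
    (fraction of deposit cells in the class) / (fraction of all cells in the class)."""
    ratios = {}
    total_cells = factor_raster.size
    total_deposits = deposit_mask.sum()
    for cls in np.unique(factor_raster):
        in_class = factor_raster == cls
        frac_area = in_class.sum() / total_cells
        frac_deposits = (deposit_mask & in_class).sum() / total_deposits
        ratios[int(cls)] = frac_deposits / frac_area
    return ratios

# Hypothetical 5x5 geology-class raster and three known deposit locations
geology = np.array([[1, 1, 2, 2, 3],
                    [1, 2, 2, 3, 3],
                    [1, 2, 3, 3, 3],
                    [2, 2, 3, 3, 1],
                    [2, 3, 3, 1, 1]])
deposits = np.zeros_like(geology, dtype=bool)
deposits[[0, 2, 3], [2, 3, 3]] = True
print(class_likelihood_ratios(geology, deposits))
```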
A quantum framework for likelihood ratios
NASA Astrophysics Data System (ADS)
Bond, Rachael L.; He, Yang-Hui; Ormerod, Thomas C.
The ability to calculate precise likelihood ratios is fundamental to science, from Quantum Information Theory through to Quantum State Estimation. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes’ theorem either defaults to the marginal probability driven “naive Bayes’ classifier”, or requires the use of compensatory expectation-maximization techniques. This paper takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement, and demonstrates that Bayes’ theorem is a special case of a more general quantum mechanical expression.
Nasal Airway Microbiota Profile and Severe Bronchiolitis in Infants: A Case-control Study.
Hasegawa, Kohei; Linnemann, Rachel W; Mansbach, Jonathan M; Ajami, Nadim J; Espinola, Janice A; Petrosino, Joseph F; Piedra, Pedro A; Stevenson, Michelle D; Sullivan, Ashley F; Thompson, Amy D; Camargo, Carlos A
2017-11-01
Little is known about the relationship of airway microbiota with bronchiolitis in infants. We aimed to identify nasal airway microbiota profiles and to determine their association with the likelihood of bronchiolitis in infants. A case-control study was conducted. As a part of a multicenter prospective study, we collected nasal airway samples from 40 infants hospitalized with bronchiolitis. We concurrently enrolled 110 age-matched healthy controls. By applying 16S ribosomal RNA gene sequencing and an unbiased clustering approach to these 150 nasal samples, we identified microbiota profiles and determined the association of microbiota profiles with likelihood of bronchiolitis. Overall, the median age was 3 months and 56% were male. Unbiased clustering of airway microbiota identified 4 distinct profiles: Moraxella-dominant profile (37%), Corynebacterium/Dolosigranulum-dominant profile (27%), Staphylococcus-dominant profile (15%) and mixed profile (20%). Proportion of bronchiolitis was lowest in infants with Moraxella-dominant profile (14%) and highest in those with Staphylococcus-dominant profile (57%), corresponding to an odds ratio of 7.80 (95% confidence interval, 2.64-24.9; P < 0.001). In the multivariable model, the association between Staphylococcus-dominant profile and greater likelihood of bronchiolitis persisted (odds ratio for comparison with Moraxella-dominant profile, 5.16; 95% confidence interval, 1.26-22.9; P = 0.03). By contrast, Corynebacterium/Dolosigranulum-dominant profile group had low proportion of infants with bronchiolitis (17%); the likelihood of bronchiolitis in this group did not significantly differ from those with Moraxella-dominant profile in both unadjusted and adjusted analyses. In this case-control study, we identified 4 distinct nasal airway microbiota profiles in infants. Moraxella-dominant and Corynebacterium/Dolosigranulum-dominant profiles were associated with low likelihood of bronchiolitis, while Staphylococcus-dominant profile was associated with high likelihood of bronchiolitis.
Silveira, Maria J; Copeland, Laurel A; Feudtner, Chris
2006-07-01
We tested whether local cultural and social values regarding the use of health care are associated with the likelihood of home death, using variation in local rates of home births as a proxy for geographic variation in these values. For each of 351110 adult decedents in Washington state who died from 1989 through 1998, we calculated the home birth rate in each zip code during the year of death and then used multivariate regression modeling to estimate the relation between the likelihood of home death and the local rate of home births. Individuals residing in local areas with higher home birth rates had greater adjusted likelihood of dying at home (odds ratio [OR]=1.04 for each percentage point increase in home birth rate; 95% confidence interval [CI] = 1.03, 1.05). Moreover, the likelihood of dying at home increased with local wealth (OR=1.04 per $10000; 95% CI=1.02, 1.06) but decreased with local hospital bed availability (OR=0.96 per 1000 beds; 95% CI=0.95, 0.97). The likelihood of home death is associated with local rates of home births, suggesting the influence of health care use preferences.
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
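A minimal sketch of the main-terms Poisson working model described above, using statsmodels: the fitted coefficient on the randomized treatment indicator is the estimated marginal log rate ratio. The simulated data and model form are illustrative only; the true outcome model need not match the working model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
treatment = rng.integers(0, 2, n)             # randomized 0/1 assignment
baseline = rng.normal(size=n)                 # pre-randomization covariate
y = rng.poisson(np.exp(0.2 + 0.5 * treatment + 0.3 * baseline))   # simulated counts

# Main-terms Poisson working model: intercept, treatment, baseline covariate
X = sm.add_constant(np.column_stack([treatment, baseline]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
log_rate_ratio = fit.params[1]                # coefficient on the treatment indicator
print(np.exp(log_rate_ratio))                 # estimated marginal rate ratio
```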
Branching-ratio approximation for the self-exciting Hawkes process
NASA Astrophysics Data System (ADS)
Hardiman, Stephen J.; Bouchaud, Jean-Philippe
2014-12-01
We introduce a model-independent approximation for the branching ratio of Hawkes self-exciting point processes. Our estimator requires knowing only the mean and variance of the event count in a sufficiently large time window, statistics that are readily obtained from empirical data. The method we propose greatly simplifies the estimation of the Hawkes branching ratio, recently proposed as a proxy for market endogeneity and formerly estimated using numerical likelihood maximization. We employ our method to support recent theoretical and experimental results indicating that the best fitting Hawkes model to describe S&P futures price changes is in fact critical (now and in the recent past) in light of the long memory of financial market activity.
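A hedged sketch of a moment-based estimate in this spirit, assuming the large-window relation Var[N]/E[N] ≈ 1/(1 − n)² for a Hawkes process with branching ratio n; the window length and event times below are illustrative, not market data.

```python
import numpy as np

def branching_ratio_estimate(event_times, window, t_max):
    """Estimate the branching ratio n from counts in non-overlapping windows,
    using the large-window approximation Var[N] / E[N] ~ 1 / (1 - n)^2."""
    edges = np.arange(0.0, t_max + window, window)
    counts, _ = np.histogram(event_times, bins=edges)
    fano = counts.var(ddof=1) / counts.mean()
    return 1.0 - 1.0 / np.sqrt(fano) if fano > 1 else 0.0

# Illustrative event times from a homogeneous Poisson process (true branching ratio 0)
rng = np.random.default_rng(2)
events = np.sort(rng.uniform(0, 10_000, size=5_000))
print(branching_ratio_estimate(events, window=100.0, t_max=10_000.0))
```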
Human Behavior Drift Detection in a Smart Home Environment.
Masciadri, Andrea; Trofimova, Anna A; Matteucci, Matteo; Salice, Fabio
2017-01-01
The proposed system aims at supporting independent living for elderly people by providing an early indicator of changes in habits that might be relevant for the diagnosis of disease. It relies on a Hidden Markov Model to describe behaviour observed from sensor data, while a Likelihood Ratio Test quantifies the variation between different time periods.
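One way to realise such a comparison, sketched below under assumptions not spelled out in the abstract, is to fit a Gaussian HMM (here via the hmmlearn package) on a reference period and compare the log-likelihood of a later period under that model against a model refitted on the new period; the feature construction, state count and any alarm threshold are hypothetical.

```python
import numpy as np
from hmmlearn import hmm

def drift_score(reference_data, new_data, n_states=3):
    """Average per-sample log-likelihood ratio between a model refitted on the new
    period and the model fitted on the reference period. Large positive values
    suggest the reference model no longer explains recent behaviour."""
    ref_model = hmm.GaussianHMM(n_components=n_states, random_state=0).fit(reference_data)
    new_model = hmm.GaussianHMM(n_components=n_states, random_state=0).fit(new_data)
    ll_ref = ref_model.score(new_data)   # log-likelihood of new data under old habits
    ll_new = new_model.score(new_data)   # log-likelihood under the refitted model
    return (ll_new - ll_ref) / len(new_data)

# Hypothetical one-dimensional daily activity features for two one-month periods
rng = np.random.default_rng(3)
january = rng.normal(0.0, 1.0, size=(300, 1))
june = rng.normal(0.8, 1.0, size=(300, 1))   # shifted habits
print(drift_score(january, june))
```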
Empirical likelihood method for non-ignorable missing data problems.
Guan, Zhong; Qin, Jing
2017-01-01
The missing response problem is ubiquitous in survey sampling, medical, social science and epidemiology studies. It is well known that non-ignorable missingness, in which whether a response is missing depends on its own value, is the most difficult missing data problem. In the statistical literature, unlike for the ignorable missing data problem, few papers on non-ignorable missing data are available apart from fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method we obtain constrained maximum empirical likelihood estimators of the parameters in the missing probability and of the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of real AIDS trial data shows that the missingness of CD4 counts at around two years is non-ignorable and that the sample mean based on the observed data only is biased.
Shih, Weichung Joe; Li, Gang; Wang, Yining
2016-03-01
Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
NASA Astrophysics Data System (ADS)
Hall, Alex; Taylor, Andy
2017-06-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
Identifying common donors in DNA mixtures, with applications to database searches.
Slooten, K
2017-01-01
Several methods exist to compute the likelihood ratio LR(M, g) evaluating the possible contribution of a person of interest with genotype g to a mixed trace M. In this paper we generalize this LR to a likelihood ratio LR(M1, M2) involving two possibly mixed traces M1 and M2, where the question is whether there is a donor in common to both traces. If one of the traces is in fact a single genotype, this likelihood ratio reduces to the usual LR(M, g). We explain how our method is conceptually a logical consequence of the fact that LR calculations of the form LR(M, g) can be equivalently regarded as a probabilistic deconvolution of the mixture. Based on simulated data, and using a semi-continuous mixture evaluation model, we derive ROC curves of our method applied to various types of mixtures. From these data we conclude that searches for a common donor are often feasible in the sense that a very small false positive rate can be combined with a high probability of detecting a common donor if there is one. We also show how database searches comparing all traces to each other can be carried out efficiently, as illustrated by the application of the method to the mixed traces in the Dutch DNA database. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Interpretation of diagnostic data: 5. How to do it with simple maths.
1983-11-01
The use of simple maths with the likelihood ratio strategy fits in nicely with our clinical views. By making the most out of the entire range of diagnostic test results (i.e., several levels, each with its own likelihood ratio, rather than a single cut-off point and a single ratio) and by permitting us to keep track of the likelihood that a patient has the target disorder at each point along the diagnostic sequence, this strategy allows us to place patients at an extremely high or an extremely low likelihood of disease. Thus, the numbers of patients with ultimately false-positive results (who suffer the slings of labelling and the arrows of needless therapy) and of those with ultimately false-negative results (who therefore miss their chance for diagnosis and, possibly, efficacious therapy) will be dramatically reduced. The following guidelines will be useful in interpreting signs, symptoms and laboratory tests with the likelihood ratio strategy: Seek out, and demand from the clinical or laboratory experts who ought to know, the likelihood ratios for key symptoms and signs, and several levels (rather than just the positive and negative results) of diagnostic test results. Identify, when feasible, the logical sequence of diagnostic tests. Estimate the pretest probability of disease for the patient, and, using either the nomogram or the conversion formulas, apply the likelihood ratio that corresponds to the first diagnostic test result. While remembering that the resulting post-test probability or odds from the first test becomes the pretest probability or odds for the next diagnostic test, repeat the process for all the pertinent symptoms, signs and laboratory studies that pertain to the target disorder. However, these combinations may not be independent, and convergent diagnostic tests, if treated as independent, will combine to overestimate the final post-test probability of disease. You are now far more sophisticated in interpreting diagnostic tests than most of your teachers. In the last part of our series we will show you some rather complex strategies that combine diagnosis and therapy, quantify our as yet nonquantified ideas about use, and require the use of at least a hand calculator.
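The odds-and-likelihood-ratio arithmetic described above is easy to write out; a short sketch with a hypothetical pretest probability and likelihood ratios, assuming conditionally independent test results:

```python
def posttest_probability(pretest_prob, likelihood_ratios):
    """Chain several test results: pretest odds are multiplied by each likelihood
    ratio in turn, then converted back to a probability."""
    odds = pretest_prob / (1 - pretest_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical work-up: 20% pretest probability, then a sign (LR 4.2) and a lab test (LR 7.0)
print(posttest_probability(0.20, [4.2, 7.0]))   # ~0.88
```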
Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures
ERIC Educational Resources Information Center
Atar, Burcu; Kamata, Akihito
2011-01-01
The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…
Understanding the properties of diagnostic tests - Part 2: Likelihood ratios.
Ranganathan, Priya; Aggarwal, Rakesh
2018-01-01
Diagnostic tests are used to identify subjects with and without disease. In a previous article in this series, we examined some attributes of diagnostic tests - sensitivity, specificity, and predictive values. In this second article, we look at likelihood ratios, which are useful for the interpretation of diagnostic test results in everyday clinical practice.
Srkalović Imširagić, Azijada; Begić, Dražen; Šimičević, Livija; Bajić, Žarko
2017-02-01
Following childbirth, a vast number of women experience some degree of mood swings, while some experience symptoms of postpartum posttraumatic stress disorder. Using a biopsychosocial model, the primary aim of this study was to identify predictors of posttraumatic stress disorder and its symptomatology following childbirth. This observational, longitudinal study included 372 postpartum women. In order to explore biopsychosocial predictors, participants completed several questionnaires 3-5 days after childbirth: the Impact of Events Scale Revised, the Big Five Inventory, The Edinburgh Postnatal Depression Scale, breastfeeding practice and social and demographic factors. Six to nine weeks after childbirth, participants re-completed the questionnaires regarding psychiatric symptomatology and breastfeeding practice. Using a multivariate level of analysis, the predictors that increased the likelihood of postpartum posttraumatic stress disorder symptomatology at the first study phase were: emergency caesarean section (odds ratio 2.48; confidence interval 1.13-5.43) and neuroticism personality trait (odds ratio 1.12; confidence interval 1.05-1.20). The predictor that increased the likelihood of posttraumatic stress disorder symptomatology at the second study phase was the baseline Impact of Events Scale Revised score (odds ratio 12.55; confidence interval 4.06-38.81). Predictors that decreased the likelihood of symptomatology at the second study phase were life in a nuclear family (odds ratio 0.27; confidence interval 0.09-0.77) and life in a city (odds ratio 0.29; confidence interval 0.09-0.94). Biopsychosocial theory is applicable to postpartum psychiatric disorders. In addition to screening for depression amongst postpartum women, there is a need to include other postpartum psychiatric symptomatology screenings in routine practice. Copyright © 2016 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.
Vexler, Albert; Tanajian, Hovig; Hutson, Alan D
In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
Meta-analysis: accuracy of rapid tests for malaria in travelers returning from endemic areas.
Marx, Arthur; Pewsner, Daniel; Egger, Matthias; Nüesch, Reto; Bucher, Heiner C; Genton, Blaise; Hatz, Christoph; Jüni, Peter
2005-05-17
Microscopic diagnosis of malaria is unreliable outside specialized centers. Rapid tests have become available in recent years, but their accuracy has not been assessed systematically. To determine the accuracy of rapid diagnostic tests for ruling out malaria in nonimmune travelers returning from malaria-endemic areas. The authors searched MEDLINE, EMBASE, CAB Health, and CINAHL (1988 to September 2004); hand-searched conference proceedings; checked reference lists; and contacted experts and manufacturers. Diagnostic accuracy studies in nonimmune individuals with suspected malaria were included if they compared rapid tests with expert microscopic examination or polymerase chain reaction tests. Data on study and patient characteristics and results were extracted in duplicate. The main outcome was the likelihood ratio for a negative test result (negative likelihood ratio) for Plasmodium falciparum malaria. Likelihood ratios were combined by using random-effects meta-analysis, stratified by the antigen targeted (histidine-rich protein-2 [HRP-2] or parasite lactate dehydrogenase [LDH]) and by test generation. Nomograms of post-test probabilities were constructed. The authors included 21 studies and 5747 individuals. For P. falciparum, HRP-2-based tests were more accurate than parasite LDH-based tests: Negative likelihood ratios were 0.08 and 0.13, respectively (P = 0.019 for difference). Three-band HRP-2 tests had similar negative likelihood ratios but higher positive likelihood ratios compared with 2-band tests (34.7 vs. 98.5; P = 0.003). For P. vivax, negative likelihood ratios tended to be closer to 1.0 for HRP-2-based tests than for parasite LDH-based tests (0.24 vs. 0.13; P = 0.22), but analyses were based on a few heterogeneous studies. Negative likelihood ratios for the diagnosis of P. malariae or P. ovale were close to 1.0 for both types of tests. In febrile travelers returning from sub-Saharan Africa, the typical probability of P. falciparum malaria is estimated at 1.1% (95% CI, 0.6% to 1.9%) after a negative 3-band HRP-2 test result and 97% (CI, 92% to 99%) after a positive test result. Few studies evaluated 3-band HRP-2 tests. The evidence is also limited for species other than P. falciparum because of the few available studies and their more heterogeneous results. Further studies are needed to determine whether the use of rapid diagnostic tests improves outcomes in returning travelers with suspected malaria. Rapid malaria tests may be a useful diagnostic adjunct to microscopy in centers without major expertise in tropical medicine. Initial decisions on treatment initiation and choice of antimalarial drugs can be based on travel history and post-test probabilities after rapid testing. Expert microscopy is still required for species identification and confirmation.
Tests of Measurement Invariance without Subgroups: A Generalization of Classical Methods
ERIC Educational Resources Information Center
Merkle, Edgar C.; Zeileis, Achim
2013-01-01
The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement…
Using Fit Indexes to Select a Covariance Model for Longitudinal Data
ERIC Educational Resources Information Center
Liu, Siwei; Rovine, Michael J.; Molenaar, Peter C. M.
2012-01-01
This study investigated the performance of fit indexes in selecting a covariance structure for longitudinal data. Data were simulated to follow a compound symmetry, first-order autoregressive, first-order moving average, or random-coefficients covariance structure. We examined the ability of the likelihood ratio test (LRT), root mean square error…
Testing Measurement Invariance Using MIMIC: Likelihood Ratio Test with a Critical Value Adjustment
ERIC Educational Resources Information Center
Kim, Eun Sook; Yoon, Myeongsun; Lee, Taehun
2012-01-01
Multiple-indicators multiple-causes (MIMIC) modeling is often used to test a latent group mean difference while assuming the equivalence of factor loadings and intercepts over groups. However, this study demonstrated that MIMIC was insensitive to the presence of factor loading noninvariance, which implies that factor loading invariance should be…
Gowin, Joshua L; Ball, Tali M; Wittmann, Marc; Tapert, Susan F; Paulus, Martin P
2015-07-01
Nearly half of individuals with substance use disorders relapse in the year after treatment. A diagnostic tool to help clinicians make decisions regarding treatment does not exist for psychiatric conditions. Identifying individuals with high risk for relapse to substance use following abstinence has profound clinical consequences. This study aimed to develop neuroimaging as a robust tool to predict relapse. Sixty-eight methamphetamine-dependent adults (15 female) were recruited from 28-day inpatient treatment. During treatment, participants completed a functional MRI scan that examined brain activation during reward processing. Patients were followed 1 year later to assess abstinence. We examined brain activation during reward processing between relapsing and abstaining individuals and employed three random forest prediction models (clinical and personality measures, neuroimaging measures, a combined model) to generate predictions for each participant regarding their relapse likelihood. Eighteen individuals relapsed. There were significant group by reward-size interactions for neural activation in the left insula and right striatum for rewards. Abstaining individuals showed increased activation for large, risky relative to small, safe rewards, whereas relapsing individuals failed to show differential activation between reward types. All three random forest models yielded good test characteristics, such that a positive test for relapse carried a likelihood ratio of 2.63, whereas a negative test had a likelihood ratio of 0.48. These findings suggest that neuroimaging can be developed in combination with other measures as an instrument to predict relapse, advancing tools providers can use to make decisions about individualized treatment of substance use disorders. Published by Elsevier Ireland Ltd.
An ERTS-1 investigation for Lake Ontario and its basin
NASA Technical Reports Server (NTRS)
Polcyn, F. C.; Falconer, A. (Principal Investigator); Wagner, T. W.; Rebel, D. L.
1975-01-01
The author has identified the following significant results. Methods of manual, semi-automatic, and automatic (computer) data processing were evaluated, as were the requirements for spatial physiographic and limnological information. The coupling of specially processed ERTS data with simulation models of the watershed precipitation/runoff process provides potential for water resources management. Optimal and full use of the data requires a mix of data processing and analysis techniques, including single band editing, two band ratios, and multiband combinations. A combination of maximum likelihood ratio and near-IR/red band ratio processing was found to be particularly useful.
ERIC Educational Resources Information Center
Moses, Tim
2008-01-01
Nine statistical strategies for selecting equating functions in an equivalent groups design were evaluated. The strategies of interest were likelihood ratio chi-square tests, regression tests, Kolmogorov-Smirnov tests, and significance tests for equated score differences. The most accurate strategies in the study were the likelihood ratio tests…
Optimal Methods for Classification of Digitally Modulated Signals
2013-03-01
Instead of using a ratio of likelihood functions, the proposed approach uses the Kullback-Leibler (KL) divergence. Blind demodulation is used to develop classification algorithms for a wider set of signal types. Two methodologies were used: the likelihood ratio test and the Kullback-Leibler information divergence.
Early pregnancy angiogenic markers and spontaneous abortion: an Odense Child Cohort study.
Andersen, Louise B; Dechend, Ralf; Karumanchi, S Ananth; Nielsen, Jan; Joergensen, Jan S; Jensen, Tina K; Christesen, Henrik T
2016-11-01
Spontaneous abortion is the most commonly observed adverse pregnancy outcome. The angiogenic factors soluble Fms-like kinase 1 and placental growth factor are critical for normal pregnancy and may be associated to spontaneous abortion. We investigated the association between maternal serum concentrations of soluble Fms-like kinase 1 and placental growth factor, and subsequent spontaneous abortion. In the prospective observational Odense Child Cohort, 1676 pregnant women donated serum in early pregnancy, gestational week <22 (median 83 days of gestation, interquartile range 71-103). Concentrations of soluble Fms-like kinase 1 and placental growth factor were determined with novel automated assays. Spontaneous abortion was defined as complete or incomplete spontaneous abortion, missed abortion, or blighted ovum <22+0 gestational weeks, and the prevalence was 3.52% (59 cases). The time-dependent effect of maternal serum concentrations of soluble Fms-like kinase 1 and placental growth factor on subsequent late first-trimester or second-trimester spontaneous abortion (n = 59) was evaluated using a Cox proportional hazards regression model, adjusting for body mass index, parity, season of blood sampling, and age. Furthermore, receiver operating characteristics were employed to identify predictive values and optimal cut-off values. In the adjusted Cox regression analysis, increasing continuous concentrations of both soluble Fms-like kinase 1 and placental growth factor were significantly associated with a decreased hazard ratio for spontaneous abortion: soluble Fms-like kinase 1, 0.996 (95% confidence interval, 0.995-0.997), and placental growth factor, 0.89 (95% confidence interval, 0.86-0.93). When analyzed by receiver operating characteristic cut-offs, women with soluble Fms-like kinase 1 <742 pg/mL had an odds ratio for spontaneous abortion of 12.1 (95% confidence interval, 6.64-22.2), positive predictive value of 11.70%, negative predictive value of 98.90%, positive likelihood ratio of 3.64 (3.07-4.32), and negative likelihood ratio of 0.30 (0.19-0.48). For placental growth factor <19.7 pg/mL, odds ratio was 13.2 (7.09-24.4), positive predictive value was 11.80%, negative predictive value was 99.0%, positive likelihood ratio was 3.68 (3.12-4.34), and negative likelihood ratio was 0.28 (0.17-0.45). In the sensitivity analysis of 54 spontaneous abortions matched 1:4 to controls on gestational age at blood sampling, the highest area under the curve was seen for soluble Fms-like kinase 1 in prediction of first-trimester spontaneous abortion, 0.898 (0.834-0.962), and at the optimum cut-off of 725 pg/mL, negative predictive value was 51.4%, positive predictive value was 94.6%, positive likelihood ratio was 4.04 (2.57-6.35), and negative likelihood ratio was 0.22 (0.09-0.54). A strong, novel prospective association was identified between lower concentrations of soluble Fms-like kinase 1 and placental growth factor measured in early pregnancy and spontaneous abortion. A soluble Fms-like kinase 1 cut-off <742 pg/mL in maternal serum was optimal to stratify women at high vs low risk of spontaneous abortion. The cause and effect of angiogenic factor alterations in spontaneous abortions remain to be elucidated. Copyright © 2016 Elsevier Inc. All rights reserved.
[Waist-to-height ratio is an indicator of metabolic risk in children].
Valle-Leal, Jaime; Abundis-Castro, Leticia; Hernández-Escareño, Juan; Flores-Rubio, Salvador
2016-01-01
Abdominal fat, particularly visceral, is associated with a high risk of metabolic complications. The waist-to-height ratio (WHtR) is used to assess abdominal fat in individuals of all ages. To determine the ability of the waist-to-height ratio to detect metabolic risk in Mexican schoolchildren. A study was conducted on children between 6 and 12 years. Obesity was diagnosed as a body mass index (BMI) ≥ 85th percentile, and a WHtR ≥0.5 was considered abdominal obesity. Blood levels of glucose, cholesterol and triglycerides were measured. The sensitivity, specificity, positive and negative predictive values, area under the curve, positive likelihood ratio, and negative likelihood ratio of the WHtR and BMI were calculated in order to identify metabolic alterations. WHtR and BMI were compared to determine which had the best diagnostic efficiency. Of the 223 children included in the study, 51 had hypertriglyceridaemia, 27 had hypercholesterolaemia, and 9 had hyperglycaemia. On comparing the diagnostic efficiency of WHtR with that of BMI, there was a sensitivity of 100% vs. 56% for hyperglycaemia, 93% vs. 70% for hypercholesterolaemia, and 76% vs. 59% for hypertriglyceridaemia. The specificity, negative predictive value, positive predictive value, positive likelihood ratio, negative likelihood ratio, and area under the curve were also higher for WHtR. The WHtR is a more efficient indicator than BMI for identifying metabolic risk in Mexican school-age children. Copyright © 2015 Sociedad Chilena de Pediatría. Publicado por Elsevier España, S.L.U. All rights reserved.
Survivorship analysis when cure is a possibility: a Monte Carlo study.
Goldman, A I
1984-01-01
Parametric survivorship analysis of clinical trials commonly involves the assumption of a hazard function that is constant with time. When the empirical curve obviously levels off, one can modify the hazard function model by use of a Gompertz or Weibull distribution with hazard decreasing over time. Some cancer treatments are thought to cure some patients within a short time of initiation. Then, instead of all patients having the same hazard, decreasing over time, a biologically more appropriate model assumes that an unknown proportion (1 - pi) have a constant high risk whereas the remaining proportion (pi) have essentially no risk. This paper discusses the maximum likelihood estimation of pi and the power curves of the likelihood ratio test. Monte Carlo studies provide results for a variety of simulated trials; empirical data illustrate the methods.
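A minimal sketch of the kind of cure-fraction model described above, with settings of my own choosing rather than the paper's simulation design: a proportion pi is cured, the remainder have a constant exponential hazard, and the likelihood ratio test compares this mixture against the no-cure exponential model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Simulate one trial: n patients, true cure fraction 0.3, hazard 0.5, censoring at t = 5
n, pi_true, lam_true, t_censor = 200, 0.3, 0.5, 5.0
cured = rng.random(n) < pi_true
t_event = np.where(cured, np.inf, rng.exponential(1 / lam_true, n))
time = np.minimum(t_event, t_censor)
event = (t_event <= t_censor).astype(float)   # 1 = observed failure, 0 = censored

def neg_loglik(params):
    pi, lam = params
    ll_event = np.log1p(-pi) + np.log(lam) - lam * time       # density for failures
    ll_cens = np.log(pi + (1 - pi) * np.exp(-lam * time))     # survival for censored
    return -np.sum(event * ll_event + (1 - event) * ll_cens)

fit_cure = minimize(neg_loglik, x0=[0.2, 1.0], bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
fit_exp = minimize(lambda p: neg_loglik([1e-12, p[0]]), x0=[1.0], bounds=[(1e-6, None)])

lrt = 2 * (fit_exp.fun - fit_cure.fun)
# pi = 0 lies on the boundary, so a 50:50 mixture of chi2(0) and chi2(1) is the usual reference
p_value = 0.5 * chi2.sf(lrt, 1)
print(fit_cure.x, lrt, p_value)
```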
Al-Radi, Osman O; Harrell, Frank E; Caldarone, Christopher A; McCrindle, Brian W; Jacobs, Jeffrey P; Williams, M Gail; Van Arsdell, Glen S; Williams, William G
2007-04-01
The Aristotle Basic Complexity score and the Risk Adjustment in Congenital Heart Surgery system were developed by consensus to compare outcomes of congenital cardiac surgery. We compared the predictive value of the 2 systems. Of all index congenital cardiac operations at our institution from 1982 to 2004 (n = 13,675), we were able to assign an Aristotle Basic Complexity score, a Risk Adjustment in Congenital Heart Surgery score, and both scores to 13,138 (96%), 11,533 (84%), and 11,438 (84%) operations, respectively. Models of in-hospital mortality and length of stay were generated for Aristotle Basic Complexity and Risk Adjustment in Congenital Heart Surgery using an identical data set in which both Aristotle Basic Complexity and Risk Adjustment in Congenital Heart Surgery scores were assigned. The likelihood ratio test for nested models and paired concordance statistics were used. After adjustment for year of operation, the odds ratios for Aristotle Basic Complexity score 3 versus 6, 9 versus 6, 12 versus 6, and 15 versus 6 were 0.29, 2.22, 7.62, and 26.54 (P < .0001). Similarly, odds ratios for Risk Adjustment in Congenital Heart Surgery categories 1 versus 2, 3 versus 2, 4 versus 2, and 5/6 versus 2 were 0.23, 1.98, 5.80, and 20.71 (P < .0001). Risk Adjustment in Congenital Heart Surgery added significant predictive value over Aristotle Basic Complexity (likelihood ratio chi2 = 162, P < .0001), whereas Aristotle Basic Complexity contributed much less predictive value over Risk Adjustment in Congenital Heart Surgery (likelihood ratio chi2 = 13.4, P = .009). Neither system fully adjusted for the child's age. The Risk Adjustment in Congenital Heart Surgery scores were more concordant with length of stay compared with Aristotle Basic Complexity scores (P < .0001). The predictive value of Risk Adjustment in Congenital Heart Surgery is higher than that of Aristotle Basic Complexity. The use of Aristotle Basic Complexity or Risk Adjustment in Congenital Heart Surgery as risk stratification and trending tools to monitor outcomes over time and to guide risk-adjusted comparisons may be valuable.
Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn
2013-03-06
Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
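A minimal sketch of the stepping-stone idea on a toy conjugate normal model where the exact marginal likelihood is available for comparison; the model, prior, rung schedule and sample sizes are all my own illustrative choices, not the phylogenetic setting of the paper.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)

# Data y_i ~ N(mu, sigma^2) with sigma known; prior mu ~ N(0, tau^2)
sigma, tau = 1.0, 2.0
y = rng.normal(0.7, sigma, size=25)
n, S, SS = len(y), y.sum(), (y ** 2).sum()

def loglik(mu):
    mu = np.atleast_1d(mu)
    return (-0.5 * n * np.log(2 * np.pi * sigma ** 2)
            - 0.5 * ((y[None, :] - mu[:, None]) ** 2).sum(axis=1) / sigma ** 2)

# Stepping stone: log Z = sum_k log E_{beta_{k-1}}[ L(mu)^(beta_k - beta_{k-1}) ],
# with each expectation taken under the power posterior at beta_{k-1} (here conjugate normal).
betas = np.linspace(0.0, 1.0, 33) ** 3     # rungs concentrated near the prior
m_samples = 5000
log_z_ss = 0.0
for b_prev, b_next in zip(betas[:-1], betas[1:]):
    prec = 1 / tau ** 2 + b_prev * n / sigma ** 2
    mu_draws = rng.normal((b_prev * S / sigma ** 2) / prec, np.sqrt(1 / prec), m_samples)
    log_z_ss += logsumexp((b_next - b_prev) * loglik(mu_draws)) - np.log(m_samples)

# Exact log marginal likelihood for this conjugate model, for comparison
A = 1 / tau ** 2 + n / sigma ** 2
B = S / sigma ** 2
log_z_exact = (-0.5 * np.log(2 * np.pi * tau ** 2) - 0.5 * n * np.log(2 * np.pi * sigma ** 2)
               + 0.5 * np.log(2 * np.pi / A) + B ** 2 / (2 * A) - SS / (2 * sigma ** 2))
print(log_z_ss, log_z_exact)
```

The difference of two such log marginal likelihoods gives the log Bayes factor; the model-switch variants discussed in the abstract estimate that difference along a single path instead of estimating each evidence separately.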
A method for interactive satellite failure diagnosis: Towards a connectionist solution
NASA Technical Reports Server (NTRS)
Bourret, P.; Reggia, James A.
1989-01-01
Various kinds of processes which allow one to make a diagnosis are analyzed. The analysis then focuses on one of these processes used for satellite failure diagnosis. This process consists of sending the satellite instructions about system status alterations: to mask the effects of one possible component failure or to look for additional abnormal measures. A formal model of this process is given. This model is an extension of a previously defined connectionist model which allows computation of ratios between the likelihoods of observed manifestations according to various diagnostic hypotheses. The expected mean value of these likelihood measures for each possible status of the satellite can be computed in a similar way. Therefore, it is possible to select the most appropriate status according to three different purposes: to confirm a hypothesis, to eliminate a hypothesis, or to choose between two hypotheses. Finally, a first connectionist schema of computation of these expected mean values is given.
Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley
2013-12-15
The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
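A brief sketch of the plain SCCS conditional likelihood being corrected in the study above, reduced to my own simplified setting (one event per case, a single risk window, constant baseline rate, no age adjustment, and no Firth penalty); all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

n_cases, T, risk_len, irr_true = 40, 365.0, 42.0, 3.0
# Given exactly one event, P(event falls in the risk window) = rho*r / (rho*r + (T - r)).
p_true = irr_true * risk_len / (irr_true * risk_len + (T - risk_len))
in_risk = rng.random(n_cases) < p_true   # indicator: event occurred inside the risk window

def neg_cond_loglik(log_rho):
    rho = np.exp(log_rho)
    p = rho * risk_len / (rho * risk_len + (T - risk_len))
    return -(np.sum(in_risk) * np.log(p) + np.sum(~in_risk) * np.log(1 - p))

fit = minimize_scalar(neg_cond_loglik, bounds=(-5, 5), method="bounded")
# With few cases (or a short risk window) this plain MLE is the estimate the abstract
# describes as potentially badly biased, which motivates the penalized corrections.
print("estimated incidence rate ratio:", np.exp(fit.x))
```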
ERIC Educational Resources Information Center
Yuan, Ke-Hai
2008-01-01
In the literature of mean and covariance structure analysis, noncentral chi-square distribution is commonly used to describe the behavior of the likelihood ratio (LR) statistic under alternative hypothesis. Due to the inaccessibility of the rather technical literature for the distribution of the LR statistic, it is widely believed that the…
Hirose, H
1997-01-01
This paper proposes a new treatment for electrical insulation degradation. Some types of insulation which have been used under various circumstances are considered to degrade at various rates in accordance with their stress circumstances. The cross-linked polyethylene (XLPE) insulated cables inspected by major Japanese electric companies clearly indicate such phenomena. By assuming that the inspected specimen is sampled from one of the clustered groups, a mixed degradation model can be constructed. Since the degradation of the insulation under common circumstances is considered to follow a Weibull distribution, a mixture model and a Weibull power law can be combined. This is called the mixture Weibull power law model. By applying maximum likelihood estimation of the newly proposed model to Japanese 22 and 33 kV insulation class cables, the cables are clustered into a certain number of groups using the AIC and the generalized likelihood ratio test method. The reliability of the cables at specified years is assessed.
Church, Sheri A; Livingstone, Kevin; Lai, Zhao; Kozik, Alexander; Knapp, Steven J; Michelmore, Richard W; Rieseberg, Loren H
2007-02-01
Using likelihood-based variable selection models, we determined if positive selection was acting on 523 EST sequence pairs from two lineages of sunflower and lettuce. Variable rate models are generally not used for comparisons of sequence pairs due to the limited information and the inaccuracy of estimates of specific substitution rates. However, previous studies have shown that the likelihood ratio test (LRT) is reliable for detecting positive selection, even with low numbers of sequences. These analyses identified 56 genes that show a signature of selection, of which 75% were not identified by simpler models that average selection across codons. Subsequent mapping studies in sunflower show four of five of the positively selected genes identified by these methods mapped to domestication QTLs. We discuss the validity and limitations of using variable rate models for comparisons of sequence pairs, as well as the limitations of using ESTs for identification of positively selected genes.
Methane photochemistry and methane production on Neptune
NASA Technical Reports Server (NTRS)
Romani, P. N.; Atreya, S. K.
1988-01-01
The Neptune stratosphere's methane photochemistry is presently studied by means of a numerical model in which the observed mixing ratio of methane prompts photolysis near the CH4 homopause. Haze generation by methane photochemistry has its basis in the formation of hydrocarbon ices and polyacetylenes; the hazes can furnish the requisite aerosol haze at the appropriate pressure levels required by observations of Neptune in the visible and near-IR. Comparisons of model predictions with Uranus data indicate a lower ratio of polyacetylene production to hydrocarbon ice, as well as a lower likelihood of UV postprocessing of the acetylene ice to polymers on Neptune, compared to Uranus.
Analysis of Multiple Contingency Tables by Exact Conditional Tests for Zero Partial Association.
ERIC Educational Resources Information Center
Kreiner, Svend
The tests for zero partial association in a multiple contingency table have gained new importance with the introduction of graphical models. It is shown how these may be performed as exact conditional tests, using as test criteria either the ordinary likelihood ratio, the standard chi-squared statistic, or any other appropriate statistics. A…
Recreating a functional ancestral archosaur visual pigment.
Chang, Belinda S W; Jönsson, Karolina; Kazmi, Manija A; Donoghue, Michael J; Sakmar, Thomas P
2002-09-01
The ancestors of the archosaurs, a major branch of the diapsid reptiles, originated more than 240 MYA near the dawn of the Triassic Period. We used maximum likelihood phylogenetic ancestral reconstruction methods and explored different models of evolution for inferring the amino acid sequence of a putative ancestral archosaur visual pigment. Three different types of maximum likelihood models were used: nucleotide-based, amino acid-based, and codon-based models. Where possible, within each type of model, likelihood ratio tests were used to determine which model best fit the data. Ancestral reconstructions of the ancestral archosaur node using the best-fitting models of each type were found to be in agreement, except for three amino acid residues at which one reconstruction differed from the other two. To determine if these ancestral pigments would be functionally active, the corresponding genes were chemically synthesized and then expressed in a mammalian cell line in tissue culture. The expressed artificial genes were all found to bind to 11-cis-retinal to yield stable photoactive pigments with lambda(max) values of about 508 nm, which is slightly redshifted relative to that of extant vertebrate pigments. The ancestral archosaur pigments also activated the retinal G protein transducin, as measured in a fluorescence assay. Our results show that ancestral genes from ancient organisms can be reconstructed de novo and tested for function using a combination of phylogenetic and biochemical methods.
Diagnosis and Management of Deployed Adults with Chest Pain.
1983-01-31
sequential Bayesian model using the likelihood ratios that he supplied to the United States Navy. Because the results of his study were not provided...
Model prediction versus truth with the EKG:
A. All BWH patients (n = 899)
              Model MI   Model no MI   Total
Truth MI           179            20     199
Truth no MI        248           452     700
Total              427           472     899
Sensitivity = 179/199 = 0.90; Specificity = ...
C. BWH males, < 60 years old (n = 250)
              Model MI   Model no MI   Total
Truth MI            49             4      53
Truth no MI         59           138     197
Total              108           142     250
Sensitivity = 49/53 = 0.92; Specificity = ...
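A short sketch computing the usual diagnostic summaries, including the likelihood ratios that feed a sequential Bayesian model, from the 2x2 counts recovered above for all BWH patients (n = 899); the specificity and predictive values follow directly from those counts.

```python
tp, fn = 179, 20    # truth MI: model MI / model no MI
fp, tn = 248, 452   # truth no MI: model MI / model no MI

sensitivity = tp / (tp + fn)              # 179/199, about 0.90
specificity = tn / (fp + tn)              # 452/700, about 0.65
lr_positive = sensitivity / (1 - specificity)
lr_negative = (1 - sensitivity) / specificity
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(sensitivity, specificity, lr_positive, lr_negative, ppv, npv)
```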
Horsch, Karla; Pesce, Lorenzo L.; Giger, Maryellen L.; Metz, Charles E.; Jiang, Yulei
2012-01-01
Purpose: The authors developed scaling methods that monotonically transform the output of one classifier to the “scale” of another. Such transformations affect the distribution of classifier output while leaving the ROC curve unchanged. In particular, they investigated transformations between radiologists and computer classifiers, with the goal of addressing the problem of comparing and interpreting case-specific values of output from two classifiers. Methods: Using both simulated and radiologists’ rating data of breast imaging cases, the authors investigated a likelihood-ratio-scaling transformation, based on “matching” classifier likelihood ratios. For comparison, three other scaling transformations were investigated that were based on matching classifier true positive fraction, false positive fraction, or cumulative distribution function, respectively. The authors explored modifying the computer output to reflect the scale of the radiologist, as well as modifying the radiologist’s ratings to reflect the scale of the computer. They also evaluated how dataset size affects the transformations. Results: When ROC curves of two classifiers differed substantially, the four transformations were found to be quite different. The likelihood-ratio scaling transformation was found to vary widely from radiologist to radiologist. Similar results were found for the other transformations. Our simulations explored the effect of database sizes on the accuracy of the estimation of our scaling transformations. Conclusions: The likelihood-ratio-scaling transformation that the authors have developed and evaluated was shown to be capable of transforming computer and radiologist outputs to a common scale reliably, thereby allowing the comparison of the computer and radiologist outputs on the basis of a clinically relevant statistic. PMID:22559651
Wang, Chi-Chuan; Lin, Chia-Hui; Lin, Kuan-Yin; Chuang, Yu-Chung; Sheng, Wang-Huei
2016-01-01
Community-acquired pneumonia (CAP) is a common but potentially life-threatening condition, but limited information exists on the effectiveness of fluoroquinolones compared to β-lactams in outpatient settings. We aimed to compare the effectiveness and outcomes of penicillins versus respiratory fluoroquinolones for CAP at outpatient clinics. This was a claims-based retrospective cohort study. Patients aged 20 years or older with at least 1 new pneumonia treatment episode were included, and the index penicillin or respiratory fluoroquinolone therapies for a pneumonia episode were at least 5 days in duration. The 2 groups were matched by propensity scores. Cox proportional hazard models were used to compare the rates of hospitalizations/emergency service visits and 30-day mortality. A logistic model was used to compare the likelihood of treatment failure between the 2 groups. After propensity score matching, 2622 matched pairs were included in the final model. The likelihood of treatment failure of fluoroquinolone-based therapy was lower than that of penicillin-based therapy (adjusted odds ratio [AOR], 0.88; 95% confidence interval [95%CI], 0.77–0.99), but no differences were found in hospitalization/emergency service (ES) visits (adjusted hazard ratio [HR], 1.27; 95% CI, 0.92–1.74) and 30-day mortality (adjusted HR, 0.69; 95% CI, 0.30–1.62) between the 2 groups. The likelihood of treatment failure of fluoroquinolone-based therapy was lower than that of penicillin-based therapy for CAP on an outpatient clinic basis. However, this effect may be marginal. Further investigation into the comparative effectiveness of these 2 treatment options is warranted. PMID:26871827
The effect of mis-specification on mean and selection between the Weibull and lognormal models
NASA Astrophysics Data System (ADS)
Jia, Xiang; Nadarajah, Saralees; Guo, Bo
2018-02-01
The lognormal and Weibull models are commonly used to analyse data. Although selection procedures have been extensively studied, it is possible that the lognormal model could be selected when the true model is Weibull or vice versa. As the mean is important in applications, we focus on the effect of mis-specification on the mean. The effect on the lognormal mean is first considered when the lognormal sample is wrongly fitted by a Weibull model. The maximum likelihood estimate (MLE) and quasi-MLE (QMLE) of the lognormal mean are obtained based on the lognormal and Weibull models. Then, the impact is evaluated by computing the ratio of biases and the ratio of mean squared errors (MSEs) between the MLE and QMLE. For completeness, the theoretical results are demonstrated by simulation studies. Next, the effect of the reverse mis-specification on the Weibull mean is discussed. It is found that the ratio of biases and the ratio of MSEs are independent of the location and scale parameters of the lognormal and Weibull models. The influence could be ignored if some special conditions hold. Finally, a model selection method is proposed by comparing the ratios concerning biases and MSEs. We also present a published data set to illustrate the study.
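A small simulation sketch in the spirit of the abstract, with parameter values and sample sizes of my own choosing: the data are truly lognormal, the mean is estimated from the correct lognormal fit (MLE) and from a wrongly assumed Weibull fit (QMLE), and the resulting errors are compared.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.special import gamma

rng = np.random.default_rng(3)
mu, sig, n, reps = 1.0, 0.5, 50, 500
true_mean = np.exp(mu + sig ** 2 / 2)          # lognormal mean

mle, qmle = [], []
for _ in range(reps):
    x = rng.lognormal(mu, sig, n)
    # correct model: lognormal MLE of the mean
    m, s = np.log(x).mean(), np.log(x).std()
    mle.append(np.exp(m + s ** 2 / 2))
    # wrong model: Weibull fit, mean = scale * Gamma(1 + 1/shape)
    shape, loc, scale = weibull_min.fit(x, floc=0)
    qmle.append(scale * gamma(1 + 1 / shape))

mle, qmle = np.array(mle), np.array(qmle)
print("bias (MLE, QMLE):", mle.mean() - true_mean, qmle.mean() - true_mean)
print("MSE ratio (QMLE/MLE):",
      np.mean((qmle - true_mean) ** 2) / np.mean((mle - true_mean) ** 2))
```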
Bayesian model comparison and parameter inference in systems biology using nested sampling.
Pullen, Nick; Morris, Richard J
2014-01-01
Inferring parameters for models of biological processes is a current challenge in systems biology, as is the related problem of comparing competing models that explain the data. In this work we apply Skilling's nested sampling to address both of these problems. Nested sampling is a Bayesian method for exploring parameter space that transforms a multi-dimensional integral to a 1D integration over likelihood space. This approach focuses on the computation of the marginal likelihood or evidence. The ratio of evidences of different models leads to the Bayes factor, which can be used for model comparison. We demonstrate how nested sampling can be used to reverse-engineer a system's behaviour whilst accounting for the uncertainty in the results. The effect of missing initial conditions of the variables as well as unknown parameters is investigated. We show how the evidence and the model ranking can change as a function of the available data. Furthermore, the addition of data from extra variables of the system can deliver more information for model comparison than increasing the data from one variable, thus providing a basis for experimental design.
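A toy, from-scratch nested-sampling sketch, not the authors' implementation: one parameter, a uniform prior, a Gaussian likelihood, and new live points drawn by simple rejection. The evidence of a competing model would be estimated the same way, and the Bayes factor is the ratio (difference of log evidences). All settings are illustrative and the estimate is only approximate.

```python
import numpy as np

rng = np.random.default_rng(4)
lo, hi = -5.0, 5.0                         # uniform prior on [lo, hi]

def loglike(theta):
    return -0.5 * ((theta - 1.0) / 0.1) ** 2 - 0.5 * np.log(2 * np.pi * 0.1 ** 2)

n_live, n_iter = 200, 1600
live = rng.uniform(lo, hi, n_live)
live_ll = loglike(live)

log_z = -np.inf
for i in range(n_iter):
    worst = np.argmin(live_ll)
    # prior-volume shell between X_i = exp(-i/n_live) and X_{i+1}
    log_width = -i / n_live + np.log(1.0 - np.exp(-1.0 / n_live))
    log_z = np.logaddexp(log_z, log_width + live_ll[worst])
    # replace the worst live point by a prior draw with higher likelihood (rejection)
    threshold = live_ll[worst]
    while True:
        cand = rng.uniform(lo, hi)
        if loglike(cand) > threshold:
            live[worst], live_ll[worst] = cand, loglike(cand)
            break

# add the (approximate) contribution of the remaining live points
log_z = np.logaddexp(log_z, -n_iter / n_live - np.log(n_live)
                     + np.logaddexp.reduce(live_ll))
# the Gaussian integrates to ~1 inside the prior, so log Z should be close to log(1/(hi-lo))
print(log_z, "target ~", np.log(1.0 / (hi - lo)))
```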
Parametric Model Based On Imputations Techniques for Partly Interval Censored Data
NASA Astrophysics Data System (ADS)
Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah
2017-12-01
The term ‘survival analysis’ has been used in a broad sense to describe a collection of statistical procedures for data analysis. In this case, the outcome variable of interest is the time until an event occurs, where the time to failure of a specific experimental unit might be censored: right, left, interval, or partly interval censored (PIC) data. In this paper, the analysis of this model was conducted based on a parametric Cox model for PIC data. Moreover, several imputation techniques were used, namely: midpoint, left and right point, random, mean, and median. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, such as the Turnbull and Cox models, based on clinical trial data (breast cancer data), which showed the validity of the proposed model. Results for the data set indicated that the parametric Cox model was superior in terms of estimation of survival functions, likelihood ratio tests, and their P-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median imputations showed better results with respect to estimation of the survival function.
Zero-inflated Conway-Maxwell Poisson Distribution to Analyze Discrete Data.
Sim, Shin Zhu; Gupta, Ramesh C; Ong, Seng Huat
2018-01-09
In this paper, we study the zero-inflated Conway-Maxwell Poisson (ZICMP) distribution and develop a regression model. Score and likelihood ratio tests are also implemented for testing the inflation/deflation parameter. Simulation studies are carried out to examine the performance of these tests. A data example is presented to illustrate the concepts. In this example, the proposed model is compared to the well-known zero-inflated Poisson (ZIP) and the zero-inflated generalized Poisson (ZIGP) regression models. It is shown that the fit by ZICMP is comparable to or better than that of these models.
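A brief sketch of the likelihood ratio test for the inflation parameter, written for the simpler and well-known ZIP model rather than ZICMP, with simulated data and starting values of my own choosing.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln
from scipy.stats import chi2

rng = np.random.default_rng(5)
y = np.where(rng.random(500) < 0.25, 0, rng.poisson(2.0, 500))  # ZIP(pi=0.25, lambda=2)

def zip_negll(params):
    pi, lam = params
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))                     # P(Y = 0)
    ll_pos = np.log(1 - pi) + y * np.log(lam) - lam - gammaln(y + 1)   # P(Y = y), y > 0
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

def pois_negll(lam):
    return -np.sum(y * np.log(lam) - lam - gammaln(y + 1))

fit_zip = minimize(zip_negll, x0=[0.1, y.mean()],
                   bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
lam_hat = y.mean()                                  # Poisson MLE
lrt = 2 * (pois_negll(lam_hat) - fit_zip.fun)
# pi = 0 is a boundary value, so a 50:50 mixture of chi2(0) and chi2(1) is the reference
print(lrt, 0.5 * chi2.sf(lrt, 1))
```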
Cui, Jiangyu; Zhou, Yumin; Tian, Jia; Wang, Xinwang; Zheng, Jingping; Zhong, Nanshan; Ran, Pixin
2012-12-01
COPD is often underdiagnosed in primary care settings where spirometry is unavailable. This study aimed to develop a simple, economical and applicable model for COPD screening in those settings. First we established a discriminant function model based on Bayes' rule by stepwise discriminant analysis, using the data from 243 COPD patients and 112 non-COPD subjects from our COPD survey in urban and rural communities and local primary care settings in Guangdong Province, China. We then used this model to discriminate COPD in an additional 150 subjects (50 non-COPD and 100 COPD) who had been recruited by the same methods as those used to establish the model. All participants completed pre- and post-bronchodilator spirometry and questionnaires. COPD was diagnosed according to the Global Initiative for Chronic Obstructive Lung Disease criteria. The sensitivity and specificity of the discriminant function model were assessed. The established discriminant function model included nine variables: age, gender, smoking index, body mass index, occupational exposure, living environment, wheezing, cough and dyspnoea. The sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, accuracy and error rate of the function model in discriminating COPD were 89.00%, 82.00%, 4.94, 0.13, 86.66% and 13.34%, respectively. The accuracy and kappa value of the function model in predicting COPD stages were 70% and 0.61 (95% CI, 0.50 to 0.71). This discriminant function model may be used for COPD screening in primary care settings in China as an alternative option instead of spirometry.
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
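A minimal SPRT sketch using toy Gaussian observations rather than the orbital-conjunction likelihood of the paper: the log likelihood ratio is accumulated observation by observation and compared with Wald's thresholds derived from the target error rates. Means, variance and error rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
alpha, beta = 0.01, 0.01                          # false-alarm and miss probabilities
upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))

mu0, mu1, sigma = 0.0, 1.0, 1.0                   # H0 vs H1 means, common sigma
x_stream = rng.normal(mu1, sigma, 1000)           # data actually generated under H1

llr = 0.0
for n, x in enumerate(x_stream, start=1):
    # per-observation log likelihood ratio for two Gaussians with equal variance
    llr += (x * (mu1 - mu0) - 0.5 * (mu1 ** 2 - mu0 ** 2)) / sigma ** 2
    if llr >= upper:
        print(f"accept H1 after {n} observations"); break
    if llr <= lower:
        print(f"accept H0 after {n} observations"); break
```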
Physician Bayesian updating from personal beliefs about the base rate and likelihood ratio.
Rottman, Benjamin Margolin
2017-02-01
Whether humans can accurately make decisions in line with Bayes' rule has been one of the most important yet contentious topics in cognitive psychology. Though a number of paradigms have been used for studying Bayesian updating, rarely have subjects been allowed to use their own preexisting beliefs about the prior and the likelihood. A study is reported in which physicians judged the posttest probability of a diagnosis for a patient vignette after receiving a test result, and the physicians' posttest judgments were compared to the normative posttest calculated from their own beliefs in the sensitivity and false positive rate of the test (likelihood ratio) and prior probability of the diagnosis. On the one hand, the posttest judgments were strongly related to the physicians' beliefs about both the prior probability as well as the likelihood ratio, and the priors were used considerably more strongly than in previous research. On the other hand, both the prior and the likelihoods were still not used quite as much as they should have been, and there was evidence of other nonnormative aspects to the updating, such as updating independent of the likelihood beliefs. By focusing on how physicians use their own prior beliefs for Bayesian updating, this study provides insight into how well experts perform probabilistic inference in settings in which they rely upon their own prior beliefs rather than experimenter-provided cues. It suggests that there is reason to be optimistic about experts' abilities, but that there is still considerable need for improvement.
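A short sketch of the normative calculation the physicians' judgments were compared against: convert the pretest (prior) probability to odds, multiply by the likelihood ratio implied by the test's sensitivity and false positive rate, and convert back to a probability. The numbers below are illustrative, not from the study.

```python
def posttest_probability(pretest_prob, sensitivity, false_positive_rate):
    lr_positive = sensitivity / false_positive_rate
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr_positive
    return posttest_odds / (1 + posttest_odds)

# e.g. pretest probability 10%, sensitivity 90%, false positive rate 9% (LR+ = 10)
print(posttest_probability(0.10, 0.90, 0.09))   # about 0.53
```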
Pharmacokinetic Modeling of Intranasal Scopolamine in Plasma Saliva and Urine
NASA Technical Reports Server (NTRS)
Wu, L.; Tam, V. H.; Chow, D. S. L.; Putcha, L.
2015-01-01
An intranasal gel dosage formulation of scopolamine (INSCOP) was developed for the treatment of Space Motion Sickness (SMS). The bioavailability and pharmacokinetics (PK) were evaluated under IND (Investigational New Drug) guidelines. The aim of the project was to develop a PK model that can predict the relationships among plasma, saliva and urinary scopolamine concentrations using data collected from the IND clinical trial protocol with INSCOP. Twelve healthy human subjects were administered three dose levels (0.1, 0.2 and 0.4 mg) of INSCOP. Serial blood, saliva and urine samples were collected between 5 min and 24 h after dosing, and scopolamine concentrations were measured by using a validated LC-MS-MS assay. PK compartmental models, using actual dosing and sampling times, were established using Phoenix (version 1.2). Model selection was based on a likelihood ratio test on the difference in -2LL (minus twice the log-likelihood) and comparison of the quality-of-fit plots. Predictable correlations among scopolamine concentrations in the plasma, saliva and urine compartments were established, and for the first time the model satisfactorily predicted the population and individual PK of INSCOP in plasma, saliva and urine. The model can be utilized to predict the INSCOP plasma concentration from saliva and urine data, and it will be useful for monitoring the PK of scopolamine in space and other remote environments using non-invasive sampling of saliva and/or urine.
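A tiny sketch of the nested-model comparison described above: the difference in -2LL between two nested models is referred to a chi-square distribution with degrees of freedom equal to the number of extra parameters. The -2LL values and df below are made up purely for illustration.

```python
from scipy.stats import chi2

neg2ll_reduced, neg2ll_full, extra_params = 412.6, 404.1, 2
delta = neg2ll_reduced - neg2ll_full        # equals 2 * (logLik_full - logLik_reduced)
print(delta, chi2.sf(delta, extra_params))  # prefer the richer model if p is small
```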
ERIC Educational Resources Information Center
Immekus, Jason C.; Maller, Susan J.
2009-01-01
The Kaufman Adolescent and Adult Intelligence Test (KAIT[TM]) is an individually administered test of intelligence for individuals ranging in age from 11 to 85+ years. The item response theory-likelihood ratio procedure, based on the two-parameter logistic model, was used to detect differential item functioning (DIF) in the KAIT across males and…
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.
1986-01-01
The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.
Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu
2015-06-01
Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of a SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.
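A schematic sketch of the correction principle only, not the paper's specific empirical formulation: rescale the statistic so that its null mean matches the degrees of freedom, with the null mean estimated from statistics simulated (for example, by bootstrap) under the fitted model. The function name and numbers are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def empirically_corrected_statistic(t_observed, t_null_simulations, df):
    scale = df / np.mean(t_null_simulations)   # empirical Bartlett-type factor
    t_corrected = scale * t_observed
    return t_corrected, chi2.sf(t_corrected, df)

# Illustration with made-up values: df = 35, simulated null statistics inflated by ~20%
rng = np.random.default_rng(7)
t_sims = 1.2 * rng.chisquare(35, size=500)
print(empirically_corrected_statistic(52.0, t_sims, 35))
```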
De March, I; Sironi, E; Taroni, F
2016-09-01
Analysis of marks recovered from different crime scenes can be useful to detect a linkage between criminal cases, even though a putative source for the recovered traces is not available. This particular circumstance is often encountered in the early stage of investigations and thus, the evaluation of evidence association may provide useful information for the investigators. This association is evaluated here from a probabilistic point of view: a likelihood ratio based approach is suggested in order to quantify the strength of the evidence of trace association in the light of two mutually exclusive propositions, namely that the n traces come from a common source or from an unspecified number of sources. To deal with this kind of problem, probabilistic graphical models are used, in the form of Bayesian networks and object-oriented Bayesian networks, allowing users to intuitively handle the uncertainty related to the inferential problem. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition
Islam, Md. Rabiul
2014-01-01
The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method on the Multiple Classifier Selection technique has been applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676
Urabe, Naohisa; Sakamoto, Susumu; Sano, Go; Suzuki, Junko; Hebisawa, Akira; Nakamura, Yasuhiko; Koyama, Kazuya; Ishii, Yoshikazu; Tateda, Kazuhiro; Homma, Sakae
2017-06-01
We evaluated the usefulness of an Aspergillus galactomannan (GM) test, a β-d-glucan (βDG) test, and two different Aspergillus PCR assays of bronchoalveolar lavage fluid (BALF) samples for the diagnosis of chronic pulmonary aspergillosis (CPA). BALF samples from 30 patients with and 120 patients without CPA were collected. We calculated the sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio for each test individually and in combination with other tests. The optical density index values, as determined by receiver operating characteristic analysis, for the diagnosis of CPA were 0.5 and 100 for GM and βDG testing of BALF, respectively. The sensitivity and specificity of the GM test, βDG test, and PCR assays 1 and 2 were 77.8% and 90.0%, 77.8% and 72.5%, 86.7% and 84.2%, and 66.7% and 94.2%, respectively. A comparison of the PCR assays showed that PCR assay 1 had a better sensitivity, a better negative predictive value, and a better negative likelihood ratio and PCR assay 2 had a better specificity, a better positive predictive value, and a better positive likelihood ratio. The combination of the GM and βDG tests had the highest diagnostic odds ratio. The combination of the GM and βDG tests on BALF was more useful than any single test for diagnosing CPA. Copyright © 2017 American Society for Microbiology.
Grandmothering life histories and human pair bonding.
Coxworth, James E; Kim, Peter S; McQueen, John S; Hawkes, Kristen
2015-09-22
The evolution of distinctively human life history and social organization is generally attributed to paternal provisioning based on pair bonds. Here we develop an alternative argument that connects the evolution of human pair bonds to the male-biased mating sex ratios that accompanied the evolution of human life history. We simulate an agent-based model of the grandmother hypothesis, compare simulated sex ratios to data on great apes and human hunter-gatherers, and note associations between a preponderance of males and mate guarding across taxa. Then we explore a recent model that highlights the importance of mating sex ratios for differences between birds and mammals and conclude that lessons for human evolution cannot ignore mammalian reproductive constraints. In contradiction to our claim that male-biased sex ratios are characteristically human, female-biased ratios are reported in some populations. We consider the likelihood that fertile men are undercounted and conclude that the mate-guarding hypothesis for human pair bonds gains strength from explicit links with our grandmothering life history.
Bayesian comparison of conceptual models of abrupt climate changes during the last glacial period
NASA Astrophysics Data System (ADS)
Boers, Niklas; Ghil, Michael; Rousseau, Denis-Didier
2017-04-01
Records of oxygen isotope ratios and dust concentrations from the North Greenland Ice Core Project (NGRIP) provide accurate proxies for the evolution of Arctic temperature and atmospheric circulation during the last glacial period (12ka to 100ka b2k) [1]. The most distinctive feature of these records are sudden transitions, called Dansgaard-Oeschger (DO) events, during which Arctic temperatures increased by up to 10 K within a few decades. These warming events are consistently followed by more gradual cooling in Antarctica [2]. The physical mechanisms responsible for these transitions and their out-of-phase relationship between the northern and southern hemisphere remain unclear. Substantial evidence hints at variations of the Atlantic Meridional Overturning Circulation as a key mechanism [2,3], but also other mechanisms, such as variations of sea ice extent [4] or ice shelf coverage [5] may play an important role. Here, we intend to shed more light on the relevance of the different mechanisms suggested to explain the abrupt climate changes and their inter-hemispheric coupling. For this purpose, several conceptual differential equation models are developed that represent the suggested physical mechanisms. Optimal parameters for each model candidate are then determined via maximum likelihood estimation with respect to the observed paleoclimatic data. Our approach is thus semi-empirical: While a model's general form is deduced from physical arguments about relevant climatic mechanisms — oceanic and atmospheric — its specific parameters are obtained by training the model on observed data. The distinct model candidates are evaluated by comparing statistical properties of time series simulated with these models to the observed statistics. In particular, Bayesian model selection criteria like Maximum Likelihood Ratio tests are used to obtain a hierarchy of the different candidates in terms of their likelihood, given the observed oxygen isotope and dust time series. [1] Kindler et al., Clim. Past (2014) [2] WAIS, Nature (2015) [3] Henry et al., Science (2016) [4] Gildor and Tziperman, Phil. Trans. R. Soc. (2003) [5] Petersen et al., Paleoceanography (2013)
NASA Astrophysics Data System (ADS)
Pan, Zhen; Anderes, Ethan; Knox, Lloyd
2018-05-01
One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel space, all order likelihood analysis of the quadratic delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all order lensing and pixel space anomalies. Its tractability relies on a crucial factorization of the pixel space covariance matrix of the polarization observations which allows one to compute the full Gaussian approximate likelihood profile, as a function of r , at the same computational cost of a single likelihood evaluation.
Allele-sharing models: LOD scores and accurate linkage tests.
Kong, A; Cox, N J
1997-11-01
Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.
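A small sketch of the relationship such allele-sharing models exploit: a LOD score is the base-10 logarithm of a likelihood ratio, so it converts directly to a likelihood-ratio chi-square statistic; because the sharing parameter is tested one-sided, a 50:50 chi-square mixture is used here for the p-value. The example LOD value is illustrative.

```python
import numpy as np
from scipy.stats import chi2

def lod_to_pvalue(lod):
    lr_chisq = 2 * np.log(10) * lod           # 2 * ln(likelihood ratio)
    return lr_chisq, 0.5 * chi2.sf(lr_chisq, 1)

print(lod_to_pvalue(3.0))   # LOD 3 corresponds to chi-square ~13.8, p ~ 1e-4
```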
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
Sinharay, Sandip
2017-09-01
Benefiting from item preknowledge is a major type of fraudulent behavior during educational assessments. Belov suggested the posterior shift statistic for detection of item preknowledge and showed its performance to be better on average than that of seven other statistics for detection of item preknowledge for a known set of compromised items. Sinharay suggested a statistic based on the likelihood ratio test for detection of item preknowledge; the advantage of the statistic is that its null distribution is known. Results from simulated and real data and adaptive and nonadaptive tests are used to demonstrate that the Type I error rate and power of the statistic based on the likelihood ratio test are very similar to those of the posterior shift statistic. Thus, the statistic based on the likelihood ratio test appears promising in detecting item preknowledge when the set of compromised items is known.
Krishnamoorthy, K; Oral, Evrim
2017-12-01
Standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
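A sketch of the classical two-class change-in-ratio estimator, the simplest special case of the models discussed above, stated without the effort information; the counts and proportions are illustrative.

```python
def cir_estimate(p1, p2, removed_x, removed_total):
    """Pre-removal population size from the subclass proportion before (p1) and
    after (p2) a known removal of removed_x type-x animals out of removed_total."""
    return (removed_x - removed_total * p2) / (p1 - p2)

# e.g. the proportion of males falls from 0.50 to 0.40 after removing 300 males
# out of 400 animals taken in total
n_hat = cir_estimate(0.50, 0.40, 300, 400)
print(n_hat)   # estimated pre-removal population size (1400 in this example)
```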
The evaluation of the OSGLR algorithm for restructurable controls
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.
1986-01-01
The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques and the OSGLR algorithm in particular is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.
Pal, Suvra; Balakrishnan, N
2017-10-01
In this paper, we consider a competing cause scenario and assume the number of competing causes to follow a Conway-Maxwell Poisson distribution, which can capture both over- and under-dispersion that is usually encountered in discrete data. Assuming that the population of interest has a cured component and that the data are interval censored, as opposed to the usually considered right-censored data, the main contribution is in developing the steps of the expectation maximization algorithm for the determination of the maximum likelihood estimates of the model parameters of the flexible Conway-Maxwell Poisson cure rate model with Weibull lifetimes. An extensive Monte Carlo simulation study is carried out to demonstrate the performance of the proposed estimation method. Model discrimination within the Conway-Maxwell Poisson distribution is addressed using the likelihood ratio test and information-based criteria to select a suitable competing cause distribution that provides the best fit to the data. A simulation study is also carried out to demonstrate the loss in efficiency when selecting an improper competing cause distribution, which justifies the use of a flexible family of distributions for the number of competing causes. Finally, the proposed methodology and the flexibility of the Conway-Maxwell Poisson distribution are illustrated with two known data sets from the literature: smoking cessation data and breast cosmesis data.
Maximum likelihood methods for investigating reporting rates of rings on hunter-shot birds
Conroy, M.J.; Morgan, B.J.T.; North, P.M.
1985-01-01
It is well known that hunters do not report 100% of the rings that they find on shot birds. Reward studies can be used to estimate this reporting rate by comparing recoveries of rings offering a monetary reward with recoveries of ordinary rings. A reward study of American Black Ducks (Anas rubripes) is used to illustrate the design, and to motivate the development of statistical models for estimation and for testing hypotheses of temporal and geographic variation in reporting rates. The method involves indexing the data (recoveries) and parameters (reporting, harvest, and solicitation rates) by geographic and temporal strata. Estimates are obtained under unconstrained (e.g., allowing temporal variability in reporting rates) and constrained (e.g., constant reporting rates) models, and hypotheses are tested by likelihood ratio tests. A FORTRAN program, available from the author, is used to perform the computations.
Ahn, Jaeil; Mukherjee, Bhramar; Banerjee, Mousumi; Cooney, Kathleen A.
2011-01-01
Summary The stereotype regression model for categorical outcomes, proposed by Anderson (1984), is nested between the baseline category logits and adjacent category logits model with proportional odds structure. The stereotype model is more parsimonious than the ordinary baseline-category (or multinomial logistic) model due to a product representation of the log odds-ratios in terms of a common parameter corresponding to each predictor and category-specific scores. The model can be used for both ordered and unordered outcomes. For ordered outcomes, the stereotype model allows more flexibility than the popular proportional odds model in capturing highly subjective ordinal scaling which does not result from categorization of a single latent variable but is inherently multidimensional in nature. As pointed out by Greenland (1994), an additional advantage of the stereotype model is that it provides unbiased and valid inference under outcome-stratified sampling as in case-control studies. In addition, for matched case-control studies, the stereotype model is amenable to the classical conditional likelihood principle, whereas there is no reduction due to sufficiency under the proportional odds model. In spite of these attractive features, the model has been less widely applied, as there are issues with maximum likelihood estimation and likelihood-based testing approaches due to non-linearity and lack of identifiability of the parameters. We present a comprehensive Bayesian inference and model comparison procedure for this class of models as an alternative to the classical frequentist approach. We illustrate our methodology by analyzing data from The Flint Men's Health Study, a case-control study of prostate cancer in African-American men aged 40 to 79 years. We use clinical staging of prostate cancer in terms of Tumors, Nodes and Metastasis (TNM) as the categorical response of interest. PMID:19731262
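The product representation referred to above can be made concrete with a small, hedged sketch: assuming Anderson's usual parameterization with a baseline category whose intercept and score are fixed at zero, category probabilities follow from a softmax of alpha[k] + phi[k] * beta'x. All numerical values below are illustrative only.

```python
import numpy as np

def stereotype_probs(x, alpha, phi, beta):
    """Category probabilities under Anderson's stereotype model:

        log( P(Y=k | x) / P(Y=0 | x) ) = alpha[k] + phi[k] * (beta @ x),

    i.e. one slope vector beta shared by all categories, scaled by a
    category-specific score phi[k]; alpha[0] = phi[0] = 0 fixes the baseline.
    """
    eta = np.asarray(alpha) + np.asarray(phi) * (np.asarray(beta) @ np.asarray(x))
    expo = np.exp(eta - eta.max())           # numerically stabilised softmax
    return expo / expo.sum()

# Illustrative values only: 3 outcome categories, 2 predictors.
alpha = [0.0, -0.5, -1.0]
phi = [0.0, 0.4, 1.0]
beta = [0.8, -0.3]
print(stereotype_probs([1.2, 0.5], alpha, phi, beta))
```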
Distribution of lod scores in oligogenic linkage analysis.
Williams, J T; North, K E; Martin, L J; Comuzzie, A G; Göring, H H; Blangero, J
2001-01-01
In variance component oligogenic linkage analysis it can happen that the residual additive genetic variance is estimated at its lower bound of zero when estimating the effect of the ith quantitative trait locus. Using quantitative trait Q1 from the Genetic Analysis Workshop 12 simulated general population data, we compare the observed lod scores from oligogenic linkage analysis with the empirical lod score distribution under a null model of no linkage. We find that zero residual additive genetic variance in the null model alters the usual distribution of the likelihood-ratio statistic.
An association between neighbourhood wealth inequality and HIV prevalence in sub-Saharan Africa.
Brodish, Paul Henry
2015-05-01
This paper investigates whether community-level wealth inequality predicts HIV serostatus using DHS household survey and HIV biomarker data for men and women ages 15-59 pooled from six sub-Saharan African countries with HIV prevalence rates exceeding 5%. The analysis relates the binary dependent variable HIV-positive serostatus and two weighted aggregate predictors generated from the DHS Wealth Index: the Gini coefficient, and the ratio of the wealth of households in the top 20% wealth quintile to that of those in the bottom 20%. In separate multilevel logistic regression models, wealth inequality is used to predict HIV prevalence within each statistical enumeration area, controlling for known individual-level demographic predictors of HIV serostatus. Potential individual-level sexual behaviour mediating variables are added to assess attenuation, and ordered logit models investigate whether the effect is mediated through extramarital sexual partnerships. Both the cluster-level wealth Gini coefficient and wealth ratio significantly predict positive HIV serostatus: a 1 point increase in the cluster-level Gini coefficient and in the cluster-level wealth ratio is associated with a 2.35 and 1.3 times increased likelihood of being HIV positive, respectively, controlling for individual-level demographic predictors, and associations are stronger in models including only males. Adding sexual behaviour variables attenuates the effects of both inequality measures. Reporting eleven plus lifetime sexual partners increases the odds of being HIV positive over five-fold. The likelihood of having more extramarital partners is significantly higher in clusters with greater wealth inequality measured by the wealth ratio. Disaggregating logit models by sex indicates important risk behaviour differences. Household wealth inequality within DHS clusters predicts HIV serostatus, and the relationship is partially mediated by more extramarital partners. These results emphasize the importance of incorporating higher-level contextual factors, investigating behavioural mediators, and disaggregating by sex in assessing HIV risk in order to uncover potential mechanisms of action and points of preventive intervention.
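As an illustration of the two aggregate predictors used above, the following Python sketch computes an (optionally weighted) Gini coefficient and a top-to-bottom quintile wealth ratio from household wealth index values; it is a generic implementation, not the DHS-specific procedure, and the sample values are placeholders.

```python
import numpy as np

def gini(wealth, weights=None):
    """Weighted Gini coefficient of a wealth index (0 = equality, 1 = maximal
    inequality), computed from the area under the Lorenz curve."""
    x = np.asarray(wealth, float)
    w = np.ones_like(x) if weights is None else np.asarray(weights, float)
    order = np.argsort(x)
    x, w = x[order], w[order]
    p = np.concatenate([[0.0], np.cumsum(w) / w.sum()])                  # population share
    lorenz = np.concatenate([[0.0], np.cumsum(x * w) / (x * w).sum()])   # wealth share
    area = np.sum(np.diff(p) * (lorenz[1:] + lorenz[:-1]) / 2.0)
    return 1.0 - 2.0 * area

def top_bottom_ratio(wealth):
    """Mean wealth of the top quintile divided by that of the bottom quintile
    (assumes a strictly positive index; survey weighting is omitted here)."""
    x = np.sort(np.asarray(wealth, float))
    k = max(1, len(x) // 5)
    return x[-k:].mean() / x[:k].mean()

households = np.array([1.0, 1.5, 2.0, 2.0, 3.0, 4.0, 6.0, 9.0, 15.0, 30.0])
print(gini(households), top_bottom_ratio(households))
```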
An association between neighborhood wealth inequality and HIV prevalence in sub-Saharan Africa
Brodish, Paul Henry
2016-01-01
Summary This paper investigates whether community-level wealth inequality predicts HIV serostatus, using DHS household survey and HIV biomarker data for men and women ages 15-59 pooled from six sub-Saharan African countries with HIV prevalence rates exceeding five percent. The analysis relates the binary dependent variable HIV positive serostatus and two weighted aggregate predictors generated from the DHS Wealth Index: the Gini coefficient, and the ratio of the wealth of households in the top 20% wealth quintile to that of those in the bottom 20%. In separate multilevel logistic regression models, wealth inequality is used to predict HIV prevalence within each SEA, controlling for known individual-level demographic predictors of HIV serostatus. Potential individual-level sexual behavior mediating variables are added to assess attenuation, and ordered logit models investigate whether the effect is mediated through extramarital sexual partnerships. Both the cluster-level wealth Gini coefficient and wealth ratio significantly predict positive HIV serostatus: a 1 point increase in the cluster-level Gini coefficient and in the cluster-level wealth ratio is associated with a 2.35 and 1.3 times increased likelihood of being HIV positive, respectively, controlling for individual-level demographic predictors, and associations are stronger in models including only males. Adding sexual behavior variables attenuates the effects of both inequality measures. Reporting 11 plus lifetime sexual partners increases the odds of being HIV positive over five-fold. The likelihood of having more extramarital partners is significantly higher in clusters with greater wealth inequality measured by the wealth ratio. Disaggregating logit models by sex indicates important risk behavior differences. Household wealth inequality within DHS clusters predicts HIV serostatus, and the relationship is partially mediated by more extramarital partners. These results emphasize the importance of incorporating higher-level contextual factors, investigating behavioral mediators, and disaggregating by sex in assessing HIV risk in order to uncover potential mechanisms of action and points of preventive intervention PMID:24406021
Cheng, Juan-Juan; Zhao, Shi-Di; Gao, Ming-Zhu; Huang, Hong-Yu; Gu, Bing; Ma, Ping; Chen, Yan; Wang, Jun-Hong; Yang, Cheng-Jian; Yan, Zi-He
2015-01-01
Background Previous studies have reported that natriuretic peptides in the blood and pleural fluid (PF) are effective diagnostic markers for heart failure (HF). These natriuretic peptides include N-terminal pro-brain natriuretic peptide (NT-proBNP), brain natriuretic peptide (BNP), and midregion pro-atrial natriuretic peptide (MR-proANP). This systematic review and meta-analysis evaluates the diagnostic accuracy of blood and PF natriuretic peptides for HF in patients with pleural effusion. Methods PubMed and EMBASE databases were searched to identify articles published in English that investigated the diagnostic accuracy of BNP, NT-proBNP, and MR-proANP for HF. The last search was performed on 9 October 2014. The quality of the eligible studies was assessed using the revised Quality Assessment of Diagnostic Accuracy Studies tool. The diagnostic performance characteristics (sensitivity, specificity, and other measures of accuracy) were pooled and examined using a bivariate model. Results In total, 14 studies were included in the meta-analysis, including 12 studies reporting the diagnostic accuracy of PF NT-proBNP and 4 studies evaluating blood NT-proBNP. The summary estimates of PF NT-proBNP for HF had a diagnostic sensitivity of 0.94 (95% confidence interval [CI]: 0.90–0.96), specificity of 0.91 (95% CI: 0.86–0.95), positive likelihood ratio of 10.9 (95% CI: 6.4–18.6), negative likelihood ratio of 0.07 (95% CI: 0.04–0.12), and diagnostic odds ratio of 157 (95% CI: 57–430). The overall sensitivity of blood NT-proBNP for diagnosis of HF was 0.92 (95% CI: 0.86–0.95), with a specificity of 0.88 (95% CI: 0.77–0.94), positive likelihood ratio of 7.8 (95% CI: 3.7–16.3), negative likelihood ratio of 0.10 (95% CI: 0.06–0.16), and diagnostic odds ratio of 81 (95% CI: 27–241). The diagnostic accuracy of PF MR-proANP and blood and PF BNP was not analyzed due to the small number of related studies. Conclusions BNP, NT-proBNP, and MR-proANP, either in blood or PF, are effective tools for diagnosis of HF. Additional studies are needed to rigorously evaluate the diagnostic accuracy of PF and blood MR-proANP and BNP for the diagnosis of HF. PMID:26244664
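For orientation, the likelihood ratios and diagnostic odds ratio reported above are linked to sensitivity and specificity by simple identities; the sketch below applies them to the pooled pleural-fluid NT-proBNP estimates (the small discrepancy from the reported 10.9 and 157 reflects the bivariate pooling model and rounding).

```python
def diagnostic_ratios(sensitivity, specificity):
    """Positive/negative likelihood ratios and the diagnostic odds ratio
    implied by a single sensitivity/specificity pair."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg, lr_pos / lr_neg

# Pooled pleural-fluid NT-proBNP estimates reported above.
print(diagnostic_ratios(0.94, 0.91))   # roughly (10.4, 0.066, 158)
```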
Babafemi, Emmanuel O; Cherian, Benny P; Banting, Lee; Mills, Graham A; Ngianga, Kandala
2017-10-25
Rapid and accurate diagnosis of tuberculosis (TB) is key to managing the disease and to controlling and preventing its transmission. Many established diagnostic methods suffer from low sensitivity or delay of timely results and are inadequate for rapid detection of Mycobacterium tuberculosis (MTB) in pulmonary and extra-pulmonary clinical samples. This study examined whether a real-time polymerase chain reaction (RT-PCR) assay, with a turnaround time of 2 h, would prove effective for routine detection of MTB by clinical microbiology laboratories. A systematic literature search was performed for publications in any language on the detection of MTB in pathological samples by RT-PCR assay. The following sources were used: MEDLINE via PubMed, EMBASE, BIOSIS Citation Index, Web of Science, SCOPUS, ISI Web of Knowledge and Cochrane Infectious Diseases Group Specialised Register, grey literature, World Health Organization and Centres for Disease Control and Prevention websites. Forty-six studies met the inclusion criteria. Pooled summary estimates (95% CIs) were calculated for overall accuracy, and a bivariate meta-regression model was used for the meta-analysis. Summary estimates for pulmonary TB (31 studies) were as follows: sensitivity 0.82 (95% CI 0.81-0.83), specificity 0.99 (95% CI 0.99-0.99), positive likelihood ratio 43.00 (28.23-64.81), negative likelihood ratio 0.16 (0.12-0.20), diagnostic odds ratio 324.26 (95% CI 189.08-556.09) and area under curve 0.99. Summary estimates for extra-pulmonary TB (25 studies) were as follows: sensitivity 0.70 (95% CI 0.67-0.72), specificity 0.99 (95% CI 0.99-0.99), positive likelihood ratio 29.82 (17.86-49.78), negative likelihood ratio 0.33 (0.26-0.42), diagnostic odds ratio 125.20 (95% CI 65.75-238.36) and area under curve 0.96. The RT-PCR assay demonstrated a high degree of sensitivity for pulmonary TB and good sensitivity for extra-pulmonary TB. It indicated a high degree of specificity for ruling in TB infection from sampling regimes. This was acceptable, but it may be better used as a rule-out add-on diagnostic test. RT-PCR assays demonstrate both a high degree of sensitivity in pulmonary samples and rapidity of detection of TB, which is an important factor in achieving effective global control and for patient management in terms of initiating early and appropriate anti-tubercular therapy. PROSPERO CRD42015027534.
A score to estimate the likelihood of detecting advanced colorectal neoplasia at colonoscopy
Kaminski, Michal F; Polkowski, Marcin; Kraszewska, Ewa; Rupinski, Maciej; Butruk, Eugeniusz; Regula, Jaroslaw
2014-01-01
Objective This study aimed to develop and validate a model to estimate the likelihood of detecting advanced colorectal neoplasia in Caucasian patients. Design We performed a cross-sectional analysis of database records for 40-year-old to 66-year-old patients who entered a national primary colonoscopy-based screening programme for colorectal cancer in 73 centres in Poland in the year 2007. We used multivariate logistic regression to investigate the associations between clinical variables and the presence of advanced neoplasia in a randomly selected test set, and confirmed the associations in a validation set. We used model coefficients to develop a risk score for detection of advanced colorectal neoplasia. Results Advanced colorectal neoplasia was detected in 2544 of the 35 918 included participants (7.1%). In the test set, a logistic-regression model showed that independent risk factors for advanced colorectal neoplasia were: age, sex, family history of colorectal cancer, cigarette smoking (p<0.001 for these four factors), and Body Mass Index (p=0.033). In the validation set, the model was well calibrated (ratio of expected to observed risk of advanced neoplasia: 1.00 (95% CI 0.95 to 1.06)) and had moderate discriminatory power (c-statistic 0.62). We developed a score that estimated the likelihood of detecting advanced neoplasia in the validation set, from 1.32% for patients scoring 0, to 19.12% for patients scoring 7–8. Conclusions Developed and internally validated score consisting of simple clinical factors successfully estimates the likelihood of detecting advanced colorectal neoplasia in asymptomatic Caucasian patients. Once externally validated, it may be useful for counselling or designing primary prevention studies. PMID:24385598
A score to estimate the likelihood of detecting advanced colorectal neoplasia at colonoscopy.
Kaminski, Michal F; Polkowski, Marcin; Kraszewska, Ewa; Rupinski, Maciej; Butruk, Eugeniusz; Regula, Jaroslaw
2014-07-01
This study aimed to develop and validate a model to estimate the likelihood of detecting advanced colorectal neoplasia in Caucasian patients. We performed a cross-sectional analysis of database records for 40-year-old to 66-year-old patients who entered a national primary colonoscopy-based screening programme for colorectal cancer in 73 centres in Poland in the year 2007. We used multivariate logistic regression to investigate the associations between clinical variables and the presence of advanced neoplasia in a randomly selected test set, and confirmed the associations in a validation set. We used model coefficients to develop a risk score for detection of advanced colorectal neoplasia. Advanced colorectal neoplasia was detected in 2544 of the 35,918 included participants (7.1%). In the test set, a logistic-regression model showed that independent risk factors for advanced colorectal neoplasia were: age, sex, family history of colorectal cancer, cigarette smoking (p<0.001 for these four factors), and Body Mass Index (p=0.033). In the validation set, the model was well calibrated (ratio of expected to observed risk of advanced neoplasia: 1.00 (95% CI 0.95 to 1.06)) and had moderate discriminatory power (c-statistic 0.62). We developed a score that estimated the likelihood of detecting advanced neoplasia in the validation set, from 1.32% for patients scoring 0, to 19.12% for patients scoring 7-8. Developed and internally validated score consisting of simple clinical factors successfully estimates the likelihood of detecting advanced colorectal neoplasia in asymptomatic Caucasian patients. Once externally validated, it may be useful for counselling or designing primary prevention studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
On the Power Functions of Test Statistics in Order Restricted Inference.
1984-10-01
SUMMARY: We study the power functions of both the likelihood ratio and contrast statistics for detecting a totally ordered trend in a collection of ... samples from normal populations. Bartholomew (1959a,b; 1961) studied the likelihood ratio tests (LRTs) for H0 versus H1 - H0, assuming in one case that ...
Three regularities of recognition memory: the role of bias.
Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok
2015-12-01
A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.
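A hedged illustration of the likelihood ratio assumption discussed above: in an unequal-variance Gaussian signal detection model, the decision variable is the ratio of the old-item and new-item densities at the observed memory strength. The parameter values below are illustrative, not estimates from these experiments.

```python
from scipy.stats import norm

def sdt_likelihood_ratio(x, mu_old=1.0, sigma_old=1.25, mu_new=0.0, sigma_new=1.0):
    """f(x | old) / f(x | new) in an unequal-variance Gaussian signal
    detection model; responding "old" when this ratio exceeds a criterion
    is the likelihood-ratio decision rule.  Parameter values are illustrative."""
    return norm.pdf(x, mu_old, sigma_old) / norm.pdf(x, mu_new, sigma_new)

print(sdt_likelihood_ratio(0.8))
```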
Testing CEV stochastic volatility models using implied volatility index data
NASA Astrophysics Data System (ADS)
Kim, Jungmu; Park, Yuen Jung; Ryu, Doojin
2018-06-01
We test the goodness-of-fit of stochastic volatility (SV) models using the implied volatility index of the KOSPI200 options (VKOSPI). The likelihood ratio tests reject the Heston and Hull-White SV models, whether or not they include jumps. Our estimation results advocate the unconstrained constant elasticity of variance (CEV) model with return jumps for describing the physical-measure dynamics of the spot index. The sub-period analysis shows that there was a significant increase in the size and frequency of jumps during the crisis period, when compared to those in the normal periods.
TOO MANY MEN? SEX RATIOS AND WOMEN’S PARTNERING BEHAVIOR IN CHINA
Trent, Katherine; South, Scott J.
2011-01-01
The relative numbers of women and men are changing dramatically in China, but the consequences of these imbalanced sex ratios have received little attention. We merge data from the Chinese Health and Family Life Survey with community-level data from Chinese censuses to examine the relationship between cohort- and community-specific sex ratios and women’s partnering behavior. Consistent with demographic-opportunity theory and sociocultural theory, we find that high sex ratios (indicating more men relative to women) are associated with an increased likelihood that women marry before age 25. However, high sex ratios are also associated with an increased likelihood that women engage in premarital and extramarital sexual relationships and have had more than one sexual partner, findings consistent with demographic-opportunity theory but inconsistent with sociocultural theory. PMID:22199403
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
Wan, Bing; Wang, Siqi; Tu, Mengqi; Wu, Bo; Han, Ping; Xu, Haibo
2017-03-01
The purpose of this meta-analysis was to evaluate the diagnostic accuracy of perfusion magnetic resonance imaging (MRI) as a method for differentiating glioma recurrence from pseudoprogression. The PubMed, Embase, Cochrane Library, and Chinese Biomedical databases were searched comprehensively for relevant studies up to August 3, 2016 according to specific inclusion and exclusion criteria. The quality of the included studies was assessed according to the quality assessment of diagnostic accuracy studies (QUADAS-2). After performing heterogeneity and threshold effect tests, pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were calculated. Publication bias was evaluated visually by a funnel plot and quantitatively using Deek funnel plot asymmetry test. The area under the summary receiver operating characteristic curve was calculated to demonstrate the diagnostic performance of perfusion MRI. Eleven studies covering 416 patients and 418 lesions were included in this meta-analysis. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.88 (95% confidence interval [CI] 0.84-0.92), 0.77 (95% CI 0.69-0.84), 3.93 (95% CI 2.83-5.46), 0.16 (95% CI 0.11-0.22), and 27.17 (95% CI 14.96-49.35), respectively. The area under the summary receiver operating characteristic curve was 0.8899. There was no notable publication bias. Sensitivity analysis showed that the meta-analysis results were stable and credible. While perfusion MRI is not the ideal diagnostic method for differentiating glioma recurrence from pseudoprogression, it could improve diagnostic accuracy. Therefore, further research on combining perfusion MRI with other imaging modalities is warranted.
Safari, Saeed; Baratloo, Alireza; Hashemi, Behrooz; Rahmati, Farhad; Forouzanfar, Mohammad Mehdi; Motamedi, Maryam; Mirmohseni, Ladan
2016-01-01
Determining etiologic causes and prognosis can significantly improve management of syncope patients. The present study aimed to compare the values of San Francisco, Osservatorio Epidemiologico sulla Sincope nel Lazio (OESIL), Boston, and Risk Stratification of Syncope in the Emergency Department (ROSE) score clinical decision rules in predicting the short-term serious outcome of syncope patients. The present diagnostic accuracy study with 1-week follow-up was designed to evaluate the predictive values of the four mentioned clinical decision rules. Screening performance characteristics of each model in predicting mortality, myocardial infarction (MI), and cerebrovascular accidents (CVAs) were calculated and compared. To evaluate the value of each aforementioned model in predicting the outcome, sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio were calculated and receiver operating characteristic (ROC) curve analysis was done. A total of 187 patients (mean age: 64.2 ± 17.2 years) were enrolled in the study. Mortality, MI, and CVA were seen in 19 (10.2%), 12 (6.4%), and 36 (19.2%) patients, respectively. The area under the ROC curve for the OESIL, San Francisco, Boston, and ROSE models in predicting the risk of 1-week mortality, MI, and CVA was in the 30-70% range, with no significant difference among models (P > 0.05). The pooled model did not show higher accuracy in prediction of mortality, MI, and CVA compared to the other models (P > 0.05). This study revealed the weakness of all four evaluated models in predicting the short-term serious outcome of syncope patients referred to the emergency department, without a significant advantage of any one model over the others.
NASA Astrophysics Data System (ADS)
Yoon, Ilsang; Weinberg, Martin D.; Katz, Neal
2011-06-01
We introduce a new galaxy image decomposition tool, GALPHAT (GALaxy PHotometric ATtributes), which is a front-end application of the Bayesian Inference Engine (BIE), a parallel Markov chain Monte Carlo package, to provide full posterior probability distributions and reliable confidence intervals for all model parameters. The BIE relies on GALPHAT to compute the likelihood function. GALPHAT generates scale-free cumulative image tables for the desired model family with precise error control. Interpolation of this table yields accurate pixellated images with any centre, scale and inclination angle. GALPHAT then rotates the image by position angle using a Fourier shift theorem, yielding high-speed, accurate likelihood computation. We benchmark this approach using an ensemble of simulated Sérsic model galaxies over a wide range of observational conditions: the signal-to-noise ratio S/N, the ratio of galaxy size to the point spread function (PSF) and the image size, and errors in the assumed PSF; and a range of structural parameters: the half-light radius re and the Sérsic index n. We characterize the strength of parameter covariance in the Sérsic model, which increases with S/N and n, and the results strongly motivate the need for the full posterior probability distribution in galaxy morphology analyses and later inferences. The test results for simulated galaxies successfully demonstrate that, with a careful choice of Markov chain Monte Carlo algorithms and fast model image generation, GALPHAT is a powerful analysis tool for reliably inferring morphological parameters from a large ensemble of galaxies over a wide range of different observational conditions.
Xu, Mei-Mei; Jia, Hong-Yu; Yan, Li-Li; Li, Shan-Shan; Zheng, Yue
2017-01-01
Abstract Background: This meta-analysis aimed to provide a pooled analysis of prospective controlled trials comparing the diagnostic accuracy of 22-G and 25-G needles on endoscopic ultrasonography (EUS-FNA) of the solid pancreatic mass. Methods: We established a rigorous study protocol according to Cochrane Collaboration recommendations. We systematically searched the PubMed and Embase databases to identify articles to include in the meta-analysis. Sensitivity, specificity, and corresponding 95% confidence intervals were calculated for 22-G and 25-G needles of individual studies from the contingency tables. Results: Eleven prospective controlled trials included a total of 837 patients (412 with 22-G vs 425 with 25-G). Our outcomes revealed that 25-G needles (92% [95% CI, 89%–95%]) have higher sensitivity than 22-G needles (88% [95% CI, 84%–91%]) on solid pancreatic mass EUS-FNA (P = 0.046). However, there were no significant differences between the 2 groups in overall diagnostic specificity (P = 0.842). The pooled positive likelihood ratio was 12.61 (95% CI, 5.65–28.14), and the negative likelihood ratio was 0.16 (95% CI, 0.12–0.21) for the 22-G needle. The pooled positive likelihood ratio was 8.44 (95% CI, 3.87–18.42), and the negative likelihood ratio was 0.13 (95% CI, 0.09–0.18) for the 25-G needle. The area under the summary receiver operating characteristic curve was 0.97 for the 22-G needle and 0.96 for the 25-G needle. Conclusion: Compared with 22-G EUS-FNA needles, 25-G needles showed superior sensitivity in the evaluation of solid pancreatic lesions by EUS-FNA. PMID:28151856
Accuracy of diagnostic tests to detect asymptomatic bacteriuria during pregnancy.
Mignini, Luciano; Carroli, Guillermo; Abalos, Edgardo; Widmer, Mariana; Amigot, Susana; Nardin, Juan Manuel; Giordano, Daniel; Merialdi, Mario; Arciero, Graciela; Del Carmen Hourquescos, Maria
2009-02-01
A dipslide is a plastic paddle coated with agar that is attached to a plastic cap that screws onto a sterile plastic vial. Our objective was to estimate the diagnostic accuracy of the dipslide culture technique to detect asymptomatic bacteriuria during pregnancy and to evaluate the accuracy of nitrite and leucocyte esterase dipsticks for screening. This was an ancillary study within a trial comparing single-day with 7-day therapy in treating asymptomatic bacteriuria. Clean-catch midstream samples were collected from pregnant women seeking routine care. Positive and negative likelihood ratios, sensitivity, and specificity were estimated for the culture-based dipslide (for detection) and for chemical dipsticks (nitrites, leukocyte esterase, or both; for screening), using traditional urine culture as the "gold standard." A total of 3,048 eligible pregnant women were screened. The prevalence of asymptomatic bacteriuria was 15%, with Escherichia coli the most prevalent organism. The likelihood ratio for detecting asymptomatic bacteriuria with a positive dipslide test was 225 (95% confidence interval [CI] 113-449), increasing the probability of asymptomatic bacteriuria to 98%; the likelihood ratio for a negative dipslide test was 0.02 (95% CI 0.01-0.05), reducing the probability of bacteriuria to less than 1%. The positive likelihood ratio of leukocyte esterase and nitrite dipsticks (when both or either one was positive) was 6.95 (95% CI 5.80-8.33), increasing the probability of bacteriuria to only 54%; the negative likelihood ratio was 0.50 (95% CI 0.45-0.57), reducing the probability to 8%. A pregnant woman with a positive dipslide test is very likely to have a definitive diagnosis of asymptomatic bacteriuria, whereas a negative result effectively rules out the presence of bacteriuria. Dipsticks that measure nitrites and leukocyte esterase have low sensitivity for use in screening for asymptomatic bacteriuria during gestation. ISRCTN, isrctn.org, 1196608 II.
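The probability revisions quoted above follow from Bayes' rule on the odds scale; a minimal sketch reproducing them from the 15% prevalence and the reported likelihood ratios:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Bayes' rule on the odds scale: pre-test odds times LR gives post-test
    odds, converted back to a probability."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# 15% prevalence of asymptomatic bacteriuria, as reported above.
print(post_test_probability(0.15, 225))    # ~0.98  (positive dipslide)
print(post_test_probability(0.15, 0.02))   # ~0.004 (negative dipslide)
```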
SEPARABLE FACTOR ANALYSIS WITH APPLICATIONS TO MORTALITY DATA
Fosdick, Bailey K.; Hoff, Peter D.
2014-01-01
Human mortality data sets can be expressed as multiway data arrays, the dimensions of which correspond to categories by which mortality rates are reported, such as age, sex, country and year. Regression models for such data typically assume an independent error distribution or an error model that allows for dependence along at most one or two dimensions of the data array. However, failing to account for other dependencies can lead to inefficient estimates of regression parameters, inaccurate standard errors and poor predictions. An alternative to assuming independent errors is to allow for dependence along each dimension of the array using a separable covariance model. However, the number of parameters in this model increases rapidly with the dimensions of the array and, for many arrays, maximum likelihood estimates of the covariance parameters do not exist. In this paper, we propose a submodel of the separable covariance model that estimates the covariance matrix for each dimension as having factor analytic structure. This model can be viewed as an extension of factor analysis to array-valued data, as it uses a factor model to estimate the covariance along each dimension of the array. We discuss properties of this model as they relate to ordinary factor analysis, describe maximum likelihood and Bayesian estimation methods, and provide a likelihood ratio testing procedure for selecting the factor model ranks. We apply this methodology to the analysis of data from the Human Mortality Database, and show in a cross-validation experiment how it outperforms simpler methods. Additionally, we use this model to impute mortality rates for countries that have no mortality data for several years. Unlike other approaches, our methodology is able to estimate similarities between the mortality rates of countries, time periods and sexes, and use this information to assist with the imputations. PMID:25489353
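The covariance structure described above can be sketched as follows: each dimension of the array receives a factor-analytic covariance, and the covariance of the vectorised array is their Kronecker product. The ranks, dimensions and ordering convention below are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def factor_cov(loadings, uniquenesses):
    """Factor-analytic covariance  Lambda @ Lambda.T + diag(psi)."""
    lam = np.asarray(loadings, float)
    return lam @ lam.T + np.diag(uniquenesses)

# Covariance of the vectorised (country x age) array as a Kronecker product
# of per-dimension factor-analytic covariances (ranks are illustrative).
rng = np.random.default_rng(0)
sigma_country = factor_cov(rng.normal(size=(5, 2)), np.full(5, 0.3))   # rank 2
sigma_age = factor_cov(rng.normal(size=(4, 1)), np.full(4, 0.5))       # rank 1
sigma_vec = np.kron(sigma_country, sigma_age)
print(sigma_vec.shape)   # (20, 20)
```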
Validation of a school-based amblyopia screening protocol in a kindergarten population.
Casas-Llera, Pilar; Ortega, Paula; Rubio, Inmaculada; Santos, Verónica; Prieto, María J; Alio, Jorge L
2016-08-04
To validate a school-based amblyopia screening program model by comparing its outcomes to those of a state-of-the-art conventional ophthalmic clinic examination in a kindergarten population of children between the ages of 4 and 5 years. An amblyopia screening protocol, which consisted of visual acuity measurement using Lea charts, ocular alignment test, ocular motility assessment, and stereoacuity with TNO random-dot test, was performed at school in a pediatric 4- to 5-year-old population by qualified healthcare professionals. The outcomes were validated in a selected group by a conventional ophthalmologic examination performed in a fully equipped ophthalmologic center. The ophthalmologic evaluation was used to confirm whether or not children were correctly classified by the screening protocol. The sensitivity and specificity of the test model to detect amblyopia were established. A total of 18,587 4- to 5-year-old children were subjected to the amblyopia screening program during the 2010-2011 school year. A population of 100 children were selected for the ophthalmologic validation screening. A sensitivity of 89.3%, specificity of 93.1%, positive predictive value of 83.3%, negative predictive value of 95.7%, positive likelihood ratio of 12.86, and negative likelihood ratio of 0.12 was obtained for the amblyopia screening validation model. The amblyopia screening protocol model tested in this investigation shows high sensitivity and specificity in detecting high-risk cases of amblyopia compared to the standard ophthalmologic examination. This screening program may be highly relevant for amblyopia screening at schools.
Modeling extreme PM10 concentration in Malaysia using generalized extreme value distribution
NASA Astrophysics Data System (ADS)
Hasan, Husna; Mansor, Nadiah; Salleh, Nur Hanim Mohd
2015-05-01
Extreme PM10 concentrations from the Air Pollutant Index (API) at thirteen monitoring stations in Malaysia are modeled using the Generalized Extreme Value (GEV) distribution. The data are blocked into monthly periods. The Mann-Kendall (MK) test suggests a non-stationary model, so two models are considered for the stations with a trend. The likelihood ratio test is used to determine the best-fitting model, and the result shows that only two stations favor the non-stationary model (Model 2) while the other eleven stations favor the stationary model (Model 1). The return level of PM10 concentration, that is, the level expected to be exceeded once within a selected period, is obtained.
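A hedged sketch of the analysis pipeline described above, using SciPy: fit a stationary GEV to block maxima, fit a non-stationary alternative with a linear trend in the location parameter, compare the two with a likelihood ratio test, and compute a return level. The simulated data, starting values and block length are illustrative assumptions, and scipy's shape parameter c equals minus the usual GEV shape.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2, genextreme

# Simulated monthly PM10 maxima with a mild upward trend (placeholder data).
t = np.arange(120)
x = genextreme.rvs(c=-0.1, loc=80 + 0.05 * t, scale=15, size=t.size, random_state=1)

# Model 1: stationary GEV.
c0, loc0, scale0 = genextreme.fit(x)
ll0 = genextreme.logpdf(x, c0, loc0, scale0).sum()

# Model 2: location drifts linearly with time, mu_t = b0 + b1 * t.
def negloglik(theta):
    c, b0, b1, log_scale = theta
    return -genextreme.logpdf(x, c, loc=b0 + b1 * t, scale=np.exp(log_scale)).sum()

fit1 = minimize(negloglik, x0=[c0, loc0, 0.0, np.log(scale0)], method="Nelder-Mead")
ll1 = -fit1.fun

# Likelihood ratio test of the trend term (one extra parameter).
lr_stat = 2.0 * (ll1 - ll0)
print("LR statistic:", lr_stat, "p-value:", chi2.sf(lr_stat, df=1))

# Level exceeded on average once per 120 months under the stationary fit.
print("return level:", genextreme.ppf(1.0 - 1.0 / 120.0, c0, loc0, scale0))
```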
The Fecal Microbiota Profile and Bronchiolitis in Infants
Linnemann, Rachel W.; Mansbach, Jonathan M.; Ajami, Nadim J.; Espinola, Janice A.; Petrosino, Joseph F.; Piedra, Pedro A.; Stevenson, Michelle D.; Sullivan, Ashley F.; Thompson, Amy D.; Camargo, Carlos A.
2016-01-01
BACKGROUND: Little is known about the association of gut microbiota, a potentially modifiable factor, with bronchiolitis in infants. We aimed to determine the association of fecal microbiota with bronchiolitis in infants. METHODS: We conducted a case–control study. As a part of multicenter prospective study, we collected stool samples from 40 infants hospitalized with bronchiolitis. We concurrently enrolled 115 age-matched healthy controls. By applying 16S rRNA gene sequencing and an unbiased clustering approach to these 155 fecal samples, we identified microbiota profiles and determined the association of microbiota profiles with likelihood of bronchiolitis. RESULTS: Overall, the median age was 3 months, 55% were male, and 54% were non-Hispanic white. Unbiased clustering of fecal microbiota identified 4 distinct profiles: Escherichia-dominant profile (30%), Bifidobacterium-dominant profile (21%), Enterobacter/Veillonella-dominant profile (22%), and Bacteroides-dominant profile (28%). The proportion of bronchiolitis was lowest in infants with the Enterobacter/Veillonella-dominant profile (15%) and highest in the Bacteroides-dominant profile (44%), corresponding to an odds ratio of 4.59 (95% confidence interval, 1.58–15.5; P = .008). In the multivariable model, the significant association between the Bacteroides-dominant profile and a greater likelihood of bronchiolitis persisted (odds ratio for comparison with the Enterobacter/Veillonella-dominant profile, 4.24; 95% confidence interval, 1.56–12.0; P = .005). In contrast, the likelihood of bronchiolitis in infants with the Escherichia-dominant or Bifidobacterium-dominant profile was not significantly different compared with those with the Enterobacter/Veillonella-dominant profile. CONCLUSIONS: In this case–control study, we identified 4 distinct fecal microbiota profiles in infants. The Bacteroides-dominant profile was associated with a higher likelihood of bronchiolitis. PMID:27354456
Alavi, Afsaneh; Sibbald, R Gary; Nabavizadeh, Reza; Valaei, Farnaz; Coutts, Pat; Mayer, Dieter
2015-12-01
To determine the accuracy of audible arterial foot signals with an audible handheld Doppler ultrasound for identification of significant peripheral arterial disease as a simple, quick, and readily available bedside screening tool. Two hundred consecutive patients referred to an interprofessional wound care clinic underwent audible handheld Doppler ultrasound of both legs. As a control and comparator, a formal bilateral lower leg vascular study including the calculation of Ankle Brachial Pressure Index and toe pressure (TP) was performed at the vascular lab. Diagnostic reliability of audible handheld Doppler ultrasound was calculated versus the Ankle Brachial Pressure Index as the gold standard test. The handheld Doppler yielded a sensitivity of 42.8%, a specificity of 97.5%, a negative predictive value of 94.10%, a positive predictive value of 65.22%, a positive likelihood ratio of 17.52, and a negative likelihood ratio of 0.59. The univariable logistic regression model had an area under the curve of 0.78. There was a statistically significant difference at the 5% level between the univariable and multivariable areas under the curve of the dorsalis pedis and posterior tibial models (p < 0.001). Audible handheld Doppler ultrasound proved to be a reliable, simple, rapid, and inexpensive bedside exclusion test of peripheral arterial disease in diabetic and nondiabetic patients. © The Author(s) 2015.
Measuring fit of sequence data to phylogenetic model: gain of power using marginal tests.
Waddell, Peter J; Ota, Rissa; Penny, David
2009-10-01
Testing fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (Unended quest: an intellectual autobiography. Fontana, London, 1976) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (Nature 297:197-200, 1982) to the present. We compare the general log-likelihood ratio statistic (the G or G^2 statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (P approximately 0.5), but the marginalized tests do. Tests on pairwise frequency (F) matrices strongly (P < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (P < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa; not necessarily those expected from examining other regions of the genome. By marginalizing the 4^t patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony informative characters of a minimum possible length, the likelihood ratio test regains power, and it too rejects the evolutionary model with P < 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published trees may really be far larger than the analytical methods (e.g., bootstrap) report.
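The G statistic used above is the general log-likelihood ratio for multinomial counts; a minimal sketch follows (the counts and degrees of freedom are purely illustrative).

```python
import numpy as np
from scipy.stats import chi2

def g_statistic(observed, expected):
    """General log-likelihood ratio statistic G = 2 * sum O * ln(O / E) for
    observed versus model-expected multinomial counts; zero observed cells
    contribute nothing to the sum."""
    o = np.asarray(observed, float)
    e = np.asarray(expected, float)
    mask = o > 0
    return 2.0 * np.sum(o[mask] * np.log(o[mask] / e[mask]))

# Made-up site-pattern counts, purely to show the call.
obs = [512, 130, 41, 17]
exp = [500, 140, 45, 15]
g = g_statistic(obs, exp)
print(g, chi2.sf(g, df=3))   # the appropriate df depends on the model tested
```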
Schröter, Hannes; Studzinski, Beatrix; Dietz, Pavel; Ulrich, Rolf; Striegel, Heiko; Simon, Perikles
2016-01-01
Purpose This study assessed the prevalence of physical and cognitive doping in recreational triathletes with two different randomized response models, that is, the Cheater Detection Model (CDM) and the Unrelated Question Model (UQM). Since both models have been employed in assessing doping, the major objective of this study was to investigate whether the estimates of these two models converge. Material and Methods An anonymous questionnaire was distributed to 2,967 athletes at two triathlon events (Frankfurt and Wiesbaden, Germany). Doping behavior was assessed either with the CDM (Frankfurt sample, one Wiesbaden subsample) or the UQM (one Wiesbaden subsample). A generalized likelihood-ratio test was employed to check whether the prevalence estimates differed significantly between models. In addition, we compared the prevalence rates of the present survey with those of a previous study on a comparable sample. Results After exclusion of incomplete questionnaires and outliers, the data of 2,017 athletes entered the final data analysis. Twelve-month prevalence for physical doping ranged from 4% (Wiesbaden, CDM and UQM) to 12% (Frankfurt CDM), and for cognitive doping from 1% (Wiesbaden, CDM) to 9% (Frankfurt CDM). The generalized likelihood-ratio test indicated no differences in prevalence rates between the two methods. Furthermore, there were no significant differences in prevalences between the present (undertaken in 2014) and the previous survey (undertaken in 2011), although the estimates tended to be smaller in the present survey. Discussion The results suggest that the two models can provide converging prevalence estimates. The high rate of cheaters estimated by the CDM, however, suggests that the present results must be seen as a lower bound and that the true prevalence of doping might be considerably higher. PMID:27218830
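For context, the basic Unrelated Question Model prevalence estimator can be written in a few lines; this is the textbook moment estimator with a known innocuous-question probability, not the generalized likelihood-ratio machinery used in the study, and the numbers are illustrative.

```python
def uqm_prevalence(yes_rate, p_sensitive, unrelated_yes_prob):
    """Unrelated Question Model moment estimator.  A respondent answers the
    sensitive question with probability p_sensitive and an innocuous question
    (with known 'yes' probability unrelated_yes_prob) otherwise, so

        yes_rate = p_sensitive * pi + (1 - p_sensitive) * unrelated_yes_prob

    which is solved for the sensitive-behaviour prevalence pi."""
    return (yes_rate - (1.0 - p_sensitive) * unrelated_yes_prob) / p_sensitive

# Illustrative numbers only (not taken from the study above).
print(uqm_prevalence(yes_rate=0.20, p_sensitive=2 / 3, unrelated_yes_prob=0.25))
```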
A likelihood ratio model for the determination of the geographical origin of olive oil.
Własiuk, Patryk; Martyna, Agnieszka; Zadora, Grzegorz
2015-01-01
Food fraud or food adulteration may be of forensic interest, for instance in the case of suspected deliberate mislabeling. On account of its potential health benefits and nutritional qualities, geographical origin determination of olive oil might be of special interest. The use of a likelihood ratio (LR) model has certain advantages in contrast to typical chemometric methods because the LR model takes into account the information about the sample rarity in a relevant population. Such properties are of particular interest to forensic scientists, and therefore it has been the aim of this study to examine the issue of olive oil classification with the use of different LR models and their pertinence under selected data pre-processing methods (logarithm-based data transformations) and a feature selection technique. This was carried out on data describing 572 Italian olive oil samples characterised by the content of 8 fatty acids in the lipid fraction. Three classification problems related to three regions of Italy (South, North and Sardinia) have been considered with the use of LR models. The correct classification rate and empirical cross entropy were taken into account as measures of performance of each model. The application of LR models in determining the geographical origin of olive oil proved satisfactory for the classification problems considered, across many variants of data pre-processing, since the rates of correct classification were close to 100% and a considerable reduction of information loss was observed. The work also presents a comparative study of the performance of linear discriminant analysis on the considered classification problems. An approach to the choice of the value of the smoothing parameter for the kernel density estimation based LR models is also highlighted. Copyright © 2014 Elsevier B.V. All rights reserved.
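A heavily simplified sketch of a density-based LR for geographical origin, with kernel density estimates standing in for both the numerator and denominator densities; the published models are more elaborate (separate within- and between-source variation, smoothing parameter selection), and the data below are simulated placeholders.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_likelihood_ratio(x, target_region_samples, other_regions_samples):
    """Density of the fatty-acid profile x under 'oil from the region of
    interest' divided by its density under 'oil from elsewhere', with kernel
    density estimates standing in for both densities (rows = variables)."""
    return gaussian_kde(target_region_samples)(x) / gaussian_kde(other_regions_samples)(x)

# Simulated placeholder data: 2 fatty-acid variables, 200 samples per group.
rng = np.random.default_rng(0)
south = rng.normal([14.0, 1.2], 0.3, size=(200, 2)).T
rest = rng.normal([11.0, 2.0], 0.5, size=(200, 2)).T
print(kde_likelihood_ratio(np.array([[13.8], [1.3]]), south, rest))
```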
Vermunt, Neeltje P C A; Westert, Gert P; Olde Rikkert, Marcel G M; Faber, Marjan J
2018-03-01
To assess the impact of patient characteristics, patient-professional engagement, communication and context on the probability that healthcare professionals will discuss goals or priorities with older patients. Secondary analysis of cross-sectional data from the 2014 Commonwealth Fund International Health Policy Survey of Older Adults. 11 western countries. Community-dwelling adults, aged 55 or older. Assessment of goals and priorities. The final sample size consisted of 17,222 respondents, 54% of whom reported an assessment of their goals and priorities (AGP) by healthcare professionals. In logistic regression model 1, which was used to analyse the entire population, the determinants found to have moderate to large effects on the likelihood of AGP were information exchange on stress, diet or exercise, or both. Country (living in Sweden) and continuity of care (no regular professional or organisation) had moderate to large negative effects on the likelihood of AGP. In model 2, which focussed on respondents who experienced continuity of care, country and information exchange on stress and lifestyle were the main determinants of AGP, with comparable odds ratios to model 1. Furthermore, a professional asking questions also increased the likelihood of AGP. Continuity of care and information exchange is associated with a higher probability of AGP, while people living in Sweden are less likely to experience these assessments. Further study is required to determine whether increasing information exchange and professionals asking more questions may improve goal setting with older patients. Key points A patient goal-oriented approach can be beneficial for older patients with chronic conditions or multimorbidity; however, discussing goals with these patients is not a common practice. The likelihood of discussing goals varies by country, occurring most commonly in the USA, and least often in Sweden. Country-level differences in continuity of care and questions asked by a regularly visited professional affect the goal discussion probability. Patient characteristics, including age, have less impact than expected on the likelihood of sharing goals.
Arellano, M; Garcia-Caselles, M P; Pi-Figueras, M; Miralles, R; Torres, R M; Aguilera, A; Cervera, A M
2004-01-01
The aim was to evaluate the clinical usefulness of the mini nutritional assessment (MNA) to identify malnutrition in elderly patients with cognitive impairment, admitted to a geriatric convalescence unit (intermediate care facility). Sixty-three patients with cognitive impairment were studied. Cognitive impairment was considered when mini mental state examination (MMSE) scores were below 21. The MNA and a nutritional evaluation according to the sequential model of the American Institute of Nutrition (AIN) were performed at admission. According to the AIN criteria, malnutrition was considered if there were abnormalities in at least one of the following parameters: albumin, cholesterol, body mass index (BMI), and brachial circumference. Based on these criteria, 27 patients (42.8%) proved to be undernourished at admission, whereas, according to the original MNA scores, 39 patients (61.9%) were undernourished, 23 (36.5%) were at risk of malnutrition, and 1 (1.5%) was normal. The analyzed population was divided into four categories (quartiles) of the MNA scores: very low (≤13.5), low (>13.5 and ≤16), intermediate (>16 and ≤18.5) and high (>18.5). Likelihood ratios of each MNA quartile were obtained by dividing the percentage of patients in a given MNA category who were undernourished (according to AIN) by the percentage of patients in the same MNA category who were not undernourished. In the very low MNA quartile, this likelihood ratio was 2.79, and for the low MNA quartile it was 0.49. For the intermediate and high MNA categories, likelihood ratios were 1.0 and 0.07, respectively. In the present study, the MNA identified undernourished patients with a high clinical diagnostic impact only when very low scores (≤13) were obtained.
A guideline for the validation of likelihood ratio methods used for forensic evidence evaluation.
Meuwly, Didier; Ramos, Daniel; Haraksim, Rudolf
2017-07-01
This Guideline proposes a protocol for the validation of forensic evaluation methods at the source level, using the Likelihood Ratio framework as defined within the Bayes' inference model. In the context of the inference of identity of source, the Likelihood Ratio is used to evaluate the strength of the evidence for a trace specimen, e.g. a fingermark, and a reference specimen, e.g. a fingerprint, to originate from common or different sources. Some theoretical aspects of probabilities necessary for this Guideline were discussed prior to its elaboration, which started after a workshop of forensic researchers and practitioners involved in this topic. In the workshop, the following questions were addressed: "which aspects of a forensic evaluation scenario need to be validated?", "what is the role of the LR as part of a decision process?" and "how to deal with uncertainty in the LR calculation?". The question "what to validate?" focuses on the validation methods and criteria, while "how to validate?" deals with the implementation of the validation protocol. Answers to these questions were deemed necessary with several objectives. First, concepts typical for validation standards [1], such as performance characteristics, performance metrics and validation criteria, will be adapted or applied by analogy to the LR framework. Second, a validation strategy will be defined. Third, validation methods will be described. Finally, a validation protocol and an example of a validation report will be proposed, which can be applied to the forensic fields developing and validating LR methods for the evaluation of the strength of evidence at source level under the following propositions. Copyright © 2016. Published by Elsevier B.V.
Order-restricted inference for means with missing values.
Wang, Heng; Zhong, Ping-Shou
2017-09-01
Missing values appear very often in many applications, but the problem of missing values has not received much attention in testing order-restricted alternatives. Under the missing at random (MAR) assumption, we impute the missing values nonparametrically using kernel regression. For data with imputation, the classical likelihood ratio test designed for testing the order-restricted means is no longer applicable since the likelihood does not exist. This article proposes a novel method for constructing test statistics for assessing means with an increasing order or a decreasing order based on jackknife empirical likelihood (JEL) ratio. It is shown that the JEL ratio statistic evaluated under the null hypothesis converges to a chi-bar-square distribution, whose weights depend on missing probabilities and nonparametric imputation. Simulation study shows that the proposed test performs well under various missing scenarios and is robust for normally and nonnormally distributed data. The proposed method is applied to an Alzheimer's disease neuroimaging initiative data set for finding a biomarker for the diagnosis of the Alzheimer's disease. © 2017, The International Biometric Society.
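A minimal sketch of the kernel-regression imputation step described above (Nadaraya-Watson with a Gaussian kernel and a fixed bandwidth); the jackknife empirical likelihood test itself is built on top of such imputed values and is not reproduced here.

```python
import numpy as np

def nw_impute(x, y, bandwidth=1.0):
    """Impute missing responses by Nadaraya-Watson kernel regression: each
    missing y is a Gaussian-kernel weighted average of the observed responses
    of subjects with nearby covariate values (minimal sketch under MAR)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float).copy()
    observed = ~np.isnan(y)
    for i in np.where(~observed)[0]:
        w = np.exp(-0.5 * ((x[observed] - x[i]) / bandwidth) ** 2)
        y[i] = np.sum(w * y[observed]) / np.sum(w)
    return y

x = np.array([0.1, 0.4, 0.5, 0.9, 1.3, 1.8])
y = np.array([1.0, np.nan, 1.4, np.nan, 2.1, 2.6])
print(nw_impute(x, y, bandwidth=0.5))
```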
Bazot, Marc; Daraï, Emile
2018-03-01
The aim of the present review, conducted according to PRISMA statement recommendations, was to evaluate the contribution of transvaginal sonography (TVS) and magnetic resonance imaging (MRI) to the diagnosis of adenomyosis. Although there is a lack of consensus on adenomyosis classification, three subtypes are described: internal adenomyosis, external adenomyosis, and adenomyomas. Using TVS, whatever the subtype, pooled sensitivities, pooled specificities, and pooled positive likelihood ratios are 0.72-0.82, 0.85-0.81, and 4.67-3.7, respectively, but with high heterogeneity between the studies. MRI has a pooled sensitivity of 0.77, specificity of 0.89, positive likelihood ratio of 6.5, and negative likelihood ratio of 0.2 for all subtypes. Our results suggest that MRI is more useful than TVS in the diagnosis of adenomyosis. Further studies are required to determine the performance of direct signs (cystic component) and indirect signs (characteristics of the junctional zone) to avoid misdiagnosis of adenomyosis. Copyright © 2018 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Sun, Changling; Zhang, Yayun; Han, Xue; Du, Xiaodong
2018-03-01
Objective The purpose of this study was to verify the effectiveness of the narrow band imaging (NBI) system in diagnosing nasopharyngeal cancer (NPC) as compared with white light endoscopy. Data Sources PubMed, Cochrane Library, EMBASE, CNKI, and Wan Fang databases. Review Methods Data analyses were performed with Meta-Disc. The updated Quality Assessment of Diagnostic Accuracy Studies-2 tool was used to assess study quality and potential bias. Publication bias was assessed with a Deeks asymmetry test. The registry number of the protocol published on PROSPERO is CRD42015026244. Results This meta-analysis included 10 studies of 1337 lesions. For NBI diagnosis of NPC, the pooled values were as follows: sensitivity, 0.83 (95% CI, 0.80-0.86); specificity, 0.91 (95% CI, 0.89-0.93); positive likelihood ratio, 8.82 (95% CI, 5.12-15.21); negative likelihood ratio, 0.18 (95% CI, 0.12-0.27); and diagnostic odds ratio, 65.73 (95% CI, 36.74-117.60). The area under the curve was 0.9549. For white light endoscopy in diagnosing NPC, the pooled values were as follows: sensitivity, 0.79 (95% CI, 0.75-0.83); specificity, 0.87 (95% CI, 0.84-0.90); positive likelihood ratio, 5.02 (95% CI, 1.99-12.65); negative likelihood ratio, 0.34 (95% CI, 0.24-0.49); and diagnostic odds ratio, 16.89 (95% CI, 5.98-47.66). The area under the curve was 0.8627. The evaluation of heterogeneity, calculated per the diagnostic odds ratio, gave an I² of 0.326. No marked publication bias (P = .68) existed in this meta-analysis. Conclusion The sensitivity and specificity of NBI for the diagnosis of NPC are similar to those of white light endoscopy, and the potential value of NBI for the diagnosis of NPC needs to be validated further.
Jacob, Laurent; Combes, Florence; Burger, Thomas
2018-06-18
We propose a new hypothesis test for the differential abundance of proteins in mass-spectrometry-based relative quantification. An important feature of this type of high-throughput analysis is that it involves an enzymatic digestion of the sample proteins into peptides prior to identification and quantification. Due to numerous sequence homologies, different proteins can lead to peptides with identical amino acid chains, so that their parent protein is ambiguous. These so-called shared peptides make the protein-level statistical analysis a challenge and are often not accounted for. In this article, we use a linear model describing peptide-protein relationships to build a likelihood ratio test of differential abundance for proteins. We show that the likelihood ratio statistic can be computed in time linear in the number of peptides. We also provide the asymptotic null distribution of a regularized version of our statistic. Experiments on both real and simulated datasets show that our procedures outperform state-of-the-art methods. The procedures are available via the pepa.test function of the DAPAR Bioconductor R package.
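The general construction behind such a test can be illustrated with a likelihood ratio test between nested Gaussian linear models; this is a generic sketch, not the authors' peptide-protein model or the pepa.test implementation.

```python
import numpy as np
from scipy.stats import chi2

def lrt_nested_linear(y, X_full, X_reduced):
    """Likelihood ratio test of a reduced Gaussian linear model against a nested full model."""
    n = len(y)
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    # with the error variance profiled out, -2 log LR = n * log(RSS_reduced / RSS_full)
    stat = n * np.log(rss(X_reduced) / rss(X_full))
    df = X_full.shape[1] - X_reduced.shape[1]
    return stat, chi2.sf(stat, df)

# toy example: does x2 add anything beyond x1?
rng = np.random.default_rng(1)
n = 120
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 0.8 * x1 + rng.normal(size=n)
X_reduced = np.column_stack([np.ones(n), x1])
X_full = np.column_stack([X_reduced, x2])
print(lrt_nested_linear(y, X_full, X_reduced))
```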
Predictors of intraoperative hypotension and bradycardia.
Cheung, Christopher C; Martyn, Alan; Campbell, Norman; Frost, Shaun; Gilbert, Kenneth; Michota, Franklin; Seal, Douglas; Ghali, William; Khan, Nadia A
2015-05-01
Perioperative hypotension and bradycardia in the surgical patient are associated with adverse outcomes, including stroke. We developed and evaluated a new preoperative risk model in predicting intraoperative hypotension or bradycardia in patients undergoing elective noncardiac surgery. Prospective data were collected in 193 patients undergoing elective, noncardiac surgery. Intraoperative hypotension was defined as systolic blood pressure <90 mm Hg for >5 minutes or a 35% decrease in the mean arterial blood pressure. Intraoperative bradycardia was defined as a heart rate of <60 beats/min for >5 minutes. A logistic regression model was developed for predicting intraoperative hypotension or bradycardia with bootstrap validation. Model performance was assessed using area under the receiver operating curves and Hosmer-Lemeshow tests. A total of 127 patients developed hypotension or bradycardia. The average age of participants was 67.6 ± 11.3 years, and 59.1% underwent major surgery. A final 5-item score was developed, including preoperative Heart rate (<60 beats/min), preoperative hypotension (<110/60 mm Hg), Elderly age (>65 years), preoperative renin-Angiotensin blockade (angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, or beta-blockers), Revised cardiac risk index (≥3 points), and Type of surgery (major surgery), entitled the "HEART" score. The HEART score was moderately predictive of intraoperative bradycardia or hypotension (odds ratio, 2.51; 95% confidence interval, 1.79-3.53; C-statistic, 0.75). Maximum points on the HEART score were associated with an increased likelihood ratio for intraoperative bradycardia or hypotension (likelihood ratio, +3.64). The 5-point HEART score was predictive of intraoperative hypotension or bradycardia. These findings suggest a role for using the HEART score to better risk-stratify patients preoperatively and may help guide decisions on perioperative management of blood pressure and heart rate-lowering medications and anesthetic agents. Copyright © 2015 Elsevier Inc. All rights reserved.
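A hedged sketch of the modelling workflow described above: fit a logistic regression to binary preoperative predictors and summarise discrimination with the C-statistic (ROC AUC). The data and predictors below are simulated placeholders, not the HEART items or the study data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 193
X = rng.integers(0, 2, size=(n, 5)).astype(float)           # five hypothetical binary risk items
logit = -1.0 + X @ np.array([0.9, 0.7, 0.6, 0.8, 0.5])      # assumed true coefficients
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # simulated hypotension/bradycardia outcome

model = LogisticRegression().fit(X, y)
# apparent (in-sample) C-statistic; the study additionally used bootstrap validation
c_statistic = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(round(c_statistic, 2))
```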
Langholz, Bryan; Thomas, Duncan C.; Stovall, Marilyn; Smith, Susan A.; Boice, John D.; Shore, Roy E.; Bernstein, Leslie; Lynch, Charles F.; Zhang, Xinbo; Bernstein, Jonine L.
2009-01-01
Summary Methods for the analysis of individually matched case-control studies with location-specific radiation dose and tumor location information are described. These include likelihood methods for analyses that just use cases with precise location of tumor information and methods that also include cases with imprecise tumor location information. The theory establishes that each of these likelihood based methods estimates the same radiation rate ratio parameters, within the context of the appropriate model for location and subject level covariate effects. The underlying assumptions are characterized and the potential strengths and limitations of each method are described. The methods are illustrated and compared using the WECARE study of radiation and asynchronous contralateral breast cancer. PMID:18647297
NASA Astrophysics Data System (ADS)
Zeng, X.
2015-12-01
A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated from each model's marginal likelihood and prior probability. The heavy computation burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computation burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through the numerical experiment of a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators, including the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: repeated TIE estimates of a conceptual model's marginal likelihood show significantly less variability than those obtained from the other estimators. In addition, the SG surrogates efficiently facilitate BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the required model executions of BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
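To make the contrast between marginal likelihood estimators concrete, here is a minimal sketch for a conjugate Gaussian model whose marginal likelihood is available in closed form: the arithmetic mean estimator averages the likelihood over prior draws, while the harmonic mean estimator uses posterior draws and is known to be unstable. This is a toy illustration under assumed values, not the groundwater model or the thermodynamic integration estimator.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(3)
sigma, tau, m0 = 1.0, 2.0, 0.0            # known error sd, prior sd, prior mean (assumed)
y = rng.normal(1.5, sigma, size=20)
n = y.size

loglik = lambda mu: norm.logpdf(y[:, None], loc=mu, scale=sigma).sum(axis=0)

# arithmetic mean estimator (AME): average the likelihood over draws from the prior
mu_prior = rng.normal(m0, tau, size=50_000)
log_ame = logsumexp(loglik(mu_prior)) - np.log(mu_prior.size)

# harmonic mean estimator (HME): harmonic mean of the likelihood over posterior draws
post_prec = n / sigma**2 + 1 / tau**2
post_mean = (y.sum() / sigma**2 + m0 / tau**2) / post_prec
mu_post = rng.normal(post_mean, post_prec**-0.5, size=50_000)
log_hme = -(logsumexp(-loglik(mu_post)) - np.log(mu_post.size))

# exact log marginal likelihood for comparison: y ~ N(m0*1, sigma^2 I + tau^2 J)
cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
log_exact = multivariate_normal.logpdf(y, mean=np.full(n, m0), cov=cov)
print(log_ame, log_hme, log_exact)
```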
NASA Astrophysics Data System (ADS)
Goodman, Steven N.
1989-11-01
This dissertation explores the use of a mathematical measure of statistical evidence, the log likelihood ratio, in clinical trials. The methods and thinking behind the use of an evidential measure are contrasted with traditional methods of analyzing data, which depend primarily on a p-value as an estimate of the statistical strength of an observed data pattern. It is contended that neither the behavioral dictates of Neyman-Pearson hypothesis testing methods, nor the coherency dictates of Bayesian methods are realistic models on which to base inference. The use of the likelihood alone is applied to four aspects of trial design or conduct: the calculation of sample size, the monitoring of data, testing for the equivalence of two treatments, and meta-analysis--the combining of results from different trials. Finally, a more general model of statistical inference, using belief functions, is used to see if it is possible to separate the assessment of evidence from our background knowledge. It is shown that traditional and Bayesian methods can be modeled as two ends of a continuum of structured background knowledge, methods which summarize evidence at the point of maximum likelihood assuming no structure, and Bayesian methods assuming complete knowledge. Both schools are seen to be missing a concept of ignorance- -uncommitted belief. This concept provides the key to understanding the problem of sampling to a foregone conclusion and the role of frequency properties in statistical inference. The conclusion is that statistical evidence cannot be defined independently of background knowledge, and that frequency properties of an estimator are an indirect measure of uncommitted belief. Several likelihood summaries need to be used in clinical trials, with the quantitative disparity between summaries being an indirect measure of our ignorance. This conclusion is linked with parallel ideas in the philosophy of science and cognitive psychology.
Ink dating part II: Interpretation of results in a legal perspective.
Koenig, Agnès; Weyermann, Céline
2018-01-01
The development of an ink dating method requires a substantial investment of resources to step from the monitoring of ink ageing on paper to the determination of the actual age of a questioned ink entry. This article aimed at developing and evaluating the potential of three interpretation models to date ink entries in a legal perspective: (1) the threshold model comparing analytical results to tabulated values in order to determine the maximal possible age of an ink entry, (2) the trend tests focusing on the "ageing status" of an ink entry, and (3) the likelihood ratio calculation comparing the probabilities of observing the results under at least two alternative hypotheses. This is the first report showing ink dating interpretation results on a ballpoint pen ink reference population. In the first part of this paper three ageing parameters were selected as promising from the population of 25 ink entries aged for 4 to 304 days: the quantity of phenoxyethanol (PE), the difference between the PE quantities contained in a naturally aged sample and an artificially aged sample (R_NORM) and the solvent loss ratio (R%). In the current part, each model was tested using the three selected ageing parameters. Results showed that threshold definition remains a simple model easily applicable in practice, but that the risk of false positives cannot be completely avoided without significantly reducing the feasibility of the ink dating approaches. The trend tests from the literature showed unreliable results and an alternative had to be developed, which yielded encouraging results. The likelihood ratio calculation introduced a degree of certainty to the ink dating conclusion in comparison to the threshold approach. The proposed model remains quite simple to apply in practice, but should be further developed in order to yield reliable results in practice. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
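The likelihood ratio idea for a single ageing parameter can be sketched by evaluating an observed value under densities fitted to reference entries known to be younger and older than the relevant deadline; the Gaussian densities and the solvent-loss values below are illustrative assumptions, not the authors' model or data.

```python
import numpy as np
from scipy.stats import norm

def gaussian_lr(x, fresh_values, old_values):
    """LR for one ageing measurement under two Gaussian hypotheses (fresh vs. old entry)."""
    f_fresh = norm(np.mean(fresh_values), np.std(fresh_values, ddof=1))
    f_old = norm(np.mean(old_values), np.std(old_values, ddof=1))
    return f_fresh.pdf(x) / f_old.pdf(x)   # >1 supports "fresh", <1 supports "old"

# hypothetical solvent loss ratios (R%) for reference entries of known age
fresh = [62.0, 55.5, 58.3, 60.1, 64.7]
old = [21.4, 18.9, 25.0, 23.3, 19.8]
print(gaussian_lr(57.0, fresh, old))
```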
ERIC Educational Resources Information Center
SAW, J.G.
This paper deals with some tests of hypothesis frequently encountered in the analysis of multivariate data. The type of hypothesis considered is that which the statistician can answer in the negative or affirmative. The Doolittle method makes it possible to evaluate the determinant of a matrix of high order, to solve a matrix equation, or to…
Safari, Saeed; Baratloo, Alireza; Hashemi, Behrooz; Rahmati, Farhad; Forouzanfar, Mohammad Mehdi; Motamedi, Maryam; Mirmohseni, Ladan
2016-01-01
Background: Determining etiologic causes and prognosis can significantly improve management of syncope patients. The present study aimed to compare the values of San Francisco, Osservatorio Epidemiologico sulla Sincope nel Lazio (OESIL), Boston, and Risk Stratification of Syncope in the Emergency Department (ROSE) score clinical decision rules in predicting the short-term serious outcome of syncope patients. Materials and Methods: The present diagnostic accuracy study with 1-week follow-up was designed to evaluate the predictive values of the four mentioned clinical decision rules. Screening performance characteristics of each model in predicting mortality, myocardial infarction (MI), and cerebrovascular accidents (CVAs) were calculated and compared. To evaluate the value of each aforementioned model in predicting the outcome, sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio were calculated, and receiver operating characteristic (ROC) curve analysis was performed. Results: A total of 187 patients (mean age: 64.2 ± 17.2 years) were enrolled in the study. Mortality, MI, and CVA were seen in 19 (10.2%), 12 (6.4%), and 36 (19.2%) patients, respectively. Area under the ROC curve for OESIL, San Francisco, Boston, and ROSE models in predicting the risk of 1-week mortality, MI, and CVA was in the 30–70% range, with no significant difference among models (P > 0.05). The pooled model did not show higher accuracy in prediction of mortality, MI, and CVA compared to the others (P > 0.05). Conclusion: This study revealed the weakness of all four evaluated models in predicting the short-term serious outcome of syncope patients referred to the emergency department, without any significant advantage of one over the others. PMID:27904602
Rational clinical evaluation of suspected acute coronary syndromes: The value of more information.
Hancock, David G; Chuang, Ming-Yu Anthony; Bystrom, Rebecca; Halabi, Amera; Jones, Rachel; Horsfall, Matthew; Cullen, Louise; Parsonage, William A; Chew, Derek P
2017-12-01
Many meta-analyses have provided synthesised likelihood ratio data to aid clinical decision-making. However, much less has been published on how to safely combine clinical information in practice. We aimed to explore the benefits and risks of pooling clinical information during the ED assessment of suspected acute coronary syndrome. Clinical information on 1776 patients was collected within a randomised trial conducted across five South Australian EDs between July 2011 and March 2013. Bayes theorem was used to calculate patient-specific post-test probabilities using age- and gender-specific pre-test probabilities and likelihood ratios corresponding to the presence or absence of 18 clinical factors. Model performance was assessed as the presence of adverse cardiac outcomes among patients theoretically discharged at a post-test probability less than 1%. Bayes theorem-based models containing high-sensitivity troponin T (hs-troponin) outperformed models excluding hs-troponin, as well as models utilising TIMI and GRACE scores. In models containing hs-troponin, a plateau in improving discharge safety was observed after the inclusion of four clinical factors. Models with fewer clinical factors better approximated the true event rate, tended to be safer and resulted in a smaller standard deviation in post-test probability estimates. We showed that there is a definable point where additional information becomes uninformative and may actually lead to less certainty. This evidence supports the concept that clinical decision-making in the assessment of suspected acute coronary syndrome should be focused on obtaining the least amount of information that provides the highest benefit for informing the decisions of admission or discharge. © 2017 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
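A minimal sketch of the post-test probability calculation used in such Bayes-theorem-based models: convert the pre-test probability to odds, multiply by the likelihood ratio of each observed clinical factor, and convert back. The factor likelihood ratios below are placeholders, not values estimated in the study, and the calculation assumes the factors are conditionally independent given disease status, which is exactly the assumption that erodes as more factors are pooled.

```python
def post_test_probability(pre_test_prob, likelihood_ratios):
    """Combine a pre-test probability with a sequence of likelihood ratios via Bayes' theorem."""
    odds = pre_test_prob / (1.0 - pre_test_prob)
    for lr in likelihood_ratios:
        odds *= lr                      # assumes conditional independence of the factors
    return odds / (1.0 + odds)

# hypothetical example: 8% pre-test probability, then three clinical factors
print(post_test_probability(0.08, [0.1, 0.8, 1.9]))   # roughly 0.013
```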
Predicting In-State Workforce Retention After Graduate Medical Education Training.
Koehler, Tracy J; Goodfellow, Jaclyn; Davis, Alan T; Spybrook, Jessaca; vanSchagen, John E; Schuh, Lori
2017-02-01
There is a paucity of literature when it comes to identifying predictors of in-state retention of graduate medical education (GME) graduates, such as the demographic and educational characteristics of these physicians. The purpose was to use demographic and educational predictors to identify graduates from a single Michigan GME sponsoring institution, who are also likely to practice medicine in Michigan post-GME training. We included all residents and fellows who graduated between 2000 and 2014 from 1 of 18 GME programs at a Michigan-based sponsoring institution. Predictor variables identified by logistic regression with cross-validation were used to create a scoring tool to determine the likelihood of a GME graduate to practice medicine in the same state post-GME training. A 6-variable model, which included 714 observations, was identified. The predictor variables were birth state, program type (primary care versus non-primary care), undergraduate degree location, medical school location, state in which GME training was completed, and marital status. The positive likelihood ratio (+LR) for the scoring tool was 5.31, while the negative likelihood ratio (-LR) was 0.46, with an accuracy of 74%. The +LR indicates that the scoring tool was useful in predicting whether graduates who trained in a Michigan-based GME sponsoring institution were likely to practice medicine in Michigan following training. Other institutions could use these techniques to identify key information that could help pinpoint matriculating residents/fellows likely to practice medicine within the state in which they completed their training.
Haidar, Ziad A; Papanna, Ramesha; Sibai, Baha M; Tatevian, Nina; Viteri, Oscar A; Vowels, Patricia C; Blackwell, Sean C; Moise, Kenneth J
2017-08-01
Traditionally, 2-dimensional ultrasound parameters have been used for the diagnosis of a suspected morbidly adherent placenta previa. More objective techniques have not been well studied yet. The objective of the study was to determine the ability of prenatal 3-dimensional power Doppler analysis of flow and vascular indices to predict the morbidly adherent placenta objectively. A prospective cohort study was performed in women between 28 and 32 gestational weeks with known placenta previa. Patients underwent a two-dimensional gray-scale ultrasound that determined management decisions. 3-Dimensional power Doppler volumes were obtained during the same examination and vascular, flow, and vascular flow indices were calculated after manual tracing of the viewed placenta in the sweep; data were blinded to obstetricians. Morbidly adherent placenta was confirmed by histology. Severe morbidly adherent placenta was defined as increta/percreta on histology, blood loss >2000 mL, and >2 units of PRBC transfused. Sensitivities, specificities, predictive values, and likelihood ratios were calculated. Student t and χ 2 tests, logistic regression, receiver-operating characteristic curves, and intra- and interrater agreements using Kappa statistics were performed. The following results were found: (1) 50 women were studied: 23 had morbidly adherent placenta, of which 12 (52.2%) were severe morbidly adherent placenta; (2) 2-dimensional parameters diagnosed morbidly adherent placenta with a sensitivity of 82.6% (95% confidence interval, 60.4-94.2), a specificity of 88.9% (95% confidence interval, 69.7-97.1), a positive predictive value of 86.3% (95% confidence interval, 64.0-96.4), a negative predictive value of 85.7% (95% confidence interval, 66.4-95.3), a positive likelihood ratio of 7.4 (95% confidence interval, 2.5-21.9), and a negative likelihood ratio of 0.2 (95% confidence interval, 0.08-0.48); (3) mean values of the vascular index (32.8 ± 7.4) and the vascular flow index (14.2 ± 3.8) were higher in morbidly adherent placenta (P < .001); (4) area under the receiver-operating characteristic curve for the vascular and vascular flow indices were 0.99 and 0.97, respectively; (5) the vascular index ≥21 predicted morbidly adherent placenta with a sensitivity and a specificity of 95% (95% confidence interval, 88.2-96.9) and 91%, respectively (95% confidence interval, 87.5-92.4), 92% positive predictive value (95% confidence interval, 85.5-94.3), 90% negative predictive value (95% confidence interval, 79.9-95.3), positive likelihood ratio of 10.55 (95% confidence interval, 7.06-12.75), and negative likelihood ratio of 0.05 (95% confidence interval, 0.03-0.13); and (6) for the severe morbidly adherent placenta, 2-dimensional ultrasound had a sensitivity of 33.3% (95% confidence interval, 11.3-64.6), a specificity of 81.8% (95% confidence interval, 47.8-96.8), a positive predictive value of 66.7% (95% confidence interval, 24.1-94.1), a negative predictive value of 52.9% (95% confidence interval, 28.5-76.1), a positive likelihood ratio of 1.83 (95% confidence interval, 0.41-8.11), and a negative likelihood ratio of 0.81 (95% confidence interval, 0.52-1.26). 
A vascular index ≥31 predicted the diagnosis of a severe morbidly adherent placenta with a 100% sensitivity (95% confidence interval, 72-100), a 90% specificity (95% confidence interval, 81.7-93.8), an 88% positive predictive value (95% confidence interval, 55.0-91.3), a 100% negative predictive value (95% confidence interval, 90.9-100), a positive likelihood ratio of 10.0 (95% confidence interval, 3.93-16.13), and a negative likelihood ratio of 0 (95% confidence interval, 0-0.34). Intrarater and interrater agreements were 94% (P < .001) and 93% (P < .001), respectively. The vascular index accurately predicts the morbidly adherent placenta in patients with placenta previa. In addition, 3-dimensional power Doppler vascular and vascular flow indices were more predictive of severe cases of morbidly adherent placenta compared with 2-dimensional ultrasound. This objective technique may limit the variations in diagnosing morbidly adherent placenta because of the subjectivity of 2-dimensional ultrasound interpretations. Copyright © 2017 Elsevier Inc. All rights reserved.
Rampersaud, E; Morris, R W; Weinberg, C R; Speer, M C; Martin, E R
2007-01-01
Genotype-based likelihood-ratio tests (LRT) of association that examine maternal and parent-of-origin effects have been previously developed in the framework of log-linear and conditional logistic regression models. In the situation where parental genotypes are missing, the expectation-maximization (EM) algorithm has been incorporated in the log-linear approach to allow incomplete triads to contribute to the LRT. We present an extension to this model which we call the Combined_LRT that incorporates additional information from the genotypes of unaffected siblings to improve assignment of incompletely typed families to mating type categories, thereby improving inference of missing parental data. Using simulations involving a realistic array of family structures, we demonstrate the validity of the Combined_LRT under the null hypothesis of no association and provide power comparisons under varying levels of missing data and using sibling genotype data. We demonstrate the improved power of the Combined_LRT compared with the family-based association test (FBAT), another widely used association test. Lastly, we apply the Combined_LRT to a candidate gene analysis in Autism families, some of which have missing parental genotypes. We conclude that the proposed log-linear model will be an important tool for future candidate gene studies, for many complex diseases where unaffected siblings can often be ascertained and where epigenetic factors such as imprinting may play a role in disease etiology.
A likelihood ratio test for evolutionary rate shifts and functional divergence among proteins
Knudsen, Bjarne; Miyamoto, Michael M.
2001-01-01
Changes in protein function can lead to changes in the selection acting on specific residues. This can often be detected as evolutionary rate changes at the sites in question. A maximum-likelihood method for detecting evolutionary rate shifts at specific protein positions is presented. The method determines significance values of the rate differences to give a sound statistical foundation for the conclusions drawn from the analyses. A statistical test for detecting slowly evolving sites is also described. The methods are applied to a set of Myc proteins for the identification of both conserved sites and those with changing evolutionary rates. Those positions with conserved and changing rates are related to the structures and functions of their proteins. The results are compared with an earlier Bayesian method, thereby highlighting the advantages of the new likelihood ratio tests. PMID:11734650
Martell, R F; Desmet, A L
2001-12-01
This study departed from previous research on gender stereotyping in the leadership domain by adopting a more comprehensive view of leadership and using a diagnostic-ratio measurement strategy. One hundred and fifty-one managers (95 men and 56 women) judged the leadership effectiveness of male and female middle managers by providing likelihood ratings for 14 categories of leader behavior. As expected, the likelihood ratings for some leader behaviors were greater for male managers, whereas for other leader behaviors, the likelihood ratings were greater for female managers or were no different. Leadership ratings revealed some evidence of a same-gender bias. Providing explicit verification of managerial success had only a modest effect on gender stereotyping. The merits of adopting a probabilistic approach in examining the perception and treatment of stigmatized groups are discussed.
2009-01-01
Background The International Commission on Radiological Protection (ICRP) recommended annual occupational dose limit is 20 mSv. Cancer mortality in Japanese A-bomb survivors exposed to less than 20 mSv external radiation in 1945 was analysed previously, using a latency model with non-linear dose response. Questions were raised regarding statistical inference with this model. Methods Cancers with over 100 deaths in the 0 - 20 mSv subcohort of the 1950-1990 Life Span Study are analysed with Poisson regression models incorporating latency, allowing linear and non-linear dose response. Bootstrap percentile and Bias-corrected accelerated (BCa) methods and simulation of the Likelihood Ratio Test lead to Confidence Intervals for Excess Relative Risk (ERR) and tests against the linear model. Results The linear model shows significant large, positive values of ERR for liver and urinary cancers at latencies from 37 - 43 years. Dose response below 20 mSv is strongly non-linear at the optimal latencies for the stomach (11.89 years), liver (36.9), lung (13.6), leukaemia (23.66), and pancreas (11.86) and across broad latency ranges. Confidence Intervals for ERR are comparable using Bootstrap and Likelihood Ratio Test methods and BCa 95% Confidence Intervals are strictly positive across latency ranges for all 5 cancers. Similar risk estimates for 10 mSv (lagged dose) are obtained from the 0 - 20 mSv and 5 - 500 mSv data for the stomach, liver, lung and leukaemia. Dose response for the latter 3 cancers is significantly non-linear in the 5 - 500 mSv range. Conclusion Liver and urinary cancer mortality risk is significantly raised using a latency model with linear dose response. A non-linear model is strongly superior for the stomach, liver, lung, pancreas and leukaemia. Bootstrap and Likelihood-based confidence intervals are broadly comparable and ERR is strictly positive by bootstrap methods for all 5 cancers. Except for the pancreas, similar estimates of latency and risk from 10 mSv are obtained from the 0 - 20 mSv and 5 - 500 mSv subcohorts. Large and significant cancer risks for Japanese survivors exposed to less than 20 mSv external radiation from the atomic bombs in 1945 cast doubt on the ICRP recommended annual occupational dose limit. PMID:20003238
Human variability in mercury toxicokinetics and steady state biomarker ratios.
Bartell, S M; Ponce, R A; Sanga, R N; Faustman, E M
2000-10-01
Regulatory guidelines regarding methylmercury exposure depend on dose-response models relating observed mercury concentrations in maternal blood, cord blood, and maternal hair to developmental neurobehavioral endpoints. Generalized estimates of the maternal blood-to-hair, blood-to-intake, or hair-to-intake ratios are necessary for linking exposure to biomarker-based dose-response models. Most assessments have used point estimates for these ratios; however, significant interindividual and interstudy variability has been reported. For example, a maternal ratio of 250 ppm in hair per mg/L in blood is commonly used in models, but a 1990 WHO review reports mean ratios ranging from 140 to 370 ppm per mg/L. To account for interindividual and interstudy variation in applying these ratios to risk and safety assessment, some researchers have proposed representing the ratios with probability distributions and conducting probabilistic assessments. Such assessments would allow regulators to consider the range and likelihood of mercury exposures in a population, rather than limiting the evaluation to an estimate of the average exposure or a single conservative exposure estimate. However, no consensus exists on the most appropriate distributions for representing these parameters. We discuss published reviews of blood-to-hair and blood-to-intake steady state ratios for mercury and suggest statistical approaches for combining existing datasets to form generalized probability distributions for mercury distribution ratios. Although generalized distributions may not be applicable to all populations, they allow a more informative assessment than point estimates where individual biokinetic information is unavailable. Whereas development and use of these distributions will improve existing exposure and risk models, additional efforts in data generation and model development are required.
Martyna, Agnieszka; Michalska, Aleksandra; Zadora, Grzegorz
2015-05-01
The problem of interpretation of common provenance of the samples within the infrared spectra database of polypropylene samples from car body parts and plastic containers as well as Raman spectra databases of blue solid and metallic automotive paints was under investigation. The research involved statistical tools such as likelihood ratio (LR) approach for expressing the evidential value of observed similarities and differences in the recorded spectra. Since the LR models can be easily proposed for databases described by a few variables, research focused on the problem of spectra dimensionality reduction characterised by more than a thousand variables. The objective of the studies was to combine the chemometric tools easily dealing with multidimensionality with an LR approach. The final variables used for LR models' construction were derived from the discrete wavelet transform (DWT) as a data dimensionality reduction technique supported by methods for variance analysis and corresponded with chemical information, i.e. typical absorption bands for polypropylene and peaks associated with pigments present in the car paints. Univariate and multivariate LR models were proposed, aiming at obtaining more information about the chemical structure of the samples. Their performance was controlled by estimating the levels of false positive and false negative answers and using the empirical cross entropy approach. The results for most of the LR models were satisfactory and enabled solving the stated comparison problems. The results prove that the variables generated from DWT preserve signal characteristic, being a sparse representation of the original signal by keeping its shape and relevant chemical information.
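The dimensionality-reduction idea can be sketched with a plain Haar wavelet transform that compresses a spectrum of roughly a thousand points into a short vector of approximation coefficients suitable for an LR model; the Haar wavelet, the decomposition depth, and the synthetic spectrum below are illustrative assumptions, not the authors' wavelet or variable-selection procedure.

```python
import numpy as np

def haar_approximation(signal, levels=5):
    """Return the level-`levels` Haar approximation coefficients of a 1-D signal."""
    coeffs = np.asarray(signal, float)
    for _ in range(levels):
        if coeffs.size % 2:                                  # pad to an even length
            coeffs = np.append(coeffs, coeffs[-1])
        pairs = coeffs.reshape(-1, 2)
        coeffs = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)    # keep only the approximation part
    return coeffs

# a 1024-point synthetic "spectrum" with two peaks, reduced to 32 features
x = np.linspace(0, 1, 1024)
spectrum = np.exp(-((x - 0.3) / 0.02) ** 2) + 0.5 * np.exp(-((x - 0.7) / 0.05) ** 2)
features = haar_approximation(spectrum)
print(features.shape)   # (32,)
```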
Tanaka, Ryo; Umehara, Takuya; Fujimura, Takafumi; Ozawa, Junya
2016-12-01
To develop and assess a clinical prediction rule (CPR) to predict declines in activities of daily living (ADL) at 6 months after surgery for hip fracture repair. Prospective, cohort study. From hospital to home. Patients (N=104) with hip fractures after surgery. Not applicable. ADL were assessed using the Barthel Index at 6 months after surgery. At 6 months after surgery, 86 patients (82.6%) were known to be alive, 1 patient (1.0%) had died, and 17 (16.3%) were lost to follow-up. Thirty-two patients (37.2%) did not recover their ADL at 6 months after surgery to levels before fracture. The classification and regression trees methodology was used to develop 2 models to predict a decline in ADL: (1) model 1 included age, type of fracture, and care level before fracture (sensitivity=75.0%, specificity=81.5%, positive predictive value=70.6%, positive likelihood ratio=4.050); and (2) model 2 included the degree of independence 2 weeks postsurgery for ADL chair transfer, ADL ambulation, and age (sensitivity=65.6%, specificity=87.0%, positive predictive value=75.0%, positive likelihood ratio=5.063). The areas under the receiver operating characteristic curves of both CPR models were .825 (95% confidential interval, .728-.923) and .790 (95% confidence interval, .683-.897), respectively. CPRs with moderate accuracy were developed to predict declines in ADL at 6 months after surgery for hip fracture repair. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
van Es, Andrew; Wiarda, Wim; Hordijk, Maarten; Alberink, Ivo; Vergeer, Peter
2017-05-01
For the comparative analysis of glass fragments, a method using Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) is in use at the NFI, giving measurements of the concentration of 18 elements. An important question is how to evaluate the results as evidence that a glass sample originates from a known glass source or from an arbitrary different glass source. One approach is the use of matching criteria e.g. based on a t-test or overlap of confidence intervals. An important drawback of this method is the fact that the rarity of the glass composition is not taken into account. A similar match can have widely different evidential values. In addition the use of fixed matching criteria can give rise to a "fall off the cliff" effect. Small differences may result in a match or a non-match. In this work a likelihood ratio system is presented, largely based on the two-level model as proposed by Aitken and Lucy [1], and Aitken, Zadora and Lucy [2]. Results show that the output from the two-level model gives good discrimination between same and different source hypotheses, but a post-hoc calibration step is necessary to improve the accuracy of the likelihood ratios. Subsequently, the robustness and performance of the LR system are studied. Results indicate that the output of the LR system is robust to the sample properties of the dataset used for calibration. Furthermore, the empirical upper and lower bound method [3], designed to deal with extrapolation errors in the density models, results in minimum and maximum values of the LR outputted by the system of 3.1×10⁻³ and 3.4×10⁴. Calibration of the system, as measured by empirical cross-entropy, shows good behavior over the complete prior range. Rates of misleading evidence are small: for same-source comparisons, 0.3% of LRs support a different-source hypothesis; for different-source comparisons, 0.2% supports a same-source hypothesis. The authors use the LR system in reporting of glass cases to support expert opinion in the interpretation of glass evidence for origin of source questions. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
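Once such a system has produced LRs for validation comparisons of known ground truth, the rates of misleading evidence follow from simple counting; a minimal sketch with simulated LRs standing in for real system output:

```python
import numpy as np

def misleading_evidence_rates(lr_same_source, lr_different_source):
    """Fractions of same-source LRs below 1 and different-source LRs above 1."""
    same = np.asarray(lr_same_source, float)
    diff = np.asarray(lr_different_source, float)
    return np.mean(same < 1.0), np.mean(diff > 1.0)

# hypothetical validation output of an LR system (log-LRs drawn from two normals)
rng = np.random.default_rng(4)
lr_same = np.exp(rng.normal(4.0, 2.0, 1000))
lr_diff = np.exp(rng.normal(-4.0, 2.0, 1000))
print(misleading_evidence_rates(lr_same, lr_diff))
```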
Genetic modelling of test day records in dairy sheep using orthogonal Legendre polynomials.
Kominakis, A; Volanis, M; Rogdakis, E
2001-03-01
Test day milk yields of three lactations in Sfakia sheep were analyzed fitting a random regression (RR) model, regressing on orthogonal polynomials of the stage of the lactation period, i.e. days in milk. Univariate (UV) and multivariate (MV) analyses were also performed for four stages of the lactation period, represented by average days in milk, i.e. 15, 45, 70 and 105 days, to compare estimates obtained from RR models with estimates from UV and MV analyses. The total number of test day records were 790, 1314 and 1041 obtained from 214, 342 and 303 ewes in the first, second and third lactation, respectively. Error variances and covariances between regression coefficients were estimated by restricted maximum likelihood. Models were compared using likelihood ratio tests (LRTs). Log likelihoods were not significantly reduced when the rank of the orthogonal Legendre polynomials (LPs) of lactation stage was reduced from 4 to 2 and homogenous variances for lactation stages within lactations were considered. Mean weighted heritability estimates with RR models were 0.19, 0.09 and 0.08 for first, second and third lactation, respectively. The respective estimates obtained from UV analyses were 0.14, 0.12 and 0.08, respectively. Mean permanent environmental variance, as a proportion of the total, was high at all stages and lactations ranging from 0.54 to 0.71. Within lactations, genetic and permanent environmental correlations between lactation stages were in the range from 0.36 to 0.99 and 0.76 to 0.99, respectively. Genetic parameters for additive genetic and permanent environmental effects obtained from RR models were different from those obtained from UV and MV analyses.
Weller, Daniel; Shiwakoti, Suvash; Bergholz, Peter; Grohn, Yrjo; Wiedmann, Martin
2015-01-01
Technological advancements, particularly in the field of geographic information systems (GIS), have made it possible to predict the likelihood of foodborne pathogen contamination in produce production environments using geospatial models. Yet, few studies have examined the validity and robustness of such models. This study was performed to test and refine the rules associated with a previously developed geospatial model that predicts the prevalence of Listeria monocytogenes in produce farms in New York State (NYS). Produce fields for each of four enrolled produce farms were categorized into areas of high or low predicted L. monocytogenes prevalence using rules based on a field's available water storage (AWS) and its proximity to water, impervious cover, and pastures. Drag swabs (n = 1,056) were collected from plots assigned to each risk category. Logistic regression, which tested the ability of each rule to accurately predict the prevalence of L. monocytogenes, validated the rules based on water and pasture. Samples collected near water (odds ratio [OR], 3.0) and pasture (OR, 2.9) showed a significantly increased likelihood of L. monocytogenes isolation compared to that for samples collected far from water and pasture. Generalized linear mixed models identified additional land cover factors associated with an increased likelihood of L. monocytogenes isolation, such as proximity to wetlands. These findings validated a subset of previously developed rules that predict L. monocytogenes prevalence in produce production environments. This suggests that GIS and geospatial models can be used to accurately predict L. monocytogenes prevalence on farms and can be used prospectively to minimize the risk of preharvest contamination of produce. PMID:26590280
A Note on Three Statistical Tests in the Logistic Regression DIF Procedure
ERIC Educational Resources Information Center
Paek, Insu
2012-01-01
Although logistic regression became one of the well-known methods in detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under the maximum likelihood, do not seem to be consistently distinguished in DIF literature. This paper provides a clarifying…
NASA Astrophysics Data System (ADS)
Barkley, Brett E.
A cooperative detection and tracking algorithm for multiple targets constrained to a road network is presented for fixed-wing Unmanned Air Vehicles (UAVs) with a finite field of view. Road networks of interest are formed into graphs with nodes that indicate the target likelihood ratio (before detection) and position probability (after detection). A Bayesian likelihood ratio tracker recursively assimilates target observations until the cumulative observations at a particular location pass a detection criterion. At this point, a target is considered detected and a position probability is generated for the target on the graph. Data association is subsequently used to route future measurements to update the likelihood ratio tracker (for undetected target) or to update a position probability (a previously detected target). Three strategies for motion planning of UAVs are proposed to balance searching for new targets with tracking known targets for a variety of scenarios. Performance was tested in Monte Carlo simulations for a variety of mission parameters, including tracking on road networks with varying complexity and using UAVs at various altitudes.
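The recursive likelihood-ratio update at each road-network node can be sketched as follows: every observation of a node multiplies its likelihood ratio by P(observation | target present) / P(observation | target absent), and a detection is declared once the cumulative ratio crosses a threshold. The sensor probabilities, node count, and threshold below are assumptions; data association and the motion model are omitted.

```python
import numpy as np

def update_log_lr(log_lr, observed_nodes, detections, p_detect=0.8, p_false_alarm=0.1):
    """One Bayesian likelihood-ratio update over the nodes inside the field of view."""
    log_lr = log_lr.copy()
    for node, hit in zip(observed_nodes, detections):
        if hit:
            log_lr[node] += np.log(p_detect / p_false_alarm)
        else:
            log_lr[node] += np.log((1 - p_detect) / (1 - p_false_alarm))
    return log_lr

log_lr = np.zeros(50)                                   # 50 road-network nodes, no prior evidence
log_lr = update_log_lr(log_lr, observed_nodes=[3, 4, 5], detections=[False, True, False])
detected = np.where(log_lr > np.log(100.0))[0]          # declare detection past an assumed LR of 100
```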
Program for Weibull Analysis of Fatigue Data
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2005-01-01
A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull distribution; (2) Data for contour plots of relative likelihood for two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program for the software is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
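The core computation (the program itself is Fortran) can be sketched in a few lines: maximum-likelihood estimation of a two-parameter Weibull in which suspended (type-I censored) tests contribute the survival function rather than the density. The optimizer, parameterization, and toy data are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_mle(times, failed):
    """MLE of the Weibull shape and scale with right-censored (suspended) observations."""
    t = np.asarray(times, float)
    d = np.asarray(failed, bool)

    def neg_log_lik(params):
        k, lam = np.exp(params)                 # optimize on the log scale to keep both positive
        z = (t / lam) ** k
        ll_failures = np.sum(np.log(k / lam) + (k - 1) * np.log(t[d] / lam) - z[d])  # log pdf
        ll_censored = -np.sum(z[~d])                                                 # log survival
        return -(ll_failures + ll_censored)

    res = minimize(neg_log_lik, x0=[0.0, np.log(t.mean())], method="Nelder-Mead")
    return np.exp(res.x)                        # (shape, scale)

# simulated fatigue lives censored at a fixed test duration
rng = np.random.default_rng(5)
lives = 1000.0 * rng.weibull(2.0, size=30)      # true shape 2, scale 1000
censor_time = 1200.0
failed = lives < censor_time
print(weibull_mle(np.minimum(lives, censor_time), failed))
```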
Robustness of fit indices to outliers and leverage observations in structural equation modeling.
Yuan, Ke-Hai; Zhong, Xiaoling
2013-06-01
Normal-distribution-based maximum likelihood (NML) is the most widely used method in structural equation modeling (SEM), although practical data tend to be nonnormally distributed. The effect of nonnormally distributed data or data contamination on the normal-distribution-based likelihood ratio (LR) statistic is well understood due to many analytical and empirical studies. In SEM, fit indices are used as widely as the LR statistic. In addition to NML, robust procedures have been developed for more efficient and less biased parameter estimates with practical data. This article studies the effect of outliers and leverage observations on fit indices following NML and two robust methods. Analysis and empirical results indicate that good leverage observations following NML and one of the robust methods lead most fit indices to give more support to the substantive model. While outliers tend to make a good model superficially bad according to many fit indices following NML, they have little effect on those following the two robust procedures. Implications of the results to data analysis are discussed, and recommendations are provided regarding the use of estimation methods and interpretation of fit indices. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Remontet, L; Bossard, N; Belot, A; Estève, J
2007-05-10
Relative survival provides a measure of the proportion of patients dying from the disease under study without requiring the knowledge of the cause of death. We propose an overall strategy based on regression models to estimate the relative survival and model the effects of potential prognostic factors. The baseline hazard was modelled until 10 years follow-up using parametric continuous functions. Six models including cubic regression splines were considered and the Akaike Information Criterion was used to select the final model. This approach yielded smooth and reliable estimates of mortality hazard and allowed us to deal with sparse data taking into account all the available information. Splines were also used to model simultaneously non-linear effects of continuous covariates and time-dependent hazard ratios. This led to a graphical representation of the hazard ratio that can be useful for clinical interpretation. Estimates of these models were obtained by likelihood maximization. We showed that these estimates could be also obtained using standard algorithms for Poisson regression. Copyright 2006 John Wiley & Sons, Ltd.
Model building strategy for logistic regression: purposeful selection.
Zhang, Zhongheng
2016-03-01
Logistic regression is one of the most commonly used models to account for confounders in the medical literature. This article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable has a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment for the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness of fit (GOF), that is, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
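A minimal sketch of the likelihood ratio test used when deciding whether a candidate covariate can be dropped, shown here with statsmodels in Python rather than R; the data and covariates are simulated placeholders.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(6)
n = 500
age = rng.normal(60, 10, n)
sex = rng.integers(0, 2, n).astype(float)
p = 1 / (1 + np.exp(-(-6 + 0.08 * age + 0.5 * sex)))
y = (rng.random(n) < p).astype(float)

X_full = sm.add_constant(np.column_stack([age, sex]))   # model with the candidate covariate
X_reduced = sm.add_constant(age)                        # model without it

fit_full = sm.Logit(y, X_full).fit(disp=0)
fit_reduced = sm.Logit(y, X_reduced).fit(disp=0)

lr_stat = 2 * (fit_full.llf - fit_reduced.llf)
p_value = chi2.sf(lr_stat, df=X_full.shape[1] - X_reduced.shape[1])
print(lr_stat, p_value)
```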
Probabilistic Modeling of the Renal Stone Formation Module
NASA Technical Reports Server (NTRS)
Best, Lauren M.; Myers, Jerry G.; Goodenow, Debra A.; McRae, Michael P.; Jackson, Travis C.
2013-01-01
The Integrated Medical Model (IMM) is a probabilistic tool used in mission planning decision making and medical systems risk assessments. The IMM project maintains a database of over 80 medical conditions that could occur during a spaceflight, documenting an incidence rate and end case scenarios for each. In some cases, where observational data are insufficient to adequately define the inflight medical risk, the IMM utilizes external probabilistic modules to model and estimate the event likelihoods. One such medical event of interest is an unpassed renal stone. Due to a high salt diet and high concentrations of calcium in the blood (due to bone depletion caused by unloading in the microgravity environment) astronauts are at a considerably elevated risk for developing renal calculi (nephrolithiasis) while in space. Lack of observed incidences of nephrolithiasis has led HRP to initiate the development of the Renal Stone Formation Module (RSFM) to create a probabilistic simulator capable of estimating the likelihood of symptomatic renal stone presentation in astronauts on exploration missions. The model consists of two major parts. The first is the probabilistic component, which utilizes probability distributions to assess the range of urine electrolyte parameters and a multivariate regression to transform estimated crystal density and size distributions to the likelihood of the presentation of nephrolithiasis symptoms. The second is a deterministic physical and chemical model of renal stone growth in the kidney developed by Kassemi et al. The probabilistic component of the renal stone model couples the input probability distributions describing the urine chemistry, astronaut physiology, and system parameters with the physical and chemical outputs and inputs to the deterministic stone growth model. These two parts of the model are necessary to capture the uncertainty in the likelihood estimate. The model will be driven by Monte Carlo simulations, continuously randomly sampling the probability distributions of the electrolyte concentrations and system parameters that are inputs into the deterministic model. The total urine chemistry concentrations are used to determine the urine chemistry activity using the Joint Expert Speciation System (JESS), a biochemistry model. Information from JESS is then fed into the deterministic growth model. Outputs from JESS and the deterministic model are passed back to the probabilistic model where a multivariate regression is used to assess the likelihood of a stone forming and the likelihood of a stone requiring clinical intervention. The parameters used to quantify these risks include: relative supersaturation (RS) of calcium oxalate, citrate/calcium ratio, crystal number density, total urine volume, pH, magnesium excretion, maximum stone width, and ureteral location. Methods and Validation: The RSFM is designed to perform a Monte Carlo simulation to generate probability distributions of clinically significant renal stones, as well as provide an associated uncertainty in the estimate. Initially, early versions will be used to test integration of the components and assess component validation and verification (V&V), with later versions used to address questions regarding design reference mission scenarios. Once integrated with the deterministic component, the credibility assessment of the integrated model will follow NASA STD 7009 requirements.
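A hedged sketch of the Monte Carlo structure described above: sample the uncertain inputs from their distributions, push each draw through a deterministic growth model (a placeholder function here), and estimate the probability of a clinically significant stone together with its Monte Carlo uncertainty. All distributions, the placeholder model, and the intervention threshold are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)

def stone_growth_model(relative_supersaturation, urine_volume, citrate_calcium_ratio):
    """Placeholder for the deterministic physical/chemical model: returns stone width in mm."""
    return 2.0 * relative_supersaturation / (urine_volume * (0.5 + citrate_calcium_ratio))

n_draws = 100_000
rs = rng.lognormal(mean=1.0, sigma=0.4, size=n_draws)           # relative supersaturation
volume = rng.normal(1.5, 0.4, size=n_draws).clip(0.5)           # daily urine volume (L)
citrate = rng.lognormal(mean=-1.0, sigma=0.5, size=n_draws)     # citrate/calcium ratio

width = stone_growth_model(rs, volume, citrate)
significant = width > 3.0                                       # assumed clinical threshold (mm)
p_hat = significant.mean()
mc_se = np.sqrt(p_hat * (1 - p_hat) / n_draws)                  # Monte Carlo standard error
print(p_hat, mc_se)
```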
Optimum detection of tones transmitted by a spacecraft
NASA Technical Reports Server (NTRS)
Simon, M. K.; Shihabi, M. M.; Moon, T.
1995-01-01
The performance of a scheme proposed for automated routine monitoring of deep-space missions is presented. The scheme uses four different tones (sinusoids) transmitted from the spacecraft (S/C) to a ground station with the positive identification of each of them used to indicate different states of the S/C. Performance is measured in terms of detection probability versus false alarm probability with detection signal-to-noise ratio as a parameter. The cases where the phase of the received tone is unknown and where both the phase and frequency of the received tone are unknown are treated separately. The decision rules proposed for detecting the tones are formulated from average-likelihood ratio and maximum-likelihood ratio tests, the former resulting in optimum receiver structures.
1996-09-01
Generalized Likelihood Ratio (GLR) and voting techniques. The third class consisted of multiple hypothesis filter detectors, specifically the MMAE. The ... vector version, versus a tensor if we use the matrix version of the power spectral density estimate. Using this notation, we will derive an ... as MATLAB, have an intrinsic sample covariance computation available, which makes this method quite easy to implement. In practice, the mean for the ...
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2013-01-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided. PMID:24790286
Assessment of parametric uncertainty for groundwater reactive transport modeling.
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
Experimental study of near-field entrainment of moderately overpressured jets
Solovitz, S.A.; Mastin, L.G.; Saffaraval, F.
2011-01-01
Particle image velocimetry (PIV) experiments have been conducted to study the velocity flow fields in the developing flow region of high-speed jets. These velocity distributions were examined to determine the entrained mass flow over a range of geometric and flow conditions, including overpressured cases up to an overpressure ratio of 2.83. In the region near the jet exit, all measured flows exhibited the same entrainment up until the location of the first shock when overpressured. Beyond this location, the entrainment was reduced with increasing overpressure ratio, falling to approximately 60% of the magnitudes seen when subsonic. Since entrainment ratios based on lower-speed, subsonic results are typically used in one-dimensional volcanological models of plume development, the current analytical methods will underestimate the likelihood of column collapse. In addition, the concept of the entrainment ratio normalization is examined in detail, as several key assumptions in this methodology do not apply when overpressured.
Pimentel, Mark; Purdy, Chris; Magar, Raf; Rezaie, Ali
2016-07-01
A high incidence of irritable bowel syndrome (IBS) is associated with significant medical costs. Diarrhea-predominant IBS (IBS-D) is diagnosed on the basis of clinical presentation and diagnostic test results and procedures that exclude other conditions. This study was conducted to estimate the potential cost savings of a novel IBS diagnostic blood panel that tests for the presence of antibodies to cytolethal distending toxin B and anti-vinculin associated with IBS-D. A cost-minimization (CM) decision tree model was used to compare the costs of a novel IBS diagnostic blood panel pathway versus an exclusionary diagnostic pathway (ie, standard of care). The probability that patients proceed to treatment was modeled as a function of sensitivity, specificity, and likelihood ratios of the individual biomarker tests. One-way sensitivity analyses were performed for key variables, and a break-even analysis was performed for the pretest probability of IBS-D. Budget impact analysis of the CM model was extrapolated to a health plan with 1 million covered lives. The CM model (base-case) predicted $509 cost savings for the novel IBS diagnostic blood panel versus the exclusionary diagnostic pathway because of the avoidance of downstream testing (eg, colonoscopy, computed tomography scans). Sensitivity analysis indicated that an increase in both positive likelihood ratios modestly increased cost savings. Break-even analysis estimated that the pretest probability of disease would be 0.451 to attain cost neutrality. The budget impact analysis predicted a cost savings of $3,634,006 ($0.30 per member per month). The novel IBS diagnostic blood panel may yield significant cost savings by allowing patients to proceed to treatment earlier, thereby avoiding unnecessary testing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
A unified partial likelihood approach for X-chromosome association on time-to-event outcomes.
Xu, Wei; Hao, Meiling
2018-02-01
The expression of X-chromosome undergoes three possible biological processes: X-chromosome inactivation (XCI), escape of the X-chromosome inactivation (XCI-E), and skewed X-chromosome inactivation (XCI-S). Although these expressions are included in various predesigned genetic variation chip platforms, the X-chromosome has generally been excluded from the majority of genome-wide association studies analyses; this is most likely due to the lack of a standardized method in handling X-chromosomal genotype data. To analyze the X-linked genetic association for time-to-event outcomes with the actual process unknown, we propose a unified approach of maximizing the partial likelihood over all of the potential biological processes. The proposed method can be used to infer the true biological process and derive unbiased estimates of the genetic association parameters. A partial likelihood ratio test statistic that has been proved asymptotically chi-square distributed can be used to assess the X-chromosome genetic association. Furthermore, if the X-chromosome expression pertains to the XCI-S process, we can infer the correct skewed direction and magnitude of inactivation, which can elucidate significant findings regarding the genetic mechanism. A population-level model and a more general subject-level model have been developed to model the XCI-S process. Finite sample performance of this novel method is examined via extensive simulation studies. An application is illustrated with implementation of the method on a cancer genetic study with survival outcome. © 2017 WILEY PERIODICALS, INC.
Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.
Falk, Carl F; Biesanz, Jeremy C
2011-11-30
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
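For readers unfamiliar with the percentile bootstrap compared in this study, the sketch below builds a PC bootstrap interval for an indirect effect a*b. It deliberately simplifies to observed variables and ordinary least squares rather than the latent-variable models fitted with OpenMx in the abstract; the data, sample size, and variable names are simulated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
m = 0.39 * x + rng.normal(size=n)               # mediator
y = 0.39 * m + rng.normal(size=n)               # outcome

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                          # slope of m on x
    X = np.column_stack([np.ones_like(x), m, x])        # y regressed on m and x
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]         # slope of y on m, adjusting for x
    return a * b

boot = []
for _ in range(2000):                                   # 2,000 resamples, as in the study
    idx = rng.integers(0, n, n)                         # resample cases with replacement
    boot.append(indirect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(x, m, y):.3f}; 95% PC bootstrap CI = ({lo:.3f}, {hi:.3f})")
```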
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
ERIC Educational Resources Information Center
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
Measures of accuracy and performance of diagnostic tests.
Drobatz, Kenneth J
2009-05-01
Diagnostic tests are integral to the practice of veterinary cardiology, other specialties, and general veterinary medicine. Developing and understanding diagnostic tests is one of the cornerstones of clinical research. This manuscript describes diagnostic test properties including sensitivity, specificity, predictive value, likelihood ratio, and the receiver operating characteristic curve, based on a review of practical book chapters and standard statistics manuscripts. Diagnostics such as sensitivity, specificity, predictive value, likelihood ratio, and receiver operating characteristic curve are described and illustrated. A basic understanding of how diagnostic tests are developed and interpreted is essential for reviewing clinical scientific papers and understanding evidence-based medicine.
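As a quick reference for the properties named above, the following sketch computes them from a single 2x2 table; the counts are invented for illustration.

```python
tp, fp, fn, tn = 90, 30, 10, 170        # made-up counts: true/false positives and negatives

sens = tp / (tp + fn)                   # sensitivity
spec = tn / (tn + fp)                   # specificity
ppv = tp / (tp + fp)                    # positive predictive value
npv = tn / (tn + fn)                    # negative predictive value
lr_pos = sens / (1 - spec)              # positive likelihood ratio
lr_neg = (1 - sens) / spec              # negative likelihood ratio
print(sens, spec, ppv, npv, lr_pos, lr_neg)
```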
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2005-04-01
We present the results of a search for cosmic-ray point sources at energies in excess of 4.0×10^19 eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.
Yamamoto, Yosuke; Terada, Kazuhiko; Ohta, Mitsuyasu; Mikami, Wakako; Yokota, Hajime; Hayashi, Michio; Miyashita, Jun; Azuma, Teruhisa; Fukuma, Shingo; Fukuhara, Shunichi
2017-01-01
Objective: Diagnosis of community-acquired pneumonia (CAP) in the elderly is often delayed because of atypical presentation and non-specific symptoms, such as appetite loss, falls and disturbance in consciousness. The aim of this study was to investigate the external validity of existing prediction models and the added value of the non-specific symptoms for the diagnosis of CAP in elderly patients. Design: Prospective cohort study. Setting: General medicine departments of three teaching hospitals in Japan. Participants: A total of 109 elderly patients who consulted for upper respiratory symptoms between 1 October 2014 and 30 September 2016. Main outcome measures: The reference standard for CAP was chest radiograph evaluated by two certified radiologists. The existing models were externally validated for diagnostic performance by calibration plot and discrimination. To evaluate the additional value of the non-specific symptoms to the existing prediction models, we developed an extended logistic regression model. Calibration, discrimination, category-free net reclassification improvement (NRI) and decision curve analysis (DCA) were investigated in the extended model. Results: Among the existing models, the model by van Vugt demonstrated the best performance, with an area under the curve of 0.75 (95% CI 0.63 to 0.88); the calibration plot showed good fit despite a significant Hosmer-Lemeshow test (p=0.017). Among the non-specific symptoms, appetite loss had a positive likelihood ratio of 3.2 (2.0–5.3), a negative likelihood ratio of 0.4 (0.2–0.7) and an OR of 7.7 (3.0–19.7). Addition of appetite loss to the model by van Vugt led to improved calibration (p=0.48), an NRI of 0.53 (p=0.019) and higher net benefit by DCA. Conclusions: Information on appetite loss improved the performance of an existing model for the diagnosis of CAP in the elderly. PMID:29122806
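To make the reported likelihood ratios concrete, the short calculation below converts them into post-test probabilities via pre-test odds; the 30% pre-test probability of CAP is an assumed figure for illustration, not one taken from the study.

```python
pretest = 0.30                               # assumed pre-test probability (illustrative)
for label, lr in (("appetite loss present (LR+ = 3.2)", 3.2),
                  ("appetite loss absent (LR- = 0.4)", 0.4)):
    odds = pretest / (1 - pretest) * lr      # post-test odds = pre-test odds x likelihood ratio
    print(f"{label}: post-test probability = {odds / (1 + odds):.2f}")
```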
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
Stationary and non-stationary extreme value modeling of extreme temperature in Malaysia
NASA Astrophysics Data System (ADS)
Hasan, Husna; Salleh, Nur Hanim Mohd; Kassim, Suraiya
2014-09-01
Extreme annual temperature of eighteen stations in Malaysia is fitted to the Generalized Extreme Value distribution. Stationary and non-stationary models with trend are considered for each station and the Likelihood Ratio test is used to determine the best-fitting model. Results show that three out of eighteen stations i.e. Bayan Lepas, Labuan and Subang favor a model which is linear in the location parameter. A hierarchical cluster analysis is employed to investigate the existence of similar behavior among the stations. Three distinct clusters are found in which one of them consists of the stations that favor the non-stationary model. T-year estimated return levels of the extreme temperature are provided based on the chosen models.
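A sketch of the model comparison described above, under assumed data: a stationary GEV fit is compared with a GEV whose location parameter is linear in time, using a likelihood ratio test with one degree of freedom. The simulated series and starting values are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2, genextreme

rng = np.random.default_rng(0)
years = np.arange(40)
temps = genextreme.rvs(c=-0.1, loc=35 + 0.03 * years, scale=1.0, random_state=rng)

# Stationary GEV: maximum likelihood fit via scipy
c0, loc0, scale0 = genextreme.fit(temps)
ll0 = np.sum(genextreme.logpdf(temps, c0, loc0, scale0))

# Non-stationary GEV: location mu(t) = mu0 + mu1 * t
def nll(p):
    c, mu0, mu1, scale = p
    if scale <= 0:
        return np.inf
    return -np.sum(genextreme.logpdf(temps, c, mu0 + mu1 * years, scale))

res = minimize(nll, x0=[c0, loc0, 0.0, scale0], method="Nelder-Mead")
ll1 = -res.fun

lrt = 2 * (ll1 - ll0)                      # likelihood ratio statistic, one extra parameter
print("LRT =", round(lrt, 2), " p =", round(chi2.sf(lrt, df=1), 4))
```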
Code of Federal Regulations, 2010 CFR
2010-01-01
... that the facts that caused the deficient share-asset ratio no longer exist; and (ii) The likelihood of further depreciation of the share-asset ratio is not probable; and (iii) The return of the share-asset ratio to its normal limits within a reasonable time for the credit union concerned is probable; and (iv...
Investigating Gender Differences under Time Pressure in Financial Risk Taking.
Xie, Zhixin; Page, Lionel; Hardy, Ben
2017-01-01
There is a significant gender imbalance on financial trading floors. This motivated us to investigate gender differences in financial risk taking under pressure. We used a well-established approach from behavioral economics to analyze a series of risky monetary choices by male and female participants with and without time pressure. We also used the second-to-fourth digit ratio (2D:4D) and the facial width-to-height ratio (fWHR) as correlates of prenatal exposure to testosterone. We constructed a structural model and estimated the participants' risk attitudes and probability perceptions via maximum likelihood estimation under both expected utility (EU) and rank-dependent utility (RDU) models. In line with existing research, we found that male participants are less risk averse and that the gender gap in risk attitudes increases under moderate time pressure. We found that female participants with lower 2D:4D ratios and higher fWHR are less risk averse in RDU estimates. Males with lower 2D:4D ratios were less risk averse in EU estimations, but more risk averse using RDU estimates. We also observe that men whose ratios indicate greater prenatal exposure to testosterone exhibit greater optimism and overestimation of small probabilities of success.
Husain, Shahid; Kwak, Eun Jeong; Obman, Asia; Wagener, Marilyn M; Kusne, Shimon; Stout, Janet E; McCurry, Kenneth R; Singh, Nina
2004-05-01
The clinical utility of Platelia™ Aspergillus galactomannan antigen for the early diagnosis of invasive aspergillosis was prospectively assessed in 70 consecutive lung transplant recipients. Sera were collected twice weekly and tested for galactomannan. Invasive aspergillosis was documented in 17.1% (12/70) of the patients. Using the generalized estimating equation model, at the cutoff value of ≥ 0.5, the sensitivity of the test was 30%, specificity 93% with positive and negative likelihood ratios of 4.2 and 0.75, respectively. Increasing the cutoff value to ≥ 0.66 yielded a sensitivity of 30%, specificity of 95%, and positive and negative likelihood ratios of 5.5 and 0.74. A total of 14 patients had false-positive tests, including nine who had cystic fibrosis or chronic obstructive pulmonary disease. False-positive tests occurred within 3 days of transplantation in 43% (6/14) of the patients, and within 7 days in 64% (9/14). Thus, the test demonstrated excellent specificity, but a low sensitivity for the diagnosis of aspergillosis in this patient population. Patients with cystic fibrosis or chronic obstructive pulmonary disease may transiently have a positive test in the early post-transplant period.
Paninski, Liam; Haith, Adrian; Szirtes, Gabor
2008-02-01
We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
Adult Age Differences in Frequency Estimations of Happy and Angry Faces
ERIC Educational Resources Information Center
Nikitin, Jana; Freund, Alexandra M.
2015-01-01
With increasing age, the ratio of gains to losses becomes more negative, which is reflected in expectations that positive events occur with a high likelihood in young adulthood, whereas negative events occur with a high likelihood in old age. Little is known about expectations of social events. Given that younger adults are motivated to establish…
Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P
2014-06-26
To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
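The two approaches compared above can be sketched with statsmodels on simulated data; the variable names, data, and true risk ratio are assumptions for illustration. Recent statsmodels versions use the CamelCase link class Log; older releases use links.log().

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
x = rng.binomial(1, 0.5, n)                      # binary exposure
p = 0.2 * np.exp(np.log(1.5) * x)                # true risk ratio of 1.5
y = rng.binomial(1, p)                           # common binary outcome
X = sm.add_constant(x)

# "Robust" (modified) Poisson: Poisson GLM with a sandwich (HC0) covariance
robust_poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")

# Log-binomial: binomial GLM with a log link (may fail to converge in practice)
log_binomial = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log())).fit()

print("RR, robust Poisson:", np.exp(robust_poisson.params[1]))
print("RR, log-binomial:  ", np.exp(log_binomial.params[1]))
```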
An Improved Nested Sampling Algorithm for Model Selection and Assessment
NASA Astrophysics Data System (ADS)
Zeng, X.; Ye, M.; Wu, J.; WANG, D.
2017-12-01
Multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models. Each alternative conceptual model is assigned a weight which represents the plausibility of that model. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space from low-likelihood to high-likelihood areas gradually, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAM(ZS), into the local sampling step. In addition, in order to overcome the computational burden of the large number of repeated model executions required for marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
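For orientation, a bare-bones nested sampling loop for a one-dimensional toy problem is sketched below (flat prior on theta, Gaussian likelihood). It replaces live points by brute-force rejection from the prior; the point of the abstract is precisely that practical estimators swap this step for an efficient local sampler such as M-H or DREAM(ZS). All names and settings here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(1.0, 1.0, size=20)

def loglike(theta):
    return -0.5 * np.sum((data - theta) ** 2) - 0.5 * len(data) * np.log(2 * np.pi)

n_live, n_iter = 100, 500
live = rng.uniform(-5, 5, n_live)                     # live points drawn from the prior
live_ll = np.array([loglike(t) for t in live])

log_z, log_x_prev = -np.inf, 0.0                      # log-evidence, log prior volume
for i in range(1, n_iter + 1):
    worst = np.argmin(live_ll)
    log_x = -i / n_live                               # expected shrinkage of prior volume
    log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))
    log_z = np.logaddexp(log_z, live_ll[worst] + log_w)
    log_x_prev = log_x
    while True:                                       # brute-force constrained prior draw
        cand = rng.uniform(-5, 5)
        if loglike(cand) > live_ll[worst]:
            live[worst], live_ll[worst] = cand, loglike(cand)
            break

# add the contribution of the remaining live points
log_z = np.logaddexp.reduce(np.append(live_ll + log_x_prev - np.log(n_live), log_z))
print("estimated log marginal likelihood:", log_z)
```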
Top pair production in the dilepton decay channel with a tau lepton
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corbo, Matteo
2012-09-19
The top quark pair production and decay into leptons with at least one being a τ lepton is studied in the framework of the CDF experiment at the Tevatron proton-antiproton collider at Fermilab (USA). The selection requires an electron or a muon produced either by the τ lepton decay or by a W decay. The analysis uses the complete Run II data set, i.e. 9.0 fb^-1, selected by one trigger based on a low transverse momentum electron or muon plus one isolated charged track. The top quark pair production cross section at 1.96 TeV is measured at 8.2 ± 1.7 +1.2 -1.1 ± 0.5 pb, and the top branching ratio into a τ lepton is measured at 0.120 ± 0.027 +0.022 -0.019 ± 0.007, with statistical, systematic and luminosity uncertainties. To date, these are the most accurate results in this top decay channel and they are in good agreement with the results obtained using other decay channels of the top at the Tevatron. The branching ratio is also measured by separating the single-lepton from the two-lepton events with a log-likelihood method. This is the first time these two signatures are separately identified. With a fit to data along the log-likelihood variable an alternative measurement of the branching ratio is made: 0.098 ± 0.022 (stat.) ± 0.014 (syst.); it is in good agreement with the expectations of the Standard Model (with lepton universality) within the experimental uncertainties. The branching ratio is constrained to be less than 0.159 at 95% confidence level. This limit translates into a limit on the top branching ratio into a potential charged Higgs boson.
Figler, Bradley D; Mack, Christopher D; Kaufman, Robert; Wessells, Hunter; Bulger, Eileen; Smith, Thomas G; Voelzke, Bryan
2014-03-01
The National Highway Traffic Safety Administration's New Car Assessment Program (NCAP) implemented side-impact crash testing on all new vehicles since 1998 to assess the likelihood of major thoracoabdominal injuries during a side-impact crash. Higher crash test rating is intended to indicate a safer car, but the real-world applicability of these ratings is unknown. Our objective was to determine the relationship between a vehicle's NCAP side-impact crash test rating and the risk of major thoracoabdominal injury among the vehicle's occupants in real-world side-impact motor vehicle crashes. The National Automotive Sampling System Crashworthiness Data System contains detailed crash and injury data in a sample of major crashes in the United States. For model years 1998 to 2010 and crash years 1999 to 2010, 68,124 occupants were identified in the Crashworthiness Data System database. Because 47% of cases were missing crash severity (ΔV), multiple imputation was used to estimate the missing values. The primary predictor of interest was the occupant vehicle's NCAP side-impact crash test rating, and the outcome of interest was the presence of major (Abbreviated Injury Scale [AIS] score ≥ 3) thoracoabdominal injury. In multivariate analysis, increasing NCAP crash test rating was associated with lower likelihood of major thoracoabdominal injury at high (odds ratio [OR], 0.8; 95% confidence interval [CI], 0.7-0.9; p < 0.01) and medium (OR, 0.9; 95% CI, 0.8-1.0; p < 0.05) crash severity (ΔV), but not at low ΔV (OR, 0.95; 95% CI, 0.8-1.2; p = 0.55). In our model, older age and absence of seat belt use were associated with greater likelihood of major thoracoabdominal injury at low and medium ΔV (p < 0.001), but not at high ΔV (p ≥ 0.09). Among adults in model year 1998 to 2010 vehicles involved in medium and high severity motor vehicle crashes, a higher NCAP side-impact crash test rating is associated with a lower likelihood of major thoracoabdominal trauma. Epidemiologic study, level III.
Johnston, Heidi Bart; Ganatra, Bela; Nguyen, My Huong; Habib, Ndema; Afework, Mesganaw Fantahun; Harries, Jane; Iyengar, Kirti; Moodley, Jennifer; Lema, Hailu Yeneneh; Constant, Deborah; Sen, Swapnaleen
2016-01-01
To assess the accuracy of assessment of eligibility for early medical abortion by community health workers using a simple checklist toolkit. Diagnostic accuracy study. Ethiopia, India and South Africa. Two hundred seventeen women in Ethiopia, 258 in India and 236 in South Africa were enrolled into the study. A checklist toolkit to determine eligibility for early medical abortion was validated by comparing results of clinician and community health worker assessment of eligibility using the checklist toolkit with the reference standard exam. Accuracy was over 90% and the negative likelihood ratio <0.1 at all three sites when used by clinician assessors. Positive likelihood ratios were 4.3 in Ethiopia, 5.8 in India and 6.3 in South Africa. When used by community health workers, the overall accuracy of the toolkit was 92% in Ethiopia, 80% in India and 77% in South Africa; negative likelihood ratios were 0.08 in Ethiopia, 0.25 in India and 0.22 in South Africa; and positive likelihood ratios were 5.9 in Ethiopia and 2.0 in India and South Africa. The checklist toolkit, as used by clinicians, was excellent at ruling out participants who were not eligible, and moderately effective at ruling in participants who were eligible for medical abortion. Results were promising when used by community health workers, particularly in Ethiopia where they had more prior experience with use of diagnostic aids and longer professional training. The checklist toolkit assessments resulted in some participants being wrongly assessed as eligible for medical abortion, which is an area of concern. Further research is needed to streamline the components of the tool, explore optimal duration and content of training for community health workers, and test feasibility and acceptability.
Sviklāne, Laura; Olmane, Evija; Dzērve, Zane; Kupčs, Kārlis; Pīrāgs, Valdis; Sokolovska, Jeļizaveta
2018-01-01
Little is known about the diagnostic value of hepatic steatosis index (HSI) and fatty liver index (FLI), as well as their link to metabolic syndrome in type 1 diabetes mellitus. We have screened the effectiveness of FLI and HSI in an observational pilot study of 40 patients with type 1 diabetes. FLI and HSI were calculated for 201 patients with type 1 diabetes. Forty patients with FLI/HSI values corresponding to different risk of liver steatosis were invited for liver magnetic resonance study. In-phase/opposed-phase technique of magnetic resonance was used. Accuracy of indices was assessed from the area under the receiver operating characteristic curve. Twelve (30.0%) patients had liver steatosis. For FLI, sensitivity was 90%; specificity, 74%; positive likelihood ratio, 3.46; negative likelihood ratio, 0.14; positive predictive value, 0.64; and negative predictive value, 0.93. For HSI, sensitivity was 86%; specificity, 66%; positive likelihood ratio, 1.95; negative likelihood ratio, 0.21; positive predictive value, 0.50; and negative predictive value, 0.92. Area under the receiver operating characteristic curve for FLI was 0.86 (95% confidence interval [0.72; 0.99]); for HSI 0.75 [0.58; 0.91]. Liver fat correlated with liver enzymes, waist circumference, triglycerides, and C-reactive protein. FLI correlated with C-reactive protein, liver enzymes, and blood pressure. HSI correlated with waist circumference and C-reactive protein. FLI ≥ 60 and HSI ≥ 36 were significantly associated with metabolic syndrome and nephropathy. The tested indices, especially FLI, can serve as surrogate markers for liver fat content and metabolic syndrome in type 1 diabetes. © 2017 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
Dasgupta, Subhankar; Dasgupta, Shyamal; Sharma, Partha Pratim; Mukherjee, Amitabha; Ghosh, Tarun Kumar
2011-11-01
To investigate the effect of oral progesterone on the accuracy of imaging studies performed to detect endometrial pathology, in comparison to hysteroscopy-guided biopsy, in perimenopausal women on progesterone treatment for abnormal uterine bleeding. The study population comprised women aged 40-55 years with complaints of abnormal uterine bleeding who were also undergoing oral progesterone therapy. Women with a uterus ≥ 12 weeks' gestation size, previous abnormal endometrial biopsy, cervical lesion on speculum examination, abnormal Pap smear, active pelvic infection, adnexal mass on clinical examination or during ultrasound scan and a positive pregnancy test were excluded. A transvaginal ultrasound followed by saline infusion sonography was performed. On the following day, a hysteroscopy followed by a guided biopsy of the endometrium or any endometrial lesion was performed. The results of the imaging studies were compared with those of hysteroscopy and guided biopsy. The final analysis included 83 patients. For detection of overall pathology, polyp and fibroid, transvaginal ultrasound had positive likelihood ratios of 1.65, 5.45 and 5.4, respectively, and negative likelihood ratios of 0.47, 0.6 and 0.43, respectively. For detection of overall pathology, polyp and fibroid, saline infusion sonography had positive likelihood ratios of 4.4, 5.35 and 11.8, respectively, and negative likelihood ratios of 0.3, 0.2 and 0.15, respectively. In perimenopausal women on oral progesterone therapy for abnormal uterine bleeding, imaging studies cannot be considered an accurate method for diagnosing endometrial pathology when compared to hysteroscopy and guided biopsy. © 2011 The Authors. Journal of Obstetrics and Gynaecology Research © 2011 Japan Society of Obstetrics and Gynecology.
Using DNA fingerprints to infer familial relationships within NHANES III households
Katki, Hormuzd A.; Sanders, Christopher L.; Graubard, Barry I.; Bergen, Andrew W.
2009-01-01
Developing, targeting, and evaluating genomic strategies for population-based disease prevention require population-based data. In response to this urgent need, genotyping has been conducted within the Third National Health and Nutrition Examination Survey (NHANES III), the nationally representative household-interview health survey in the U.S. However, before these genetic analyses can occur, family relationships within households must be accurately ascertained. Unfortunately, reported family relationships within NHANES III households based on questionnaire data are incomplete and inconclusive with regard to the actual biological relatedness of family members. We inferred family relationships within households using DNA fingerprints (Identifiler®) that contain the DNA loci used by law enforcement agencies for forensic identification of individuals. However, performance of these loci for relationship inference is not well understood. We evaluated two competing statistical methods for relationship inference on pairs of household members: an exact likelihood ratio relying on allele frequencies versus an Identical By State (IBS) likelihood ratio that only requires matching alleles. We modified these methods to account for genotyping errors and population substructure. The two methods usually agree on the rankings of the most likely relationships. However, the IBS method underestimates the likelihood ratio by not accounting for the informativeness of matching rare alleles. The likelihood ratio is sensitive to estimates of population substructure, and parent-child relationships are sensitive to the specified genotyping error rate. These loci were unable to distinguish second-degree relationships and cousins from being unrelated. The genetic data are also useful for verifying reported relationships and identifying data quality issues. An important by-product is the first explicitly nationally-representative estimates of allele frequencies at these ubiquitous forensic loci. PMID:20664713
Williamson, Scott; Fledel-Alon, Adi; Bustamante, Carlos D
2004-09-01
We develop a Poisson random-field model of polymorphism and divergence that allows arbitrary dominance relations in a diploid context. This model provides a maximum-likelihood framework for estimating both selection and dominance parameters of new mutations using information on the frequency spectrum of sequence polymorphisms. This is the first DNA sequence-based estimator of the dominance parameter. Our model also leads to a likelihood-ratio test for distinguishing nongenic from genic selection; simulations indicate that this test is quite powerful when a large number of segregating sites are available. We also use simulations to explore the bias in selection parameter estimates caused by unacknowledged dominance relations. When inference is based on the frequency spectrum of polymorphisms, genic selection estimates of the selection parameter can be very strongly biased even for minor deviations from the genic selection model. Surprisingly, however, when inference is based on polymorphism and divergence (McDonald-Kreitman) data, genic selection estimates of the selection parameter are nearly unbiased, even for completely dominant or recessive mutations. Further, we find that weak overdominant selection can increase, rather than decrease, the substitution rate relative to levels of polymorphism. This nonintuitive result has major implications for the interpretation of several popular tests of neutrality.
Macera, Márcia A C; Louzada, Francisco; Cancho, Vicente G; Fontes, Cor J F
2015-03-01
In this paper, we introduce a new model for recurrent event data characterized by a fully parametric baseline rate function, which is based on the exponential-Poisson distribution. The model arises from a latent competing risk scenario, in the sense that there is no information about which cause was responsible for the event occurrence. Then, the time of each recurrence is given by the minimum lifetime value among all latent causes. The new model has a particular case, which is the classical homogeneous Poisson process. The properties of the proposed model are discussed, including its hazard rate function, survival function, and ordinary moments. The inferential procedure is based on the maximum likelihood approach. We consider the important issue of model selection between the proposed model and its particular case by means of the likelihood ratio test and the score test. Goodness of fit of the recurrent event models is assessed using Cox-Snell residuals. A simulation study evaluates the performance of the estimation procedure for small and moderate sample sizes. Applications to two real data sets are provided to illustrate the proposed methodology. One of them, first analyzed by our team of researchers, concerns the recurrence of malaria, an infectious disease caused by a protozoan parasite that infects red blood cells. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A model of the human observer and decision maker
NASA Technical Reports Server (NTRS)
Wewerinke, P. H.
1981-01-01
The decision process is described in terms of classical sequential decision theory by considering the hypothesis that an abnormal condition has occurred by means of a generalized likelihood ratio test. For this, a sufficient statistic is provided by the innovation sequence which is the result of the perception and information processing submodel of the human observer. On the basis of only two model parameters, the model predicts the decision speed/accuracy trade-off and various attentional characteristics. A preliminary test of the model for single-variable failure detection tasks resulted in a very good fit to the experimental data. In a formal validation program, a variety of multivariable failure detection tasks was investigated and the predictive capability of the model was demonstrated.
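A minimal illustration of the sequential decision idea described above, assuming scalar Gaussian innovations and a known mean shift under failure; this is a Wald sequential probability ratio test, not the full observer/decision model of the paper, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
mu1, sigma = 1.0, 1.0                        # assumed innovation mean under failure, noise sd
alpha, beta = 0.01, 0.05                     # target false-alarm and miss probabilities
upper = np.log((1 - beta) / alpha)           # Wald decision thresholds
lower = np.log(beta / (1 - alpha))

innovations = rng.normal(mu1, sigma, 200)    # simulate innovations from a failed condition
llr = 0.0
for t, v in enumerate(innovations, start=1):
    llr += (mu1 * v - 0.5 * mu1 ** 2) / sigma ** 2     # log-likelihood ratio increment
    if llr >= upper:
        print(f"abnormal condition declared at sample {t}")
        break
    if llr <= lower:
        print(f"normal condition accepted at sample {t}")
        break
```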
ERIC Educational Resources Information Center
Criss, Amy H.; McClelland, James L.
2006-01-01
The subjective likelihood model [SLiM; McClelland, J. L., & Chappell, M. (1998). Familiarity breeds differentiation: a subjective-likelihood approach to the effects of experience in recognition memory. "Psychological Review," 105(4), 734-760.] and the retrieving effectively from memory model [REM; Shiffrin, R. M., & Steyvers, M. (1997). A model…
Durand, Eric; Bauer, Fabrice; Mansencal, Nicolas; Azarine, Arshid; Diebold, Benoit; Hagege, Albert; Perdrix, Ludivine; Gilard, Martine; Jobic, Yannick; Eltchaninoff, Hélène; Bensalah, Mourad; Dubourg, Benjamin; Caudron, Jérôme; Niarra, Ralph; Chatellier, Gilles; Dacher, Jean-Nicolas; Mousseaux, Elie
2017-08-15
To perform a head-to-head comparison of coronary CT angiography (CCTA) and dobutamine-stress echocardiography (DSE) in patients presenting with recent chest pain when troponin and ECG are negative. Two hundred seventeen patients with recent chest pain, normal ECG findings, and negative troponin were prospectively included in this multicenter study and were scheduled for CCTA and DSE. Invasive coronary angiography (ICA) was performed in patients when either DSE or CCTA was considered positive, when both were non-contributive, or in case of recurrent chest pain during 6-month follow-up. The presence of coronary artery stenosis was defined as a luminal obstruction >50% of the diameter in any coronary segment at ICA. ICA was performed in 75 (34.6%) patients. Coronary artery stenosis was identified in 37 (17%) patients. For CCTA, the sensitivity was 96.9% (95% CI 83.4-99.9), specificity 48.3% (29.4-67.5), positive likelihood ratio 2.06 (95% CI 1.36-3.11), and negative likelihood ratio 0.07 (95% CI 0.01-0.52). The sensitivity of DSE was 51.6% (95% CI 33.1-69.9), specificity 46.7% (28.3-65.7), positive likelihood ratio 1.03 (95% CI 0.62-1.72), and negative likelihood ratio 1.10 (95% CI 0.63-1.93). The CCTA:DSE ratio of true-positive and false-positive rates was 1.70 (95% CI 1.65-1.75) and 1.00 (95% CI 0.91-1.09), respectively, when non-contributive CCTA and DSE were both considered positive. Only one missed acute coronary syndrome was observed at six months. CCTA has higher diagnostic performance than DSE in the evaluation of patients with recent chest pain, normal ECG findings, and negative troponin to exclude coronary artery disease. Copyright © 2017. Published by Elsevier B.V.
Ruilong, Zong; Daohai, Xie; Li, Geng; Xiaohong, Wang; Chunjie, Wang; Lei, Tian
2017-01-01
To carry out a meta-analysis on the performance of fluorine-18-fluorodeoxyglucose (18F-FDG) PET/computed tomography (PET/CT) for the evaluation of solitary pulmonary nodules. In the meta-analysis, we performed searches of several electronic databases for relevant studies, including Google Scholar, PubMed, Cochrane Library, and several Chinese databases. The quality of all included studies was assessed by Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2). Two observers independently extracted data of eligible articles. For the meta-analysis, the total sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratios were pooled. A summary receiver operating characteristic curve was constructed. The I² test was performed to assess the impact of study heterogeneity on the results of the meta-analysis. Meta-regression and subgroup analysis were carried out to investigate the potential covariates that might have considerable impacts on heterogeneity. Overall, 12 studies were included in this meta-analysis, including a total of 1297 patients and 1301 pulmonary nodules. The pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with corresponding 95% confidence intervals (CIs) were 0.82 (95% CI, 0.76-0.87), 0.81 (95% CI, 0.66-0.90), 4.3 (95% CI, 2.3-7.9), and 0.22 (95% CI, 0.16-0.30), respectively. Significant heterogeneity was observed in sensitivity (I²=81.1%) and specificity (I²=89.6%). Subgroup analysis showed that the best results for sensitivity (0.90; 95% CI, 0.68-0.86) and accuracy (0.93; 95% CI, 0.90-0.95) were present in a prospective study. The results of our analysis suggest that PET/CT is a useful tool for detecting malignant pulmonary nodules qualitatively. Although current evidence showed moderate accuracy for PET/CT in differentiating malignant from benign solitary pulmonary nodules, further work needs to be carried out to improve its reliability.
NASA Astrophysics Data System (ADS)
Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran
2016-09-01
In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate uncertainty of parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of 24 parameters used in HMS, three flood events were used to calibrate and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions (L1-L4), namely Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining ones (L5-L7) fall in the formal category. L5 focuses on the relationship between traditional least squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, sensitivities of the parameters strongly depend on the likelihood function and vary for different likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions depict almost a similar effect on sensitivity of parameters. The 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, results showed that the DREAM(ZS) algorithm performed better under the formal likelihood functions L5 and L7, but likelihood function L5 may result in biased and unreliable estimation of parameters due to violation of the residual-error assumptions. Thus, likelihood function L7 provides the posterior distribution of model parameters credibly and can therefore be employed for further applications.
Moss, Travis J; Clark, Matthew T; Calland, James Forrest; Enfield, Kyle B; Voss, John D; Lake, Douglas E; Moorman, J Randall
2017-01-01
Charted vital signs and laboratory results represent intermittent samples of a patient's dynamic physiologic state and have been used to calculate early warning scores to identify patients at risk of clinical deterioration. We hypothesized that the addition of cardiorespiratory dynamics measured from continuous electrocardiography (ECG) monitoring to intermittently sampled data improves the predictive validity of models trained to detect clinical deterioration prior to intensive care unit (ICU) transfer or unanticipated death. We analyzed 63 patient-years of ECG data from 8,105 acute care patient admissions at a tertiary care academic medical center. We developed models to predict deterioration resulting in ICU transfer or unanticipated death within the next 24 hours using either vital signs, laboratory results, or cardiorespiratory dynamics from continuous ECG monitoring and also evaluated models using all available data sources. We calculated the predictive validity (C-statistic), the net reclassification improvement, and the probability of achieving the difference in likelihood ratio χ2 for the additional degrees of freedom. The primary outcome occurred 755 times in 586 admissions (7%). We analyzed 395 clinical deteriorations with continuous ECG data in the 24 hours prior to an event. Using only continuous ECG measures resulted in a C-statistic of 0.65, similar to models using only laboratory results and vital signs (0.63 and 0.69 respectively). Addition of continuous ECG measures to models using conventional measurements improved the C-statistic by 0.01 and 0.07; a model integrating all data sources had a C-statistic of 0.73 with categorical net reclassification improvement of 0.09 for a change of 1 decile in risk. The difference in likelihood ratio χ2 between integrated models with and without cardiorespiratory dynamics was 2158 (p value: <0.001). Cardiorespiratory dynamics from continuous ECG monitoring detect clinical deterioration in acute care patients and improve performance of conventional models that use only laboratory results and vital signs.
Gui, Xuwei; Xiao, Heping
2014-01-01
This systematic review and meta-analysis was performed to determine the accuracy and usefulness of adenosine deaminase (ADA) in the diagnosis of tuberculous pleurisy. The Medline, Google Scholar and Web of Science databases were searched to identify related studies up to 2014. Two reviewers independently assessed the quality of the included studies according to standard Quality Assessment of Diagnostic Accuracy Studies (QUADAS) criteria. The sensitivity, specificity, diagnostic odds ratio and other parameters of ADA in the diagnosis of tuberculous pleurisy were analyzed with Meta-DiSc 1.4 software and pooled using the random effects model. Twelve studies including 865 tuberculous pleurisy patients and 1379 non-tuberculous pleurisy subjects were identified from 110 studies for this meta-analysis. The pooled sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR) and diagnostic odds ratio (DOR) of ADA in the diagnosis of tuberculous pleurisy were 0.86 (95% CI 0.84-0.88), 0.88 (95% CI 0.86-0.90), 6.32 (95% CI 4.83-8.26), 0.15 (95% CI 0.11-0.22) and 45.25 (95% CI 27.63-74.08), respectively. The area under the summary receiver operating characteristic curve (SROC) was 0.9340. Our results demonstrate that the sensitivity and specificity of ADA are high in the diagnosis of tuberculous pleurisy, especially when ADA ≥ 50 U/L. Thus, ADA is a relatively sensitive and specific marker for the diagnosis of tuberculous pleurisy. However, these results should be applied with caution because of the heterogeneity in study design across the included studies. Further studies are required to confirm the optimal cut-off value of ADA.
Relationship Formation and Stability in Emerging Adulthood: Do Sex Ratios Matter?
ERIC Educational Resources Information Center
Warner, Tara D.; Manning, Wendy D.; Giordano, Peggy C.; Longmore, Monica A.
2011-01-01
Research links sex ratios with the likelihood of marriage and divorce. However, whether sex ratios similarly influence precursors to marriage (transitions in and out of dating or cohabiting relationships) is unknown. Utilizing data from the Toledo Adolescent Relationships Study and the 2000 U.S. Census, this study assesses whether sex ratios…
Xu, Xu Steven; Yuan, Min; Yang, Haitao; Feng, Yan; Xu, Jinfeng; Pinheiro, Jose
2017-01-01
Covariate analysis based on population pharmacokinetics (PPK) is used to identify clinically relevant factors. The likelihood ratio test (LRT) based on nonlinear mixed effect model fits is currently recommended for covariate identification, whereas individual empirical Bayesian estimates (EBEs) are considered unreliable due to the presence of shrinkage. The objectives of this research were to investigate the type I error for LRT and EBE approaches, to confirm the similarity of power between the LRT and EBE approaches from a previous report and to explore the influence of shrinkage on LRT and EBE inferences. Using an oral one-compartment PK model with a single covariate impacting on clearance, we conducted a wide range of simulations according to a two-way factorial design. The results revealed that the EBE-based regression not only provided almost identical power for detecting a covariate effect, but also controlled the false positive rate better than the LRT approach. Shrinkage of EBEs is likely not the root cause for decrease in power or inflated false positive rate although the size of the covariate effect tends to be underestimated at high shrinkage. In summary, contrary to the current recommendations, EBEs may be a better choice for statistical tests in PPK covariate analysis compared to LRT. We proposed a three-step covariate modeling approach for population PK analysis to utilize the advantages of EBEs while overcoming their shortcomings, which allows not only markedly reducing the run time for population PK analysis, but also providing more accurate covariate tests.
Behavior of the maximum likelihood in quantum state tomography
NASA Astrophysics Data System (ADS)
Scholten, Travis L.; Blume-Kohout, Robin
2018-02-01
Quantum state tomography on a d-dimensional system demands resources that grow rapidly with d. They may be reduced by using model selection to tailor the number of parameters in the model (i.e., the size of the density matrix). Most model selection methods typically rely on a test statistic and a null theory that describes its behavior when two models are equally good. Here, we consider the loglikelihood ratio. Because of the positivity constraint ρ ≥ 0, quantum state space does not generally satisfy local asymptotic normality (LAN), meaning the classical null theory for the loglikelihood ratio (the Wilks theorem) should not be used. Thus, understanding and quantifying how positivity affects the null behavior of this test statistic is necessary for its use in model selection for state tomography. We define a new generalization of LAN, metric-projected LAN, show that quantum state space satisfies it, and derive a replacement for the Wilks theorem. In addition to enabling reliable model selection, our results shed more light on the qualitative effects of the positivity constraint on state tomography.
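For contrast with the quantum setting above, the following sketch shows the classical recipe that the Wilks theorem licenses when local asymptotic normality does hold: twice the loglikelihood ratio between nested models referred to a chi-square distribution. The data and models are a deliberately simple (non-quantum) stand-in.

```python
import numpy as np
from scipy.stats import chi2, norm

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, size=100)                 # data generated under the null (mean 0)

ll_null = np.sum(norm.logpdf(x, loc=0.0, scale=np.sqrt(np.mean(x ** 2))))   # MLE of sd under H0
ll_alt = np.sum(norm.logpdf(x, loc=x.mean(), scale=x.std()))                # MLEs under H1
lam = 2 * (ll_alt - ll_null)                       # loglikelihood ratio statistic
print("2 log LR =", round(lam, 3), " p =", round(chi2.sf(lam, df=1), 3))    # Wilks: chi2(1)
```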
NASA Astrophysics Data System (ADS)
Neuer, Marcus J.
2013-11-01
A technique for the spectral identification of strontium-90 is shown, utilising a Maximum-Likelihood deconvolution. Different deconvolution approaches are discussed and summarised. Based on the intensity distribution of the beta emission and Geant4 simulations, a combined response matrix is derived, tailored to the β- detection process in sodium iodide detectors. It includes scattering effects and attenuation by applying a base material decomposition extracted from Geant4 simulations with a CAD model for a realistic detector system. Inversion results of measurements show the agreement between deconvolution and reconstruction. A detailed investigation with additional masking sources like 40K, 226Ra and 131I shows that a contamination of strontium can be found in the presence of these nuisance sources. Identification algorithms for strontium are presented based on the derived technique. For the implementation of blind identification, an exemplary masking ratio is calculated.
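The deconvolution step can be illustrated with the standard multiplicative maximum-likelihood (Richardson-Lucy / ML-EM) update for a known response matrix; this is only a generic sketch, not the Geant4-derived response or the strontium-specific identification algorithm described above.

```python
import numpy as np

def mlem_deconvolve(measured, R, n_iter=200):
    """Iterative ML-EM deconvolution: measured spectrum (counts), response matrix R."""
    x = np.full(R.shape[1], measured.sum() / R.shape[1])     # flat initial estimate
    for _ in range(n_iter):
        forward = R @ x                                       # predicted spectrum
        ratio = np.where(forward > 0, measured / forward, 0.0)
        x *= (R.T @ ratio) / R.sum(axis=0)                    # multiplicative ML-EM update
    return x

# toy usage: a 50-channel spectrum blurred by a simple triangular response
n = 50
R = np.maximum(0.0, 1.0 - np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 3.0)
R /= R.sum(axis=0)                                            # column-normalized response
truth = np.zeros(n)
truth[20] = 1000.0                                            # a single line source
measured = np.random.default_rng(0).poisson(R @ truth)
print(mlem_deconvolve(measured, R).round(1)[15:25])
```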
A mixture model-based approach to the clustering of microarray expression data.
McLachlan, G J; Bean, R W; Peel, D
2002-03-01
This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic used in conjunction with a threshold on the size of a cluster allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to reduce effectively the dimension of the feature space of genes. The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes are able to be selected that reveal interesting clusterings of the tissues that are either consistent with the external classification of the tissues or with background and biological knowledge of these sets. EMMIX-GENE is available at http://www.maths.uq.edu.au/~gjm/emmix-gene/
How much to trust the senses: Likelihood learning
Sato, Yoshiyuki; Kording, Konrad P.
2014-01-01
Our brain often needs to estimate unknown variables from imperfect information. Our knowledge about the statistical distributions of quantities in our environment (called priors) and currently available information from sensory inputs (called likelihood) are the basis of all Bayesian models of perception and action. While we know that priors are learned, most studies of prior-likelihood integration simply assume that subjects know about the likelihood. However, as the quality of sensory inputs change over time, we also need to learn about new likelihoods. Here, we show that human subjects readily learn the distribution of visual cues (likelihood function) in a way that can be predicted by models of statistically optimal learning. Using a likelihood that depended on color context, we found that a learned likelihood generalized to new priors. Thus, we conclude that subjects learn about likelihood. PMID:25398975
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is used to fit a finite mixture model in order to explore the relationship between nonlinear economic series. Specifically, a two-component normal mixture model is fitted by maximum likelihood in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
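For illustration, the following sketch fits a two-component normal mixture by maximum likelihood using the EM algorithm. It uses synthetic data rather than the stock market and rubber price series analysed in the paper; the initialisation and iteration count are arbitrary choices.

```python
# Hedged sketch: EM algorithm for the MLE of a two-component normal mixture.
import numpy as np
from scipy.stats import norm

def em_two_normal(x, n_iter=200):
    # crude initialisation from the sample
    mu = np.array([x.mean() - x.std(), x.mean() + x.std()])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibilities of each component for each point
        dens = pi * norm.pdf(x[:, None], mu, sigma)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum likelihood updates
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    loglik = np.log((pi * norm.pdf(x[:, None], mu, sigma)).sum(axis=1)).sum()
    return pi, mu, sigma, loglik

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 700)])
print(em_two_normal(x))   # mixing weights, means and SDs close to the generating values
```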
Analyzing repeated measures semi-continuous data, with application to an alcohol dependence study.
Liu, Lei; Strawderman, Robert L; Johnson, Bankole A; O'Quigley, John M
2016-02-01
Two-part random effects models (Olsen and Schafer,(1) Tooze et al.(2)) have been applied to repeated measures of semi-continuous data, characterized by a mixture of a substantial proportion of zero values and a skewed distribution of positive values. In the original formulation of this model, the natural logarithm of the positive values is assumed to follow a normal distribution with a constant variance parameter. In this article, we review and consider three extensions of this model, allowing the positive values to follow (a) a generalized gamma distribution, (b) a log-skew-normal distribution, and (c) a normal distribution after the Box-Cox transformation. We allow for the possibility of heteroscedasticity. Maximum likelihood estimation is shown to be conveniently implemented in SAS Proc NLMIXED. The performance of the methods is compared through applications to daily drinking records in a secondary data analysis from a randomized controlled trial of topiramate for alcohol dependence treatment. We find that all three models provide a significantly better fit than the log-normal model, and there exists strong evidence for heteroscedasticity. We also compare the three models by the likelihood ratio tests for non-nested hypotheses (Vuong(3)). The results suggest that the generalized gamma distribution provides the best fit, though no statistically significant differences are found in pairwise model comparisons. © The Author(s) 2012.
Spatial scan statistics for detection of multiple clusters with arbitrary shapes.
Lin, Pei-Sheng; Kung, Yi-Hung; Clayton, Murray
2016-12-01
In applying scan statistics for public health research, it would be valuable to develop a detection method for multiple clusters that accommodates spatial correlation and covariate effects in an integrated model. In this article, we connect the concepts of the likelihood ratio (LR) scan statistic and the quasi-likelihood (QL) scan statistic to provide a series of detection procedures sufficiently flexible to apply to clusters of arbitrary shape. First, we use an independent scan model for detection of clusters and then a variogram tool to examine the existence of spatial correlation and regional variation based on residuals of the independent scan model. When the estimate of regional variation is significantly different from zero, a mixed QL estimating equation is developed to estimate coefficients of geographic clusters and covariates. We use the Benjamini-Hochberg procedure (1995) to find a threshold for p-values to address the multiple testing problem. A quasi-deviance criterion is used to regroup the estimated clusters to find geographic clusters with arbitrary shapes. We conduct simulations to compare the performance of the proposed method with other scan statistics. For illustration, the method is applied to enterovirus data from Taiwan. © 2016, The International Biometric Society.
The optimal power puzzle: scrutiny of the monotone likelihood ratio assumption in multiple testing.
Cao, Hongyuan; Sun, Wenguang; Kosorok, Michael R
2013-01-01
In single hypothesis testing, power is a non-decreasing function of type I error rate; hence it is desirable to test at the nominal level exactly to achieve optimal power. The puzzle lies in the fact that for multiple testing, under the false discovery rate paradigm, such a monotonic relationship may not hold. In particular, exact false discovery rate control may lead to a less powerful testing procedure if a test statistic fails to fulfil the monotone likelihood ratio condition. In this article, we identify different scenarios wherein the condition fails and give caveats for conducting multiple testing in practical settings.
Validation of the diagnostic score for acute lower abdominal pain in women of reproductive age.
Jearwattanakanok, Kijja; Yamada, Sirikan; Suntornlimsiri, Watcharin; Smuthtai, Waratsuda; Patumanond, Jayanton
2014-01-01
Background. The differential diagnosis of acute appendicitis, obstetric and gynecological conditions (OB-GYNc), or nonspecific abdominal pain in young adult females with lower abdominal pain is clinically challenging. The present study aimed to validate the recently developed clinical score for the diagnosis of acute lower abdominal pain in females of reproductive age. Methods. Medical records of women of reproductive age (15-50 years) who were admitted for acute lower abdominal pain were collected. Validation data were obtained from patients admitted during a different period from the development data. Results. There were 302 patients in the validation cohort. For appendicitis, the score had a sensitivity of 91.9%, a specificity of 79.0%, and a positive likelihood ratio of 4.39. The sensitivity, specificity, and positive likelihood ratio in the diagnosis of OB-GYNc were 73.0%, 91.6%, and 8.73, respectively. The areas under the receiver operating characteristic (ROC) curves and the positive likelihood ratios for appendicitis and OB-GYNc in the validation data were not significantly different from those in the development data, implying similar performance. Conclusion. The clinical score developed for the diagnosis of acute lower abdominal pain in females of reproductive age may be used to guide the differential diagnosis in these patients.
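As a quick check on figures of this kind, the positive and negative likelihood ratios can be recomputed directly from the quoted sensitivity and specificity (LR+ = sensitivity/(1 − specificity), LR− = (1 − sensitivity)/specificity); the snippet below uses the values reported in the abstract and is purely illustrative.

```python
# Recomputing the reported positive likelihood ratios from sensitivity and specificity.
def likelihood_ratios(sensitivity, specificity):
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

for label, sens, spec in [("appendicitis", 0.919, 0.790), ("OB-GYNc", 0.730, 0.916)]:
    lr_pos, lr_neg = likelihood_ratios(sens, spec)
    print(f"{label}: LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")
# appendicitis: LR+ ≈ 4.38 (reported 4.39); OB-GYNc: LR+ ≈ 8.69 (reported 8.73)
```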
Norström, Madelaine; Kristoffersen, Anja Bråthen; Görlach, Franziska Sophie; Nygård, Karin; Hopp, Petter
2015-01-01
In order to facilitate foodborne outbreak investigations, there is a need to improve the methods for identifying the food products that should be sampled for laboratory analysis. The aim of this study was to examine the applicability, to real outbreak data, of a likelihood ratio approach previously developed on simulated data. We used human case and food product distribution data from the Norwegian enterohaemorrhagic Escherichia coli outbreak in 2006. The approach was adjusted to include time and space smoothing and to handle missing or misclassified information. The performance of the adjusted likelihood ratio approach on the data originating from the HUS outbreak and on control data indicates that the adjusted approach is promising and could be a useful tool to assist and facilitate the investigation of foodborne outbreaks in the future, provided that good traceability is available and implemented in the distribution chain. However, the approach needs to be further validated on other outbreak data, including food products other than meat, before a more general conclusion about its applicability can be drawn. PMID:26237468
Song, Hong Ji; Paek, Yu Jin; Choi, Min Kyu; Yoo, Ki-Bong; Kang, Jae-Heon; Lee, Hae-Jeung
2017-06-01
The aim of the present study was to investigate the association between hypertension and carbonated sugar-sweetened beverage (SSB) intake according to gender and obesity. The study used data from the 2007, 2008 and 2009 Korea National Health and Nutrition Examination Surveys. A total of 9869 subjects (men = 3845 and women = 6024) were included. SSB intakes were calculated from food frequency questionnaires. Odds ratios (ORs) and 95% confidence intervals (CIs) for hypertension were assessed using survey logistic regression and multivariable adjusted models. A total of 14.5% of individuals were classified as having hypertension. The likelihood of hypertension in the third, fourth and fifth quintiles of SSB intake increased to ORs of 1.00, 1.20 and 1.42, respectively, after adjusting for confounding factors. Compared with participants in the lowest tertile of SSB intake, participants in the third tertile showed an increased likelihood of hypertension, with ORs (CIs) of 2.00 (1.21-3.31) and 1.75 (1.23-2.49) for obese women and non-obese men, respectively. The present study showed gender differences in the relationship between carbonated SSB intake and hypertension according to obesity status.
Spatial design and strength of spatial signal: Effects on covariance estimation
Irvine, Kathryn M.; Gitelman, Alix I.; Hoeting, Jennifer A.
2007-01-01
In a spatial regression context, scientists are often interested in a physical interpretation of components of the parametric covariance function. For example, spatial covariance parameter estimates in ecological settings have been interpreted to describe spatial heterogeneity or “patchiness” in a landscape that cannot be explained by measured covariates. In this article, we investigate the influence of the strength of spatial dependence on maximum likelihood (ML) and restricted maximum likelihood (REML) estimates of covariance parameters in an exponential-with-nugget model, and we also examine these influences under different sampling designs—specifically, lattice designs and more realistic random and cluster designs—at differing intensities of sampling (n=144 and 361). We find that neither ML nor REML estimates perform well when the range parameter and/or the nugget-to-sill ratio is large—ML tends to underestimate the autocorrelation function and REML produces highly variable estimates of the autocorrelation function. The best estimates of both the covariance parameters and the autocorrelation function come under the cluster sampling design and large sample sizes. As a motivating example, we consider a spatial model for stream sulfate concentration.
Reliability of Soft Tissue Model Based Implant Surgical Guides; A Methodological Mistake.
Sabour, Siamak; Dastjerdi, Elahe Vahid
2012-08-20
Abstract. We were interested to read the paper by Maney P and colleagues published in the July 2012 issue of J Oral Implantol. The authors aimed to assess the reliability of soft tissue model based implant surgical guides and reported that accuracy was evaluated using software.1 I found the manuscript title of Maney P, et al. incorrect and misleading. Moreover, they reported that twenty-two sites (46.81%) were considered accurate (13 of 24 maxillary and 9 of 23 mandibular sites). As the authors point out in their conclusion, soft tissue models do not always provide sufficient accuracy for implant surgical guide fabrication. Reliability (precision) and validity (accuracy) are two different methodological issues in research. Sensitivity, specificity, PPV, NPV, the positive likelihood ratio (sensitivity/(1 − specificity)) and the negative likelihood ratio ((1 − sensitivity)/specificity), as well as the odds ratio, are among the measures used to evaluate the validity (accuracy) of a single test against a gold standard.2-4 It is not clear to which of the above-mentioned estimates of validity the reported twenty-two accurate sites (46.81%) correspond. Reliability (repeatability or reproducibility) is often assessed with statistical tests such as the Pearson r, least squares and the paired t-test, all of which are common mistakes in reliability analysis.5 Briefly, for quantitative variables the intraclass correlation coefficient (ICC) should be used, and for qualitative variables weighted kappa, applied with caution because kappa has its own limitations. Regarding reliability or agreement, it is worth noting that in computing the kappa value only concordant cells are considered, whereas discordant cells should also be taken into account in order to reach a correct estimate of agreement (weighted kappa).2-4 As a take-home message, appropriate tests should be applied for reliability and validity analyses.
A radiographic study of the mandibular third molar root development in different ethnic groups.
Liversidge, H M; Peariasamy, K; Folayan, M O; Adeniyi, A O; Ngom, P I; Mikami, Y; Shimada, Y; Kuroe, K; Tvete, I F; Kvaal, S I
2017-12-01
The nature of differences in the timing of tooth formation between ethnic groups is important when estimating age. The aim was to calculate the ages of transition between mandibular third molar (M3) tooth stages from archived dental radiographs from sub-Saharan Africa, Malaysia, Japan and two groups from London, UK (Whites and Bangladeshi). The number of radiographs was 4555 (2028 males, 2527 females), with an age range of 10-25 years. The left M3 was staged using Moorrees stages. A probit model was fitted to calculate mean ages of transition between stages for males and females and for each ethnic group separately, and the estimated age distribution given each M3 stage was calculated. To assess differences in the timing of M3 between ethnic groups, three models were proposed: a separate model for each ethnic group, a joint model, and a third model combining some aspects across groups. Model fit was compared using the Bayesian and Akaike information criteria (BIC and AIC) and the log likelihood ratio test. Differences in mean ages of M3 root stages were found between ethnic groups; however, all groups showed large standard deviations. The AIC and log likelihood ratio test indicated that a separate model for each ethnic group was best. Small differences were also noted in the timing of M3 between males and females, with the exception of the Malaysian group. These findings suggest that features of a reference data set (wide age range and uniform age distribution) and a Bayesian statistical approach are more important than population-specific convenience samples when estimating the age of an individual using M3. Some group differences were evident in M3 timing; however, this has some impact on the confidence interval of estimated age in females and little impact in males because of the large variation in age.
NASA Astrophysics Data System (ADS)
Akgun, Aykut; Dag, Serhat; Bulut, Fikri
2008-05-01
Landslides are a very common natural problem in the Black Sea Region of Turkey owing to the steep topography, improper land use and climatic conditions conducive to landsliding. In the western part of the region, many studies have been carried out, especially in the last decade, on landslide susceptibility mapping using different evaluation methods such as deterministic approaches, landslide distribution analyses, and qualitative, statistical and distribution-free analyses. The purpose of this study is to produce landslide susceptibility maps of a landslide-prone area (Findikli district, Rize) located in the eastern part of the Black Sea Region of Turkey using a likelihood frequency ratio (LRM) model and a weighted linear combination (WLC) model, and to compare the results obtained. For this purpose, landslide inventory maps of the area were prepared for the years 1983 and 1995 from detailed field surveys and aerial-photography studies. Slope angle, slope aspect, lithology, distance from drainage lines, distance from roads and the land cover of the study area are considered as the landslide-conditioning parameters. The differences between the susceptibility maps derived by the LRM and WLC models are relatively minor when broad-based classifications are taken into account; however, the WLC map shows more detail, whereas the LRM map produces weaker results, because the majority of pixels in the LRM map have higher values than in the WLC-derived susceptibility map. In order to validate the two susceptibility maps, both were compared with the landslide inventory map. Although no landslides fall in the very high susceptibility class of either map, 79% of the landslides fall into the high and very high susceptibility zones of the WLC map, compared with 49% for the LRM map. This shows that the WLC model performed better than the LRM model.
Concepts, challenges, and successes in modeling thermodynamics of metabolism.
Cannon, William R
2014-01-01
The modeling of the chemical reactions involved in metabolism is a daunting task. Ideally, the modeling of metabolism would use kinetic simulations, but these simulations require knowledge of the thousands of rate constants involved in the reactions. The measurement of rate constants is very labor intensive, and hence rate constants for most enzymatic reactions are not available. Consequently, constraint-based flux modeling has been the method of choice because it does not require the use of the rate constants of the law of mass action. However, this convenience also limits the predictive power of constraint-based approaches in that the law of mass action is used only as a constraint, making it difficult to predict metabolite levels or energy requirements of pathways. An alternative to both of these approaches is to model metabolism using simulations of states rather than simulations of reactions, in which the state is defined as the set of all metabolite counts or concentrations. While kinetic simulations model reactions based on the likelihood of the reaction derived from the law of mass action, states are modeled based on likelihood ratios of mass action. Both approaches provide information on the energy requirements of metabolic reactions and pathways. However, modeling states rather than reactions has the advantage that the parameters needed to model states (chemical potentials) are much easier to determine than the parameters needed to model reactions (rate constants). Herein, we discuss recent results, assumptions, and issues in using simulations of state to model metabolism.
Prediction of hamstring injury in professional soccer players by isokinetic measurements
Dauty, Marc; Menu, Pierre; Fouasson-Chailloux, Alban; Ferréol, Sophie; Dubois, Charles
2016-01-01
Summary. Objectives: Previous studies investigating the ability of isokinetic strength ratios to predict hamstring injuries in soccer players have reported conflicting results. Hypothesis: To determine whether isokinetic ratios can predict hamstring injury occurring during the season in professional soccer players. Study design: Case-control study; level of evidence: 3. Methods: From 2001 to 2011, 350 isokinetic tests were performed in 136 professional soccer players at the beginning of the soccer season. Fifty-seven players suffered hamstring injury during the season that followed the isokinetic tests. These players were compared with the 79 uninjured players. The bilateral concentric ratio (hamstring-to-hamstring), ipsilateral concentric ratio (hamstring-to-quadriceps), and mixed ratio (eccentric/concentric hamstring-to-quadriceps) were studied. The predictive ability of each ratio was established based on the likelihood ratio and post-test probability. Results: The mixed ratio (30 eccentric/240 concentric hamstring-to-quadriceps) <0.8, ipsilateral ratio (180 concentric hamstring-to-quadriceps) <0.47, and bilateral ratio (60 concentric hamstring-to-hamstring) <0.85 were the most predictive of hamstring injury. The ipsilateral ratio <0.47 allowed prediction of the severity of the hamstring injury, and was also influenced by the length of time since administration of the isokinetic tests. Conclusion: Isokinetic ratios are useful for predicting the likelihood of hamstring injury in professional soccer players during the competitive season. PMID:27331039
Detecting Multiple Model Components with the Likelihood Ratio Test
NASA Astrophysics Data System (ADS)
Protassov, R. S.; van Dyk, D. A.
2000-05-01
The likelihood ratio test (LRT) and F-test, popularized in astrophysics by Bevington (Data Reduction and Error Analysis for the Physical Sciences) and Cash (1979, ApJ 228, 939), do not (even asymptotically) adhere to their nominal χ2 and F distributions in many statistical tests commonly used in astrophysics. The many legitimate uses of the LRT (see, e.g., the examples given in Cash 1979) notwithstanding, it can be impossible to compute the false positive rate of the LRT or related tests such as the F-test. For example, although Cash (1979) did not suggest the LRT for detecting a line profile in a spectral model, it has become common practice to use it for this purpose despite the lack of certain required mathematical regularity conditions. Contrary to common practice, the nominal distribution of the LRT statistic should not be used in these situations. In this paper, we characterize an important class of problems where the LRT fails, show the non-standard behavior of the test in this setting, and provide a Bayesian alternative to the LRT, i.e., posterior predictive p-values. We emphasize that there are many legitimate uses of the LRT in astrophysics, and even when the LRT is inappropriate, there remain several statistical alternatives (e.g., judicious use of error bars and Bayes factors). We illustrate this point in our analysis of GRB 970508, which was studied by Piro et al. (ApJ, 514:L73-L77, 1999).
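The regularity problem described here can be illustrated with a toy simulation (not the paper's GRB analysis): when the amplitude of an extra component is constrained to be non-negative and the true amplitude is zero, the LRT statistic follows a 50:50 mixture of a point mass at zero and a χ2 with one degree of freedom rather than the nominal χ2 distribution, so nominal tail probabilities are wrong.

```python
# Toy illustration: LRT for H0: amplitude = 0 vs H1: amplitude >= 0, Gaussian
# noise with known sigma. Under H0 the statistic is max(0, z)^2 in units of the
# standard error, i.e. a 50:50 mixture of 0 and chi^2_1 -- not chi^2_1.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
n_sim, n_obs, sigma = 20000, 50, 1.0
se = sigma / np.sqrt(n_obs)
z = rng.normal(0.0, se, n_sim)                 # unrestricted MLE of the amplitude under H0
lrt = np.where(z > 0, (z / se) ** 2, 0.0)      # restricted-MLE LRT statistic

nominal = chi2(1).sf(2.71)                     # ~0.10 if the nominal chi^2_1 held
simulated = (lrt > 2.71).mean()                # ~0.05 in fact
print(f"nominal tail probability {nominal:.3f}, simulated {simulated:.3f}")
```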
Davidov, Ori; Rosen, Sophia
2011-04-01
In medical studies, endpoints are often measured for each patient longitudinally. The mixed-effects model has been a useful tool for the analysis of such data. There are situations in which the parameters of the model are subject to restrictions or constraints. For example, in hearing loss studies we expect hearing to deteriorate with time, meaning that hearing thresholds, which reflect hearing acuity, will on average increase over time; the regression coefficients associated with the mean effect of time on hearing ability are therefore constrained. Such constraints should be accounted for in the analysis. We propose maximum likelihood estimation procedures, based on the expectation-conditional maximization either (ECME) algorithm, to estimate the parameters of the model while accounting for the constraints on them. The proposed methods improve, in terms of mean squared error, on the unconstrained estimators; in some settings the improvement may be substantial. Hypothesis testing procedures that incorporate the constraints are also developed. Specifically, likelihood ratio, Wald, and score tests are proposed and investigated, and their empirical significance levels and power are studied using simulations. It is shown that incorporating the constraints improves the mean squared error of the estimates and the power of the tests, and these improvements may be substantial. The methodology is used to analyze a hearing loss study.
Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation
NASA Astrophysics Data System (ADS)
Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.
2016-12-01
With the growing impacts of climate change and human activities on the water cycle, an increasing amount of research focuses on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, with each model assigned a weight determined by its prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a recently proposed method for marginal likelihood estimation. NSE works by gradually moving through the parameter space from regions of low likelihood to regions of high likelihood, an evolution carried out iteratively via a local sampling procedure; the efficiency of NSE is therefore dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling, but M-H is not an efficient sampler for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, we incorporate the robust and efficient DREAMzs sampling algorithm into the local sampling step of NSE. The comparison results demonstrate that the improved NSE increases the efficiency of marginal likelihood estimation significantly. However, both the improved and the original NSE suffer from heavy instability. In addition, the heavy computational cost of the large number of model executions is reduced by using adaptive sparse grid surrogates.
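A minimal nested-sampling sketch for a one-dimensional toy problem is given below. It draws replacement live points by plain rejection from the prior, which is only workable for toy problems; the abstract's point is precisely that this local sampling step should be replaced by an efficient sampler such as DREAMzs. The prior bounds, likelihood and iteration count are illustrative assumptions.

```python
# Toy nested-sampling estimate of the marginal likelihood (evidence) for a
# uniform prior on [-5, 5] and a Gaussian likelihood of a single datum.
import numpy as np

rng = np.random.default_rng(3)
datum, sigma = 1.3, 0.5

def loglike(theta):
    return -0.5 * ((datum - theta) / sigma) ** 2 - 0.5 * np.log(2 * np.pi * sigma**2)

def prior_draw(size=None):
    return rng.uniform(-5.0, 5.0, size)

n_live, n_iter = 400, 2400
live = prior_draw(n_live)
live_logl = loglike(live)
log_z = -np.inf
log_width = np.log(1.0 - np.exp(-1.0 / n_live))    # first prior-mass shell

for _ in range(n_iter):
    worst = np.argmin(live_logl)
    log_z = np.logaddexp(log_z, log_width + live_logl[worst])
    # replace the worst live point by a prior draw above the likelihood threshold
    threshold = live_logl[worst]
    while True:
        candidate = prior_draw()
        if loglike(candidate) > threshold:
            break
    live[worst], live_logl[worst] = candidate, loglike(candidate)
    log_width -= 1.0 / n_live                      # shrink the remaining prior mass

# contribution of the remaining live points spread over the leftover prior mass
log_z = np.logaddexp(log_z, np.logaddexp.reduce(live_logl) - n_iter / n_live - np.log(n_live))
print(log_z, np.log(0.1))   # both should be close to log(0.1) ≈ -2.30 for this toy setup
```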
Male sexual strategies modify ratings of female models with specific waist-to-hip ratios.
Brase, Gary L; Walker, Gary
2004-06-01
Female waist-to-hip ratio (WHR) has generally been an important general predictor of ratings of physical attractiveness and related characteristics. Individual differences in ratings do exist, however, and may be related to differences in the reproductive tactics of the male raters such as pursuit of short-term or long-term relationships and adjustments based on perceptions of one's own quality as a mate. Forty males, categorized according to sociosexual orientation and physical qualities (WHR, Body Mass Index, and self-rated desirability), rated female models on both attractiveness and likelihood they would approach them. Sociosexually restricted males were less likely to approach females rated as most attractive (with 0.68-0.72 WHR), as compared with unrestricted males. Males with lower scores in terms of physical qualities gave ratings indicating more favorable evaluations of female models with lower WHR. The results indicate that attractiveness and willingness to approach are overlapping but distinguishable constructs, both of which are influenced by variations in characteristics of the raters.
The continuum fusion theory of signal detection applied to a bi-modal fusion problem
NASA Astrophysics Data System (ADS)
Schaum, A.
2011-05-01
A new formalism has been developed that produces detection algorithms for model-based problems in which one or more parameter values are unknown. Continuum Fusion can be used to generate different flavors of algorithm for any composite hypothesis testing problem. The methodology is defined by a fusion logic that can be translated into max/min conditions. Here it is applied to a simple sensor fusion model, but one for which the generalized likelihood ratio (GLR) test is intractable. By contrast, a fusion-based response to the same problem can be devised that is solvable in closed form and represents a good approximation to the GLR test.
Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies
Rukhin, Andrew L.
2011-01-01
A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when methods variances are considered to be known an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583
Sull, Jae Woong; Liang, Kung-Yee; Hetmanski, Jacqueline B.; Fallin, M. Daniele; Ingersoll, Roxanne G.; Park, Ji Wan; Wu-Chou, Yah-Huei; Chen, Philip K.; Chong, Samuel S.; Cheah, Felicia; Yeow, Vincent; Park, Beyoung Yun; Jee, Sun Ha; Jabs, Ethylin W.; Redett, Richard; Scott, Alan F.; Beaty, Terri H.
2009-01-01
Isolated cleft palate is among the most common human birth defects. The TCOF1 gene has been suggested as a candidate gene for cleft palate based on animal models. This study tests for association between markers in TCOF1 and isolated, nonsyndromic cleft palate using a case-parent trio design considering parent-of-origin effects. Case-parent trios from three populations (comprising a total of 81 case-parent trios) were genotyped for single nucleotide polymorphisms (SNPs) in the TCOF1 gene. We used the transmission disequilibrium test and the transmission asymmetry test on individual SNPs. When all trios were combined, the odds ratio for transmission of the minor allele, OR(transmission), was significant for SNP rs15251 (OR = 2.88, P = 0.007), as well as rs2255796 and rs2569062 (OR = 2.08, P = 0.03; OR = 2.43, P = 0.041; respectively) when parent of origin was not considered. The transmission asymmetry test also revealed one SNP (rs15251) showing excess maternal transmission significant at the P = 0.005 level (OR = 6.50). Parent-of-origin effects were assessed using the parent-of-origin likelihood ratio test on both SNPs and haplotypes. While the parent-of-origin likelihood ratio test was only marginally significant for this SNP (P = 0.136), analysis of haplotypes of rs2255796 and rs15251 suggested excess maternal transmission. Therefore, these data suggest TCOF1 may influence risk of cleft palate through a parent-of-origin effect. PMID:18688869
Impact of a diagnosis-related group payment system on cesarean section in Korea.
Kim, Seung Ju; Han, Kyu-Tae; Kim, Sun Jung; Park, Eun-Cheol; Park, Hye Ki
2016-06-01
Cesarean sections (CSs) are the most expensive method of delivery, which may affect the physician's choice of treatment when providing health services to patients. We investigated the effects of the diagnosis-related group (DRG)-based payment system on CSs in Korea. We used National Health Insurance claim data from 2011 to 2014, which included 1,289,989 delivery cases at 674 hospitals. We used a generalized estimating equation model to evaluate the association between the likelihood of cesarean delivery and the length of the DRG adoption period. A total of 477,309 (37.0%) delivery cases were performed by CSs. We found that a longer DRG adoption period was associated with a lower odds ratio of CSs (odds ratio [OR]: 0.997, 95% CI: 0.996-0.998). In addition, a longer DRG adoption period was associated with a lower odds ratio for CSs in hospitals that had voluntarily adopted the DRG system. Similar results were also observed for urban hospitals, primiparas, and those under 28 years old and over 33 years old. Our results suggest that the change in the reimbursement system was associated with a low likelihood of CSs. The impact of DRG adoption on cesarean delivery can also be expected to increase with time, as our finding provides evidence that the reimbursement system is associated with the health provider's decision to provide health services for patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Relationships between dog ownership and physical activity in postmenopausal women.
Garcia, David O; Wertheim, Betsy C; Manson, JoAnn E; Chlebowski, Rowan T; Volpe, Stella L; Howard, Barbara V; Stefanick, Marcia L; Thomson, Cynthia A
2015-01-01
Positive associations between dog ownership and physical activity in older adults have been previously reported. The objective of this study was to examine cross-sectional associations between dog ownership and physical activity measures in a well-characterized, diverse sample of postmenopausal women. Analyses included 36,984 dog owners (mean age: 61.5 years) and 115,645 non-dog owners (mean age: 63.9 years) enrolled in a clinical trial or the observational study of the Women's Health Initiative between 1993 and 1998. Logistic regression models were used to test for associations between dog ownership and physical activity, adjusted for potential confounders. Owning a dog was associated with a higher likelihood of walking ≥150 min/wk (odds ratio, 1.14; 95% confidence interval, 1.10-1.17) and a lower likelihood of being sedentary ≥8 h/day (odds ratio, 0.86; 95% confidence interval, 0.83-0.89) as compared with not owning a dog. Dog owners were also slightly more likely to meet ≥7.5 MET-h/wk of total physical activity than non-dog owners (odds ratio, 1.03; 95% confidence interval, 1.00-1.07). Dog ownership is associated with increased physical activity in older women, particularly among women living alone. Health promotion efforts aimed at older adults should highlight the benefits of regular dog walking for both dog owners and non-dog owners. Copyright © 2014 Elsevier Inc. All rights reserved.
Hurdle models for multilevel zero-inflated data via h-likelihood.
Molas, Marek; Lesaffre, Emmanuel
2010-12-30
Count data often exhibit overdispersion. One type of overdispersion arises when there is an excess of zeros in comparison with the standard Poisson distribution. Zero-inflated Poisson and hurdle models have been proposed to perform a valid likelihood-based analysis to account for the surplus of zeros. Further, data often arise in clustered, longitudinal or multiple-membership settings. The proper analysis needs to reflect the design of a study. Typically random effects are used to account for dependencies in the data. We examine the h-likelihood estimation and inference framework for hurdle models with random effects for complex designs. We extend the h-likelihood procedures to fit hurdle models, thereby extending h-likelihood to truncated distributions. Two applications of the methodology are presented. Copyright © 2010 John Wiley & Sons, Ltd.
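A stripped-down illustration of the hurdle idea (without random effects or the h-likelihood machinery discussed in the abstract) combines a Bernoulli component for zero versus positive counts with a zero-truncated Poisson for the positive part; the sketch below fits such a model to synthetic data by direct maximization of the likelihood. All parameter values and the optimizer choice are illustrative.

```python
# Hedged sketch: maximum-likelihood fit of a plain hurdle Poisson model.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def neg_loglik(params, y):
    logit_p, log_lam = params                  # p = P(y > 0), lam = truncated-Poisson rate
    p, lam = expit(logit_p), np.exp(log_lam)
    zero = y == 0
    ll = np.sum(np.log(1 - p) * zero)
    pos = y[~zero]
    # zero-truncated Poisson: log p + y log lam - lam - log(y!) - log(1 - e^-lam)
    ll += np.sum(np.log(p) + pos * np.log(lam) - lam - gammaln(pos + 1)
                 - np.log(1 - np.exp(-lam)))
    return -ll

rng = np.random.default_rng(4)
n, p_true, lam_true = 2000, 0.4, 3.0
positive = rng.random(n) < p_true
y = np.where(positive, rng.poisson(lam_true, n), 0)
# redraw any zeros in the "positive" part so the synthetic data follow an exact hurdle structure
while np.any(positive & (y == 0)):
    idx = positive & (y == 0)
    y[idx] = rng.poisson(lam_true, idx.sum())

fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
print(expit(fit.x[0]), np.exp(fit.x[1]))   # estimates close to 0.4 and 3.0
```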
Maximum Likelihood Analysis in the PEN Experiment
NASA Astrophysics Data System (ADS)
Lehman, Martin
2013-10-01
The experimental determination of the π+ --> e+ ν (γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10^-3 to 5 × 10^-4 using a stopped beam approach. During runs in 2008-10, PEN acquired over 2 × 10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ --> e+ ν, π+ --> μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Clinical Evaluation and Physical Exam Findings in Patients with Anterior Shoulder Instability.
Lizzio, Vincent A; Meta, Fabien; Fidai, Mohsin; Makhni, Eric C
2017-12-01
The goal of this paper is to provide an overview of the evaluation of patients with suspected or known anteroinferior glenohumeral instability. There is a high rate of recurrent subluxation or dislocation in young patients with a history of anterior shoulder dislocation, and recurrent instability increases the likelihood of further damage to the glenohumeral joint. Proper identification and treatment of anterior shoulder instability can dramatically reduce the rate of recurrent dislocation and prevent subsequent complications. Overall, the anterior release or surprise test demonstrates the best sensitivity and specificity for clinically diagnosing anterior shoulder instability, although other tests also have favorable sensitivities, specificities, positive likelihood ratios, negative likelihood ratios, and inter-rater reliabilities. Anterior shoulder instability is a relatively common injury in the young and athletic population. The combination of history and the apprehension, relocation, release or surprise, anterior load, and anterior drawer exam maneuvers will optimize sensitivity and specificity for accurately diagnosing anterior shoulder instability in clinical practice.
Subjective global assessment of nutritional status in children.
Mahdavi, Aida Malek; Ostadrahimi, Alireza; Safaiyan, Abdolrasool
2010-10-01
This study aimed to compare subjective and objective nutritional assessments and to analyse the performance of the subjective global assessment (SGA) of nutritional status in diagnosing undernutrition in paediatric patients. One hundred and forty children (aged 2-12 years) hospitalized consecutively in Tabriz Paediatric Hospital from June 2008 to August 2008 underwent subjective assessment using the SGA questionnaire and objective assessment, including anthropometric and biochemical measurements. Agreement between the two assessment methods was analysed using the kappa (κ) statistic. Statistical indicators (including sensitivity, specificity, predictive values, error rates, accuracy, powers, likelihood ratios and the odds ratio) comparing the SGA with the objective assessment method were determined. The overall prevalence of undernutrition according to the SGA (70.7%) was higher than that by objective assessment of nutritional status (48.5%). Agreement between the two evaluation methods was only fair to moderate (κ = 0.336, P < 0.001). The sensitivity, specificity, and positive and negative predictive values of the SGA method for screening undernutrition in this population were 88.235%, 45.833%, 60.606% and 80.487%, respectively. The accuracy, positive power and negative power of the SGA method were 66.428%, 56.074% and 41.25%, respectively. The positive likelihood ratio, negative likelihood ratio and odds ratio of the SGA method were 1.628, 0.256 and 6.359, respectively. Our findings indicate that, in assessing the nutritional status of children, there is not a good level of agreement between SGA and objective nutritional assessment. In addition, SGA is a highly sensitive tool for assessing nutritional status and can identify children at risk of developing undernutrition. © 2009 Blackwell Publishing Ltd.
Stram, Daniel O; Leigh Pearce, Celeste; Bretsky, Phillip; Freedman, Matthew; Hirschhorn, Joel N; Altshuler, David; Kolonel, Laurence N; Henderson, Brian E; Thomas, Duncan C
2003-01-01
The US National Cancer Institute has recently sponsored the formation of a Cohort Consortium (http://2002.cancer.gov/scpgenes.htm) to facilitate the pooling of data on very large numbers of people concerning the effects of genes and environment on cancer incidence. One likely goal of these efforts will be to generate a large population-based case-control series in which a number of candidate genes will be investigated using SNP haplotype as well as genotype analysis. The goal of this paper is to outline the issues involved in choosing a method for estimating haplotype-specific risks from such data that is technically appropriate and yet attractive to epidemiologists who are already comfortable with odds ratios and logistic regression. Our interest is to develop and evaluate extensions of recently described methods based on haplotype imputation (Schaid et al., Am J Hum Genet, 2002, and Zaykin et al., Hum Hered, 2002), which provide score tests of the null hypothesis of no effect of SNP haplotypes upon risk, for more complex tasks such as providing confidence intervals and tests of equivalence of haplotype-specific risks in two or more separate populations. To do so we (1) develop a cohort approach towards odds ratio analysis by expanding the E-M algorithm to provide maximum likelihood estimates of haplotype-specific odds ratios as well as genotype frequencies; (2) show how to correct the cohort approach, to give essentially unbiased estimates for population-based or nested case-control studies, by incorporating the probability of selection as a case or control into the likelihood, based on a simplified model of case and control selection; and (3) finally, in an example data set (CYP17 and breast cancer, from the Multiethnic Cohort Study), compare likelihood-based confidence interval estimates from the two methods with each other and with the single-imputation approach of Zaykin et al. applied under both null and alternative hypotheses. We conclude that, so long as haplotypes are well predicted by SNP genotypes (we use the Rh2 criterion of Stram et al. [1]), the differences between the three methods are very small, and in particular that the single-imputation method may be expected to work extremely well. Copyright 2003 S. Karger AG, Basel
THE EFFECT OF A MALE SURPLUS ON INTIMATE PARTNER VIOLENCE IN INDIA.
Bose, Sunita; Trent, Katherine; South, Scott J
2013-08-31
Theories of the social consequences of imbalanced sex ratios posit that men will exercise extraordinarily strict control over women's behaviour when women's relationship options are plentiful and men's own options are limited. We use data from the third wave of the Indian National Family and Health Survey, conducted in 2005-06, to explore this issue, investigating the effect of the community sex ratio on women's experience of intimate partner violence in India. Multilevel logistic regression models show that a relative surplus of men in a community increases the likelihood of physical abuse by husbands even after adjusting for various other individual, household, and geographic characteristics. Further evidence of control over women when there is a sex ratio imbalance is provided by the increased odds of husbands distrusting wives with money when there is a male surplus in the local community.
Investigating Gender Differences under Time Pressure in Financial Risk Taking
Xie, Zhixin; Page, Lionel; Hardy, Ben
2017-01-01
There is a significant gender imbalance on financial trading floors. This motivated us to investigate gender differences in financial risk taking under pressure. We used a well-established approach from behavioral economics to analyze a series of risky monetary choices by male and female participants with and without time pressure. We also used the second-to-fourth digit ratio (2D:4D) and the facial width-to-height ratio (fWHR) as correlates of prenatal exposure to testosterone. We constructed a structural model and estimated the participants' risk attitudes and probability perceptions via maximum likelihood estimation under both expected utility (EU) and rank-dependent utility (RDU) models. In line with existing research, we found that male participants are less risk averse and that the gender gap in risk attitudes increases under moderate time pressure. We found that female participants with lower 2D:4D ratios and higher fWHR are less risk averse in the RDU estimates. Males with lower 2D:4D ratios were less risk averse in the EU estimations, but more risk averse using the RDU estimates. We also observe that men whose ratios indicate a greater prenatal exposure to testosterone exhibit greater optimism and overestimation of small probabilities of success. PMID:29326566
Polcari, J.
2013-08-16
The signal processing concept of signal-to-noise ratio (SNR), in its role as a performance measure, is recast within the more general context of information theory, leading to a series of useful insights. Establishing generalized SNR (GSNR) as a rigorous information theoretic measure inherent in any set of observations significantly strengthens its quantitative performance pedigree while simultaneously providing a specific definition under general conditions. This directly leads to consideration of the log likelihood ratio (LLR): first, as the simplest possible information-preserving transformation (i.e., signal processing algorithm) and subsequently, as an absolute, comparable measure of information for any specific observation exemplar. Furthermore, the information accounting methodology that results permits practical use of both GSNR and LLR as diagnostic scalar performance measurements, directly comparable across alternative system/algorithm designs, applicable at any tap point within any processing string, in a form that is also comparable with the inherent performance bounds due to information conservation.
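The connection between the LLR and SNR is easiest to see in the textbook case of a known signal in white Gaussian noise (not the GSNR formalism of the report itself): the LLR reduces to a matched filter whose mean under the signal hypothesis is SNR²/2. The short sketch below uses an invented sinusoidal template.

```python
# Textbook illustration: LLR for a known signal s in white Gaussian noise.
# E[LLR | signal present] = SNR^2 / 2, with SNR^2 = s^T s / sigma^2.
import numpy as np

rng = np.random.default_rng(5)
n, sigma = 256, 1.0
s = 0.3 * np.sin(2 * np.pi * 5 * np.arange(n) / n)    # known signal template

def llr(x, s, sigma):
    # log p(x | signal) - log p(x | noise-only): a matched filter plus a constant
    return (x @ s - 0.5 * s @ s) / sigma**2

snr2 = s @ s / sigma**2
x_sig = s + rng.normal(0, sigma, n)
x_noise = rng.normal(0, sigma, n)
print(llr(x_sig, s, sigma), llr(x_noise, s, sigma), snr2 / 2)
```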
MODEL-BASED CLUSTERING FOR CLASSIFICATION OF AQUATIC SYSTEMS AND DIAGNOSIS OF ECOLOGICAL STRESS
Clustering approaches were developed using the classification likelihood, the mixture likelihood, and also using a randomization approach with a model index. Using a clustering approach based on the mixture and classification likelihoods, we have developed an algorithm that...
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen
2018-07-01
Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
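For readers unfamiliar with likelihood-free inference, the sketch below shows its simplest variant, rejection ABC with a one-number summary statistic. It is deliberately much cruder than DELFI (no density estimation, no optimal compression) and uses an invented Gaussian toy model; the prior range, tolerance and simulation budget are arbitrary.

```python
# Rejection-ABC sketch: infer the mean of a Gaussian (known sigma) from forward
# simulations only, compressing each simulated data set to its sample mean.
import numpy as np

rng = np.random.default_rng(6)
sigma, n_obs, theta_true = 1.0, 50, 0.7
observed = rng.normal(theta_true, sigma, n_obs)
obs_summary = observed.mean()                  # one summary per parameter

def simulate(theta):
    return rng.normal(theta, sigma, n_obs).mean()

n_sims, epsilon = 50_000, 0.05
theta_prior = rng.uniform(-3, 3, n_sims)       # draws from a flat prior
summaries = np.array([simulate(t) for t in theta_prior])
posterior = theta_prior[np.abs(summaries - obs_summary) < epsilon]
# posterior mean close to the observed sample mean; spread roughly sigma / sqrt(n_obs)
print(posterior.mean(), posterior.std(), len(posterior))
```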
Chaikriangkrai, Kongkiat; Jhun, Hye Yeon; Shantha, Ghanshyam Palamaner Subash; Abdulhak, Aref Bin; Tandon, Rudhir; Alqasrawi, Musab; Klappa, Anthony; Pancholy, Samir; Deshmukh, Abhishek; Bhama, Jay; Sigurdsson, Gardar
2018-07-01
In aortic stenosis patients referred for surgical and transcatheter aortic valve replacement (AVR), the evidence of diagnostic accuracy of coronary computed tomography angiography (CCTA) has been limited. The objective of this study was to investigate the diagnostic accuracy of CCTA for significant coronary artery disease (CAD) in patients referred for AVR using invasive coronary angiography (ICA) as the gold standard. We searched databases for all diagnostic studies of CCTA in patients referred for AVR, which reported diagnostic testing characteristics on patient-based analysis required to pool summary sensitivity, specificity, positive-likelihood ratio, and negative-likelihood ratio. Significant CAD in both CCTA and ICA was defined by >50% stenosis in any coronary artery, coronary stent, or bypass graft. Thirteen studies evaluated 1498 patients (mean age, 74 y; 47% men; 76% transcatheter AVR). The pooled prevalence of significant stenosis determined by ICA was 43%. Hierarchical summary receiver-operating characteristic analysis demonstrated a summary area under curve of 0.96. The pooled sensitivity, specificity, and positive-likelihood and negative-likelihood ratios of CCTA in identifying significant stenosis determined by ICA were 95%, 79%, 4.48, and 0.06, respectively. In subgroup analysis, the diagnostic profiles of CCTA were comparable between surgical and transcatheter AVR. Despite the higher prevalence of significant CAD in patients with aortic stenosis than with other valvular heart diseases, our meta-analysis has shown that CCTA has a suitable diagnostic accuracy profile as a gatekeeper test for ICA. Our study illustrates a need for further study of the potential role of CCTA in preoperative planning for AVR.
NASA Astrophysics Data System (ADS)
Hasan, Husna; Salam, Norfatin; Kassim, Suraiya
2013-04-01
Extreme temperatures at several stations in Malaysia are modeled by fitting the annual maxima to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are used to detect stochastic trends among the stations. The Mann-Kendall (MK) test suggests a non-stationary model. Three models are considered for stations with a trend, and the likelihood ratio test is used to determine the best-fitting model. The results show that the Subang and Bayan Lepas stations favour a model with a linear trend in the location parameter, while the Kota Kinabalu and Sibu stations are better fitted by a model with a trend in the logarithm of the scale parameter. The return level, that is, the level of the maximum temperature expected to be exceeded once, on average, in a given number of years, is also obtained.
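A minimal stationary version of such an analysis can be run with scipy's GEV implementation: fit the annual maxima by maximum likelihood and read the m-year return level off the fitted distribution as its (1 − 1/m) quantile. The synthetic maxima below stand in for the station records, and this stationary fit ignores the trend models compared in the abstract.

```python
# Hedged sketch: GEV fit to synthetic annual maxima and a 50-year return level.
from scipy.stats import genextreme

# synthetic stand-in for ~40 years of annual maximum temperatures
annual_max = genextreme.rvs(c=0.1, loc=33.0, scale=1.2, size=40, random_state=7)

shape, loc, scale = genextreme.fit(annual_max)          # maximum likelihood estimates
return_period = 50
return_level = genextreme.ppf(1 - 1 / return_period, shape, loc=loc, scale=scale)
print(shape, loc, scale, return_level)                  # return level in the data's units
```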
Mental Health Recovery in the Patient-Centered Medical Home
Aarons, Gregory A.; O’Connell, Maria; Davidson, Larry; Groessl, Erik J.
2015-01-01
Objectives. We examined the impact of transitioning clients from a mental health clinic to a patient-centered medical home (PCMH) on mental health recovery. Methods. We drew data from a large US County Behavioral Health Services administrative data set. We used propensity score analysis and multilevel modeling to assess the impact of the PCMH on mental health recovery by comparing PCMH participants (n = 215) to clients receiving service as usual (SAU; n = 22 394) from 2011 to 2013 in San Diego County, California. We repeatedly assessed mental health recovery over time (days since baseline assessment range = 0–1639; mean = 186) with the Illness Management and Recovery (IMR) scale and Recovery Markers Questionnaire. Results. For total IMR (log-likelihood ratio χ2[1] = 4696.97; P < .001) and IMR Factor 2 Management scores (log-likelihood ratio χ2[1] = 7.9; P = .005), increases in mental health recovery over time were greater for PCMH than SAU participants. Increases on all other measures over time were similar for PCMH and SAU participants. Conclusions. Greater increases in mental health recovery over time can be expected when patients with severe mental illness are provided treatment through the PCMH. Evaluative efforts should be taken to inform more widespread adoption of the PCMH. PMID:26180945
Osteoporosis, vitamin C intake, and physical activity in Korean adults aged 50 years and over
Kim, Min Hee; Lee, Hae-Jeung
2016-01-01
[Purpose] To investigate associations between vitamin C intake, physical activity, and osteoporosis among Korean adults aged 50 and over. [Subjects and Methods] This study was based on bone mineral density measurement data from the 2008 to 2011 Korean National Health and Nutritional Examination Survey. The study sample comprised 3,047 subjects. The normal group was defined as T-score ≥ −1.0, and the osteoporosis group as T-score ≤ −2.5. The odds ratios for osteoporosis were assessed by logistic regression of each vitamin C intake quartile. [Results] Compared to the lowest quartile of vitamin C intake, the other quartiles showed a lower likelihood of osteoporosis after adjusting for age and gender. In the multi-variate model, the odds ratio for the likelihood of developing osteoporosis in the non-physical activity group significantly decreased to 0.66, 0.57, and 0.46 (p for trend = 0.0046). However, there was no significant decrease (0.98, 1.00, and 0.97) in the physical activity group. [Conclusion] Higher vitamin C intake levels were associated with a lower risk of osteoporosis in Korean adults aged over 50 with low levels of physical activity. However, no association was seen between vitamin C intake and osteoporosis risk in those with high physical activity levels. PMID:27134348
NASA Astrophysics Data System (ADS)
Fishman, M. M.
1985-01-01
The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about which process is occurring, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistic of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for the conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions approaches zero and the number of hypotheses approaches infinity. It also remains valid under certain special constraints on the probability, such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure reduces the length of observations, on average, to one quarter when the probability of erroneous partial solutions is low.
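A much simpler binary relative of these procedures is Wald's sequential probability ratio test, sketched below for two Gaussian hypotheses: the running log likelihood ratio is compared with thresholds set by the target error probabilities, and sampling stops as soon as either threshold is crossed. This illustrates only the general sequential idea, not the multialternative, conditionally optimum procedure of the abstract; the means, variance and error rates are arbitrary.

```python
# Wald SPRT sketch for H0: mean = mu0 vs H1: mean = mu1, Gaussian observations.
import numpy as np

rng = np.random.default_rng(8)
mu0, mu1, sigma = 0.0, 0.5, 1.0
alpha = beta = 0.01
upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))

def sprt(true_mu, max_n=10_000):
    llr = 0.0
    for n in range(1, max_n + 1):
        x = rng.normal(true_mu, sigma)
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma**2)   # per-observation LLR
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "undecided", max_n

print(sprt(mu1))   # typically decides for H1 after a few tens of observations
```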
Statistical tests to compare motif count exceptionalities
Robin, Stéphane; Schbath, Sophie; Vandewalle, Vincent
2007-01-01
Background Finding over- or under-represented motifs in biological sequences is now a common task in genomics. Thanks to p-value calculation for motif counts, exceptional motifs are identified and represent candidate functional motifs. The present work addresses the related question of comparing the exceptionality of one motif in two different sequences. Just comparing the motif count p-values in each sequence is indeed not sufficient to decide if this motif is significantly more exceptional in one sequence compared to the other one. A statistical test is required. Results We develop and analyze two statistical tests, an exact binomial one and an asymptotic likelihood ratio test, to decide whether the exceptionality of a given motif is equivalent or significantly different in two sequences of interest. For that purpose, motif occurrences are modeled by Poisson processes, with a special care for overlapping motifs. Both tests can take the sequence compositions into account. As an illustration, we compare the octamer exceptionalities in the Escherichia coli K-12 backbone versus variable strain-specific loops. Conclusion The exact binomial test is particularly adapted for small counts. For large counts, we advise to use the likelihood ratio test which is asymptotic but strongly correlated with the exact binomial test and very simple to use. PMID:17346349
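In its simplest form, the asymptotic test can be written as a likelihood ratio test comparing two Poisson rates with the sequence lengths as exposures; the sketch below omits the overlap and composition corrections that the paper handles, and the counts and lengths are invented.

```python
# Likelihood ratio test for equal Poisson occurrence rates in two sequences.
import numpy as np
from scipy.stats import chi2

def poisson_rate_lrt(n1, len1, n2, len2):
    # H0: common rate for both sequences; H1: separate rates
    rate0 = (n1 + n2) / (len1 + len2)
    rate1, rate2 = n1 / len1, n2 / len2
    def ll(n, length, rate):
        return n * np.log(rate) - rate * length   # terms not involving the rate cancel in the LRT
    lrt = 2 * (ll(n1, len1, rate1) + ll(n2, len2, rate2)
               - ll(n1, len1, rate0) - ll(n2, len2, rate0))
    return lrt, chi2(1).sf(lrt)                    # asymptotic chi^2_1 p-value

print(poisson_rate_lrt(n1=42, len1=1.2e6, n2=18, len2=0.9e6))
```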
A close examination of double filtering with fold change and t test in microarray analysis
2009-01-01
Background: Many researchers use the double filtering procedure with fold change and t test to identify differentially expressed genes, in the hope that the double filtering will provide extra confidence in the results. Due to its simplicity, the double filtering procedure has been popular with applied researchers despite the development of more sophisticated methods. Results: This paper, for the first time to our knowledge, provides theoretical insight on the drawback of the double filtering procedure. We show that fold change assumes all genes to have a common variance while the t statistic assumes gene-specific variances. The two statistics are based on contradicting assumptions. Under the assumption that gene variances arise from a mixture of a common variance and gene-specific variances, we develop the theoretically most powerful likelihood ratio test statistic. We further demonstrate that the posterior inference based on a Bayesian mixture model and the widely used significance analysis of microarrays (SAM) statistic are better approximations to the likelihood ratio test than the double filtering procedure. Conclusion: We demonstrate, through hypothesis testing theory, simulation studies and real data examples, that well-constructed shrinkage testing methods, which can be united under the mixture gene variance assumption, can considerably outperform the double filtering procedure. PMID:19995439
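The double filtering procedure that the paper critiques is easy to state in code: a gene is called only if it passes both a fold-change cutoff and a t-test cutoff. The sketch below uses a simulated expression matrix and arbitrary thresholds; it illustrates the procedure itself, not the paper's likelihood ratio or shrinkage statistics.

```python
# Sketch of double filtering: declare a gene differentially expressed only if it
# passes both a fold-change cutoff and a t-test cutoff. Data and thresholds are
# illustrative, not from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
genes, reps = 1000, 5
group_a = rng.normal(0.0, 1.0, size=(genes, reps))
group_b = rng.normal(0.0, 1.0, size=(genes, reps))
group_b[:50] += 1.5                     # 50 truly differentially expressed genes

log_fc = group_b.mean(axis=1) - group_a.mean(axis=1)     # log-scale fold change
t_stat, p_val = stats.ttest_ind(group_b, group_a, axis=1)

fc_pass = np.abs(log_fc) > 1.0          # fold-change filter (common-variance view)
t_pass = p_val < 0.05                   # t-test filter (gene-specific-variance view)
double_filtered = fc_pass & t_pass
print("genes passing both filters:", double_filtered.sum())
```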
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.
2015-01-19
Here we present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. Furthermore, this technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Our results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement from earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.
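The core likelihood computation, comparing measured counts along a flight path with counts predicted for candidate source positions, can be sketched with a toy model. Everything below is invented (a straight flight line, a 1/r^2 detector response, arbitrary source strength and background), and the limited-FOV treatment and likelihood ratio reevaluation of the paper are not included.

```python
# Sketch of count-based source localization: for each candidate source position
# on a grid, compare the measured counts along a simulated flight path with the
# counts predicted by a 1/r^2 model, using a Poisson log-likelihood.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
path = np.column_stack([np.linspace(0, 100, 60), np.full(60, 20.0)])   # flight path (x, y)
true_src, strength, background = np.array([55.0, 0.0]), 5.0e4, 20.0

def expected_counts(src, path, strength, background):
    r2 = np.sum((path - src) ** 2, axis=1) + 25.0   # +25 softens the near-field singularity
    return background + strength / r2

measured = rng.poisson(expected_counts(true_src, path, strength, background))

# Evaluate the Poisson log-likelihood over a grid of candidate source positions.
xs, ys = np.linspace(0, 100, 101), np.linspace(-40, 40, 81)
loglik = np.array([[stats.poisson.logpmf(measured,
                     expected_counts(np.array([x, y]), path, strength, background)).sum()
                    for x in xs] for y in ys])
iy, ix = np.unravel_index(np.argmax(loglik), loglik.shape)
print("maximum-likelihood source estimate:", xs[ix], ys[iy])
```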
ERIC Educational Resources Information Center
Jones, Douglas H.
The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…
The Blue Arc Entoptic Phenomenon in Glaucoma (An American Ophthalmological Thesis)
Pasquale, Louis R.; Brusie, Steven
2013-01-01
Purpose: To determine whether the blue arc entoptic phenomenon, a positive visual response originating from the retina with a shape that conforms to the topology of the nerve fiber layer, is depressed in glaucoma. Methods: We recruited a cross-sectional, nonconsecutive sample of 202 patients from a single institution in a prospective manner. Subjects underwent full ophthalmic examination, including standard automated perimetry (Humphrey Visual Field 24–2) or frequency doubling technology (Screening C 20–5) perimetry. Eligible patients viewed computer-generated stimuli under conditions chosen to optimize perception of the blue arcs. Unmasked testers instructed patients to report whether they were able to perceive blue arcs but did not reveal what response was expected. We created multivariable logistic regression models to ascertain the demographic and clinical parameters associated with perceiving the blue arcs. Results: In multivariable analyses, each 0.1 unit increase in cup-disc ratio was associated with 36% reduced likelihood of perceiving the blue arcs (odds ratio [OR] = 0.66 [95% confidence interval (CI): 0.53–0.83], P<.001). A smaller mean defect was associated with an increased likelihood of perceiving the blue arcs (OR=1.79 [95% CI: 1.40–2.28]); P<.001), while larger pattern standard deviation (OR=0.72 [95% CI: 0.57–0.91]; P=.005) and abnormal glaucoma hemifield test (OR=0.25 [0.10–0.65]; P=.006) were associated with a reduced likelihood of perceiving them. Older age and media opacity were also associated with an inability to perceive the blue arcs. Conclusion: In this study, the inability to perceive the blue arcs correlated with structural and functional features associated with glaucoma, although older age and media opacity were also predictors of this entoptic response. PMID:24167324
Matthews, Lynn T; Ribaudo, Heather B; Kaida, Angela; Bennett, Kara; Musinguzi, Nicholas; Siedner, Mark J; Kabakyenga, Jerome; Hunt, Peter W; Martin, Jeffrey N; Boum, Yap; Haberer, Jessica E; Bangsberg, David R
2016-04-01
HIV-infected women risk sexual and perinatal HIV transmission during conception, pregnancy, childbirth, and breastfeeding. We compared HIV-1 RNA suppression and medication adherence across periconception, pregnancy, and postpartum periods, among women on antiretroviral therapy (ART) in Uganda. We analyzed data from women in a prospective cohort study, aged 18-49 years, enrolled at ART initiation and with ≥1 pregnancy between 2005 and 2011. Participants were seen quarterly. The primary exposure of interest was pregnancy period, including periconception (3 quarters before pregnancy), pregnancy, postpartum (6 months after pregnancy outcome), or nonpregnancy related. Regression models using generalized estimating equations compared the likelihood of HIV-1 RNA ≤400 copies per milliliter, <80% average adherence based on electronic pill caps (medication event monitoring system), and likelihood of 72-hour medication gaps across each period. One hundred eleven women contributed 486 person-years of follow-up. Viral suppression was present at 89% of nonpregnancy, 97% of periconception, 93% of pregnancy, and 89% of postpartum visits, and was more likely during periconception (adjusted odds ratio, 2.15) compared with nonpregnant periods. Average ART adherence was 90% [interquartile range (IQR), 70%-98%], 93% (IQR, 82%-98%), 92% (IQR, 72%-98%), and 88% (IQR, 63%-97%) during nonpregnant, periconception, pregnant, and postpartum periods, respectively. Average adherence <80% was less likely during periconception (adjusted odds ratio, 0.68), and 72-hour gaps per 90 days were less frequent during periconception (adjusted relative risk, 0.72) and more frequent during postpartum (adjusted relative risk, 1.40). Women with pregnancy were virologically suppressed at most visits, with an increased likelihood of suppression and high adherence during periconception follow-up. Increased frequency of 72-hour gaps suggests a need for increased adherence support during postpartum periods.
Brooks, Billy; McBee, Matthew; Pack, Robert; Alamian, Arsham
2017-05-01
Rates of accidental overdose mortality from substance use disorder (SUD) have risen dramatically in the United States since 1990. Between 1999 and 2004 alone rates increased 62% nationwide, with rural overdose mortality increasing at a rate 3 times that seen in urban populations. Cultural differences between rural and urban populations (e.g., educational attainment, unemployment rates, social characteristics, etc.) affect the nature of SUD, leading to disparate risk of overdose across these communities. Multiple-groups latent class analysis with covariates was applied to data from the 2011 and 2012 National Survey on Drug Use and Health (n=12,140) to examine potential differences in latent classifications of SUD between rural and urban adult (aged 18 years and older) populations. Nine drug categories were used to identify latent classes of SUD defined by probability of diagnosis within these categories. Once the class structures were established for rural and urban samples, posterior membership probabilities were entered into a multinomial regression analysis of socio-demographic predictors' association with the likelihood of SUD latent class membership. Latent class structures differed across the sub-groups, with the rural sample fitting a 3-class structure (Bootstrap Likelihood Ratio Test P value=0.03) and the urban sample fitting a 6-class model (Bootstrap Likelihood Ratio Test P value<0.0001). Overall, the rural sample exhibited less diversity in class structure and lower prevalence of SUD in multiple drug categories (e.g., cocaine, hallucinogens, and stimulants). This result supports the hypothesis that different underlying elements exist in the two populations that affect SUD patterns, and thus can inform the development of surveillance instruments, clinical services, and prevention programming tailored to specific communities. Copyright © 2017 Elsevier Ltd. All rights reserved.
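The bootstrap likelihood ratio test used above to choose the number of latent classes follows a generic parametric-bootstrap recipe: fit models with k and k+1 classes, compute twice the log-likelihood difference, and compare it against the same statistic recomputed on data simulated from the k-class fit. The sketch below illustrates that recipe with a Gaussian mixture as a stand-in for the categorical latent class model; the data and number of bootstrap replicates are arbitrary.

```python
# Sketch of a bootstrap likelihood ratio test (BLRT) for the number of mixture
# components, using a Gaussian mixture as a stand-in for the latent class model
# in the abstract. All data are simulated; the parametric-bootstrap logic is the point.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)]).reshape(-1, 1)

def lrt_stat(data, k):
    """Return (2 * (loglik of k+1 components - loglik of k components), k-component fit)."""
    small = GaussianMixture(k, n_init=5, random_state=0).fit(data)
    big = GaussianMixture(k + 1, n_init=5, random_state=0).fit(data)
    return 2 * (big.score(data) - small.score(data)) * len(data), small

observed, null_fit = lrt_stat(x, k=1)
boot = []
for _ in range(99):                       # parametric bootstrap under the k-class fit
    sim, _ = null_fit.sample(len(x))
    boot.append(lrt_stat(sim, k=1)[0])
p_value = (1 + sum(b >= observed for b in boot)) / (1 + len(boot))
print("BLRT p-value for 2 vs 1 components:", p_value)
```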
Long working hours and use of psychotropic medicine: a follow-up study with register linkage.
Hannerz, Harald; Albertsen, Karen
2016-03-01
This study aimed to investigate the possibility of a prospective association between long working hours and use of psychotropic medicine. Survey data drawn from random samples of the general working population of Denmark in the time period 1995-2010 were linked to national registers covering all inhabitants. The participants were followed for first occurrence of redeemed prescriptions for psychotropic medicine. The primary analysis included 25,959 observations (19,259 persons) and yielded a total of 2914 new cases of psychotropic drug use in 99,018 person-years at risk. Poisson regression was used to model incidence rates of redeemed prescriptions for psychotropic medicine as a function of working hours (32-40, 41-48, >48 hours/week). The analysis was controlled for gender, age, sample, shift work, and socioeconomic status. A likelihood ratio test was used to test the null hypothesis, which stated that the incidence rates were independent of weekly working hours. The likelihood ratio test did not reject the null hypothesis (P=0.085). The rate ratio (RR) was 1.04 [95% confidence interval (95% CI) 0.94-1.15] for the contrast 41-48 versus 32-40 work hours/week and 1.15 (95% CI 1.02-1.30) for >48 versus 32-40 hours/week. None of the rate ratios that were estimated in the present study were statistically significant after adjustment for multiple testing. However, stratified analyses, in which 30 RR were estimated, generated the hypothesis that overtime work (>48 hours/week) might be associated with an increased risk among night or shift workers (RR=1.51, 95% CI 1.15-1.98). The present study did not find a statistically significant association between long working hours and incidence of psychotropic drug usage among Danish employees.
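The modelling step described above, Poisson regression of incident prescriptions on a working-hours category with a person-years offset, followed by a likelihood ratio test of the null that rates do not depend on hours, can be sketched as follows. The data are simulated and the study's covariates (gender, age, sample, shift work, socioeconomic status) are omitted.

```python
# Sketch of a Poisson-regression likelihood ratio test: model case counts with a
# log person-years offset, then compare models with and without the
# working-hours category. Simulated data, not the Danish register data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(5)
hours = rng.choice(["32-40", "41-48", ">48"], size=5000, p=[0.7, 0.2, 0.1])
pyrs = rng.uniform(1, 6, size=5000)                  # person-years at risk
rate = np.where(hours == ">48", 0.035, 0.030)        # slightly higher rate for >48 h/week
cases = rng.poisson(rate * pyrs)
df = pd.DataFrame({"cases": cases, "hours": hours, "pyrs": pyrs})

full = smf.glm("cases ~ C(hours, Treatment('32-40'))", data=df,
               family=sm.families.Poisson(), offset=np.log(df["pyrs"])).fit()
null = smf.glm("cases ~ 1", data=df,
               family=sm.families.Poisson(), offset=np.log(df["pyrs"])).fit()

lr = 2 * (full.llf - null.llf)                       # likelihood ratio statistic
p = stats.chi2.sf(lr, df=2)                          # 2 extra parameters
print("rate ratios:", np.exp(full.params.iloc[1:]).round(2), "LR test p =", round(p, 3))
```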
Average Likelihood Methods for Code Division Multiple Access (CDMA)
2014-05-01
lengths in the range of 2^2 to 2^13 and possibly higher. Keywords: DS/CDMA signals, classification, balanced CDMA load, synchronous CDMA, decision ... likelihood ratio test (ALRT). We begin this classification problem by finding the size of the spreading matrix that generated the DS-CDMA signal. As ... Theoretical Background: The classification of DS/CDMA signals should not be confused with the problem of multiuser detection. The multiuser detection deals ...
Non-Linear Cosmological Power Spectra in Real and Redshift Space
NASA Technical Reports Server (NTRS)
Taylor, A. N.; Hamilton, A. J. S.
1996-01-01
We present an expression for the non-linear evolution of the cosmological power spectrum based on Lagrangian trajectories. This is simplified using the Zel'dovich approximation to trace particle displacements, assuming Gaussian initial conditions. The model is found to exhibit the transfer of power from large to small scales expected in self-gravitating fields. Some exact solutions are found for power-law initial spectra. We have extended this analysis into redshift space and found a solution for the non-linear, anisotropic redshift-space power spectrum in the limit of plane-parallel redshift distortions. The quadrupole-to-monopole ratio is calculated for the case of power-law initial spectra. We find that the shape of this ratio depends on the shape of the initial spectrum, but when scaled to linear theory depends only weakly on the redshift-space distortion parameter, beta. The point of zero-crossing of the quadrupole, k_0, is found to obey a simple scaling relation and we calculate this scale in the Zel'dovich approximation. This model is found to be in good agreement with a series of N-body simulations on scales down to the zero-crossing of the quadrupole, although the wavenumber at zero-crossing is underestimated. These results are applied to the quadrupole-to-monopole ratio found in the merged QDOT plus 1.2-Jy-IRAS redshift survey. Using a likelihood technique we have estimated that the distortion parameter is constrained to be beta greater than 0.5 at the 95 percent level. Our results are fairly insensitive to the local primordial spectral slope, but the likelihood analysis suggests n = -2 in the translinear regime. The zero-crossing scale of the quadrupole is k_0 = 0.5 +/- 0.1 h Mpc^-1 and from this we infer that the amplitude of clustering is sigma_8 = 0.7 +/- 0.05. We suggest that the success of this model is due to non-linear redshift-space effects arising from infall onto caustics and is not dominated by virialized cluster cores. The latter should start to dominate on scales below the zero-crossing of the quadrupole, where our model breaks down.
Risk prediction and aversion by anterior cingulate cortex.
Brown, Joshua W; Braver, Todd S
2007-12-01
The recently proposed error-likelihood hypothesis suggests that anterior cingulate cortex (ACC) and surrounding areas will become active in proportion to the perceived likelihood of an error. The hypothesis was originally derived from a computational model prediction. The same computational model now makes a further prediction that ACC will be sensitive not only to predicted error likelihood, but also to the predicted magnitude of the consequences, should an error occur. The product of error likelihood and predicted error consequence magnitude collectively defines the general "expected risk" of a given behavior in a manner analogous but orthogonal to subjective expected utility theory. New fMRI results from an incentive change signal task now replicate the error-likelihood effect, validate the further predictions of the computational model, and suggest why some segments of the population may fail to show an error-likelihood effect. In particular, error-likelihood effects and expected risk effects in general indicate greater sensitivity to earlier predictors of errors and are seen in risk-averse but not risk-tolerant individuals. Taken together, the results are consistent with an expected risk model of ACC and suggest that ACC may generally contribute to cognitive control by recruiting brain activity to avoid risk.
NASA Technical Reports Server (NTRS)
1976-01-01
Analytic techniques have been developed for detecting and identifying abrupt changes in dynamic systems. The GLR technique monitors the output of the Kalman filter and searches for the time that the failure occurred, thus allowing it to be sensitive to new data and consequently increasing the chances for fast system recovery following detection of a failure. All failure detections are based on functional redundancy. Performance tests of the F-8 aircraft flight control system and computerized modelling of the technique are presented.
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In addition, the estimator is consistent as the sample size increases to infinity, so maximum likelihood estimation is asymptotically unbiased. Moreover, the parameter estimates obtained from maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
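Maximum likelihood fitting of a two-component mixture is usually done with the EM algorithm. The sketch below fits a univariate two-component Gaussian mixture by EM on simulated data (not the rubber-price and exchange-rate series analysed in the paper).

```python
# Minimal EM sketch for a two-component univariate Gaussian mixture fitted by
# maximum likelihood; the data are simulated, not the series analysed above.
import numpy as np

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(-1.0, 0.5, 400), rng.normal(1.5, 0.8, 600)])

# initial guesses for weights, means and standard deviations
w = np.array([0.5, 0.5])
mu = np.array([-0.5, 0.5])
sd = np.array([1.0, 1.0])

def component_densities(x, mu, sd):
    return np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

for _ in range(200):
    # E-step: posterior responsibility of each component for each observation
    resp = w * component_densities(x, mu, sd)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: weighted maximum likelihood updates
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

loglik = np.log((w * component_densities(x, mu, sd)).sum(axis=1)).sum()
print("weights", w.round(2), "means", mu.round(2), "sds", sd.round(2), "loglik", round(loglik, 1))
```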
Ratmann, Oliver; Andrieu, Christophe; Wiuf, Carsten; Richardson, Sylvia
2009-06-30
Mathematical models are an important tool to explain and comprehend complex phenomena, and unparalleled computational advances enable us to easily explore them with little or no understanding of their global properties. In fact, the likelihood of the data under complex stochastic models is often analytically or numerically intractable in many areas of science. This makes it even more important to simultaneously investigate the adequacy of these models (in absolute terms, against the data, rather than relative to the performance of other models), but no such procedure has been formally discussed when the likelihood is intractable. We provide a statistical interpretation of current developments in likelihood-free Bayesian inference that explicitly accounts for discrepancies between the model and the data, termed Approximate Bayesian Computation under model uncertainty (ABCmicro). We augment the likelihood of the data with unknown error terms that correspond to freely chosen checking functions, and provide Monte Carlo strategies for sampling from the associated joint posterior distribution without the need to evaluate the likelihood. We discuss the benefit of incorporating model diagnostics within an ABC framework, and demonstrate how this method diagnoses model mismatch and guides model refinement by contrasting three qualitative models of protein network evolution to the protein interaction datasets of Helicobacter pylori and Treponema pallidum. Our results make a number of model deficiencies explicit, and suggest that the T. pallidum network topology is inconsistent with evolution dominated by link turnover or lateral gene transfer alone.
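The likelihood-free building block behind this approach is easiest to see in plain rejection ABC: simulate from the model under parameters drawn from the prior and keep the draws whose simulated summaries fall close to the observed ones. The sketch below shows only that building block on a toy Poisson model; it does not include the error terms and checking functions that the paper's ABC-under-model-uncertainty method adds.

```python
# Sketch of plain rejection ABC on a toy Poisson model. The paper's method
# additionally carries explicit error terms for model checking, omitted here.
import numpy as np

rng = np.random.default_rng(7)
observed = rng.poisson(lam=4.2, size=50)          # pretend these are the data
obs_summary = observed.mean()

def simulate(lam, size=50):
    return rng.poisson(lam, size=size)

accepted = []
for _ in range(20000):
    lam = rng.uniform(0.0, 10.0)                  # draw from the prior
    sim_summary = simulate(lam).mean()
    if abs(sim_summary - obs_summary) < 0.1:      # tolerance on the summary statistic
        accepted.append(lam)

accepted = np.array(accepted)
print("approximate posterior mean:", accepted.mean().round(2),
      "based on", len(accepted), "accepted draws")
```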
A model for evidence accumulation in the lexical decision task.
Wagenmakers, Eric-Jan; Steyvers, Mark; Raaijmakers, Jeroen G W; Shiffrin, Richard M; van Rijn, Hedderik; Zeelenberg, René
2004-05-01
We present a new model for lexical decision, REM-LD, that is based on REM theory. REM-LD uses a principled (i.e., Bayes' rule) decision process that simultaneously considers the diagnosticity of the evidence for the 'WORD' response and the 'NONWORD' response. The model calculates the odds ratio that the presented stimulus is a word or a nonword by averaging likelihood ratios for lexical entries from a small neighborhood of similar words. We report two experiments that used a signal-to-respond paradigm to obtain information about the time course of lexical processing. Experiment 1 verified the prediction of the model that the frequency of the word stimuli affects performance for nonword stimuli. Experiment 2 was done to study the effects of nonword lexicality, word frequency, and repetition priming and to demonstrate how REM-LD can account for the observed results. We discuss how REM-LD could be extended to account for effects of phonology such as the pseudohomophone effect, and how REM-LD can predict response times in the traditional 'respond-when-ready' paradigm.
Eken, Cenker; Bilge, Ugur; Kartal, Mutlu; Eray, Oktay
2009-06-03
Logistic regression is the most common statistical model for processing multivariate data in the medical literature. Artificial intelligence models such as artificial neural networks (ANN) and genetic algorithms (GA) may also be useful for interpreting medical data. The purpose of this study was to apply artificial intelligence models to a medical dataset and compare them to logistic regression. ANN, GA, and logistic regression analysis were carried out on the dataset of a previously published article regarding patients presenting to an emergency department with flank pain suspicious for renal colic. The study population was composed of 227 patients: 176 patients had a diagnosis of urinary stone, while 51 ultimately had no calculus. The GA found two decision rules in predicting urinary stones. Rule 1 consisted of being male, pain not spreading to back, and no fever. In rule 2, pelvicaliceal dilatation on bedside ultrasonography replaced no fever. ANN, GA rule 1, GA rule 2, and logistic regression had a sensitivity of 94.9, 67.6, 56.8, and 95.5%, a specificity of 78.4, 76.47, 86.3, and 47.1%, a positive likelihood ratio of 4.4, 2.9, 4.1, and 1.8, and a negative likelihood ratio of 0.06, 0.42, 0.5, and 0.09, respectively. The area under the curve was found to be 0.867, 0.720, 0.715, and 0.713 for all applications, respectively. Data mining techniques such as ANN and GA can be used for predicting renal colic in emergency settings and to constitute clinical decision rules. They may be an alternative to conventional multivariate analysis applications used in biostatistics.
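The sensitivities, specificities and likelihood ratios quoted above all follow directly from a 2x2 classification table. The counts in the sketch below are invented, not the renal-colic study data; the point is only the arithmetic behind LR+ and LR-.

```python
# Sketch of the diagnostic-accuracy quantities quoted above (sensitivity,
# specificity, positive and negative likelihood ratios) from a 2x2 table.
def diagnostic_metrics(tp, fn, fp, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)        # positive likelihood ratio
    lr_neg = (1 - sens) / spec        # negative likelihood ratio
    return sens, spec, lr_pos, lr_neg

sens, spec, lr_pos, lr_neg = diagnostic_metrics(tp=168, fn=8, fp=27, tn=24)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} LR+={lr_pos:.1f} LR-={lr_neg:.2f}")
```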
Weemhoff, M; Kluivers, K B; Govaert, B; Evers, J L H; Kessels, A G H; Baeten, C G
2013-03-01
This study concerns the level of agreement between transperineal ultrasound and evacuation proctography for diagnosing enteroceles and intussusceptions. In a prospective observational study, 50 consecutive women who were scheduled for evacuation proctography also underwent transperineal ultrasound. Sensitivity, specificity, positive (PPV) and negative predictive values, as well as the positive and negative likelihood ratio of transperineal ultrasound were assessed in comparison to evacuation proctography. To determine the interobserver agreement of transperineal ultrasound, the quadratic weighted kappa was calculated. Furthermore, receiver operating characteristic curves were generated to show the diagnostic capability of transperineal ultrasound. For diagnosing intussusceptions (PPV 1.00), a positive finding on transperineal ultrasound was predictive of an abnormal evacuation proctography. Sensitivity of transperineal ultrasound was poor for intussusceptions (0.25). For diagnosing enteroceles, the positive likelihood ratio was 2.10 and the negative likelihood ratio, 0.85. There are many false-positive findings of enteroceles on ultrasonography (PPV 0.29). The interobserver agreement of the two ultrasonographers assessed as the quadratic weighted kappa of diagnosing enteroceles was 0.44 and that of diagnosing intussusceptions was 0.23. An intussusception on ultrasound is predictive of an abnormal evacuation proctography. For diagnosing enteroceles, the diagnostic quality of transperineal ultrasound was limited compared to evacuation proctography.
A general methodology for maximum likelihood inference from band-recovery data
Conroy, M.J.; Williams, B.K.
1984-01-01
A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.
Takada, Toshihiko; Yamamoto, Yosuke; Terada, Kazuhiko; Ohta, Mitsuyasu; Mikami, Wakako; Yokota, Hajime; Hayashi, Michio; Miyashita, Jun; Azuma, Teruhisa; Fukuma, Shingo; Fukuhara, Shunichi
2017-11-08
Diagnosis of community-acquired pneumonia (CAP) in the elderly is often delayed because of atypical presentation and non-specific symptoms, such as appetite loss, falls and disturbance in consciousness. The aim of this study was to investigate the external validity of existing prediction models and the added value of the non-specific symptoms for the diagnosis of CAP in elderly patients. Prospective cohort study. General medicine departments of three teaching hospitals in Japan. A total of 109 elderly patients who consulted for upper respiratory symptoms between 1 October 2014 and 30 September 2016. The reference standard for CAP was chest radiograph evaluated by two certified radiologists. The existing models were externally validated for diagnostic performance by calibration plot and discrimination. To evaluate the additional value of the non-specific symptoms to the existing prediction models, we developed an extended logistic regression model. Calibration, discrimination, category-free net reclassification improvement (NRI) and decision curve analysis (DCA) were investigated in the extended model. Among the existing models, the model by van Vugt demonstrated the best performance, with an area under the curve of 0.75(95% CI 0.63 to 0.88); calibration plot showed good fit despite a significant Hosmer-Lemeshow test (p=0.017). Among the non-specific symptoms, appetite loss had positive likelihood ratio of 3.2 (2.0-5.3), negative likelihood ratio of 0.4 (0.2-0.7) and OR of 7.7 (3.0-19.7). Addition of appetite loss to the model by van Vugt led to improved calibration at p=0.48, NRI of 0.53 (p=0.019) and higher net benefit by DCA. Information on appetite loss improved the performance of an existing model for the diagnosis of CAP in the elderly. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
White Gaussian Noise - Models for Engineers
NASA Astrophysics Data System (ADS)
Jondral, Friedrich K.
2018-04-01
This paper assembles some information about white Gaussian noise (WGN) and its applications. It starts from a description of thermal noise, i.e., the irregular motion of free charge carriers in electronic devices. In a second step, mathematical models of WGN processes and their most important parameters, especially autocorrelation functions and power spectral densities, are introduced. In order to proceed from mathematical models to simulations, we discuss the generation of normally distributed random numbers. The signal-to-noise ratio, the most important quality measure used in communications, control, and measurement technology, is then introduced precisely. As a practical application of WGN, the transmission of quadrature amplitude modulated (QAM) signals over additive WGN channels together with the optimum maximum likelihood (ML) detector is considered in a demonstrative and intuitive way.
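The final application mentioned above can be sketched in a few lines: draw QAM symbols, add complex white Gaussian noise at a chosen SNR, and detect with the minimum-distance rule, which is the ML detector for equiprobable symbols in WGN. The constellation, SNR and block length below are arbitrary choices for illustration.

```python
# Sketch of 4-QAM transmission over an additive white Gaussian noise channel
# with the minimum-distance (maximum likelihood) detector described above.
import numpy as np

rng = np.random.default_rng(8)
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # unit-energy 4-QAM
symbol_idx = rng.integers(0, 4, size=10000)
tx = constellation[symbol_idx]

snr_db = 8.0
noise_var = 10 ** (-snr_db / 10)                 # signal power is 1
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(tx.shape) + 1j * rng.standard_normal(tx.shape))
rx = tx + noise

# ML detection for equiprobable symbols in WGN = nearest constellation point.
decisions = np.argmin(np.abs(rx[:, None] - constellation[None, :]), axis=1)
ser = np.mean(decisions != symbol_idx)
print(f"empirical SNR = {10*np.log10(1/np.var(noise)):.1f} dB, symbol error rate = {ser:.4f}")
```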
Identifying Malignant Pleural Effusion by A Cancer Ratio (Serum LDH: Pleural Fluid ADA Ratio).
Verma, Akash; Abisheganaden, John; Light, R W
2016-02-01
We studied the diagnostic potential of serum lactate dehydrogenase (LDH) in malignant pleural effusion. Retrospective analysis of patients hospitalized with exudative pleural effusion in 2013. Serum LDH and serum LDH: pleural fluid ADA ratio was significantly higher in cancer patients presenting with exudative pleural effusion. In multivariate logistic regression analysis, pleural fluid ADA was negatively correlated 0.62 (0.45-0.85, p = 0.003) with malignancy, whereas serum LDH 1.02 (1.0-1.03, p = 0.004) and serum LDH: pleural fluid ADA ratio 0.94 (0.99-1.0, p = 0.04) was correlated positively with malignant pleural effusion. For serum LDH: pleural fluid ADA ratio, a cut-off level of >20 showed sensitivity, specificity of 0.98 (95 % CI 0.92-0.99) and 0.94 (95 % CI 0.83-0.98), respectively. The positive likelihood ratio was 32.6 (95 % CI 10.7-99.6), while the negative likelihood ratio at this cut-off was 0.03 (95 % CI 0.01-0.15). Higher serum LDH and serum LDH: pleural fluid ADA ratio in patients presenting with exudative pleural effusion can distinguish between malignant and non-malignant effusion on the first day of hospitalization. The cut-off level for serum LDH: pleural fluid ADA ratio of >20 is highly predictive of malignancy in patients with exudative pleural effusion (whether lymphocytic or neutrophilic) with high sensitivity and specificity.
Can We Rule Out Meningitis from Negative Jolt Accentuation? A Retrospective Cohort Study.
Sato, Ryota; Kuriyama, Akira; Luthe, Sarah Kyuragi
2017-04-01
Jolt accentuation has been considered to be the most sensitive physical finding to predict meningitis. However, there are only a few studies assessing the diagnostic accuracy of jolt accentuation. Therefore, we aimed to evaluate the diagnostic accuracy of jolt accentuation and investigate whether it can be extended to patients with mild altered mental status. We performed a single-center, retrospective observational study on patients who presented to the emergency department in a Japanese tertiary care center from January 1, 2010 to March 31, 2016. Jolt accentuation evaluated in patients with fever, headache, and mild altered mental status with Glasgow Coma Scale no lower than E2 or M4 was defined as "jolt accentuation in the broad sense." Jolt accentuation evaluated in patients with fever, headache, and no altered mental status was defined as "jolt accentuation in the narrow sense." We evaluated the sensitivity and specificity in both groups. Among 118 patients, the sensitivity and specificity of jolt accentuation in the broad sense were 70.7% (95% confidence interval (CI): 58.0%-80.8%) and 36.7% (95% CI: 25.6%-49.3%). The positive likelihood ratio and negative likelihood ratio were 1.12 (95% CI: 0.87-1.44) and 0.80 (95% CI: 0.48-1.34), respectively. Among 108 patients, the sensitivity and specificity of jolt accentuation in the narrow sense were 75.0% (95% CI: 61.8%-84.8%) and 35.1% (95% CI: 24.0%-48.0%). The positive likelihood ratio and negative likelihood ratio were 1.16 (95% CI: 0.90-1.48) and 0.71 (95% CI: 0.40-1.28), respectively. Jolt accentuation itself has a limited value in the diagnosis of meningitis regardless of altered mental status. Therefore, meningitis should not be ruled out by negative jolt accentuation. © 2017 American Headache Society.
NASA Astrophysics Data System (ADS)
Sembiring, J.; Jones, F.
2018-03-01
The red cell distribution width (RDW) to platelet ratio (RPR) can predict liver fibrosis and cirrhosis in chronic hepatitis B with relatively high accuracy. RPR was superior to other non-invasive methods for predicting liver fibrosis, such as the AST-to-ALT ratio, the AST-to-platelet ratio index and FIB-4. The aim of this study was to assess the diagnostic accuracy of the RPR for liver fibrosis in chronic hepatitis B patients, compared with Fibroscan. This cross-sectional study was conducted at Adam Malik Hospital from January to June 2015. We examined 34 patients with chronic hepatitis B, recording RDW, platelet count, and Fibroscan results. Data were statistically analyzed. The RPR, assessed with the ROC procedure, had an accuracy of 72.3% (95% CI: 84.1%-97%). In this study, the RPR had a moderate ability to predict fibrosis degree (p = 0.029 with AUC > 70%). The cutoff value of the RPR was 0.0591, sensitivity and specificity were 71.4% and 60%, the positive predictive value (PPV) was 55.6% and the negative predictive value (NPV) was 75%, the positive likelihood ratio was 1.79 and the negative likelihood ratio was 0.48. The RPR has the ability to predict the degree of liver fibrosis in chronic hepatitis B patients with moderate accuracy.
Validation of the portable Air-Smart Spirometer
Núñez Fernández, Marta; Pallares Sanmartín, Abel; Mouronte Roibas, Cecilia; Cerdeira Domínguez, Luz; Botana Rial, Maria Isabel; Blanco Cid, Nagore; Fernández Villar, Alberto
2018-01-01
Background: The Air-Smart Spirometer is the first portable device accepted by the European Community (EC) that performs spirometric measurements by a turbine mechanism and displays the results on a smartphone or a tablet. Methods: In this multicenter, descriptive and cross-sectional prospective study carried out in 2 hospital centers, we compare FEV1, FVC, and the FEV1/FVC ratio measured with the Air-Smart Spirometer device and a conventional spirometer, and analyze the ability of this new portable device to detect obstructions. Patients were included for 2 consecutive months. We calculate sensitivity, specificity, positive and negative predictive value (PPV and NPV) and likelihood ratios (LR+, LR-) as well as the kappa index to evaluate the concordance between the two devices for the detection of obstruction. The agreement and relation between the values of FEV1 and FVC in absolute value and the FEV1/FVC ratio measured by both devices were analyzed by calculating the intraclass correlation coefficient (ICC) and the Pearson correlation coefficient (r), respectively. Results: 200 patients (100 from each center) were included with a mean age of 57 (± 14) years, 110 were men (55%). Obstruction was detected by conventional spirometry in 73 patients (40.1%). Using a FEV1/FVC ratio smaller than 0.7 to detect obstruction with the Air-Smart Spirometer, the kappa index was 0.88, sensitivity (90.4%), specificity (97.2%), PPV (95.7%), NPV (93.7%), positive likelihood ratio (32.29), and negative likelihood ratio (0.10). The ICC and r between FEV1, FVC, and FEV1/FVC ratio measured by the Air-Smart Spirometer and the conventional spirometer were all higher than 0.94. Conclusion: The Air-Smart Spirometer is a simple and very precise instrument for detecting obstructive airway diseases. It is easy to use, which could make it especially useful in non-specialized care and in other areas. PMID:29474502
Lods, wrods, and mods: the interpretation of lod scores calculated under different models.
Hodge, S E; Elston, R C
1994-01-01
In this paper we examine the relationships among classical lod scores, "wrod" scores (lod scores calculated under the wrong genetic model), and "mod" scores (lod scores maximized over genetic model parameters). We compare the behavior of these scores when the state of nature is linkage to their behavior when the state of nature is no linkage. We describe sufficient conditions for mod scores to be valid and discuss their use to determine the correct genetic model. We show that lod scores represent a likelihood-ratio test for independence. We explain the "ascertainment-assumption-free" aspect of using mod scores to determine mode of inheritance and we set this aspect into a well-established statistical framework. Finally, we summarize practical guidelines for the use of mod scores.
Alladio, Eugenio; Martyna, Agnieszka; Salomone, Alberto; Pirro, Valentina; Vincenti, Marco; Zadora, Grzegorz
2017-02-01
The detection of direct ethanol metabolites, such as ethyl glucuronide (EtG) and fatty acid ethyl esters (FAEEs), in scalp hair is considered the optimal strategy to effectively recognize chronic alcohol misuses by means of specific cut-offs suggested by the Society of Hair Testing. However, several factors (e.g. hair treatments) may alter the correlation between alcohol intake and biomarkers concentrations, possibly introducing bias in the interpretative process and conclusions. 125 subjects with various drinking habits were subjected to blood and hair sampling to determine indirect (e.g. CDT) and direct alcohol biomarkers. The overall data were investigated using several multivariate statistical methods. A likelihood ratio (LR) approach was used for the first time to provide predictive models for the diagnosis of alcohol abuse, based on different combinations of direct and indirect alcohol biomarkers. LR strategies provide a more robust outcome than the plain comparison with cut-off values, where tiny changes in the analytical results can lead to dramatic divergence in the way they are interpreted. An LR model combining EtG and FAEEs hair concentrations proved to discriminate non-chronic from chronic consumers with ideal correct classification rates, whereas the contribution of indirect biomarkers proved to be negligible. Optimal results were observed using a novel approach that associates LR methods with multivariate statistics. In particular, the combination of LR approach with either Principal Component Analysis (PCA) or Linear Discriminant Analysis (LDA) proved successful in discriminating chronic from non-chronic alcohol drinkers. These LR models were subsequently tested on an independent dataset of 43 individuals, which confirmed their high efficiency. These models proved to be less prone to bias than EtG and FAEEs independently considered. In conclusion, LR models may represent an efficient strategy to sustain the diagnosis of chronic alcohol consumption and provide a suitable gradation to support the judgment. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Lee, Michael J; Cizik, Amy M; Hamilton, Deven; Chapman, Jens R
2014-02-01
The possibility and likelihood of a postoperative medical complication after spine surgery undoubtedly play a major role in the decision making of the surgeon and patient alike. Although prior studies have determined relative risk and odds ratio values to quantify risk factors, these values may be difficult to translate to the patient during counseling of surgical options. Ideally, a model that predicts absolute risk of medical complication, rather than relative risk or odds ratio values, would greatly enhance the discussion of safety of spine surgery. To date, there is no risk stratification model that specifically predicts the risk of medical complication. The purpose of this study was to create and validate a predictive model for the risk of medical complication during and after spine surgery. Statistical analysis using a prospective surgical spine registry that recorded extensive demographic, surgical, and complication data. Outcomes examined are medical complications that were specifically defined a priori. This analysis is a continuation of statistical analysis of our previously published report. Using a prospectively collected surgical registry of more than 1,476 patients with extensive demographic, comorbidity, surgical, and complication detail recorded for 2 years after surgery, we previously identified several risk factors for medical complications. Using the beta coefficients from those log binomial regression analyses, we created a model to predict the occurrence of medical complication after spine surgery. We split our data into two subsets for internal and cross-validation of our model. We created two predictive models: one predicting the occurrence of any medical complication and the other predicting the occurrence of a major medical complication. The final predictive model for any medical complications had an area under the receiver operating characteristic curve of 0.76, considered to be a fair measure. The final predictive model for any major medical complications had an area under the receiver operating characteristic curve of 0.81, considered to be a good measure. The final model has been uploaded for use on SpineSage.com. We present a validated model for predicting medical complications after spine surgery. The value in this model is that it gives the user an absolute percent likelihood of complication after spine surgery based on the patient's comorbidity profile and invasiveness of surgery. Patients are far more likely to understand an absolute percentage, rather than relative risk and confidence interval values. A model such as this is of paramount importance in counseling patients and enhancing the safety of spine surgery. In addition, a tool such as this can be of great use particularly as health care trends toward pay-for-performance, quality metrics, and risk adjustment. To facilitate the use of this model, we have created a website (SpineSage.com) where users can enter patient data to determine likelihood of medical complications after spine surgery. Copyright © 2014 Elsevier Inc. All rights reserved.
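The step of turning fitted regression coefficients into an absolute percent likelihood is simply the inverse link applied to a patient's linear predictor. The sketch below uses invented coefficients and predictors and a logistic (inverse-logit) link purely for illustration; it is not the published SpineSage model, which was built from log-binomial regression coefficients.

```python
# Sketch of converting a coefficient profile into an absolute probability of
# complication: apply the linear predictor, then the inverse link. Coefficients
# and predictors are invented, NOT the SpineSage model.
import math

coefficients = {                     # hypothetical log-odds contributions
    "intercept": -3.2,
    "age_per_decade_over_50": 0.30,
    "diabetes": 0.45,
    "invasiveness_index": 0.08,      # per unit of surgical invasiveness
}

def predicted_risk(age, diabetes, invasiveness):
    lp = (coefficients["intercept"]
          + coefficients["age_per_decade_over_50"] * max(age - 50, 0) / 10
          + coefficients["diabetes"] * diabetes
          + coefficients["invasiveness_index"] * invasiveness)
    return 1 / (1 + math.exp(-lp))   # inverse logit -> absolute probability

print(f"estimated risk: {predicted_risk(age=67, diabetes=1, invasiveness=12):.1%}")
```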
Likelihood analysis of supersymmetric SU(5) GUTs
Bagnaschi, Emanuele; Costa, J. C.; Sakurai, K.; ...
2017-02-16
Here, we perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\mathbf{5}$ and $\mathbf{\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\tan\beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel $\tilde u_R/\tilde c_R - \tilde\chi^0_1$ coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ...
Unified framework to evaluate panmixia and migration direction among multiple sampling locations.
Beerli, Peter; Palczewski, Michal
2010-05-01
For many biological investigations, groups of individuals are genetically sampled from several geographic locations. These sampling locations often do not reflect the genetic population structure. We describe a framework using marginal likelihoods to compare and order structured population models, such as testing whether the sampling locations belong to the same randomly mating population or comparing unidirectional and multidirectional gene flow models. In the context of inferences employing Markov chain Monte Carlo methods, the accuracy of the marginal likelihoods depends heavily on the approximation method used to calculate the marginal likelihood. Two methods, modified thermodynamic integration and a stabilized harmonic mean estimator, are compared. With finite Markov chain Monte Carlo run lengths, the harmonic mean estimator may not be consistent. Thermodynamic integration, in contrast, delivers considerably better estimates of the marginal likelihood. The choice of prior distributions does not influence the order and choice of the better models when the marginal likelihood is estimated using thermodynamic integration, whereas with the harmonic mean estimator the influence of the prior is pronounced and the order of the models changes. The approximation of marginal likelihood using thermodynamic integration in MIGRATE allows the evaluation of complex population genetic models, not only of whether sampling locations belong to a single panmictic population, but also of competing complex structured population models.
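The contrast drawn above between thermodynamic integration and the harmonic mean estimator can be illustrated on a toy conjugate model where the exact marginal likelihood is known in closed form. The sketch below uses a normal likelihood with known variance and a normal prior on the mean (not MIGRATE's coalescent likelihood); the prior, data and run lengths are arbitrary.

```python
# Toy comparison of two marginal-likelihood estimators on a conjugate normal
# model (known sigma, normal prior on the mean), where the exact value is known.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
sigma, m0, s0 = 1.0, 0.0, 2.0
y = rng.normal(0.8, sigma, size=30)
n, ybar = len(y), y.mean()

def loglik(mu):
    return stats.norm.logpdf(y[:, None], loc=mu, scale=sigma).sum(axis=0)

# Exact log marginal likelihood (y is multivariate normal under the prior).
cov = sigma**2 * np.eye(n) + s0**2 * np.ones((n, n))
exact = stats.multivariate_normal.logpdf(y, mean=np.full(n, m0), cov=cov)

# The power posterior p_t(mu) ~ L(mu)^t * prior(mu) is Gaussian for each temperature t.
def power_posterior_sample(t, size=4000):
    prec = 1 / s0**2 + t * n / sigma**2
    mean = (m0 / s0**2 + t * n * ybar / sigma**2) / prec
    return rng.normal(mean, np.sqrt(1 / prec), size=size)

# Thermodynamic integration: log Z = integral over t in [0, 1] of E_{p_t}[log L].
ts = np.linspace(0, 1, 21)
e_loglik = [loglik(power_posterior_sample(t)).mean() for t in ts]
ti_estimate = sum((ts[i + 1] - ts[i]) * (e_loglik[i] + e_loglik[i + 1]) / 2
                  for i in range(len(ts) - 1))

# Harmonic mean estimator from ordinary posterior (t = 1) samples, stabilized.
ll = loglik(power_posterior_sample(1.0))
m = ll.min()
hm_estimate = m - np.log(np.mean(np.exp(m - ll)))

print(f"exact {exact:.2f}  thermodynamic integration {ti_estimate:.2f}  harmonic mean {hm_estimate:.2f}")
```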
Liu, Bo-Ji; Li, Dan-Dan; Xu, Hui-Xiong; Guo, Le-Hang; Zhang, Yi-Feng; Xu, Jun-Mei; Liu, Chang; Liu, Lin-Na; Li, Xiao-Long; Xu, Xiao-Hong; Qu, Shen; Xing, Mingzhao
2015-12-01
The aim of this study was to evaluate the diagnostic performance of quantitative shear wave velocity (SWV) measurement on acoustic radiation force impulse (ARFI) elastography for differentiation between benign and malignant thyroid nodules using meta-analysis. The databases of PubMed and the Web of Science were searched. Studies published in English on assessment of the sensitivity and specificity of ARFI elastography for the differentiation of thyroid nodules were collected. The quantitative measurement of ARFI elastography was evaluated by SWV (m/s). Meta-Disc Version 1.4 software was used to describe and calculate the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio and summary receiver operating characteristic curves. We analyzed a total of 13 studies, which included 1,854 thyroid nodules (including 1,339 benign nodules and 515 malignant nodules) from 1,641 patients. The summary sensitivity and specificity for differential diagnosis between benign and malignant thyroid nodules by SWV were 0.81 (95% confidence interval [CI]: 0.77-0.84) and 0.84 (95% CI: 0.81-0.86), respectively. The pooled positive and negative likelihood ratios were 5.21 (95% CI: 3.56-7.62) and 0.23 (95% CI: 0.17-0.32), respectively. The pooled diagnostic odds ratio was 27.53 (95% CI: 14.58-52.01), and the area under the summary receiver operating characteristic curve was 0.91 (Q* = 0.84). In conclusion, SWV measurement on ARFI elastography has high sensitivity and specificity for differential diagnosis between benign and malignant thyroid nodules and can be used in combination with conventional ultrasound. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Kim, Hye Jeong; Kwak, Mi Kyung; Choi, In Ho; Jin, So-Young; Park, Hyeong Kyu; Byun, Dong Won; Suh, Kyoil; Yoo, Myung Hi
2018-02-23
The aim of this study was to address the role of the elasticity index as a possible predictive marker for detecting papillary thyroid carcinoma (PTC) and quantitatively assess shear wave elastography (SWE) as a tool for differentiating PTC from benign thyroid nodules. One hundred and nineteen patients with thyroid nodules undergoing SWE before ultrasound-guided fine needle aspiration and core needle biopsy were analyzed. The mean (EMean), minimum (EMin), maximum (EMax), and standard deviation (ESD) of SWE elasticity indices were measured. Among 105 nodules, 14 were PTC and 91 were benign. The EMean, EMin, and EMax values were significantly higher in PTCs than benign nodules (EMean 37.4 in PTC vs. 23.7 in benign nodules, p = 0.005; EMin 27.9 vs. 17.8, p = 0.034; EMax 46.7 vs. 31.5, p < 0.001). The EMean, EMin, and EMax were significantly associated with PTC with diagnostic odds ratios varying from 6.74 to 9.91, high specificities (86.4%, 86.4%, and 88.1%, respectively), and positive likelihood ratios (4.21, 3.69, and 4.82, respectively). The ESD values were significantly higher in PTC than in benign nodules (6.3 vs. 2.6, p < 0.001). ESD had the highest specificity (96.6%) when applied with a cut-off value of 6.5 kPa. It had a positive likelihood ratio of 14.75 and a diagnostic odds ratio of 28.50. The shear elasticity index of ESD, with higher likelihood ratios for PTC, will probably identify nodules that have a high potential for malignancy. It may help to identify and select malignant nodules, while reducing unnecessary fine needle aspiration and core needle biopsies of benign nodules.
Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.
2014-01-01
Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface. PMID:25329157
Comoving Stars in Gaia DR1: An Abundance of Very Wide Separation Comoving Pairs
NASA Astrophysics Data System (ADS)
Oh, Semyeong; Price-Whelan, Adrian M.; Hogg, David W.; Morton, Timothy D.; Spergel, David N.
2017-06-01
The primary sample of the Gaia Data Release 1 is the Tycho-Gaia Astrometric Solution (TGAS): ≈2 million Tycho-2 sources with improved parallaxes and proper motions relative to the initial catalog. This increased astrometric precision presents an opportunity to find new binary stars and moving groups. We search for high-confidence comoving pairs of stars in TGAS by identifying pairs of stars consistent with having the same 3D velocity using a marginalized likelihood ratio test to discriminate candidate comoving pairs from the field population. Although we perform some visualizations using (bias-corrected) inverse parallax as a point estimate of distance, the likelihood ratio is computed with a probabilistic model that includes the covariances of parallax and proper motions and marginalizes the (unknown) true distances and 3D velocities of the stars. We find 13,085 comoving star pairs among 10,606 unique stars with separations as large as 10 pc (our search limit). Some of these pairs form larger groups through mutual comoving neighbors: many of these pair networks correspond to known open clusters and OB associations, but we also report the discovery of several new comoving groups. Most surprisingly, we find a large number of very wide (> 1 pc) separation comoving star pairs, the number of which increases with increasing separation and cannot be explained purely by false-positive contamination. Our key result is a catalog of high-confidence comoving pairs of stars in TGAS. We discuss the utility of this catalog for making dynamical inferences about the Galaxy, testing stellar atmosphere models, and validating chemical abundance measurements.
Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders
2007-01-01
Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…
Poortinga, Ernest; Lemmen, Craig; Jibson, Michael D
2006-01-01
We examined the clinical, criminal, and sociodemographic characteristics of all white-collar crime defendants referred to the evaluation unit of a state center for forensic psychiatry. With 29,310 evaluations in a 12-year period, we found 70 defendants charged with embezzlement, 3 with health care fraud, and no other white-collar defendants (based on the eight crimes widely accepted as white-collar offenses). In a case-control study design, the 70 embezzlement cases were compared with 73 defendants charged with other forms of nonviolent theft. White-collar defendants were found to have a higher likelihood of white race (adjusted odds ratio (adj. OR) = 4.51), more years of education (adj. OR = 3471), and a lower likelihood of substance abuse (adj. OR = .28) than control defendants. Logistic regression modeling showed that the variance in the relationship between unipolar depression and white-collar crime was more economically accounted for by education, race, and substance abuse.
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. The spatial gradients caused by diffusion can now be assessed in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds than local approximation methods.
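A minimal Python sketch of the two ingredients described above, using a toy exponential-decay model in place of the paper's PDE model (the model, parameter values, and simulated data are assumptions for illustration): a negative log-likelihood for log-normally distributed measurements, and a profile likelihood for one parameter obtained by re-optimizing the remaining parameters on a grid.

import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, y):
    # log(y) ~ Normal(log(model), sigma^2); additive constants are dropped.
    log_y0, log_k, log_sigma = params
    pred = np.exp(log_y0) * np.exp(-np.exp(log_k) * t)
    sigma = np.exp(log_sigma)
    resid = np.log(y) - np.log(pred)
    return np.sum(0.5 * (resid / sigma) ** 2 + np.log(sigma) + np.log(y))

def profile_neg_loglik(log_k_fixed, t, y):
    # Profile likelihood: fix the rate parameter, re-optimize the nuisance parameters.
    obj = lambda p: neg_loglik([p[0], log_k_fixed, p[1]], t, y)
    return minimize(obj, x0=[0.0, 0.0], method="Nelder-Mead").fun

rng = np.random.default_rng(0)
t = np.linspace(0.1, 5.0, 40)
y = 2.0 * np.exp(-0.7 * t) * np.exp(rng.normal(0.0, 0.1, t.size))   # simulated data

grid = np.log(np.linspace(0.3, 1.2, 25))
profile = np.array([profile_neg_loglik(lk, t, y) for lk in grid])
print("profiled estimate of k:", np.exp(grid[np.argmin(profile)]))

Sharp curvature of the profile around its minimum indicates a practically identifiable parameter, whereas a flat profile indicates non-identifiability.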
A long-term earthquake rate model for the central and eastern United States from smoothed seismicity
Moschetti, Morgan P.
2015-01-01
I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimize the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
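The core quantity in the likelihood (L-) testing mentioned above is the joint Poisson log-likelihood of the observed earthquake counts given the forecast rates in each spatial cell. A minimal Python sketch follows; the cell rates and counts are made-up numbers, not the study's model.

import numpy as np
from scipy.stats import poisson

def forecast_log_likelihood(rates, counts):
    # Sum of log Poisson probabilities of the observed counts under the forecast rates.
    return float(np.sum(poisson.logpmf(np.asarray(counts), np.asarray(rates, dtype=float))))

rates = [0.20, 1.50, 0.05, 3.00]   # hypothetical forecast rates per cell over the test period
counts = [0, 2, 0, 4]              # observed earthquake counts in the same cells
print(forecast_log_likelihood(rates, counts))

Competing smoothed-seismicity models can then be ranked by this joint log-likelihood on a held-out catalog.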
Likelihood testing of seismicity-based rate forecasts of induced earthquakes in Oklahoma and Kansas
Moschetti, Morgan P.; Hoover, Susan M.; Mueller, Charles
2016-01-01
Likelihood testing of induced earthquakes in Oklahoma and Kansas has identified the parameters that optimize the forecasting ability of smoothed seismicity models and quantified the recent temporal stability of the spatial seismicity patterns. Use of the most recent 1-year period of earthquake data and use of 10–20-km smoothing distances produced the greatest likelihood. The likelihood that the locations of January–June 2015 earthquakes were consistent with optimized forecasts decayed with increasing elapsed time between the catalogs used for model development and testing. Likelihood tests with two additional sets of earthquakes from 2014 exhibit a strong sensitivity of the rate of decay to the smoothing distance. Marked reductions in likelihood are caused by the nonstationarity of the induced earthquake locations. Our results indicate a multiple-fold benefit from smoothed seismicity models in developing short-term earthquake rate forecasts for induced earthquakes in Oklahoma and Kansas, relative to the use of seismic source zones.
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
Tests for detecting overdispersion in models with measurement error in covariates.
Yang, Yingsi; Wong, Man Yu
2015-11-30
Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
Use and interpretation of logistic regression in habitat-selection studies
Keating, Kim A.; Cherry, Steve
2004-01-01
Logistic regression is an important tool for wildlife habitat-selection studies, but the method frequently has been misapplied due to an inadequate understanding of the logistic model, its interpretation, and the influence of sampling design. To promote better use of this method, we review its application and interpretation under 3 sampling designs: random, case-control, and use-availability. Logistic regression is appropriate for habitat use-nonuse studies employing random sampling and can be used to directly model the conditional probability of use in such cases. Logistic regression also is appropriate for studies employing case-control sampling designs, but careful attention is required to interpret results correctly. Unless bias can be estimated or probability of use is small for all habitats, results of case-control studies should be interpreted as odds ratios, rather than probability of use or relative probability of use. When data are gathered under a use-availability design, logistic regression can be used to estimate approximate odds ratios if probability of use is small, at least on average. More generally, however, logistic regression is inappropriate for modeling habitat selection in use-availability studies. In particular, using logistic regression to fit the exponential model of Manly et al. (2002:100) does not guarantee maximum-likelihood estimates, valid probabilities, or valid likelihoods. We show that the resource selection function (RSF) commonly used for the exponential model is proportional to a logistic discriminant function. Thus, it may be used to rank habitats with respect to probability of use and to identify important habitat characteristics or their surrogates, but it is not guaranteed to be proportional to probability of use. Other problems associated with the exponential model also are discussed. We describe an alternative model based on Lancaster and Imbens (1996) that offers a method for estimating conditional probability of use in use-availability studies. Although promising, this model fails to converge to a unique solution in some important situations. Further work is needed to obtain a robust method that is broadly applicable to use-availability studies.
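To make the odds-ratio interpretation above concrete, the following Python sketch fits a logistic model to simulated use/non-use data under random sampling and exponentiates the coefficients; the covariates (canopy cover, distance to water) and effect sizes are hypothetical.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
canopy = rng.uniform(0.0, 1.0, 500)          # hypothetical habitat covariates
dist_water = rng.uniform(0.0, 5.0, 500)
eta = -1.0 + 2.0 * canopy - 0.5 * dist_water
used = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))   # observed use/non-use

X = sm.add_constant(np.column_stack([canopy, dist_water]))
fit = sm.Logit(used, X).fit(disp=False)

odds_ratios = np.exp(fit.params)        # multiplicative change in odds per unit covariate change
or_conf_int = np.exp(fit.conf_int())    # 95% CI on the odds-ratio scale
print(odds_ratios)
print(or_conf_int)

Under case-control or use-availability sampling, as the abstract stresses, these exponentiated coefficients should be read as odds ratios rather than as probabilities of use.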
Sull, Jae Woong; Liang, Kung-Yee; Hetmanski, Jacqueline B; Fallin, M Daniele; Ingersoll, Roxanne G; Park, Ji Wan; Wu-Chou, Yah-Huei; Chen, Philip K; Chong, Samuel S; Cheah, Felicia; Yeow, Vincent; Park, Beyoung Yun; Jee, Sun Ha; Jabs, Ethylin W; Redett, Richard; Scott, Alan F; Beaty, Terri H
2008-09-15
Isolated cleft palate is among the most common human birth defects. The TCOF1 gene has been suggested as a candidate gene for cleft palate based on animal models. This study tests for association between markers in TCOF1 and isolated, nonsyndromic cleft palate using a case-parent trio design considering parent-of-origin effects. Case-parent trios from three populations (comprising a total of 81 case-parent trios) were genotyped for single nucleotide polymorphisms (SNPs) in the TCOF1 gene. We used the transmission disequilibrium test and the transmission asymmetry test on individual SNPs. When all trios were combined, the odds ratio for transmission of the minor allele, OR(transmission), was significant for SNP rs15251 (OR = 2.88, P = 0.007), as well as rs2255796 and rs2569062 (OR = 2.08, P = 0.03; OR = 2.43, P = 0.041; respectively) when parent of origin was not considered. The transmission asymmetry test also revealed one SNP (rs15251) showing excess maternal transmission significant at the P = 0.005 level (OR = 6.50). Parent-of-origin effects were assessed using the parent-of-origin likelihood ratio test on both SNPs and haplotypes. While the parent-of-origin likelihood ratio test was only marginally significant for this SNP (P = 0.136), analysis of haplotypes of rs2255796 and rs15251 suggested excess maternal transmission. Therefore, these data suggest TCOF1 may influence risk of cleft palate through a parent-of-origin effect. Copyright 2008 Wiley-Liss, Inc.
Circulating miR-128 as a potential diagnostic biomarker for glioma.
Liang, Ruo-Fei; Li, Mao; Yang, Yuan; Wang, Xiang; Mao, Qing; Liu, Yan-Hui
2017-09-01
miR-128 in circulation is a promising marker for early diagnosis of glioma. A meta-analysis was performed to evaluate the diagnostic accuracy and clinical value of circulating miR-128 in patients with glioma. A comprehensive literature search for relevant published articles (last search updated on December 29, 2016) was conducted in the Chinese Biomedical Literature Database, PubMed, and Embase. The Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool was used to score the quality of the eligible studies. Meta-Disc 1.4 software was used to test for heterogeneity and to perform the meta-analysis. The three studies included in our study enrolled a total of 191 patients with glioma and 73 individuals without tumor. Using a fixed-effect model analysis, the summary assessments revealed that the pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio were 0.89 (95% CI: 0.84-0.93), 0.90 (95% CI: 0.81-0.96), 8.07 (95% CI: 4.21-15.46), and 0.13 (95% CI: 0.09-0.19), respectively. The diagnostic odds ratio (DOR) of miR-128 was 65.00 (95% CI: 26.90-157.10), indicating that the overall accuracy of the miR-128 test for detecting glioma was high. The value of I² was 0.0%, indicating that there was no significant heterogeneity among studies. The present meta-analysis showed that circulating miR-128 might be a promising noninvasive biomarker for diagnosing glioma. Copyright © 2017 Elsevier B.V. All rights reserved.
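The relationship between the reported quantities can be written down directly: LR+ = sensitivity/(1 − specificity), LR− = (1 − sensitivity)/specificity, and DOR = LR+/LR−. The Python sketch below evaluates these formulas for the pooled sensitivity and specificity above; note that meta-analyses such as this one pool the likelihood ratios and DOR across studies directly, so the pooled values reported need not equal the ratios implied by the pooled sensitivity and specificity.

def diagnostic_ratios(sensitivity, specificity):
    # Likelihood ratios and diagnostic odds ratio implied by a sensitivity/specificity pair.
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    dor = lr_pos / lr_neg
    return lr_pos, lr_neg, dor

print(diagnostic_ratios(0.89, 0.90))   # approximately (8.9, 0.12, 72.8)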
Li, Yan-Wei; Zhou, Le-Shan; Li, Xing
2017-03-15
Fever is the most common complaint in the pediatric and emergency departments. Caregivers prefer to detect fever in their children by tactile assessment. Our aim was to summarize the evidence on the accuracy of caregivers' tactile assessment for detecting fever in children. We performed a literature search of Cochrane Library, PubMed, Web of Knowledge, EMBASE (Ovid), EBSCO and Google Scholar, without restriction of publication date, to identify English articles assessing caregivers' ability to detect fever in children by tactile assessment. Quality assessment was based on the 2011 Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) criteria. Pooled estimates of sensitivity and specificity were calculated with the use of a bivariate model and summary receiver operating characteristic plots for meta-analysis. Eleven articles were included in our analysis. The summary estimates for tactile assessment as a diagnostic tool revealed a sensitivity of 87.5% (95% CI 79.3% to 92.8%) and specificity of 54.6% (95% CI 38.5% to 69.9%). The pooled positive likelihood ratio was 1.93 (95% CI 1.39 to 2.67) and negative likelihood ratio was 0.23 (95% CI 0.15 to 0.36). The area under the curve was 0.82 (95% CI 0.7 to 0.85). The pooled diagnostic odds ratio was 8.46 (95% CI 4.54 to 15.76). Tactile assessment of fever in children by palpation has moderate diagnostic value. Caregivers' assessment as "no fever" by touch is quite accurate in ruling out fever, while assessment as "fever" can be considered but needs confirmation.
Estimating hazard ratios in cohort data with missing disease information due to death.
Binder, Nadine; Herrnböck, Anne-Sophie; Schumacher, Martin
2017-03-01
In clinical and epidemiological studies information on the primary outcome of interest, that is, the disease status, is usually collected at a limited number of follow-up visits. The disease status can often only be retrieved retrospectively in individuals who are alive at follow-up, but will be missing for those who died before. Right-censoring the death cases at the last visit (ad-hoc analysis) yields biased hazard ratio estimates of a potential risk factor, and the bias can be substantial and occur in either direction. In this work, we investigate three different approaches that use the same likelihood contributions derived from an illness-death multistate model in order to more adequately estimate the hazard ratio by including the death cases into the analysis: a parametric approach, a penalized likelihood approach, and an imputation-based approach. We investigate to which extent these approaches allow for an unbiased regression analysis by evaluating their performance in simulation studies and on a real data example. In doing so, we use the full cohort with complete illness-death data as reference and artificially induce missing information due to death by setting discrete follow-up visits. Compared to an ad-hoc analysis, all considered approaches provide less biased or even unbiased results, depending on the situation studied. In the real data example, the parametric approach is seen to be too restrictive, whereas the imputation-based approach could almost reconstruct the original event history information. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Mixture Rasch Models with Joint Maximum Likelihood Estimation
ERIC Educational Resources Information Center
Willse, John T.
2011-01-01
This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…
NASA Astrophysics Data System (ADS)
Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.
2016-12-01
Bayesian multimodel inference is increasingly being used in hydrology. Estimating Bayesian model evidence (BME) is of central importance in many Bayesian multimodel analyses such as Bayesian model averaging and model selection. BME is the overall probability of the model in reproducing the data, accounting for the trade-off between goodness-of-fit and model complexity. Yet estimating BME is challenging, especially for high-dimensional problems with complex sampling spaces. Estimating BME using Monte Carlo numerical methods is preferred, as these methods yield higher accuracy than semi-analytical solutions (e.g. Laplace approximations, BIC, KIC, etc.). However, numerical methods are prone to the numerical demons arising from underflow and round-off errors. Although a few studies have alluded to this issue, to our knowledge this is the first study that illustrates these numerical demons. We show that finite-precision arithmetic can become a threshold on likelihood values and on the Metropolis acceptance ratio, which results in trimming parameter regions (when the likelihood function is less than the smallest floating-point number that a computer can represent) and in corruption of the empirical measures of the random states of the MCMC sampler (when using the log-likelihood function). We consider two of the most powerful numerical estimators of BME, the path sampling method of thermodynamic integration (TI) and the importance sampling method of steppingstone sampling (SS). We also consider the two most widely used numerical estimators, the prior sampling arithmetic mean (AM) and the posterior sampling harmonic mean (HM). We investigate the vulnerability of these four estimators to the numerical demons. Interestingly, the most biased estimator, namely the HM, turned out to be the least vulnerable. While it is generally assumed that AM is a bias-free estimator that will always approximate the true BME given sufficient computational effort, we show that arithmetic underflow can hamper AM, resulting in severe underestimation of BME. TI turned out to be the most vulnerable, resulting in BME overestimation. Finally, we show how SS can be largely invariant to rounding errors, yielding the most accurate and computationally efficient results. These results are useful for Monte Carlo simulations aimed at estimating Bayesian model evidence.
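The underflow problem described above, and one standard remedy, can be illustrated in a few lines of Python: exponentiating large negative log-likelihoods underflows to zero, whereas the log-sum-exp identity lets the mean likelihood (the building block of the arithmetic-mean estimator) be computed entirely on the log scale. The log-likelihood values below are arbitrary illustrative numbers, not output from any of the estimators discussed.

import numpy as np
from scipy.special import logsumexp

log_lik = np.array([-1205.3, -1198.7, -1210.1, -1201.4])   # sample log-likelihoods

naive_mean = np.mean(np.exp(log_lik))                 # underflows to 0.0 in double precision
log_mean = logsumexp(log_lik) - np.log(log_lik.size)  # stable log of the mean likelihood

print(naive_mean)   # 0.0
print(log_mean)     # about -1200.0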
Long terms trends in CD4+ cell counts, CD8+ cell counts, and the CD4+ : CD8+ ratio
Hughes, Rachael A.; May, Margaret T.; Tilling, Kate; Taylor, Ninon; Wittkop, Linda; Reiss, Peter; Gill, John; Schommers, Philipp; Costagliola, Dominique; Guest, Jodie L.; Lima, Viviane D.; d’Arminio Monforte, Antonella; Smith, Colette; Cavassini, Matthias; Saag, Michael; Castilho, Jessica L.; Sterne, Jonathan A.C.
2018-01-01
Objective: Model trajectories of CD4+ and CD8+ cell counts after starting combination antiretroviral therapy (ART) and use the model to predict trends in these counts and the CD4+ : CD8+ ratio. Design: Cohort study of antiretroviral-naïve HIV-positive adults who started ART after 1997 (ART Cohort Collaboration) with more than 6 months of follow-up data. Methods: We jointly estimated CD4+ and CD8+ cell count trends and their correlation using a bivariate random effects model, with linear splines describing their population trends, and predicted the CD4+ : CD8+ ratio trend from this model. We assessed whether CD4+ and CD8+ cell count trends and the CD4+ : CD8+ ratio trend varied according to CD4+ cell count at start of ART (baseline), and, whether these trends differed in patients with and without virological failure more than 6 months after starting ART. Results: A total of 39 979 patients were included (median follow-up was 53 months). Among patients with baseline CD4+ cell count at least 50 cells/μl, predicted mean CD8+ cell counts continued to decrease between 3 and 15 years post-ART, partly driving increases in the predicted mean CD4+ : CD8+ ratio. During 15 years of follow-up, normalization of the predicted mean CD4+ : CD8+ ratio (to >1) was only observed among patients with baseline CD4+ cell count at least 200 cells/μl. A higher baseline CD4+ cell count predicted a shorter time to normalization. Conclusion: Declines in CD8+ cell count and increases in CD4+ : CD8+ ratio occurred up to 15 years after starting ART. The likelihood of normalization of the CD4+ : CD8+ ratio is strongly related to baseline CD4+ cell count. PMID:29851663
76 FR 18221 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-01
... Ratio Standard for a State's Individual Market; Use: Under section 2718 of the Public Health Service Act... data allows for the calculation of an issuer's medical loss ratio (MLR) by market (individual, small... whether market destabilization has a high likelihood of occurring. Form Number: CMS-10361 (OMB Control No...
Unternaehrer, Eva; Meyer, Andrea Hans; Burkhardt, Susan C A; Dempster, Emma; Staehli, Simon; Theill, Nathan; Lieb, Roselind; Meinlschmidt, Gunther
2015-01-01
In adults reporting low and high maternal care in childhood, we compared DNA methylation in two stress-associated genes (two target sequences in the oxytocin receptor gene, OXTR; one in the brain-derived neurotrophic factor gene, BDNF) in peripheral whole blood, in a cross-sectional study (University of Basel, Switzerland) during 2007-2008. We recruited 89 participants scoring < 27 (n = 47, 36 women) or > 33 (n = 42, 35 women) on the maternal care subscale of the Parental Bonding Instrument (PBI) at a previous assessment of a larger group (N = 709, range PBI maternal care = 0-36, age range = 19-66 years; median 24 years). 85 participants gave blood for DNA methylation analyses (Sequenom(R) EpiTYPER, San Diego, CA) and cell count (Sysmex PocH-100i™, Kobe, Japan). Mixed model statistical analysis showed greater DNA methylation in the low versus high maternal care group in the BDNF target sequence [Likelihood-Ratio (1) = 4.47; p = 0.035] and in one OXTR target sequence [Likelihood-Ratio (1) = 4.33; p = 0.037], but not in the second OXTR target sequence [Likelihood-Ratio (1) < 0.001; p = 0.995]. Mediation analyses indicated that differential blood cell count did not explain associations between low maternal care and BDNF (estimate = -0.005, 95% CI = -0.025 to 0.015; p = 0.626) or OXTR DNA methylation (estimate = -0.015, 95% CI = -0.038 to 0.008; p = 0.192). Hence, low maternal care in childhood was associated with greater DNA methylation in an OXTR and a BDNF target sequence in blood cells in adulthood. Although the study has limitations (cross-sectional, a wide age range, only three target sequences in two genes studied, small effects, uncertain relevance of changes in blood cells to gene methylation in brain), the findings may indicate components of the epiphenotype from early life stress.
Handwriting individualization using distance and rarity
NASA Astrophysics Data System (ADS)
Tang, Yi; Srihari, Sargur; Srinivasan, Harish
2012-01-01
Forensic individualization is the task of associating observed evidence with a specific source. The likelihood ratio (LR) is a quantitative measure that expresses the degree of uncertainty in individualization, where the numerator represents the likelihood that the evidence corresponds to the known and the denominator the likelihood that it does not correspond to the known. Since the number of parameters needed to compute the LR is exponential with the number of feature measurements, a commonly used simplification is the use of likelihoods based on distance (or similarity) given the two alternative hypotheses. This paper proposes an intermediate method which decomposes the LR as the product of two factors, one based on distance and the other on rarity. It was evaluated using a data set of handwriting samples, by determining whether two writing samples were written by the same/different writer(s). The accuracy of the distance and rarity method, as measured by error rates, is significantly better than the distance method.
Comparison between presepsin and procalcitonin in early diagnosis of neonatal sepsis.
Iskandar, Agustin; Arthamin, Maimun Z; Indriana, Kristin; Anshory, Muhammad; Hur, Mina; Di Somma, Salvatore
2018-05-09
Neonatal sepsis remains worldwide one of the leading causes of morbidity and mortality in both term and preterm infants. Lower mortality rates are related to timely diagnostic evaluation and prompt initiation of empiric antibiotic therapy. Blood culture, the gold standard examination for sepsis, has several limitations for early diagnosis, so sepsis biomarkers could play an important role in this regard. This study aimed to compare the value of the two biomarkers presepsin and procalcitonin in the early diagnosis of neonatal sepsis. This was a prospective cross-sectional study performed in Saiful Anwar General Hospital, Malang, Indonesia, in 51 neonates who fulfilled the criteria of systemic inflammatory response syndrome (SIRS), with blood culture as the diagnostic gold standard for sepsis. At receiver operating characteristic (ROC) curve analysis, using a presepsin cutoff of 706.5 pg/mL, the obtained diagnostic indices were: sensitivity = 85.7%, specificity = 68.8%, positive predictive value = 85.7%, negative predictive value = 68.8%, positive likelihood ratio = 2.75, negative likelihood ratio = 0.21, and accuracy = 80.4%. On the other hand, with a procalcitonin cutoff value of 161.33 pg/mL the obtained indices were: sensitivity = 68.6%, specificity = 62.5%, positive predictive value = 80%, negative predictive value = 47.6%, positive likelihood ratio = 1.83, negative likelihood ratio = 0.5, and accuracy = 66.7%. In the early diagnosis of neonatal sepsis, compared with procalcitonin, presepsin seems to provide better early diagnostic value, with consequent faster therapeutic decision making and a possible positive impact on the outcome of neonates.
The effect of rare variants on inflation of the test statistics in case-control analyses.
Pirie, Ailith; Wood, Angela; Lush, Michael; Tyrer, Jonathan; Pharoah, Paul D P
2015-02-20
The detection of bias due to cryptic population structure is an important step in the evaluation of findings of genetic association studies. The standard method of measuring this bias in a genetic association study is to compare the observed median association test statistic to the expected median test statistic. This ratio is inflated in the presence of cryptic population structure. However, inflation may also be caused by the properties of the association test itself particularly in the analysis of rare variants. We compared the properties of the three most commonly used association tests: the likelihood ratio test, the Wald test and the score test when testing rare variants for association using simulated data. We found evidence of inflation in the median test statistics of the likelihood ratio and score tests for tests of variants with less than 20 heterozygotes across the sample, regardless of the total sample size. The test statistics for the Wald test were under-inflated at the median for variants below the same minor allele frequency. In a genetic association study, if a substantial proportion of the genetic variants tested have rare minor allele frequencies, the properties of the association test may mask the presence or absence of bias due to population structure. The use of either the likelihood ratio test or the score test is likely to lead to inflation in the median test statistic in the absence of population structure. In contrast, the use of the Wald test is likely to result in under-inflation of the median test statistic which may mask the presence of population structure.
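The inflation measure discussed above is usually summarized as the genomic inflation factor: the observed median of the 1-df association statistics divided by the expected median of a chi-square distribution with 1 degree of freedom (about 0.455). A minimal Python sketch with simulated null statistics (purely illustrative, not the study's data):

import numpy as np
from scipy.stats import chi2

def inflation_factor(chisq_stats):
    # Observed median statistic over the expected median of chi-square with 1 df.
    return np.median(chisq_stats) / chi2.ppf(0.5, df=1)

rng = np.random.default_rng(0)
null_stats = rng.chisquare(df=1, size=10_000)   # well-behaved null statistics
print(inflation_factor(null_stats))             # close to 1.0

As the abstract notes, for rare variants a median ratio above or below 1 may reflect the small-sample behavior of the test itself rather than cryptic population structure.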
Environmental, Spatial, and Sociodemographic Factors Associated with Nonfatal Injuries in Indonesia.
Irianti, Sri; Prasetyoputra, Puguh
2017-01-01
Background. The determinants of injuries and their reoccurrence in Indonesia are not well understood, despite their importance in the prevention of injuries. Therefore, this study seeks to investigate the environmental, spatial, and sociodemographic factors associated with the reoccurrence of injuries among Indonesian people. Methods. Data from the 2013 round of the Indonesia Baseline Health Research (IBHR 2013) were analysed using a two-part hurdle regression model. A logit regression model was chosen for the zero-hurdle part, while a zero-truncated negative binomial regression model was selected for the counts part. The odds ratio (OR) and incidence rate ratio (IRR) were the respective measures of association. Results. The results suggest that living in a household with a distant drinking water source, residing in slum areas, residing in Eastern Indonesia, having low educational attainment, being male, and being poorer are positively related to the likelihood of experiencing injury. Moreover, being a farmer or fisherman, having low educational attainment, and being male are positively associated with the frequency of injuries. Conclusion. This study would be useful for prioritising injury prevention programs in Indonesia based on environmental, spatial, and sociodemographic characteristics.
Predictive model for risk of cesarean section in pregnant women after induction of labor.
Hernández-Martínez, Antonio; Pascual-Pedreño, Ana I; Baño-Garnés, Ana B; Melero-Jiménez, María R; Tenías-Burillo, José M; Molina-Alarcón, Milagros
2016-03-01
To develop a predictive model for risk of cesarean section in pregnant women after induction of labor. A retrospective cohort study was conducted of 861 induced labors during 2009, 2010, and 2011 at Hospital "La Mancha-Centro" in Alcázar de San Juan, Spain. Multivariate analysis was used with binary logistic regression and areas under the ROC curves to determine predictive ability. Two predictive models were created: model A predicts the outcome at the time the woman is admitted to the hospital (before the decision on the method of induction); and model B predicts the outcome at the time the woman is definitively admitted to the labor room. The predictive factors in the final model were: maternal height, body mass index, nulliparity, Bishop score, gestational age, macrosomia, gender of fetus, and the gynecologist's overall cesarean section rate. The predictive ability of model A was 0.77 [95% confidence interval (CI) 0.73-0.80] and model B was 0.79 (95% CI 0.76-0.83). The predictive ability for pregnant women with previous cesarean section with model A was 0.79 (95% CI 0.64-0.94) and with model B was 0.80 (95% CI 0.64-0.96). For a probability of estimated cesarean section ≥80%, models A and B presented a positive likelihood ratio (+LR) for cesarean section of 22 and 20, respectively. Also, for a likelihood of estimated cesarean section ≤10%, models A and B presented a +LR for vaginal delivery of 13 and 6, respectively. These predictive models have a good discriminative ability, both overall and for all subgroups studied. This tool can be useful in clinical practice, especially for pregnant women with previous cesarean section and diabetes.
Ermertcan, Aylin Türel; Oztürk, Ferdi; Gençoğlan, Gülsüm; Eskiizmir, Görkem; Temiz, Peyker; Horasan, Gönül Dinç
2011-03-01
The precision of clinical diagnosis of skin tumors is not commonly measured and, therefore, very little is known about the diagnostic ability of clinicians. This study aimed to compare clinical and histopathologic diagnoses of nonmelanoma skin cancers with regard to sensitivity, predictive values, pretest-posttest probabilities, and likelihood ratios. Two hundred nineteen patients with 241 nonmelanoma skin cancers were enrolled in this study. Of these patients, 49.4% were female and 50.6% were male. The mean age ± standard deviation (SD) was 63.66 ± 16.44 years for the female patients and 64.77 ± 14.88 years for the male patients. The mean duration of the lesions was 20.90 ± 32.95 months. One hundred forty-eight (61.5%) of the lesions were diagnosed as basal cell carcinoma (BCC) and 93 (38.5%) were diagnosed as squamous cell carcinoma (SCC) histopathologically. Sensitivity, positive predictive value, and posttest probability were calculated as 75.96%, 87.77%, and 87.78% for BCC and 70.37%, 37.25%, and 37.20% for SCC, respectively. The correlation between clinical and histopathologic diagnoses was found to be higher in BCC. Knowledge of sensitivity, predictive values, likelihood ratios, and posttest probabilities may have implications for the management of skin cancers. To prevent unnecessary surgeries and achieve high diagnostic accuracies, multidisciplinary approaches are recommended.
Analysis of case-parent trios at a locus with a deletion allele: association of GSTM1 with autism.
Buyske, Steven; Williams, Tanishia A; Mars, Audrey E; Stenroos, Edward S; Ming, Sue X; Wang, Rong; Sreenath, Madhura; Factura, Marivic F; Reddy, Chitra; Lambert, George H; Johnson, William G
2006-02-10
Certain loci on the human genome, such as glutathione S-transferase M1 (GSTM1), do not permit heterozygotes to be reliably determined by commonly used methods. Association of such a locus with a disease is therefore generally tested with a case-control design. When subjects have already been ascertained in a case-parent design however, the question arises as to whether the data can still be used to test disease association at such a locus. A likelihood ratio test was constructed that can be used with a case-parents design but has somewhat less power than a Pearson's chi-squared test that uses a case-control design. The test is illustrated on a novel dataset showing a genotype relative risk near 2 for the homozygous GSTM1 deletion genotype and autism. Although the case-control design will remain the mainstay for a locus with a deletion, the likelihood ratio test will be useful for such a locus analyzed as part of a larger case-parent study design. The likelihood ratio test has the advantage that it can incorporate complete and incomplete case-parent trios as well as independent cases and controls. Both analyses support (p = 0.046 for the proposed test, p = 0.028 for the case-control analysis) an association of the homozygous GSTM1 deletion genotype with autism.
The Diagnostic Accuracy of Special Tests for Rotator Cuff Tear: The ROW Cohort Study
Jain, Nitin B.; Luz, Jennifer; Higgins, Laurence D.; Dong, Yan; Warner, Jon J.P.; Matzkin, Elizabeth; Katz, Jeffrey N.
2016-01-01
Objective The aim was to assess diagnostic accuracy of 15 shoulder special tests for rotator cuff tears. Design From 02/2011 to 12/2012, 208 participants with shoulder pain were recruited in a cohort study. Results Among tests for supraspinatus tears, Jobe’s test had a sensitivity of 88% (95% CI=80% to 96%), specificity of 62% (95% CI=53% to 71%), and likelihood ratio of 2.30 (95% CI=1.79 to 2.95). The full can test had a sensitivity of 70% (95% CI=59% to 82%) and a specificity of 81% (95% CI=74% to 88%). Among tests for infraspinatus tears, external rotation lag signs at 0° had a specificity of 98% (95% CI=96% to 100%) and a likelihood ratio of 6.06 (95% CI=1.30 to 28.33), and the Hornblower’s sign had a specificity of 96% (95% CI=93% to 100%) and likelihood ratio of 4.81 (95% CI=1.60 to 14.49). Conclusions Jobe’s test and full can test had high sensitivity and specificity for supraspinatus tears and Hornblower’s sign performed well for infraspinatus tears. In general, special tests described for subscapularis tears have high specificity but low sensitivity. These data can be used in clinical practice to diagnose rotator cuff tears and may reduce the reliance on expensive imaging. PMID:27386812
David, Ingrid; Bouvier, Frédéric; Ricard, Edmond; Ruesche, Julien; Weisbecker, Jean-Louis
2013-09-30
The pre-weaning growth of lambs, an important component of meat production, depends on maternal and direct effects. These effects cannot be observed directly and models used to study pre-weaning growth assume that they are additive. However, it is reasonable to suggest that the influence of direct effects on growth may differ depending on the value of maternal effects i.e. an interaction may exist between the two components. To test this hypothesis, an experiment was carried out in Romane sheep in order to obtain observations of maternal phenotypic effects (milk yield and milk quality) and pre-weaning growth of the lambs. The experiment consisted of mating ewes that had markedly different maternal genetic effects with rams that contributed very different genetic effects in four replicates of a 3 × 2 factorial plan. Milk yield was measured using the lamb suckling weight differential technique and milk composition (fat and protein contents) was determined by infrared spectroscopy at 15, 21 and 35 days after lambing. Lambs were weighed at birth and then at 15, 21 and 35 days. An interaction between genotype (of the lamb) and environment (milk yield and quality) for average daily gain was tested using a restricted likelihood ratio test, comparing a linear reaction norm model (interaction model) to a classical additive model (no interaction model). A total of 1284 weights of 442 lambs born from 166 different ewes were analysed. On average, the ewes produced 2.3 ± 0.8 L milk per day. The average protein and fat contents were 50 ± 4 g/L and 60 ± 18 g/L, respectively. The mean 0-35 day average daily gain was 207 ± 46 g/d. Results of the restricted likelihood ratio tests did not highlight any significant interactions between the genotype of the lambs and milk production of the ewe. Our results support the hypothesis of additivity of maternal and direct effects on growth that is currently applied in genetic evaluation models.
Multiple robustness in factorized likelihood models.
Molina, J; Rotnitzky, A; Sued, M; Robins, J M
2017-09-01
We consider inference under a nonparametric or semiparametric model with likelihood that factorizes as the product of two or more variation-independent factors. We are interested in a finite-dimensional parameter that depends on only one of the likelihood factors and whose estimation requires the auxiliary estimation of one or several nuisance functions. We investigate general structures conducive to the construction of so-called multiply robust estimating functions, whose computation requires postulating several dimension-reducing models but which have mean zero at the true parameter value provided one of these models is correct.
A quantitative trait locus mixture model that avoids spurious LOD score peaks.
Feenstra, Bjarke; Skovgaard, Ib M
2004-01-01
In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544
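A minimal Python sketch of the standard interval-mapping LOD score that the abstract takes as its starting point (the data, genotype probabilities, and a direct optimizer in place of the usual EM updates are illustrative assumptions; this is the conventional mixture LOD, not the authors' modified model):

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def lod_score(y, p_geno):
    # Mixture alternative: y_i ~ p_i*N(mu1, sd) + (1 - p_i)*N(mu2, sd); null: single normal.
    def neg_ll(params):
        mu1, mu2, log_sd = params
        sd = np.exp(log_sd)
        dens = p_geno * norm.pdf(y, mu1, sd) + (1.0 - p_geno) * norm.pdf(y, mu2, sd)
        return -np.sum(np.log(dens))
    start = [np.mean(y) - 0.5, np.mean(y) + 0.5, np.log(np.std(y))]
    ll_mix = -minimize(neg_ll, start, method="Nelder-Mead").fun
    ll_null = np.sum(norm.logpdf(y, np.mean(y), np.std(y)))   # MLE single-normal fit
    return (ll_mix - ll_null) / np.log(10.0)                  # LOD = log10 likelihood ratio

rng = np.random.default_rng(0)
p_geno = rng.uniform(0.0, 1.0, 200)   # genotype probabilities given flanking markers
geno = rng.uniform(size=200) < p_geno
y = np.where(geno, rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 200))
print(lod_score(y, p_geno))

When the genotype probabilities are uninformative (values near 0.5 throughout) and the phenotype distribution is non-normal, this conventional LOD can exhibit the spurious peaks the authors set out to eliminate.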
von Oertzen, Timo; Brandmaier, Andreas M
2013-06-01
Structural equation models have become a broadly applied data-analytic framework. Among them, latent growth curve models have become a standard method in longitudinal research. However, researchers often rely solely on rules of thumb about statistical power in their study designs. The theory of power equivalence provides an analytical answer to the question of how design factors, for example, the number of observed indicators and the number of time points assessed in repeated measures, trade off against each other while holding the power for likelihood-ratio tests on the latent structure constant. In this article, we present applications of power-equivalent transformations on a model with data from a previously published study on cognitive aging, and highlight consequences of participant attrition on power. PsycINFO Database Record (c) 2013 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Teeples, Ronald; Glyer, David
1987-05-01
Both policy and technical analysis of water delivery systems have been based on cost functions that are inconsistent with or are incomplete representations of the neoclassical production functions of economics. We present a full-featured production function model of water delivery which can be estimated from a multiproduct, dual cost function. The model features implicit prices for own-water inputs and is implemented as a jointly estimated system of input share equations and a translog cost function. Likelihood ratio tests are performed showing that a minimally constrained, full-featured production function is a necessary specification of the water delivery operations in our sample. This, plus the model's highly efficient and economically correct parameter estimates, confirms the usefulness of a production function approach to modeling the economic activities of water delivery systems.
Golden, Sean K; Harringa, John B; Pickhardt, Perry J; Ebinger, Alexander; Svenson, James E; Zhao, Ying-Qi; Li, Zhanhai; Westergaard, Ryan P; Ehlenbach, William J; Repplinger, Michael D
2016-07-01
To determine whether clinical scoring systems or physician gestalt can obviate the need for computed tomography (CT) in patients with possible appendicitis. Prospective, observational study of patients with abdominal pain at an academic emergency department (ED) from February 2012 to February 2014. Patients over 11 years old who had a CT ordered for possible appendicitis were eligible. All parameters needed to calculate the scores were recorded on standardised forms prior to CT. Physicians also estimated the likelihood of appendicitis. Test characteristics were calculated using clinical follow-up as the reference standard. Receiver operating characteristic curves were drawn. Of the 287 patients (mean age (range), 31 (12-88) years; 60% women), the prevalence of appendicitis was 33%. The Alvarado score had a positive likelihood ratio (LR(+)) (95% CI) of 2.2 (1.7 to 3) and a negative likelihood ratio (LR(-)) of 0.6 (0.4 to 0.7). The modified Alvarado score (MAS) had LR(+) 2.4 (1.6 to 3.4) and LR(-) 0.7 (0.6 to 0.8). The Raja Isteri Pengiran Anak Saleha Appendicitis (RIPASA) score had LR(+) 1.3 (1.1 to 1.5) and LR(-) 0.5 (0.4 to 0.8). Physician-determined likelihood of appendicitis had LR(+) 1.3 (1.2 to 1.5) and LR(-) 0.3 (0.2 to 0.6). When combined with physician likelihoods, LR(+) and LR(-) was 3.67 and 0.48 (Alvarado), 2.33 and 0.45 (RIPASA), and 3.87 and 0.47 (MAS). The area under the curve was highest for physician-determined likelihood (0.72), but was not statistically significantly different from the clinical scores (RIPASA 0.67, Alvarado 0.72, MAS 0.7). Clinical scoring systems performed equally well as physician gestalt in predicting appendicitis. These scores do not obviate the need for imaging for possible appendicitis when a physician deems it necessary. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Zhou, Q; Ye, Z J; Su, Y; Zhang, J C; Shi, H Z
2010-08-01
N-terminal pro-brain natriuretic peptide (NT-proBNP) is a biomarker useful in diagnosis of pleural effusion due to heart failure. Thus far, its overall diagnostic accuracy has not been systematically reviewed. The aim of the present meta-analysis was to establish the overall diagnostic accuracy of the measurement of pleural NT-proBNP for identifying pleural effusion due to heart failure. After a systematic review of English-language studies, sensitivity, specificity, and other measures of accuracy of NT-proBNP concentrations in pleural fluid in the diagnosis of pleural effusion resulting from heart failure were pooled using fixed-effects models. Summary receiver operating characteristic curves were used to summarise overall test performance. Eight publications met the inclusion criteria. The summary estimates for pleural NT-proBNP in the diagnosis of pleural effusion attributable to heart failure were: sensitivity 0.95 (95% CI 0.92 to 0.97), specificity 0.94 (0.92 to 0.96), positive likelihood ratio 14.12 (10.23 to 19.51), negative likelihood ratio 0.06 (0.04 to 0.09) and diagnostic OR 213.87 (122.50 to 373.40). NT-proBNP levels in pleural fluid showed a high diagnostic accuracy and may help accurately differentiate cardiac from non-cardiac conditions in patients presenting with pleural effusion.
Thermodynamics versus Kinetics Dichotomy in the Linear Self-Assembly of Mixed Nanoblocks.
Ruiz, L; Keten, S
2014-06-05
We report classical and replica exchange molecular dynamics simulations that establish the mechanisms underpinning the growth kinetics of a binary mix of nanorings that form striped nanotubes via self-assembly. A step-growth coalescence model captures the growth process of the nanotubes, which suggests that high aspect ratio nanostructures can grow by obeying the universal laws of self-similar coarsening, contrary to systems that grow through nucleation and elongation. Notably, striped patterns do not depend on specific growth mechanisms, but are governed by tempering conditions that control the likelihood of depropagation and fragmentation.
Modified Multiple Model Adaptive Estimation (M3AE) for Simultaneous Parameter and State Estimation
1998-03-01
[Table of contents excerpt; front matter and page numbers omitted] Chapter 1: Introduction; 1.1 Overview; 1.2 Background; 1.2.1 The Chi-Square Test; 1.2.2 Generalized Likelihood Ratio (GLR) Testing; 1.2.3 Multiple... M3AE Covariance Analysis; 4.1.3 Simulations and Performance Analysis; 4.1.3.1 Test Case 1: aT = 32.0; 4.1.3.2 Test Case 2: aT = 37.89, and
Distribution of Model-based Multipoint Heterogeneity Lod Scores
Xing, Chao; Morris, Nathan; Xing, Guan
2011-01-01
The distribution of two-point heterogeneity lod scores (HLOD) has been intensively investigated because the conventional χ² approximation to the likelihood ratio test is not directly applicable. However, there was no study investigating the distribution of the multipoint HLOD despite its wide application. Here we want to point out that, compared with the two-point HLOD, the multipoint HLOD essentially tests for homogeneity given linkage and follows a relatively simple limiting distribution (1/2)χ₀² + (1/2)χ₁², which can be obtained by established statistical theory. We further examine the theoretical result by simulation studies. PMID:21104892
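Converting a multipoint HLOD into a p-value under the limiting distribution above is straightforward: the χ₀² component is a point mass at zero, so only the χ₁² component contributes to the tail. A small Python sketch (the HLOD value used is illustrative):

import numpy as np
from scipy.stats import chi2

def hlod_pvalue(hlod):
    # Likelihood-ratio statistic corresponding to a LOD-scale score: 2*ln(10)*HLOD.
    stat = 2.0 * np.log(10.0) * hlod
    if stat <= 0.0:
        return 1.0
    # Tail of the mixture (1/2)*chi2_0 + (1/2)*chi2_1: only the 1-df part has mass above zero.
    return 0.5 * chi2.sf(stat, df=1)

print(hlod_pvalue(1.5))   # p-value for a multipoint HLOD of 1.5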
Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding
NASA Technical Reports Server (NTRS)
Mahmoud, Saad; Hi, Jianjun
2012-01-01
The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between signal amplitude and noise variance. Accurately estimating this ratio has shown as much as 0.6 dB decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. The Pilot-Guided estimation method has shown that the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided method due to the gain control circuitry, but does not have the real-time computation complexity of the Blind estimation method. Each of these methods can be used to provide an accurate estimation of the combining ratio, and the final selection of the estimation method depends on other design constraints.
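A minimal Python sketch of the Pilot-Guided estimates described above (BPSK symbols, ASM length, and channel values are made-up numbers; the combining ratio is formed as the amplitude-to-noise-variance ratio, following the abstract's definition):

import numpy as np

rng = np.random.default_rng(0)
asm = rng.choice([-1.0, 1.0], size=64)                 # known attached sync marker symbols
amp_true, sigma_true = 0.8, 1.1                        # channel amplitude and noise std. dev.
received = amp_true * asm + rng.normal(0.0, sigma_true, asm.size)

# Maximum likelihood estimates from the pilot (ASM) portion of the frame:
amp_hat = np.mean(received * asm)                      # mean inner product with the known sequence
var_hat = np.mean(received ** 2) - amp_hat ** 2        # mean squared sample minus squared amplitude

combining_ratio = amp_hat / var_hat                    # scale factor for the decoder's soft inputs
print(amp_hat, var_hat, combining_ratio)

Averaging these estimates over several frames' worth of ASMs reduces their variance, which is the latency cost the abstract mentions.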
Estimating Function Approaches for Spatial Point Processes
NASA Astrophysics Data System (ADS)
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on the theory of asymptotically optimal estimating functions, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. However, the original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometrically regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter, H. Third, we study the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to more general setups than the original quasi-likelihood method.
Bayesian logistic regression approaches to predict incorrect DRG assignment.
Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural
2018-05-07
Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG-based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and with classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood and by 34% compared to random classification. We found that the original DRG, the coder, and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
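A minimal Python sketch in the spirit of the approach described above: maximum a posteriori estimation for a logistic regression with independent, weakly informative Normal(0, 2.5²) priors on the coefficients. The features, sample size, and prior scale are assumptions, and MAP optimization stands in for the full posterior computation a study like this would use.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_log_posterior(beta, X, y, prior_sd=2.5):
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))    # Bernoulli log-likelihood, logit link
    logprior = -0.5 * np.sum((beta / prior_sd) ** 2)     # weakly informative Normal(0, prior_sd^2) priors
    return -(loglik + logprior)

rng = np.random.default_rng(1)
n, p = 2000, 4                                           # hypothetical audited episodes and features
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([-2.0, 0.8, -0.4, 0.3])
y = rng.binomial(1, expit(X @ beta_true))                # 1 = DRG revision required

fit = minimize(neg_log_posterior, x0=np.zeros(p), args=(X, y), method="BFGS")
print("MAP coefficients:", fit.x)
print("predicted revision probabilities:", expit(X[:5] @ fit.x))

The prior penalty stabilizes coefficients for sparse categories (for example, coders with few audited episodes), which is the parameter-stability gain reported above.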
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples-thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities-to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
Personality patterns and Smoking behavior among students in Tabriz, Iran
Fakharri, Ali; Jahani, Ali; Sadeghi-Bazargani, Homayoun; Farahbakhsh, Mostafa; Asl, Asghar Mohammadpour
2017-01-01
Introduction Psychological factors have always been considered for their role in risk-taking behavior such as substance abuse, risky driving and smoking. The aim of this study was to determine the association between smoking behavior and potential personality patterns among high school students in Tabriz, Iran. Methods Through multistage sampling in a cross-sectional study, 1000 students were enrolled to represent the final grade high school student population of Tabriz, Iran in 2013. Personality patterns, smoking status and background information were collected through standard questionnaires together with the Millon Clinical Multiaxial Inventory-III (MCMI-III), which assesses fourteen personality patterns and ten clinical syndromes. ANOVA and Kruskal-Wallis tests were used to compare numeric scales among the study participants with respect to their smoking status. The Stata version 13 statistical software package was used to analyze the data. Multivariate logistic regression was used to predict the likelihood of smoking by personality status. Results Two logistic models were developed, in both of which male sex was identified as a determinant of regular smoking (1st model) and ever-smoking (2nd model). Depressive personality increased the likelihood of being a regular smoker by 2.8 times (OR=2.8, 95% CI: 1.3–6.1). The second personality disorder included in the model was sadistic personality, with an odds ratio of 7.9 (95% CI: 1.2–53). Histrionic personality increased the likelihood of experiencing smoking by 2.2 times (OR=2.2, 95% CI: 1.6–3.1), followed by borderline personality (OR=2.8, 95% CI: 0.97–8.1). Conclusion Histrionic and depressive personalities could be considered strong correlates of smoking, followed by borderline and sadistic personalities. A causal relationship cannot be assumed unless well-controlled longitudinal studies reach the same findings using psychiatric interviews. PMID:28461869
Combining evidence using likelihood ratios in writer verification
NASA Astrophysics Data System (ADS)
Srihari, Sargur; Kovalenko, Dimitry; Tang, Yi; Ball, Gregory
2013-01-01
Forensic identification is the task of determining whether or not observed evidence arose from a known source. It involves determining a likelihood ratio (LR) - the ratio of the joint probability of the evidence and source under the identification hypothesis (that the evidence came from the source) and under the exclusion hypothesis (that the evidence did not arise from the source). In LR-based decision methods, particularly handwriting comparison, a variable number of pieces of input evidence is used. A decision based on many pieces of evidence can result in nearly the same LR as one based on few pieces of evidence. We consider methods for distinguishing between such situations. One of these is to provide confidence intervals together with the decisions, and another is to combine the inputs using weights. We propose a new method that generalizes the Bayesian approach and uses an explicitly defined discount function. Empirical evaluation with several data sets, including synthetically generated ones and handwriting comparison, shows greater flexibility of the proposed method.
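A toy sketch of the basic evidence-combination step the paper generalizes: each comparison score is scored under a same-source and a different-source model, and under a naive independence assumption the log-LRs simply add. The Gaussian score models and values below are invented, and the paper's discount function is not reproduced.

```python
# Toy illustration of combining several pieces of evidence with likelihood ratios.
import numpy as np
from scipy.stats import norm

same_source = norm(loc=0.0, scale=1.0)        # within-source score model (assumed)
diff_source = norm(loc=3.0, scale=1.5)        # between-source score model (assumed)

evidence = np.array([0.4, -0.2, 0.9])         # observed comparison scores

log_lrs = same_source.logpdf(evidence) - diff_source.logpdf(evidence)
print("per-evidence log LRs:", np.round(log_lrs, 2))
print("combined log LR (independence):", round(log_lrs.sum(), 2))
```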
THE CONSEQUENCES OF INDIA’S MALE SURPLUS FOR WOMEN’S PARTNERING AND SEXUAL EXPERIENCES*
Trent, Katherine; South, Scott J.; Bose, Sunita
2013-01-01
Data from the third wave of India’s 2005–2006 National Family and Health Survey are used to examine the influence of the community-level sex ratio on several dimensions of women’s partnering behavior and sexual experiences. Multi-level logistic regression models that control for individual demographic attributes and community-level characteristics reveal that the local male-to-female sex ratio is positively and significantly associated with the likelihood that women marry prior to age 16 and have experienced forced sex. These associations are modest in magnitude. However, no significant associations are observed between the sex ratio and whether women have had two or more lifetime sexual partners or women’s risk of contracting a sexually-transmitted disease. Birth cohort, education, religion, caste, region, urban residence, and several community-level measures of women’s status also emerge as significant predictors of Indian women’s partnering and sexual experiences. The implications of our results for India’s growing surplus of adult men are discussed. PMID:26085706
Yiu, Sean; Tom, Brian Dm
2017-01-01
Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicate model fitting. Thus, only non-standard, computationally intensive procedures based on simulating the marginal likelihood have been proposed so far. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and for when it is of interest to directly model the overall marginal mean. The methodology is applied to a psoriatic arthritis data set concerning functional disability.
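The computational step emphasized above, evaluating a moderately high-dimensional multivariate normal CDF directly rather than simulating it, can be illustrated with SciPy. The equicorrelated covariance and dimension below are arbitrary and are not taken from the paper's two-part model.

```python
# Evaluating a 10-dimensional multivariate normal CDF directly with SciPy,
# illustrating the key computational step; covariance is an arbitrary example.
import numpy as np
from scipy.stats import multivariate_normal

dim, rho = 10, 0.5
cov = np.full((dim, dim), rho) + (1 - rho) * np.eye(dim)   # equicorrelated covariance
upper = np.zeros(dim)                                      # integrate up to 0 in each dimension

prob = multivariate_normal(mean=np.zeros(dim), cov=cov).cdf(upper)
print(f"P(Z_1 <= 0, ..., Z_{dim} <= 0) = {prob:.4f}")
```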
Statistically Qualified Neuro-Analytic System and Method for Process Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
The effect of lossy image compression on image classification
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
PBOOST: a GPU-based tool for parallel permutation tests in genome-wide association studies.
Yang, Guangyuan; Jiang, Wei; Yang, Qiang; Yu, Weichuan
2015-05-01
The importance of testing associations allowing for interactions has been demonstrated by Marchini et al. (2005). A fast method detecting associations allowing for interactions has been proposed by Wan et al. (2010a). The method is based on a likelihood ratio test with the assumption that the statistic follows the χ² distribution. Many single nucleotide polymorphism (SNP) pairs with significant associations allowing for interactions have been detected using their method. However, the assumption of the χ² test requires the expected values in each cell of the contingency table to be at least five. This assumption is violated in some identified SNP pairs. In this case, the likelihood ratio test may not be applicable any more. The permutation test is an ideal approach to checking the P-values calculated in the likelihood ratio test because of its non-parametric nature. The P-values of SNP pairs having significant associations with disease are always extremely small. Thus, we need a huge number of permutations to achieve correspondingly high resolution for the P-values. In order to investigate whether the P-values from likelihood ratio tests are reliable, a fast permutation tool to accomplish a large number of permutations is desirable. We developed a permutation tool named PBOOST. It is GPU-based and provides highly reliable P-value estimation. By using simulation data, we found that the P-values from likelihood ratio tests have a relative error of >100% when 50% of the cells in the contingency table have an expected count less than five or when there is a zero expected count in any of the contingency table cells. In terms of speed, PBOOST completed 10⁷ permutations for a single SNP pair from the Wellcome Trust Case Control Consortium (WTCCC) genome data (Wellcome Trust Case Control Consortium, 2007) within 1 min on a single Nvidia Tesla M2090 device, while it took 60 min on a single CPU (Intel Xeon E5-2650) to finish the same task. More importantly, when simultaneously testing 256 SNP pairs for 10⁷ permutations, our tool took only 5 min, while the CPU program took 10 h. By permuting on a GPU cluster consisting of 40 nodes, we completed 10¹² permutations for all 280 SNP pairs reported with P-values smaller than 1.6 × 10⁻¹² in the WTCCC datasets in 1 week. The source code and sample data are available at http://bioinformatics.ust.hk/PBOOST.zip. gyang@ust.hk; eeyu@ust.hk Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
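PBOOST itself performs these permutations massively in parallel on GPUs; a small CPU-side sketch of the underlying idea, assuming a single simulated SNP coded 0/1/2 against a binary phenotype and a likelihood-ratio (G) statistic with a label-permutation null, is shown below.

```python
# CPU-side sketch of a permutation test for a likelihood-ratio (G) statistic on
# a genotype-by-phenotype contingency table; data are simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 400
genotype = rng.integers(0, 3, size=n)          # SNP coded 0/1/2
phenotype = rng.integers(0, 2, size=n)         # case/control labels

def g_statistic(geno, pheno):
    """Likelihood-ratio chi-square (G) statistic, 2 * sum O * ln(O / E)."""
    table = np.zeros((3, 2))
    np.add.at(table, (geno, pheno), 1.0)
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    mask = table > 0                            # 0 * ln(0/E) contributes nothing
    return 2.0 * np.sum(table[mask] * np.log(table[mask] / expected[mask]))

observed = g_statistic(genotype, phenotype)
n_perm = 10_000
null = np.array([g_statistic(genotype, rng.permutation(phenotype))
                 for _ in range(n_perm)])
p_value = (1 + np.sum(null >= observed)) / (1 + n_perm)
print(f"G = {observed:.2f}, permutation p = {p_value:.4f}")
```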
Genealogical Working Distributions for Bayesian Model Testing with Phylogenetic Uncertainty
Baele, Guy; Lemey, Philippe; Suchard, Marc A.
2016-01-01
Marginal likelihood estimates to compare models using Bayes factors frequently accompany Bayesian phylogenetic inference. Approaches to estimate marginal likelihoods have garnered increased attention over the past decade. In particular, the introduction of path sampling (PS) and stepping-stone sampling (SS) into Bayesian phylogenetics has tremendously improved the accuracy of model selection. These sampling techniques are now used to evaluate complex evolutionary and population genetic models on empirical data sets, but considerable computational demands hamper their widespread adoption. Further, when very diffuse, but proper priors are specified for model parameters, numerical issues complicate the exploration of the priors, a necessary step in marginal likelihood estimation using PS or SS. To avoid such instabilities, generalized SS (GSS) has recently been proposed, introducing the concept of “working distributions” to facilitate—or shorten—the integration process that underlies marginal likelihood estimation. However, the need to fix the tree topology currently limits GSS in a coalescent-based framework. Here, we extend GSS by relaxing the fixed underlying tree topology assumption. To this purpose, we introduce a “working” distribution on the space of genealogies, which enables estimating marginal likelihoods while accommodating phylogenetic uncertainty. We propose two different “working” distributions that help GSS to outperform PS and SS in terms of accuracy when comparing demographic and evolutionary models applied to synthetic data and real-world examples. Further, we show that the use of very diffuse priors can lead to a considerable overestimation in marginal likelihood when using PS and SS, while still retrieving the correct marginal likelihood using both GSS approaches. The methods used in this article are available in BEAST, a powerful user-friendly software package to perform Bayesian evolutionary analyses. PMID:26526428
Log-Linear Models for Gene Association
Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.
2009-01-01
We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032
Application of an Elongated Kelvin Model to Space Shuttle Foams
NASA Technical Reports Server (NTRS)
Sullivan, Roy M.; Ghosn, Louis J.; Lerch, Bradley A.
2008-01-01
Spray-on foam insulation is applied to the exterior of the Space Shuttle's External Tank to limit propellant boil-off and to prevent ice formation. The Space Shuttle foams are rigid closed-cell polyurethane foams. The two foams used most extensively on the Space Shuttle External Tank are BX-265 and NCFI24-124. Since the catastrophic loss of the Space Shuttle Columbia, numerous studies have been conducted to mitigate the likelihood and the severity of foam shedding during the Shuttle's ascent to space. Due to the foaming and rising process, the foam microstructures are elongated in the rise direction. As a result, these two foams exhibit a non-isotropic mechanical behavior. In this paper, a detailed microstructural characterization of the two foams is presented. The key features of the foam cells are summarized and the average cell dimensions in the two foams are compared. Experimental studies to measure the room temperature mechanical response of the two foams in the two principal material directions (parallel to the rise and perpendicular to the rise) are also reported. The measured elastic modulus, proportional limit stress, ultimate tensile stress and the Poisson's ratios for the two foams are compared. The generalized elongated Kelvin foam model previously developed by the authors is reviewed and the equations which result from this model are presented. The resulting equations show that the ratio of the elastic modulus in the rise direction to that in the perpendicular-to-rise direction as well as the ratio of the strengths in the two material directions is only a function of the microstructural dimensions. Using the measured microstructural dimensions and the measured stiffness ratio, the foam tensile strength ratio and Poisson's ratios are predicted for both foams. The predicted tensile strength ratio is in close agreement with the measured strength ratios for both BX-265 and NCFI24-124. The comparison between the predicted Poisson's ratios and the measured values is not as favorable.
Prospective Tests of Southern California Earthquake Forecasts
NASA Astrophysics Data System (ADS)
Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.; Kagan, Y. Y.; Helmstetter, A.; Wiemer, S.; Field, N.
2004-12-01
We are testing earthquake forecast models prospectively using likelihood ratios. Several investigators have developed such models as part of the Southern California Earthquake Center's project called Regional Earthquake Likelihood Models (RELM). Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. Here we describe the testing procedure and present preliminary results. Forecasts are expressed as the yearly rate of earthquakes within pre-specified bins of longitude, latitude, magnitude, and focal mechanism parameters. We test models against each other in pairs, which requires that both forecasts in a pair be defined over the same set of bins. For this reason we specify a standard "menu" of bins and ground rules to guide forecasters in using common descriptions. One menu category includes five-year forecasts of magnitude 5.0 and larger. Contributors will be requested to submit forecasts in the form of a vector of yearly earthquake rates on a 0.1 degree grid at the beginning of the test. Focal mechanism forecasts, when available, are also archived and used in the tests. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.1 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. Tests are based on the log likelihood scores derived from the probability that future earthquakes would occur where they do if a given forecast were true [Kagan and Jackson, J. Geophys. Res.,100, 3,943-3,959, 1995]. For each pair of forecasts, we compute alpha, the probability that the first would be wrongly rejected in favor of the second, and beta, the probability that the second would be wrongly rejected in favor of the first. Computing alpha and beta requires knowing the theoretical distribution of likelihood scores under each hypothesis, which we estimate by simulations. In this scheme, each forecast is given equal status; there is no "null hypothesis" which would be accepted by default. Forecasts and test results will be archived and posted on the RELM web site. Major problems under discussion include how to treat aftershocks, which clearly violate the variable-rate Poissonian hypotheses that we employ, and how to deal with the temporal variations in catalog completeness that follow large earthquakes.
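A compact sketch of the scoring idea described above: each forecast is a vector of expected yearly earthquake counts per bin, the score is the Poisson log-likelihood of the observed bin counts, and two forecasts are compared through the difference of their scores. The rates and counts below are invented, not RELM forecasts.

```python
# Likelihood-based comparison of two gridded earthquake forecasts (toy data).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)
n_bins = 1000
forecast_a = rng.gamma(shape=0.5, scale=0.02, size=n_bins)   # expected counts per bin
forecast_b = rng.gamma(shape=0.5, scale=0.02, size=n_bins)
observed = rng.poisson(forecast_a)                           # pretend forecast A is "true"

def log_likelihood(rates, counts):
    return poisson.logpmf(counts, rates).sum()

score_a = log_likelihood(forecast_a, observed)
score_b = log_likelihood(forecast_b, observed)
print(f"log-likelihood A = {score_a:.1f}, B = {score_b:.1f}, "
      f"log LR (A vs B) = {score_a - score_b:.1f}")
```

In the testing framework described, the distribution of this log-likelihood difference under each forecast, and hence the error rates alpha and beta, would be obtained by simulating many synthetic catalogs from each forecast.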
Statistical modelling of growth using a mixed model with orthogonal polynomials.
Suchocki, T; Szyda, J
2011-02-01
In statistical modelling, the effects of single-nucleotide polymorphisms (SNPs) are often regarded as time-independent. However, for traits recorded repeatedly, it is very interesting to investigate the behaviour of gene effects over time. In the analysis, simulated data from the 13th QTL-MAS Workshop (Wageningen, The Netherlands, April 2009) was used and the major goal was the modelling of genetic effects as time-dependent. For this purpose, a mixed model which describes each effect using the third-order Legendre orthogonal polynomials, in order to account for the correlation between consecutive measurements, is fitted. In this model, SNPs are modelled as fixed, while the environment is modelled as random effects. The maximum likelihood estimates of model parameters are obtained by the expectation-maximisation (EM) algorithm and the significance of the additive SNP effects is based on the likelihood ratio test, with p-values corrected for multiple testing. For each significant SNP, the percentage of the total variance contributed by this SNP is calculated. Moreover, by using a model which simultaneously incorporates effects of all of the SNPs, the prediction of future yields is conducted. As a result, 179 from the total of 453 SNPs covering 16 out of 18 true quantitative trait loci (QTL) were selected. The correlation between predicted and true breeding values was 0.73 for the data set with all SNPs and 0.84 for the data set with selected SNPs. In conclusion, we showed that a longitudinal approach allows for estimating changes of the variance contributed by each SNP over time and demonstrated that, for prediction, the pre-selection of SNPs plays an important role.
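One concrete ingredient of the model above is the Legendre polynomial basis that lets SNP and environmental effects vary over time. A small sketch of constructing that basis with NumPy on a rescaled time axis is shown below; the recording times are arbitrary and the EM fitting of the mixed model itself is not shown.

```python
# Third-order Legendre polynomial basis over a recording period, as used for
# time-dependent (random regression) effects; only the basis construction is shown.
import numpy as np
from numpy.polynomial import legendre

days = np.linspace(0, 100, 11)                                 # arbitrary recording times
t = 2 * (days - days.min()) / (days.max() - days.min()) - 1    # rescale to [-1, 1]

Z = legendre.legvander(t, deg=3)    # columns are P0..P3 evaluated at the rescaled times
print(Z.shape)                      # (11, 4)
print(np.round(Z[:3], 3))
```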
Reconceptualizing Social Influence in Counseling: The Elaboration Likelihood Model.
ERIC Educational Resources Information Center
McNeill, Brian W.; Stoltenberg, Cal D.
1989-01-01
Presents Elaboration Likelihood Model (ELM) of persuasion (a reconceptualization of the social influence process) as alternative model of attitude change. Contends ELM unifies conflicting social psychology results and can potentially account for inconsistent research findings in counseling psychology. Provides guidelines on integrating…
Statistical inference for template aging
NASA Astrophysics Data System (ADS)
Schuckers, Michael E.
2006-04-01
A change in classification error rates for a biometric device is often referred to as template aging. Here we offer two methods for determining whether the effect of time is statistically significant. The first of these is the use of a generalized linear model to determine if these error rates change linearly over time. This approach generalizes previous work assessing the impact of covariates using generalized linear models. The second approach uses likelihood ratio test methodology. The focus here is on statistical methods for estimation, not the underlying cause of the change in error rates over time. These methodologies are applied to data from the National Institute of Standards and Technology Biometric Score Set Release 1. The results of these applications are discussed.
Factors predicting a home death among home palliative care recipients
Ko, Ming-Chung; Huang, Sheng-Jean; Chen, Chu-Chieh; Chang, Yu-Ping; Lien, Hsin-Yi; Lin, Jia-Yi; Woung, Lin-Chung; Chan, Shang-Yih
2017-01-01
Abstract Awareness of factors affecting the place of death could improve communication between healthcare providers and patients and their families regarding patient preferences and the feasibility of dying in the preferred place. This study aimed to evaluate factors predicting home death among home palliative care recipients. This is a population-based study using a nationally representative sample retrieved from the National Health Insurance Research Database. Subjects receiving home palliative care, from 2010 to 2012, were analyzed to evaluate the association between a home death and various characteristics related to illness, the individual, and health care utilization. A multiple logistic regression model was used to assess the independent effect of various characteristics on the likelihood of a home death. The overall rate of a home death for home palliative care recipients was 43.6%. Age; gender; urbanization of the area where the patients lived; illness; the total number of home visits by all health care professionals; the number of home visits by nurses; utilization of nasogastric tube, endotracheal tube, or indwelling urinary catheter; the number of emergency department visits; and admission to an intensive care unit in the previous year were not significantly associated with the risk of a home death. Physician home visits increased the likelihood of a home death. Compared with subjects without physician home visits (31.4%), those with 1 physician home visit (53.0%, adjusted odds ratio [AOR]: 3.23, 95% confidence interval [CI]: 1.93–5.42) and those with ≥2 physician home visits (43.9%, AOR: 2.23, 95% CI: 1.06–4.70) had a higher likelihood of a home death. Compared with subjects hospitalized 0 to 6 times in the previous year, those hospitalized ≥7 times in the previous year (AOR: 0.57, 95% CI: 0.34–0.95) had a lower likelihood of a home death. Among home palliative care recipients, physician home visits increased the likelihood of a home death. Hospitalization ≥7 times in the previous year decreased the likelihood of a home death. PMID:29019887
NASA Astrophysics Data System (ADS)
Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya
2012-05-01
Extreme share returns in Malaysia are studied. The monthly, quarterly, half yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey Fuller (ADF) and Phillips Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test is used to detect the presence of a monotonic trend. Maximum Likelihood Estimation (MLE) is used to estimate the parameters, while L-moments estimates (LMOM) are used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness-of-fit test is used to assess the quality of convergence of the monthly, quarterly, half yearly and yearly maxima to the GEV distribution. Return levels are then estimated for prediction and planning purposes. The results show that the maximum returns for all selection periods are stationary. The Mann-Kendall test indicates the existence of a trend, so non-stationary models are fitted as well. Model 2, in which the location parameter increases with time, is the best for all selection intervals. Sherman's goodness-of-fit test shows that the monthly, quarterly, half yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that yearly maxima are better for convergence to the GEV distribution, especially if longer records are available. The return level (in this study, the return amount) that is expected to be exceeded, on average, once every T time periods starts to appear in the confidence interval at T = 50 for the quarterly, half yearly and yearly maxima.
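A compact sketch of the stationary-versus-trend comparison described above: fit a GEV to block maxima with SciPy, fit a second model whose location parameter increases linearly with time, and compare the two by a likelihood ratio test. The data are simulated block maxima, not Malaysian share returns, and SciPy's `genextreme` shape convention (c = -ξ) is used.

```python
# Stationary GEV fit vs. GEV with a linear trend in the location parameter,
# compared by a likelihood ratio test on simulated block maxima.
import numpy as np
from scipy.stats import genextreme, chi2
from scipy.optimize import minimize

rng = np.random.default_rng(4)
t = np.arange(50, dtype=float)
x = genextreme.rvs(c=-0.1, loc=1.0 + 0.02 * t, scale=0.5, random_state=rng)

# Model 0: stationary GEV fitted by maximum likelihood.
c0, loc0, scale0 = genextreme.fit(x)
ll0 = genextreme.logpdf(x, c0, loc=loc0, scale=scale0).sum()

# Model with trend: location mu(t) = mu0 + mu1 * t, scale kept positive via log.
def nll(params):
    c, mu0, mu1, log_scale = params
    return -genextreme.logpdf(x, c, loc=mu0 + mu1 * t,
                              scale=np.exp(log_scale)).sum()

fit = minimize(nll, x0=[c0, loc0, 0.0, np.log(scale0)], method="Nelder-Mead")
ll1 = -fit.fun

deviance = 2 * (ll1 - ll0)
p_value = chi2.sf(deviance, df=1)        # one extra parameter (the trend)
print(f"deviance = {deviance:.2f}, p = {p_value:.4f}")
```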
A unified framework for group independent component analysis for multi-subject fMRI data
Guo, Ying; Pagnoni, Giuseppe
2008-01-01
Independent component analysis (ICA) is becoming increasingly popular for analyzing functional magnetic resonance imaging (fMRI) data. While ICA has been successfully applied to single-subject analysis, the extension of ICA to group inferences is not straightforward and remains an active topic of research. Current group ICA models, such as the GIFT (Calhoun et al., 2001) and tensor PICA (Beckmann and Smith, 2005), make different assumptions about the underlying structure of the group spatio-temporal processes and are thus estimated using algorithms tailored for the assumed structure, potentially leading to diverging results. To our knowledge, there are currently no methods for assessing the validity of different model structures in real fMRI data and selecting the most appropriate one among various choices. In this paper, we propose a unified framework for estimating and comparing group ICA models with varying spatio-temporal structures. We consider a class of group ICA models that can accommodate different group structures and include existing models, such as the GIFT and tensor PICA, as special cases. We propose a maximum likelihood (ML) approach with a modified Expectation-Maximization (EM) algorithm for the estimation of the proposed class of models. Likelihood ratio tests (LRT) are presented to compare between different group ICA models. The LRT can be used to perform model comparison and selection, to assess the goodness-of-fit of a model in a particular data set, and to test group differences in the fMRI signal time courses between subject subgroups. Simulation studies are conducted to evaluate the performance of the proposed method under varying structures of group spatio-temporal processes. We illustrate our group ICA method using data from an fMRI study that investigates changes in neural processing associated with the regular practice of Zen meditation. PMID:18650105
Marston, Louise; Peacock, Janet L; Yu, Keming; Brocklehurst, Peter; Calvert, Sandra A; Greenough, Anne; Marlow, Neil
2009-07-01
Studies of prematurely born infants contain a relatively large percentage of multiple births, so the resulting data have a hierarchical structure with small clusters of size 1, 2 or 3. Ignoring the clustering may lead to incorrect inferences. The aim of this study was to compare statistical methods which can be used to analyse such data: generalised estimating equations, multilevel models, multiple linear regression and logistic regression. Four datasets which differed in total size and in percentage of multiple births (n = 254, multiple 18%; n = 176, multiple 9%; n = 10 098, multiple 3%; n = 1585, multiple 8%) were analysed. With the continuous outcome, two-level models produced similar results in the larger dataset, while generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) produced divergent estimates using the smaller dataset. For the dichotomous outcome, most methods, except generalised least squares multilevel modelling (ML GH 'xtlogit' in Stata) gave similar odds ratios and 95% confidence intervals within datasets. For the continuous outcome, our results suggest using multilevel modelling. We conclude that generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) should be used with caution when the dataset is small. Where the outcome is dichotomous and there is a relatively large percentage of non-independent data, it is recommended that these are accounted for in analyses using logistic regression with adjusted standard errors or multilevel modelling. If, however, the dataset has a small percentage of clusters greater than size 1 (e.g. a population dataset of children where there are few multiples) there appears to be less need to adjust for clustering.
Tillman, Fred D.; Anning, David W.
2014-01-01
The Colorado River is one of the most important sources of water in the western United States, supplying water to over 35 million people in the U.S. and 3 million people in Mexico. High dissolved-solids loading to the River and tributaries are derived primarily from geologic material deposited in inland seas in the mid-to-late Cretaceous Period, but this loading may be increased by human activities. High dissolved solids in the River causes substantial damages to users, primarily in reduced agricultural crop yields and corrosion. The Colorado River Basin Salinity Control Program was created to manage dissolved-solids loading to the River and has focused primarily on reducing irrigation-related loading from agricultural areas. This work presents a reconnaissance of existing data from sites in the Upper Colorado River Basin (UCRB) in order to highlight areas where suspended-sediment control measures may be useful in reducing dissolved-solids concentrations. Multiple linear regression was used on data from 164 sites in the UCRB to develop dissolved-solids models that include combinations of explanatory variables of suspended sediment, flow, and time. Results from the partial t-test, overall likelihood ratio, and partial likelihood ratio on the models were used to group the sites into categories of strong, moderate, weak, and no-evidence of a relation between suspended-sediment and dissolved-solids concentrations. Results show 68 sites have strong or moderate evidence of a relation, with drainage areas for many of these sites composed of a large percentage of clastic sedimentary rocks. These results could assist water managers in the region in directing field-scale evaluation of suspended-sediment control measures to reduce UCRB dissolved-solids loading.
Tam, Vincent H; Chang, Kai-Tai; Zhou, Jian; Ledesma, Kimberly R; Phe, Kady; Gao, Song; Van Bambeke, Françoise; Sánchez-Díaz, Ana María; Zamorano, Laura; Oliver, Antonio; Cantón, Rafael
2017-05-01
β-Lactams are commonly used for nosocomial infections and resistance to these agents among Gram-negative bacteria is increasing rapidly. Optimized dosing is expected to reduce the likelihood of resistance development during antimicrobial therapy, but the target for clinical dose adjustment is not well established. We examined the likelihood that various dosing exposures would suppress resistance development in an in vitro hollow-fibre infection model. Two strains of Klebsiella pneumoniae and two strains of Pseudomonas aeruginosa (baseline inocula of ∼10⁸ cfu/mL) were examined. Various dosing exposures of cefepime, ceftazidime and meropenem were simulated in the hollow-fibre infection model. Serial samples were obtained to ascertain the pharmacokinetic simulations and viable bacterial burden for up to 120 h. Drug concentrations were determined by a validated LC-MS/MS assay and the simulated exposures were expressed as Cmin/MIC ratios. Resistance development was detected by quantitative culture on drug-supplemented media plates (at 3× the corresponding baseline MIC). The Cmin/MIC breakpoint threshold to prevent bacterial regrowth was identified by classification and regression tree (CART) analysis. For all strains, the bacterial burden declined initially with the simulated exposures, but regrowth was observed in 9 out of 31 experiments. CART analysis revealed that a Cmin/MIC ratio ≥3.8 was significantly associated with regrowth prevention (100% versus 44%, P = 0.001). The development of β-lactam resistance during therapy could be suppressed by an optimized dosing exposure. Validation of the proposed target in a well-designed clinical study is warranted. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Clinical Diagnosis of Bordetella Pertussis Infection: A Systematic Review.
Ebell, Mark H; Marchello, Christian; Callahan, Maria
2017-01-01
Bordetella pertussis (BP) is a common cause of prolonged cough. Our objective was to perform an updated systematic review of the clinical diagnosis of BP without restriction by patient age. We identified prospective cohort studies of patients with cough or suspected pertussis and assessed study quality using QUADAS-2. We performed bivariate meta-analysis to calculate summary estimates of accuracy and created summary receiver operating characteristic curves to explore heterogeneity by vaccination status and age. Of 381 studies initially identified, 22 met our inclusion criteria, of which 14 had a low risk of bias. The overall clinical impression was the most accurate predictor of BP (positive likelihood ratio [LR+], 3.3; negative likelihood ratio [LR-], 0.63). The presence of whooping cough (LR+, 2.1) and posttussive vomiting (LR+, 1.7) somewhat increased the likelihood of BP, whereas the absence of paroxysmal cough (LR-, 0.58) and the absence of sputum (LR-, 0.63) decreased it. Whooping cough and posttussive vomiting have lower sensitivity in adults. Clinical criteria defined by the Centers for Disease Control and Prevention were sensitive (0.90) but nonspecific. Typical signs and symptoms of BP may be more sensitive but less specific in vaccinated patients. The clinician's overall impression was the most accurate way to determine the likelihood of BP infection when a patient initially presented. Clinical decision rules that combine signs, symptoms, and point-of-care tests have not yet been developed or validated. © Copyright 2017 by the American Board of Family Medicine.
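The likelihood ratios reported above translate directly into post-test probabilities via odds. The small helper below shows that arithmetic using the paper's overall-clinical-impression LRs; the 10% pretest probability is illustrative only.

```python
# Converting a pretest probability and a likelihood ratio into a post-test probability.
def post_test_probability(pretest_prob, likelihood_ratio):
    pretest_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

pretest = 0.10  # assumed pretest probability of pertussis, for illustration
print("after positive overall impression:", round(post_test_probability(pretest, 3.3), 3))
print("after negative overall impression:", round(post_test_probability(pretest, 0.63), 3))
```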
Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay
2013-12-01
Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations. © 2013, The International Biometric Society.
The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.
ERIC Educational Resources Information Center
Blackwood, Larry G.; Bradley, Edwin L.
1989-01-01
Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)
[Clinical examination and the Valsalva maneuver in heart failure].
Liniado, Guillermo E; Beck, Martín A; Gimeno, Graciela M; González, Ana L; Cianciulli, Tomás F; Castiello, Gustavo G; Gagliardi, Juan A
2018-01-01
Congestion in heart failure patients with reduced ejection fraction (HFrEF) is relevant and closely linked to the clinical course. Bedside blood pressure measurement during the Valsalva maneuver (Val), added to clinical examination, may improve the assessment of congestion when compared to NT-proBNP levels and left atrial pressure (LAP) estimation by Doppler echocardiography, as surrogate markers of congestion in HFrEF. A clinical examination, LAP estimation and blood tests were performed in 69 ambulatory HFrEF patients with left ventricular ejection fraction ≤ 40% and sinus rhythm. The Framingham Heart Failure Score (HFS) was used to evaluate clinical congestion; Val was classified as normal or abnormal, NT-proBNP was classified as low (< 1000 pg/ml) or high (≥ 1000 pg/ml), and the ratio between Doppler early mitral inflow and tissue diastolic velocity was used to estimate LAP and was classified as low (E/e' < 15) or high (E/e' ≥ 15). A total of 69 patients with HFrEF were included; 27 had an HFS ≥ 2 and 13 of them had high NT-proBNP. An HFS ≥ 2 had a 62% sensitivity, 70% specificity and a positive likelihood ratio of 2.08 (p=0.01) to detect congestion. When Val was added to clinical examination, the presence of an HFS ≥ 2 and abnormal Val showed a 100% sensitivity, 64% specificity and a positive likelihood ratio of 2.8 (p = 0.0004). Compared with LAP, the presence of an HFS ≥ 2 and abnormal Val had 86% sensitivity, 54% specificity and a positive likelihood ratio of 1.86 (p = 0.03). In conclusion, an integrated clinical examination with the addition of the Valsalva maneuver may improve the assessment of congestion in patients with HFrEF.
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE) and a linear pseudo model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. The present research paper, however, introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo model for the nonlinear regression model using multivariate calculus. The linear pseudo model of Edmond Malinvaud [4] is explained in a very different way in this paper. David Pollard et al. used empirical process techniques to study the asymptotics of the LSE (least-squares estimator) for fitting nonlinear regression functions in 2006. In Jae Myung [13] provided a conceptual introduction to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
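The paper's matrix-calculus derivations are not reproduced here; the sketch below is a generic SciPy illustration of a nonlinear least-squares fit and of the linearization idea behind a linear pseudo model (the nonlinear model replaced by its first-order Taylor expansion in the parameters around a current estimate). The exponential model and data are invented.

```python
# Generic illustration of nonlinear least squares and of a linearized
# ("pseudo") model: one Gauss-Newton step built from the Jacobian at theta0.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
x = np.linspace(0, 4, 60)
y = 2.0 * np.exp(0.7 * x) + rng.normal(scale=0.5, size=x.size)

def model(x, a, b):
    return a * np.exp(b * x)

theta_hat, cov = curve_fit(model, x, y, p0=[1.0, 0.5])        # NLSE
print("NLSE:", theta_hat)

theta0 = np.array([1.0, 0.5])                                 # current estimate
J = np.column_stack([np.exp(theta0[1] * x),                   # d model / d a
                     theta0[0] * x * np.exp(theta0[1] * x)])  # d model / d b
residual = y - model(x, *theta0)
step, *_ = np.linalg.lstsq(J, residual, rcond=None)           # linear pseudo-model solve
print("one Gauss-Newton step from theta0:", theta0 + step)
```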
Dhooria, Sahajal; Aggarwal, Ashutosh N; Gupta, Dheeraj; Behera, Digambar; Agarwal, Ritesh
2015-07-01
The use of endoscopic ultrasound with bronchoscope-guided fine-needle aspiration (EUS-B-FNA) has been described in the evaluation of mediastinal lymphadenopathy. Herein, we conduct a meta-analysis to estimate the overall diagnostic yield and safety of EUS-B-FNA combined with endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA), in the diagnosis of mediastinal lymphadenopathy. The PubMed and EmBase databases were searched for studies reporting the outcomes of EUS-B-FNA in diagnosis of mediastinal lymphadenopathy. The study quality was assessed using the QualSyst tool. The yield of EBUS-TBNA alone and the combined procedure (EBUS-TBNA and EUS-B-FNA) were analyzed by calculating the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio for each study, and pooling the study results using a random effects model. Heterogeneity and publication bias were assessed for individual outcomes. The additional diagnostic gain of EUS-B-FNA over EBUS-TBNA was calculated using proportion meta-analysis. Our search yielded 10 studies (1,080 subjects with mediastinal lymphadenopathy). The sensitivity of the combined procedure was significantly higher than EBUS-TBNA alone (91% vs 80%, P = .004), in staging of lung cancer (4 studies, 465 subjects). The additional diagnostic gain of EUS-B-FNA over EBUS-TBNA was 7.6% in the diagnosis of mediastinal adenopathy. No serious complication of EUS-B-FNA procedure was reported. Clinical and statistical heterogeneity was present without any evidence of publication bias. Combining EBUS-TBNA and EUS-B-FNA is an effective and safe method, superior to EBUS-TBNA alone, in the diagnosis of mediastinal lymphadenopathy. Good quality randomized controlled trials are required to confirm the results of this systematic review. Copyright © 2015 by Daedalus Enterprises.
Cadogan, Angela; McNair, Peter; Laslett, Mark; Hing, Wayne; Taylor, Stephen
2013-01-01
Objectives: Rotator cuff tears are a common and disabling complaint. The early diagnosis of medium and large size rotator cuff tears can enhance the prognosis of the patient. The aim of this study was to identify clinical features with the strongest ability to accurately predict the presence of a medium, large or multitendon (MLM) rotator cuff tear in a primary care cohort. Methods: Participants were consecutively recruited from primary health care practices (n = 203). All participants underwent a standardized history and physical examination, followed by a standardized X-ray series and diagnostic ultrasound scan. Clinical features associated with the presence of a MLM rotator cuff tear were identified (P<0.200), a logistic multiple regression model was derived for identifying a MLM rotator cuff tear and thereafter diagnostic accuracy was calculated. Results: A MLM rotator cuff tear was identified in 24 participants (11.8%). Constant pain and a painful arc in abduction were the strongest predictors of a MLM tear (adjusted odds ratio 3.04 and 13.97 respectively). Combinations of ten history and physical examination variables demonstrated highest levels of sensitivity when five or fewer were positive [100%, 95% confidence interval (CI): 0.86–1.00; negative likelihood ratio: 0.00, 95% CI: 0.00–0.28], and highest specificity when eight or more were positive (0.91, 95% CI: 0.86–0.95; positive likelihood ratio 4.66, 95% CI: 2.34–8.74). Discussion: Combinations of patient history and physical examination findings were able to accurately detect the presence of a MLM rotator cuff tear. These findings may aid the primary care clinician in more efficient and accurate identification of rotator cuff tears that may require further investigation or orthopedic consultation. PMID:24421626
McHugh, Matthew D.; Rochman, Monica F.; Sloane, Douglas M.; Berg, Robert A.; Mancini, Mary E.; Nadkarni, Vinay M.; Merchant, Raina M.; Aiken, Linda H.
2015-01-01
Background Although nurses are the most likely first responders to witness an in-hospital cardiac arrest (IHCA) and provide treatment, little research has been undertaken to determine what features of nursing are related to cardiac arrest outcomes. Objectives To determine the association between nurse staffing, nurse work environments, and IHCA survival. Research Design Cross-sectional study of data from: (1) the American Heart Association's Get With The Guidelines-Resuscitation database; (2) the University of Pennsylvania Multi-State Nursing Care and Patient Safety survey; and (3) the American Hospital Association annual survey. Logistic regression models were used to determine the association of the features of nursing and IHCA survival to discharge after adjusting for hospital and patient characteristics. Subjects A total of 11,160 adult patients aged 18 and older between 2005 and 2007 in 75 hospitals in 4 states (Pennsylvania, Florida, California, and New Jersey). Results Each additional patient per nurse on medical-surgical units was associated with a 5% lower likelihood of surviving IHCA to discharge (odds ratio = 0.95; 95% confidence interval, 0.91–0.99). Further, patients cared for in hospitals with poor work environments had a 16% lower likelihood of IHCA survival (odds ratio = 0.84; 95% confidence interval, 0.71–0.99) than patients cared for in hospitals with better work environments. Conclusions Better work environments and decreased patient-to-nurse ratios on medical-surgical units are associated with higher odds of patient survival after an IHCA. These results add to a large body of literature suggesting that outcomes are better when nurses have a more reasonable workload and work in good hospital work environments. Improving nurse working conditions holds promise for improving survival following IHCA. PMID:26783858
Treatment, survival, and costs of laryngeal cancer care in the elderly.
Gourin, Christine G; Dy, Sydney M; Herbert, Robert J; Blackford, Amanda L; Quon, Harry; Forastiere, Arlene A; Eisele, David W; Frick, Kevin D
2014-08-01
To examine associations between treatment and volume with survival and costs in elderly patients with laryngeal squamous cell cancer (SCCA). Retrospective cross-sectional analysis of Surveillance, Epidemiology, and End Results-Medicare data. We evaluated 2,370 patients diagnosed with laryngeal SCCA from 2004 to 2007 using cross-tabulations, multivariate logistic and generalized linear regression modeling, and survival analysis. Chemoradiation was significantly associated with supraglottic tumors (relative risk ratio: 2.6, 95% confidence interval [CI]: 1.7-4.0), additional cancer-directed treatment (odds ratio [OR]: 1.8, 95% CI: 1.2-2.7), and a reduced likelihood of surgical salvage (OR: 0.3, 95% CI: 0.2-0.6). Surgery with postoperative radiation was associated with significantly improved survival (hazard ratio [HR]: 0.7, 95% CI: 0.6-0.9), after controlling for patient and tumor variables including salvage. High-volume care was not associated with survival for nonoperative treatment but was associated with improved survival (HR: 0.7, 95% CI: 0.5-0.8) among surgical patients. Initial treatment and 5-year overall costs for chemoradiation were higher than for all other treatment categories. High-volume care was associated with significantly lower costs of care for surgical patients but was not associated with differences in costs of care for nonoperative treatment. Chemoradiation in elderly patients with laryngeal cancer was associated with increased costs, additional cancer-directed treatment, and a reduced likelihood of surgical salvage. Surgery with postoperative radiation was associated with improved survival in this cohort, and high-volume hospital surgical care was associated with improved survival and lower costs. These findings have implications for improving the quality of laryngeal cancer treatment at a time of both rapid growth in the elderly population and diminishing healthcare resources. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
Shi, Hong-Bin; Yu, Jia-Xing; Yu, Jian-Xiu; Feng, Zheng; Zhang, Chao; Li, Guang-Yong; Zhao, Rui-Ning; Yang, Xiao-Bo
2017-08-03
Previous studies have revealed the importance of microRNAs' (miRNAs) function as biomarkers in diagnosing human bladder cancer (BC). However, the results are discordant. Consequently, the possibility of miRNAs to be BC biomarkers was summarized in this meta-analysis. In this study, the relevant articles were systematically searched from CBM, PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI). The bivariate model was used to calculate the pooled diagnostic parameters and summary receiver operator characteristic (SROC) curve in this meta-analysis, thereby estimating the whole predictive performance. STATA software was used during the whole analysis. Thirty-one studies from 10 articles, including 1556 cases and 1347 controls, were explored in this meta-analysis. In short, the pooled sensitivity, area under the SROC curve, specificity, positive likelihood ratio, diagnostic odds ratio, and negative likelihood ratio were 0.72 (95%CI 0.66-0.76), 0.80 (0.77-0.84), 0.76 (0.71-0.81), 3.0 (2.4-3.8), 8 (5.0-12.0), and 0.37 (0.30-0.46) respectively. Additionally, sub-group and meta-regression analyses revealed that there were significant differences between ethnicity, miRNA profiling, and specimen sub-groups. These results suggested that Asian population-based studies, multiple-miRNA profiling, and blood-based assays might yield a higher diagnostic accuracy than their counterparts. This meta-analysis demonstrated that miRNAs, particularly multiple miRNAs in the blood, might be novel, useful biomarkers with relatively high sensitivity and specificity and can be used for the diagnosis of BC. However, further prospective studies with more samples should be performed for further validation.
NASA Astrophysics Data System (ADS)
Shi, Lei; Guo, Lianghui; Ma, Yawei; Li, Yonghua; Wang, Weilai
2018-05-01
The technique of teleseismic receiver function H-κ stacking is popular for estimating the crustal thickness and Vp/Vs ratio. However, it has large uncertainty or ambiguity when the Moho multiples in receiver function are not easy to be identified. We present an improved technique to estimate the crustal thickness and Vp/Vs ratio by joint constraints of receiver function and gravity data. The complete Bouguer gravity anomalies, composed of the anomalies due to the relief of the Moho interface and the heterogeneous density distribution within the crust, are associated with the crustal thickness, density and Vp/Vs ratio. According to their relationship formulae presented by Lowry and Pérez-Gussinyé, we invert the complete Bouguer gravity anomalies by using a common algorithm of likelihood estimation to obtain the crustal thickness and Vp/Vs ratio, and then utilize them to constrain the receiver function H-κ stacking result. We verified the improved technique on three synthetic crustal models and evaluated the influence of selected parameters, the results of which demonstrated that the novel technique could reduce the ambiguity and enhance the accuracy of estimation. Real data test at two given stations in the NE margin of Tibetan Plateau illustrated that the improved technique provided reliable estimations of crustal thickness and Vp/Vs ratio.
Su, Jingjun; Du, Xinzhong; Li, Xuyong
2018-05-16
Uncertainty analysis is an important prerequisite for model application. However, existing phosphorus (P) loss indexes and indicators have rarely been evaluated in this respect. This study applied the generalized likelihood uncertainty estimation (GLUE) method to assess the uncertainty of the parameters and modeling outputs of a non-point source (NPS) P indicator constructed in the R language. The influences of the subjective choices of likelihood formulation and acceptability threshold in GLUE on model outputs were also examined. The results indicated the following. (1) Parameters RegR2, RegSDR2, PlossDPfer, PlossDPman, DPDR, and DPR were highly sensitive to the overall TP simulation, and their value ranges could be reduced by GLUE. (2) The Nash efficiency likelihood (L1) appeared better at accentuating high-likelihood simulations than the exponential function (L2). (3) A combined likelihood integrating the criteria of multiple outputs performed better than a single likelihood in model uncertainty assessment, reducing the uncertainty band widths while assuring the goodness of fit of the whole set of model outputs. (4) A value of 0.55 appeared to be a modest choice of threshold to balance the interests of high modeling efficiency and high bracketing efficiency. The results of this study could provide (1) an option for conducting NPS modeling on a single computing platform, (2) important references for parameter setting in NPS model development in similar regions, (3) useful suggestions for the application of the GLUE method in studies with different emphases according to research interests, and (4) important insights into watershed P management in similar regions.
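A bare-bones sketch of the GLUE workflow described above: sample parameters from their priors, run the model, score each run with a Nash-Sutcliffe-based likelihood, discard runs below an acceptability threshold, and weight the remaining "behavioural" runs by their likelihoods. The "model" here is a trivial linear stand-in, not the R-language P indicator, and all data are synthetic.

```python
# Bare-bones GLUE sketch: Monte Carlo sampling, Nash-Sutcliffe efficiency as an
# informal likelihood, a behavioural threshold, and likelihood-weighted outputs.
import numpy as np

rng = np.random.default_rng(6)
rain = rng.gamma(2.0, 2.0, size=100)
obs = 0.6 * rain + rng.normal(scale=1.0, size=rain.size)    # synthetic observations

def model(rain, runoff_coef):
    return runoff_coef * rain                               # trivial stand-in model

def nse(sim, obs):
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

n_samples, threshold = 5000, 0.55
params = rng.uniform(0.0, 1.5, size=n_samples)              # prior for runoff_coef
scores = np.array([nse(model(rain, p), obs) for p in params])

behavioural = scores > threshold
weights = scores[behavioural] / scores[behavioural].sum()
sims = np.array([model(rain, p) for p in params[behavioural]])
weighted_mean_pred = weights @ sims                         # likelihood-weighted ensemble mean

print(f"{behavioural.sum()} behavioural runs out of {n_samples}")
print("behavioural parameter range:",
      round(params[behavioural].min(), 3), "-", round(params[behavioural].max(), 3))
```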
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
A Bayesian Alternative for Multi-objective Ecohydrological Model Specification
NASA Astrophysics Data System (ADS)
Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.
2015-12-01
Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical, and ecological processes in catchments and are usually more complex and more heavily parameterized than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying uncertainty in hydrological modeling, aided by the development of Markov chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize model uncertainty and bias within a Bayesian ecohydrological framework. A formal Bayesian approach is implemented in an ecohydrological model that couples a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations based on a single-objective likelihood (streamflow or LAI) and on multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative, and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between the priors and the corresponding posterior distributions, in order to examine parameter sensitivity. Results show that different prior distributions can strongly influence the posterior distributions of parameters, especially when the available data are limited or the parameters are insensitive to the available data. We demonstrate differences in the optimized parameters and uncertainty limits between cases based on multi-objective and single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of the objectives in multi-objective calibration according to the different data types.
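One concrete way to compute the prior-to-posterior Kullback-Leibler divergence used above as a sensitivity measure is to estimate both densities from samples on a common grid. The sketch below does this for a single parameter with synthetic draws; it illustrates only the KLD calculation, not the HyMOD-DVM framework itself.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic draws standing in for prior and MCMC posterior samples of one parameter
prior_samples = rng.uniform(0.0, 1.0, 5000)            # uniform prior
posterior_samples = rng.beta(8.0, 3.0, 5000)           # data-informed posterior

grid = np.linspace(1e-3, 1.0 - 1e-3, 512)
p = gaussian_kde(posterior_samples)(grid)              # posterior density estimate
q = gaussian_kde(prior_samples)(grid)                  # prior density estimate
p /= np.trapz(p, grid)
q /= np.trapz(q, grid)

# KLD(posterior || prior): large values indicate parameters well informed by the data
kld = np.trapz(p * np.log(p / q), grid)
print(f"KLD(posterior || prior) = {kld:.3f} nats")
```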
Diffuse prior monotonic likelihood ratio test for evaluation of fused image quality measures.
Wei, Chuanming; Kaplan, Lance M; Burks, Stephen D; Blum, Rick S
2011-02-01
This paper introduces a novel method to score how well proposed fused image quality measures (FIQMs) indicate the effectiveness of humans to detect targets in fused imagery. The human detection performance is measured via human perception experiments. A good FIQM should relate to perception results in a monotonic fashion. The method computes a new diffuse prior monotonic likelihood ratio (DPMLR) to facilitate the comparison of the H(1) hypothesis that the intrinsic human detection performance is related to the FIQM via a monotonic function against the null hypothesis that the detection and image quality relationship is random. The paper discusses many interesting properties of the DPMLR and demonstrates the effectiveness of the DPMLR test via Monte Carlo simulations. Finally, the DPMLR is used to score FIQMs with test cases considering over 35 scenes and various image fusion algorithms.
Mohd-Sidik, Sherina; Arroll, Bruce; Goodyear-Smith, Felicity; Zain, Azhar M D
2011-01-01
To determine the diagnostic accuracy of the two questions with help question (TQWHQ) in the Malay language. The two questions are case-finding questions for depression, and a question on whether help is needed was added to increase the specificity of the two questions. This cross-sectional validation study was conducted in a government-funded primary care clinic in Malaysia. The participants were 146 consecutive women patients receiving no psychotropic drugs who were Malay speakers. The main outcome measures were the sensitivity, specificity, and likelihood ratios of the two questions and the help question. The two questions showed a sensitivity of 99% (95% confidence interval 88% to 99.9%) and a specificity of 70% (62% to 78%). The likelihood ratio for a positive test was 3.3 (2.5 to 4.5) and the likelihood ratio for a negative test was 0.01 (0.00 to 0.57). The addition of the help question to the two questions increased the specificity to 95% (89% to 98%). The two questions on depression detected most cases of depression in this study. The questions have the advantage of brevity. The addition of the help question increased the specificity of the two questions. Based on these findings, the TQWHQ can be strongly recommended for the detection of depression in government primary care clinics in Malaysia. Translation did not appear to affect the validity of the TQWHQ.
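To make the clinical meaning of these likelihood ratios concrete, the sketch below converts a pre-test probability of depression into post-test probabilities using the reported LR+ of 3.3 and LR- of 0.01; the 20% pre-test prevalence is an assumed figure for illustration only.

```python
# Hedged worked example: post-test probability from a likelihood ratio via odds.
def post_test_probability(pretest_prob, likelihood_ratio):
    pre_odds = pretest_prob / (1.0 - pretest_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

pretest = 0.20                      # assumed clinic prevalence, for illustration only
print(f"positive screen: {post_test_probability(pretest, 3.3):.2f}")   # ~0.45
print(f"negative screen: {post_test_probability(pretest, 0.01):.3f}")  # ~0.002
```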
[Accuracy of three methods for the rapid diagnosis of oral candidiasis].
Lyu, X; Zhao, C; Yan, Z M; Hua, H
2016-10-09
Objective: To explore a simple, rapid and efficient method for the diagnosis of oral candidiasis in clinical practice. Methods: A total of 124 consecutive patients with suspected oral candidiasis were enrolled from the Department of Oral Medicine, Peking University School and Hospital of Stomatology, Beijing, China. Exfoliated cells of the oral mucosa and saliva (or concentrated oral rinse) obtained from all participants were tested by three rapid smear methods (10% KOH smear, Gram-stained smear, Congo red stained smear). The diagnostic efficacy (sensitivity, specificity, Youden's index, likelihood ratio, consistency, predictive value and area under the curve (AUC)) of each of the three methods was assessed by comparing the results with the gold standard (combination of clinical diagnosis, laboratory diagnosis and expert opinion). Results: Gram-stained smear of saliva (or concentrated oral rinse) demonstrated the highest sensitivity (82.3%). The 10% KOH smear of exfoliated cells showed the highest specificity (93.5%). Congo red stained smear of saliva (or concentrated oral rinse) displayed the highest overall diagnostic efficacy (79.0% sensitivity, 80.6% specificity, 0.60 Youden's index, 4.08 positive likelihood ratio, 0.26 negative likelihood ratio, 80% consistency, 80.3% positive predictive value, 79.4% negative predictive value and 0.80 AUC). Conclusions: The Congo red stained smear of saliva (or concentrated oral rinse) could be used as a point-of-care tool for the rapid diagnosis of oral candidiasis in clinical practice. Trial registration: Chinese Clinical Trial Registry, ChiCTR-DDD-16008118.
Recognition of depressive symptoms by physicians.
Henriques, Sergio Gonçalves; Fráguas, Renério; Iosifescu, Dan V; Menezes, Paulo Rossi; Lucia, Mara Cristina Souza de; Gattaz, Wagner Farid; Martins, Milton Arruda
2009-01-01
To investigate the recognition of depressive symptoms of major depressive disorder (MDD) by general practitioners. MDD is underdiagnosed in medical settings, possibly because of difficulties in the recognition of specific depressive symptoms. A cross-sectional study of 316 outpatients at their first visit to a teaching general hospital. We evaluated the performance of 19 general practitioners using Primary Care Evaluation of Mental Disorders (PRIME-MD) to detect depressive symptoms and compared them to 11 psychiatrists using Structured Clinical Interview Axis I Disorders, Patient Version (SCID I/P). We measured likelihood ratios, sensitivity, specificity, and false positive and false negative frequencies. The lowest positive likelihood ratios were for psychomotor agitation/retardation (1.6) and fatigue (1.7), mostly because of a high rate of false positive results. The highest positive likelihood ratio was found for thoughts of suicide (8.5). The lowest sensitivity, 61.8%, was found for impaired concentration. The sensitivity for worthlessness or guilt in patients with medical illness was 67.2% (95% CI, 57.4-76.9%), which is significantly lower than that found in patients without medical illness, 91.3% (95% CI, 83.2-99.4%). Less adequately identified depressive symptoms were both psychological and somatic in nature. The presence of a medical illness may decrease the sensitivity of recognizing specific depressive symptoms. Programs for training physicians in the use of diagnostic tools should consider their performance in recognizing specific depressive symptoms. Such procedures could allow for the development of specific training to aid in the detection of the most misrecognized depressive symptoms.
Gallo, Jiri; Juranova, Jarmila; Svoboda, Michal; Zapletalova, Jana
2017-09-01
The aim of this study was to evaluate the characteristics of the synovial fluid (SF) white cell count (SWCC) and neutrophil/lymphocyte percentage in the diagnosis of prosthetic joint infection (PJI) at particular threshold values. This was a prospective study of 391 patients in whom SF specimens were collected before total joint replacement revisions. SF was aspirated before joint capsule incision. The PJI diagnosis was based only on non-SF data. Receiver operating characteristic plots were constructed for the SWCC and the differential counts of leukocytes in the aspirated fluid. Binomial logistic regression was used to distinguish infected and non-infected cases in the combined data. PJI was diagnosed in 78 patients, and aseptic revision in 313 patients. The areas under the curve (AUC) for the SWCC, the neutrophil percentage, and the lymphocyte percentage were 0.974, 0.962, and 0.951, respectively. The optimal cut-offs for PJI were 3,450 cells/μL, 74.6% neutrophils, and 14.6% lymphocytes. Positive likelihood ratios for the SWCC, neutrophil, and lymphocyte percentages were 19.0, 10.4, and 9.5, respectively. Negative likelihood ratios for the SWCC, neutrophil, and lymphocyte percentages were 0.06, 0.076, and 0.092, respectively. Based on the AUC, the present study identified cut-off values for the SWCC and differential leukocyte count for the diagnosis of PJI. The likelihood ratios for positive/negative SWCCs can substantially change the pre-test probability of PJI.
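A minimal sketch of how such cut-offs are typically derived: compute the ROC curve of the synovial white cell count against the PJI reference standard, report the AUC, and choose the threshold that maximises Youden's index (sensitivity + specificity - 1). The simulated counts and the scikit-learn calls below are assumptions for illustration, not the study's analysis code.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)

# Simulated synovial WCC (cells/uL): infected joints tend to have much higher counts
n_aseptic, n_pji = 313, 78
wcc = np.concatenate([rng.lognormal(mean=6.5, sigma=0.8, size=n_aseptic),
                      rng.lognormal(mean=9.2, sigma=0.9, size=n_pji)])
pji = np.concatenate([np.zeros(n_aseptic), np.ones(n_pji)])

fpr, tpr, thresholds = roc_curve(pji, wcc)
auc = roc_auc_score(pji, wcc)

youden = tpr - fpr                                  # Youden's index at each threshold
best = np.argmax(youden)
cutoff, sens, spec = thresholds[best], tpr[best], 1.0 - fpr[best]

lr_pos, lr_neg = sens / (1.0 - spec), (1.0 - sens) / spec
print(f"AUC={auc:.3f}, cut-off={cutoff:.0f} cells/uL, "
      f"sens={sens:.2f}, spec={spec:.2f}, LR+={lr_pos:.1f}, LR-={lr_neg:.2f}")
```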
Diagnostic accuracy of history and physical examination in bacterial acute rhinosinusitis.
Autio, Timo J; Koskenkorva, Timo; Närkiö, Mervi; Leino, Tuomo K; Koivunen, Petri; Alho, Olli-Pekka
2015-07-01
To evaluate the diagnostic accuracy of symptoms, the symptom progression pattern, and clinical signs in identifying bacterial acute rhinosinusitis (ARS). We conducted an inception cohort study among 50 military recruits with ARS. We collected symptoms daily from the onset of symptoms to approximately 10 days. At 9 to 10 days, standardized data on symptoms and physical findings were gathered. A positive culture of maxillary sinus aspirate was considered to be the reference standard for bacterial ARS. At 9 to 10 days, the presence or deterioration after 5 days of any of the symptoms could not be used to diagnose bacterial ARS. Toothache had an adequate positive likelihood ratio (positive likelihood ratio [LR+] 4.4) but was too rare to be used for screening. In contrast, several physical findings at 9 to 10 days were of more diagnostic use and frequent enough for screening. Moderate or profuse (vs. none/minimal) amount of secretion in nasal passage seen in anterior rhinoscopy satisfactorily either ruled in, if present (LR+ 3.2), or ruled out, if absent (negative likelihood ratio 0.2), bacterial ARS. If any secretion was seen in the posterior pharynx or middle meatus, the probability of bacterial ARS increased markedly (LR+ 5.3 and LR+ 11.0, respectively). We found symptoms or their change to be of little use in identifying bacterial ARS. In contrast, we observed several clinical findings after 9 to 10 days of symptoms to predict bacterial ARS quite accurately. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
The Diagnostic Accuracy of Cytology for the Diagnosis of Hepatobiliary and Pancreatic Cancers.
Al-Hajeili, Marwan; Alqassas, Maryam; Alomran, Astabraq; Batarfi, Bashaer; Basunaid, Bashaer; Alshail, Reem; Alaydarous, Shahad; Bokhary, Rana; Mosli, Mahmoud
2018-06-13
Although cytology testing is considered a valuable method to diagnose tumors that are difficult to access such as hepato-biliary-pancreatic (HBP) malignancies, its diagnostic accuracy remains unclear. We therefore aimed to investigate the diagnostic accuracy of cytology testing for HBP tumors. We performed a retrospective study of all cytology samples that were used to confirm radiologically detected HBP tumors between 2002 and 2016. The cytology techniques used in our center included fine needle aspiration (FNA), brush cytology, and aspiration of bile. Sensitivity, specificity, positive and negative predictive values, and likelihood ratios were calculated in comparison to histological confirmation. From a total of 133 medical records, we calculated an overall sensitivity of 76%, specificity of 74%, a negative likelihood ratio of 0.30, and a positive likelihood ratio of 2.9. Cytology was more accurate in diagnosing lesions of the liver (sensitivity 79%, specificity 57%) and biliary tree (sensitivity 100%, specificity 50%) compared to pancreatic (sensitivity 60%, specificity 83%) and gallbladder lesions (sensitivity 50%, specificity 85%). Cytology was more accurate in detecting primary cancers (sensitivity 77%, specificity 73%) when compared to metastatic cancers (sensitivity 73%, specificity 100%). FNA was the most frequently used cytological technique to diagnose HBP lesions (sensitivity 78.8%). Cytological testing is efficient in diagnosing HBP cancers, especially for hepatobiliary tumors. Given its relative simplicity, cost-effectiveness, and paucity of alternative diagnostic methods, cytology should still be considered as a first-line tool for diagnosing HBP malignancies. © 2018 S. Karger AG, Basel.
Inferring relationships between pairs of individuals from locus heterozygosities
Presciuttini, Silvano; Toni, Chiara; Tempestini, Elena; Verdiani, Simonetta; Casarino, Lucia; Spinetti, Isabella; Stefano, Francesco De; Domenici, Ranieri; Bailey-Wilson, Joan E
2002-01-01
Background The traditional exact method for inferring relationships between individuals from genetic data is not easily applicable in all situations that may be encountered in several fields of applied genetics. This study describes an approach that gives affordable results and is easily applicable; it is based on the probabilities that two individuals share 0, 1 or both alleles at a locus identical by state. Results We show that these probabilities (zi) depend on locus heterozygosity (H), and are scarcely affected by variation of the distribution of allele frequencies. This allows us to obtain empirical curves relating zi's to H for a series of common relationships, so that the likelihood ratio of a pair of relationships between any two individuals, given their genotypes at a locus, is a function of a single parameter, H. Application to large samples of mother-child and full-sib pairs shows that the statistical power of this method to infer the correct relationship is not much lower than the exact method. Analysis of a large database of STR data proves that locus heterozygosity does not vary significantly among Caucasian populations, apart from special cases, so that the likelihood ratio of the more common relationships between pairs of individuals may be obtained by looking at tabulated zi values. Conclusions A simple method is provided, which may be used by any scientist with the help of a calculator or a spreadsheet to compute the likelihood ratios of common alternative relationships between pairs of individuals. PMID:12441003
NGS-based likelihood ratio for identifying contributors in two- and three-person DNA mixtures.
Chan Mun Wei, Joshua; Zhao, Zicheng; Li, Shuai Cheng; Ng, Yen Kaow
2018-06-01
DNA fingerprinting, also known as DNA profiling, is a standard procedure in forensics for identifying a person by the short tandem repeat (STR) loci in their DNA. By comparing the STR loci between DNA samples, practitioners can calculate a probability of match to identify the contributors of a DNA mixture. Most existing methods are based on the 13 core STR loci identified by the Federal Bureau of Investigation (FBI). Analyses of DNA mixtures based on these loci for forensic purposes are highly variable in their procedures and suffer from subjectivity as well as bias in complex mixture interpretation. With the emergence of next-generation sequencing (NGS) technologies, the sequencing of billions of DNA molecules can be parallelized, greatly increasing throughput and reducing the associated costs. This allows the creation of new techniques that incorporate more loci to enable complex mixture interpretation. In this paper, we propose a computation of the likelihood ratio that uses NGS data for DNA testing on mixed samples. We applied the method to 4480 simulated DNA mixtures, which consist of various mixture proportions of 8 unrelated whole-genome sequencing datasets. The results confirm the feasibility of utilizing NGS data in DNA mixture interpretation. We observed an average likelihood ratio as high as 285,978 for two-person mixtures. Using our method, all 224 identity tests for two-person and three-person mixtures were correctly identified. Copyright © 2018 Elsevier Ltd. All rights reserved.
Kim, T J; Roesler, N M; von dem Knesebeck, O
2017-06-01
Numerous studies have investigated the association between education and overweight/obesity. Yet less is known about the relative importance of causation (i.e. the influence of education on risks of overweight/obesity) and selection (i.e. the influence of overweight/obesity on the likelihood to attain education) hypotheses. A systematic review was performed to assess the linkage between education and overweight/obesity in prospective studies in general populations. Studies were searched within five databases, and study quality was appraised with the Newcastle-Ottawa scale. In total, 31 studies were considered for meta-analysis. Regarding causation (24 studies), the lower educated had a higher likelihood (odds ratio: 1.33, 1.21-1.47) and greater risk (risk ratio: 1.34, 1.08-1.66) for overweight/obesity, when compared with the higher educated. However, these associations were no longer statistically significant when accounting for publication bias. Concerning selection (seven studies), overweight/obese individuals had a greater likelihood of lower education (odds ratio: 1.57, 1.10-2.25), when contrasted with the non-overweight or non-obese. Subgroup analyses were performed by stratifying meta-analyses upon different factors. Relationships between education and overweight/obesity were affected by study region, age groups, gender and observation period. In conclusion, it is necessary to consider both causation and selection processes in order to tackle educational inequalities in obesity appropriately. © 2017 World Obesity Federation.
Marcum, Zachary A; Perera, Subashan; Thorpe, Joshua M; Switzer, Galen E; Castle, Nicholas G; Strotmeyer, Elsa S; Simonsick, Eleanor M; Ayonayon, Hilsa N; Phillips, Caroline L; Rubin, Susan; Zucker-Levin, Audrey R; Bauer, Douglas C; Shorr, Ronald I; Kang, Yihuang; Gray, Shelly L; Hanlon, Joseph T
2016-07-01
Few studies have compared the risk of recurrent falls across various antidepressant agents-using detailed dosage and duration data-among community-dwelling older adults, including those who have a history of a fall/fracture. To examine the association of antidepressant use with recurrent falls, including among those with a history of falls/fractures, in community-dwelling elders. This was a longitudinal analysis of 2948 participants with data collected via interview at year 1 from the Health, Aging and Body Composition study and followed through year 7 (1997-2004). Any antidepressant medication use was self-reported at years 1, 2, 3, 5, and 6 and further categorized as (1) selective serotonin reuptake inhibitors (SSRIs), (2) tricyclic antidepressants, and (3) others. Dosage and duration were examined. The outcome was recurrent falls (≥2) in the ensuing 12-month period following each medication data collection. Using multivariable generalized estimating equations models, we observed a 48% greater likelihood of recurrent falls in antidepressant users compared with nonusers (adjusted odds ratio [AOR] = 1.48; 95% CI = 1.12-1.96). Increased likelihood was also found among those taking SSRIs (AOR = 1.62; 95% CI = 1.15-2.28), with short duration of use (AOR = 1.47; 95% CI = 1.04-2.00), and taking moderate dosages (AOR = 1.59; 95% CI = 1.15-2.18), all compared with no antidepressant use. Stratified analysis revealed an increased likelihood among users with a baseline history of falls/fractures compared with nonusers (AOR = 1.83; 95% CI = 1.28-2.63). Antidepressant use overall, SSRI use, short duration of use, and moderate dosage were associated with recurrent falls. Those with a history of falls/fractures also had an increased likelihood of recurrent falls. © The Author(s) 2016.
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure performs well and avoids the non-convergence problem when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Agrawal, Swati; Cerdeira, Ana Sofia; Redman, Christopher; Vatish, Manu
2018-02-01
Preeclampsia is a major cause of morbidity and mortality worldwide. Numerous candidate biomarkers have been proposed for diagnosis and prediction of preeclampsia. Measurement of maternal circulating angiogenesis biomarker as the ratio of sFlt-1 (soluble FMS-like tyrosine kinase-1; an antiangiogenic factor)/PlGF (placental growth factor; an angiogenic factor) reflects the antiangiogenic balance that characterizes incipient or overt preeclampsia. The ratio increases before the onset of the disease and thus may help in predicting preeclampsia. We conducted a meta-analysis to explore the predictive accuracy of sFlt-1/PlGF ratio in preeclampsia. We included 15 studies with 534 cases with preeclampsia and 19 587 controls. The ratio has a pooled sensitivity of 80% (95% confidence interval, 0.68-0.88), specificity of 92% (95% confidence interval, 0.87-0.96), positive likelihood ratio of 10.5 (95% confidence interval, 6.2-18.0), and a negative likelihood ratio of 0.22 (95% confidence interval, 0.13-0.35) in predicting preeclampsia in both high- and low-risk patients. Most of the studies have not made a distinction between early- and late-onset disease, and therefore, the analysis for it could not be done. It can prove to be a valuable screening tool for preeclampsia and may also help in decision-making, treatment stratification, and better resource allocation. © 2017 American Heart Association, Inc.
Nie, Z Q; Ou, Y Q; Zhuang, J; Qu, Y J; Mai, J Z; Chen, J M; Liu, X Q
2016-05-01
Conditional and unconditional logistic regression analyses are commonly used in case-control studies, whereas the Cox proportional hazards model is often used in survival data analysis. Most of the literature refers only to main-effect models; however, generalized linear models differ from general linear models, and interaction comprises both multiplicative and additive interaction. The former has only statistical significance, whereas the latter has biological significance. In this paper, macros were written in SAS 9.4 to calculate the contrast ratio, the attributable proportion due to interaction, and the synergy index while computing the interaction terms of logistic and Cox regressions, and Wald, delta, and profile-likelihood confidence intervals were used to evaluate additive interaction, as a reference for big data analysis in clinical epidemiology and for the analysis of genetic multiplicative and additive interactions.
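As an illustration of the additive-interaction measures mentioned above, the sketch below computes the relative excess risk due to interaction (RERI), the attributable proportion due to interaction (AP), and the synergy index (S) from three exposure-combination odds ratios, with a delta-method confidence interval for RERI. The odds ratios and variance terms are made-up numbers, and this plain-Python sketch is not the SAS macro described in the paper.

```python
import math

# Assumed odds ratios relative to the doubly unexposed group (illustrative values)
or10, or01, or11 = 1.8, 1.6, 3.9           # exposure A only, exposure B only, both

reri = or11 - or10 - or01 + 1.0             # relative excess risk due to interaction
ap = reri / or11                            # attributable proportion due to interaction
s = (or11 - 1.0) / ((or10 - 1.0) + (or01 - 1.0))   # synergy index

# Delta-method 95% CI for RERI, assuming a diagonal covariance of the log odds ratios
var10, var01, var11 = 0.02, 0.02, 0.03      # assumed variances of the log odds ratios
var_reri = (or11 ** 2) * var11 + (or10 ** 2) * var10 + (or01 ** 2) * var01
se = math.sqrt(var_reri)
print(f"RERI = {reri:.2f} (95% CI {reri - 1.96 * se:.2f} to {reri + 1.96 * se:.2f}), "
      f"AP = {ap:.2f}, S = {s:.2f}")
```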
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE)
Boker, Steven M.; Brick, Timothy R.; Pritikin, Joshua N.; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D.; Maes, Hermine H.; Neale, Michael C.
2015-01-01
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participant's personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual's data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128
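A toy sketch of the core MIDDLE idea, in which the central optimizer never sees raw data but only per-participant likelihood values, is given below. Each simulated "device" holds its own observations and returns the negative log-likelihood of a candidate normal model; the central routine sums these values and minimizes. The normal model and the scipy optimizer are assumptions chosen to keep the illustration short.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)

# Each participant's data stays on their own "device" (here, a closure over the data)
def make_device(data):
    def local_neg_loglik(params):
        mu, log_sigma = params
        return -norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)).sum()
    return local_neg_loglik

devices = [make_device(rng.normal(2.0, 1.5, size=rng.integers(20, 60)))
           for _ in range(50)]                     # 50 simulated participants

def aggregate_neg_loglik(params):
    # Only scalar likelihood values travel back to the central optimizer
    return sum(device(params) for device in devices)

result = minimize(aggregate_neg_loglik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(f"distributed MLE: mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}")
```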
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lupu, Roxana E.; Marley, Mark S.; Zahnle, Kevin
Upcoming space-based coronagraphic instruments in the next decade will perform reflected light spectroscopy and photometry of cool directly imaged extrasolar giant planets. We are developing a new atmospheric retrieval methodology to help assess the science return and inform the instrument design for such future missions, and ultimately interpret the resulting observations. Our retrieval technique employs a geometric albedo model coupled with both a Markov chain Monte Carlo Ensemble Sampler (emcee) and a multimodal nested sampling algorithm (MultiNest) to map the posterior distribution. This combination makes the global evidence calculation more robust for any given model and highlights possible discrepancies in the likelihood maps. As a proof of concept, our current atmospheric model contains one or two cloud layers, methane as a major absorber, and an H2–He background gas. This 6-to-9 parameter model is appropriate for Jupiter-like planets and can be easily expanded in the future. In addition to deriving the marginal likelihood distribution and confidence intervals for the model parameters, we perform model selection to determine the significance of methane and cloud detection as a function of expected signal-to-noise ratio in the presence of spectral noise correlations. After internal validation, the method is applied to realistic spectra of Jupiter, Saturn, and HD 99492c, a model observing target. We find that the presence or absence of clouds and methane can be determined with high confidence, while parameter uncertainties are model dependent and correlated. Such general methods will also be applicable to the interpretation of direct imaging spectra of cloudy terrestrial planets.
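The MCMC half of such a retrieval workflow can be illustrated with a much simpler forward model. The sketch below uses the emcee EnsembleSampler to recover the parameters of a toy "albedo spectrum" (a flat continuum with a Gaussian absorption feature) from noisy data; the forward model, priors, and noise level are invented for the example and bear no relation to the actual geometric albedo code used in the paper.

```python
import numpy as np
import emcee

rng = np.random.default_rng(3)

wave = np.linspace(0.5, 1.0, 80)                        # wavelength grid (microns)

def forward_model(theta, wave):
    """Toy albedo: flat continuum minus a Gaussian 'methane-like' absorption band."""
    continuum, depth, center, width = theta
    return continuum - depth * np.exp(-0.5 * ((wave - center) / width) ** 2)

truth = np.array([0.5, 0.3, 0.73, 0.03])
noise = 0.02
data = forward_model(truth, wave) + rng.normal(0.0, noise, wave.size)

def log_prob(theta):
    continuum, depth, center, width = theta
    if not (0 < continuum < 1 and 0 <= depth < 1 and 0.5 < center < 1.0 and 0.005 < width < 0.2):
        return -np.inf                                   # flat priors with hard bounds
    resid = data - forward_model(theta, wave)
    return -0.5 * np.sum((resid / noise) ** 2)           # Gaussian likelihood

ndim, nwalkers = 4, 32
p0 = truth + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 3000, progress=False)
samples = sampler.get_chain(discard=1000, flat=True)
print("posterior medians:", np.round(np.median(samples, axis=0), 3))
```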
Bleka, Øyvind; Storvik, Geir; Gill, Peter
2016-03-01
We have released a software named EuroForMix to analyze STR DNA profiles in a user-friendly graphical user interface. The software implements a model to explain the allelic peak height on a continuous scale in order to carry out weight-of-evidence calculations for profiles which could be from a mixture of contributors. Through a properly parameterized model we are able to do inference on mixture proportions, the peak height properties, stutter proportion and degradation. In addition, EuroForMix includes models for allele drop-out, allele drop-in and sub-population structure. EuroForMix supports two inference approaches for likelihood ratio calculations. The first approach uses maximum likelihood estimation of the unknown parameters. The second approach is Bayesian based which requires prior distributions to be specified for the parameters involved. The user may specify any number of known and unknown contributors in the model, however we find that there is a practical computing time limit which restricts the model to a maximum of four unknown contributors. EuroForMix is the first freely open source, continuous model (accommodating peak height, stutter, drop-in, drop-out, population substructure and degradation), to be reported in the literature. It therefore serves an important purpose to act as an unrestricted platform to compare different solutions that are available. The implementation of the continuous model used in the software showed close to identical results to the R-package DNAmixtures, which requires a HUGIN Expert license to be used. An additional feature in EuroForMix is the ability for the user to adapt the Bayesian inference framework by incorporating their own prior information. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Lower limb and associated injuries in frontal-impact road traffic collisions.
Ammori, Mohannad B; Eid, Hani O; Abu-Zidan, Fikri M
2016-03-01
To study the relationship between the severity of injury of the lower limb and the severity of injury of the head, thoracic, and abdominal regions in frontal-impact road traffic collisions. Consecutive hospitalised trauma patients who were involved in a frontal road traffic collision were prospectively studied over 18 months. Patients with at least one Abbreviated Injury Scale (AIS) ≥3 injury, or AIS 2 injuries within two AIS body regions, were included. Patients were divided into two groups depending on the severity of injury to the head, chest or abdomen: the low severity group had an AIS < 2 and the high severity group had an AIS ≥ 2. Backward likelihood logistic regression models were used to define significant factors affecting the severity of head, chest or abdominal injuries. Eighty-five patients were studied. The backward likelihood logistic regression model defining independent factors affecting the severity of head injuries was highly significant (p = 0.01, Nagelkerke R² = 0.1); the severity of lower limb injuries was the only significant factor (p = 0.013), with a negative correlation with head injury (odds ratio 0.64, 95% CI: 0.45-0.91). Occupants who sustain a greater severity of injury to the lower limb in a frontal-impact collision are likely to be spared a greater severity of head injury.
The Effects of Model Misspecification and Sample Size on LISREL Maximum Likelihood Estimates.
ERIC Educational Resources Information Center
Baldwin, Beatrice
The robustness of LISREL computer program maximum likelihood estimates under specific conditions of model misspecification and sample size was examined. The population model used in this study contains one exogenous variable; three endogenous variables; and eight indicator variables, two for each latent variable. Conditions of model…
A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses
ERIC Educational Resources Information Center
Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini
2012-01-01
The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…
Primary health care providers' advice for a dental checkup and dental use in children.
Beil, Heather A; Rozier, R Gary
2010-08-01
In this study we estimated factors associated with children being advised to see the dentist by a doctor or other health provider; tested for an association between the advisement on the likelihood that the child would visit the dentist; and estimated the effect of the advisement on dental costs. We identified a sample of 5268 children aged 2 to 11 years in the 2004 Medical Expenditures Panel Survey. A cross-sectional analysis with logistic regression models was conducted to estimate the likelihood of the child receiving a recommendation for a dental checkup, and to determine its effect on the likelihood of having a dental visit. Differences in cost for children who received a recommendation were assessed by using a linear regression model. All analyses were conducted separately on children aged 2 to 5 (n = 2031) and aged 6 to 11 (n = 3237) years. Forty-seven percent of 2- to 5-year-olds and 37% of 6- to 11-year-olds had been advised to see the dentist. Children aged 2 to 5 who received a recommendation were more likely to have a dental visit (odds ratio: 2.89 [95% confidence interval: 2.16-3.87]), but no difference was observed among older children. Advice had no effect on dental costs in either age group. Health providers' recommendation that pediatric patients visit the dentist was associated with an increase in dental visits among young children. Providers have the potential to play an important role in establishing a dental home for children at an early age. Future research should examine potential interventions to increase effective dental referrals by health providers.
The meta-Gaussian Bayesian Processor of forecasts and associated preliminary experiments
NASA Astrophysics Data System (ADS)
Chen, Fajing; Jiao, Meiyan; Chen, Jing
2013-04-01
Public weather services are trending toward providing users with probabilistic weather forecasts, in place of traditional deterministic forecasts. Probabilistic forecasting techniques are continually being improved to optimize available forecasting information. The Bayesian Processor of Forecast (BPF), a new statistical method for probabilistic forecast, can transform a deterministic forecast into a probabilistic forecast according to the historical statistical relationship between observations and forecasts generated by that forecasting system. This technique accounts for the typical forecasting performance of a deterministic forecasting system in quantifying the forecast uncertainty. The meta-Gaussian likelihood model is suitable for a variety of stochastic dependence structures with monotone likelihood ratios. The meta-Gaussian BPF adopting this kind of likelihood model can therefore be applied across many fields, including meteorology and hydrology. The Bayes theorem with two continuous random variables and the normal-linear BPF are briefly introduced. The meta-Gaussian BPF for a continuous predictand using a single predictor is then presented and discussed. The performance of the meta-Gaussian BPF is tested in a preliminary experiment. Control forecasts of daily surface temperature at 0000 UTC at Changsha and Wuhan stations are used as the deterministic forecast data. These control forecasts are taken from ensemble predictions with a 96-h lead time generated by the National Meteorological Center of the China Meteorological Administration, the European Centre for Medium-Range Weather Forecasts, and the US National Centers for Environmental Prediction during January 2008. The results of the experiment show that the meta-Gaussian BPF can transform a deterministic control forecast of surface temperature from any one of the three ensemble predictions into a useful probabilistic forecast of surface temperature. These probabilistic forecasts quantify the uncertainty of the control forecast; accordingly, the performance of the probabilistic forecasts differs based on the source of the underlying deterministic control forecasts.
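For readers who want to see the mechanics, a minimal numerical sketch of the normal-linear special case of the BPF is given below: the climatological prior of the predictand is taken as normal, the deterministic forecast is regressed linearly on the observation over a historical sample, and Bayes' theorem then turns a new forecast into a normal posterior, i.e. the probabilistic forecast. The numbers are synthetic, and the full meta-Gaussian processor would additionally transform both variables to normality (e.g. with the normal quantile transform) before applying the same machinery.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

# --- Historical sample: observed temperature w and deterministic forecast x ---
n = 400
w_hist = rng.normal(2.0, 4.0, n)                        # climatology of the predictand
x_hist = 0.9 * w_hist + 1.0 + rng.normal(0.0, 1.5, n)   # forecasts track w with error

# Climatological (prior) moments of the predictand
M, S = w_hist.mean(), w_hist.std(ddof=1)

# Likelihood model: regress the forecast on the observation, x = a + b*w + eps
b, a = np.polyfit(w_hist, x_hist, 1)
sigma = np.std(x_hist - (a + b * w_hist), ddof=2)

def bpf_posterior(x_new):
    """Normal-linear BPF: posterior mean/sd of w given a new deterministic forecast."""
    precision = 1.0 / S**2 + b**2 / sigma**2
    mean = (M / S**2 + b * (x_new - a) / sigma**2) / precision
    return mean, np.sqrt(1.0 / precision)

mean, sd = bpf_posterior(x_new=8.0)
print(f"probabilistic forecast: N({mean:.2f}, {sd:.2f}^2); "
      f"P(w > 5 degC) = {norm.sf(5.0, loc=mean, scale=sd):.2f}")
```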
Robust analysis of semiparametric renewal process models
Lin, Feng-Chang; Truong, Young K.; Fine, Jason P.
2013-01-01
Summary A rate model is proposed for a modulated renewal process comprising a single long sequence, where the covariate process may not capture the dependencies in the sequence as in standard intensity models. We consider partial likelihood-based inferences under a semiparametric multiplicative rate model, which has been widely studied in the context of independent and identical data. Under an intensity model, gap times in a single long sequence may be used naively in the partial likelihood with variance estimation utilizing the observed information matrix. Under a rate model, the gap times cannot be treated as independent and studying the partial likelihood is much more challenging. We employ a mixing condition in the application of limit theory for stationary sequences to obtain consistency and asymptotic normality. The estimator's variance is quite complicated owing to the unknown gap times dependence structure. We adapt block bootstrapping and cluster variance estimators to the partial likelihood. Simulation studies and an analysis of a semiparametric extension of a popular model for neural spike train data demonstrate the practical utility of the rate approach in comparison with the intensity approach. PMID:24550568
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
ERIC Educational Resources Information Center
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM
ERIC Educational Resources Information Center
Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman
2012-01-01
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…
The striking similarities between standard, distractor-free, and target-free recognition
Dobbins, Ian G.
2012-01-01
It is often assumed that observers seek to maximize correct responding during recognition testing by actively adjusting a decision criterion. However, early research by Wallace (Journal of Experimental Psychology: Human Learning and Memory 4:441–452, 1978) suggested that recognition rates for studied items remained similar, regardless of whether or not the tests contained distractor items. We extended these findings across three experiments, addressing whether detection rates or observer confidence changed when participants were presented standard tests (targets and distractors) versus “pure-list” tests (lists composed entirely of targets or distractors). Even when observers were made aware of the composition of the pure-list test, the endorsement rates and confidence patterns remained largely similar to those observed during standard testing, suggesting that observers are typically not striving to maximize the likelihood of success across the test. We discuss the implications for decision models that assume a likelihood ratio versus a strength decision axis, as well as the implications for prior findings demonstrating large criterion shifts using target probability manipulations. PMID:21476108
Salinero-Fort, Miguel Ángel; de Burgos-Lunar, Carmen; Mostaza Prieto, José; Lahoz Rallo, Carlos; Abánades-Herranz, Juan Carlos; Gómez-Campelo, Paloma; Laguna Cuesta, Fernando; Estirado De Cabo, Eva; García Iglesias, Francisca; González Alegre, Teresa; Fernández Puntero, Belén; Montesano Sánchez, Luis; Vicent López, David; Cornejo Del Río, Víctor; Fernández García, Pedro J; Sabín Rodríguez, Concesa; López López, Silvia; Patrón Barandío, Pedro
2015-01-01
Introduction: The incidence of type 2 diabetes mellitus (T2DM) is increasing worldwide. When diagnosed, many patients already have organ damage or advanced subclinical atherosclerosis. An early diagnosis could allow the implementation of lifestyle changes and treatment options aimed at delaying the progression of the disease and avoiding cardiovascular complications. Different scores for identifying undiagnosed diabetes have been reported; however, their performance in populations of southern Europe has not been sufficiently evaluated. The main objectives of our study are: to evaluate the screening performance and cut-off points of the main scores that identify the risk of undiagnosed T2DM and prediabetes in a Spanish population, and to develop and validate our own predictive models of undiagnosed T2DM (screening model) and of future T2DM after 5-year follow-up (risk prediction model). As a secondary objective, we will evaluate the atherosclerotic burden of the population with undiagnosed T2DM. Methods and analysis: Population-based prospective cohort study with baseline screening, to evaluate the performance of the FINDRISC, DANISH, DESIR, ARIC and QDScore against the gold standard tests: fasting plasma glucose, oral glucose tolerance and/or HbA1c. The sample size will include 1352 participants between the ages of 45 and 74 years. Analysis: sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, negative likelihood ratio, and receiver operating characteristic curves with area under the curve. Binary logistic regression will be performed for the first 700 individuals (derivation) and the last 652 (validation). All analyses will be calculated with their 95% CI; statistical significance will be p<0.05. Ethics and dissemination: The study protocol has been approved by the Research Ethics Committee of the Carlos III Hospital (Madrid). The score performance and predictive model will be presented at medical conferences, workshops, seminars and round-table discussions. Furthermore, the predictive model will be published in a peer-reviewed medical journal to further increase the exposure of the scores. PMID:26220868
Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders
2013-10-01
Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and voltage distribution of the ictal activity. A distributed source model-local autoregressive average (LAURA)-was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard-the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. Reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for the concordant results, as compared with the discordant ones. Source localization of rhythmic ictal activity using a distributed source model (LAURA) for the ictal EEG signals selected with a standardized method is feasible in clinical practice and has a good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
Lai, Vincent; Lee, Victor Ho Fun; Lam, Ka On; Sze, Henry Chun Kin; Chan, Queenie; Khong, Pek Lan
2015-06-01
To determine the utility of the stretched exponential diffusion model in characterising water diffusion heterogeneity in different tumour stages of nasopharyngeal carcinoma (NPC). Fifty patients with newly diagnosed NPC were prospectively recruited. Diffusion-weighted MR imaging was performed using five b values (0-2,500 s/mm²). The respective stretched exponential parameters (DDC, distributed diffusion coefficient; and alpha (α), water heterogeneity) were calculated. Patients were stratified into low and high tumour stage groups based on the American Joint Committee on Cancer (AJCC) staging for determination of the predictive powers of DDC and α using t test and ROC curve analyses. The mean ± standard deviation values were DDC = 0.692 ± 0.199 ×10⁻³ mm²/s for the low stage group vs 0.794 ± 0.253 ×10⁻³ mm²/s for the high stage group; α = 0.792 ± 0.145 for the low stage group vs 0.698 ± 0.155 for the high stage group. α was significantly lower in the high stage group, while DDC was negatively correlated with α. DDC and α were both reliable independent predictors (p < 0.001), with α being more powerful. Optimal cut-off values were (sensitivity, specificity, positive likelihood ratio, negative likelihood ratio): DDC = 0.692 ×10⁻³ mm²/s (94.4%, 64.3%, 2.64, 0.09); α = 0.720 (72.2%, 100%, -, 0.28). The heterogeneity index α is robust and can potentially help in staging and grading prediction in NPC. • Stretched exponential diffusion models can help in tissue characterisation in nasopharyngeal carcinoma • α and distributed diffusion coefficient (DDC) are negatively correlated • α is a robust heterogeneity index marker • α can potentially help in staging and grading prediction.
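The stretched exponential model itself is simple to fit: the signal is modelled as S(b) = S0 · exp(-(b·DDC)^α), and DDC and α are obtained by least squares over the acquired b values. The sketch below fits synthetic five-b-value data with scipy; the signal values are invented and the fit is an illustration, not the study's processing pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(b, s0, ddc, alpha):
    """Stretched-exponential DWI model: S(b) = S0 * exp(-(b*DDC)**alpha)."""
    return s0 * np.exp(-(b * ddc) ** alpha)

b_values = np.array([0.0, 500.0, 1000.0, 1500.0, 2500.0])          # s/mm^2
ddc_true, alpha_true = 0.75e-3, 0.70                               # mm^2/s, unitless
rng = np.random.default_rng(5)
signal = stretched_exp(b_values, 100.0, ddc_true, alpha_true) * (1 + rng.normal(0, 0.01, 5))

popt, pcov = curve_fit(stretched_exp, b_values, signal,
                       p0=[signal[0], 1.0e-3, 0.8],
                       bounds=([0.0, 1e-5, 0.1], [np.inf, 5e-3, 1.0]))
s0, ddc, alpha = popt
print(f"DDC = {ddc * 1e3:.3f} x10^-3 mm^2/s, alpha = {alpha:.2f}")
```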
Eng, Kenny; Carlisle, Daren M.; Wolock, David M.; Falcone, James A.
2013-01-01
An approach is presented in this study to aid water-resource managers in characterizing streamflow alteration at ungauged rivers. Such approaches can be used to take advantage of the substantial amounts of biological data collected at ungauged rivers to evaluate the potential ecological consequences of altered streamflows. National-scale random forest statistical models are developed to predict the likelihood that ungauged rivers have altered streamflows (relative to expected natural condition) for five hydrologic metrics (HMs) representing different aspects of the streamflow regime. The models use human disturbance variables, such as number of dams and road density, to predict the likelihood of streamflow alteration. For each HM, separate models are derived to predict the likelihood that the observed metric is greater than (‘inflated’) or less than (‘diminished’) natural conditions. The utility of these models is demonstrated by applying them to all river segments in the South Platte River in Colorado, USA, and for all 10-digit hydrologic units in the conterminous United States. In general, the models successfully predicted the likelihood of alteration to the five HMs at the national scale as well as in the South Platte River basin. However, the models predicting the likelihood of diminished HMs consistently outperformed models predicting inflated HMs, possibly because of fewer sites across the conterminous United States where HMs are inflated. The results of these analyses suggest that the primary predictors of altered streamflow regimes across the Nation are (i) the residence time of annual runoff held in storage in reservoirs, (ii) the degree of urbanization measured by road density and (iii) the extent of agricultural land cover in the river basin.
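The basic modelling step described above (predicting from human-disturbance covariates the probability that a hydrologic metric is altered) can be sketched with a generic random forest classifier. The covariates, the synthetic relationship between them and the alteration label, and the scikit-learn calls below are illustrative assumptions rather than the study's actual national-scale models.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 3000

# Synthetic human-disturbance predictors for gauged basins
X = pd.DataFrame({
    "reservoir_storage_yr": rng.gamma(1.0, 0.5, n),   # residence time of annual runoff
    "road_density":         rng.gamma(2.0, 1.0, n),
    "pct_agriculture":      rng.uniform(0.0, 80.0, n),
})
# Synthetic label: metric "diminished" relative to expected natural condition
logit = 1.5 * X["reservoir_storage_yr"] + 0.3 * X["road_density"] + 0.02 * X["pct_agriculture"] - 2.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=5, random_state=0)
rf.fit(X_tr, y_tr)

# "Likelihood of alteration" for an ungauged basin = predicted class probability
proba = rf.predict_proba(X_te)[:, 1]
print(f"hold-out AUC = {roc_auc_score(y_te, proba):.2f}")
print(dict(zip(X.columns, np.round(rf.feature_importances_, 2))))
```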
Julien, Clavel; Leandro, Aristide; Hélène, Morlon
2018-06-19
Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from low statistical performance as the number of traits p approaches the number of species n, and because computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large p, small n scenario, but their use and performance are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the new proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
Shen, Yongchun; Pang, Caishuang; Wu, Yanqiu; Li, Diandian; Wan, Chun; Liao, Zenglin; Yang, Ting; Chen, Lei; Wen, Fuqiang
2016-06-01
The usefulness of bronchoalveolar lavage fluid (BALF) CD4/CD8 ratio for diagnosing sarcoidosis has been reported in many studies with variable results. Therefore, we performed a meta-analysis to estimate the overall diagnostic accuracy of BALF CD4/CD8 ratio based on the bulk of published evidence. Studies published prior to June 2015 and indexed in PubMed, OVID, Web of Science, Scopus and other databases were evaluated for inclusion. Data on sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR) were pooled from included studies. Summary receiver operating characteristic (SROC) curves were used to summarize overall test performance. Deeks's funnel plot was used to detect publication bias. Sixteen publications with 1885 subjects met our inclusion criteria and were included in this meta-analysis. Summary estimates of the diagnostic performance of the BALF CD4/CD8 ratio were as follows: sensitivity, 0.70 (95%CI 0.64-0.75); specificity, 0.83 (95%CI 0.78-0.86); PLR, 4.04 (95%CI 3.13-5.20); NLR, 0.36 (95%CI 0.30-0.44); and DOR, 11.17 (95%CI 7.31-17.07). The area under the SROC curve was 0.84 (95%CI 0.81-0.87). There was no evidence of publication bias. Measuring the BALF CD4/CD8 ratio may assist in the diagnosis of sarcoidosis when interpreted in parallel with other diagnostic factors. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
WE-AB-BRA-05: Fully Automatic Segmentation of Male Pelvic Organs On CT Without Manual Intervention
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Y; Lian, J; Chen, R
Purpose: We aim to develop a fully automatic tool for accurate contouring of major male pelvic organs in CT images for radiotherapy, without any manual initialization, yet still achieving superior performance to existing tools. Methods: A learning-based 3D deformable shape model was developed for automatic contouring. Specifically, we utilized a recent machine learning method, random forest, to jointly learn both an image regressor and a classifier for each organ. In particular, the image regressor is trained to predict the 3D displacement from each vertex of the 3D shape model towards the organ boundary based on the local image appearance around the location of this vertex. The predicted 3D displacements are then used to drive the 3D shape model towards the target organ. Once the shape model is deformed close to the target organ, it is further refined by an organ likelihood map estimated by the learned classifier. As the organ likelihood map provides a good guideline for the organ boundary, a precise contouring result can be achieved by deforming the 3D shape model locally to fit boundaries in the organ likelihood map. Results: We applied our method to 29 previously-treated prostate cancer patients, each with one planning CT scan. Compared with manually delineated pelvic organs, our method obtains overlap ratios of 85.2%±3.74% for the prostate, 94.9%±1.62% for the bladder, and 84.7%±1.97% for the rectum, respectively. Conclusion: This work demonstrated the feasibility of a novel machine-learning based approach for accurate and automatic contouring of major male pelvic organs. It shows the potential to replace the time-consuming and inconsistent manual contouring in the clinic. Also, compared with existing works, our method is more accurate and also efficient, since it does not require any manual intervention such as manual landmark placement. Moreover, our method obtained very similar contouring results to those of the clinical experts. The project is partially supported by a grant from NCI 1R01CA140413.
Distribution of model-based multipoint heterogeneity lod scores.
Xing, Chao; Morris, Nathan; Xing, Guan
2010-12-01
The distribution of two-point heterogeneity lod scores (HLOD) has been intensively investigated because the conventional χ² approximation to the likelihood ratio test is not directly applicable. However, no study has investigated the distribution of the multipoint HLOD despite its wide application. Here we point out that, compared with the two-point HLOD, the multipoint HLOD essentially tests for homogeneity given linkage and follows a relatively simple limiting distribution ½χ²₀ + ½χ²₁, which can be obtained by established statistical theory. We further examine the theoretical result by simulation studies. © 2010 Wiley-Liss, Inc.
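The practical consequence of the ½χ²₀ + ½χ²₁ limiting distribution is that the p-value for an observed multipoint HLOD is half the upper tail probability of a one-degree-of-freedom chi-square applied to the corresponding likelihood ratio statistic, 2·ln(10)·HLOD. A small sketch, with a deliberately crude Monte Carlo check of the mixture:

```python
import numpy as np
from scipy.stats import chi2

def hlod_pvalue(hlod):
    """P-value for a multipoint HLOD under the 1/2*chi2_0 + 1/2*chi2_1 mixture."""
    lrt = 2.0 * np.log(10.0) * hlod          # convert lod units to a chi-square statistic
    return 0.5 * chi2.sf(lrt, df=1)

print(f"HLOD = 3.0 -> p = {hlod_pvalue(3.0):.2e}")

# Crude Monte Carlo check of the mixture: under H0 the statistic is 0 half the time
rng = np.random.default_rng(0)
z = rng.normal(size=200_000)
stat = np.where(z > 0, z**2, 0.0)            # max(0, Z)^2 ~ 1/2*chi2_0 + 1/2*chi2_1
thresh = 2.0 * np.log(10.0) * 3.0
print(f"simulated tail probability: {np.mean(stat > thresh):.2e}")
```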
Maximum Likelihood Item Easiness Models for Test Theory Without an Answer Key
Batchelder, William H.
2014-01-01
Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce two extensions to the basic model in order to account for item rating easiness/difficulty. The first extension is a multiplicative model and the second is an additive model. We show how the multiplicative model is related to the Rasch model. We describe several maximum-likelihood estimation procedures for the models and discuss issues of model fit and identifiability. We describe how the CCT models could be used to give alternative consensus-based measures of reliability. We demonstrate the utility of both the basic and extended models on a set of essay rating data and give ideas for future research. PMID:29795812
Modeling gene expression measurement error: a quasi-likelihood approach
Strimmer, Korbinian
2003-01-01
Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637
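A minimal sketch of the extended quasi-likelihood idea, assuming one particular quadratic variance structure V(mu) = a + b*mu² and plug-in group means; this is an illustration only, not the paper's implementation:

```python
# Sketch of the extended quasi-likelihood (EQL) idea with a hypothetical
# quadratic variance structure V(mu) = a + b*mu**2 (a common two-component
# assumption for expression intensities). Data are synthetic placeholders.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

rng = np.random.default_rng(1)
true_means = np.array([50.0, 200.0, 800.0, 3000.0])
a_true, b_true = 400.0, 0.01
y = [rng.normal(m, np.sqrt(a_true + b_true * m**2), size=30) for m in true_means]
mu_hat = [g.mean() for g in y]                    # plug-in estimates of the group means

def quasi_deviance(obs, mu, a, b):
    # d(y, mu) = 2 * integral_mu^y (y - t) / V(t) dt, with V(t) = a + b*t^2
    val, _ = quad(lambda t: (obs - t) / (a + b * t**2), mu, obs)
    return 2.0 * val

def neg_eql(log_params):
    a, b = np.exp(log_params)
    out = 0.0
    for g, mu in zip(y, mu_hat):
        out += sum(quasi_deviance(obs, mu, a, b) for obs in g)
        out += np.sum(np.log(2 * np.pi * (a + b * g**2)))   # EQL normalising term
    return 0.5 * out

fit = minimize(neg_eql, x0=np.log([100.0, 0.05]), method="Nelder-Mead")
print(np.exp(fit.x))   # estimated (a, b), to compare with (400, 0.01)
```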
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
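For the criterion-based techniques mentioned above, the model weights follow a simple exponential formula; a minimal sketch with hypothetical criterion values:

```python
# Minimal sketch of criterion-based model averaging weights (Akaike-type
# weights); the same formula applies with BIC or KIC by swapping the criterion.
# The values below are hypothetical criteria for four alternative conceptual models.
import numpy as np

ic = np.array([210.3, 214.8, 211.1, 225.6])      # e.g. AIC (or BIC/KIC) per model
delta = ic - ic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()
print(weights.round(3))                           # posterior-like model weights

# An averaged prediction is then sum_k weights[k] * prediction_k; GLUE instead
# derives weights from Monte Carlo likelihood measures rather than a criterion.
```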
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
NASA Astrophysics Data System (ADS)
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable to forecast a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization is performed in many meteorological post-processing studies since the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption of the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield to similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
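A toy comparison of the two estimators, with the regression structure reduced to constant parameters and a Gaussian predictive distribution whose CRPS has a closed form:

```python
# Sketch contrasting minimum CRPS and maximum likelihood estimation for a
# Gaussian predictive distribution. The regression structure (mu, sigma
# depending on ensemble statistics) is reduced to constants; data are synthetic.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(2)
y = rng.normal(1.5, 2.0, size=2000)               # "observations"

def crps_gaussian(y, mu, sigma):
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def mean_crps(p):                                  # p = (mu, log_sigma)
    return np.mean(crps_gaussian(y, p[0], np.exp(p[1])))

def neg_loglik(p):
    return -np.sum(norm.logpdf(y, p[0], np.exp(p[1])))

fit_crps = minimize(mean_crps, x0=[0.0, 0.0], method="Nelder-Mead")
fit_ml = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
print(fit_crps.x[0], np.exp(fit_crps.x[1]))       # both should recover ~ (1.5, 2.0)
print(fit_ml.x[0], np.exp(fit_ml.x[1]))           # when the Gaussian assumption holds
```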
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
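The "parametric likelihood approximation placed in a conventional MCMC sampler" can be sketched on a toy simulator standing in for FORMIND; everything below (model, summary statistic, tuning constants) is illustrative:

```python
# Toy sketch: a parametric (Gaussian) likelihood approximation built from
# repeated stochastic simulations, used inside a plain Metropolis sampler.
# The "model" is a trivial stochastic process standing in for FORMIND and the
# summary statistic is its mean; nothing here is the authors' implementation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def simulate(theta, n=200):
    # placeholder stochastic simulator; the forest model would go here
    return rng.poisson(theta, size=n)

def approx_loglik(theta, obs_summary, n_rep=50):
    # fit a normal to the summary statistic across replicate simulations
    sims = np.array([simulate(theta).mean() for _ in range(n_rep)])
    return norm.logpdf(obs_summary, loc=sims.mean(), scale=sims.std(ddof=1))

obs_summary = simulate(5.0).mean()                # "observed" data summary
theta, ll = 2.0, approx_loglik(2.0, obs_summary)
chain = []
for _ in range(2000):                             # Metropolis sampler, flat prior on theta > 0
    prop = theta + rng.normal(0, 0.3)
    if prop > 0:
        ll_prop = approx_loglik(prop, obs_summary)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
    chain.append(theta)
print(np.mean(chain[500:]))                       # posterior mean, ~5 expected
```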
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
Ultrasonography guidance reduces complications and costs associated with thoracentesis procedures.
Patel, Pankaj A; Ernst, Frank R; Gunnarsson, Candace L
2012-01-01
PURPOSE: We performed an analysis of hospitalizations involving thoracentesis procedures to determine whether the use of ultrasonographic (US) guidance is associated with differences in complications or hospital costs as compared with not using US guidance. METHODS: We used the Premier hospital database to identify patients with ICD-9 coded thoracentesis in 2008. Use of US guidance was identified using CPT-4 codes. We performed univariate and multivariable analyses of cost data and adjusted for patient demographics, hospital characteristics, patient morbidity severity, and mortality. Logistic regression models were developed for pneumothorax and hemorrhage adverse events, controlling for patient demographics, morbidity severity, mortality, and hospital size. RESULTS: Of 19,339 thoracentesis procedures, 46% were performed with US guidance. Mean total hospitalization costs were $11,786 (±$10,535) and $12,408 (±$13,157) for patients with and without US guidance, respectively (p < 0.001). Unadjusted risk of pneumothorax or hemorrhage was lower with US guidance (p = 0.019 and 0.078, respectively). Logistic regression analyses demonstrate that US guidance is associated with a 16.3% reduction in the likelihood of pneumothorax (adjusted odds ratio 0.837, 95% CI: 0.73-0.96; p = 0.014) and a 38.7% reduction in the likelihood of hemorrhage (adjusted odds ratio 0.613, 95% CI: 0.36-1.04; p = 0.071). CONCLUSIONS: US-guided thoracentesis is associated with lower total hospital stay costs and lower incidence of pneumothorax and hemorrhage. © 2011 Wiley Periodicals, Inc. J Clin Ultrasound, 2011.
Team climate, intention to leave and turnover among hospital employees: prospective cohort study.
Kivimäki, Mika; Vanhala, Anna; Pentti, Jaana; Länsisalmi, Hannakaisa; Virtanen, Marianna; Elovainio, Marko; Vahtera, Jussi
2007-10-23
In hospitals, the costs of employee turnover are substantial and intentions to leave among staff may manifest as lowered performance. We examined whether team climate, as indicated by clear and shared goals, participation, task orientation and support for innovation, predicts intention to leave the job and actual turnover among hospital employees. Prospective study with baseline and follow-up surveys (2-4 years apart). The participants were 6,441 (785 men, 5,656 women) hospital employees under the age of 55 at the time of the follow-up survey. Logistic regression with generalized estimating equations was used as the analysis method to include both individual and work unit level predictors in the models. Among stayers with no intention to leave at baseline, lower self-reported team climate predicted a higher likelihood of having intentions to leave at follow-up (odds ratio per 1 standard deviation decrease in team climate 1.6, 95% confidence interval 1.4-1.8). Lower co-worker-assessed team climate at follow-up was also associated with such intentions (odds ratio 1.8, 95% confidence interval 1.4-2.4). Among all participants, the likelihood of actually quitting the job was higher for those with poor self-reported team climate at baseline. This association disappeared after adjustment for intention to leave at baseline, suggesting that such intentions may explain the greater turnover rate among employees with low team climate. Improving team climate may reduce intentions to leave and turnover among hospital employees.
Evaluating O, C, and N isotopes in human hair as a forensic tool to reconstruct travel
NASA Astrophysics Data System (ADS)
Ehleringer, Jim; Chesson, Lesley; Cerling, Thure; Valenzuela, Luciano
2014-05-01
Oxygen isotope ratios in the proteins of human scalp hair have been proposed and modeled as a tool for reconstructing the movements of humans and evaluating the likelihood that an individual is a resident or non-resident of a particular geographic region. Carbon and nitrogen isotope ratios reflect dietary input and complement oxygen isotope data interpretation when it is necessary to distinguish potential location overlap among continents. The combination of a time sequence analysis in hair segments and spatial models that describe predicted geographic variation in hair isotope values represents a potentially powerful tool for forensic investigations. The applications of this technique have thus far been to provide assistance to law enforcement with information on the predicted geographical travel histories of unidentified murder victims. Here we review multiple homicide cases from the USA where stable isotope analysis of hair has been applied and for which we now know the travel histories of the murder victims. Here we provide information on the robustness of the original data sets used to test these models by evaluating the travel histories of randomly collected hair discarded in Utah barbershops.
Ackermann, M.; Ajello, M.; Atwood, W. B.; ...
2012-04-09
The γ-ray sky >100 MeV is dominated by the diffuse emissions from interactions of cosmic rays with the interstellar gas and radiation fields of the Milky Way. Our observations of these diffuse emissions provide a tool to study cosmic-ray origin and propagation, and the interstellar medium. We present measurements from the first 21 months of the Fermi Large Area Telescope (Fermi-LAT) mission and compare with models of the diffuse γ-ray emission generated using the GALPROP code. The models are fitted to cosmic-ray data and incorporate astrophysical input for the distribution of cosmic-ray sources, interstellar gas, and radiation fields. In order to assess uncertainties associated with the astrophysical input, a grid of models is created by varying within observational limits the distribution of cosmic-ray sources, the size of the cosmic-ray confinement volume (halo), and the distribution of interstellar gas. An all-sky maximum-likelihood fit is used to determine the X CO factor, the ratio between integrated CO-line intensity and H2 column density, the fluxes and spectra of the γ-ray point sources from the first Fermi-LAT catalog, and the intensity and spectrum of the isotropic background including residual cosmic rays that were misclassified as γ-rays, all of which have some dependency on the assumed diffuse emission model. The models are compared on the basis of their maximum-likelihood ratios as well as spectra, longitude, and latitude profiles. Here, we provide residual maps for the data following subtraction of the diffuse emission models. The models are consistent with the data at high and intermediate latitudes but underpredict the data in the inner Galaxy for energies above a few GeV. Possible explanations for this discrepancy are discussed, including the contribution by undetected point-source populations and spectral variations of cosmic rays throughout the Galaxy. In the outer Galaxy, we find that the data prefer models with a flatter distribution of cosmic-ray sources, a larger cosmic-ray halo, or greater gas density than is usually assumed. Our results in the outer Galaxy are consistent with other Fermi-LAT studies of this region that used different analysis methods than employed in this paper.
[Value of ultrasonography to predict the endometrial cancer in postmenopausal bleeding].
Bouzid, A; Ayachi, A; Mourali, M
2015-10-01
To build mathematical models for evaluating the individual risk of endometrial malignancy in women with postmenopausal bleeding and a thick endometrium using clinical data, sonographic endometrial thickness and power Doppler ultrasound findings. A total of 117 patients underwent transvaginal two-dimensional gray-scale and power Doppler ultrasound examination of the endometrium before undergoing endometrial biopsy. Inclusion criteria were post-menopausal bleeding and a thick endometrium greater than 5 mm. The ultrasound image showing the most vascularized section through the endometrium as assessed by power Doppler was frozen to estimate endometrial thickness and features. The vascularity index was calculated using computer software. A structured history was taken to collect clinical information. Multivariate logistic regression analysis was used to create mathematical models to predict endometrial malignancy. There were 31 (26.4%) malignant and 86 (73.6%) benign endometria. Women with a malignant endometrium were older (median age 61 vs 56 years, P=0.036) and had a thicker endometrium (median thickness 18.8 mm vs 12.5 mm; P=0.002) and higher values for the vascularity index. When using only clinical data to build a model for estimating the risk of endometrial malignancy, a model including only age had the largest area under the receiver-operating characteristics curve (AUC), with a value of 0.69 (95% confidence interval [CI], 0.59-0.79). A model including age and endometrial thickness had an AUC of 0.72 (95% CI, 0.50-0.96), and one including age, endometrial thickness and vascularity index had an AUC of 0.91 (95% CI, 0.62-0.97). Using a risk cut-off of 12%, the latter model had sensitivity 89%, specificity 74%, positive likelihood ratio 3.42 and negative likelihood ratio 0.14. Postmenopausal bleeding is a frequent cause of gynecological consultation, particularly in the peri- and postmenopausal period, and is the main warning sign of endometrial carcinoma. Vaginal ultrasound has become the "gold standard" in the initial exploration. It is a powerful tool to estimate the individual risk of malignancy in symptomatic postmenopausal women in order to optimize management. The diagnostic performance of models predicting endometrial cancer increases substantially when sonographic and power Doppler information are added to clinical variables. This model seems to be clinically useful but needs to be prospectively validated. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
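A hedged sketch of this kind of prediction model (logistic regression on age, thickness and a vascularity index, with the AUC and the likelihood ratios at a 12% risk cut-off), using simulated placeholder data rather than the study data:

```python
# Hedged sketch of a risk-prediction model of the type described above.
# All data below are simulated placeholders, not the study cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 117
malignant = rng.binomial(1, 0.26, size=n)
age = rng.normal(56 + 5 * malignant, 8)
thickness = rng.normal(12 + 6 * malignant, 4)
vascularity = rng.gamma(2 + 3 * malignant, 1.0)
X = np.column_stack([age, thickness, vascularity])

model = LogisticRegression(max_iter=1000).fit(X, malignant)
risk = model.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(malignant, risk))

pred = risk >= 0.12                                # 12% risk cut-off
sens = pred[malignant == 1].mean()
spec = (~pred[malignant == 0]).mean()
print("PLR:", sens / (1 - spec), "NLR:", (1 - sens) / spec)
```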
NASA Astrophysics Data System (ADS)
Ackermann, M.; Ajello, M.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Berenji, B.; Blandford, R. D.; Bloom, E. D.; Bonamente, E.; Borgland, A. W.; Brandt, T. J.; Bregeon, J.; Brigida, M.; Bruel, P.; Buehler, R.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Caraveo, P. A.; Cavazzuti, E.; Cecchi, C.; Charles, E.; Chekhtman, A.; Chiang, J.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Conrad, J.; Cutini, S.; de Angelis, A.; de Palma, F.; Dermer, C. D.; Digel, S. W.; Silva, E. do Couto e.; Drell, P. S.; Drlica-Wagner, A.; Falletti, L.; Favuzzi, C.; Fegan, S. J.; Ferrara, E. C.; Focke, W. B.; Fortin, P.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gaggero, D.; Gargano, F.; Germani, S.; Giglietto, N.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Grove, J. E.; Guiriec, S.; Gustafsson, M.; Hadasch, D.; Hanabata, Y.; Harding, A. K.; Hayashida, M.; Hays, E.; Horan, D.; Hou, X.; Hughes, R. E.; Jóhannesson, G.; Johnson, A. S.; Johnson, R. P.; Kamae, T.; Katagiri, H.; Kataoka, J.; Knödlseder, J.; Kuss, M.; Lande, J.; Latronico, L.; Lee, S.-H.; Lemoine-Goumard, M.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Mazziotta, M. N.; McEnery, J. E.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Monte, C.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Naumann-Godo, M.; Norris, J. P.; Nuss, E.; Ohsugi, T.; Okumura, A.; Omodei, N.; Orlando, E.; Ormes, J. F.; Paneque, D.; Panetta, J. H.; Parent, D.; Pesce-Rollins, M.; Pierbattista, M.; Piron, F.; Pivato, G.; Porter, T. A.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Sadrozinski, H. F.-W.; Sgrò, C.; Siskind, E. J.; Spandre, G.; Spinelli, P.; Strong, A. W.; Suson, D. J.; Takahashi, H.; Tanaka, T.; Thayer, J. G.; Thayer, J. B.; Thompson, D. J.; Tibaldo, L.; Tinivella, M.; Torres, D. F.; Tosti, G.; Troja, E.; Usher, T. L.; Vandenbroucke, J.; Vasileiou, V.; Vianello, G.; Vitale, V.; Waite, A. P.; Wang, P.; Winer, B. L.; Wood, K. S.; Wood, M.; Yang, Z.; Ziegler, M.; Zimmer, S.
2012-05-01
The γ-ray sky >100 MeV is dominated by the diffuse emissions from interactions of cosmic rays with the interstellar gas and radiation fields of the Milky Way. Observations of these diffuse emissions provide a tool to study cosmic-ray origin and propagation, and the interstellar medium. We present measurements from the first 21 months of the Fermi Large Area Telescope (Fermi-LAT) mission and compare with models of the diffuse γ-ray emission generated using the GALPROP code. The models are fitted to cosmic-ray data and incorporate astrophysical input for the distribution of cosmic-ray sources, interstellar gas, and radiation fields. To assess uncertainties associated with the astrophysical input, a grid of models is created by varying within observational limits the distribution of cosmic-ray sources, the size of the cosmic-ray confinement volume (halo), and the distribution of interstellar gas. An all-sky maximum-likelihood fit is used to determine the X CO factor, the ratio between integrated CO-line intensity and H2 column density, the fluxes and spectra of the γ-ray point sources from the first Fermi-LAT catalog, and the intensity and spectrum of the isotropic background including residual cosmic rays that were misclassified as γ-rays, all of which have some dependency on the assumed diffuse emission model. The models are compared on the basis of their maximum-likelihood ratios as well as spectra, longitude, and latitude profiles. We also provide residual maps for the data following subtraction of the diffuse emission models. The models are consistent with the data at high and intermediate latitudes but underpredict the data in the inner Galaxy for energies above a few GeV. Possible explanations for this discrepancy are discussed, including the contribution by undetected point-source populations and spectral variations of cosmic rays throughout the Galaxy. In the outer Galaxy, we find that the data prefer models with a flatter distribution of cosmic-ray sources, a larger cosmic-ray halo, or greater gas density than is usually assumed. Our results in the outer Galaxy are consistent with other Fermi-LAT studies of this region that used different analysis methods than employed in this paper.
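A heavily simplified sketch of the template-fitting idea behind such an all-sky maximum-likelihood fit, with a toy CO template and isotropic term in place of the real maps and analysis tools:

```python
# Simplified sketch: binned gamma-ray counts modelled as a Poisson sum of
# scaled templates (a CO-traced gas template whose scaling plays the role of
# X_CO, plus an isotropic term). Real Fermi-LAT fits span the full sky and
# energy range with dedicated software; everything here is a toy stand-in.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n_pix = 5000
co_template = rng.gamma(2.0, 1.0, size=n_pix)       # CO-line intensity map (arbitrary units)
iso_template = np.ones(n_pix)                        # isotropic background template
true_xco, true_iso = 2.0, 0.5
counts = rng.poisson(true_xco * co_template + true_iso * iso_template)

def neg_loglik(log_params):
    xco, iso = np.exp(log_params)
    mu = xco * co_template + iso * iso_template
    return -np.sum(counts * np.log(mu) - mu)         # Poisson log-likelihood (constant dropped)

fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print(np.exp(fit.x))                                 # ~ (2.0, 0.5)
```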
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ackermann, M.; Ajello, M.; Bechtol, K.
The γ-ray sky >100 MeV is dominated by the diffuse emissions from interactions of cosmic rays with the interstellar gas and radiation fields of the Milky Way. Observations of these diffuse emissions provide a tool to study cosmic-ray origin and propagation, and the interstellar medium. We present measurements from the first 21 months of the Fermi Large Area Telescope (Fermi-LAT) mission and compare with models of the diffuse γ-ray emission generated using the GALPROP code. The models are fitted to cosmic-ray data and incorporate astrophysical input for the distribution of cosmic-ray sources, interstellar gas, and radiation fields. To assess uncertainties associated with the astrophysical input, a grid of models is created by varying within observational limits the distribution of cosmic-ray sources, the size of the cosmic-ray confinement volume (halo), and the distribution of interstellar gas. An all-sky maximum-likelihood fit is used to determine the X CO factor, the ratio between integrated CO-line intensity and H2 column density, the fluxes and spectra of the γ-ray point sources from the first Fermi-LAT catalog, and the intensity and spectrum of the isotropic background including residual cosmic rays that were misclassified as γ-rays, all of which have some dependency on the assumed diffuse emission model. The models are compared on the basis of their maximum-likelihood ratios as well as spectra, longitude, and latitude profiles. We also provide residual maps for the data following subtraction of the diffuse emission models. The models are consistent with the data at high and intermediate latitudes but underpredict the data in the inner Galaxy for energies above a few GeV. Possible explanations for this discrepancy are discussed, including the contribution by undetected point-source populations and spectral variations of cosmic rays throughout the Galaxy. In the outer Galaxy, we find that the data prefer models with a flatter distribution of cosmic-ray sources, a larger cosmic-ray halo, or greater gas density than is usually assumed. Our results in the outer Galaxy are consistent with other Fermi-LAT studies of this region that used different analysis methods than employed in this paper.
Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lee, Sik-Yum
2005-01-01
In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and -2 log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of the lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, the Wood, Dhanoa and Sikka mixed models provided the best fit of the lactation curve for FPR in third-parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for the Dijkstra model in the third lactation, under-predicted the test time at which daily FPR was minimum. On the other hand, the minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
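A minimal sketch of fitting one of the named curves (Wood's model) and computing AIC; unlike the study, this uses fixed-effects least squares on synthetic data rather than a non-linear mixed model in SAS:

```python
# Minimal sketch: fit Wood's model y = a * t**b * exp(-c * t) to test-day
# records and compute AIC. The study used non-linear *mixed* models (PROC
# NLMIXED); this fixed-effects fit on synthetic data only shows the mechanics.
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    return a * t**b * np.exp(-c * t)

rng = np.random.default_rng(6)
t = np.tile(np.arange(1, 11), 30).astype(float)                   # test-day months
fpr = wood(t, 1.4, -0.15, -0.01) + rng.normal(0, 0.05, t.size)    # synthetic FPR records

params, _ = curve_fit(wood, t, fpr, p0=[1.0, -0.1, 0.0])
resid = fpr - wood(t, *params)
n, k = t.size, 4                                  # 3 curve parameters + residual variance
loglik = -0.5 * n * (np.log(2 * np.pi * resid.var()) + 1)
print("params:", params, "AIC:", 2 * k - 2 * loglik)
```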
Haughton, Jannett; Gregorio, David; Pérez-Escamilla, Rafael
2011-01-01
This retrospective study aimed to identify factors associated with breastfeeding duration among women enrolled in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) of Hartford, Connecticut. The authors included mothers whose children were younger than 5 years and had stopped breastfeeding (N = 155). Women who had planned their pregnancies were twice as likely as those who did not plan them to breastfeed for more than 6 months (odds ratio, 2.15; 95% confidence interval, 1.00–4.64). One additional year of maternal age was associated with a 9% increase in the likelihood of breastfeeding for more than 6 months (odds ratio, 1.09; 95% confidence interval, 1.02–1.17). Time in the United States was inversely associated with the likelihood of breastfeeding for more than 6 months (odds ratio, 0.96; 95% confidence interval, 0.92–0.99). Return to work, sore nipples, lack of access to breast pumps, and free formula provided by WIC were identified as breastfeeding barriers. Findings can help WIC improve its breastfeeding promotion efforts. PMID:20689103
Grosu, Horiana B; Vial-Rodriguez, Macarena; Vakil, Erik; Casal, Roberto F; Eapen, George A; Morice, Rodolfo; Stewart, John; Sarkiss, Mona G; Ost, David E
2017-08-01
During diagnostic thoracoscopy, talc pleurodesis after biopsy is appropriate if the probability of malignancy is sufficiently high. Findings on direct visual assessment of the pleura during thoracoscopy, rapid onsite evaluation (ROSE) of touch preparations (touch preps) of thoracoscopic biopsy specimens, and preoperative imaging may help predict the likelihood of malignancy; however, data on the performance of these methods are limited. To assess the performance of ROSE of touch preps, direct visual assessment of the pleura during thoracoscopy, and preoperative imaging in diagnosing malignancy. Patients who underwent ROSE of touch preps during thoracoscopy for suspected malignancy were retrospectively reviewed. Malignancy was diagnosed on the basis of final pathologic examination of pleural biopsy specimens. ROSE results were categorized as malignant, benign, or atypical cells. Visual assessment results were categorized as tumor studding present or absent. Positron emission tomography (PET) and computed tomography (CT) findings were categorized as abnormal or normal pleura. Likelihood ratios were calculated for each category of test result. The study included 44 patients, 26 (59%) with a final pathologic diagnosis of malignancy. Likelihood ratios were as follows: for ROSE of touch preps: malignant, 1.97 (95% confidence interval [CI], 0.90-4.34); atypical cells, 0.69 (95% CI, 0.21-2.27); benign, 0.11 (95% CI, 0.01-0.93); for direct visual assessment: tumor studding present, 3.63 (95% CI, 1.32-9.99); tumor studding absent, 0.24 (95% CI, 0.09-0.64); for PET: abnormal pleura, 9.39 (95% CI, 1.42-62); normal pleura, 0.24 (95% CI, 0.11-0.52); and for CT: abnormal pleura, 13.15 (95% CI, 1.93-89.63); normal pleura, 0.28 (95% CI, 0.15-0.54). A finding of no malignant cells on ROSE of touch preps during thoracoscopy lowers the likelihood of malignancy significantly, whereas a finding of tumor studding on direct visual assessment during thoracoscopy only moderately increases the likelihood of malignancy. A positive finding on PET and/or CT increases the likelihood of malignancy significantly in a moderate-risk patient group and can be used as an adjunct to predict malignancy before pleurodesis.
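The category-specific likelihood ratios quoted above come from simple count ratios; a sketch with hypothetical counts and a log-scale confidence interval:

```python
# Sketch of a category-specific likelihood ratio and its log-scale Wald CI:
# LR = P(result | malignant) / P(result | benign). The counts below are
# hypothetical, not the study's data.
import numpy as np

def likelihood_ratio(a, n_dis, b, n_nodis, z=1.96):
    """a of n_dis diseased and b of n_nodis non-diseased show this test result."""
    lr = (a / n_dis) / (b / n_nodis)
    se_log = np.sqrt(1/a - 1/n_dis + 1/b - 1/n_nodis)
    lo, hi = np.exp(np.log(lr) + np.array([-z, z]) * se_log)
    return lr, lo, hi

# e.g. "tumor studding present" seen in 20/26 malignant and 5/18 benign cases (made-up counts)
print(likelihood_ratio(20, 26, 5, 18))
```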
Statistical inference for tumor growth inhibition T/C ratio.
Wu, Jianrong
2010-09-01
The tumor growth inhibition T/C ratio is commonly used to quantify treatment effects in drug screening tumor xenograft experiments. The T/C ratio is converted to an antitumor activity rating using an arbitrary cutoff point and often without any formal statistical inference. Here, we applied a nonparametric bootstrap method and a small sample likelihood ratio statistic to make a statistical inference of the T/C ratio, including both hypothesis testing and a confidence interval estimate. Furthermore, sample size and power are also discussed for statistical design of tumor xenograft experiments. Tumor xenograft data from an actual experiment were analyzed to illustrate the application.
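A minimal sketch of the nonparametric bootstrap for the T/C ratio on synthetic tumor-volume data (the small-sample likelihood ratio statistic developed in the paper is not shown):

```python
# Minimal sketch of a nonparametric bootstrap CI for the tumor growth
# inhibition T/C ratio (mean treated volume over mean control volume).
# Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(7)
treated = rng.lognormal(mean=5.5, sigma=0.4, size=10)   # tumor volumes, treated arm
control = rng.lognormal(mean=6.3, sigma=0.4, size=10)   # tumor volumes, control arm

tc = treated.mean() / control.mean()
boot = np.array([
    rng.choice(treated, treated.size).mean() / rng.choice(control, control.size).mean()
    for _ in range(5000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"T/C = {tc:.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")
```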
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
Janssen, Eva; van Osch, Liesbeth; Lechner, Lilian; Candel, Math; de Vries, Hein
2012-01-01
Despite the increased recognition of affect in guiding probability estimates, perceived risk has been mainly operationalised in a cognitive way and the differentiation between rational and intuitive judgements is largely unexplored. This study investigated the validity of a measurement instrument differentiating cognitive and affective probability beliefs and examined whether behavioural decision making is mainly guided by cognition or affect. Data were obtained from four surveys focusing on smoking (N=268), fruit consumption (N=989), sunbed use (N=251) and sun protection (N=858). Correlational analyses showed that affective likelihood was more strongly correlated with worry compared to cognitive likelihood and confirmatory factor analysis provided support for a two-factor model of perceived likelihood instead of a one-factor model (i.e. cognition and affect combined). Furthermore, affective likelihood was significantly associated with the various outcome variables, whereas the association for cognitive likelihood was absent in three studies. The findings provide support for the construct validity of the measures used to assess cognitive and affective likelihood. Since affective likelihood might be a better predictor of health behaviour than the commonly used cognitive operationalisation, both dimensions should be considered in future research.
Hanchaiphiboolkul, Suchat; Suwanwela, Nijasri Charnnarong; Poungvarin, Niphon; Nidhinandana, Samart; Puthkhao, Pimchanok; Towanabut, Somchai; Tantirittisak, Tasanee; Suwantamee, Jithanorm; Samsen, Maiyadhaj
2013-11-01
Limited information is available on the association between the metabolic syndrome (MetS) and stroke. Whether or not MetS confers a risk greater than the sum of its components is controversial. This study aimed to assess the association of MetS with stroke, and to evaluate whether the risk of MetS is greater than the sum of its components. The Thai Epidemiologic Stroke (TES) study is a community-based cohort study with 19,997 participants, aged 45-80 years, recruited from the general population from 5 regions of Thailand. Baseline survey data were analyzed in cross-sectional analyses. MetS was defined according to criteria from the National Cholesterol Education Program (NCEP) Adult Treatment Panel III, the American Heart Association/National Heart, Lung, and Blood Institute (revised NCEP), and the International Diabetes Federation (IDF). Logistic regression analysis was used to estimate the association of MetS and its components with stroke. Using c statistics and the likelihood ratio test, we compared the ability to discriminate participants with and without stroke between a logistic model containing all MetS components and potential confounders and a model that also included the MetS variable. We found that among the MetS components, high blood pressure and hypertriglyceridemia were independently and significantly related to stroke. MetS defined by the NCEP (odds ratio [OR], 1.64; 95% confidence interval [CI], 1.32-2.04), revised NCEP (OR, 2.27; 95% CI, 1.80-2.87), and IDF definitions (OR, 1.70; 95% CI, 1.37-2.13) was significantly associated with stroke after adjustment for age, sex, geographical area, education level, occupation, smoking status, alcohol consumption, and low-density lipoprotein cholesterol. After additional adjustment for all MetS components, these associations were not significant. There was no statistically significant difference (P=.723-.901) in c statistics between the model containing all MetS components and potential confounders and the model also including the MetS variable. The likelihood ratio test also showed no statistically significant (P=.166-.718) difference between these 2 models. Our findings suggest that MetS is associated with stroke, but not to a greater degree than the sum of its components. Thus, the focus should be on identification and appropriate control of its individual components, particularly high blood pressure and hypertriglyceridemia, rather than on MetS itself. Copyright © 2013 National Stroke Association. Published by Elsevier Inc. All rights reserved.
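A sketch of the nested-model comparison used here (components-only model versus the same model plus a MetS indicator), with simulated placeholder data and confounders omitted:

```python
# Sketch of the likelihood ratio test and c-statistic comparison between a
# logistic model with the individual MetS components and the same model with a
# MetS indicator added. Data are simulated placeholders; confounders omitted.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
n = 5000
components = rng.binomial(1, 0.3, size=(n, 5))              # 5 MetS components
mets = (components.sum(axis=1) >= 3).astype(float)           # MetS definition
logit_p = -3 + components @ np.array([0.5, 0.4, 0.1, 0.1, 0.1])
stroke = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X_base = sm.add_constant(components)
X_full = sm.add_constant(np.column_stack([components, mets]))
m_base = sm.Logit(stroke, X_base).fit(disp=0)
m_full = sm.Logit(stroke, X_full).fit(disp=0)

lr_stat = 2 * (m_full.llf - m_base.llf)                       # likelihood ratio test, 1 df
print("LR test p =", chi2.sf(lr_stat, df=1))
print("c statistics:", roc_auc_score(stroke, m_base.predict(X_base)),
      roc_auc_score(stroke, m_full.predict(X_full)))
```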
Selection of a cardiac surgery provider in the managed care era.
Shahian, D M; Yip, W; Westcott, G; Jacobson, J
2000-11-01
Many health planners promote the use of competition to contain cost and improve quality of care. Using a standard econometric model, we examined the evidence for "value-based" cardiac surgery provider selection in eastern Massachusetts, where there is significant competition and managed care penetration. McFadden's conditional logit model was used to study cardiac surgery provider selection among 6952 patients and eight metropolitan Boston hospitals in 1997. Hospital predictor variables included beds, cardiac surgery case volume, objective clinical and financial performance, reputation (percent out-of-state referrals, cardiac residency program), distance from patient's home to hospital, and historical referral patterns. Subgroup analyses were performed for each major payer category. Distance from patient's home to hospital (odds ratio 0.90; P =.000) and the historical referral pattern from each patient's hometown (z = 45.305; P =.000) were important predictors in all models. A cardiac surgery residency enhanced the probability of selection (odds ratio 5.25; P =.000), as did percent out-of-state referrals (odds ratio 1.10; P =.001). Higher mortality rates were associated with decreased probability of selection (odds ratio 0.51; P =.027), but higher length of stay was paradoxically associated with greater probability (odds ratio 1.72; P =.000). Total hospital costs were irrelevant (odds ratio 1.00; P =.179). When analyzed by payer subgroup, Medicare patients appeared to select hospitals with both low mortality (odds ratio 0.43; P =.176) and short length of stay (odds ratio 0.76; P =.213), although the results did not achieve statistical significance. The commercial managed care subgroup exhibited the least "value-based" behavior. The odds ratio for length of stay was the highest of any group (odds ratio = 2.589; P =.000) and there was a subset of hospitals for which higher mortality was actually associated with greater likelihood of selection. The observable determinants of cardiac surgery provider selection are related to hospital reputation, historical referral patterns, and patient proximity, not objective clinical or cost performance. The paradoxic behavior of commercial managed care probably results from unobserved choice factors that are not primarily based on objective provider performance.
Choosing relatives for DNA identification of missing persons.
Ge, Jianye; Budowle, Bruce; Chakraborty, Ranajit
2011-01-01
DNA-based analysis is integral to missing person identification cases. When direct references are not available, indirect relative references can be used to identify missing persons by kinship analysis. Generally, more reference relatives render greater accuracy of identification. However, it is costly to type multiple references. Thus, at times, decisions may need to be made on which relatives to type. In this study, pedigrees for 37 common reference scenarios with 13 CODIS STRs were simulated to rank the information content of different combinations of relatives. The results confirm that first-order relatives (parents and fullsibs) are the preferred relatives for identifying missing persons; fullsibs are also informative. Less genetic dependence between references provides a higher likelihood ratio on average. Distant relatives may not be helpful when only autosomal markers are used, but lineage-based Y-chromosome and mitochondrial DNA markers can increase the likelihood ratio or serve as filters to exclude putative relationships. © 2010 American Academy of Forensic Sciences.
Bayesian framework for the evaluation of fiber evidence in a double murder--a case report.
Causin, Valerio; Schiavone, Sergio; Marigo, Antonio; Carresi, Pietro
2004-05-10
Fiber evidence found on a suspect vehicle was the only useful trace for reconstructing the dynamics of the transportation of two corpses. Optical microscopy, UV-Vis microspectrophotometry and infrared analysis were employed to compare fibers recovered in the trunk of a car to those of the blankets composing the wrapping in which the victims had been hidden. A "pseudo-1:1" taping permitted reconstruction of the spatial distribution of the traces and further strengthened the support for one of the hypotheses. The likelihood ratio (LR) was calculated in order to quantify the support given by the forensic evidence to the proposed explanations. A generalization of the likelihood ratio equation to cases analogous to this one has been derived. Fibers were the only traces that helped corroborate the crime scenario, since no DNA, fingerprint, or ballistic evidence was available.
Variance change point detection for fractional Brownian motion based on the likelihood ratio test
NASA Astrophysics Data System (ADS)
Kucharczyk, Daniel; Wyłomańska, Agnieszka; Sikora, Grzegorz
2018-01-01
Fractional Brownian motion is one of the main stochastic processes used for describing the long-range dependence phenomenon in self-similar processes. It appears that for many real time series, characteristics of the data change significantly over time. Such behaviour can be observed in many applications, including physical and biological experiments. In this paper, we present a new technique for critical change point detection for cases where the data under consideration are driven by fractional Brownian motion with a time-changed diffusion coefficient. The proposed methodology is based on the likelihood ratio approach and represents an extension of a similar methodology used for Brownian motion, a process with independent increments. Here, we also propose a statistical test for testing the significance of the estimated critical point. In addition, an extensive simulation study is provided to test the performance of the proposed method.
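The likelihood-ratio scan underlying this kind of change point detection can be sketched in the simpler independent-increment setting that the paper extends to fractional Brownian motion:

```python
# Sketch of a likelihood-ratio scan for a variance change point with
# independent Gaussian increments (the Brownian motion case); the paper's
# extension to fractional Brownian motion is not reproduced. The synthetic
# increments change standard deviation at index 600.
import numpy as np

rng = np.random.default_rng(9)
x = np.concatenate([rng.normal(0, 1.0, 600), rng.normal(0, 2.0, 400)])
n = x.size

def lr_statistic(x, k):
    # 2*(log L_alt - log L_null) for a variance change after index k,
    # with zero-mean increments and MLE variances per segment
    s2_all, s2_1, s2_2 = np.mean(x**2), np.mean(x[:k]**2), np.mean(x[k:]**2)
    return n * np.log(s2_all) - k * np.log(s2_1) - (n - k) * np.log(s2_2)

candidates = np.arange(50, n - 50)                 # keep both segments non-trivial
stats = np.array([lr_statistic(x, k) for k in candidates])
k_hat = candidates[stats.argmax()]
print("estimated change point:", k_hat)            # should be near 600
# The significance of max(stats) is then assessed, e.g. by simulation under
# the no-change null, as the paper does for the FBM setting.
```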
Xu, Stanley; Hambidge, Simon J; McClure, David L; Daley, Matthew F; Glanz, Jason M
2013-08-30
In the examination of the association between vaccines and rare adverse events after vaccination in postlicensure observational studies, it is challenging to define appropriate risk windows because prelicensure RCTs provide little insight on the timing of specific adverse events. Past vaccine safety studies have often used prespecified risk windows based on prior publications, biological understanding of the vaccine, and expert opinion. Recently, a data-driven approach was developed to identify appropriate risk windows for vaccine safety studies that use the self-controlled case series design. This approach employs both the maximum incidence rate ratio and the linear relation between the estimated incidence rate ratio and the inverse of average person time at risk, given a specified risk window. In this paper, we present a scan statistic that can identify appropriate risk windows in vaccine safety studies using the self-controlled case series design while taking into account the dependence of time intervals within an individual and while adjusting for time-varying covariates such as age and seasonality. This approach uses the maximum likelihood ratio test based on fixed-effects models, which has been used for analyzing data from self-controlled case series design in addition to conditional Poisson models. Copyright © 2013 John Wiley & Sons, Ltd.
Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst
2012-01-01
When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
Likelihoods for fixed rank nomination networks
HOFF, PETER; FOSDICK, BAILEY; VOLFOVSKY, ALEX; STOVEL, KATHERINE
2014-01-01
Many studies that gather social network data use survey methods that lead to censored, missing, or otherwise incomplete information. For example, the popular fixed rank nomination (FRN) scheme, often used in studies of schools and businesses, asks study participants to nominate and rank at most a small number of contacts or friends, leaving the existence of other relations uncertain. However, most statistical models are formulated in terms of completely observed binary networks. Statistical analyses of FRN data with such models ignore the censored and ranked nature of the data and could potentially result in misleading statistical inference. To investigate this possibility, we compare Bayesian parameter estimates obtained from a likelihood for complete binary networks with those obtained from likelihoods that are derived from the FRN scheme, and therefore accommodate the ranked and censored nature of the data. We show analytically and via simulation that the binary likelihood can provide misleading inference, particularly for certain model parameters that relate network ties to characteristics of individuals and pairs of individuals. We also compare these different likelihoods in a data analysis of several adolescent social networks. For some of these networks, the parameter estimates from the binary and FRN likelihoods lead to different conclusions, indicating the importance of analyzing FRN data with a method that accounts for the FRN survey design. PMID:25110586
Evaluation of performance of distributed delay model for chemotherapy-induced myelosuppression.
Krzyzanski, Wojciech; Hu, Shuhua; Dunlavey, Michael
2018-04-01
A distributed delay model has been introduced that replaces the transit compartments in the classic model of chemotherapy-induced myelosuppression with a convolution integral. The maturation of granulocyte precursors in the bone marrow is described by the gamma probability density function with shape parameter ν. If ν is a positive integer, the distributed delay model coincides with the classic model with ν transit compartments. The purpose of this work was to evaluate the performance of the distributed delay model, with particular focus on deterministic model identifiability in the presence of the shape parameter. The classic model served as a reference for comparison. Previously published white blood cell (WBC) count data in rats receiving bolus doses of 5-fluorouracil were fitted by both models. The negative two log-likelihood objective function (-2LL) and running times were used as the major markers of performance. A local sensitivity analysis was done to evaluate the impact of ν on the pharmacodynamic WBC response. The ν estimate was 1.46 (CV 16.1%), compared with ν = 3 for the classic model. The difference of 6.78 in -2LL between the classic model and the distributed delay model implied that the latter performed significantly better than the former according to the log-likelihood ratio test (P = 0.009), although the overall improvement was modest. The running times were 1 s and 66.2 min, respectively. The long running time of the distributed delay model was attributed to the computationally intensive evaluation of the convolution integral. The sensitivity analysis revealed that ν strongly influences the WBC response by controlling cell proliferation and the elimination of WBCs from the circulation. In conclusion, the distributed delay model was deterministically identifiable from typical cytotoxic data. Its performance was modestly better than that of the classic model, at the cost of a substantially longer running time.
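The equivalence noted above (an integer shape parameter ν reproducing ν transit compartments) can be checked numerically; the input signal and constants below are arbitrary stand-ins, not the paper's myelosuppression model:

```python
# Numerical check of the stated equivalence: a gamma-kernel convolution with
# integer shape nu matches a chain of nu transit compartments with rate
# ktr = nu / mean_transit_time. The input signal is an arbitrary decaying
# function standing in for the drug effect; constants are illustrative.
import numpy as np
from scipy.stats import gamma
from scipy.integrate import solve_ivp

nu, mtt = 3, 100.0                       # integer shape and mean transit time (h)
ktr = nu / mtt
t = np.linspace(0, 600, 6001)
dt = t[1] - t[0]
x = np.exp(-t / 50.0)                    # input signal entering the delay

# Distributed delay: convolution with the gamma(shape=nu, rate=ktr) density
g = gamma.pdf(t, a=nu, scale=1 / ktr)
y_conv = dt * np.convolve(x, g)[: t.size]

# Classic model: nu transit compartments with rate ktr
def rhs(ti, A):
    inp = np.exp(-ti / 50.0)
    dA = np.empty(nu)
    dA[0] = ktr * (inp - A[0])
    for i in range(1, nu):
        dA[i] = ktr * (A[i - 1] - A[i])
    return dA

sol = solve_ivp(rhs, (0, 600), np.zeros(nu), t_eval=t, rtol=1e-8)
print(np.max(np.abs(sol.y[-1] - y_conv)))   # small -> the two formulations agree
```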